Am I audible on Zoom, can I be heard? Yes, we hear you well. OK, good, thanks. I don't know if I should wait another minute for latecomers, because several people who usually come aren't here. Does anybody want to ask any questions about anything I've said so far, or anything related, including the people on Zoom? Or do I just start? Maybe I should start with an announcement, or rather a correction. First, let me remind you that this course is 12 lectures, so six weeks, two a week. The first eight, of which today is the last, were Tuesday and Thursday, as you obviously know since you're here, from 4 to 5.30. For the last two weeks, so the last four lectures, I said last time, because it was written on the poster here, that it would be 3 to 4.30. That was a mistake. It has been corrected in an email to people at ICTP, and on the MPI website it is written correctly anyway; it was from the MPI that I was warned that there was a contradiction. I think most people look at the email announcements anyway, but just in case somebody looked at the poster, I should say it is not 3 to 4.30 as written there. Starting next Monday (today is Thursday, so in four days) the course will be at 2 to 3.30. This room was not available at other times; that's the reason for the shift. Now, there are two topics I want to talk about today, and I hope I have time to more or less finish both of them. Today is lecture number eight, and both topics are, in various guises, about rapidly divergent series. So you have a formal power series, but it does not converge for any x at all: the coefficients blow up more than exponentially. In the first part I'll talk about the case which is by far the most frequent, factorial divergence, so growth like n factorial. In the second part I'll talk about more general growth classes, where amusingly some things turn out differently.
So I'm talking about divergent series. I will typically write the variable as h, to remind you of Planck's constant. Physicists often write h-bar, which reminds you even more; besides, there are many h's in mathematics, but there's only one h-bar. In the actual applications, as opposed to the theory, this h naturally arises as 2 pi i times another constant, so I call that one h-bar and this one h. Anyway, it doesn't matter. So this is a formal power series; it might have rational coefficients, but it doesn't really matter, let's say complex coefficients. Typically, remember, we had more generally a_n growing like n factorial to the alpha, times beta to the n, times n to the gamma, times (c0 + c1/n + ...). This was the kind of thing we were trying to interpolate. The first part concerns the case when alpha is 1, so that's what I'll talk about first, and then I'll talk second about the case when alpha is bigger than 1. Surprisingly, some phenomena there are different and actually easier; certain things are true for alpha bigger than 1 that simply aren't true in the factorially divergent case. But also the problem we'll be addressing is quite different: there we want to know about asymptotic expansions, inverse power series, that kind of thing, whereas here we want to know about evaluating the series. So the question for the first half of the lecture, this first topic, is: if I have a series like that, and for the moment I'll assume exactly this kind of behavior with a single n factorial, how do I evaluate it numerically? How do I get an actual number? Now obviously if h is three, that makes no sense at all; the terms are absolutely huge right from the beginning. But if h is one millionth, then for the first terms the h to the n is rapidly decreasing while the a_n start out as fixed constants, so the first terms give a good approximation.
And so the question is how you can actually make numerical sense, to high precision, of this number. You don't expect to get an exact number; you expect a number which is well defined up to a certain precision. The completely naive approach is just to truncate; let's call it fixed truncation. You can certainly say phi(h) is approximately the sum of a_n h^n for n from zero to, let's say, 20, just to pick a random number; more precisely, it is exactly equal to this plus O(h^21). If I've computed the first coefficients of my divergent series, I can just approximate by the truncated polynomial (that's how we usually define power series anyway, as limits of polynomials) and say the error is O(h^21). So if h is one thousandth, the error should be about 10^(-63). But of course there is a factor in front, the next coefficient, and if the series is factorially divergent, the error may no longer be 10^(-63) but maybe 10^20 times 10^(-63). So this is an O, but the O depends on where I truncated; I'll write O_20 for this choice, and if I truncate at more coefficients, then the implied constant, which is of the order of the next coefficient a_21, grows like 21 factorial. So it's a very big O. What we actually want to do, as a first improvement, is optimal truncation: we stop when a_n h^n is roughly minimal. The idea is this: I make a graph, here is n, and here is a_n h^n; let's pretend everything is positive. At the beginning these terms get exponentially smaller, because they're like h^n; if h is a half, they go like a half, a quarter, an eighth. So they get exponentially smaller, but of course eventually they grow again, because a_n is like n factorial.
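As an aside (not from the lecture): the difference between fixed and optimal truncation is easy to see on the standard toy example, Euler's series sum of (-1)^n n! h^n, whose Borel integral of e^(-t)/(1+ht) plays the role of the "true function". A minimal Python sketch, with all names my own:

```python
import math

def true_value(h, T=60.0, steps=200_000):
    """Borel sum of Euler's series: integral of e^-t / (1 + h t) over t >= 0,
    approximated by the trapezoid rule (the tail beyond T is negligible)."""
    dt = T / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * dt
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-t) / (1.0 + h * t) * dt
    return total

def partial_sum(h, N):
    """Sum of (-1)^n n! h^n for n = 0..N."""
    s, term = 0.0, 1.0          # term = n! h^n, starting at n = 0
    for n in range(N + 1):
        s += (-1) ** n * term
        term *= (n + 1) * h     # n! h^n  ->  (n+1)! h^(n+1)
    return s

h = 0.1
f = true_value(h)
err_opt = abs(partial_sum(h, round(1 / h)) - f)   # optimal truncation: stop near n = 1/h
err_fix = abs(partial_sum(h, 25) - f)             # fixed truncation: 25 terms, already diverging
print(err_opt, err_fix)
```

With h = 0.1 the optimally truncated sum (about ten terms) matches the integral to a few parts in 10^4, while 25 terms of fixed truncation are useless, because by the 25th term 25! h^25 has grown past order one.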
And so there will be some value of n where the term is smallest, and you just stop there; you can see it on the computer, or if you know the asymptotics, you can of course compute a priori where it happens. This is the naive approach, and it's certainly what is done in physics, where h is maybe Planck's constant or the fine-structure constant or some other parameter which is small, and the series is typically an asymptotic, perturbative expansion; maybe each a_n is given by a sum over certain Feynman diagrams with n loops. You can't compute very many a_n, because for large n there are far too many diagrams and each one is very hard to compute. So maybe you only know a small number of terms; if you only know six terms, there's nothing to discuss, you just take the six you know and hope for the best. This is used in physics, for instance, to compute the magnetic moment of the electron; that's a famous calculation where the Feynman diagram calculations and the experiments agreed to, I forget how many decimal digits, maybe 13 or 15, anyway to very high precision. But here we want to take this optimal truncation further. So let's assume these are my asymptotics. And the next comment: the details of how to do this were worked out in the joint work on which I gave a whole course last year, and which is leading to a series of papers; one or two are already out. This part isn't out; it's written up but not yet on the arXiv. We worked this out for specific applications, where these series were the expansions of certain quantum invariants of knots.
I might give an example of that later if there's enough time; otherwise I'll leave it out, but I talked about it in quite a bit of detail in my course last year, and some of the people here are the same. The numerical aspects of how to do this for the quantum invariants are in the paper which came out on the arXiv three or four months ago, so I probably won't say too much, but I did want to mention that thinking this through was all joint work with Stavros Garoufalidis. So, coming back to this: we're going to assume these asymptotics, and I might as well assume that beta is one, because if I rescale a_n by dividing by beta to the n, I'm obviously just rescaling h by the factor beta, and if h is small, so is beta h; it's just a renaming of the variable. So I could carry the beta around the whole time, but let me not bother; I'll assume beta is one. OK, so what does finding the minimum mean? You look, of course, at the ratio of the nth term to the (n-1)st term. The a_n has a factor n factorial, so the ratio of factorials gives me an n; the beta, if I had kept it, would contribute a factor beta, so I'll write it, but as I said I'm assuming it's one; n to the gamma over (n-1) to the gamma is just one to leading order; and h^n over h^(n-1) is of course h. So the ratio is essentially n times h, and at the minimum it should be essentially one, because the terms go down and then go up: the ratio is less than one until it starts being bigger than one, and that's the point where you should stop.
So we stop at n = N, which according to this formula should be 1/(beta h); since I've set beta equal to one, I simply stop at 1/h, or more correctly at 1/|h|, because you might well need this for a complex parameter. Or, more to the point, the h might be real but the beta you had originally might be complex, so after rescaling, h in general is a complex parameter, and of course the condition involves the absolute value, since n is positive anyway. OK, so I stop here. I'll call this optimal truncation, and at first sight you would think it's the best you can do: phi(h) is approximately the sum of a_n h^n for n up to N, with N = |h|^(-1) + O(1). Now this has several disadvantages. One: it's not completely well defined, because of that O(1); we can't take it to be exactly zero, since |h|^(-1) is in general not an integer but some large real number. So we take N = 1/|h| + b, where b is a real constant of order one; if you want, you could always take it between minus a half and a half. I can choose a rule for where to stop: the ceiling of this number, the smallest integer bigger than it; or the floor, the biggest integer smaller than it; or the nearest integer, where even then, if it's exactly a half-integer, you have to decide whether to take the one below or the one above. It's obviously a little arbitrary. So the problem is that the choice of stopping value is somewhat arbitrary.
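The stopping rule N = 1/|h| + O(1) can be checked directly on the model coefficients a_n = n! (a sketch of mine, not from the lecture): the terms decrease exactly as long as the ratio n|h| stays below one, so the minimal term sits within one unit of 1/|h|.

```python
import math

def log_term(n, h):
    """Logarithm of the n-th term  n! |h|^n,  via lgamma to avoid overflow."""
    return math.lgamma(n + 1) + n * math.log(abs(h))

for h in (0.1, 0.013, 0.004):
    # scan well past the turnaround and locate the smallest term
    n_min = min(range(1, int(3 / abs(h))), key=lambda n: log_term(n, h))
    assert abs(n_min - 1 / abs(h)) <= 1.0   # minimal term within one unit of 1/|h|
    print(h, n_min)
```

Working with logarithms matters here: for h = 0.004 the turnaround happens near n = 250, where n! would overflow a float but lgamma is perfectly happy.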
As I say, you can fix a rule if you want a well-defined function, but whatever you do, say I take the rule of the floor of 1/h, the biggest integer below it, the integer part: then 1/h increases continuously as h decreases, and at some point the floor jumps to the next integer. So at some point I'll be taking 83 terms of this series, and for some slightly smaller h I'll be taking 84 terms, and at that point there's going to be a jump. If I took the sum of 83 terms I'd have one error term, which is maybe getting smaller; if I took the sum of 84 terms I'd have some other error term doing whatever it does; but at some point you jump from one to the other. That's the thing we want to compensate, so that we get something that works. However, already this gives the expected error. Well, of course, we don't know what the error is; first of all it doesn't even make sense, because by "the error" I mean the difference between my numerical approximation and the true value, and we don't know what the true value is; we don't know how to define it. There can be many different functions of h, perfectly good C-infinity functions at zero, which have exactly the same asymptotic expansion to all orders: for instance, you could add e^(-1/h), and that would have a different value at any particular h. So it's not even quite well defined. But in practice, for most functions that come up, you believe there actually is a true function somewhere in the background, and this is the asymptotic expansion of a canonical, kind of God-given smooth function. And sometimes it really is: you start with a function that you only know asymptotically, so the function is well defined, and you can compare its actual value at small h with the value you get in this way. But you will never be able to do better than roughly e to the minus N, where N is again about 1/h.
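The size of this minimal term can also be checked numerically (again a sketch with the model coefficients a_n = n!, names my own): with N = 1/h, Stirling's formula N! close to sqrt(2 pi N) (N/e)^N gives N! h^N close to sqrt(2 pi N) e^(-N).

```python
import math

h = 0.01
N = round(1 / h)                                      # optimal stopping point
log_min_term = math.lgamma(N + 1) + N * math.log(h)   # log of N! h^N
log_stirling = 0.5 * math.log(2 * math.pi * N) - N    # log of sqrt(2 pi N) e^(-N)
print(log_min_term, log_stirling)
```

Both logarithms come out near -96.8 for h = 1/100, i.e. the minimal term is about e^(-100) times a harmless square root, which is the exponentially small error the lecture refers to.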
And the reason for that is simply the following. If I change my mind by one, so instead of stopping at N I stop at N + 1, I've changed the sum by one term. And what is the size of that term, the one at the minimum? By our asymptotics it is approximately N factorial times N to the gamma times the constant c0 (I don't really care so much about the N to the gamma or the constant), multiplied by h^N; and h is roughly 1/N, so h^N is roughly N to the minus N. Forgetting the unimportant factors, Stirling's formula says N factorial is roughly (N/e)^N, times a square root that I'll also ignore, so N factorial times N to the minus N is roughly e to the minus N, as I said. So you certainly can't do better than that: even if there is a true function out there, as h moves along and the stopping value N jumps by one, the value of the sum jumps by about e to the minus N. So if there is a smooth function and this approximation has jumped by, say, 10^(-10), then at least one of the two errors has to be at least half of 10^(-10). You definitely can't do better than that, and in practice you expect to get about that, times a small power of N; it will depend on gamma, and there's an N to the one half from Stirling, and so on, but that's not important. So optimal truncation gives an exponentially small error, and that's of course much better than a fixed power: with fixed truncation at some order, the error is a fixed power of h, whereas here we have e to the minus N; and N is not given, it's h which is
given. So the error is roughly exponentially small in 1/h, which is much, much better than any power of h. This works fine in practice. In an example which I had prepared in detail, but which is something of a red herring for this course, and I did it last year so some of you have seen it anyway (the experiments are in the paper, I could tell you which page, but I think I'll skip how it actually worked): there was a case where we knew, let's say, 50 or 100 or 200 coefficients of such a power series, and we had an actual function that was supposed to be asymptotically equal to it, a true function. When you took the difference, we expected there was a second series, a_n-prime h^n, with similar behavior but a different constant and a different beta, which means a different stopping point of its own. With just fixed truncation you couldn't see this second term at all; with optimal truncation you could, and uniquely. This second series was one we predicted: there was a specific coefficient we didn't know, but we expected it to be a multiple of a particular power series, and you could actually do this and get that constant to 20 digits; in one case it was the square root of three, to 20 digits, and so on. But then there was actually a third power series, which had yet slower growth, a smaller order of growth. To be more honest about what we actually had: it was a certain exponential e^(c/h) times a sum of a_n h^n, which is very big; then another one, e^(c'/h) times a_n-prime h^n; and we knew c and c-prime and at least the first few hundred of the a_n and a_n-prime. There were actually three such terms expected from the analysis, with c bigger than c-prime bigger than c-double-prime. So if you just truncate at a finite point, the smaller terms are completely drowned
by the first. I mean, I could take out the factor e^(c/h); then the first term is a power series in h and the others are exponentially suppressed. But now we've improved things so that the error of the first term is exponentially small too, and we were lucky: the exponent in e^(-N) was a bigger negative exponent than the difference between c and c-prime, and therefore we could see the second term. There was supposed to be a coefficient there that we had to find (actually we had infinitely many such series), and we could actually find this coefficient, though you may have to work to very high precision. But then, when we subtracted that and wanted the next one, the error of this naive optimal truncation was better than the difference between c and c-prime but not as good as the difference between c and c-double-prime. So you simply couldn't see this last term: even with optimal truncation, the error was exponentially smaller than the main term but exponentially bigger than the third term; we were lucky it was smaller than the second. We actually had many examples; sometimes you could see three terms and then couldn't see the fourth. So that's when we thought about improving this, and actually the idea is very easy. We have a thing we called smooth truncation. We again take N, which is going to be 1/h + b, where b is real and of order one, and sum the a_n h^n, but then we add a correction term; and now I have to check my notation, so that I don't get contradictions. Imagine b as a fixed number, say b = 0.3, a fixed O(1) number. Since N is an integer: if h is roughly one thousandth, then N might be a thousand point three, or a thousand one point three, or nine hundred ninety-nine point three, et cetera. So if I fix b, then all that happens as I change my h is that I make a new
choice of N. Now, this dependence on b: if there were an actual underlying function, there would be an exact form of the error, which I'll call epsilon_b; epsilon_b of h, that makes more sense, after all it's h which is given, N is the thing we're choosing. So this correction term that I want to add should have the property that epsilon_b(h) minus epsilon_(b+1)(h) is exactly a_N h^N, where N, remember, once again is 1/h + b. If I could solve this equation exactly, if I could invent some epsilon_b which is a well-defined function for every real number b of order one, so a function of b on the real line and of the parameter h, and if I could solve this in closed form given the a_n, then of course I could simply add it to the truncated sum, and the result would be completely independent of where I stopped; I could change from b to b + 1. Remember the problem: I'm adding the first term, the second, the third, stopping at, let's say, 83 terms, but if I move a little further, suddenly I have to take 84 terms, and at that point it's going to jump. I don't want it to jump; that's why we call it smooth truncation, I want to make it smooth. What I would like is for this equation to hold at least to all orders in h, or at least to the first few orders. That would mean that when the stopping point jumps, the modified function, with this epsilon added to it, will no longer jump; I get something which is smooth. And then you would expect the error to be not the size of that minimal term but much, much smaller, and that's indeed what happens. So let's write that out and think what it means. Let me set up notation so that I
don't have to keep writing 1/h: let me call it capital X. So X is going to infinity, except that there's a subtlety: maybe its absolute value is going to infinity, but it actually matters what the phase is. So I write X as a phase mu, a complex number of absolute value one, times |X|, with |X| going to infinity. Now, by Stirling's formula (this is what I did a moment ago, but more precisely there's a full asymptotic expansion, and I can give the first few terms), a_N h^N is: mu to the power minus N, which is a number of absolute value one but does depend on N; times the square root of 2 pi; times e to the minus |X|, that's the e to the minus N we had before, remember N is very close to |X|; times |X| to the power gamma plus a half (in my notes it's a half minus gamma, we've changed the sign convention for gamma, so let me just check that I have the sign right). Ignoring the phase, which has absolute value one, this is the e to the minus |X| that we had, times a small power of |X|. But Stirling's formula gives not just the leading term, it gives an entire asymptotic expansion, and in terms of our original c0, c1, c2 it is an expansion in 1/|X|. It starts with c0, which was our leading constant; then there's a correction term, linear in the c's of course, which is something like c0/2 times b squared, plus c0 times (gamma plus a half) times b, plus c0 over 12, plus c1 (I can't quite read my handwriting, so that's probably not exactly right); plus dot dot dot. The general term is p_k(b) divided by X to the power k, where p_k is a polynomial in b of degree 2k whose coefficients involve the c_i's, and you can compute as many of the
p_k's as you want, just from Stirling's formula, since we know the c_i's: remember, we assume we know precisely how the asymptotics of our series look, those were these numbers, and I've assumed beta is 1 to keep life simple. So this expansion is the thing that we want to be equal to epsilon_b(h) minus epsilon_(b+1)(h). Now you can unravel that; it takes a little bit of thinking and it's kind of a pain in the neck, but what you find is the following. (Maybe I'll just take the printed paper; I wrote everything out by hand, but my handwriting is so hard to read that I'm better off reading the formulas from the print.) This gives you the ansatz, which is in fact true, that epsilon_b(h) will have the same form: the same mu to the minus N, the same square root of 2 pi times e to the minus |X|, times X to the power gamma plus a half, and then a new power series, the sum over k from 0 to infinity of q_k(b) over X to the k, where these q_k have to satisfy the following very nice equation. Because of the phase, because of the mu to the minus N: when you put this into the equation, remember we had epsilon_b minus epsilon_(b+1); b is fixed, but when I change b by 1 I change N by 1, and I pick up a factor of mu. So the expression that you have to solve is the following functional equation (the q will depend on mu too, but I'm not going to write that; it's a polynomial in b but not in mu, as you'll see in a second): mu times q_k(b+1) minus q_k(b) should be exactly p_k(b). Now you see that's very nice, because p_k(b) is some explicit polynomial; I wrote the first two here (I think this term was actually plus, and the gamma should have been plus and then minus gamma, it doesn't matter, plus c1). It's some explicit polynomial, and now,
when you work out the q's, you can do it completely explicitly. If mu is not 1, there's no problem: p_k is a polynomial, a combination of powers b^j, and if I make q_k a polynomial of the same degree, the top coefficient appears on the left as mu times itself minus itself, so I just have to divide by mu minus 1, and I can do that; then I subtract that off and get the next coefficient, and so on. It's a completely well-defined procedure, and you get as many terms as you need. It starts like this: q_0 is c0 over (mu minus 1); remember p_0 was just c0. And p_1 was already c0/2 times b squared minus c0/2 times b, so the next one, q_1, again has degree 2, and it starts with c0 over 2(mu minus 1) times b squared; there are about ten more terms which I'm not going to write out, but you can trivially find the unique polynomial in b which satisfies this recursion, and the degree of q_k(b) is again 2k. And that's very nice, because it means you have an asymptotic expansion of epsilon_b(h) (this was the thing we were approximating), and you can compute as many of these q_k(b) as you want. So let's say I use q_0 up to q_K, with K equal to, say, 20; then the new error is, up to a fixed power of X which I don't care about, e to the minus |X| times |X| to the minus 20. That's a lot better: if h was one thousandth, then X is a thousand, and I've gained a factor of 10 to the 60. So that's very nice. Now you could say, or you should say: what happens if mu is one? Then you see this equation makes no sense: all of the coefficients are polynomials in b and mu and 1/(mu minus 1), and if mu is one, 1/(mu minus 1) is of course infinite. But now the equation q_k(b+1)
minus q_k(b) equals a known polynomial can still be solved: q_k(b) just has to have degree one bigger, so now 2k plus 1. But there's an ambiguity of an additive constant, because of course if I add to each q_k some constant, then, since mu is one, it just drops out. So I don't have quite a well-defined procedure. And since it very often happens that your numbers are all positive, we definitely don't want to exclude this case. Remember what I'm doing: my mu sits somewhere on the unit circle, here's mu, this is one, and my X, I'm assuming, is that phase times an absolute value which is going to infinity. But if X is going to plus infinity, which it often does, all the coefficients positive and h positive, then of course mu is one, and so I actually want that case. It turns out that even apart from that, there's a better way to do it, which is much smoother and will give us a nicer formula. So, a better way. I'll keep the beta now just to remind you, but I'm going to set it equal to one in a minute. Replace the asymptotic expansion that I had, which was, you know, beta to the n times n factorial times
n to the gamma times (c0 + c1/n + c2/n^2 + ...). One thing that's a mess about this is that it's a combination of two kinds of functions: a gamma function, Gamma(n+1) if you think of n as continuous, and a power function, and they don't really fit together. But what this product really is, to leading order, is Gamma(n + 1 + gamma). So what I do is replace the ansatz: instead of a_n equal to the old expression, a_n has, to all orders, the asymptotic expansion Gamma(n + 1 + gamma) times (c0-prime + c1-prime/n + ...); c0-prime is actually the same as c0. This is an equally good asymptotic expansion, and I can keep the beta to the n, but it's already set to one; again I can rescale and assume beta is one. So that's better, but it's still not good, for exactly the same reason: here I didn't like multiplying n factorial by n to the gamma, because what makes sense with n factorial is to multiply not by n but by n + 1, giving (n+1) factorial; that's the functional equation. For the same reason, now that I've improved n factorial times n to the gamma to a gamma function, which is smooth, I have this gamma function divided by powers of n, which is nothing at all. So instead of doing it like that, I keep the first term, c0-double-prime times the gamma function (this one will actually equal c0), and then take c1-double-prime times the gamma function with its argument shifted down by one, plus c2-double-prime times the gamma function shifted down by two, and so on; it's just as good as an asymptotic expansion. In other words, I don't write the thing as a predetermined n factorial times a power series in 1/n; I shift the argument by integers, or rather, since I had the n to the gamma, I first shift by gamma and then by integers. And this has the advantage that the analysis will be much smoother; also, if I did it the other way, it would depend separately
on two different real parameters, but now it only depends on the combination, so it will be uniform in this shift gamma. So that's what I'm going to do. The other advantage is that in the actual examples that come up, when you do asymptotics, say for these quantum invariants, very often the asymptotic expansion came out naturally in exactly this form: you got one term like this, then the next term with the next smaller gamma argument, and so on. Of course you can always translate; there's a complete closed-form dictionary. For instance, this coefficient will still be c0, this one will still be c1-prime, I think, and the next will be a combination of c1-prime and c2-prime; the two descriptions are completely equivalent. But this one is much smoother, and it's the one I want to use. So, calling the shift lambda, my new ansatz for the a_n is: a_n has the asymptotic expansion, the sum over k of c_k times Gamma(n + lambda - k), where there's no particular reason to keep the double primes, so I just call the new constants c_k, and lambda was originally 1 + gamma in the old notation. Again there could be a beta to the n, but I can leave it out. So it's actually much cleaner: instead of a beta to the n and an n to the gamma and so on, there's just a single shift, lambda, as the only parameter, and we're writing our coefficients as a linear combination of these shifted factorials, each asymptotically smaller than the one before. OK, so if I do that, then,
with this new notation, and the same mu as before, here is what you find you need. So phi(h), which is a smooth thing for given b, so I'm fixing b, will be the sum over n from zero to capital N, where once again N + b is going to be 1/h, or 1/(beta h), but I've set beta equal to one. It's the original power series, which I'm not allowed to change; it's only the description of the coefficients that's different. But what you find now is that each c_l contributes h to the power l minus the fixed shift lambda, times one new correction term. And that's very nice: before we had two parameters, and in the original description there would have been three, but now, because I shift by one at a time, there is a single universal function, a power series in h, and it no longer depends on b and lambda separately; it only depends on b minus lambda. So I have fewer things to compute, only one kind of universal function. That sentence was hard to follow, but I'll write down what I need: a function E_b(x), where b, remember, is a real number of order one, and x is very large with a given phase mu, which we'll have to worry about, of order one and maybe real,
This e_b(x) should satisfy, in principle exactly, or at least to all orders, but in fact I can do it exactly, an equation with the same phase factors as before, very similar to the one I had with q(b+1) times mu minus q(b); but now it reads: epsilon_{b+1}(x) times mu, minus epsilon_b(x), equals, exactly on the nose, Gamma(x - b) divided by x^{x-b}. The exact details don't matter; the point is that it depends on only a single parameter, which I've called b. It wasn't originally b; it will be b minus the lambda, which was 1 + gamma, plus some integer that keeps changing, but I can just call it b, since it's only the index of some new function. So we've reduced the problem of adding a nice smoothing term to the problem of finding a function satisfying this equation. There are two approaches; I'll describe them very briefly and then I think I'll stop on this theme, with just a couple of refinements. So: how to compute this epsilon_b(x), either numerically to very high accuracy as an exact function, or as an asymptotic series in 1/x, which will typically be good enough, since x is large and we're dealing with asymptotic series anyway. You can do both. One small comment that we make in the paper: you only have to do it once. The initial index here is b shifted by the fixed lambda, that is b - lambda; then we'll need the next one, with b + 1, which looks like a lot of work to compute. But you don't have to compute it again: once you have epsilon_b for some b, the right-hand side is an exact function (your computer knows the gamma function), so you know e_{b+1} as well. So you only have to compute one of them to high accuracy, either as an exact function or as a power series in 1/x.
So suppose you do it as a power series: the asymptotic approach. You use Stirling again, for |x| going to infinity; there's no mu in this formula, it depends only on the absolute value of x, and of course on b. By Stirling, to all orders, this is the square root of 2 pi over x, times e^{-x}, times (I can actually write down exactly what it is) the exponential of the sum over i from 1 to infinity of the (i+1)-st Bernoulli polynomial evaluated at b + 1, divided by i(i+1), times x^{-i}, if there are no mistakes here. Of course you can multiply this all out, and what you get is 1 + p_1(b)/x + p_2(b)/x^2, et cetera: a perfectly good asymptotic expansion, and you can find as many terms as you want. p_0 is of course 1; p_1 is b^2/2 + b/2 + 1/12; for p_2 I'll just give the first and last terms: it's a fourth-degree polynomial which starts with b^4/8 and ends with 1/288. So you have these p's, and now we do exactly the same as we did before: we make an ansatz (we assume it, and in fact it's a theorem) that epsilon_b is like that: exactly the same leading term, but now to all orders, with some new polynomials q_k(b), which of course depend on mu just as before, over x^k. And there's an equation exactly like the one before, in fact the same one, and from it you can immediately find all the q's recursively, unless mu is 1. So that gives you an expansion. But there is a cuter way, so let me say that very briefly; then I'll make a few more comments about how you use this in practice, and then go on to the other theme. The idea is this. I'll write again the functional equation we're trying to solve (remember, x is a fixed phase times a number going to infinity), and then I'll write out the gamma function in closed form.
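As a sanity check (mine, not the lecture's), the expansion just stated, with p_1(b) = b^2/2 + b/2 + 1/12, is easy to test numerically against the exact Gamma(x - b)/x^{x-b}; for b = 0 it reduces to the classical Stirling series:

```python
import math

# Check the Stirling-type expansion stated for e_b(x) = Gamma(x-b)/x^(x-b):
#   e_b(x) ~ sqrt(2*pi/x) * e^(-x) * (1 + p1(b)/x + p2(b)/x^2 + ...),
# with p1(b) = b^2/2 + b/2 + 1/12.  (For b = 0 this is classical Stirling.)
def bracket(b, x):
    # e_b(x) divided by sqrt(2*pi/x) e^{-x}, computed in log form via lgamma
    return math.exp(math.lgamma(x - b) - (x - b) * math.log(x) + x) * math.sqrt(x / (2 * math.pi))

x = 200.0
residuals = {}
for b in (0.0, 0.7, 1.5):
    p1 = b * b / 2 + b / 2 + 1 / 12
    residuals[b] = bracket(b, x) - 1 - p1 / x   # should be p2(b)/x^2 + O(1/x^3)
print(residuals)
```

For b = 0 the residual times x^2 comes out close to 1/288, matching the quoted constant term of p_2.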
Euler's integral formula: Gamma(x) is the integral from 0 to infinity of t^{x-1} e^{-t} dt. If you take the formula I had here, substitute that, and rescale, what you find is a factor t - mu in a denominator, where mu is this phase factor. So you see there would be a problem: mu is on the unit circle, and I'm integrating t from 0 to infinity, so there's trouble if mu is 1; but for the moment let's say mu is not 1. Then you can solve this completely tractably, in closed form. Sorry, I've already written down the solution and I made a mistake on the board: the gamma function is this integral without the t - mu, and what I've done is solve the equation by summing a geometric series in the integrand, which you can do immediately; I'm sleepy, and I wrote the right answer next to the wrong question. So this is what e_b(x) is, at least if mu is different from 1: you can just write it down as an explicit, perfectly convergent integral. You can compute it by numerical integration, or you can expand it in powers of 1/x by the method of stationary phase; and if you do that, then for mu different from 1 you get exactly the same q_k that we got recursively from the functional equation. But you can also evaluate it to any accuracy as an actual integral and get epsilon_b as an exact function. If mu equals 1, I just put a Cauchy principal value: you integrate as usual from t = 0 up to 1 - epsilon, skip the singular point, and start again on the other side.
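A side remark on the numerics (my sketch, not the lecture's): a principal value is easy to compute by folding the integration interval symmetrically about the pole, which cancels the singularity. Here it is for the model integral PV of e^{-t}/(t - 1) from 0 to infinity, a stand-in with the same kind of pole at t = 1 as the mu = 1 case (not the actual epsilon_b integrand), compared against the closed form -Ei(1)/e:

```python
import math

def simpson(f, a, b, n=2000):
    # basic composite Simpson rule (n even)
    step = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * step) for i in range(1, n))
    return s * step / 3

# PV int_0^oo e^{-t}/(t-1) dt: fold (0,2) symmetrically about the pole, t = 1 -+ u;
# the two sides combine to a smooth integrand, then add the pole-free tail.
def folded(u):
    if u == 0.0:
        return -2.0 / math.e                  # limit of (e^{-(1+u)} - e^{-(1-u)})/u
    return (math.exp(-(1 + u)) - math.exp(-(1 - u))) / u

pv = simpson(folded, 0.0, 1.0) + simpson(lambda t: math.exp(-t) / (t - 1), 2.0, 40.0)

# closed form: -Ei(1)/e, with Ei(1) = euler_gamma + sum_{n>=1} 1/(n*n!)
ei1 = 0.5772156649015329 + sum(1.0 / (n * math.factorial(n)) for n in range(1, 20))
print(pv, -ei1 / math.e)
```

The folding trick is exactly the "skip epsilon on both sides with the same epsilon" prescription, carried out analytically before discretizing.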
That is, you resume at 1 + epsilon, with the same epsilon, and integrate on to infinity; the limit of that as epsilon goes to 0 is, as you all know, the Cauchy principal value. It's a well-defined function, and you can compute it numerically any way you want. So that's basically what you do. The advantage of doing it this way is that you get an exact function. When I did it before, with the sum of the q_k(b), that series is of course again factorially divergent (you can estimate that the q_k grow like k!), so it wouldn't be exact; but that doesn't bother you, because, as I said before, even if you stop at twenty terms, you are still gaining a factor x^{-20}. And here, because the correction series is itself divergent but asymptotic, I can play the same game again; that's the second level of truncation. I replace it by its truncated sum with this particular smoothing: you stop the series where the terms become of the same order, or you can even repeat the smooth truncation on it too, if you have enough information about the q_k. Sorry, I'm not writing this well, because I didn't write the key formula. Let me write once again what I'm doing, since I'm sure you've lost the thread a little, and even I did. phi_smooth(h) is the initial optimal truncation: you sum the first capital N terms, with, as usual, N + b equal to 1/h, which is what I'm calling x. But then we have to add the correction: the sum over l greater than or equal to 0 of the c_l's, where, remember, these are the new c_l's, coming from my assumption that a_n is
c_0, after the change of notation, times the shifted gamma Gamma(n + lambda), plus c_1 times the shifted gamma with argument one less, and so on; that was my assumption. So these are the new expansion coefficients, and each one separately multiplies (I think I did write this, and if not I apologize for skipping a line) a specific power series: the epsilon with index b - lambda + l, as a function of 1/h. So we have this expansion, and it is not an exact identity. The epsilon function itself I can get exactly: with the integral representation we can assume we know it to all precision. But the series over l of course again diverges, so I have to stop at some capital L, and the error I make then gains a factor h^L; h is 1/x, so it's what I said before, except that it's not the x^{-k} there, it's this x^{-L}. So I've gained a power. But now I can do exactly the same thing again. In practice, since the c_l grow factorially, even if I don't know enough to know their full asymptotic form, combining what I do know about the c_l with this, I can carry out the next stage, use the same optimal truncation, and therefore I can choose L optimally.
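A toy illustration of this least-term truncation (my example, not the lecture's): the classical Euler series, the sum of n!(-h)^n, whose Borel sum is the integral of e^{-t}/(1 + h t), shows how the error is smallest when one stops near N = 1/h:

```python
import math

# Toy model of least-term ("optimal") truncation: the factorially divergent
# Euler series sum_n n!(-h)^n, whose Borel sum is F(h) = int_0^oo e^{-t}/(1+ht) dt.
h = 0.1

def simpson(f, a, b, n=4000):
    # basic composite Simpson rule (n even)
    step = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * step) for i in range(1, n))
    return s * step / 3

exact = simpson(lambda t: math.exp(-t) / (1 + h * t), 0.0, 60.0)

def partial(N):
    return sum(math.factorial(n) * (-h) ** n for n in range(N + 1))

# the terms n! h^n shrink until n ~ 1/h = 10 and then blow up;
# stopping near there minimizes the error
err = {N: abs(partial(N) - exact) for N in (4, 10, 18)}
print(err)
```

The error at N = 10 is of size roughly e^{-1/h}, far smaller than any single power of h, which is the starting point that the smoothing and the correction series then improve on.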
If I choose L optimally, this gives me the optimal smooth truncation, as it were; that's the best you can do by this method. And when you do that, you can do the analysis (I'm not going to write it down here, and it's not that interesting) and you find the following: the old error was e to the minus c/h with the constant c equal to 1 in my normalization, and the new error will be e to the minus c'/h for some other constant c', with c' strictly bigger than 1. So you don't just gain a power, you gain an exponential factor, and a specific one. And that's all we know how to do. You could do a third level by doing it again on this, but first of all it's very complicated and we never tried; and secondly there are earlier papers, especially by Michael Berry and collaborators (Berry and Howls, building on Dingle; some of these names I mentioned before), that do something of the same general sort. Their setting is much more specific, for a particular series, but they discovered that if you keep iterating the procedure you get an exponentially good term, then a better one, then a better one, but there's a limit: you don't get an arbitrary e^{-lambda/h} for every lambda, the lambdas stop, and there isn't that much percentage in going on. So you do the second truncation, and the final formula will be this: we have N terms, where N is of order 1/h (in fact exactly 1/h), and L terms, where L is some other constant times 1/h that you can compute in terms of everything else; I don't remember it. So we have a second correction term which is much smaller: the main truncation is already correct up to an error as small as e^{-N}, the correction improves the exponential, and then you stop. And then the final point. Once you do this, say with L fixed, L equal to 3, or let's start with capital L equal to 0 or even -1, meaning you haven't added any correction: then we know you have to take N to be exactly 1/h plus b, where b is a fixed number. But now it turns out
you don't have to do that. Once you're going to add a correction term, and it's a very big correction term, with the same order of magnitude of terms as the main one, you can sometimes go to a later stopping point, or an earlier one, I forget which way. So, the optimal stopping point: let N_1 be some constant c_1 times 1/h and N_2 some constant c_2 times 1/h. When c_2 was pre-chosen to be 0, that is, no correction at all, the only sensible value of c_1 was 1; but if you let c_2 take different values, it turns out the optimal c_1 changes, and you can work out the total error and optimize it away from the original stopping point. So the final step is that you stop not at the place where the original terms are smallest, but where the total error, the terms together with the truncated correction, is smallest; and then it turns out you gain another exponential. So you get a third level: with simple optimal truncation we had roughly e^{-x}; then we had e^{-c'x} with c' bigger than 1; and if you do this you actually get e^{-c''x} with c'' bigger than c'. We worked this out in some examples. I don't want to go into it more; it isn't really a main subject of this course, but it is very much about asymptotics, and it also uses the kind of analysis we did before, recognizing that the asymptotics of the coefficients was some n! times some beta^n and n^gamma and so on. Now we're assuming we have found those asymptotics, to as many terms as we like, and we want to numerically evaluate the power series. OK, so that was my first topic, and I want to come to the second; I have a little more than half an hour left, so I can give it at least some treatment, though probably not say everything. So now I want to talk about the general case. Let's imagine power series again.
The setup is slightly changed: that was the truncation story, and here I want to assume the following. I'm looking at power series like before, the sum of a_n x^n in some variable x, where the a_n are again of this type: the most important feature is the power of the factorial, n! to the alpha; the next most important is the exponential, beta^n; next in importance is the power n^gamma. And again you might want to write it differently, with the single shifted gamma function, as I did before. If I assume something like that, I'll call the class of all such functions S_{alpha,beta,gamma}: all power series sum of a_n x^n (the definition is deliberately a little vague) whose coefficients satisfy this asymptotic expansion. These are what are called Gevrey classes, here refined by beta and gamma. Alpha of course should be positive, since we want the coefficients to be increasing; beta should certainly be positive in the typical examples; gamma can be positive or negative. This class is obviously closed under addition, so it's a vector space, and it's closed under multiplication for alpha positive, which is still relatively easy; but it's not closed under other operations, so I don't want to be quite this specific. I'll do it exactly as in the paper; this part is from a paper with Dawei Chen and Martin Möller, not so many years ago, 2018, so it's a published paper, and in the last section, or maybe an appendix, we discuss these series with very rapidly growing coefficients. If I don't care about the gamma, I take S_{alpha,beta}, the union of all S_{alpha,beta,gamma} over gamma; I just require the expansion with some gamma. So there I fix the exponential.
And simply S_alpha is the union over beta of S_{alpha,beta}. Finally we had a class called S-hat, the asymptotic class with indices alpha and beta: series whose coefficients have a full asymptotic expansion of this n!^alpha beta^n type. There you don't want to fix gamma, because then you wouldn't even have a vector space: if you subtract two things, the order can drop; you just ask for an expansion in descending powers of n. So there were three or four different classes. Then the theorem says the following; part of it is true even for alpha merely positive, and alpha equal to 1 is the factorially divergent case we discussed the whole time today. But take alpha bigger than 1; alpha equal to 2 is of course the most common, so the simplest test case is the series I'll call f(x), the sum of (n!)^2 x^n. That's more divergent than what we had before, and what I said before, which was specifically for single factorials, wouldn't apply here. So if alpha is bigger than 1, it turns out (it's a little proposition, or a theorem) that each of these classes, S_alpha, S_{alpha,beta}, S_{alpha,beta,gamma}, and the asymptotic one, is closed under various operations. The operations are, first of all, addition: if f(x) and g(x) have expansions of that sort, so does their sum. Multiplication: f(x) times g(x). Now composition, g(f(x)), where of course you have to assume that f starts with a suitable constant term, which you can normalize to be 1; anyway, we don't care about the first coefficients, what matters is the asymptotics of the large ones. So I'm saying: if you have a function whose n-th coefficient grows roughly like, say, (n!)^2, some power bigger than 1, and you have another one whose coefficients also grow at most like (n!)^2, then when you do the composition it's
still this (n!)^2; it's always going to be the same alpha, and even the same beta. OK. Then there are also powers, f(x) to some exponent lambda; of course if lambda is a positive integer you can get that by iterating multiplication. And then the most interesting is inversion: if f is an invertible power series, the inverse series f^{-1}(x) has coefficients of the same order of growth. That is simply not true if alpha is 1. Is it obvious? I can't think of an easy example right now, but it is simply not true: if you have a power series whose coefficients grow like n!, not n! to a bigger power, then the inverse power series might have much smaller coefficients. But it is true if alpha is bigger than 1, and that's the surprising thing; it's somehow easier to deal with the rarer case of the extremely rapidly divergent series. They're presumably even harder to evaluate numerically in any intelligent way, but for working with them as formal power series the situation is actually very good. So let me give a few bits of this; the whole appendix is 12 pages or something, and I'm not going to give everything. Write a_n as n! to the alpha, times beta^n, times what's left, which I'll call a_n-tilde; I'm fixing alpha and beta here (remember, for the asymptotic class I had to let gamma vary), so a_n-tilde is of order O(n to some power): we now have coefficients that grow sort of normally. Then take the example of multiplication: if f(x) is the sum of a_n x^n, g(x) the sum of b_n x^n, and f(x) g(x) the sum of c_n x^n, then of course c_n is the sum over m of a_m b_{n-m}. But if I do it for the modified coefficients, after taking out that factor, I get a twisted version of this convolution.
Namely, c_n-tilde is the sum over m from 0 to n of the binomial coefficient (n choose m) to the power minus alpha, times a_m-tilde b_{n-m}-tilde. The beta^n part drops out, because beta^m times beta^{n-m} is beta^n, while the factorials to the alpha produce the binomial coefficient. So we get a convolution, but a kind of twisted one. Now, how big is (n choose m)? It's at most 2^n, since the sum of all the binomial coefficients is 2^n; so the factors in the middle of the sum are very, very small, while at the edges they start at order 1: the factor is 1 at m = 0, then n^{-alpha}, then roughly n^{-2 alpha}, and the same at the other end, when m is near n. So writing out the terms from the ends, the sum starts with a_0-tilde, which is just a number, times b_n-tilde; then a_1-tilde times b_{n-1}-tilde, now divided by n^alpha, so a little smaller; et cetera. That's the beginning, plus stuff of the same sort from the other end. The terms in the middle are completely negligible, because there the binomial coefficient is like 2^n to a fixed positive power, while, remember, the growth of these a-tildes is only like a power of n. Therefore you can compute the exact asymptotics: these tildes are numbers that you know, each term is negligible compared with its predecessor, and you get a contribution from the beginning and one from the end, roughly. OK. But composition is much, much more interesting; and inversion is related to composition, more or less deduced from it, so I'll skip it, but it is true. So let's think about composition now. Actually I'm going to change my notation slightly: I'll let f(x) be the sum over n from 0 to infinity of a_n x^n, but I'll assume a_0 is 1. Before, when I composed, I wanted something that started with x, so I'm going to take g of x times f(x), that is, g(x f(x)), where g is some other power series.
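Before going on, the two-ends structure of the twisted convolution described above is easy to see numerically (my sketch, in the simplest case where both tilde sequences are identically 1, i.e. a_n = b_n = (n!)^alpha with alpha = 2):

```python
import math

# Twisted convolution from multiplying two series with a_n, b_n ~ (n!)^alpha:
# after dividing out (n!)^alpha the product coefficients become
#   c~_n = sum_m binom(n,m)^(-alpha) * a~_m * b~_{n-m}.
# Simplest case: a~_m = b~_m = 1 (i.e. a_n = b_n = (n!)^alpha), with alpha = 2.
alpha, n = 2, 40
terms = [math.comb(n, m) ** (-alpha) for m in range(n + 1)]
c_tilde = sum(terms)

# the two endpoint terms give 1 each; the next pair is down by n^alpha;
# everything in the middle is smaller by further powers of n
ends = terms[0] + terms[-1]          # = 2
next_pair = terms[1] + terms[-2]     # = 2/n^2
middle = c_tilde - ends - next_pair  # negligible remainder
print(c_tilde, ends, next_pair, middle)
```

So the sum is dominated by the two ends, exactly as claimed, and the middle of the convolution contributes only far below the leading corrections.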
Say g(x) is of course the sum of b_k x^k. Then I take g(x f(x)), and call this the sum of c_n x^n. If you just substitute, you obviously see that c_n is the sum over k from 1 to infinity, except that it stops at n (apart from that there's just the constant coefficient b_0): c_n is the combination of these b_k times a_{n-k,k}, where a_{n,k} is defined as the n-th coefficient of the k-th power, f(x)^k equals the sum of a_{n,k} x^n. So far this is just a triviality; I've substituted what it means to compose. And of course you can do it completely explicitly: the k-th power is just the power series (1 + a_1 x + a_2 x^2 + ...)^k, which is some power series 1 + k a_1 x + (something) x^2 + ..., and you can obviously work out as many terms as you want; if you fix n, there's a fixed finite formula for a_{n,k}. But what we need, in order to see how the whole thing grows, and this is where it now gets interesting and kind of fun, is estimates. There will be a crude estimate and then a much better one, and I want to concentrate on that, because that's the mathematically interesting thing that happens. So once again, fixing the notation: I have a power series, the sum of a_n x^n, where for convenience a_0 is 1, and I define a new two-index sequence a_{n,k} as the n-th coefficient of the k-th power of the original series. So here's a proposition; it's called a lemma in the appendix.
Assume (to make the constant exactly 1; one could allow a constant or a power of n in front, but let's keep it simple) that a_n is bounded by n! to the alpha for all n greater than or equal to 0, where for the moment alpha is allowed to be 1 but not smaller. Then the claim is that a_{n,k} satisfies two different estimates, and sometimes one is better, sometimes the other. One of them is: a constant to the power k - 1, times n! to the alpha. The other is also n! to the alpha, but multiplied by the binomial coefficient (n + k - 1) choose either n or k - 1, whichever you prefer. So we have these two things. The proof is very easy; it's the same kind of proof. Remember that I write a_{n,k}-tilde for a_{n,k} divided by n! to the alpha; in this class there's no beta and no gamma, I just divide by the factorial power. So I have two things to show. For k = 1 the tilde is bounded by 1, by definition: that is exactly the assumption. So the constant is 1 when k is 1, and it's quite reasonable to have the (k-1)-st power here. And I have to show that this renormalized quantity is bounded both by the (k-1)-st power of a certain constant C and by that binomial coefficient. C = C(alpha) is a well-defined number which I'll define in a second; the two most interesting cases are alpha equal to 1 and 2, and the numbers are 8/3 and 9/4. In particular, if I take the case of n-factorial-squared, which will be my test case in a moment, this bound says the tilde grows at most exponentially in k, like (9/4) to the k - 1.
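The constant C(alpha) is the maximum over n >= 1 of the sum over m of (n choose m)^{-alpha} (its definition is spelled out in a moment); the two quoted values 8/3 and 9/4 can be confirmed exactly (my check):

```python
import math
from fractions import Fraction

# C(alpha) = max over n >= 1 of sum_{m=0}^n binom(n,m)^(-alpha).
# The sum tends to 2 as n -> infinity, so the maximum sits at small n;
# scanning n up to 60 is plenty, and Fractions keep the arithmetic exact.
def C(alpha, nmax=60):
    return max(sum(Fraction(1, math.comb(n, m) ** alpha) for m in range(n + 1))
               for n in range(1, nmax + 1))

print(C(1), C(2))   # the two interesting cases, alpha = 1 and alpha = 2
```

For alpha = 1 the maximum 8/3 is attained at n = 3 (and n = 4); for alpha = 2 the maximum 9/4 is attained at n = 2.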
Now I can tell you what C(alpha) is: it's the maximum over all integers n greater than or equal to 1 of the same sum we had before, the sum over m of the binomial coefficients (n choose m) to the power minus alpha. It's easy to see that for n very large this tends to 2, while for smaller n it's bigger; the thing goes up and then comes down, so there is a well-defined maximum, a specific value, always attained at some finite n. Well, let me actually prove the lemma, because it's short and sweet. I take the recursion for the a_{n,k} and divide everything by the factorials; remember, the tilde just means I divide a_{n,k} by n! to the alpha. Putting in absolute values, |a_{n,k+1}-tilde| is bounded by the sum over m of (n choose m)^{-alpha} times |a_{m,k}-tilde| times |a_{n-m,1}-tilde|; and a_{n-m,1}-tilde has absolute value at most 1, which is exactly my assumption. So |a_{n,k+1}-tilde| is at most the sum, m from 0 to n, of (n choose m)^{-alpha} |a_{m,k}-tilde|. That's certainly true, and now I prove both statements by induction. For the first: this is at most the sum of (n choose m)^{-alpha} times C(alpha)^{k-1}; the second factor is a universal number not depending on n at all, and the remaining sum is at most C(alpha), because that was the definition of C(alpha), the maximum of this sum. So the whole thing is at most C(alpha)^k, which proves the first bound by induction. The other one is equally easy: again by induction, you bound |a_{n,k+1}-tilde| using the inductive hypothesis that a_{m,k}-tilde
is at most (m + k - 1) choose (k - 1); then the sum over m from 0 to n of these binomial coefficients is, by a trivial binomial-coefficient identity, exactly (n + k) choose n, and the bound simply reproduces itself. So both statements are proved by a one-line induction. So we have this kind of stupid bound, and from it you can estimate the composition. Now what happens is this; it's the same as before. When you write out the big sum, you have terms at both ends. If alpha is less than 1, or even equal to 1, then the terms at least at one end are all of the same order of magnitude, and you can't do anything. But if alpha is 2, or 1.3, anything bigger than 1, then the leading term of the sum is huge, the next is smaller by a positive power of n, the next smaller by another power of n, and so on; and what you get is a well-defined asymptotic expansion of the whole sum, with a leading term that is something times n!^alpha, times a well-defined power series in 1/n. That only works if alpha is big enough that there's no cancellation; as I said, these statements are actually false if alpha is 1, where one can quite easily write down counterexamples. So what I want to do now, for the last part, is find the actual asymptotics, because that's really a fun mathematical problem, and this is after all a course about asymptotics. So define the a_{n,k} as before, but now take the case alpha equal to 2 with a_n simply (n!)^2; I'll take the very simplest function. Then a_{n,k} is a function of n and k, and the question is how big it is. The bounds we just proved (I think I haven't erased them) are: first, a_{n,k} is at most (n!)^2 times C(2)^{k-1}, and I told you C(2) is 9/4, so it's at most (9/4)^{k-1} (n!)^2, an actual inequality, not just an asymptotic statement; and second, it's also at most (n!)^2 times (n + k - 1) choose (k - 1). Now, if k is
something small and fixed, the first bound is independent of n while the second is a polynomial in n, so the second one is worse; but if k is a little bigger, the exponential (9/4)^{k-1} starts hurting you. So either of these can dominate the other, but both of them are very, very far from the truth. So the question, and as I say this is the last thing I'll do today, in the 17 minutes remaining (without going into the inverse power series, which is just a corollary of all these estimates; if you care about it, it's in the paper I just gave you the reference for), is: what is the actual truth? How do these things actually grow, with a_n equal to (n!)^2 as the test case? If k is 1, then a_{n,1} is (n!)^2 by definition, and both bounds give the same number for k = 1; but what happens when k is large, and how large should it be, in the different domains? Here I'll go back to my handwritten notes; maybe they'll be legible and maybe they won't, but I'll try. So we have the two bounds we already had, (9/4)^{k-1} (n!)^2 and (n + k - 1 choose k - 1) times (n!)^2. First think of n as fixed; I was going to say small, but of course every fixed number is small once the other quantities go to infinity. Then the sum over n of a_{n,k} x^n is by definition the k-th power of the original sum, and the original sum is just the fixed power series whose n-th coefficient is (n!)^2. As I already said, we can of course multiply this out, and I'll write the first terms: it's 1, plus k x, plus (k^2 + 7k)/2 times x^2, plus (k^3 + 21 k^2 + 194 k)/6 times x^3 (that's actually four terms, counting the constant), and so on. So we get some polynomials in k.
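Both bounds of the lemma can be checked by brute force for the test case a_n = (n!)^2 (a small verification of mine):

```python
import math

# Brute-force check of the two bounds of the lemma for a_n = (n!)^2 (alpha = 2):
#   a_{n,k} <= (9/4)^(k-1) * (n!)^2   and   a_{n,k} <= binom(n+k-1, k-1) * (n!)^2,
# where a_{n,k} is the n-th coefficient of the k-th power of sum_n (n!)^2 x^n.
def power_coeffs(a, k, N):
    # coefficients of (sum_n a[n] x^n)^k up to x^N, by repeated convolution
    c = [1] + [0] * N
    for _ in range(k):
        c = [sum(c[m] * a[n - m] for m in range(n + 1)) for n in range(N + 1)]
    return c

N = 12
a = [math.factorial(n) ** 2 for n in range(N + 1)]
worst1 = worst2 = 0.0
for k in (1, 2, 5, 10):
    coeffs = power_coeffs(a, k, N)
    for n in range(N + 1):
        fac = math.factorial(n) ** 2
        worst1 = max(worst1, coeffs[n] / ((9 / 4) ** (k - 1) * fac))
        worst2 = max(worst2, coeffs[n] / (math.comb(n + k - 1, k - 1) * fac))
print(worst1, worst2)   # both ratios stay <= 1
```

The first ratio actually touches 1 at n = 2, k = 2 (where a_{2,2} = 9 = (9/4) * (2!)^2), so the constant 9/4 is sharp there, while both bounds are indeed far from the actual growth elsewhere.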
What that means: let me take n = 3 as an example and look at a_{3,k}-tilde. I keep confusing myself here, so let me say it once more: a_{n,k} is always to be divided by (n!)^2, so for n = 3 I divide by 36; and since a_{3,k} is (k^3 + 21 k^2 + 194 k)/6, the 1/6 and the 1/36 combine into 6 cubed, and I get a_{3,k}-tilde equal to (k/6)^3 times (1 + 21/k + 194/k^2). That's an exact formula for the third coefficient after the tilde normalization. But if you continue and think about it a bit, what you find is that for n fixed and k going to infinity (and now I don't even have to divide; I'll just leave it as it was), a_{n,k} is approximately a first function, call it A_1(n,k). It's defined by expanding; it's a five-minute exercise, and I can just write out the answer: it grows roughly like k^n over n-factorial-cubed, times a correction. Remember n is fixed, for instance n could be 3 or 5, so that denominator is just a fixed number and this is the n-th power of k. The correction is a power series in 1/k, and it turns out to be much better to take its log: the log starts with (7/2) n(n-1) over k (all the terms of the log carry this n(n-1)), and the next term is (994 n - 335)/12, if I'm reading my notes correctly, over k^2, and so on.
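And the exact coefficient of x^3 in the k-th power can be verified by brute-force multiplication; it comes out as (k^3 + 21 k^2 + 194 k)/6, with 194 as the linear coefficient (my check):

```python
# Coefficient of x^3 in (1 + x + 4x^2 + 36x^3 + ...)^k, i.e. a_{3,k} for
# a_n = (n!)^2, by direct repeated multiplication of the truncated series
# (terms beyond x^3 cannot influence the x^3 coefficient):
def x3_coeff(k):
    c = [1, 0, 0, 0]
    a = [1, 1, 4, 36]
    for _ in range(k):
        c = [sum(c[m] * a[n - m] for m in range(n + 1)) for n in range(4)]
    return c[3]

# matches (k^3 + 21 k^2 + 194 k)/6 for every k; k = 1 recovers a_3 = 36
print([x3_coeff(k) for k in range(1, 6)])
```

Equivalently, counting contributions directly gives 36k (one x^3 factor) + 4k(k-1) (one x^2 and one x) + k(k-1)(k-2)/6 (three x factors), which sums to the same cubic in k.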
And all of these correction terms are small, because $n$ is fixed and they come with $1/k$, $1/k^2$, and so on. But actually, now you see that this already makes sense when $k$ is much bigger than $n^2$: if $n \ll \sqrt{k}$, then this first term is already small. But the degree in $n$ goes up from term to term, and it turns out that what you actually need is — sorry — $n$ much less than $k^{1/3}$. So if $n \ll k^{1/3}$, then even though you derive this by thinking of $n$ as fixed and expanding, the expansion makes perfectly good sense in that whole domain $n \ll k^{1/3}$. OK, time is a little short, so I think I'll just give the other regime. If $k$ is fixed and $n$ goes to infinity, then we use what we did before. Remember what we're doing: we take $(1 + x + 4x^2 + 36x^3 + \cdots)^k$. Before, I said that the coefficient of $x^3$, or of $x^{10}$, is just a polynomial in $k$ for any fixed $n$, with the asymptotics I showed you — I didn't do it very well, but if $n = 3$ and you go back, you would see this is $k^3$ times — since $n$ is $3$, the $\frac{7}{2}\,\frac{n(n-1)}{k}$ would be $\frac{21}{k}$ — so it comes out exactly like what I had before, $1 + \frac{21}{k} + \cdots$. For any fixed $n$, that is the form of the asymptotic expansion; and, as I said, $n$ need not even be fixed — you can let $n$ go all the way up to the cube root of $k$. But you can also do it the other way. If $k$ is fixed, then I'm taking, for instance, the fifth power of this series; and remember, when I multiply two series it's just a convolution, so I can make an easy estimate from this recursion. If you do that — it's another four lines of calculation which I'm not going to do — then you find... So both of these are true statements: for $n$ fixed, this is the asymptotics — this completely explicit power series.
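Before the second expansion is written out, here is a quick illustration of the fixed-$k$ regime — a sketch of mine, not from the lecture. For fixed $k$, the normalized coefficients $\tilde A_{n,k} = A_{n,k}/(n!)^2$ stay bounded: they rise to a maximum and then decrease back toward $k$ (the contribution where a single factor carries all of $x^n$, which can happen in $k$ ways, plus positive cross terms); their supremum over $n$ is essentially the constant $M_k$ that returns at the end of the lecture:

```python
from math import factorial

def pow_series_coeffs(k, N):
    """Coefficients up to x^N of (sum_{r>=0} (r!)^2 x^r)^k, by repeated convolution."""
    base = [factorial(r) ** 2 for r in range(N + 1)]
    out = [1] + [0] * N
    for _ in range(k):
        out = [sum(out[i] * base[n - i] for i in range(n + 1)) for n in range(N + 1)]
    return out

k, N = 5, 20
tilde = [c / factorial(n) ** 2 for n, c in enumerate(pow_series_coeffs(k, N))]
# the normalized coefficients rise and then fall back toward the limit k = 5
print(tilde[1], tilde[2], tilde[3])   # -> 5.0 7.5 7.5
```

For $k = 5$ the maximum $7.5$ sits at $n = 2, 3$, and the later values drift back down toward $5$; this is the sense in which $A_{n,k} \le M_k (n!)^2$ with a constant depending only on $k$.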
To all orders — well, I didn't actually give it to all orders, but it's a power series in $1/k$ whose coefficients are polynomials in $n$, in $\mathbb{Q}[n]$. But if you do it the other way, then the asymptotics are completely different. It's still exponentially big in $n$; so let's call the coefficients $g_{-1}$, then $g_0$, then $g_1/n$, $g_2/n^2$, and so on. So we have a completely different kind of expansion, which looks like this, but now the dependence on $k$ has a certain form. And again this is going to be true — but here is what I didn't say before: this first one actually holds if $n$ is much less than some small constant times $k^{1/3}$, and this one if $n$ is considerably bigger than $k^{1/3}$. So when you look at the nature of these expansions, which I'm about to write out, you find that each one makes sense as an asymptotic series, and therefore gives you arbitrary precision — any given degree of accuracy — in these two domains. If you draw the curve $n = k^{1/3}$, then when you're well to the left of it you have one expansion, and well to the right you have the other: on one side you have $A^{(1)}$, on the other $A^{(2)}$. Now, I haven't yet said what $g_{-1}$, $g_0$ and so on are, and that's actually quite amusing. First of all, I didn't say what these $g_i$ are: each one is a universal function of the quotient $k/n^3$ — remember, that's our basic parameter, because now we know that the tricky place is when $k$ is of the order of $n^3$. I'll call this number $t$. In one asymptotic regime the argument $t$ will be very small, and you use the asymptotics of the $g_i$ on one side; in the other regime, on the other side. Here I can tell you the first one: $g_{-1}(t)$ is a sum you recognize very easily once you write out the first twenty terms — it's just hypergeometric — a sum of $t^n$ with coefficients built from $\frac{(3n)!}{n!\,(2n)!}$.
This is convergent, because $\frac{(3n)!}{n!\,(2n)!}$ grows like a binomial coefficient, so the series converges with some radius of convergence — which is $4/27$; I'm sure it's $4/27$. And if you know this kind of binomial sum, you would see immediately that it's going to be algebraic — actually it's not quite algebraic: its derivative is algebraic, so there's a log. What you do is write $t = a(1-a)^2$. That's a cubic polynomial in $a$ whose maximum between $0$ and $1$ is at $a = 1/3$, where it equals $4/27$; so if $t < 4/27$ there is a unique root $a$ between $0$ and $1/3$, and that's the root you have to take. That's what $g_{-1}$ is. And then it turns out you can get all the others in closed form in terms of $a$. For instance, $g_0$ is $\frac12 \log$ of $(1-a)^3$ over $1-3a$, and all the rest are rational functions of $a$ — in fact polynomials up to such denominator factors. I'll just give you one of them: $g_1 = \frac{7a - 59a^2 + 191a^3 - 204a^4 + 9a^5}{2\,(1-a)^2\,(1-3a)^3}$ — you see the same $7$ showing up in front, as in the $\frac72$ before; that's the extreme case. So each $g_i$ is a well-defined rational function of this $a$, and you have a completely well-defined asymptotic expansion. And now the final joke. Let's look, by Stirling's formula, at how big these things are. This first one — I wrote it twice — the $k^n/(n!)^3$: by Stirling's formula you can work out that it is asymptotically equal to $\frac{1}{(2\pi n)^{3/2}}\,(e^3 t)^n$. So up to a simple power of $n$ it's purely exponential in $n$, with base $e^3 t$.
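The Stirling computation is easy to sanity-check numerically — a sketch of mine, comparing $k^n/(n!)^3$ against $\frac{1}{(2\pi n)^{3/2}}(e^3 t)^n$ at a fixed value of $t = k/n^3$:

```python
from math import lgamma, log, pi, exp

def lhs_log(n, k):
    """log of k^n / (n!)^3."""
    return n * log(k) - 3 * lgamma(n + 1)

def rhs_log(n, t):
    """log of (e^3 t)^n / (2 pi n)^(3/2)."""
    return n * (3 + log(t)) - 1.5 * log(2 * pi * n)

t = 0.05
for n in (50, 200, 800):
    k = t * n**3          # keep t = k/n^3 fixed as n grows
    err = abs(exp(lhs_log(n, k) - rhs_log(n, t)) - 1)
    print(n, err)         # ratio tends to 1; discrepancy shrinks roughly like 1/(4n)
```

Working in log-space via `lgamma` avoids overflowing floats; the $1/(4n)$ rate comes from the next term of Stirling's series for $(n!)^3$.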
Here $t$ is this fixed number $k/n^3$. So imagine I fix $t$ and let $k$ grow like a fixed multiple of $n^3$. If $t$ is very small we're in one regime, and if $t$ is very large we're in the other — I may have it backwards which one I pointed at — so $t$ is the key parameter, and this one grows exponentially, with base $e^3 t$. But this other one, $A^{(2)}(n,k)$: you can read off immediately that it grows like some constant — it doesn't matter what it is, though I can even tell you what it is, in terms of the same number $a$, which, remember, is constant here: it depends only on $t$, through the algebraic equation $a(1-a)^2 = t$. The constant is $a(1-a)^{7/2}\big/(1-3a)^{1/2}$; then there's a small power of $n$ — just as there we had the power $n^{-3/2}$ — and again an exponential. Now the base of the exponential here is not fixed — well, there it was also not fixed, it was linear in $t$: it was $e^3 t$. So one of them grows like $(e^3 t)^n$, but the other like $B(t)^n$ for some other function $B$ of $t$, and $B(t)$ I can write here, in terms of $a$: it's simply $(1-a)^2\,e^{3a}$. So at some point these two curves cross, and there's going to be a $t_0$ — I don't know if I drew the second one correctly — there's a $t_0$. Up to now I just said: if $t$ is much less than one, very small, you're in one regime; if $t$ is very big, you're in the other. But now we can be very precise: there is a specific $t_0$, and if I take a little epsilon-neighborhood of it, then to the left of $t_0 - \epsilon$ I'm in the regime of $A^{(2)}$, and here, to the right, in the regime of $A^{(1)}$. So here's $t_0$, here's $t_0 - \epsilon$, $t_0 + \epsilon$; it splits into these two regimes.
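As a sanity check on the substitution $t = a(1-a)^2$ — a sketch of mine, not from the lecture. The exact normalization of the $g_{-1}$ series wasn't fully written out above; under the assumption (consistent with the $B(t) = (1-a)^2 e^{3a}$ just given) that $e^{g_{-1}(t)} = B(t)$, Lagrange inversion of $t = a(1-a)^2$ produces the coefficients $\frac{1}{n^2}\binom{3n-2}{n-1}$ used below, which are indeed of the $\frac{(3n)!}{n!\,(2n)!}$ shape:

```python
from math import comb, log, exp

def a_of_t(t, tol=1e-14):
    """The root a in (0, 1/3) of a(1-a)^2 = t, for 0 < t < 4/27, by bisection
    (a(1-a)^2 is increasing on this interval)."""
    lo, hi = 0.0, 1.0 / 3.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * (1 - mid) ** 2 < t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t = 0.05                       # any 0 < t < 4/27 works
a = a_of_t(t)
# hypergeometric-type series for g_{-1}; the normalization (1/n^2) C(3n-2, n-1)
# is my reconstruction via Lagrange inversion, as explained above
g = sum(comb(3 * n - 2, n - 1) / n**2 * t**n for n in range(1, 200))
assert abs(g - (3 * a + 2 * log(1 - a))) < 1e-12        # closed form in a
assert abs(exp(g) - (1 - a) ** 2 * exp(3 * a)) < 1e-12  # = B(t)
```

The closed form $g_{-1} = 3a + 2\log(1-a)$ shows both claims at once: the derivative $t\,g_{-1}'(t) = a(t)$ is algebraic, and the log is exactly the non-algebraic piece.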
And the crossing point is given by the following, really bizarre, formula. You can't really write $t_0$ directly — it's always $a$ which is the true parameter — so I have to tell you what $a_0$ is. $t_0$ will be $a_0(1-a_0)^2$, and $a_0$, by what I just said, is the root of the equation saying that this function $B(t)$ I just wrote out equals $e^3 t$. When you work that all out, for $a_0$ it just means $e^{3a_0 - 3} = a_0$. And if you enter the numbers, then $a_0 = 0.059\ldots$ and $t_0 = 0.0526$, or maybe one more digit. So these are specific numbers, and you have asymptotics that make perfectly good sense. If you look at the convergence of this, you'll find that if $t$ is in the regime where, say, $A^{(2)}$ is the formula you like, then this series will have well-defined asymptotics of one sort, and the other of the other sort, and you have everything to all orders. So the only question is what happens as they cross. And now the most naive guess would be: well, one of these is much bigger than the other on the left, and the other is much bigger on the right — so let's just try $A(n,k)$, defined as $A^{(1)}(n,k) + A^{(2)}(n,k)$. These are of course not well-defined numbers; they're given by asymptotic power series expansions.
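This crossing equation is easy to check numerically — a sketch of mine; the relation $n_0 = (k/t_0)^{1/3}$ just unwinds $t = k/n^3$, and $k = 50{,}000$ is the value used in the experiment the lecture turns to next:

```python
from math import exp

# fixed-point iteration for e^{3a-3} = a; the map a -> e^{3a-3} is a
# contraction near the root (its derivative there is 3a, about 0.18)
a0 = 0.05
for _ in range(100):
    a0 = exp(3 * a0 - 3)
t0 = a0 * (1 - a0) ** 2          # the crossing value of t = k/n^3
n0 = (50000 / t0) ** (1 / 3)     # corresponding crossover in n when k = 50000

print(round(a0, 4), round(t0, 4), round(n0, 1))  # -> 0.0595 0.0526 98.3
```

These match the numbers quoted in the lecture: $a_0 \approx 0.0595$, $t_0 \approx 0.0526$, and a crossover near $n_0 \approx 98.3$ for $k = 50{,}000$.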
A few terms of these will give you very high precision. So now I can give you the actual numbers. We did a little numerical experiment, where we took $k$ to be fifty thousand — because, remember, the interesting $n$ is around $k^{1/3}$, and I don't want $n$ to be $3$ or something; it should be a little big. With $k = 50{,}000$, the crucial crossover value $n_0$ — which is the cube root of $k/t_0$, with the $t_0$ I just defined — is actually about $98.29\ldots$. So, from the little table that we made, I'll just give a few of the numbers — there are several, and I won't give them all. Here's $n$; here I'll give $A^{(1)}$ divided by the true value; and here $A^{(2)}$ divided by the true value — so I'm normalizing everything. My $k$ is always going to be $50{,}000$, but my $n$ will vary: we took $80$, $85$, $90$, $91$, $92$, $93$, $94$, $95$ — going one at a time there, because that's where things were actually happening — and then we jumped to $100$. And when you do this — I don't know how many digits there were, but I'll just write a few — you find this first ratio is $1$ to eight or nine digits, and the other is $0.0\ldots$: so indeed everything is completely dominated by $A^{(1)}$. In the next row the first is $0.999999\ldots$, six or so nines, and to make up for it the other is $0.000000$-and-then-something — exactly the complement, so that when you add them, the sum $\bigl(A^{(1)} + A^{(2)}\bigr)$ divided by $A_{n,k}$ is, to the number of digits I'm giving — I think it's eleven — exactly $1.00000\ldots$: all the digits you need. And then, as you move on, the first one is suddenly no longer so dominant: in one row the second is $0.085$; in the next, one of them is $0.45$-something and the other $0.542$; and in the next the first is already down to $0.06$, and the other is the
big one — you see the second domain taking over. The next one — that was $92$, then $93$, not that it really matters — is $0.00\ldots$; maybe I'll just skip to $94$: there the first is $0.000335$, and the other is the complementary number, $0.999\ldots$, a lot of nines. The numbers are in the published paper. And at the final one, $n = 100$, this one, to — I think — eleven-digit accuracy, is exactly $0$, and this one exactly $1$; but the sum, every time, to the full accuracy of the computation, is $1.00000\ldots$. So it's a very amusing thing — a phase change that you can pin down exactly. I'm sure one could prove all of this rigorously; we didn't try very hard. But the asymptotic expansions are well defined, and as I say, although they're not convergent, you can compute them to many digits of accuracy, and in every case you find that it adds up beautifully. The actual cutoff point was somewhere around here, at $98$ or so. When you're well to the left of it, this one is everything and that one is nothing; then this one is still almost everything, the other is a tiny little thing, and the first is very slightly less than one; and when you get a little beyond the crossover, it's the other way around — the first one is zero percent and the other is a hundred percent. But the sum of the two is always exactly a hundred percent. So it's a nice, rather acute example of a phase change in asymptotics; I'm sure one could do it rigorously, but we didn't try very hard — we were just trying to get the actual information. And then the last word on this — the very last word; it's also written in my handwritten notes, but this will be faster than finding them. Remember that in this case $A_{n,k}$ was the coefficient of $x^n$ in $\bigl(\sum_r (r!)^2 x^r\bigr)^k$ — that was the definition — and we know that this is at most some constant $M_k$ times $n$ factorial
squared — so, for any given $k$, we know there is such a bound $A_{n,k} \le M_k\,(n!)^2$, and before, we saw we can certainly choose $M_k = (9/4)^{k-1}$. But now, if we use this new analysis, we find what $M_k$ actually is. And the new result is not even just an upper bound, it's an actual asymptotic formula: $M_k \sim \frac{e^{3k^{1/3}}}{(2\pi)^{3/2}\,k^{1/2}}$. The important factor is that this is exponential in $k^{1/3}$, whereas the old bound is exponential in $k$, so for $k$ at all large the new formula is very much better. And, as I said, it's not just a bound: $M_k$ is actually equal to this times $\bigl(1 + O(k^{-1/3})\bigr)$. So, assuming everything else is true, we actually know by how much the $k$-th power is off from just $(n!)^2$: at most this specific constant $M_k$. So — I've gone four minutes over time, sorry, but that's the end of this. As I said, that ends the first eight lectures of the course, which are a little separate from the other four because of the different time. In the last two weeks, the last four lectures, I'll spend certainly most of the time, maybe all of it, on a — I think — very, very nice example of the circle method, for square partitions in particular, which I'll do in some detail. And if there's any time left — if I don't need all four lectures for that — then I'll either stop early, or end a lecture early, or I'll talk a little about these very, very slowly convergent series, which was the third of the problems on the poster, on the announcement of the course: this kind of crazy sum $\sum_n \frac{1}{n}\sin\frac{x}{n}$, which obviously converges perfectly well for every $x$ — but the question is how you compute it when $x$ is very large, because then it's highly oscillatory and incredibly slowly convergent, so you'd need a huge number of terms to actually get the value, even though it is convergent. So that's all for now, and maybe we should sign off soon, so that the people running the audiovisual can go home — unless there are
questions from anybody? Quick questions? Otherwise, people here can ask questions on Monday, at the beginning of the lecture, if they want. Nobody seems to be asking anything, so then I can tell whoever's listening that we can finish — we can log out.