Thanks, got it. Okay, thanks, Mike, and thanks to everybody for coming. It's still a little early here on the West Coast, so I'm still waking up, but here we go. I've been part of a group of twelve people that has been meeting every Thursday, actually at about this time, so we're not meeting today; I think everybody is probably here. This is the group that's been thinking about the things I'm going to talk about, and I want to especially mention Sieg Baluyot, who I've actually been meeting with in person, believe it or not, for quite some time now. I can't even remember exactly how long, but we meet at AIM; we had been wearing our masks, and now we're not wearing them anymore, and it's been great. So I want to talk to you about moments and ratios of L-functions, families of L-functions, and how that all looks in random matrix theory. Random matrix theory has, I have to say, become essential in analytic number theory for studying statistics of L-functions. When it first started, people were slightly negative about it, saying, oh, you're never going to prove the Riemann Hypothesis that way. Well, nobody thinks that we are, but as a tool for handling the complicated combinatorics that arise in studying these statistics, arithmetic statistics if you like, statistics of L-functions, it's pretty indispensable. Okay, so I'm going to try to do this on my iPad and make that a little bigger. I think I start off all my talks the same way. We know the second moment of the zeta function, and we know the fourth moment: Hardy and Littlewood back in 1918, and Ingham back in 1926. That's almost 100 years since we've gotten any new moments of the zeta function; we still don't have a theorem for the sixth moment. Back in 1980,
I was a finishing graduate student at the time and got to spend the year in Cambridge with my advisor, Hugh Montgomery, who was on sabbatical there, and I went to a number theory conference in Exeter, my first conference. On an outing to the moors, Exmoor or Dartmoor, I forget exactly where, I was talking on the bus to Heath-Brown, and he said, you know, we don't even know a good conjecture, or even any conjecture, for moments of the zeta function. That was, well, 41 years ago, but I got interested in the question. And in 1992, Ghosh and I actually figured out a conjecture for the sixth moment of the zeta function. It involved this convergent product over primes, the arithmetic factor we call a_k. And we knew the power of the log was going to go up like k² for the 2k-th moment; here k is 3, and 3² is 9, so you get the ninth power of the log. But the mysterious part was this 42, and that's a little surprising because it was so big: 1, 2, 42. Well, six years later, with Steve Gonek, we figured out a conjecture for the eighth moment, and now you get an even bigger number, 24024, with the log to the 16th. Our method with Gonek involved looking at divisor correlations, d_k(n) d_k(n+h), where d_k is the k-th divisor function; for this one we were looking at the average of d_4(n) d_4(n+h) for n up to X, or something like that, and we were able to figure out a conjecture from that. But when we tried to apply that same technique to the tenth moment, instead of a nice answer we got a negative number. It wasn't even an integer, and it was negative, so that was, in some ways, the end of that part of the story. Along came Keating and Snaith, at essentially the same time, exactly the same time really.
And they figured out how to average the characteristic polynomial, right here at the point 1, over the unitary group U(N) of N×N unitary matrices, the 2k-th moment, and calculated an exact formula for it, and then said, well, we should think of N as being log T. If you look at this as a power, it's N^(k²): this is asymptotic, as N goes to infinity, to some constant times N^(k²) over (k²)!. And that constant is exactly this, which, if you calculate it, gives exactly those same numbers that we had for the moments: 1, 2, 42, 24024. And, in case you're interested, the fifth one, the next one, will be 701,149,020. So they made the conjecture that, in fact, these g_k should give you the 2k-th moment of the zeta function. And, you know, that's after Montgomery and Dyson, with the pair correlation and the n-correlation, had figured out that the zeros of the zeta function are distributed like eigenvalues of large Hermitian or unitary matrices; the compact group is maybe easier to think about. So their logic was, well, I can't say exactly, but: if the eigenvalues are distributed like the zeros, then maybe the values of the characteristic polynomials are distributed like the values of the zeta function. Now, we actually wanted to go a little bit further than this and figure out the lower-order terms. But if you look at the lower-order terms of, say, the fourth moment, they're quite involved and quite complicated. The one thing is they do only involve γ, γ_1, and γ_2, the Stieltjes constants in the Laurent expansion of ζ(s) around s = 1, and they also involve derivatives of ζ at 2. So that's an explicit formula, but the idea of trying to take that further was daunting.
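Those constants can be computed directly: g_k is (k²)! times the Barnes G-function ratio G(k+1)²/G(2k+1), and that ratio collapses to a finite product of factorials. A small sketch in exact rational arithmetic (the closed form for the product is standard; the code is my own illustration):

```python
from fractions import Fraction
from math import factorial

def g(k):
    # g_k = (k^2)! * G(k+1)^2 / G(2k+1)
    #     = (k^2)! * prod_{j=0}^{k-1} j! / (j+k)!   (Barnes G ratio as a product)
    r = Fraction(factorial(k * k))
    for j in range(k):
        r *= Fraction(factorial(j), factorial(j + k))
    assert r.denominator == 1   # g_k is always an integer, which is not obvious
    return int(r)

print([g(k) for k in range(1, 6)])   # [1, 2, 42, 24024, 701149020]
```

The assertion that the denominator is 1 reflects the nonobvious fact that these g_k come out to be integers.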
But in fact we were able to, well, I say solve this problem, but it's not really solved, it's just a conjecture; we were able to figure out something that made sense. This is CFKRS: Conrey, Farmer, Keating, Rubinstein, and Snaith. We call this the recipe, and it's a way to write down a conjecture for moments in basically any family. So this is just what happens in the zeta function case. We have some test function ψ, let's say compactly supported, a C-infinity function if you like; sorry, this integral is over the reals, and s here is 1/2 + it, so we're integrating up the half-line. So we have two sets of shifts: a set A of complex numbers and a set B of complex numbers. These should have small real parts, like less than 1/log T; the imaginary parts could be quite large, maybe as large as even a power of T. And then you see the sort of most general kind of moment that you might look at for the zeta function. If A and B have size k and you let all the shifts go to zero, then you're just looking at the 2k-th moment. Okay, and in some sense the answer can be written down, kind of amazingly, in an amazingly compact form here. It's just the same integral again over the test function, but now you sum over subsets U and V of A and B of equal size, with a factor (t/2π) to the power minus the sum over α in U of α minus the sum over β in V of β; I'll just abbreviate that by t to the minus U. And then you have this function B, which has an analytic continuation: it's the sum of τ_A(m) τ_B(m) over m^s, and τ_A is just the coefficient of that product of all the shifted zetas. So it's a convolution of the function 1 shifted by the little alphas.
Okay, and we tested this numerically all the way up to, I think, the 28th moment or something like that. We actually think the error term is of square-root size, T^(1/2+ε). We conjectured that in general for all our families, but it turns out that for symplectic families there are definitely some cases where that square-root error term is not correct; there are some extra main terms that come in from things like averaging quadratic L-functions. Okay, so, for example, this is what you would get for the sixth moment. Now, the actual coefficients of this P_3 for the sixth moment, a degree 9 polynomial here, are not nice numbers: they're infinite products and sums over primes that you can't evaluate in terms of any zeta functions, so what happened for the fourth moment doesn't happen here. But that's an example, and you can test this just using Mathematica; it's accurate enough that even just integrating over a small range, if you put in these numbers, you'll get something surprisingly accurate. So we definitely believe it's correct. And there's an analog of this in random matrix theory, where it's an actual theorem, an exact analog. So let's say A is an N×N unitary matrix, and we're going to use Λ as notation for the characteristic polynomial. It's defined slightly differently than you normally would: Λ_A(s) is the product over n of (1 − s e^(−iθ_n)), where the e^(iθ_n) are the eigenvalues. So when s is an eigenvalue this is zero; the characteristic polynomial is usually written a little differently, but that's how we write it. So we have a nice functional equation, which is like the functional equation of the zeta function in some sense. And here is the random matrix analog of the recipe.
So your basic function, instead of the zeta function, is z(x) = (1 − e^(−x))^(−1), and capital Z(A,B) is the product over α in A and β in B of z(α+β). Yeah, and I should say that the thing we called B(A,B) before, which was the sum of τ_A(m) τ_B(m) over m^s, sorry, I should have written this down, is really the product over α in A and β in B of ζ(1+α+β), times some arithmetic factor, a product over primes of an Euler factor. So the point is that you have this ζ(1+α+β): ζ(1+x) has a pole at x = 0 of residue 1, and the same thing holds for z(x). So the theorem is that this average of shifted characteristic polynomials, where the shifts now appear as exponentials, is pretty much the same formula that I wrote down before. You take a subset S of A and a subset T of B, and you kick out of A all the things in S and add in the negatives of all the things in T. Okay. So if the size of S is zero, then you just have Z(A,B); that would be the first term, in a way, and we call that the zero-swap term. And if the cardinalities of S and T are one, then you're taking all the singletons in A and all the singletons in B: you remove a singleton from A and replace it with the negative of a singleton from B; that's what this means. Okay, are there any questions? All right. So the fact that this is a theorem in random matrix theory and matches exactly what we have in our conjecture in number theory is encouraging that the number theory conjecture may be correct. Now.
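For what it's worth, when all the shifts are set to zero, this theorem reduces to the classical Keating–Snaith exact formula for the 2k-th moment over U(N), and its N^(k²) leading behavior, with the same constants g_k, can be checked numerically. A small sketch (the product formula is standard; the parameter and tolerance choices are mine):

```python
from math import exp, factorial, lgamma, log

def log_cue_moment(k, N):
    # log of the Keating-Snaith exact CUE moment:
    #   average over U(N) of |Lambda_A(1)|^(2k)
    #     = prod_{j=0}^{N-1} j! (j+2k)! / ((j+k)!)^2
    return sum(lgamma(j + 1) + lgamma(j + 2 * k + 1) - 2 * lgamma(j + k + 1)
               for j in range(N))

# sanity check: for k = 1 the product telescopes to N + 1
assert abs(exp(log_cue_moment(1, 500)) - 501) < 1e-4

# leading order as N -> infinity: g_k * N^(k^2) / (k^2)!  with g_k = 1, 2, 42, ...
g = {1: 1, 2: 2, 3: 42}
N = 1000
for k in (1, 2, 3):
    ratio = exp(log_cue_moment(k, N) - k * k * log(N)) * factorial(k * k) / g[k]
    assert abs(ratio - 1) < 0.05   # converges like 1 + O(1/N)
```

Working with log-gamma rather than exact factorial products keeps the integers from blowing up while still giving plenty of accuracy for the asymptotic check.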
Okay, so I want to shift to moments of long polynomials. This is a technique that Gonek and I were using when we were getting the eighth moment; I did something a little different for the sixth moment, but then we recovered the sixth moment from this too. Goldston and Gonek have a paper on how to average long Dirichlet polynomials over t. So X here is the length, and when I say long, I'm talking about how far X goes: X, let's say, is maybe T^η, and long would be η bigger than one. If η is less than one, then you have the theorem of Montgomery and Vaughan, an approximate Parseval formula or something like that, that the only thing that survives the integration is the diagonal term. What Goldston and Gonek did: if η is bigger than one, then you should add in what you get from the off-diagonal terms. Okay, and these look like this; if you organize them by the difference h between m and n, then you have something like this, and the side with the log(m/n) is essentially just t times h/(2πm), so this is nice and accurate. And so, yeah, to average a long polynomial you need information about these coefficient correlations. Okay, and that's where, as I was saying, we were looking for information about sums over n up to X; for the eighth moment it'd be d_4(n) d_4(n+h). So nobody knows how to do this asymptotically, even when that 4 is a 3: for d_3(n) d_3(n+h) we're completely stuck and do not know how to evaluate it. And we actually only needed it on average, on some kind of average over h, but we can't do that either. If we could, we could do the sixth moment; in some sense that's the sticking point. But we have conjectures, using the circle method or the delta method.
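To make the objects concrete, here is a small illustration of my own (not the actual Goldston–Gonek machinery): sieve the k-th divisor function d_k by repeated Dirichlet convolution of 1 with itself, and form the correlation sum whose asymptotics nobody knows how to prove.

```python
def divisor_k(k, X):
    """d_k(n) for n <= X, built by k-1 Dirichlet convolutions of 1 with itself."""
    dk = [0] + [1] * X          # d_1(n) = 1 for all n
    for _ in range(k - 1):
        new = [0] * (X + 1)
        for d in range(1, X + 1):
            for m in range(d, X + 1, d):
                new[m] += dk[d]  # (dk * 1)(m) = sum over divisors d of m of dk(d)
        dk = new
    return dk

X, h = 10000, 1
d3 = divisor_k(3, X + h)
# the shifted convolution sum  sum_{n <= X} d_3(n) d_3(n+h):
corr = sum(d3[n] * d3[n + h] for n in range(1, X + 1))
print(corr)
```

Computing such a sum is trivial; the open problem is its asymptotic behavior as X grows, even on average over h.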
You can figure out conjectures for this. So we know it conjecturally, and then you input that into, say, the Goldston–Gonek mechanism, and out come those moments, the sixth moment and the eighth moment; that's how you do it. So then we were looking for the tenth moment. Well, we didn't know for a really long time; in some sense we still don't really know, but we have taken some steps in that direction. But I want to point out a sort of phenomenon that happens with these moments. The second moment of zeta, say, you can get with just the Montgomery–Vaughan diagonal kind of reasoning, in a sense. When you do the fourth moment, you really need to bring in these divisor correlations. And when you do the sixth moment, you're going to need something else. So what happens, if we're looking at the 2k-th moment, is that as k advances through the integers, k = 1, k = 2, k = 3, whatever, new terms enter into the asymptotics of that moment. And it's kind of a funny phenomenon, and it's like, well, where are the new terms going to come from, and what's causing them? We see this in other families too. For moments of cusp form L-functions, you use, say, the Petersson formula: initially you detect just the diagonal terms, but then you have the other piece that involves the Kloosterman sums, and maybe the Kloosterman sums that degenerate to Ramanujan sums give the next term if you're looking at, say, a third moment. And then when you look at the fourth moment, you get off-off-diagonal terms, as Kowalski, Michel, and VanderKam called them. But anyway, in whatever family of L-functions you're looking at, you have this phenomenon: as you look at higher and higher moments, there are new sources of main terms coming in.
And so we can quantify that a little bit, and that's sort of what this talk is about: we don't know exactly where those new main terms come from, but what do they look like? So we're thinking of doing the 2k-th moment of zeta by just looking at a super-long polynomial, a large X: ζ(s)^k is the sum of d_k(n) over n^s, so you're looking at a long Dirichlet polynomial approximation to ζ^k, and X is going to be, well, I said T^η before, so now it's T^α. And, well, if you were doing this for some other L-function without a pole you wouldn't have to worry about this term, but because of the pole of zeta you need to subtract this thing off. And we want to know about the mean square of that. If we scale things right, in other words, divide out the (log T)^(k²) and the arithmetic constant a_k and the (k²)!, we are left, conjecturally, with a function that we call M_k(α). For k = 2, the fourth moment, these are the polynomials you get. And then if you graph them, you get this nice, seemingly completely smooth thing. It is smooth, but it has transitions: there's a change here at α = 1, and then a change at α = 2, and lots of derivatives are equal at those points, so it is very smooth. It transitions from 0 up to 2, and then it stays at 2. In this case, once your X is bigger than T², you've captured all of ζ², so you don't need anything else to get the fourth moment, and you get that 2 in the 1, 2, 42 list; that's the 2 right there. Okay, and here's M_3; you can work out what that is, we can work any of these out, but yeah, they get complicated. Let's see. Well, so there's a picture of M_3 and M_4, and like I said, they're very smooth.
This one, M_3, goes up to 42; M_4 goes up to 24024, transitioning from 0 up to 24024 through these piecewise polynomial chunks. And then we're calling these M_k(α). Okay, and so there's an analog in random matrix theory of these, and I want to describe that. Okay, so we let the characteristic polynomial of a random matrix have coefficients that are secular coefficients; that's what they're called, so Sc_j is the j-th secular coefficient, kind of a strange notation. And how does that work? Well, Diaconis and Gamburd have a very nice formula for averaging products of these secular coefficients. Basically, if you have equal sums of the indices, and those sums are at most N, then the answer to this integral is the number of k×l matrices of nonnegative integers with row sums j_1 through j_k and column sums h_1 through h_l; kind of a nice little combinatorial thing that they've done. Now, the connection between long polynomials and secular coefficients is this: this long polynomial for zeta should correspond to the average of products of secular coefficients with the indices j_1 + … + j_k summing up to αN. Now, the Diaconis–Gamburd theorem really only works for α less than or equal to one. And so we're interested in what happens beyond that: in random matrix theory, extending Diaconis–Gamburd to α bigger than one. And yeah, by the way, Sandro assumes the recipe, the CFKRS conjecture for moments, and then proves this theorem. Now, I'm going to show that in fact you can just set these things equal to each other and take the modulus squared, like this; we call this I_k(m, N).
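The right-hand side of the Diaconis–Gamburd formula is purely combinatorial, so it is easy to experiment with. A brute-force sketch of my own for counting nonnegative-integer matrices with prescribed row and column sums:

```python
from itertools import product

def count_matrices(row_sums, col_sums):
    """Number of nonnegative-integer matrices with the given row and column sums."""
    if sum(row_sums) != sum(col_sums):
        return 0
    cols = len(col_sums)

    def rows_with_sum(s):
        # all ways to write s as an ordered tuple of `cols` nonnegative integers
        return [r for r in product(range(s + 1), repeat=cols) if sum(r) == s]

    total = 0
    for choice in product(*(rows_with_sum(s) for s in row_sums)):
        if all(sum(col) == t for col, t in zip(zip(*choice), col_sums)):
            total += 1
    return total

# e.g. the average of |Sc_2(A)|^4 over U(N), for N >= 4, should equal
# the count for row sums (2,2) and column sums (2,2):
print(count_matrices([2, 2], [2, 2]))   # 3
```

The three matrices in that last count are [[0,2],[2,0]], [[1,1],[1,1]], and [[2,0],[0,2]].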
Keating, Rodgers, Roditty-Gershon, and Rudnick have studied this quite extensively; they have a very nice paper on all of it, and they evaluate it in a range beyond the diagonal range. And what they prove is this very nice theorem, that these I_k's are equal to this quantity γ_k(c) times N^(k²−1), plus that big O. So I think I mentioned that that group of twelve is called the gamma-k-c group; well, that's because we're studying phenomena related to γ_k(c). And it has a nice integral formula: G is the Barnes G-function, and Δ here is the Vandermonde. And in fact this γ_k is related to the M_k's that I was just talking about: it's basically a derivative. And the nice thing... I think I got my slides out of order. Oh no, oh, sorry, I guess that's okay. Yeah, all right. Random insertion of a picture here: this is the moment polynomial, the sixth moment basically, evaluated at the little shift x of the characteristic polynomials, with the matrix size N = 25. You get this polynomial, and these are its zeros. Kind of a cool pattern; I'm going to say more about that later. All right, sorry, back to where I was. So what KRRR, and also Rodgers and Soundararajan, have worked out are arithmetic instances of γ_k(c). And one of them is about the divisor function in short intervals, a subject that Steve Lester and many others have studied for quite some time: you look at d_k(n), average it over a short interval, and then you've got these two parameters X and H, which are related to each other in some way. And in fact, what KRRR says is that the way they should be related is that log X divided by log(X/H) should approach c, and then you should get this γ_k(c), which is itself a piecewise polynomial function, smooth, that comes up in, you know, moments of characteristic polynomials. And so.
And you can prove that, in fact, the recipe implies this; that's in a paper of Sandro's and mine. Oops. And the same thing holds for the divisor function in arithmetic progressions. So you sum d_k(n) over n congruent to a mod q, sum over the a's and q's, and take this variance. And now X and q are related by log X over log q equal to c, and the behavior of this also gives you γ_k(c). So what happens is that as this ratio log X over log q, that's your c, gets larger and passes through integers, you get a change in the behavior of this divisor function in arithmetic progressions, and this γ_k(c) describes exactly how that change in behavior works. Okay, I see I'm going way too slow. I mentioned, this is another theme I'm very fond of, that Basor and Rubinstein found a Painlevé equation that governs γ_k(c). Painlevé equations are maybe a little obscure to number theorists, but I think they're really important, and they're becoming more and more present, especially in this arithmetic statistics; they are a big staple of random matrix theory, and so I like to encourage thoughts along Painlevé lines whenever possible. And basically they found a determinantal expression for γ_k(c), in terms of what you might call a sort of double Wronskian. And it turns out there's a formula of Lewis Carroll about these double Wronskians, a recursion formula for these things. You can apply that and get a recursion formula for the γ_k(c)'s, and that's typical of a Painlevé differential equation: you get recursion formulas, and that allows you to calculate much, much further than you otherwise could. So if nothing else, that's the reason to find a Painlevé equation: you can do a lot more calculations. Okay. Now, you can do this for any L-function: you take any single L-function and look in the t-aspect, and you can do exactly the same thing.
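The Lewis Carroll formula is the Desnanot–Jacobi identity (Dodgson condensation), exactly the kind of recursion that lets a determinant of a given size be rebuilt from smaller minors. A self-contained check on a generic integer matrix of my own choosing:

```python
def det(M):
    # cofactor expansion along the first row; exact for integer matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor(M, rows, cols):
    # delete the listed rows and columns
    return [[x for j, x in enumerate(r) if j not in cols]
            for i, r in enumerate(M) if i not in rows]

M = [[2, 7, 1, 8],
     [2, 8, 1, 8],
     [2, 8, 4, 5],
     [9, 0, 4, 5]]
n = len(M) - 1
# Desnanot-Jacobi:  det(M) * det(inner minor)
#   = det(drop row/col 0) * det(drop row/col n)
#   - det(drop row 0, col n) * det(drop row n, col 0)
lhs = det(M) * det(minor(M, {0, n}, {0, n}))
rhs = (det(minor(M, {0}, {0})) * det(minor(M, {n}, {n}))
       - det(minor(M, {0}, {n})) * det(minor(M, {n}, {0})))
assert lhs == rhs
```

Applied repeatedly, the identity computes an n×n determinant from (n−1)×(n−1) and (n−2)×(n−2) ones, which is the shape of recursion being used for the γ_k(c)'s.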
So you can study the moments of just one primitive L-function, let's say in the Selberg class. Then the moments are basically going to behave the same way as the moments of the zeta function: you get exactly the same g_k's from Keating and Snaith, you have a recipe for those, and you can do all the same stuff. You have the same γ_k(c)'s, and that relates, then, to the following: if you take the convolution of the coefficients of your L-function and look at them either in short intervals or in arithmetic progressions, then you're going to get the same γ_k(c) behavior. Definitely. Okay. All right, so there were two things I wanted to do. And one is that there's this kind of amazing, well, surprising factorization that happens for moments of characteristic polynomials. So, well, let me explain that. First of all, it's useful in our moment formulas to change everything into polynomials, or rational functions. Before, our shifts appeared as e^(−α); now what we want to do is make a change of variable so that e^(−α) is just a variable a. Then in all these formulas, so this is now the formula for moments, the Z is just a rational function, and even though these denominators appear, the whole expression is in fact a polynomial. So I wanted to write down the moment formulas for all three groups, unitary, symplectic, and orthogonal matrices, in this notation. And so now we have sort of three different Z functions: the Z for unitary, the Z for orthogonal, and the Z for symplectic. And one thing you might notice is that Z_O times Z_S is actually equal to Z_U of, in this case, (x, x). And that's kind of a hint of the factorization we're going to find. But for the symplectic group:
So here's a product of shifted symplectic characteristic polynomials, averaged over the symplectic group Sp(2N), and you just get this formula with the Z_S thing, and it's (X minus T) plus T-inverse: that means take your subset T of X, and this is just T-bar, the complement, and then add in the inverses, the 1/t's. Before it was negatives, but now, with this multiplicative rather than additive notation, you throw in the 1/t's, okay. And so these are how to average characteristic polynomials over all three of these groups. Let's say you were looking at the unitary group, and you just had these four shifts, A and B, and also A and B. This is the formula you get; you can calculate this in Mathematica or whatever, and that's what you get for that matrix. And if you take m = 2, for example, you get: it's a polynomial; whatever m is, this thing boils down to a polynomial even though there's this denominator here, and it factors. m = 3: it factors. And for m = 4: a factorization. And it turns out that in this factorization, the first factor is the fourth moment for the symplectic group, and the second factor is the fourth moment for the orthogonal group. And this is true; it's a theorem. I showed it to Estelle Basor, and she figured out a proof of it using Toeplitz and Hankel matrices. And so that's kind of cool. And you can use that, then, to get a relationship between the γ_k(c) functions: you can define a γ_k(c) for unitary, which is what we've done, and you can also do it for orthogonal and symplectic. Oops, I'm not writing it down. Well, it's a convolution: basically the unitary γ_k(c) is a convolution of the symplectic and orthogonal ones. I think I have it written down later, which, okay, I'm clearly never going to get to, okay.
So here's, yeah, here's the theorem: the unitary moment of size m, if you put in A and A, so you have to take B = A, it doesn't work for general A and B, but if both of these are the same, A and A, then that factors into a symplectic moment and an orthogonal moment. So I don't know what that means for L-functions. It's as if the 2k-th moment of the Riemann zeta function were somehow related to the k-th moment of, you know, quadratic L-functions times the k-th moment of cusp form L-functions; that would be a symplectic family and an orthogonal family. Oh yeah, so here's, yeah, okay, here's the convolution formula. It works like that. Okay, so, all right, I'm going to have to skip this stuff. So one of the things that comes out of these recipe formulas, the recipe is actually really powerful, is this way of writing these averages of characteristic polynomials so that they match zeta functions. It's not something the random matrix theorists would have thought of, because they're not thinking about zeta functions. So this is sort of a new ingredient in random matrix theory that comes from number theory. It's like, well, we're trying to do everything so that it matches up with the way that we do moments in number theory, which is: you take some approximate functional equation, you have some averaging formula, and you average term by term. And random matrix theorists obviously don't do it that way. And so our recipe, the CFKRS thing, is sort of a new ingredient, and it turns out it's really powerful, and in fact it gives a simple proof of the Diaconis–Gamburd theorem that I mentioned earlier. And I also said that that was only for a very limited range, and in fact Sieg has worked out that you can do this in any range, so he has formulas for Diaconis–Gamburd in any range whatsoever, not just sort of the first range. I was going to give you a proof; we'll skip that, because I want to talk about ratios.
I do all this stuff not just for moments of L-functions but for averages of ratios of products of L-functions. And these are the things that you need to study zeros: if you're doing zero correlations or whatever, you do ratios. And it turns out that the formulas for ratios are actually very similar to the ones for moments. We have the same Z function as before, the Z of two variables, but now there's a four-variable version: Z(A,B,C,D), which we want to be Z(A,B) Z(C,D) over Z(A,D) Z(B,C). Okay, and then the thing that we're going to average is a ratio of products of characteristic polynomials, with shifts, either of Λ or of Λ*. And the formula looks just the same, except there's this extra C, D that just tags along, and it's defined by this. So, for example, if you just took one letter for each of these sets, then this average turns out to be exactly this thing. Well, okay, so we want to do the exact same thing we did before: look at this in the case where, let's say, you have A, A and C, C, and it factors. So the ratios factor also. And these are the formulas for how to average ratios over the symplectic group, and this is the Z_S(A; C), which is, curiously, Z_S(A) Z_O(C) over Z_U(A, C). And it turns out that the rational functions you get are exactly the first factor in each of these things. And when you do the same thing for the orthogonal group, you end up with a factorization: the ratio average for unitary factors, in the case where in the numerator you take A and B to be the same set and in the denominator you take C and D to be the same set, and then you get this factorization. And so we can prove that also. Okay. All right, let's see. Okay, so, somehow, okay, there's a y missing here; there should be a y in this formula. Okay, so:
So one of the things that I think would be really great to understand is moments of the logarithmic derivative, ζ′/ζ, shifted off the line by a little bit, and the 2k-th moment of that. By all rights there should be something good about that: the coefficients are sort of supported on almost-primes, and if you could figure out averages of this, then you could do things like averaging almost-primes in short intervals. And, okay, I realize I've got like a million little formulas everywhere, okay, but so. I've been studying these moments of logarithmic derivatives of characteristic polynomials and trying to understand the structure of those. And, okay, I'm not going to bore you with all the details, but we can calculate these up to a certain point, and we can figure out lots of properties of these things. But I just want to show a couple of pictures. So this is the zeros of that average; so when there's a denominator, that's obvious, you get rid of the denominator, well, I guess it doesn't matter, you're just looking at zeros. That's the fourth, sorry, the eighth moment, with N = 75. This is what the zero set looks like. And we saw a kind of similar concentric zero set earlier with the moment polynomials. There, there were three rings, and that was a sixth moment of a characteristic polynomial. That formula has sort of three swaps in it, so those T sets could go up to size three, and that's why we think there are three rings there. And in this one, there are just two swaps; the swaps somehow come later, like there aren't going to be three swaps until you get up to the 18th moment. But if we could calculate the 18th moment of this logarithmic derivative, we believe that if you looked at the zero set for a large N, you'd see, you know, another ring around this thing. Now, here's a picture of zeros of, well, some of the Painlevé equations have polynomial solutions.
And these Painlevé solutions come in families: you have a parameterization, and you have families, and so you'll have a sequence of polynomials. And again, when you look at those zeros as the parameter gets larger and larger, you get these beautiful patterns that are kind of concentric: ever larger sets of zeros of the same shape. And so people have known that about Painlevé polynomial and rational solutions for a long time. So what I want to claim is that, in fact, the zero sets that we're seeing here for these moments of characteristic polynomials and moments of logarithmic derivatives indicate that there's a Painlevé equation, or a recursion formula at the very least, hiding behind these things, waiting to be discovered, and that is one of the things that we would like to do. Okay. I think I'm out of time, so I'm going to... Oh, the other thing I had was a proof of Diaconis–Shahshahani. You may not have heard of it, but it's a pretty famous result in random matrix theory that involves averaging products of powers of traces of unitary matrices. And again, that only holds for a very limited range, way at the beginning. And what happens is there's actually a formula that Nina Snaith and I found a number of years ago, when we were looking at the n-correlation of zeros of the zeta function, trying to find the lower-order terms, specifically for the n-correlation of zeros of the zeta function with all the arithmetic factors and everything. Anyway, it turns out you can use that, and again you have the sort of swap terms: the zero-swap is just when S and T are empty, the one-swap would be when S and T are singletons, the two-swap, you know, when there are two-element sets. So we can give a proof of Diaconis–Shahshahani directly from this formula just by looking at the zero-swap term, and it's kind of cute, but I've used up my time, I think.
Okay, there's some references, and I can put the slides up, if that's... I don't know if you post the slides or not. If you're just watching the video, it's not going to help you to look at all those slides I skipped. Okay. Thank you for listening. And that's all.