Thank you very much for the introduction. I will talk about one of my favorite topics, which is character sums, and especially real character sums: sums of plus and minus ones defined by characters. I will present two results, from two papers with Oleksiy Klurman from Bristol and Youness Lamzouri from Nancy. So let's start. The object we look at is the following: we take a character modulo q, say with q prime, and we look at partial sums of its values. The classical example is the Legendre symbol: you take a prime p and the map which is +1 if n is a square modulo p and -1 if n is not a square modulo p, and you study the averages of these quantities up to some large N, to see whether you have cancellation in this sum. Obviously the sum is bounded by N, and we expect some randomness in the values chi(n): we expect them to distribute randomly, so for N large enough, roughly half of the time +1 and half of the time -1. If you want to show this, you want to show that this sum is little-o of N for N sufficiently large; the important question, as N ranges up to p, is how large N should be to ensure this randomness in the character values. The first classical result in number theory on this is that these sums are bounded by the square root of p times log p; this is the Pólya–Vinogradov bound. It is a simple Fourier argument, which shows essentially that once your sum has more than square root of p terms, you get half of the time +1 and half of the time -1. This was the first result in this direction, and then people tried to show that these sums cancel over shorter intervals. Indeed they could push it a bit: Burgess showed that you don't need intervals larger than square root of p; you can take something a bit bigger than p to the one quarter plus epsilon.
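The cancellation just described is easy to check numerically. Here is a minimal sketch (illustrative, not from the talk): it computes the Legendre symbol via Euler's criterion and verifies the Pólya–Vinogradov bound sqrt(p) log p for one prime; all function names are my own.

```python
# Minimal numerical sketch: partial sums of the Legendre symbol mod p,
# checked against the Polya-Vinogradov bound sqrt(p) * log(p).
from math import log

def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p, via Euler's criterion."""
    r = pow(n, (p - 1) // 2, p)
    return -1 if r == p - 1 else r  # r is 0 or 1 in the other cases

def max_partial_sum(p):
    """max over N <= p of |sum_{n <= N} (n/p)|."""
    s, best = 0, 0
    for n in range(1, p + 1):
        s += legendre(n, p)
        best = max(best, abs(s))
    return best

p = 1009  # an odd prime
assert max_partial_sum(p) <= (p ** 0.5) * log(p)
```

In practice the maximum is far below sqrt(p) log p, consistent with the expected square-root cancellation.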
Up to now this is essentially the best result, apart from some small improvements: we cannot show cancellation over shorter intervals unconditionally, without any hypothesis. This is related to the problem of the least quadratic non-residue. You take a prime p and you look at the first time you hit -1: maybe all the first numbers n have Legendre symbol equal to +1, and you look at the first time you hit -1. Using the bound I just mentioned, cancellation over intervals larger than p to the one quarter, you can use a trick of Vinogradov to push it down a bit, by a factor of square root of e in the exponent, so that the least quadratic non-residue satisfies this bound. The trick is just that if the symbol is +1 on, say, the first k primes, it is also +1 on all smooth numbers built from these prime factors; so you can push the argument a bit and obtain this bound on the least quadratic non-residue. Then there is the famous conjecture of Vinogradov, which says that in fact we expect cancellation over much shorter intervals: if we take the modulus to any small power, there should already be cancellation in the character sum, and we should be sure to find quadratic non-residues that are very small compared to the modulus. This is really open up to now. But if we assume the Riemann hypothesis for the Dirichlet L-function, Ankeny proved that we even have a very strong, logarithmic bound on the least quadratic non-residue; this strong bound is used in some algorithms. So unconditionally we have this rather large power one quarter, and conditionally we have a logarithmic bound.
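The least quadratic non-residue is also easy to experiment with. A small sketch (illustrative, not from the talk; the helper name is my own): it finds the smallest non-residue by trial and compares it with a small power of p, in the spirit of the bounds just discussed.

```python
# Sketch: the least quadratic non-residue n_p for an odd prime p,
# computed by trial using Euler's criterion.
def least_qnr(p):
    """Smallest n >= 2 with Legendre symbol (n/p) = -1, for odd prime p."""
    n = 2
    while pow(n, (p - 1) // 2, p) != p - 1:
        n += 1
    return n

# For these primes the least non-residue is tiny compared with p^(1/4),
# in line with the conjecturally logarithmic behavior.
for p in (101, 1009, 10007):
    assert least_qnr(p) < 10 * p ** 0.25
```

For instance, for p = 1009 the least non-residue is 11, while p to the one quarter is already about 5.6.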
So we will look at some related problems, though not directly at the least quadratic non-residue: we will look at sign changes of character sums. If you look at the character sum for a quadratic character, you don't take only the Legendre symbol: you can take any fundamental discriminant D, take the real primitive quadratic character associated to D, and form the partial sums. Then you look at when this sum changes sign. Of course, changing sign often means that you hit many -1 values, because you can draw this as a random walk: when you have +1 you go up, when you have -1 you go down, and you look at how many times you cross the x-axis, hitting zero, going up again, going down, et cetera. You take the first, say, X values of this sum, summing up to 1, up to 2, up to 3, et cetera, and you want to understand how many times these partial sums change sign. This goes back to a very old question of Chowla and Fekete, who asked, for general discriminants: can we show an infinite number of sign changes as the discriminant goes to infinity? Can we show it for some D? For all D? For almost all D? Can we find some discriminant for which the sum never changes sign, so that the random walk stays above the x-axis and there is no n at which the sum crosses zero? Many questions on this problem are open. And if we show that there are, for instance, infinitely many sign changes, can we quantify the number of these sign changes? Can we say: if I have a discriminant D and I look at the first D sums, then maybe I will have D to the one quarter sign changes?
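The walk picture just described can be sketched directly. Here is a small illustrative example (not from the talk; names are my own) counting axis crossings of the Legendre-symbol walk for a single prime.

```python
# Sketch: the partial sums of the Legendre symbol seen as a walk,
# counting how often the walk crosses the x-axis.
def legendre(n, p):
    r = pow(n, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def sign_changes(p):
    """Sign changes of N -> sum_{n<=N} (n/p), ignoring stretches at zero."""
    s, last, changes = 0, 0, 0
    for n in range(1, p + 1):
        s += legendre(n, p)        # +1: walk goes up, -1: walk goes down
        cur = (s > 0) - (s < 0)    # current sign of the partial sum
        if cur != 0:
            if last != 0 and cur != last:
                changes += 1
            last = cur
    return changes

# For p = 1 mod 4 the symmetry S_{p-1-N} = -S_N forces at least one crossing.
assert sign_changes(13) >= 1
```

For p congruent to 3 mod 4 the small examples (p = 7, 11) never dip below the axis, which is exactly the kind of "walk that stays positive" behavior asked about above.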
And can I say something perhaps stronger, related to what I said before: can I localize where these sign changes happen? Can I have sign changes at the beginning of the interval? For instance, in the Vinogradov range, if I take D to the epsilon and look at the first partial sums, am I sure the sum already changes sign there, for most discriminants D? This is really related to the distribution of quadratic residues and non-residues modulo some prime, or modulo some discriminant D. Before stating the results we obtained on this problem, let me talk about a related question: the probabilistic model for this problem. A probabilistic model comes from the beginning of the twentieth century. Wintner introduced a famous model for the Möbius function: he wanted a good random model for it, and he used what are called Rademacher random variables. How does this work? You take a sequence of random variables indexed by the primes, and you just ask that the probability of being +1 equals the probability of being -1, so one half, on the primes. But we are interested in multiplicative functions: like a character, the Möbius function is multiplicative, so it's not a completely independent situation; it is independent only on the primes. We then extend our function to all integers and define what is called a Rademacher random multiplicative function, extending the values to any n by multiplicativity: for every squarefree integer n we define f(n) as the product of the f(p) over the primes p dividing n, and we put zero on non-squarefree integers. Clearly f is a random multiplicative function, because f(mn) is f(m) times f(n) for coprime m and n, as I defined it as a product of the f(p) over the primes.
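Wintner's construction is short enough to write out. A minimal sketch (illustrative, not from the talk; function names are my own): draw independent signs on the primes, extend multiplicatively to squarefree n, and set zero elsewhere.

```python
# A minimal sketch of Wintner's model: f(p) = +/-1 independently on primes,
# extended by multiplicativity to squarefree n, and f(n) = 0 otherwise.
import random

def squarefree_prime_factors(n):
    """Prime factors of n, or None if n has a square factor."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return None        # d^2 divides the original n
            ps.append(d)
        else:
            d += 1
    if n > 1:
        ps.append(n)
    return ps

def rademacher_rmf(x, rng):
    """Values f(1), ..., f(x) of a Rademacher random multiplicative function."""
    eps, vals = {}, []
    for n in range(1, x + 1):
        ps = squarefree_prime_factors(n)
        if ps is None:
            vals.append(0)
        else:
            v = 1
            for p in ps:
                if p not in eps:
                    eps[p] = rng.choice((-1, 1))  # independent sign on p
                v *= eps[p]
            vals.append(v)
    return vals

f = rademacher_rmf(30, random.Random(0))
assert f[0] == 1 and f[3] == 0   # f(1) = 1, f(4) = 0
assert f[5] == f[1] * f[2]       # multiplicativity: f(6) = f(2) f(3)
```

The dictionary `eps` plays the role of the independent Rademacher variables on the primes; everything else is determined by multiplicativity.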
So this is a way to define what is called a random multiplicative function, and whenever we have questions about characters or the Möbius function, we can translate them to this random side and see whether we can obtain results for the model, which should be simpler than the deterministic side. So what questions can we ask? We do the same: we have a bunch of plus and minus ones, we look at the partial sums, and we count the number of sign changes of these partial sums of the random multiplicative function up to x. Recently there have been many works on random multiplicative functions; in particular Harper has many strong results on them. On the specific topic of sign changes, there is a result of Aymone, Heap and Zhao, who showed that almost surely the partial sums of a random multiplicative function have an infinite number of sign changes; they did it by studying the Dirichlet series associated to the random multiplicative function. Then this year, a few months ago, Geis and Hiary made this argument quantitative and showed that the number of sign changes is almost surely at least a triple logarithm: log log log of X, to some power 1/c with c strictly larger than two. So the best we can do on the random side is a triple-log number of sign changes, almost surely. Then we can ask: can we do this for characters, on the deterministic side? Let me mention the results related to the questions I raised just before. There are results by Angelo and Xu, and by Kalmynin and Soundararajan, announced in Soundararajan's ICM talk (I haven't seen the paper), which study the probability that there is no sign change at all. In both settings you can look at this sum and ask: can it stay always positive, can it stay above the axis? And they have results bounding the probability that this happens.
You can see that if it happens, it is a really rare event: it should happen for very few characters, very few Legendre symbols, very few random multiplicative functions, et cetera. I won't state precisely what they prove; there is a bunch of results on these questions. Okay, so back to our character sums. What we proved is the following: for almost all fundamental discriminants D, so if you pick your discriminant D at random and look at the partial sums, you will have more than log log D divided by a fourth iterated logarithm, so something close to log log D, sign changes, in very specific intervals of length something like e to a power of log D, which is very small. Some remarks here. In the literature, Baker and Montgomery were able to produce a bounded number of sign changes in the full range, looking at all the sums up to D. Here we have a quantitative bound on the number of sign changes, but maybe the most important point is that these sign changes are located at very small N, in very short intervals: they occur in intervals shorter than the ones predicted by Vinogradov's conjecture, since these e to the (log D) to the alpha are smaller than D to the epsilon. Our method also gives an explicit estimate on the number of exceptional discriminants which do not satisfy this. And it is rather flexible: we can do the same for random multiplicative functions, or we could look at partial sums of coefficients of modular forms, et cetera, using the same kind of strategy, which I will explain later. For instance, going back to random multiplicative functions, we can prove the following.
It is exactly the analog of what we prove on the deterministic side; basically the proof works in the same way, with averages over discriminants replaced by expectations, et cetera. You can prove that for large x, the number of sign changes of the partial sums of a random multiplicative function is larger than essentially log log x with high probability, with probability essentially one. So almost surely, a random multiplicative function has at least this number of sign changes. The question you can ask now is: is this the right order, and what can we expect? If you look at a much simpler situation where everything is independent, a sequence of independent Bernoulli variables, plus or minus one, not only on primes but everywhere, then this is a well-studied object, the classical random walk, and for the classical random walk the expected number of sign changes as n goes from 1 to x is of order square root of x. So it is much bigger than log log x. An open problem, if you have any idea, is what we should formulate as a plausible conjecture: should it be something like log x, should it be something like square root of x, in the two cases of character sums and random multiplicative functions? For instance, though I haven't written it down, you can prove that something like the Möbius function, if you take all the sums up to x, has more than log x sign changes. So you have different behaviors, and I haven't seen stated anywhere a conjecture on the expected number of sign changes for this multiplicative kind of random walk. Let's now talk about a related object, which we can connect to the problem I've just discussed and which I will need later, in the second part of the talk: our method also allows us to treat zeros of Fekete polynomials.
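Before moving on, the contrast with the fully independent walk mentioned above can be simulated. A sketch (illustrative, not from the talk): the average number of axis crossings of a plus-minus-one walk grows like the square root of its length, far above log log x.

```python
# Sketch: for the fully independent +/-1 walk, the number of sign
# changes up to x grows like sqrt(x), not like log log x.
import random
from math import sqrt

def walk_sign_changes(x, rng):
    """Sign changes of a simple +/-1 random walk of length x."""
    s, last, changes = 0, 0, 0
    for _ in range(x):
        s += rng.choice((-1, 1))
        cur = (s > 0) - (s < 0)
        if cur != 0:
            if last != 0 and cur != last:
                changes += 1
            last = cur
    return changes

rng = random.Random(1)
x, trials = 10_000, 100
avg = sum(walk_sign_changes(x, rng) for _ in range(trials)) / trials
# The average is of order sqrt(x) (a few dozen here), far above log log x.
assert 5 < avg < 10 * sqrt(x)
```

The same counting routine applied to a random multiplicative function in place of independent steps would let one probe the open question above empirically.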
So what are these Fekete polynomials? You again take a fundamental discriminant and you form the polynomial whose coefficients are the values of the Dirichlet character. These polynomials were introduced by Fekete to study the associated Dirichlet L-function. In particular, you can observe that if this polynomial has no zeros between zero and one, then the Dirichlet L-function has no real zeros there, which would be a fantastic result. Fekete formulated the hypothesis that for large D this polynomial should have no zeros between zero and one. This was disproved relatively quickly by Pólya, who showed that in fact, for a positive proportion of discriminants, there is at least one zero of this polynomial. Let me then tell you about another open problem, related to the one I mentioned about the random walk that never crosses the x-axis. It is a problem formulated by Chowla and Fekete, and then again by Sarnak and others later: can we construct infinitely many primes p, or infinitely many discriminants D, such that this polynomial F_p has no zeros on the interval? Is it possible that this polynomial never crosses the x-axis? With our methods, we can lower bound the number of real zeros of this polynomial, and we can localize them: the analog of what we do for character sums works for real zeros of Fekete polynomials. In particular, Baker and Montgomery conjectured that the number of real zeros should be of order log log D for almost all discriminants: there should be log log D real zeros. We can prove that for almost all discriminants there are at least log log D divided by the quadruple logarithm of D. And the interesting thing is that we know where these zeros are: we know how close to one they are, so we can locate these real zeros in specified intervals.
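The object is concrete enough to play with. A sketch (illustrative, not from the talk; names are my own): build F_p(x) = sum of chi(n) x^n and count its sign changes on a grid in (0,1), a lower bound for the number of real zeros there.

```python
# Sketch: F_p(x) = sum_{n < p} (n/p) x^n, with a grid count of its
# sign changes on (0,1) -- a lower bound on its real zeros there.
def legendre(n, p):
    r = pow(n, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def fekete_sign_changes(p, grid=2000):
    chi = [legendre(n, p) for n in range(p)]
    def F(x):
        v = 0.0
        for c in reversed(chi):   # Horner evaluation of F_p at x
            v = v * x + c
        return v
    changes, last = 0, 0
    for k in range(1, grid):
        s = F(k / grid)
        cur = (s > 0) - (s < 0)
        if cur != 0:
            if last != 0 and cur != last:
                changes += 1
            last = cur
    return changes

# When all partial sums of chi are nonnegative (e.g. p = 7), Abel summation
# gives F_p(x) = (1-x) * sum_N S_N x^N > 0 on (0,1): no zero at all there.
assert fekete_sign_changes(7) == 0
```

This makes the link with the previous discussion explicit: a prime whose character walk stays nonnegative yields a Fekete polynomial with no zero in (0,1).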
You see, if you recognize this quantity e to the (log D) to the alpha, this is really the connection with the character sums: we had a similar expression there to locate the sign changes. Indeed, the two proofs go more or less in the same spirit, and I will present the ideas in the character sum case and then just tell you that we can do the same for these zeros. So how do you detect sign changes? You start from the identity at the top here: taking your Dirichlet L-function, you can apply partial summation and write the L-function as an integral of your partial sums divided by a power function. So studying the L-function is related to studying these partial sums S_{chi_D}. What do you do then? You take the derivative of both sides with respect to s, and you obtain an identity which involves the first derivative of L; on the right-hand side you have another function of u times the partial sums of Dirichlet characters. Then, applying a change of variables (I could do it, believe me), you arrive at the blue identity at the bottom, which shows, more or less, that on the right-hand side you have a Laplace transform of the partial sums of Dirichlet characters. On the left-hand side you have something like L'/L, which is a nice object of study in analytic number theory, together with a factor L(s, chi_D) and a term like 1/s which will not bother us much. The idea is that if the left-hand side changes sign a lot, it forces the integrand on the right-hand side to have many sign changes. This follows from a general result in analysis, which is the following: suppose you have a function g defined on R with a Laplace transform, the integral of g(x) e^{-sx}, converging for s strictly larger than zero.
Then the number of sign changes of the integrand g is at least the number of sign changes of the Laplace transform. In our case, what we have inside the integrand is our partial sums of characters, and what we have on the left-hand side is something related to the L-function. So if you show that the left-hand side, this L'/L, has many sign changes, then the same holds for the partial sums of characters. That is how it goes, but our situation is a bit more involved: because we localize the sign changes, we need something stronger, namely control of these integrals over restricted intervals. So how do we show that the left-hand side has many changes of sign? We need to study what is on the left, which, forgetting the other terms, is more or less L'/L at some point s, for some discriminants D, and we want to understand this on average over discriminants. The idea is to take points very close to one half and show that, along some sequence of points, L'/L(s) changes sign many times. The way to do that is to model L'/L by a random object. I am skipping a few things, but if you expand it, L'/L can be written as a polynomial over the primes, and using zero-density theorems you can show that for almost all D it is a short Dirichlet polynomial supported on primes around e to the g(x), if s is one half plus one over g(x). It turns out that these polynomials are well approximated by Gaussian random variables with mean zero and a relatively large variance, about one over (2s - 1). When s is very close to one half, the thing to have in mind is that this one over (2s - 1) is very large: as you approach the central point, the variance blows up.
This is good, because we have Gaussian random variables with large variance: if you have many independent Gaussian random variables with large variance, you can be sure they will have many sign changes with high probability. So what we do is take a sequence of points s_1, ..., s_R tending to one half, chosen so that the supports of the polynomials which approximate L'/L are disjoint: different primes are involved in each approximation. Because different primes are involved, on the random side this means we have independent random variables, and for independent Gaussians with large variance you can deduce many sign changes. So what do you do indeed? You take many points, form the vector of approximations on the deterministic side, and look at the associated random vector on the random side. With high probability, by some probabilistic result, these random vectors have a positive proportion of sign changes, which is very important: they change sign, taking large positive and large negative values. Going back to the deterministic side, this tells you that the approximations have the same behavior: many sign changes. To compare with the random model we need some results on the discrepancy, and we use the method of Lamzouri, Lester and Radziwiłł. From the approximation you go back easily to L'/L, and you see that you have quantitatively many sign changes for L'/L. Then, going back and using the result I told you about, on sign changes under the Laplace transform, you get many sign changes for the character sums. So let me summarize. I had this identity.
We selected points such that L'/L is very large and takes both plus and minus values, so the right-hand side should also have many sign changes. We escape trouble because the t factor is positive, so it is not a problem; we just need the factor L(s, chi_D) not to kill the sign changes coming from the left. If L(s, chi_D) were too small, it would prevent us from deducing the result, so we need to prove a large deviation result for log L(s, chi_D): in some sense, we must prove that for almost all D, log L(s, chi_D) is not too small. So, for good discriminants: L(s, chi_D) is not too small, L'/L alternates between plus and minus at different points, and hence the same happens for the partial sums on the right-hand side. Let me briefly tell you how we locate the sign changes, which is actually the most technical part; one of the difficulties is to locate them on the character sum side. To do that, you need to know that when you integrate from zero to infinity, the main bulk, the main part of the size of this integral, comes from a very localized range between two parameters: you can show that you can forget what happens close to infinity and close to zero, and the integral is dominated by what happens between Y and Z. If you have that, it tells you that the partial sum S_{chi_D} indeed changes sign between Y and Z. To deal with this we use classical techniques, large sieve inequalities and character sum estimates. I will not go into that, because I see the time is running and I plan to talk about a second problem, which is related to Dirichlet characters but concerns exponential sums on the unit circle. So let me move to something apparently completely different: the Mahler measure of polynomials. Here it seems we no longer talk about characters, but we will come back to characters in a few minutes. So you can forget about the first part; if you were sleeping, wake up and look at this.
If we take a polynomial P, we can define an object called the Mahler measure, which is just the exponential of the integral of log |P| over the unit circle. Essentially, this is the limit, as q tends to zero, of the q-th moments of P on the unit circle. It is related, in a simple way, to the zeros of your polynomial: if your polynomial factorizes with roots alpha_i, then the Mahler measure is just the leading coefficient times the product of the moduli of the roots which lie outside the unit circle. So it captures information about the moduli of the roots of the polynomial: when you have a polynomial, the first thing you are interested in is locating the roots, and the Mahler measure captures this information in a condensed way. So you can ask: can we compute this Mahler measure for some interesting polynomials, and what can we say about it? In the last century, Littlewood studied a specific, important class of polynomials: those whose coefficients are all either +1 or -1. For a given degree n there are 2 to the (n+1) such polynomials, and they have been studied in all aspects: their norms, their Mahler measure, the maximal size on the unit circle, the number of real roots; they are also related to some combinatorics problems, et cetera. One thing which is easy is the second moment: using Parseval's formula you can show that the L2 norm is about the square root of n for a polynomial of degree n. Then there are many open questions on these polynomials. For instance, a famous conjecture was solved a few years ago about the possibility of constructing a plus-minus-one polynomial which is flat, meaning that it stays of size roughly a constant times square root of n everywhere on the unit circle.
So it is something which does not vary very much when you look at its values on the unit circle. This was called Littlewood's flatness conjecture, and it took some years to solve. That was just a historical note; we will focus not on this problem, but on the norms and the Mahler measure for specific families of polynomials. In general, the Mahler measure is easily bounded by the L2 norm, so we know it is smaller than the square root of n. One question which was asked is: does there exist a positive constant epsilon such that for every plus-minus-one polynomial P of degree n, the Mahler measure is smaller than (1 - epsilon) times square root of n? In other words, can we approach the upper bound by constructing polynomials with plus-minus-one coefficients? The largest known value is given by a polynomial of degree 12, whose Mahler measure is relatively close to square root of n: the constant is about 0.98. Nothing tells you this is the best you can do; there is no proof of that. The problem is probably still unsolved, but people looked at this extremal question and started to study families of interesting polynomials. For instance, they looked at the family of all plus-minus-one polynomials and asked what can be said on average about the Mahler measure. Averaging over all plus-minus-one polynomials, they showed you can compute this average, and you get this nice constant e to the minus gamma over two: on average, the Mahler measure of Littlewood polynomials is square root of n times e^{-gamma/2}, a constant which is not so close to one but strictly larger than one half. Since the average is larger than one half, they then asked: can we construct explicit sequences of polynomials with large Mahler measure? Choi and Erdélyi did that and constructed sequences of polynomials with Mahler measure larger than square root of n over two.
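The two bounds just used, 1 <= M(P) and M(P) <= ||P||_2 = sqrt(n+1) (Parseval plus Landau's inequality), can be checked numerically on a random Littlewood polynomial. A sketch (illustrative, not from the talk; names are my own), computing the Mahler measure from the roots:

```python
# Sketch: Mahler measure of a random Littlewood (+/-1) polynomial via
# M(P) = |a_n| * prod over roots alpha with |alpha| > 1 of |alpha|,
# compared with the L2 norm sqrt(n+1) given by Parseval.
import numpy as np

def mahler_measure(coeffs):
    """coeffs = [a_n, ..., a_0], highest degree first."""
    roots = np.roots(coeffs)
    return abs(coeffs[0]) * float(np.prod(np.maximum(np.abs(roots), 1.0)))

rng = np.random.default_rng(0)
n = 50
coeffs = rng.choice([-1.0, 1.0], size=n + 1)   # random +/-1 coefficients
M = mahler_measure(coeffs)
# Landau's inequality: 1 <= M(P) <= ||P||_2 = sqrt(n+1).
assert 1.0 <= M <= np.sqrt(n + 1)
```

For typical samples M / sqrt(n) hovers near e^{-gamma/2} (about 0.75), consistent with the average result quoted above.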
Let's now see what we can do with polynomials whose coefficients are given by Dirichlet characters. Enter again the Fekete polynomials: if you take the coefficients given by the Legendre symbol, can I understand the moments, can I understand the Mahler measure, can I relate it to some more general problems, et cetera? Here is what was known about these Fekete polynomials. Montgomery proved that the maximum on the unit circle of this polynomial is more or less understood, in the sense that it is smaller than square root of p times log p and bigger than square root of p times log log p; it is still unknown what the right order of magnitude of this sup norm is. At the beginning, people looked at the Fekete polynomials, or some related Fekete-like polynomials, because at many points they are close to square root of p in size, so they were a candidate for flat polynomials with plus-minus-one coefficients. So what was known about the Mahler measure of these polynomials? Littlewood proved that for this specific family he could answer the question above: there is a constant epsilon such that these Mahler measures are smaller than (1 - epsilon) times square root of p. On the other side, Erdélyi and Lubinsky showed that it is almost as large as square root of p over two for large primes, and then a series of improvements led to the following: the Mahler measure is strictly larger than square root of p over two. So it is somewhere between (one half plus something small) times square root of p and (1 - epsilon) times square root of p. This is the state of the art: we know the constant is not too small and not too large. And what about the asymptotic behavior? Can I compute it, and show for instance that it gives me a very large Mahler measure?
Would this give useful polynomials for the question of Mahler, et cetera? If I quote the survey of Erdélyi, it says that the problem of finding an asymptotic for this Mahler measure seems to be well beyond reach at this moment. So no conjecture had been formulated for this Mahler measure, and we solved this conjecture, I mean, this conjecture which did not exist: we proved that the Mahler measure has an asymptotic. The important point is that this is for fixed p, not an average over p: for fixed p, the Mahler measure is given by some explicit constant times square root of p, and this explicit constant is not the one you get when you average over all plus-minus-one polynomials. It looks very close to it, but it is not the same; it is not e to the minus gamma over two. So a new phenomenon occurs for this family of polynomials. How much time do I have? I think I will skip the proof, but let me just say something. What is specific and important here? When you evaluate the polynomial at the roots of unity, you get what is called a Gauss sum, and you know it has modulus square root of p. So the polynomial is quite regular at the roots of unity; that is why you could expect, for instance, that the maximum modulus on the unit circle is not too large. And the point is that the value at the roots of unity is something like square root of p times a Legendre symbol, so you have some randomness in the values at well-distributed points. From this information you can actually prove a fairly general result, which goes like this: you want to compute the Mahler measure, so you want to integrate this log |F_p|, and you look at what happens between two consecutive roots of unity.
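The Gauss-sum evaluation at roots of unity just described can be verified numerically. A sketch (illustrative, not from the talk): at every nontrivial p-th root of unity the Fekete polynomial has modulus exactly square root of p.

```python
# Numerical check: at a nontrivial p-th root of unity the Fekete
# polynomial is a Gauss sum, of modulus exactly sqrt(p).
import cmath
from math import isclose, pi, sqrt

def legendre(n, p):
    r = pow(n, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

p = 101
for k in range(1, p):
    z = cmath.exp(2j * pi * k / p)
    F = sum(legendre(n, p) * z ** n for n in range(1, p))
    # |F_p(e^{2 pi i k / p})| = sqrt(p); the value carries a factor chi(k)
    assert isclose(abs(F), sqrt(p), rel_tol=1e-6)
```

The factor chi(k) in front of the fixed Gauss sum is exactly the "randomness at well-distributed points" exploited in the proof sketched next.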
You change variables, and more or less what you are led to understand is this thing in blue, the integral between zero and one of log |g_{p,k}(t)|, where g_{p,k}(t) is just the normalized value of the Fekete polynomial at the point e((k + t)/p). Our goal is to show that this actually converges, for large p, to some constant. As I said, using the fact that you know the values at the roots of unity, you can rewrite g_{p,k}(t) as a sum of nice functions of t, with Legendre symbols in front. So it is something with randomness inside, and using this randomness you can show that g_{p,k}(t) converges to a random process; we will then compute the Mahler measure as the value of the same integral for this random process. As I said, the shifts behave like independent random variables: you have Legendre symbols, which, as we saw in the first part, can be considered as a kind of random plus-minus ones with probability one half. The other part is a nice function that you can expand in a Taylor series. So you can say: I approximate this function by a random process, which I define by keeping the same function and just replacing the Dirichlet values, the Legendre symbols, by Bernoulli variables. This gives me a random function, a random process on the space of continuous functions: a function between zero and one given by this series. The idea is that we can prove a limiting distribution result, meaning that g_{p,k}(t) tends, in some sense, to this random process. From this result, we compute everything on the random side and deduce the same results on the deterministic side.
So the main result is the following: the sequence of random processes given by the G_{p,k} converges weakly to the process G_X(t). What does this mean? If you do not want to talk about probability, it just means the second statement: for any continuous functional Φ, the average (1/p) Σ_{k=0}^{p-1} of Φ evaluated on these Fekete values converges to the expectation of Φ of the random process. For instance, you can take Φ to be the integral of the q-th power, so you can compute moments. Indeed, you can recover any information on these Fekete polynomials by computing the corresponding expectation on the right-hand side. I will skip the proof, because I have no time, and go to applications. One nice functional you have is the following: given a polynomial, you can compute its moments, and computing these moments is a continuous functional, so you can recover all the moments from this result. In particular, you can obtain an asymptotic for all the q-th moments, given as the expectation of the random process to the power q, and so on. This solves a conjecture about the moments of Fekete polynomials: before, it was only done for integer moments, by a rather complicated combinatorial proof, and here you have an explicit asymptotic for all moments, given by an integral of an expectation of a random process. Now you can tell me: yes, but this does not tell me anything about the Mahler measure, because the functional sending a function to the integral of its log is not continuous, so I cannot directly apply our result. If I have some time, I will come back to this. On a more probabilistic note, let me rephrase our result as follows.
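The moment computation can be mimicked numerically. Here is a small sketch of mine (not the paper's code) approximating the normalized moment p^(-q) ∫₀¹ |F_p(e(θ))|^(2q) dθ by averaging over equally spaced points; for q = 1, Parseval's identity gives exactly (p−1)/p, which makes a handy sanity check:

```python
import cmath

def legendre(n, p):
    # Euler's criterion for the Legendre symbol (n nonzero mod p)
    return 1 if pow(n % p, (p - 1) // 2, p) == 1 else -1

def normalized_moment(p, q, N=None):
    # Approximates p^(-q) * integral_0^1 |F_p(e(theta))|^(2q) d(theta)
    # by averaging over N equally spaced points. Taking N = 4p makes the
    # q = 1 case exact, since |F_p|^2 is a trigonometric polynomial of
    # degree p-1 and Parseval gives the value (p-1)/p.
    N = N or 4 * p
    total = 0.0
    for j in range(N):
        v = sum(legendre(n, p) * cmath.exp(2j * cmath.pi * n * j / N)
                for n in range(1, p))
        total += (abs(v) ** 2 / p) ** q
    return total / N
```

The limit theorem says these quantities converge, for each q, to the corresponding expectation of the random process.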
If I take any rectangle in the complex plane, and I look at the measure of the set of t such that the value of the polynomial on the unit circle falls into that region, then I can compute this as the probability that my process falls into the region. So this is a result on the distribution of exponential sums, because the polynomial on the unit circle is just the sum over n of χ(n) e^(2iπnθ); it tells you how this quantity distributes in the plane. An important point, as I said, is that this is not on average over primes but for fixed primes. Finally, I have a picture: this is a realization of the process in the complex plane. It shows how G_X(t) distributes as t varies, and this should be, and that is what we prove, the same distribution as the values of our polynomial on the unit circle. I see something in the chat, but let me finish with this. As I said, we want to do the same for the Mahler measure, that is, to get an asymptotic for it. We need to prove the identity I wrote here, which does not follow from the weak convergence theorem: on the left-hand side you have the Mahler measure of the Fekete polynomial, and on the right-hand side the expectation of the log of our process, which would be a kind of Mahler measure of the process. The major problem is that this functional is not continuous on the space of continuous functions, so we cannot apply our result directly. But the only obstacle is that we need to control the logarithmic singularities on the random side and on the deterministic side: we need to show that these integrals do not blow up on either side.
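The rectangle statement can be probed empirically: for a fixed prime, count how often the normalized polynomial value lands in a given region of the plane. A rough sketch (the function names are mine, and this is an illustration, not the paper's computation):

```python
import cmath

def legendre(n, p):
    # Euler's criterion; returns 0 at multiples of p
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def fraction_in_rectangle(p, re_lo, re_hi, im_lo, im_hi, N=1000):
    # Fraction of theta in [0,1) for which F_p(e(theta))/sqrt(p) lies in
    # the rectangle [re_lo, re_hi] x [im_lo, im_hi]: the empirical
    # analogue of the probability that the limiting process lands there.
    hits = 0
    for j in range(N):
        theta = (j + 0.5) / N
        v = sum(legendre(n, p) * cmath.exp(2j * cmath.pi * n * theta)
                for n in range(1, p)) / p ** 0.5
        if re_lo <= v.real <= re_hi and im_lo <= v.imag <= im_hi:
            hits += 1
    return hits / N
```

Since |F_p| ≤ p − 1 trivially, for p = 101 every normalized value has modulus below 10, so a [−10, 10]² box captures everything; smaller boxes give the nontrivial statistics the theorem describes.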
I will not enter into the details, but what we do is first truncate: we look at where our functions are not too small, say where |G_{p,k}| > ε and |G_X| > ε. There is no problem there; the truncated functionals are continuous, and the limiting distribution result applies. The main technical difficulty is then to understand the measure of the set of t on the unit circle where I have very small values, where I have some kind of singularities. What we prove is that this does not happen too often: the measure of this set is under control, so we can control the logarithmic singularities of the two integrals, on the random side and on the deterministic side. This is the main work, the main difficulty: controlling these singularities. To do so, the point is to look at the formula at the bottom for G_{p,k}(t): we need to understand the singularities of some specific functions, given as sums of rational functions with ±1 coefficients, possibly with some zeros. With some work, these functions are not too hard to handle: we can analyze the first and second derivatives and see that the function cannot stay much smaller than ε for long, because its variation is controlled. We carry out this analysis on the deterministic side and the same on the random side; on one side the ±1's are Legendre symbols, on the other they are Bernoulli variables, but it is essentially the same analysis. We can then deduce the result on the Mahler measure of the polynomials. For the moments, we do not need any of this, because there we have continuous functionals and can directly compute all moments from our distributional result.
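The small-value set that produces the logarithmic singularities can also be looked at numerically: for fixed p, measure how often |F_p(e(θ))|/√p drops below ε. A crude sketch of mine, just to make the object concrete:

```python
import cmath

def legendre(n, p):
    # Euler's criterion for the Legendre symbol (n nonzero mod p)
    return 1 if pow(n % p, (p - 1) // 2, p) == 1 else -1

def small_value_measure(p, eps, N=2000):
    # Fraction of sample points theta where |F_p(e(theta))|/sqrt(p) < eps:
    # a numerical proxy for the measure of the near-singular set that the
    # truncation argument in the talk has to control.
    count = 0
    for j in range(N):
        theta = (j + 0.5) / N
        v = sum(legendre(n, p) * cmath.exp(2j * cmath.pi * n * theta)
                for n in range(1, p))
        if abs(v) / p ** 0.5 < eps:
            count += 1
    return count / N
```

The point of the derivative analysis in the talk is precisely that this measure stays small as ε shrinks, uniformly enough to tame the log.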
Maybe I will just finish here, if I have time. Sorry, Mark, do I have some minutes? [Yes, you can take one or two more minutes, of course.] So in practice, you might say this looks a bit theoretical: you have this G_X, fine, but how do I compute it in practice? You can compute it by truncating the series up to J and then computing the expectation, which just means averaging over all possibilities of ±1 for the coefficients X_m. So when you compute the expectation, you compute an average, over all ±1 sign patterns, of an integral of the log of the sum over m ≤ J of this function. Of course it takes a lot of time, because 2^J grows very quickly: you need to compute a lot of integrals and then sum them. If you do it, you approximate the value I told you about for the Mahler measure of Fekete polynomials. You could do the same if you want to compute the constant which appears in the moments: you would again average over the ±1 coefficients and integrate what is here between the brackets to the power q. So you can compute everything on the random side and then use it to get the asymptotic for any fixed large Fekete polynomial. And actually our method is not specific to this case: it can be adapted to different polynomials. The only thing you need is that the values at the roots of unity, or on a well-chosen set of points, have some randomness. For instance, when we released the paper, Michael Mossinghoff wrote to us and asked whether we could do the same for shifted Fekete polynomials, which are polynomials appearing in some extremal problems.
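The truncated computation described here can be sketched directly: enumerate all 2^J sign patterns and average the integral of the log. Since the talk does not write out the kernel functions f_m, the ones below are a hypothetical placeholder (simple cosine modes), so the numerical value is illustrative only; the machinery of enumeration plus quadrature is the point:

```python
import itertools
import math

def truncated_mahler_constant(J=6, N=400):
    # Averages the integral of log|sum_{m<=J} x_m f_m(t)| over all 2^J
    # sign patterns (x_1, ..., x_J), mimicking the truncation in the talk.
    # NOTE: the kernels f_m(t) = cos(2 pi m t) are a HYPOTHETICAL stand-in;
    # the paper's actual kernels are different.
    total = 0.0
    for signs in itertools.product((1, -1), repeat=J):
        integral = 0.0
        for j in range(N):
            t = (j + 0.5) / N          # midpoint rule, avoids t = 0
            s = sum(x * math.cos(2 * math.pi * m * t)
                    for m, x in zip(range(1, J + 1), signs))
            # tiny guard against an exact zero hitting the log
            integral += math.log(abs(s) + 1e-300) / N
        total += integral
    return total / 2 ** J
```

As the talk says, the cost is driven by the 2^J sign patterns, so only modest truncations J are feasible; the same loop with the integrand raised to the power q would approximate the moment constants instead.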
And because we still have some randomness for this kind of polynomial, computed at primes, we could compute the Mahler measure for these polynomials too. So the method does not apply only to this one polynomial; it applies whenever you have randomness on a well-chosen set of points on the unit circle. So I think I will stop here.