And today I will be talking about sums of Kloosterman sums and Salié sums, and their applications to various things, including moments of L-functions. So let me start with the basic definitions; in particular, I want to define the sums themselves. The Kloosterman sums, which I denote K_q(m, n), have three parameters: the sum runs over x in the reduced residue system modulo q, and we add up the exponentials e_q(mx + n·x̄), where e_q means the exponential function of the argument divided by q, that is, e_q(t) = exp(2πi·t/q), and the inverse x̄ is always understood to be computed modulo q when q is in the subscript. The Salié sums S_q(m, n) are defined in an almost identical way, except that the sum is twisted by the Jacobi symbol (x/q), which automatically means that q must be an odd integer. So Salié sums look slightly more complicated, but for reasons I will explain a little later, in fact Salié sums are usually easier to handle: we have no idea how to evaluate Kloosterman sums, while Salié sums can be evaluated in an almost explicit way. Now I want to set Kloosterman and Salié sums aside for a moment and, just for motivation, talk a little bit about the Möbius function, which I denote μ(k) as usual. For the Möbius function there is something known as the Möbius randomness principle, which in a very vague way says the following: for any reasonable sequence of complex numbers a_k, say normalized so that |a_k| is at most 1, if you correlate the Möbius function with a_k there should be some non-trivial cancellation, so the sum of μ(k)·a_k should be o(K), where K is the range of summation. You expect this to hold for any sequence unless it is obviously false. And it can be obviously false: for example, if a_k is μ(k) itself, then of course you get μ(k) squared, which counts the squarefree numbers, and the sum is asymptotically (6/π²)·K.
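The definitions above are explicit enough to check numerically. Here is a minimal sketch in Python, assuming the standard normalization e_q(t) = exp(2πi·t/q) and writing x̄ for the inverse of x modulo q; the helper names are ad hoc, not standard notation.

```python
import cmath
from math import gcd

def e_q(t, q):
    """The additive character e_q(t) = exp(2*pi*i*t/q)."""
    return cmath.exp(2j * cmath.pi * t / q)

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed by quadratic reciprocity."""
    a %= n
    t = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

def kloosterman(m, n, q):
    """K_q(m, n): sum of e_q(m*x + n*xbar) over x coprime to q."""
    return sum(e_q(m * x + n * pow(x, -1, q), q)
               for x in range(1, q) if gcd(x, q) == 1)

def salie(m, n, q):
    """S_q(m, n): the same sum twisted by the Jacobi symbol (x/q); q must be odd."""
    return sum(jacobi(x, q) * e_q(m * x + n * pow(x, -1, q), q)
               for x in range(1, q) if gcd(x, q) == 1)
```

For instance, kloosterman(1, 1, 7) is real up to rounding and obeys the Weil bound 2√7 for a prime modulus, kloosterman is symmetric in m and n, and the square of salie(1, 1, 7) has negligible imaginary part, matching the real-or-purely-imaginary dichotomy of Salié sums discussed in the talk.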
Okay, so this very general and vague statement has two special cases where one can formulate exact conjectures: the Chowla conjecture and the Sarnak conjecture. The Chowla conjecture says that if a_k is also given by the Möbius function, but with the argument shifted by some h ≠ 0, and you correlate it with μ(k), you should get non-trivial cancellation; moreover, you can expect the same when a_k is a product of several values of μ shifted by different shifts h_1, h_2, ..., h_s. The Sarnak conjecture says that if a_k is produced by a dynamical system of zero entropy (I do not want to go into details, but zero entropy means the output looks simple, so it is of low complexity, or, as people sometimes say, deterministic; there are many equivalent descriptions, so rather than go to the trouble of giving a precise definition, let us stick with this vague notion of low complexity), then you should also expect non-trivial cancellation. I think it was about ten years ago that Sarnak proved that the Chowla conjecture implies the Sarnak conjecture, or maybe this was his main motivation; so in some sense the Chowla conjecture is harder, as it should be, but we have no proof of either so far. Okay, so this was known for the Möbius function, and for me it was very natural to ask whether one can formulate similar things for Kloosterman and Salié sums: how do they behave if you correlate them with some other sequences? Now there are some clear distinctions: we have three parameters instead of the one for the Möbius function. And the second question to ask, even before we try to formulate any conjectures, is what the trivial bound is, and what non-trivial cancellation means. This is not so clear, because for the Möbius function the absolute value is 1 or 0, while here it is more complicated; but at least we know that these sums are bounded by, more or less, the square root of q, and these are consequences of the famous result of
André Weil. So we have this, but there is still something we need to control better if we want to formulate a very precise conjecture: there is a mysterious q^{o(1)} in the exponent, and there is a gcd, which sometimes is 1 and sometimes is not, so this will give us some extra trouble; but we will talk about this in a second. Before entering more technical details, I want to recall the very well-known notation: we say that A ≪ B, or B ≫ A, if A = O(B); all these notations are equivalent to each other. Okay, so let us forget about proofs and formulate the conjectures which we hope to be true in an ideal world. You can consider several different scenarios, and here, instead of writing o(trivial bound), I have to be more specific, due to the slight vagueness in the upper bound on Kloosterman and Salié sums; but let me still try to formulate something meaningful. So assume I have sequences A_q indexed by the modulus, B_n indexed by one of the coefficients in the Kloosterman sums, and C_{q,n} depending on both q and n: three different types of sequences. Now, what people usually call the horizontal scenario (in this case, horizontal randomness) is when we fix m and n, the coefficients in the sums, and sum over the modulus. I conjecture that if the sequence A_q is reasonable, then the sum of Kloosterman sums correlated with A_q, over q up to Q, exhibits non-trivial cancellation. And what would the trivial bound be? We have about Q values of q, each sum of size about √q, so, ignoring the gcd and other technicalities, we get Q^{3/2}; I conjecture that the sum is at most Q^{3/2−η} for some positive η. So this is when we sum over all q. Now we can reverse the roles and consider what is called vertical randomness: q is fixed, but we
vary one of the coefficients, say the coefficient n, and I conjecture that in this case too there should be non-trivial cancellation in the sums. Finally, I can do both and vary both q and n; in this case I use my sequence C_{q,n}, and again I conjecture that there should be some non-trivial cancellation. And when do I make these conjectures? I make all three of them except when they are obviously false. It is certainly a very vague statement, but I am not claiming any proofs either: I believe these things should hold unless it is clear that they do not. Okay, but this is, as I said, the dream world; we will see in a second what we can actually do. Just a small comment: we have three parameters, and on the previous slide we averaged only over n. Of course you can ask what happens if you also average over m. It is a very valid question, and indeed you can consider the scenario where you sum over all three parameters q, m and n; perhaps you should expect the same behaviour, and in fact it is quite possible that η could be taken to be any number less than one half. And just one more small comment: even if your sequences are as simple as you can imagine, namely all constant, equal to 1, these questions are still non-trivial, interesting and useful. It is probably well known to everyone that estimates of this type, where you consider sums of Kloosterman sums, are known as Kloostermania; I think this term was introduced by Huxley, and the sums were promoted by Henryk Iwaniec. "Igor, sorry for interrupting, there is a question from Jean-Marc Deshouillers. Jean-Marc, would you please unmute and ask directly?" So the question is: what is the meaning of the η, over Q, when Q is fixed?
Eta over Q... sorry, let me come back. Well, Q is a large number; when I say "fixed" it means we do not vary it. Maybe "fixed" was the wrong choice of words: fixed means it is a very large parameter which we do not sum over, but Q is not fixed in the sense of being bounded. Now, that was the dream world; let us come back to the real world and see what one can actually prove. Since we are talking about proofs, I want to introduce a little background on Kloosterman sums. First, notice that if you change the summation variable x to −x, the sum remains the same, which means Kloosterman sums are equal to their own conjugates, and we know what this means: they are real numbers. For Salié sums this trick does not return the same sum, because of the Jacobi symbol; you get plus or minus the conjugated sum, which means there are two possibilities: the sum is either real or purely imaginary. It is convenient to remember that the square of a Salié sum is therefore always a real number. We know a little more: we can exchange the parameters m and n, since the definition is symmetric in them; and playing with the sums a little more, a simple change of variables lets you merge both coefficients m and n into one product and push everything into the first coefficient. And here you see that for Salié sums we know much more than for Kloosterman sums: there is an essentially closed-form expression. Salié sums are given by this formula, and the second sum, say when q is a prime number, contains only two terms: it runs over the solutions of the congruence x² ≡ mn (mod q), so for prime q there are at most two values. The factor ε_q is given explicitly and depends on the residue class of q mod 4, and this formula immediately tells us that if q... "There is a square root of q missing somewhere." Square root of q, yes, I am very sorry,
thank you. Yes, it is √q of course; it would be too good to be true, or maybe too bad, if √q were not here, so this expression is multiplied by √q, thank you. So if the congruence has no solutions, the sum is empty and the Salié sum is zero. I will come back to these formulas when I need them; and just for your convenience, instead of putting my name and the talk number at the bottom of the slides, as people do, I put the definitions of the Kloosterman and Salié sums there, so these definitions will always be with us. Now let me recall the terminology I introduced. There is the horizontal scenario, where we fix m and n (and here "fix" really does mean they are bounded) and vary q; a vertical scenario where, as Jean-Marc noticed, the word "fixed" is probably not a good choice of word: we do not vary q, q is still a large integer, but we run m and n modulo q over some sets; and finally the scenario where we vary all the parameters at our disposal. These three scenarios usually come in decreasing order of difficulty, and so in increasing strength of results, even though they do not imply each other; the scenarios are independent. Even if you know everything about two of the scenarios, you still do not know what happens in the remaining one, because of uniformity issues; the fact that they are of different levels of difficulty does not mean they are dependent. So let us start with horizontal randomness, where we fix m and n and vary the modulus. Here I am not sure I can say more than what we know, which is more or less nothing, except for one very, very important case, when a_q = 1/q, so each sum is scaled by 1/q. In this case, as I am pretty sure you all know, Kuznetsov's discovery of his trace formula, which extends previous results of Petersson and Selberg, gives the following estimate on sums of Kloosterman sums with coefficients 1/q. It is easier and more
useful to look at his result, with Q^{1/6} as the bound, in the shape Q^{1/2 − 1/3} in the exponent (never mind the logarithmic factor): Q^{1/2} would be the trivial bound, which follows from just the Weil bound, and he saves 1/3. That was a huge step towards the Linnik–Selberg conjecture, which says that this sum should in fact be of order Q^{o(1)}; so we are still Q^{1/6} short of this. The result was improved and modified, using some smoothing, by Deshouillers and Iwaniec, who also found new applications. Then there were a series of other results; for example, Sarnak and Tsimerman obtained a uniform version of Kuznetsov's result, meaning an estimate explicit in terms of m and n, which is a very important step for many applications. People have also considered sums of this type where q runs through an arithmetic progression, and a few other things. But unfortunately none of these results is close to what is expected, namely cancellation of the conjectured strength: there is very significant progress, but we are still not where we want to be. Now let me switch to a different scenario, vertical randomness, which means I will vary m and n but the modulus will stay the same (I will try to avoid saying "fixed"). I think the first result in this direction was about the signs of Kloosterman sums. Remember that Kloosterman sums are real numbers, so the sign is plus or minus one, with zero occurring occasionally; but we know zero almost never happens, so we will ignore zeros. If you define ε_n as the sign of the Kloosterman sum with first parameter 1 and second parameter n (you can switch the roles of m and n), and you run n over all values between 1 and p − 1, then there is a result of Fouvry, Michel, Rivat and Sárközy which says that if you consider products of signs of Kloosterman sums, there is non-trivial cancellation: the trivial bound would be p, but if you
have s terms in the product, you save a power 1/(2s + 2), so there is a non-trivial saving, which of course slowly diminishes as s grows. I think it was the first result in this direction, but certainly not the last. A little later, more direct sums were considered, direct in the sense that they involve Kloosterman sums and Salié sums themselves. Namely, consider Möbius transformations acting on the argument. In this notation, Fouvry, Kowalski and Michel proved, about seven years ago, that if T is a Kloosterman sum or a Salié sum (it applies to both), you take s in some sense independent Möbius transformations, and you form the product of the sums T (which is K or S) with the second argument modified by these Möbius transformations, then again there is non-trivial cancellation, and in fact the bound is quite good: there is essentially square-root cancellation when the length of summation is close to the largest possible length p. The product is of order p^{s/2}, so the trivial bound would be N times p^{s/2}; the result says that the sum is at most √p times p^{s/2}, again ignoring p^{o(1)} in the bound. So when N is bigger than p^{1/2+ε} you have non-trivial cancellation between these sums with the second parameter twisted by the Möbius transformations. Of course you need some linear independence of those transformations; you need some conditions, but they are very natural, necessary conditions. And, as I said, the bound is non-trivial in this range, when the sum is long enough. Well, besides correlations of Kloosterman sums among themselves, people have also considered how Kloosterman sums correlate with other functions, and here I will consider the two probably most famous arithmetic functions, namely the Möbius function and the divisor function. So I will look at two sums: sums M, where we twist Kloosterman sums with the Möbius
function, and sums T_q(N), where we do the same with the divisor function τ, say for prime moduli q. The trivial bounds here... I wrote just 1 here, because each Kloosterman sum is at most about √q, so for prime values of q the bound is nice and clean; but, so, what did I say here? Sorry, I missed N: I did not scale by N. This should be N, and the sum with the divisor function gives at most N log N; sorry, my apologies, it is not 1, it is N, and the second is at most N log N. Anyway, these are the trivial bounds, and if you want non-trivial bounds, then yes, we have them, but you need to assume that the sum is long enough. Remember, in the result about the signs of Kloosterman sums, the length of the sum giving a non-trivial saving had to be about p^{1/2}; here, in this reasonably recent result of Kowalski, Michel and Sawin, you need to run N up to p^{3/4} for the Möbius function and up to p^{2/3} for the divisor function. But in either case you have non-trivial cancellation, and with a power saving: the first sum is bounded not by N, which I mistakenly wrote as 1, but by N^{1−η}, and the same applies to the second sum, for some positive constant η.
They did not specify the dependence of η on ε, but I believe that if you go through the argument, you should be able to take η to be some fully explicit absolute constant c times ε²; I do not think it was done there, but it is probably what the argument gives. Let me stress that in these ranges you have a power saving. If you agree to sacrifice the power saving and just want a non-trivial bound, then you can reduce the range of the sums: instead of the exponents 3/4 and 2/3, you can consider ranges of order p^{1/2+ε}. And, okay, here I corrected one typo, it is N, but I still did not correct the second one; it should be N log N, sorry. So instead of these trivial bounds, Maxim Korolev and I proved that you have some logarithmic saving, but now in the range N at least p^{1/2+ε}. We used a different approach, even a different method: we used some of the results of Fouvry, Kowalski and Michel, but we dealt with one ingredient in a different way, and we obtained these non-trivial estimates in a shorter range, though, as I have already said, we lost the power saving. There is another class of moduli. In the previous results we considered prime q (this is why I used p); in a slightly different case, when the modulus is a power of a small fixed prime, for example a power of 2, you can consider very short sums. This is a result of Kui Liu, Tianping Zhang and myself: we proved that when q is a power of a fixed prime p, you have a non-trivial power saving for very short sums of this type, namely as soon as N is bigger than q^ε. Well, I presented it as a theorem here, but let me add "perhaps", and let me explain why it has never been written. What we actually did was consider sums over primes, so the second argument of the Kloosterman sums runs over prime numbers, and usually sums of this type are harder; because for those sums we obtained a similar result, the same
method should work for the Möbius function; we just did not think about it at the time. So I am quite sure that what I said here is correct and follows from our argument. You can consider different variations of this scenario and different sums: for example, you can twist Kloosterman sums with a digit function and consider sums of Kloosterman sums where one of the arguments runs over a set of integers with restrictions on their binary digits. For example, you can run n, one of the coefficients, over the set of integers with exactly s non-zero binary digits; this is the cardinality of that set. Here, in the same paper, we proved the following: if s, the number of non-zero digits, is greater than r, the bit length of the integers (the total number of digits), times roughly 0.11, or a little more than this, a constant ρ₀ defined by some equation, then you already have non-trivial cancellation. So this parameter ρ₀ is a density of non-zero digits: if the density is a bit more than 0.11, you have non-trivial cancellation in the sum. Now, my initial goal was to consider Kloosterman sums twisted by arbitrary sequences, and indeed this is what I want to discuss now. Before I go into it, let me note that if I naively try to deal with sums of this type, with the two parameters m and n twisted by some γ_{m,n}, then of course I have no chance, because I can just take γ_{m,n} to be the Kloosterman sum itself and obtain a sum of squares; so no general result of this type with an arbitrary sequence γ is possible. I need to assume at least something about the sequence, and this something can be expressed in several different ways. Namely (and for many applications we should), we consider three different types of sums: smooth sums, where γ_{m,n} = 1 (these sums are still very interesting and non-trivial); sums where γ_{m,n} depends only on
the first variable m, which are traditionally called type I sums; and sums where γ splits into a product of two sequences α_m β_n, which are called type II sums. One can consider the case when only the norms of the sequences are bounded, but today, for simplicity, I will always assume that the α's and β's are at most 1 in absolute value, so they are individually bounded rather than bounded on average. Here is a list, and I am pretty sure an incomplete list, of various applications of bounds on sums of this type. Probably the most impressive application is to moments of L-functions: it was obtained in a series of works of Blomer, Fouvry, Kowalski, Michel and Milićević during these years, and then I did something in 2017. These first applications used Kloosterman sums; then Ilya Shkredov, Alexandru Zaharescu and I did something with Salié sums and also found applications to moments of slightly different L-functions. There are some more arithmetic applications, to the divisor function in arithmetic progressions, and again here you can see several results; very recently Bryce Kerr, other collaborators and I started working on this together, and we hopefully have something coming out soon on applications of sums of Kloosterman sums to arithmetic progressions. The sums you saw in the previous part were also an ingredient of the method which Kowalski, Michel and Sawin, and then Korolev and I, used when considering Kloosterman sums twisted by arithmetic functions. So I am sure this is not a complete list of applications; there are many more, and the sums are important and interesting. When we deal with sums of this type, there are usually two related but still independent goals. In some scenarios you want the strongest possible bound in what is called the Pólya–Vinogradov range, namely when M and N are of size about q^{1/2}, because usually these sums become hard when M
and N are around the square root of q: when they are bigger, more elementary methods work (it is still a very important range, but usually you can handle it by more elementary methods), and when M and N drop below this level, things become harder. So this is one goal you can set yourself: the best possible bound for M and N of this order of magnitude, which is very important for some applications. For other applications, it is not the strength of the bound as such but the range in which you have a non-trivial bound that becomes more important: you want a non-trivial bound on the sums you saw two slides ago, but in a huge range of the parameters M and N which control the averaging of the Kloosterman sums. So let us talk about the Pólya–Vinogradov range first. I think the first breakthrough here was the paper of Blomer, Fouvry, Kowalski, Michel and Milićević in 2014, who proved that if the product MN is less than p^{3/2} and M ≤ N² (this condition is certainly satisfied if you are really interested only in this range, so it is not a restriction, and you are usually interested in the case when M is of the same order of magnitude as N, so these conditions are very easy to satisfy), then for the type I sums you have this bound; and you can easily check that in the Pólya–Vinogradov range, when M and N are about p^{1/2}, this bound saves p^{1/24} against the trivial bound, the trivial bound being the number of terms, MN, times √p. So this is exactly the case when M and N are in the range where things become hard. A few years later I used a different, in a sense more elementary, approach and proved a generalisation of the bound to arbitrary moduli; the bound I obtained looks like this, and it saves q^{1/16}. Here q can be any integer, not necessarily prime, and this improves the 1/24
down to 1/16 compared with that result. Generally, these two bounds I presented are not the only ones; there are other bounds obtained by different authors in different contexts. If you compare all the previous bounds, which I did not list here (I presented only the two most recent ones), then this bound wins against all of them in a polygon explicitly defined by this list of vertices, and the most important point, (1/2, 1/2), which I think is somewhere here, is comfortably inside it; so it wins at the critical point. This allows one to improve some applications, and these applications, as I mentioned, were to moments of L-functions. So what do we know here? We consider the standard Dirichlet L-functions (the method applies to other L-functions as well, but let me start with these). For many years people have tried to estimate higher moments of these L-functions. About ten years ago Matthew Young proved that if you take the fourth moment, namely average the fourth powers over all non-trivial characters, then the result is given by an explicit polynomial of degree 4; the main term was known for many years. "You notoriously ignored the normalization." I ignored the normalization, it is 1/p, thank you, sorry; of course you have to normalize by the number of terms, with 1/p in front. So the average value of the fourth power of the L-functions is given by a polynomial of degree 4, with a non-trivial power saving; previous results, due to Heath-Brown and some other people, gave a logarithmic saving, and this was the first result with a power saving. Then Blomer, Fouvry, Kowalski, Michel and Milićević improved the saving, which was essentially 1/100, to 1/32, so the saving became much larger; and a few years later they noticed that what Tianping
Zhang and I did in 2016 for smooth sums suffices: for Dirichlet L-functions you do not need type I or type II sums, you just need smooth sums, and using this bound allows one to reduce the 1/32 to 1/20. I just want to make a small comment: the result they used holds for any integer q, but unfortunately this generalization to arbitrary composite q does not propagate to the results on moments of L-functions; there are other ingredients which have to be handled before we obtain results of this type with a power saving for composite moduli. It is a very important question, and I consider it doable, but there is a lot of machinery involved, and it would be a very non-trivial task to make sure that we use the full power of these bounds on Kloosterman sums and really get results for composite q. Okay. So before, the crucial part was the strongest bound in the Pólya–Vinogradov range; for many other applications you need just the widest range, and here the strongest result is due to Kowalski, Michel and Sawin in 2018, who gave a very nice estimate on type I and type II sums. For type I sums, where there is only one coefficient, the bound is non-trivial when M and N exceed p^{1/3}; the range is slightly broader than this, but this is the most impressive way to present the result, and you can go as low as p^{1/3}. For type II sums you have to sacrifice a little, and 1/3 becomes 3/8. The saving η is a function of ε, and as usual in results of this type, η should be of the shape c·ε²; I am pretty sure you can get it in this form. So this is what we know in this case. Until now I was talking about Kloosterman sums; now let me say a few words about Salié sums. In fact, most of the bounds hold for Salié sums as well, and Salié sums are usually easier, as I said, but surprisingly not all of them do: for example, this approach of Kowalski, Michel and Sawin, I believe, does not extend to
Salié sums. So we have direct analogues of most of the results, but not of all of them. On the other hand, Salié sums are typically easier to handle, so very often we can do more and better, even if we have to change the tools we apply. So now I will talk a little about Salié sums, and I start with type II sums, because we do not take much advantage of considering type I sums for Salié sums (that is not quite true, we do have a better bound for type I sums in some cases, but I do not want to talk about that now). So consider type II sums of Salié sums: we have two sequences of complex numbers α_m and β_n, we sum over m and n, and consider the sums. Using the fact that the square of a Salié sum is always a real number, we very quickly arrive at the following bound: the square of this type II sum is at most the L²-norm of α (or, if you assume the |α_m| are at most 1, you can put √M here) times the L^∞-norm of β, which is 1 under our assumption; this term is easy to understand. Then we have this sum over three variables, n₁ and n₂ outside and the variable m inside, where you have a correlation between two values of Salié sums: you correlate the values at n₁ and n₂, average over all m in this range, and then also average over n₁ and n₂, so there is a triple sum after this symbol. From now on we concentrate on these sums, sums of Salié sums in this form. Such sums were introduced by Dunn and Zaharescu in 2019, who used them in the study of moments of certain L-functions, L-functions of automorphic forms of half-integral weight, something the previous technology with Kloosterman sums would not be able to handle: the moments they consider are not covered by the approach of Blomer, Fouvry, Kowalski, Michel and Milićević. What they proved is this bound, which I strongly encourage you not to look at carefully; nevertheless, you can take my word that in some ranges this
bound is non-trivial, even if that is not completely obvious, and you certainly do not want to verify it; you had better trust me that it is non-trivial even if it is not pleasant-looking. This result was then improved, twice, in a series of two papers: one again by Dunn and Zaharescu, where my PhD student and I joined them, and later, in work with Zaharescu, we obtained another bound. Here is a summary of both results. The bounds may look a little more pleasant than in the previous case, but it is probably still not much fun to verify when they are non-trivial. Consider, however, the special case when both variables are less than p^{1/3}, so the ranges are very small. In the second bound, only the first term in each of the expressions matters (these terms are smaller than the second ones, because the influence of p is bigger than the influence of M here, and bigger than the influence of N there), and in this range the bound takes the following form: it becomes MN², the number of terms in the triple sum, times p, which is the trivial bound, times the factor (p/(M²N²)) to the power one quarter. But never mind the exact power: you see that the bound is non-trivial when this factor is less than 1, which happens when MN is bigger than p^{1/2}. This means you have a non-trivial bound starting from the case when both variables are of order p^{1/4}, which of course is far below the Pólya–Vinogradov range, where both variables are of order p^{1/2}. Now, just a few words about the ideas behind the proof, because one question we had to consider is a very nice question of independent interest: we needed to estimate what one could call the energy of square roots. Namely, you want to estimate the number of solutions of an equation in four variables u, v, x, y, each running over F_p, where we impose that the squares belong to an interval, a very
short interval of length K. The equation is very simple: u - v = x - y. If you ignore the fact that square roots are not uniquely defined, you can write it like this: you are interested in values U, V and so on from this interval such that their square roots satisfy this linear equation, which of course is not completely correct, because square roots are not uniquely defined, but you understand what it all means. (A question from the audience: what is an interval in a finite field, or rather, what do square roots in a finite field mean? We consider the quadratic residues lying in this interval; and yes, K is much smaller than p. Still, we have to work with representatives of the field elements, which is why I say this is an informal description: the formal object is the quantity we actually estimate, but you can think of it as an equation in square roots, even if it is probably not a good idea to put it that way in a paper.) What we want is to improve on the trivial bound, which is K^3. Of course, equations of this type are of independent interest; I think it is a very nice equation to consider. What we managed to prove is that E(K) does not exceed, and here you have a choice of two bounds. The first bound is K^4 / p + K^{5/2}. In this bound the first term is correct: if you take four random variables, each taking K values, then the expected number of times this relation is satisfied is exactly of this order. The second term is non-trivial, it is smaller than K^3, but it does not go all the way down to what you would expect, namely the diagonal behavior. In the second bound the main term is in some sense wrong, it does not reflect this randomness, but the second term gives you exactly what you would expect, namely the diagonal behavior. So the truth is probably K^4 / p + K^2, which
unfortunately we still cannot prove. Anyway, we have this non-trivial bound, and we can use it to estimate bilinear sums with modular square roots. One of the applications of these bounds, the bounds on the sums considered before, is to the distribution of modular square roots of primes, and our application tells us the following (again, it is the same paper, with Bryce Kerr, Ilya Shkredov, Alexandru Zaharescu and myself). Fix some epsilon and assume that, for an integer L in this range (the range comes from our bound), there are sufficiently many prime quadratic residues, namely L^{1+o(1)} of them. We certainly expect the count to be about one half of L / log L, and under the Generalized Riemann Hypothesis we certainly have it, but unconditionally we have to make this assumption of abundance of prime quadratic residues. In this case, the discrepancy of the set of fractions x/p, where x runs over all solutions of the congruences x^2 ≡ ℓ (mod p) for primes ℓ up to L, tends to zero, so this sequence of fractions is uniformly distributed. As I already said, the condition holds under the Generalized Riemann Hypothesis, and we can also show that it holds for almost all primes; the constant here is a slight improvement of the 13/20 which we had in our previous work. Okay, I think my time is almost up, so I want to briefly mention some open questions; give me two minutes to go over them rather quickly, because I won't be able to say much anyway. For many years I have been trying to say something not completely trivial about sums of this type: you take a Kloosterman sum and yet another Kloosterman sum with a shifted modulus, and you correlate them. I wanted to have a non-trivial bound for this sum, that is, to show it is O(Q^{2 - eta}) for some constant eta > 0. Unfortunately, nowadays I see no way to say
anything in this direction. Perhaps a slightly easier question, but still very hard, is to keep the modulus fixed and just take a different set of coefficients; when you change the modulus you lose a lot of power and a lot of the tools you had before. Still, the question is very difficult. Of course you need some non-triviality conditions: h should be non-zero here, and here (k, l) should be different from (m, n). Perhaps you should even expect this bound with any eta less than one half, but again, this is just a conjecture. Please note that this last condition is not enough, because, as I said, you can always move the coefficients around in any way you like and put both of them in the first component; so you really need these to be genuinely different rather than just different as ordered pairs. And of course the first equation can be extended: you can ask an even harder question, similar to the Chowla conjecture, which is probably as hard or maybe even harder. Now, once we have these hard conjectures, it is natural to try to check them numerically. Unfortunately, even this is not so easy. Because of what people call twisted multiplicativity, you can reduce the calculation of Kloosterman sums modulo q to calculations of Kloosterman sums modulo the prime divisors of q, but all simplifications stop at this level: when you come to Kloosterman sums modulo a prime p, we have no non-trivial algorithm to compute them. There is an obvious symmetry, but beyond this symmetry it is a sum of p complex numbers, so computing it without losing precision is difficult, and of course you want to look at large values of p. So computationally it looks like a very difficult task, and I do not see any natural way to do better than the trivial algorithm. Function field analogs: typically, function field analogs are considered to be easier, because there we have GRH thanks to André Weil. However, in this case the function field setting actually lags behind the number field case, and it does so in both
aspects. Even for bounds of bilinear sums: we have no bounds for type II sums, and type I sums are estimated in a much weaker way. Many of the tools which exist over the integers somehow evaporate when you move to function fields, and we have no substitutes for them, so you lose a lot of tools. Usually it is the other way around, but in this case the number field side wins. The second aspect is also open: the link between these sums and moments of L-functions is missing as well. Even if you assume perfect bounds here, we still do not know how to handle function field analogs of L-functions defined over, say, the rational function field over F_q. There are some results in this direction, but they all give logarithmic savings, because they do not establish any link to Kloosterman sums. And of course you can consider higher correlations of Salié sums, which is also interesting. It is probably not so interesting as a question in its own right, but I like it because it immediately leads to a generalization of the additive energy: we considered the equation with four variables, but to say something non-trivial for these longer sums you need to deal with an equation of the same type in 2k variables, where k is the number of sums you multiply. What you want here is a bound of the shape K^{2k} / p, which reflects the random structure of square roots in finite fields, plus K^{2k - 3/2}, which comes from the previous results and by itself is not interesting; so you want to subtract something more from this 3/2, taking advantage of the fact that you now have more variables, say 6, 8 or more, times p^{o(1)}. I would be very interested to see a bound of this type; unfortunately, the previous tools do not directly apply to equations of this kind. I think this is the last slide. Thank you very much, and I am done for today.
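To make the objects in this talk concrete: the Salié sums discussed throughout can be computed directly from the definition. Here is a minimal Python sketch (my own illustration, not code from the talk; the function names are mine), assuming p is an odd prime:

```python
import cmath

def legendre(a: int, p: int) -> int:
    """Legendre symbol (a|p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return -1 if pow(a, (p - 1) // 2, p) == p - 1 else 1

def salie_sum(m: int, n: int, p: int) -> complex:
    """Salié sum T_p(m, n): sum over x in (Z/pZ)* of (x|p) e((m x + n x^{-1}) / p)."""
    total = 0.0 + 0.0j
    for x in range(1, p):
        xinv = pow(x, -1, p)  # modular inverse (Python 3.8+)
        total += legendre(x, p) * cmath.exp(2j * cmath.pi * (m * x + n * xinv) / p)
    return total
```

A pleasant sanity check, consistent with the almost explicit evaluation mentioned at the start of the talk: the sum vanishes whenever mn is a quadratic non-residue modulo p (for example T_5(1, 3) = 0), and its absolute value never exceeds 2 sqrt(p).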
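The energy of square roots E(K) discussed above can be counted by brute force for small p, which gives a way to compare the trivial bound K^3 with the proved bound K^4/p + K^{5/2} and the conjectured K^4/p + K^2. A sketch under my own conventions (interval taken to be [1, K], residues represented in [0, p-1]):

```python
from collections import Counter

def sqrt_energy(p: int, K: int) -> int:
    """E(K): number of quadruples (u, v, x, y) in F_p^4 with u - v = x - y (mod p)
    such that u^2, v^2, x^2, y^2, reduced to [0, p-1], all lie in [1, K]."""
    # S = "square roots of the interval": elements whose square falls in [1, K]
    S = [x for x in range(p) if 1 <= (x * x) % p <= K]
    # r(d) = number of pairs (u, v) in S^2 with u - v = d; then E = sum of r(d)^2
    diffs = Counter((u - v) % p for u in S for v in S)
    return sum(c * c for c in diffs.values())
```

Note that |S| is about K (each quadratic residue in the interval contributes two roots), so |S|^3 plays the role of the trivial bound K^3 from the talk.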
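The equidistribution statement for modular square roots of primes can also be explored numerically: collect the fractions x/p with x^2 ≡ ℓ (mod p) for primes ℓ ≤ L and measure their star discrepancy. A small sketch (the helper names are mine; the root finding is the trivial O(p) search per prime, suitable only for small p):

```python
def primes_up_to(L: int) -> list:
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (L + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(L ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

def sqrt_fractions(p: int, L: int) -> list:
    """Fractions x/p for all solutions of x^2 = ell (mod p), ell prime, ell <= L."""
    pts = []
    for ell in primes_up_to(L):
        for x in range(p):  # trivial search for modular square roots
            if (x * x) % p == ell % p:
                pts.append(x / p)
    return pts

def star_discrepancy(pts: list) -> float:
    """Star discrepancy of a finite point set in [0, 1)."""
    pts = sorted(pts)
    N = len(pts)
    return max(max(abs((i + 1) / N - x), abs(i / N - x))
               for i, x in enumerate(pts))
```

For the uniform distribution statement of the talk, one expects star_discrepancy(sqrt_fractions(p, L)) to become small as p grows with L in the admissible range.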
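Finally, on the computational question raised near the end: the trivial O(q) algorithm for Kloosterman sums, together with a numerical check of twisted multiplicativity, K_{q1 q2}(m, n) = K_{q1}(q2^{-1} m, q2^{-1} n) K_{q2}(q1^{-1} m, q1^{-1} n) for coprime q1, q2, can be sketched as follows (function names are mine):

```python
import cmath
from math import gcd

def kloosterman(m: int, n: int, q: int) -> complex:
    """K_q(m, n): sum over x in (Z/qZ)* of e((m x + n x^{-1}) / q),
    computed by the trivial algorithm mentioned in the talk."""
    total = 0.0 + 0.0j
    for x in range(1, q):
        if gcd(x, q) == 1:
            xinv = pow(x, -1, q)
            total += cmath.exp(2j * cmath.pi * (m * x + n * xinv) / q)
    return total

def twisted_product(m: int, n: int, q1: int, q2: int) -> complex:
    """Right-hand side of twisted multiplicativity for coprime q1, q2."""
    q2_inv = pow(q2, -1, q1)  # inverse of q2 modulo q1
    q1_inv = pow(q1, -1, q2)  # inverse of q1 modulo q2
    return (kloosterman(q2_inv * m, q2_inv * n, q1)
            * kloosterman(q1_inv * m, q1_inv * n, q2))
```

This reduction to prime moduli is exactly where, as said above, the simplifications stop: modulo a prime p there is nothing better than summing p - 1 complex exponentials.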