Thank you very much for the invitation. I will not be original in saying that I am of course very honored to speak in this seminar. The main theme of the talk is the law of small numbers: I want to give you yet another example of the phenomenon where something that seemed absolutely true at the beginning fails miserably as we go along. The more conventional outline is that I'll start with some classical results on zeros of polynomials, then move on to the zeros of a special class of polynomials, the Fekete polynomials, and tell you what we know about them. Then I'll mention a couple of new results and the ideas that go into them, and finally I'll turn to some speculations, and to a failed attempt to prove something towards those speculations. So the starting question is: given a polynomial with complex coefficients, what can we say about its zero set? Of course, stated like this the question is too general. Here is a picture many of you have seen, I guess, if you watch Lord of the Rings: what is pictured is the zero sets of the polynomials with coefficients ±1 of degree up to 24. One immediately sees an interesting phenomenon: the zeros tend to equidistribute around the unit circle, so there is a huge concentration there. The key to explaining this is a very classical theorem of Erdős and Turán, which says, roughly: given a polynomial, look at an arc, and count the zeros whose arguments lie inside that arc.
Then the number of these zeros, minus the expected number, is upper bounded by the quantity on the right, which is the square root of n times the logarithm of the (normalized) Mahler measure. If the coefficients are bounded by, say, one, then the Mahler-measure contribution is at most log n, and so the difference is what you see here: a good discrepancy bound. What about the real zeros? If you take your arc to be very small and close to the point 1, then essentially this tells you how many zeros lie in a small sector, which corresponds to the real zeros, and that gives a bound of order √(n log n), at least in the class of Littlewood polynomials — these are the polynomials with coefficients ±1, about which Littlewood wrote extensively at the end of his life. It took another fifty years or so before Borwein, Erdélyi, and Kós shaved off this √(log n) factor and proved the bound of order √n, which is actually sharp as far as the real zeros are concerned. So much for the extremal count. As for what happens for a typical polynomial with coefficients ±1, there is an enormous number of results, but let me mention just three very classical ones. The first is due to Littlewood and Offord, who proved that for almost all choices of signs the number of real zeros is at most of order log² n — which, as it turned out, is not the correct order of the expected value. It took some time, with the developments around the Kac–Rice formula, before this was refined for random polynomials with Gaussian coefficients: the expected number of real zeros is asymptotic to (2/π) log n.
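Going back to the clustering picture for a moment: the concentration of zeros of ±1 polynomials near the unit circle is easy to see numerically. Here is my own quick sketch, not from the talk; the degree 200 and the 10% annulus are arbitrary choices.

```python
# Numerical sketch: the roots of a random Littlewood polynomial
# (coefficients chosen from {-1, +1}) cluster near the unit circle,
# as the Erdos-Turan theorem predicts.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # degree of the polynomial
coeffs = rng.choice([-1.0, 1.0], size=n + 1)
roots = np.roots(coeffs)                   # all n complex zeros

# Fraction of zeros whose modulus lies within 10% of the unit circle.
near_circle = np.mean(np.abs(np.abs(roots) - 1.0) < 0.1)
print(f"{near_circle:.2f} of the zeros satisfy 0.9 < |z| < 1.1")
```

For a typical sample essentially all the roots land in this thin annulus; the deviation of a typical root modulus from 1 shrinks as the degree grows.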
And then it took some more time to generalize this to non-smooth distributions, where the coefficients are random Bernoulli ±1 signs, and Erdős and Offord showed that the number of real zeros is again asymptotic to (2/π) log n. So these are the results I want you to keep in mind: the bounds of order √n and of order log n in this family of polynomials. Now let me move to the actual topic, which is a specific class of polynomials, the Fekete polynomials. What are they? They are the polynomials whose coefficients are the values of the quadratic character attached to the discriminant: for a prime discriminant this is just the Legendre symbol, and for a general positive fundamental discriminant one replaces it by the Kronecker symbol. The starting point of all the work on this subject is that if you take the Mellin transform of the Fekete polynomial, as on the left-hand side, you get the Dirichlet L-function of the corresponding quadratic character. The story begins with Fekete, who at the beginning of the twentieth century observed that if F_D, my polynomial, has no zeros between 0 and 1, then the integrand on the right-hand side is positive, so there is no sign change on the left-hand side, and therefore the L-function has no real zeros. This is the first instance of the law of small numbers: he checked it for small values of the discriminant, and it led him to conjecture that this is always the case. A couple of years later his academic brother Pólya disproved the conjecture, by constructing a family of discriminants whose Fekete polynomials do have zeros in (0,1). And another twenty years later Chowla came into the picture, and still insisted that F_D has no real zeros, at least for sufficiently large D.
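Fekete's positivity observation is easy to test numerically for a small prime. This is my own sketch, not from the talk; p = 7 is an arbitrary small example.

```python
# Build the Fekete polynomial F_p(t) = sum_n (n|p) t^n with
# Legendre-symbol coefficients and verify it has no sign change on
# (0,1).  (t = 1 is always a zero, since the symbols sum to zero.)
def legendre(n, p):
    """Legendre symbol (n|p) for an odd prime p, via Euler's criterion."""
    s = pow(n, (p - 1) // 2, p)
    return s - p if s == p - 1 else s    # map the residue p-1 to -1

def fekete(t, p):
    return sum(legendre(n, p) * t**n for n in range(p))

p = 7
values = [fekete(k / 1000, p) for k in range(1, 1000)]
print(all(v > 0 for v in values))        # no sign change on (0,1)
```

For p = 7 one can even see the positivity by hand: factoring out (1 − t) leaves a polynomial whose coefficients are the partial sums of the Legendre symbols, and these are all non-negative — the same factorization the speaker returns to at the end of the talk.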
Shortly after that Heilbronn came in, and, with pretty much the same argument as Pólya, again produced many discriminants for which there are zeros. So what is it that puzzled all these people and made them interested in this subject? Here is the picture of the zero set of F_43, and there are three things here that immediately beg for an explanation. The first mystery is that one sees an enormous number of zeros on the unit circle. The second is the pairs of zeros that are symmetric with respect to the unit circle — and this is hardly a mystery: it is simply the fact that replacing z by 1/z essentially reverses the polynomial, so the zero set is symmetric with respect to the unit circle. The third mystery is the real zeros that you can see appearing for F_43. The questions I want to talk about are: how many of these zeros lie on the unit circle, and how many are real? Let me begin with a simple argument showing that there really are a lot of zeros on the unit circle; the argument is from the paper I'll mention on the next slide. Take a p-th root of unity and normalize the Fekete polynomial in a suitable way; evaluating at this root of unity, the polynomial becomes essentially a cosine polynomial, so real-valued. If instead you evaluate at the k-th power of the root of unity, then, by the standard property of Gauss sums, you just get the original Gauss sum times the Legendre symbol of k. The key observation is that if the Legendre symbols of k and of k+1 are the same, then there is a sign change as you traverse the arc between the k-th and the (k+1)-st root of unity.
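This count of consecutive equal symbols can be pinned down exactly. The classical identity Σ_{k=1}^{p−2} (k|p)((k+1)|p) = −1 forces exactly (p−3)/2 pairs with equal symbols — about half, as claimed. A quick check (my own sketch):

```python
# Count the pairs k with (k|p) = ((k+1)|p).  The character-sum
# identity sum_{k=1}^{p-2} (k|p)((k+1)|p) = -1 gives exactly (p-3)/2
# such pairs, so the sign-change argument yields about p/2 zeros of
# the Fekete polynomial on the unit circle.
def legendre(n, p):
    s = pow(n, (p - 1) // 2, p)
    return s - p if s == p - 1 else s

p = 1009
equal = sum(1 for k in range(1, p - 1)
            if legendre(k, p) == legendre(k + 1, p))
print(equal, (p - 3) // 2)               # both are 503
```

The identity itself follows by writing (k|p)((k+1)|p) = (1 + 1/k | p) and summing over k.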
So the number of zeros on the unit circle is at least the number of non-sign-changes in the sequence of Legendre symbols, which is about p/2. And here is the second instance of the law of small numbers: if you check discriminants up to 500, these happen to be all the zeros on the unit circle; but for a somewhat larger discriminant, in particular 661, a few more zeros appear. About twenty years ago Conrey, Granville, Poonen, and Soundararajan showed that the true proportion of zeros on the unit circle is in fact a tiny bit bigger than one half. So as far as the complex zeros are concerned, we know what happens. Let me mention one more related result — a very beautiful problem of Littlewood; the relation will, I hope, become clear later. There is a well-known conjecture of Littlewood about the zeros of cosine polynomials: you pick a set A of frequencies, form the cosine polynomial with frequencies in this set, and ask for the minimal number of zeros on a period of such a polynomial. Littlewood conjectured — or rather wrote something along the lines of — "perhaps this is n − 1, or not much less." There has been great progress in this direction. About fifteen years ago came a big surprise, when Borwein, Erdélyi, Ferguson, and Lockhart constructed cosine polynomials with a much smaller number of zeros, of order n to the five sixths; a recent result brings this down to n to the two thirds. As for how fast this minimal number must grow — I think this is a question of Brian Conrey — the lower bound is a result of Sahasrabudhe, who showed that it is at least a power of the triple logarithm, log log log n.
The Fekete polynomial, in a sense, sits right at the boundary of validity of Littlewood's conjecture, because its number of frequencies is linear. OK: so the complex zeros are more or less understood, and the question is how many real zeros there are. Here the guiding conjecture is due to Baker and Montgomery, and it is also mentioned, for prime discriminants, in Conrey, Granville, Poonen, and Soundararajan: it seems likely that for a hundred percent of discriminants there should be roughly log log D real zeros. Baker and Montgomery proved thirty years ago that for almost all discriminants the number of real zeros does grow to infinity. What I want to tell you about is the first result towards the lower bound in this conjecture. We can't quite get log log D zeros, but for a hundred percent of discriminants one gets a lower bound that is somewhat smaller than log log D: you need to lose a factor of a quadruple logarithm. And here KLM has nothing to do with the Dutch royal airline. So much for the lower bound. What about upper bounds? In Erdélyi's survey it is stated that the available upper bounds are not better at all than for general Littlewood polynomials — that is, the bound of order √d we saw on the first slide. So what can one prove? The first result I'd like to tell you about is that under GRH, for a good proportion of discriminants — not quite a positive proportion, but for x^(1−ε) of the fundamental discriminants up to x — one can beat the square root and get d^(1/3) zeros. And if you are not a believer in GRH, then one can still get d^(1/4) zeros for fundamental discriminants, at the cost of a worse saving in the number of discriminants.
Before I say something about finer scales, I'd like to discuss the proof, or the ideas of how one goes about proving this, and then perhaps return to the more refined scales of the zeros. The first idea of the proof is very much based on the work of Baker and Montgomery, who made a very nice observation: if you differentiate under the Mellin transform, then the left-hand side can be written in terms of the logarithmic derivative of the L-function. Now, to understand the zeros, there is a fact present in Karlin's work, but really going back to Pólya and Szegő: the number of sign changes of a function is at least as large as the number of sign changes of its Laplace transform. The right-hand side here is a Laplace transform, so in order to lower-bound the number of zeros of F_D it is enough to get a good bound on the number of sign changes on the left-hand side. So here is the rough plan: the goal now is to produce many sign changes on the left-hand side. First, by the explicit formula, L′/L can usually be well approximated by a long sum over primes. The problem is that this sum is too long, so the second step is to show that for many discriminants the sum can be localized to a short range of primes. Once you have a short Dirichlet polynomial, you want to define a random model, comparing the Dirichlet character to an appropriately chosen random multiplicative function. The random model then converges to a suitable normal random variable.
The problem is that these normal random variables are not independent, but with a good choice of the sample points they do become independent. The key to the quantitative bounds is then a good discrepancy estimate: let me mention that for a Gaussian random vector one expects quite a few sign changes, and so the idea for quantitative bounds is to prove a good discrepancy bound between the random model and the approximation. What I want to do now is describe these steps in a bit more detail. Here is the first step, the approximation step; one can ignore the parameters here. The key is that, given the L-function, one can use the explicit formula — this is very classical — to write L′/L as a sum over primes, and this works for many, many fundamental discriminants. Once we have this, the problem, as I mentioned, is that the sum over primes is rather long, so the next step is to localize it. Again there are parameters floating around, but the key is that for most fundamental discriminants L′/L can be approximated by a short Dirichlet polynomial: if you fix x, the ranges u(x) and v(x) of the sum are going to be quite short. This is usually proved by moment estimates and the large sieve inequality. So now we have a short sum, and the idea, as promised, is to approximate it by the random model. What is the good random model here? Well, you might think that the characters should be modeled by a fair coin: +1 with probability one half and −1 with probability one half.
[Audience] There seems to be some confusion of variables on the last slide — presumably it is meant to be u(x) and v(x), so that the range depends on x, not on s? [Speaker] Actually, u and v do depend on x. [Audience] There's an x and there's an s — do they depend on s at all? [Speaker] Oh, I see what you mean. I should have made this clearer — unfortunately my tablet doesn't work, so I can't point at the first line — s here is a function of x. So s, if I understand your question correctly, depends on x. [Audience] Right, then it makes sense. [Speaker] It is actually a very good point to stop on: because we are aiming for quantitative bounds, it is very important at every intermediate step to have a good dependence on x, since we want to come very, very close to the half-line. Making everything explicit in terms of x is exactly the key in these intermediate results. So yes, I should have written u(x) and v(x). All right, let me go back. Here is the right model — it has been used extensively in many works on the subject, in particular by Elliott, by Montgomery and Vaughan, and by Granville and Soundararajan. It turns out that the right model, instead of just flipping a fair coin, adjusts it a little. The reason is that fundamental discriminants modulo p² occupy p² − 1 residue classes, and for about p − 1 of them the character value at p is zero. So the probability of hitting zero is roughly 1/(p+1), and the rest splits evenly between −1 and +1.
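The probability 1/(p+1) admits a quick sanity check. This is my own sketch, using squarefree integers as a simple stand-in for fundamental discriminants: among squarefree d ≤ N, the proportion divisible by p is asymptotically 1/(p+1), matching the weight the model gives to the value 0.

```python
# Among squarefree d <= N (a proxy for fundamental discriminants),
# the proportion divisible by a prime p is close to 1/(p+1) -- the
# probability the random model assigns to chi_d(p) = 0.
def squarefree_upto(N):
    ok = [True] * (N + 1)
    d = 2
    while d * d <= N:
        for m in range(d * d, N + 1, d * d):
            ok[m] = False                # m is divisible by a square
        d += 1
    return [n for n in range(1, N + 1) if ok[n]]

N, p = 10**5, 3
sf = squarefree_upto(N)
frac = sum(1 for d in sf if d % p == 0) / len(sf)
print(frac, 1 / (p + 1))                 # both close to 0.25
```

The heuristic: a squarefree number divisible by p is p times a squarefree number coprime to p, which accounts for a 1/(p+1) share of all squarefree numbers.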
So the characters are modeled well by this random model. As I said, we now have our short Dirichlet polynomial, and if you plug in the random model, these random variables converge to a centered normal distribution with a certain variance — and you can notice that as you come close to the half-line this variance explodes, so it is very important to control the variance near the half-line. Once we have this comparison, the key for the quantitative bounds, as I promised, is a good comparison between the distribution of the random model and the deterministic part; and here comes the comparison step. If we sample our Dirichlet polynomial at different points, chosen so that the corresponding ranges do not intersect, then on the one hand we get a deterministic vector of Dirichlet polynomials sampled at these points, and on the other hand a similar random vector. To compare them we define a rectangular discrepancy, which is the following: you take the difference between the distribution of the character vector and that of the random-model vector, and take the supremum over all rectangular boxes as you move the box around. It turns out — I have time, though I thought I would not talk much about this — that using the techniques of Lamzouri, Lester, and Radziwiłł, who recently developed very nice methods for quantitative bounds in universality theorems, one can prove that the discrepancy between the random model and the deterministic part is at most (log x)^(−1/5); the exponent one fifth is not very important.
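As a Monte Carlo sketch (mine, not from the talk) of the kind of comparison being made here: one can sample the random model for a short Euler-type sum over primes and measure the sup-distance of its empirical distribution from the Gaussian. The cutoff 1000 and the sample size are arbitrary choices.

```python
# Sample the random model sum_{p <= P} f(p)/sqrt(p), where f(p) = 0
# with probability 1/(p+1) and +-1 each with probability p/(2(p+1)),
# standardize, and measure a Kolmogorov-type discrepancy against the
# standard normal CDF.
import math, random

def primes_upto(P):
    sieve = [True] * (P + 1)
    sieve[0] = sieve[1] = False
    for q in range(2, int(P**0.5) + 1):
        if sieve[q]:
            for m in range(q * q, P + 1, q):
                sieve[m] = False
    return [q for q in range(2, P + 1) if sieve[q]]

random.seed(1)
ps = primes_upto(1000)
var = sum(1 / (q + 1) for q in ps)       # variance of the model sum

def sample():
    s = 0.0
    for q in ps:
        u = random.random()
        if u < 1 / (q + 1):
            continue                     # f(q) = 0
        s += (1 if u < (q + 2) / (2 * (q + 1)) else -1) / math.sqrt(q)
    return s / math.sqrt(var)            # standardized

xs = sorted(sample() for _ in range(4000))
Phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))
disc = max(abs((i + 1) / len(xs) - Phi(x)) for i, x in enumerate(xs))
print(f"discrepancy ~ {disc:.3f}")
```

The actual theorem compares the character vector to this model with explicit error terms; this toy version only shows that the standardized model sum is already close to Gaussian at a modest cutoff.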
Once this is assembled — we have a random model and we have the deterministic part — here is a quick summary of what happens. We take points that are very well separated, and such that our random variables change sign with some definite size: it is not enough to get a bare sign change; you need some size to compensate for the gamma factors. With high probability the Gaussians change sign in this way, and, as we have seen, for almost all discriminants the random model well approximates the deterministic one, which is what we needed. Putting it together, you get a good number of sign changes in the vector of Dirichlet polynomials, hence a good number of sign changes of L′/L, and hence a good number of real zeros of F_D. I don't want to go too deep into the technical details; instead let me tell you a little more about the upper bounds. How would one prove upper bounds on the number of zeros of these polynomials? In some sense this is simpler, simply because we don't know much about how to do it. A good way — or at least a way — is to go back to the beginning of the twentieth century and recall Jensen's formula.
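In the form used here, Jensen's formula says that if |f| ≤ M on the circle |z − z₀| = R and |f(z₀)| ≥ m > 0, then the number of zeros of f in the smaller disk |z − z₀| ≤ r < R is at most log(M/m)/log(R/r). A toy numerical check (my own, with an arbitrary quadratic):

```python
# Jensen-type zero bound: count of zeros in |z - z0| <= r is at most
# log(M/m) / log(R/r), where M = max |f| on |z - z0| = R and
# m = |f(z0)|.  Toy example: f(z) = z^2 - 1/4, zeros at +-1/2.
import cmath, math

f = lambda z: z * z - 0.25
z0, r, R = 0.0, 0.6, 2.0

M = max(abs(f(z0 + R * cmath.exp(2j * math.pi * k / 1000)))
        for k in range(1000))            # max on the big circle (= 4.25)
m = abs(f(z0))                           # = 1/4
bound = math.log(M / m) / math.log(R / r)
print(bound)                             # ~2.35, consistent with 2 zeros
```

The covering argument described next runs exactly this estimate on each small circle, which is why one needs the polynomial to be not too small at the centers and not too large on the enlarged circles.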
So Jensen's formula tells you that if I have a point, and I want to upper-bound the number of zeros inside a ball around this point, then as long as I know that the polynomial is not too large on some larger ball, that its value at the point is not too small, and that the circles are somewhat separated (R is not too close to r), I can upper-bound the number of zeros inside the disk. The natural first idea would be to take the interval (0,1), cover it by one circle, and bound the number of zeros that way. The problem is that if the small circle covers the whole interval, then the bigger circle sticks out of the unit disk, and we lose control of the maximum of the polynomial on it. The way to overcome this difficulty is to use a covering: instead of one circle, cover the interval by several. Roughly, here is how it's done, and what we need for it. Pick points x_α and x_β, normalized in a suitable way. For many discriminants we can ensure lower bounds on the size of the Fekete polynomial at x_α and at x_β; then I can apply Jensen's formula in the circle around one point, and again in the circle around the other, and the point is that if the second circle is small enough, it does not stick out too much, so we can hope to keep control over the corresponding big circle. In practice, then, the problem is to produce simultaneously large values at the points x_α and x_β, for various α and β. The way to show that there are enough discriminants with large values at x_α and x_β is to show that the first mixed moment is large while the second mixed moment is small, so this becomes a problem of bounding mixed character sums: the prototypical estimates are a lower bound on the first mixed moment of these values and an upper bound on the second mixed moment. In practice we actually use not two circles but three, and it is then a matter of executing the Poisson summation formula and the bounds for character sums — essentially all the standard bounds you can imagine — to get the numerical exponents I told you about before.

What I want to do in the last part — quite fast, actually — is to go back to Fekete's original question and understand how wrong, or how right, Fekete actually was. We know that most discriminants have many real zeros; but are there discriminants with no zeros at all, for which Fekete was right? There is a much more general heuristic of Peter Sarnak about what are called totally positive L-functions, and one outcome of this heuristic is that there should exist infinitely many discriminants for which the Fekete polynomial has no zeros in (0,1). The heuristic proceeds by comparison with the random model, but one concrete way of thinking about polynomials without zeros is through a simple factorization: t = 1 is always a zero, so if you factor out (1 − t), the coefficients of the quotient become character sums. So if the character sums are all positive — if they never change sign — then immediately there are no real zeros. I wanted to show you a movie of such character sums which do not change sign — unfortunately, I don't know why, the movie doesn't play — so: these are character sums for discriminants around 200,000 which do not change sign, and therefore for all these conductors the Fekete polynomial has no real zeros. You can pick your favorite shape — you can see that these characters achieve a lot of different shapes. What is pictured is the character sum up to the size of the conductor, scaled by √q (up to a logarithm), which is the natural scaling.

The last thing I want to tell you is that if you are willing to give up on taking the full interval (0,1), then one can actually produce such discriminants, and quite simply: if you are willing to stop short of the endpoint by a power of one over the logarithm, then one can cook up a good proportion of discriminants for which the Fekete polynomial has no zeros on that range. And — this is in progress and hasn't been written down — if one is willing to work at the 1/log scale, then for a hundred percent of discriminants the number of zeros is upper bounded by (log log)². So the 1/log scale is the first range where one can see the conjectured upper bound of order log log. To compare: why is this somewhat better than for the Littlewood polynomials? For Littlewood polynomials, the results of Borwein, Erdélyi, and Kós give that intervals of length about one over the logarithm contain at most on the order of log-many zeros; here the interval is somewhat wider, and there is a smaller number of zeros — in fact no zeros at all.

I think I still have some time, and this proposition is quite simple and not technical, unlike the previous ones, so let me try to describe the proof — that went by too fast. The key to constructing many discriminants without zeros is to construct many discriminants whose character equals +1 on all integers up to quite a large point y. Here is the simple lemma: if you fix the parameter y, then the number of such discriminants up to x is at least a fixed power of x. The proof is quite simple. Take a vector of ±1's, and for a given vector look at the primes whose Legendre symbols at the small primes are exactly the prescribed values v_i. The key point is that if you take two primes with the same vector of values, then their product has character value +1 at every small prime, since the symbols multiply. By multiplicativity, the character of the product is then +1 on all integers up to y; and the prime number theorem together with a combinatorial pigeonhole argument shows that the number of such products is at least as large as the number of pairs you can choose from the sets of primes sharing a symbol vector, which is a fixed power of x. I don't have another slide on this, but once you have constructed characters that all point in the same direction, it is very easy to prove that the polynomial has no zeros up to some point. One small caveat, about where the extra win comes from: those who have looked at this will clearly see the square root of e — it comes from the Burgess bound on the least quadratic non-residue; and one can use Vinogradov's trick to boost it a little, to do slightly better than that. I will not talk about it right now. I guess I am done — this is the final slide — thank you for your attention.
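The pairing construction from the last proposition can be sketched in a few lines. This is my own illustration: the cutoff y = 13 and the search range are arbitrary, and I ignore the congruence conditions that would make pq an actual fundamental discriminant.

```python
# Find two primes p, q whose Legendre-symbol vectors at the small
# primes agree; then the quadratic character attached to d = p*q
# equals +1 at every small prime, hence (by multiplicativity) at
# every integer up to y built from them.
def legendre(n, p):
    s = pow(n % p, (p - 1) // 2, p)
    return s - p if s == p - 1 else s

small = [2, 3, 5, 7, 11, 13]             # the primes up to y = 13
seen = {}
for p in range(17, 2000, 2):
    if any(p % d == 0 for d in range(2, int(p**0.5) + 1)):
        continue                          # skip composites
    v = tuple(legendre(ell, p) for ell in small)
    if v in seen:                         # pigeonhole: 2^6 patterns only
        q = seen[v]
        break
    seen[v] = p

d = p * q
# (ell|pq) = (ell|p)(ell|q) = +1 for every small prime ell.
print(all(legendre(ell, p) * legendre(ell, q) == 1 for ell in small))
```

Since there are only 2^6 possible sign vectors, a collision among the primes is guaranteed very quickly, which is exactly the pigeonhole step of the lemma.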