Thank you very much for the invitation; it's a pleasure to talk. I see on the screen that this meeting is being recorded — I'll just click "Got it"; that's OK, Alexander. I hope you can see my screen. This talk is addressed to a broad audience, not only to the experts, although I already see a few names — so we do have lots and lots of experts in this area. If anybody has a question, please ask it directly. Let me start. We all know Dirichlet's famous theorem: in any arithmetic progression we can have infinitely many primes. More precisely, if we take two integers a and d that are relatively prime — their greatest common divisor is 1 — then there are infinitely many primes in the arithmetic progression a mod d. One step in the proof is to show that L(1, χ) is not zero for any non-principal character χ. Dirichlet used his L-functions and made linear combinations of their logarithmic derivatives, and he was able to estimate certain sums involving primes in a given arithmetic progression. The main contribution comes from the Riemann zeta function, which corresponds to the principal character. Of course ζ(1) is not defined — ζ has a pole there — and that pole gives the main term. But in principle this might cancel with contributions coming from other Dirichlet L-functions if they vanish at s = 1, because then their logarithmic derivatives would have poles themselves. Dirichlet showed that this does not happen. He also developed his also famous class number formula. Say d > 4, and I talk about the imaginary quadratic number field Q(√−d); h(−d) is the class number — it counts the ideal classes. This is a finite number, but it is strictly positive: h is strictly larger than 0, that is, larger than or equal to 1.
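Dirichlet's theorem is easy to probe empirically. As a quick illustration (my own sketch, not part of the talk), the following counts the primes up to one million in each of the φ(12) = 4 reduced residue classes mod 12; the counts come out essentially equal, as the theorem (in its quantitative form) predicts.

```python
# Empirical check of Dirichlet's theorem for d = 12: primes spread evenly
# over the residue classes a with gcd(a, 12) = 1, namely 1, 5, 7, 11.
from math import gcd

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

d = 12
counts = {a: 0 for a in range(d) if gcd(a, d) == 1}
for p in primes_up_to(10 ** 6):
    if gcd(p, d) == 1:          # this skips only p = 2 and p = 3
        counts[p % d] += 1

print(counts)                   # four counts of roughly 19600 each
```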
That's one way to prove that L(1, χ_d) is strictly positive, so it doesn't vanish — and not only this, but it is at least π/√d. Now, if we can develop some theory with which we show that L(1, χ) is non-zero and not small, then this gives us a way to show that the class number is large. If L(1, χ) is of size 1, say, then h will be forced to be of size √d. So the class number formula relates the special value at 1 of this particular L-function to the class number of this quadratic number field. And as I said before, we have the unconditional bound L(1, χ) ≫ 1/√d, where "≫" means larger than a constant times. In what follows, I will just write χ for a real primitive Dirichlet character mod d. It takes real values — the values are 0, 1, and −1, because the non-zero values are at the same time roots of unity. What should hold true is that L(1, χ_d) — this χ depends on d — is larger than a constant times 1/log log d and smaller than a constant times log log d; this is assuming the generalized Riemann hypothesis. Unconditionally, one can show that the value of the Dirichlet L-function at 1 is bounded above by a constant times log d, but lower bounds are much harder to obtain. The reason is that we cannot — we don't know — I shouldn't say we, let me say I — I don't know how to rule out a possible hypothetical real zero, call it β, of L(s, χ) with β close to 1. If I may go back to this: think about L(1, χ). I have a complex variable s here, and I let s = 1. Now, if I let s = β, with β less than 1 but extremely close to 1, and β is a zero — what we call a Landau–Siegel zero — then the value of the function at 1 cannot be too large. It has to be small, unless the derivative grows very fast.
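The class number formula just mentioned can be tested numerically. Here is a small sketch of mine (not from the talk): for the fundamental discriminant D = −23 it computes h(−23) by counting reduced binary quadratic forms, approximates L(1, χ_D) by a partial sum of the Kronecker symbol, and compares with π h/√|D|.

```python
# Numerical sanity check of L(1, chi_D) = pi * h(D) / sqrt(|D|)
# for the fundamental discriminant D = -23 (class number 3).
import math

def kronecker(a, n):
    """Kronecker symbol (a|n) for n >= 1, via quadratic reciprocity."""
    if a % 2 == 0 and n % 2 == 0:
        return 0
    result = 1
    while n % 2 == 0:                     # pull out factors of 2 from n
        n //= 2
        if a % 8 in (3, 5):
            result = -result
    a %= n
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                       # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def class_number(D):
    """h(D) for a fundamental discriminant D < 0: count reduced forms."""
    h = 0
    for a in range(1, math.isqrt(-D // 3) + 1):
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                # reduced: |b| <= a <= c, and b >= 0 when |b| = a or a = c
                if c >= a and (b >= 0 or (-b != a and a != c)):
                    h += 1
    return h

D, N = -23, 10 ** 5
h = class_number(D)
L1 = sum(kronecker(D, n) / n for n in range(1, N + 1))
predicted = math.pi * h / math.sqrt(-D)
print(h, L1, predicted)
```

The partial sum converges slowly but well enough here; the two displayed values agree to three decimals.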
But as I said, for the function and the derivative one can always give elementary upper bounds. So just by this remark we can see that real zeros, if they exist, cannot be too close to 1: such a β cannot be at a distance smaller than something like 1/√d from 1 (up to logarithmic factors). Let me jump over a few other remarks. I will also send the organizers my slides, so you can look at them later. Well, let me go a bit faster over this. The classical zero-free region looks like this: there is no zero of any Dirichlet L-function — say χ is now a character mod q — in a region like this, with one possible exception. Here s = σ + it, where σ is the real part and t the imaginary part, and the region is where the real part is close to 1, at a distance smaller than a constant over log q(|t| + 2). In particular, if I take t = 0 and look at real zeros: these L-functions have no zeros at a distance less than a constant over log q from 1, with the possible exception of one zero. Let me make it clear: for one Dirichlet L-function we may have one exception. If the character is not real, then we have no exceptions — that has a proof. If it is a real character, we may have one exception. But it doesn't mean that every single real character produces an exception; in fact, we know that these Landau–Siegel zeros, if they exist, form a sequence that is very, very sparse — one proves things like this, that they kind of repel each other nicely. Well, if there is no real zero in the classical zero-free region, Hecke showed that L(1, χ) is large — at least a constant over log d — and this gives a very good lower bound for the class number; in particular, it would solve the class number problem. Now, what we know, still unconditionally, but with possible exceptions — or in other words, with ineffective constants — are results of Landau and Siegel.
You see, both of them are from 1935; in fact, I looked at the papers, and they appeared in Acta Arithmetica one right after the other. As soon as Landau came up with some ideas and obtained his result unconditionally, Siegel got some additional ideas and proved his theorem: L(1, χ) is unconditionally larger than a constant c(ε), depending on ε, over d^ε. This holds true for any ε > 0, but the constant c(ε) might be tiny when ε is small — we have no idea — and not only that, it is not effective, not even in principle. So we do not solve the class number problem in this way. The best effective lower bound that I know comes from work of Goldfeld combined with the work of Gross–Zagier, and it is of logarithmic size: the class number is larger than about a constant times log d. It is not exactly like this; the sharpest inequalities that I've seen are of the form of an absolute constant times log d times a product over the primes dividing the discriminant. And this allows one, in principle, to solve the class number problem: given an h, we should be able to find the finitely many imaginary quadratic number fields with that class number. Now, there are many nice and famous results that involve Landau–Siegel zeros in one way or another. For instance, a theorem of Linnik: there is an absolute constant L such that in any arithmetic progression a mod d with gcd(a, d) = 1 there are primes congruent to a mod d that are small — the first such prime is less than a certain power of d, namely d^L, with L an absolute constant. The best result so far, the current record as far as I know, is L = 5. Of course, we expect much stronger results to hold true.
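To get a feeling for Linnik's theorem, here is an illustration of mine (not from the talk): for d = 101 it finds the least prime in every progression a mod d. Empirically the worst case sits far below d^5 — in line with the expectation that something like d^{1+ε}, or d² under GRH, should suffice.

```python
# Least prime in each arithmetic progression a mod d, for d = 101 prime.
# Linnik guarantees a bound c * d^L with L = 5 the current record;
# the observed worst case is vastly smaller.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def least_prime(a, d):
    """Smallest prime p with p ≡ a (mod d); assumes gcd(a, d) = 1."""
    p = a
    while p < 2 or not is_prime(p):
        p += d
    return p

d = 101                                   # d prime, so every 1 <= a < d works
worst = max(least_prime(a, d) for a in range(1, d))
print(worst, d ** 2, d ** 5)
```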
Under the assumption of the generalized Riemann hypothesis — the Riemann hypothesis for the Riemann zeta function as well as for all the Dirichlet L-functions — you can take L to be 2 + ε. So for instance, if I want to find the first prime in a given arithmetic progression modulo d, then under GRH I will be able to find such a prime less than d squared, say d^{2+ε}. So it is very interesting that Friedlander and Iwaniec were able to show — but now assuming the existence of a Landau–Siegel zero in certain ranges; such a zero has influence in certain ranges, so without being very precise — that the exponent can be pushed below 2. So in some sense this goes beyond GRH. And there are other results like these; people might call them illusory results, because we do not think that there is any Landau–Siegel zero. I don't think that any Landau–Siegel zero exists; in fact, I think that GRH, the generalized Riemann hypothesis, holds true, and then of course there are no zeros where they shouldn't be — I mean, apart from the trivial zeros, all the non-trivial zeros should be on the critical line. But why prove such results? Well, one reason is that sometimes you can prove a result assuming the existence of a Landau–Siegel zero, then prove the same result assuming that no Landau–Siegel zero exists, and by combining the two proofs you can get an unconditional result. But such theorems also allow us to test the strength of this hypothesis — that Landau–Siegel zeros exist — and there are many such results in the literature. And they also allow us to measure, say, the strength of the present tools, of the current technology.
And very quickly, let me mention a few of them, not precisely. The first thing one observes is this: fix a modulus D and look at the primes in the various arithmetic progressions mod D. Among all the characters mod D, there may be a real character whose L-function has a Landau–Siegel zero, and that may distort the distribution quite a lot: half of the arithmetic progressions may have very few primes, and the other half may have almost twice as many. Now, another very nice, famous result, of Heath-Brown, for instance: there are infinitely many twin primes — but what do you assume to get this? It is not enough to assume the existence of one Landau–Siegel zero; as I said, a single zero has influence only in a certain range. But if you assume an infinite sequence of Landau–Siegel zeros, then you can get this very surprising result. Another type of result concerns primes in short intervals. If I take a large number x, how soon after x can we find a prime? Of course, we expect to find a prime in an interval of length about log² x, things like this — well, we don't know this. Assuming the Riemann hypothesis — I mean the Riemann hypothesis for the Riemann zeta function — we can get results of this form: intervals [x, x + x^{1/2+ε}], not exactly √x, let's say x^{1/2+ε}, or maybe √x times some logarithms; then you can prove the existence of such primes. So this exponent 1/2 — the length of the interval — is what is important, and we can do something if the interval is longer than this. And again, as I said, RH gives 1/2 + ε.
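The log² x expectation for prime gaps is easy to probe numerically. This little sketch (mine, not from the talk) computes the largest gap between consecutive primes up to 10^6 and compares it with the conjectural scale log² x ≈ 191 and with the RH-scale interval length √x = 1000.

```python
# Largest prime gap below 10^6, versus the conjectural log^2(x) scale
# and the sqrt(x) scale that RH-type results can reach.
import math

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

X = 10 ** 6
ps = primes_up_to(X)
max_gap = max(q - p for p, q in zip(ps, ps[1:]))
print(max_gap, round(math.log(X) ** 2), math.isqrt(X))
```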
And again, there is a context in which Friedlander and Iwaniec are able to show, assuming the existence of Landau–Siegel zeros, that you can find primes in smaller intervals, with an exponent 1/2 minus something. Another very nice result of theirs — again, assuming the existence of Landau–Siegel zeros: you can find primes of the form 4a³ + 27b² — this is a striking result. In particular, you can apply this to the discriminant to find elliptic curves with only one place of bad reduction. That's an application. Let me jump over a few other results. Of course, the hypothetical existence of Landau–Siegel zeros would distort the so-called pair correlation function — Montgomery's pair correlation. It was studied by Montgomery, studied by Heath-Brown, and also studied more recently — I mean, more recently, but still some time ago — by Conrey and Iwaniec. For instance, there is a result of Conrey and Iwaniec that kind of pushes this to the limit: it shows that if you assume the existence of Landau–Siegel zeros, then the distances between consecutive zeros will, almost always, be at least half the average spacing. Why are these results very interesting? Because suppose someone comes up with a new idea and shows unconditionally that you cannot have this. We do expect a positive proportion of the distances between consecutive zeros of the Riemann zeta function to be larger than the average distance, but also a positive proportion to be less than the average distance — still a positive proportion to be less than one-tenth of the average distance. We have, say, a clean conjecture, this pair correlation conjecture, and in general this model predicts things like this. So if you combine something like that with the Conrey–Iwaniec result, it would show that there are no Landau–Siegel zeros.
Oh, and I want to mention another result, of Peter Sarnak and myself, 20 years old, on sequences of Landau–Siegel zeros. Again, I'm not going to state the result precisely, but I'm happy to talk separately with anybody who is interested in any of this, or in particular in this result, and discuss the actual results that are in the paper. What the result says is the following: sequences of Landau–Siegel zeros cannot exist by themselves, without additional sequences of complex zeros of some other L-functions. In other words, let's say we assume GRH — the generalized Riemann hypothesis — for all non-trivial zeros of all Dirichlet L-functions, except that we allow any such L-function to have as many real zeros as it wants, so to speak. So assume GRH except for possible real zeros of all these L-functions; then we can show that there are no sequences of Landau–Siegel zeros. That was the result there. Now, a couple of ideas here, something about the mechanism. One thing that happens if we assume the existence of a Landau–Siegel zero of L(s, χ) — say we fix a real character χ such that L(s, χ) has a zero very close to s = 1 — is that this forces χ(p) to be −1 for most primes p. Heuristically, think about the Euler product: L(s, χ) is the product over all primes of (1 − χ(p) p^{−s})^{−1}; that would be the series, except we are not really allowed to compute it at s = 1. Now, I have an infinite product here over all primes, and if we take the primes in order, there are primes for which χ(p) is 1, there are primes for which χ(p) is −1, and there are primes for which χ(p) is 0 — a few of them, maybe, those primes which divide our capital D.
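This heuristic can be put in numbers (my illustration, not from the talk): taking every Euler factor at s = 1 with χ(p) = +1 gives a product that diverges like e^γ log X, by Mertens' theorem, while taking every factor with χ(p) = −1 gives a product tending to 0 — the caricature of a small L(1, χ).

```python
# Two extreme truncated "Euler products" at s = 1 over the primes up to X:
# all chi(p) = +1 (factors (1 - 1/p)^{-1}) versus all chi(p) = -1
# (factors (1 + 1/p)^{-1}).
import math

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

X = 10 ** 5
plus = minus = 1.0
for p in primes_up_to(X):
    plus *= 1 / (1 - 1 / p)      # Euler factor at s = 1 when chi(p) = +1
    minus *= 1 / (1 + 1 / p)     # Euler factor at s = 1 when chi(p) = -1

mertens = math.exp(0.57721566490153286) * math.log(X)   # e^gamma * log X
print(plus, minus, mertens)
```

The "+1 everywhere" product tracks e^γ log X closely, while the "−1 everywhere" product is already below 0.1 at X = 10^5.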
Those are finitely many, but for all the other primes χ(p) is 1 or −1. Now, for those primes with χ(p) = 1: if you put a 1 in the factor, you get 1 − 1/p, which is less than 1 — but I have it to the power −1, so when you invert, it is larger than 1. So each one of these primes contributes a factor larger than 1 to the product: it increases the product, it makes the product large. But we are assuming that this number is small, because we are assuming that L(s, χ) has a Landau–Siegel zero — a zero close to 1 — and that forces L(1, χ) to be small. So in this sense the zero forces χ(p) to be −1 for most primes. Let me say this more rigorously: one can show something like this unconditionally. I have a D, and a χ that is a real character mod D, and we can prove unconditionally a result like this — it is a finite sum. OK, so I have two parameters: I have this capital D, which is fixed, and I have this possible Landau–Siegel zero — but the statement is unconditional: whether or not we have a Landau–Siegel zero, it doesn't matter, it still holds true. I take a parameter X, and I take the sum over n up to X of a certain arithmetic function — namely the Dirichlet convolution (1 ∗ χ)(n). If you are more familiar with Dirichlet series and multiplication of series: if I put an n^{−s} here and take the sum to infinity, this will be just the product of our L(s, χ) with the Riemann zeta function, and if you look at the coefficients, that's what you get. The coefficients are non-negative; they are integer numbers.
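This convolution is easy to experiment with. A sketch of mine (not from the talk), using the small real character mod 3 — the Kronecker symbol (−3|n) — in place of a character of large modulus: the coefficients (1 ∗ χ)(n) are indeed non-negative integers with value 1 + χ(p) at primes, and the weighted sum over n ≤ X grows like L(1, χ) log X, here with L(1, χ_{−3}) = π/√27.

```python
# The Dirichlet convolution lam(n) = sum_{d | n} chi(d) for the real
# non-principal character chi mod 3, and the growth of sum lam(n)/n.
import math

chi = lambda n: (0, 1, -1)[n % 3]    # chi(n) for n = 0, 1, 2 mod 3

X = 10 ** 5
lam = [0] * (X + 1)
for d in range(1, X + 1):
    cd = chi(d)
    if cd:                            # add chi(d) to every multiple of d
        for m in range(d, X + 1, d):
            lam[m] += cd

S = sum(lam[n] / n for n in range(1, X + 1))
L1 = sum(chi(n) / n for n in range(1, X + 1))   # ~ L(1, chi) = pi/sqrt(27)
print(min(lam[1:]), S / math.log(X), L1)
```

The printed ratio S/log X is close to L(1, χ), up to the constant term involving γ and L′(1, χ) that the talk mentions.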
Now, on the right side I have the value L(1, χ) multiplied by log X — so this part depends on X — then a constant involving γ and the derivative L′(1, χ) of our L-function computed at 1, and then an error term that looks like D^{1/4} log X over √X. If I take X to be large compared to D — I don't even need X large compared to D, I can take X = D — then I have D^{1/4} in the numerator and √D in the denominator, so the denominator dominates the numerator: this piece is less than 1, it is small. The constant terms are constants; the only part that should be large is the first part, L(1, χ) times log X, as X increases. But if I have a Landau–Siegel zero, then L(1, χ) is small, and this forces the whole sum to be small — smaller than expected. Now, to be precise, what we do is this: I take two values of X, both of them powers of capital D — say I take D² and D^A, where A is large; later on maybe I take A to be 300, but let's say I take A to be 100 here. I compute the sum for X = D^{100}, that is, X = D^A, and I compute it for X = D², and I subtract. What I see on the left side is just the sum of these coefficients over n between D² and D^A. On the right side I have error terms that are small, and I have constant terms that cancel: the constant L′(1, χ) has no X in it, so it cancels; the γ times L(1, χ) cancels as well — whatever is constant cancels. I am left with L(1, χ) times log X — I'm sorry, times log D^A minus log D² — so the whole thing is less than L(1, χ) log D^A. Now, if A is 100 or 1000, that is still as small as log D, because log D^A will be equal to 100 log D or 1000 log D. Now finally, if I have a Landau–Siegel zero and L(1, χ) is less
than 1 over log³ D, and you multiply this by just a single log, the result will be less than 1 over log² D. If I assume that I have a worse Landau–Siegel zero, so that L(1, χ) is less than 1 over log^{1000} D, then I multiply by this poor single log and I still get a saving of 1 over log^{999} D. So depending on how bad the Landau–Siegel zero is, we get a sharper — a much sharper — inequality here: the left side will be small if there is a bad Landau–Siegel zero. Now, the terms on the left side are non-negative numbers; in particular, for primes, the coefficient is just 1 + χ(p) — that's what (1 ∗ χ)(p) is when we put it in. Now, whenever χ(p) = 1, it produces a 2 here, so 2/p; and the sum of 1/p over these primes would be of size log log, of the size of this — but in our case the total is much smaller: it is of size 1 over log^{999} D, as I said. So this means that, in some sense, the proportion of the primes for which χ(p) = 1 is much, much smaller: it is less than 1 over some power of log. So in almost all cases — for almost all primes, in the sense that the exceptional set has proportion, percentage, less than 1 over some power of log — we have χ(p) = −1. Now let's observe that the Möbius function at each prime is −1, and the Möbius function is multiplicative; our character is more than multiplicative, it is completely multiplicative. On the other hand, χ is not −1 at each prime, but it is −1 at most primes. Both of them being multiplicative, then for most squarefree numbers — numbers that are products of distinct primes — they will match: because χ(p) is −1 for most prime divisors p of n, χ(n) will look like the Möbius function.
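The extreme caricature of this matching can be simulated (my illustration, not from the talk): a completely multiplicative function equal to −1 at every prime is exactly the Liouville function λ(n) = (−1)^{Ω(n)}. It agrees with the Möbius function on all squarefree n, and the partial sums of λ(n)/n tend to 0 — mimicking, at s = 1, the very small value that a Landau–Siegel zero would force.

```python
# Liouville function versus Moebius function: they coincide on squarefree n,
# and sum_{n <= X} liouville(n)/n is small (the full series converges to 0).
X = 10 ** 5
Omega = [0] * (X + 1)            # number of prime factors with multiplicity
sign = [1] * (X + 1)             # (-1)^(number of distinct prime factors)
squarefree = [True] * (X + 1)
for p in range(2, X + 1):
    if Omega[p] == 0:            # no smaller prime divided p: p is prime
        for m in range(p, X + 1, p):
            t, k = m, 0
            while t % p == 0:
                t //= p
                k += 1
            Omega[m] += k
            sign[m] = -sign[m]
            if k > 1:
                squarefree[m] = False

liouville = [1 - 2 * (e & 1) for e in Omega]              # (-1)^Omega(n)
mu = [sign[n] if squarefree[n] else 0 for n in range(X + 1)]

agree = all(liouville[n] == mu[n] for n in range(1, X + 1) if squarefree[n])
partial = sum(liouville[n] / n for n in range(1, X + 1))
print(agree, partial)
```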
Yeah, so then L(s, χ) will look like 1/ζ(s). So we have these things. Now I want to report on some recent work, joint work with Hung Bui and Kyle Pratt, on the influence of, say, the hypothetical existence of Landau–Siegel zeros on some central values of L-functions — I'm going to look especially at Dirichlet L-functions, though there was work done before. So now, let's see; I'm going to do the following. I take two moduli: I have one modulus that's capital D, and I'm going to have a second modulus that's little q. Although one is big D and the other is little q, the little one is much larger for us: our little q is a prime that is going to be much larger than this capital D — like D^{300}, in that range. I'm saying these things in advance so that we have some intuition. Now, for capital D I have a single Dirichlet L-function — the Dirichlet L-function associated to a quadratic character mod D — and this L-function is assumed to have a Landau–Siegel zero; that's our assumption. So this is about capital D; it has nothing to do with this little q. But we want to show that this has influence somewhere far away — not too far, but far away — in the sense that it has influence on the central values of the Dirichlet L-functions mod q. I'll use ψ for these characters, and I look at all characters ψ modulo q. So I take a capital D, and I take a larger q, of size between D^{300} and D^{1000} — in that range I have lots of primes, and for each prime q I do something. What I do is, I look at all the Dirichlet L-functions associated to all these characters ψ — so ψ runs over all characters mod q — and in each case I compute L(1/2, ψ) at the central point. It is believed that this is not zero; in fact, it is believed that it is never zero — every single central value L(1/2, ψ) is expected to be non-zero. And the raw
unconditional results — they have nothing to do with Landau–Siegel zeros; all these results are genuine theorems (our theorem is also a true theorem, but let's see) — let me go quickly over them. Balasubramanian and Murty showed that a positive proportion of them — if you look at all the primitive characters mod this little q; a positive proportion, I will be more precise later — are non-zero. Iwaniec and Sarnak showed at least a third, about 33%. Bui introduced some refinements and showed that, as q increases, at least 34.11% of them are non-zero at the central point. And very recently — because that was from 2012 — very recently, in 2020, Khan, Milićević, and Ngo showed that at least 38.46% of the central values L(1/2, ψ) are non-zero; this is for q prime, while the previous results hold true for general q. What is known under the assumption of the Riemann hypothesis, under GRH: one can show that at least 50% of them are non-zero. So these are all unconditional results, except for the last one, which assumes the generalized Riemann hypothesis. Our result here — oh, OK, before I get to our result, assume something else: there are also unconditional results on non-vanishing of the derivatives of these functions at the point 1/2. If either the function or the derivative — if one of them doesn't vanish, then we can at least conclude that the function cannot have a zero of order larger than 1 there; and if we show that at least one of the first 3, 4, 5 derivatives doesn't vanish, then we know that the order of vanishing is at most 3, 4, 5. So in that case we can get larger percentages. Bui and Milinovich prove a lower bound on the proportion of ψ for which the k-th derivative is non-zero, and the lower bound is of the form 1 − c/k² — I mean, it is at least 1 − c/k² for an absolute constant c — and in particular the proportion goes to 1 as k goes to infinity. But here, of
course, first of all you fix a k and you let your little q go to infinity, and you prove a lower bound; the lower bound depends on k. And then, once you look at your bounds and at what happens with them as k goes to infinity, they show that this goes to 1: a higher and higher percentage of these derivatives won't vanish, or the order of vanishing stays bounded by k. For example, they show that more than 75.44% — more than three quarters — of them have at most a simple zero at s = 1/2; for at most a double zero they have a higher percentage, and so on and so on. So what we wanted to do was to look at the influence of our L(s, χ) on these values a bit farther away — because our capital D is much smaller than this little q — and to see whether these influences give us better percentages. Our expectation was to be able to do much better under the assumption of the existence of a Landau–Siegel zero: the percentages here would go to 1 much faster. But that's not interesting enough. What we wanted to see, really, was whether we can put our hand on a specific k — whether we can provide a k, say 5, such that the percentage is exactly 1 already when k is 5. And we got this result, and we found that we can already get 100% when k is 1. So we state the result. I don't like to state it in terms of percentages or anything; let me write something rigorous here — it is a statement like that. And in fact, you don't have to assume the existence of a Landau–Siegel zero — so it is an unconditional result, although it is not useful unless you really have a Landau–Siegel zero. It says the following. Fix a large constant C, bigger than 300 — I'm going to take C to be 1000 — and an ε; this is not the ε from Siegel's theorem; I just take ε to be 1/2, let me take ε to be 1/2, so this is fixed. Now I take any prime q between D^{300} and D^{1000}. For each prime q I do the following: I write it as a sum, but it's just counting — I count 1 each time
I meet a Dirichlet character ψ modulo q that is primitive — the star here just means that we go over the primitive ones — for which the value of the associated Dirichlet L-function at 1/2 is non-zero. We want to show that a high percentage of them are non-zero, so we want to show that this sum is as large as we can make it. We are not able to do everything, but we get something. Of course, we talk about a percentage, so I take the total number: I count those for which the value at the central point is non-zero, and then I divide by the total number of Dirichlet characters — I divide by φ(q) — and I get a number between 0 and 1, which we want to be as close to 1 as possible. But we don't get anything better than one half. We get one half plus — and this plus is really a minus, because we don't get one half, we don't get at least half of them, but almost half: we get one half minus something. And that something is a big O, meaning it is bounded by a certain constant times what I write inside; the actual constant depends on ε and on C, but for all practical purposes I take the fixed ε = 1/2 and C = 1000, so that's a constant — it will be an absolute constant. And it is a constant times what? First, 1 over the square root of log q — I write it like this for typing purposes. That's good, because when q increases this goes to 0, so it won't affect our percentage of one half. But the other one, you see, is log q to something large: I add my ε = 1/2 and it comes to 26/2, that is, (log q)^{13}. And I multiply this by L(1, χ) — because this χ is not mod q; it is our character to the other modulus, D. So finally, now we assume that we have a Landau–Siegel zero: if this L(1, χ) is not small, then we don't get any result — if it is just of size 1 and you multiply it by a power of log, you don't get anything. OK, but if we assume that we have a Landau–Siegel zero — a pretty bad one, but not too bad —
say, if we assume that L(1, χ) is less than 1 over a power of log — not log q; we make no assumption on q, we make the assumption on capital D, because χ was defined mod D. I don't need to assume anything like 1/√D; it is enough to assume that L(1, χ) is less than 1 over (log D)^{100}. You see, by this inequality here, we can see that the logs are of the same size: log q is between 300 log D and 1000 log D, so they are of the same size. So if our L(1, χ) is less than 1 over (log D)^{100}, this is going to kill entirely whatever log I have here — the (log q)^{13} — so we do get a non-trivial result. In some sense, this matches the proportion obtained under that other assumption, GRH; this result is just as strong as under GRH. But of course, GRH is something that we believe is true, while the existence of a Landau–Siegel zero is something that we believe is not true. Now, suppose we look at higher and higher derivatives computed at 1/2. As I said, it is enough to look at the first one, and then the percentage jumps directly from 50% to 100% already. So assume the same hypotheses as in the previous result — of course, this is again joint work, as I said, with Hung and Kyle. Then the number of characters ψ mod q such that L(s, ψ) has a multiple zero at the central point — a zero of order larger than or equal to 2 — is bounded by this. And there is a typo here: it is not the number of such characters divided by the total number; the count itself is bounded by φ(q) times this, and it is also bounded by q times it, so it is less than q over the square root of log q, plus the other term. So you see, it says, in this more precise sense, that almost all characters ψ are such that L(s, ψ) has at most a simple zero there. But we are not able to say, for this 100% of them, for almost all of them, what it is that makes the order of vanishing at most one: is it the function that is non-zero at the central point, or is it the derivative? What happens, from a technical
point of view, is that we look at a certain linear combination of the function and its derivative, and we can show that this linear combination, computed at the central point, is non-zero almost always — meaning the exceptional set is bounded by this. So that's what actually happens; but because we don't know whether it is the function or the derivative that is non-vanishing, we write it like this. Now, I see I only have one or two minutes. There is another family that we studied in a more recent paper: we look at newforms of weight 2, say, and prime level q. The associated L-functions are now self-dual, and they have a root number that is +1 or −1. These root numbers appear in the functional equation, and when they appear there, if the root number is −1, then of course it forces the function to vanish at the central point — and not only to vanish at that point: if the root number is −1, it forces the function to have an odd order of vanishing, because the completed L-function is an odd function around that point. And we look at what is called the analytic rank of f — the order of vanishing at the central point. Now, the analogue of the previous result, where we take higher and higher derivatives and show that a higher and higher percentage are non-vanishing, was done in this more difficult case by Kowalski, Michel, and VanderKam, and they showed that the percentage of f's for which the analytic rank is less than or equal to k goes to 1 as k goes to infinity. There are clear conjectures — we know what to conjecture; the conjectures of Brumer and Murty predict this very clearly. Basically — so I can finish in a minute — those that don't have to vanish kind of don't vanish: those f's that are even, or, we say, have sign +1 in the functional equation, are not forced to vanish at the central point, and the conjecture is that only a small percentage of them may vanish — percentage zero. They have
some clean conjectures, OK. And for those that do have to vanish, at least to order one, the conjecture is that, OK, most of them do vanish and the order of vanishing is exactly one. Then there is a deep result — and actually a very deep problem, much deeper than these percentages: Henryk Iwaniec and Peter Sarnak's work on the actual problem, because they pushed this to the limit. They showed that if you can improve on these percentages even a little bit — there's no time to talk about that, but if you improve this even a little bit — then you show that there are no Landau–Siegel zeros at all. That's a striking result. Let me just finish by stating our result in an imprecise form, like this. We assume now the existence of a Landau–Siegel zero, in some range, and see what we can say about the percentages. So if we assume a Landau–Siegel zero in some range: again we look at our capital D, and again we have a prime — the level is little q — and again the prime will be between D^{100} and D^{300}, or something like that. And again we show that the analytic rank is at most 1 if the root number ε_f is +1 — and, what is more subtle, regardless of whether ε_f is +1 or −1, in all cases the analytic rank is at most 2, in 100% of the cases. And what is this 100%? In all cases with the exception of something, and that something is small provided there is a Landau–Siegel zero, because that something is multiplied by L(1, χ). Now, because I went — I guess, by my computer — three minutes over time, let me jump over these things and just thank you for your attention.
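As a closing illustration (my own numerical sketch, not from the talk), the central values L(1/2, ψ) discussed above can be computed directly for a small prime modulus via Hurwitz zeta functions, using L(s, ψ) = q^{−s} Σ_a ψ(a) ζ(s, a/q); the modulus q = 31 and the numerical threshold are arbitrary choices of mine. One can then count how many central values are non-zero, consistent with the expectation that they never vanish.

```python
# Count the non-principal characters psi mod q = 31 with L(1/2, psi) != 0,
# computing L(1/2, psi) = q^{-1/2} * sum_a psi(a) * zeta(1/2, a/q).
import cmath, math

def hurwitz_zeta(s, x, M=80):
    """zeta(s, x) for real 0 < s < 1, via a short Euler-Maclaurin expansion."""
    total = sum((k + x) ** (-s) for k in range(M))
    N = M + x
    return (total + N ** (1 - s) / (s - 1) + 0.5 * N ** (-s)
            + s * N ** (-s - 1) / 12)

q = 31                                   # a small prime modulus

def primitive_root(q):
    for g in range(2, q):
        x, seen = 1, set()
        for _ in range(q - 1):
            x = x * g % q
            seen.add(x)
        if len(seen) == q - 1:
            return g

g = primitive_root(q)
dlog, x = {}, 1                          # discrete logs: dlog[g^k mod q] = k
for k in range(q - 1):
    dlog[x] = k
    x = x * g % q

def L_half(j):
    """L(1/2, psi_j) for the character psi_j(g^k) = exp(2 pi i j k/(q-1))."""
    val = 0
    for a in range(1, q):
        psi = cmath.exp(2j * math.pi * j * dlog[a] / (q - 1))
        val += psi * hurwitz_zeta(0.5, a / q)
    return val / math.sqrt(q)

nonzero = sum(1 for j in range(1, q - 1) if abs(L_half(j)) > 1e-6)
print(nonzero, "of", q - 2, "non-principal characters have L(1/2) != 0")
```

Since q is prime, every non-principal character mod q is primitive, so the count runs over exactly the family in the talk's theorems.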