Thank you, Mike. Thank you, Alina. Thank you, Philip, for the lovely invitation to be here with you this afternoon, or morning or evening for some people joining us. I apologize in advance, because I'm going to cough a little bit today. So, the topic I want to speak about this afternoon is some Hilbert space techniques and their relation to low-lying zeros of L-functions. This is going to be a talk at the interface of analysis and number theory. Mainly I'm going to draw motivation from number theory, but I'm going to dive a little deeper into some of the analysis aspects today, and my idea is essentially to convey one or two ideas that you might not be aware of and that could be used in these situations. I'll try to make it as light as possible. If you have questions, feel free to stop me at any time and ask, and I'll answer if I know. Okay, let's start. I'll start with a little bit of motivation for what I want to talk about. This is my co-author on this paper, Andrés Chirre, and let's imagine he arrives at a bar and meets a girl he likes, and he offers, "How about I buy you a beer?" Now, this is a girl who likes the Riemann zeta function and number theory very much, and she says, "Well, sure, but only if you can prove to me that, under the generalized Riemann hypothesis, at least 69% of the Dirichlet L-functions modulo a prime q do not vanish at the point 1/2 + πi/(4 log q)." And our hero is going to be lucky, because this is a problem he knows how to solve. So this is essentially the theme I want to talk about this afternoon: for families of L-functions, assuming the generalized Riemann hypothesis, we want to estimate the order of vanishing at the central point. This is a well-known problem; I'm by no means considering it for the first time, it is a well-studied problem.
I also want to estimate the order of vanishing at a given low height, and to consider the problem of estimating the height of the first zero in a family of L-functions. My main goal this afternoon is to provide a unified framework, via Hilbert spaces, to tackle these problems. This is all based on joint work with Andrés Chirre and Micah Milinovich; it's on the arXiv if somebody wants to check it out later. Okay, so, first part: non-vanishing of L-functions. Let's start with the most basic type. A Dirichlet character χ modulo q is a function which is periodic with period q, which is completely multiplicative, χ(mn) = χ(m)χ(n), and which is zero if the argument is not coprime with q. For each Dirichlet character you can construct the Dirichlet L-function, which to the right of the line Re(s) = 1 is the sum L(s, χ) = Σ χ(n) n^(−s). More generally, we will be talking about L-functions associated to a certain algebraic object f that gives me certain coefficients, which I call λ_f(n), forming a Dirichlet series. This is essentially what is going to appear in the next 40 minutes. Here is the first theorem I want to show: let q be a prime and assume the generalized Riemann hypothesis for Dirichlet L-functions modulo q. You fix a height t, and you sum, over all primitive characters modulo q, the order of vanishing of the Dirichlet L-function L(s, χ) at the point 1/2 + 2πit/log q. I want to sum, but I want to average, so I divide by q − 2, the number of primitive characters; I'm averaging over all the primitive L-functions.
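As a quick sketch in code (my toy example, not from the talk): the characters modulo a small prime such as q = 5 can be written down explicitly from a primitive root, and their complete multiplicativity checked directly. The choice of primitive root g = 2 is mine.

```python
import cmath

# Editor's illustration: the four Dirichlet characters mod q = 5,
# built from the primitive root g = 2 (2^1=2, 2^2=4, 2^3=3, 2^4=1 mod 5).
q = 5
g = 2
# discrete logarithm base 2 mod 5: ind[n] = k with 2^k congruent to n (mod 5)
ind = {pow(g, k, q): k for k in range(1, q)}

def chi(j, n):
    """j-th character mod 5: chi_j(n) = e^{2 pi i j ind(n)/(q-1)}, and 0 if gcd(n, 5) > 1."""
    if n % q == 0:
        return 0
    return cmath.exp(2j * cmath.pi * j * ind[n % q] / (q - 1))

# Complete multiplicativity: chi(mn) = chi(m) chi(n) for all m, n.
for j in range(q - 1):
    for m in range(1, 20):
        for n in range(1, 20):
            assert abs(chi(j, m * n) - chi(j, m) * chi(j, n)) < 1e-12
```

Periodicity with period 5 follows from reducing n mod q inside `chi`; the j = 0 case is the principal character.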
Anyway, the claim is that this average order of vanishing at the given height is no more than the number on the right, namely (1/2)(1 + sin(4πt)/(4πt))^(−1), plus an error term, and you see that this bound depends on t. (Here χ₀ denotes the principal character modulo q.) What this result says is that, for any ε > 0, the proportion of primitive Dirichlet characters modulo q for which the L-function is nonzero at this height is at least 1 minus that number on the right. At the central point 1/2, there is a known result of Murty from 1989 saying that the proportion of non-vanishing at the central point is at least 1/2 − ε. Now, when t goes to zero here, the right-hand side tends to 1/4, and the total proportion is 1 − 1/4 = 3/4. This does not contradict the result of Murty, in the sense that when you count the order of vanishing at the central point, you expect that half of the zeros nearby go up and half go down. Here is the plot of this function for small heights t. As I said, when t goes to zero you get 3/4 here. And, for example, when you plug in t = 1/8 you get the answer to the problem I posed at the beginning, that the L-function at the point 1/2 + πi/(4 log q) is nonzero: plugging t = 1/8 into this expression gives 69%. So 69% of the L-functions modulo q should not vanish at this particular low-lying height. You see from this graph that this proportion is always bigger than one half, but it has some bumps, so sometimes we can do better; in the limit, when the height is large, we get back to one half. [Audience:] So what's the average spacing of zeros? That's about 1/log q, right? Yeah.
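The two numerical values in the story can be checked directly. Here is a small sketch (my code; the formula is the bound just stated) evaluating the non-vanishing proportion 1 − (1/2)(1 + sin(4πt)/(4πt))^(−1).

```python
import math

# Editor's sketch: lower bound on the proportion of primitive characters mod q
# with L(1/2 + 2*pi*i*t/log q, chi) != 0, under GRH, as a function of t.
def nonvanishing_lower_bound(t):
    if t == 0:
        s = 1.0  # limit of sin(x)/x as x -> 0
    else:
        s = math.sin(4 * math.pi * t) / (4 * math.pi * t)
    return 1 - 0.5 / (1 + s)

print(nonvanishing_lower_bound(0))    # 0.75: three quarters at the central point
print(nonvanishing_lower_bound(1/8))  # ~0.694: the "69%" from the bar story
```

At t = 1/8 the height 2πt/log q is exactly the π/(4 log q) from the opening problem.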
So it's about one quarter of the average spacing, yes. Okay. So let me describe my setup. I'm going to be a little vague here, partly because many people in the audience know these things much better than I do, and I want to focus more on the analytic aspects that come later. For me, F will be a family of certain algebraic objects. For each object f in the family I can associate an L-function L(s, f) with coefficients λ_f(n), that is, L(s, f) = Σ λ_f(n) n^(−s). I'm going to make the usual assumptions on these L-functions: that L(s, f) admits an analytic continuation to an entire function, and that the generalized Riemann hypothesis holds for the family. We denote the zeros ρ_f = 1/2 + iγ_f, so the γ_f are the ordinates of the zeros. We assume the L-function satisfies a functional equation, after multiplying by some gamma factors which I call L_∞; the functional equation relates s and f to 1 − s and f̄. These L-functions come in dual pairs: associated to L(s, f) there is L(s, f̄), the dual L-function, whose coefficients are the complex conjugates of the coefficients of the original L-function. I'm going to assume that the gamma factors have no zeros on the critical line, so that all the zeros there really come from the L-function, and of course that if f is in the family then f̄ is in the family. The simplest examples are the Dirichlet L-functions modulo q, but there are many other examples of L-functions that fit into this framework.
So, let me talk about the one-level density philosophy introduced by Katz and Sarnak. Generally, when you study the statistics of the zeros high up on the critical line, they behave like the Gaussian unitary ensemble. But if you're interested in the statistics of the low-lying zeros, then Katz and Sarnak, in their work in the 90s, suggested that to each natural family of L-functions there is an associated symmetry group, which is one of five types, classical subgroups of the unitary matrices, such that the low-lying zeros of the family of L-functions should behave like the eigenvalues of these matrices. They divided it into five possibilities: the unitary symmetry U, the symplectic Sp, the orthogonal O, the even orthogonal SO(even), and the odd orthogonal SO(odd). I'll explain in a minute what I mean by this. Throughout this talk what I want to do is consider averages over the L-functions in a family, ordering things by the conductor; the analytic conductor, in the case of Dirichlet L-functions, is just the modulus. Throughout the talk, F(q) will denote one of two finite sets: either the set of f in the family whose conductor equals q, or those with conductor at most q. You can average in many different ways; these are just two possibilities, and the method is robust enough to handle even weighted averages between q and 2q, whatever you want, so let's just keep the simple setup. What Katz and Sarnak suggest is that if you give me a smooth function φ whose Fourier transform has compact support, then, conjecturally, when you sum φ over the zeros...
...well, you have to do the normalization, so you sum φ(γ_f log c(f) / 2π) over the zeros, you sum over the f with conductor at most q, you average by dividing by the number of elements in the set, and you send the conductor to infinity. This should equal the integral of φ against a certain density function. The suggestion is that the statistical behavior of the low-lying zeros, the ones that contribute most to the sum, is governed by a density function associated to the symmetry group of the family, one of those five symmetry groups. For these five symmetry groups, Katz and Sarnak determined the density functions. For the unitary ensemble the density is just the Lebesgue measure, W(x) = 1; for the symplectic family it's 1 − sin(2πx)/(2πx); for the orthogonal it's 1 plus one half of the Dirac delta at the origin; for SO(even) it's 1 + sin(2πx)/(2πx); and for SO(odd) it's 1 − sin(2πx)/(2πx) plus the Dirac delta at the origin. So you have five measures here. Now, this is a conjecture; what people prove in the literature are results of this sort for different families of L-functions, for test functions φ whose Fourier transform has compact support up to a certain size. The classical reference in the literature is the paper of Iwaniec, Luo, and Sarnak, which has many examples of this phenomenon. In many examples in the literature, this one-level density identity has been proved as long as the support of the Fourier transform of the test function φ is not very big: it's contained in a certain small interval (−δ, δ). So, for example, for the unitary symmetry...
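The five densities just listed can be written down in a few lines. This is my sketch (the point masses at the origin are recorded only in comments, since they are not functions):

```python
import math

# Editor's sketch of the five Katz-Sarnak densities (continuous parts only;
# O additionally carries the point mass (1/2)*delta_0, and SO(odd) carries delta_0).
def sinc2pi(x):
    return 1.0 if x == 0 else math.sin(2 * math.pi * x) / (2 * math.pi * x)

W = {
    "U":       lambda x: 1.0,
    "Sp":      lambda x: 1.0 - sinc2pi(x),
    "O":       lambda x: 1.0,                # plus (1/2)*delta_0 at the origin
    "SO_even": lambda x: 1.0 + sinc2pi(x),
    "SO_odd":  lambda x: 1.0 - sinc2pi(x),   # plus delta_0 at the origin
}

# Away from the origin the five densities are comparable to each other
# (this comparability is used later, in the uncertainty-principle argument).
for x in [1.25, 1.5, 10.0]:
    assert all(W[g](x) > 0 for g in W)
```

Note that Sp and the continuous part of SO(odd) coincide; the families differ by the delta mass at 0.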
...it follows from the work of Hughes and Rudnick that, for primitive Dirichlet L-functions, the one-level density relation holds as long as the support of φ̂ is contained in (−2, 2). If you do more averaging, you can increase the support of the Fourier transform a little, if you want. Examples of families of L-functions with orthogonal symmetry can be seen in the work of Iwaniec, Luo, and Sarnak: L-functions associated to holomorphic newforms, in weight and level aspect; depending on how you average, you get orthogonal, even orthogonal, or odd orthogonal symmetry, with different sizes of the support δ. One example to keep in mind for symplectic symmetry is the family of quadratic Dirichlet L-functions: the work of Özlük and Snyder from 1999 says that a relation like that holds with δ = 2, that is, for the support of the Fourier transform contained in (−2, 2). So far I'm just giving you the motivation for the problems I want to introduce. Here is the question. Assume that somebody gives you this identity: you have a family of L-functions, and you have successfully proved that when you sum this function φ over the normalized zeros, sum over the L-functions of conductor up to q, average by dividing by the number of elements you're summing, and take the limit as the conductor goes to infinity, you get a nice integral of your test function φ against a given weight that you know. Assume that you have this relation for your favorite family of L-functions, proved for smooth test functions φ with the support of the Fourier transform contained in an interval (−δ, δ). Some papers in the literature prove this relation with δ = 1, some with δ = 4/3, some with δ = 3/2, some with δ = 2; most of the results in the literature go up to two.
So, assume you have this for a certain δ. What can you say about the order of vanishing at a given height? The problem I have in mind is this: assuming GRH, so all the zeros are aligned on the critical line, but assuming nothing about the vertical distribution of the zeros, I want to extract from this formula the best possible information about the order of vanishing at a given height. Start at the central point. What's the idea? Let φ be a smooth function with the support of the Fourier transform contained in (−δ, δ); I can take any such function to plug into my formula. Let me take one which is nonnegative and such that φ(0) ≥ 1. Now, if you want to count the order of vanishing of L(s, f) at the central point 1/2, and you sum these orders of vanishing over all the f in the family and average, this is less than or equal to what you get if you replace each order of vanishing by the sum of φ(γ_f) over the zeros. Why is that? Because at the central point γ_f = 0, so every time you have a zero at the central point you are adding φ(0), which is at least 1; and you might be adding something else, because your function φ is nonnegative. So this is certainly an upper bound. Now you use your identity, because you know the one-level density theorem for this function: the lim sup as q goes to infinity of that right-hand side equals the integral of φ(x) against the measure. So you get an upper bound for the average order of vanishing of your family at the central point, and all you have to do is minimize the right-hand side.
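To make the bound concrete, here is my sketch with one standard admissible test function (my choice, not the extremal one from the talk): the Fejér kernel φ(x) = (sin(πδx)/(πδx))², which is nonnegative, has φ(0) = 1, and has Fourier transform (a triangle) supported in [−δ, δ]. Integrating it against the unitary density W_U = 1 gives exactly 1/δ, so with δ = 2 the average order of vanishing is at most 1/2.

```python
import math

# Editor's sketch: an admissible test function for the central-point bound.
def fejer(delta, x):
    """phi(x) = (sin(pi*delta*x)/(pi*delta*x))^2: nonnegative, phi(0) = 1,
    Fourier transform supported in [-delta, delta]."""
    if x == 0.0:
        return 1.0
    return (math.sin(math.pi * delta * x) / (math.pi * delta * x)) ** 2

delta = 2.0
# Riemann-sum the integral of phi against W_U(x) = 1; the exact value is 1/delta.
h, R = 1e-3, 400.0
n = int(2 * R / h)
approx = sum(fejer(delta, -R + i * h) for i in range(n + 1)) * h
print(approx)  # close to 1/delta = 0.5
```

The extremal function for this problem does better for the non-unitary densities; the Fejér kernel is only a convenient first try.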
So, choosing a function which is nonnegative with φ(0) ≥ 1, your analysis problem is to minimize the integral of φ against one of five known measures, depending on your family. This is what you do at the central point. At any given height t you do something slightly similar: you now take φ smooth with the support of the Fourier transform in (−δ, δ), nonnegative, and such that at the points t and −t the function is at least 1. Then you do roughly the same computation, but you start with a factor of 2, and you sum the order of vanishing of L at the height t, at the point 1/2 + 2πit/log c(f). Why did I put the 2? Because we are assuming that the L-functions come in dual pairs: the order of vanishing of L(1/2 + 2πit/log c, f) equals the order of vanishing of L(1/2 − 2πit/log c, f̄) for the dual function. So I rewrite one copy using the dual function; and since I'm summing over all f in the family with conductor q, and the dual also has the same conductor, I can replace f̄ by f again. Now look at the last line: every time you have a zero at the point 1/2 + 2πit/log c, this function φ picks up at least 1 at t, and likewise at −t. So twice the order of vanishing loses to the sum of φ over the zeros; and since φ is nonnegative, you might be adding something else. This is definitely an upper bound for the quantity you started with.
Now you put yourself in a position to use the relation, because the one-level density theorem tells you this equals the integral of φ against the weight. So you have an upper bound for the average order of vanishing of the L-function at the point 1/2 + 2πit/log c(f), the normalized height t, and all you have to do is again minimize the right-hand side to get the best possible upper bound. Okay. So the analysis problems coming from these number theory motivations are the following. Consider the class of functions, call it A(2πδ, t): functions φ which are, say, integrable and continuous, with the support of the Fourier transform contained in the interval [−δ, δ], nonnegative, and with φ(t), φ(−t) ≥ 1. I have two sorts of problems. When t = 0, which I call the one-delta extremal problem, I just have the condition φ(0) ≥ 1: for G one of the five symmetry groups, with its associated measure, you want to find the infimum of the integral of φ(x) against this measure over this class of functions. In the two-delta extremal problem you really have a height t and −t, and for fixed δ and fixed t you want the infimum of this integral over all functions in this class. So these are the analysis problems: you want to minimize a certain nonnegative integral over a certain class of functions. This problem is already classical: it was solved, for δ = 2, in the very nice appendix of the paper of Iwaniec, Luo, and Sarnak. This was later generalized by Freeman and Miller for small δ, and their technique is via Fredholm operators, with a little bit of functional analysis and spectral theory going on there.
Ours is a slightly different framework that solves essentially the two problems at the same time. Our main theorem in this paper gives precise answers to these analysis problems: we find the infimum of each of these objects, which is actually a minimum, we find the extremal functions, the best possible answer. Now, I want to explain what the answer is and how it's done, so that you can use it as a black box if you want: if you ever prove this sort of one-level density theorem, you can just take our result, use it as a black box, and get an upper bound for the vanishing and non-vanishing of zeros of your L-functions. For example, here is the plot of the extremal function at height t = 1/4. This is a function that has to be at least 1 at 1/4 and at −1/4, and you see it has one bump. But if t gets larger, for example t = 2, you see that the function now wants to have two bumps, with value 1 at −2 and at 2. So this is the solution of the two-delta extremal problem. Now let me show you the plots of the non-vanishing proportions we get for each of the families, so you can have an idea. Here δ is the size of the support of the Fourier transform of your test function; the larger δ is, the more test functions you have available, and the better the result you get. So here, for example, when δ = 2 you get the orange graph; this is the proportion of non-vanishing. Say 58% of the L-functions at the height 0.35 here do not vanish; this is for the even orthogonal family. Again, when δ = 2, when the size of the support is 2, 58% of the L-functions at this particular height, with the maximum around 0.62, do not vanish. For the odd orthogonal family...
...again, when δ = 2, you get 71% of the L-functions not vanishing at this particular height, around 0.35. But these heights are always normalized heights: the height is not t itself; the point 0.35 corresponds to the height 2πt/log of the conductor. The symplectic family is the most dramatic one, because when you get close to the origin, if the support is large enough, say δ = 2, you get 94% of your L-functions not vanishing at the low-lying height. All of these pictures are in the paper if someone wants to take a look at them later. Okay, let me now comment a little on the solution of these analysis problems, and on some Hilbert spaces that appear naturally connected to them; they might also appear in other instances in number theory, so I think they may be of independent interest. There are essentially three themes involved in the solution of this sort of extremal optimization problem. The first theme is the so-called Paley–Wiener spaces. The famous Paley–Wiener theorem in harmonic analysis is a theorem that relates functions whose Fourier transform is compactly supported to entire functions of exponential type: these are the same thing. The theorem says that if you give me a function which is square-integrable, in L², then the support of the Fourier transform is contained in a compact interval [−δ/2, δ/2] if and only if the function f extends to an entire function that grows at most like e^(πδ|z|), meaning it has exponential type πδ. We call these band-limited functions, functions whose Fourier transform is compactly supported; so band-limited functions are the same as entire functions of exponential type. Here we can construct a very nice Hilbert space. So let me fix a δ.
And let me call H(πδ) the Hilbert space of entire functions of exponential type at most πδ, where the norm is just the usual L² norm of the function on the real line. It's a bit counterintuitive at first: you have an entire function, but you're saying the norm is just the L² norm on the real line. Yes, I'm doing exactly that. If you have never seen this, it's kind of magical. Okay, functions of exponential type πδ are functions with that sort of growth, so you can argue piece by piece. This is certainly a vector space: if I add two functions of exponential type πδ, the sum still has exponential type πδ, and I can multiply by scalars. And this is a norm, even though we're talking about entire functions: it satisfies the triangle inequality, and if the norm is zero, the function vanishes on the real line, but since the function is entire, it must be identically zero. So this is a normed vector space. And why is it a Hilbert space? Well, you use the Paley–Wiener theorem to show completeness. If you have a Cauchy sequence in this norm, it converges to an L² function; all of these functions have compactly supported Fourier transforms, and the L² norm is the same on the Fourier side, where you're in L² of a closed interval, which is complete. So the sequence converges to a function, and then you go back through the Paley–Wiener theorem, which is an if-and-only-if statement, to see the limit is again in the space. So this helps you prove that this is a Hilbert space. What we have here is one of the insights of this work.
I believe it is this: for each of these five measures, you can construct an associated Hilbert space. Let G be one of the five symmetry groups, and let H_G(πδ) be the normed vector space of entire functions of exponential type at most πδ with the following norm: the norm of F is the L² norm of F on the real line with respect to the measure. The measure is not the Lebesgue measure anymore; it's the density function times dx. Note that I wrote normed vector space; I didn't write Hilbert space, because I don't know yet that it's a Hilbert space. It's a normed vector space pretty much as before. If I sum two functions, I stay in the same space, so it's a vector space. This is a norm, because if it's zero, my function vanishes on the real line and, being entire, has to be identically zero. But why is this a Hilbert space? Well, here you have to think a little, and you actually realize that these five normed vector spaces are exactly the same as sets, just with five different norms. This is proved via the so-called Fourier uncertainty principle, which you might have heard of before, which essentially says that you cannot have full control of a function and its Fourier transform at the same time. What's happening here: all my functions have exponential type πδ, which means their Fourier transforms are compactly supported; the Fourier uncertainty principle says that the function itself then cannot be concentrated, it has to have a considerable portion of its mass, say 90%, outside the unit interval. And outside the unit interval these measures are comparable, because, if you remember the previous slide, they were just 1, or 1 ± sin(2πx)/(2πx), or one plus a Dirac delta at the origin.
The complication happens at the origin, but away from the origin they are all comparable, from above and below. So the mass of these functions is 90% outside the interval of radius one, where the measures are comparable, by the Fourier uncertainty principle. This means that each of these five norms is finite if and only if the others are, for the same set of functions, and you can prove that the norms are equivalent. So you end up proving that you have five Hilbert spaces which are the same as sets, with five different but equivalent norms. In particular, the norms are all equivalent to the original L² norm, and in that norm you already know you have a Hilbert space; so all five normed vector spaces are Hilbert spaces. That's the first theme. The second theme is the so-called Riesz–Fejér factorizations, which show up in many situations in analysis: you have a certain nonnegative object, and you want to say that your nonnegative object is a square. For example, if a polynomial with real coefficients is nonnegative on ℝ, then this polynomial is the squared modulus of another polynomial of half the degree. You have such results for trigonometric polynomials too: if a trigonometric polynomial is nonnegative on ℝ/ℤ, then it's the squared modulus of a trigonometric polynomial of half the degree. The same sort of theorem holds for entire functions of exponential type; this is the theorem of Riesz and Fejér. If an entire function φ of exponential type 2πδ is nonnegative on ℝ, then φ is a square: φ(z) = F(z) conj(F(conj(z))) for some F of half the exponential type. So every function of exponential type 2πδ which is nonnegative on ℝ is |F|² on the real line, for a function F of half the exponential type.
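The polynomial version of this factorization is easy to see numerically. Here is my toy illustration (not from the talk): p(x) = x⁴ + 1 is nonnegative on ℝ, its roots come in conjugate pairs, and collecting one root from each pair gives a q of half the degree with p = |q|² on the real line.

```python
import cmath

# Editor's toy example of the Riesz-Fejer idea for polynomials:
# p(x) = x^4 + 1 >= 0 on R, and p(x) = |q(x)|^2 for real x, where q
# collects the two roots of p lying in the upper half-plane.
r1 = cmath.exp(1j * cmath.pi / 4)   # e^{i pi/4}
r2 = cmath.exp(3j * cmath.pi / 4)   # e^{3i pi/4}

def q(x):
    return (x - r1) * (x - r2)

# For real x, (x - r)(x - conj(r)) = |x - r|^2, root by root.
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    p = x ** 4 + 1
    assert abs(abs(q(x)) ** 2 - p) < 1e-9
```

The exponential-type version used in the talk works the same way in spirit, pairing zeros of φ across the real axis.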
That's the second theme we need. The third theme, which is actually the most interesting part, is the so-called reproducing kernels: some Hilbert spaces are reproducing kernel Hilbert spaces, and I'll explain what this means in a minute. Take a function F in the Paley–Wiener space that we defined to be H(πδ). If you fix a complex number w and evaluate the function at w, then by Fourier inversion F(w) equals the integral of F̂(t) e^(2πitw) dt; remember the support of the Fourier transform is contained in the interval [−δ/2, δ/2]. Then you can apply Cauchy–Schwarz: this is at most the L² norm of F̂, which is the L² norm of F, which is the norm of F in your Hilbert space as you defined it, times the L² norm of the exponential over the interval, which is bounded. It may depend on w, but it's just a number, at most e^(π|w|) times δ^(1/2), the size of the interval. What this means is that if you fix a w, every time you evaluate a function F at the point w, |F(w)| loses to a certain constant depending on w times the norm of F in the space. This means the evaluation functionals are bounded on the space: from a functional analysis point of view, the map that sends F to the complex number F(w) is a continuous linear functional. Now you go back to your functional analysis class and invoke the Riesz representation theorem: in a Hilbert space, the bounded linear functionals are exactly those given by an inner product with another element of the space. So for each w there exists a function in this space, K_G(πδ; w, ·), such that the evaluation functional is given by the inner product of F against this function: F(w) = ⟨F, K(w, ·)⟩, which is the integral of F(x) times the conjugate of K(w, x) against the measure.
This is the inner product on the space, right? So there is this magic function K of two variables: for each w there is K(w, ·), a function in the space in the second variable. This is what we call the reproducing kernel. Okay, am I missing something? A question, go ahead. [Audience: In the second line of the first display, shouldn't that be πδ rather than π?] Here? [Audience: Yes. And the integral is over t, not over w.] The integral is over t, yes. Yes, but you're estimating this... oh, yeah, I'm missing something. t goes from −δ/2 to δ/2, you're right, you're right. So I should have put πδ there; sorry, there's a typo here, it should be e^(πδ|w|). Thank you. But it's just a constant that depends on w, as you understand. Okay, thank you. So, for each of these spaces you have these reproducing kernels: a function of two variables such that, magically, when you want to evaluate a function F at the point w, you just take the inner product of F against the reproducing kernel with first variable w. And the answers to the problems I posed at the beginning have a very simple form in terms of these reproducing kernels for these spaces. For example, take the one-delta problem in the class of functions which are continuous, integrable, with Fourier transform supported in [−δ, δ], nonnegative, and with φ(0) ≥ 1. If you want to minimize the integral, what you do is first apply the Riesz–Fejér theorem: a nonnegative function is the squared modulus of a function F of half the exponential type, so this F belongs to the space that we just called the Hilbert space H_G of type πδ, with Fourier transform supported in [−δ/2, δ/2]. And then you just proceed: φ(0) has to be at least 1.
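For the classical unweighted case the kernel is completely explicit, and the reproducing identity can be checked numerically. This is my sketch (my normalization, not from the talk): for Fourier transforms supported in [−δ/2, δ/2], the kernel at real w is K(w, x) = sin(πδ(x − w))/(π(x − w)), and ⟨F, K(w, ·)⟩ = F(w).

```python
import math

delta = 2.0

def K(w, x):
    """Editor's sketch: reproducing kernel of the Paley-Wiener space of type
    pi*delta (Fourier transforms supported in [-delta/2, delta/2]), real w, x."""
    if abs(x - w) < 1e-12:
        return delta  # the limit sin(pi*delta*u)/(pi*u) -> delta as u -> 0
    return math.sin(math.pi * delta * (x - w)) / (math.pi * (x - w))

def F(x):
    # A function in the space: F = K(0, .) itself, a sinc of type pi*delta.
    return K(0.0, x)

# Check F(w) = integral of F(x) * K(w, x) dx, numerically, at w = 0.3.
w, h, R = 0.3, 1e-3, 300.0
n = int(2 * R / h)
inner = sum(F(-R + i * h) * K(w, -R + i * h) for i in range(n + 1)) * h
print(inner, F(w))  # the two numbers agree to a few decimal places
```

For the four non-unitary measures the kernels are more involved; the point of the sketch is only the mechanism.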
φ(0) is |F(0)|² ≥ 1, and F(0) is just the inner product of F with the reproducing kernel K(0, ·). You apply Cauchy-Schwarz here: this is at most the norm of F in the space squared times the inner product of K(0, ·) with K(0, ·). But remember, when you take the inner product of any function with the reproducing kernel, you get that function evaluated at the point; so the inner product of K(0, ·) with itself is K(0, 0). And the L² norm of F is exactly the object you want to minimize: since φ = |F|², the integral of φ is just the norm of F squared in this Hilbert space. You move the K(0, 0) to the other side, and you get the answer: the object you want to minimize is at least 1/K(0, 0). And equality can happen: since you only used Cauchy-Schwarz, equality holds when F is a multiple of the reproducing kernel. So that is the answer to the one-delta problem, using the reproducing kernel structure. The two-delta problem has a similar approach; the idea is relatively simple as well. Again you start with a space of test functions φ: continuous, Fourier transform supported in [−δ, δ], non-negative, and now at the two points t and −t you want φ to be at least 1, and you want to minimize the integral. This is the two-delta problem. Let me just quote a general lemma that you can prove about the geometry of a Hilbert space: suppose you are given a Hilbert space and two vectors v₁ and v₂, say of the same norm.
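To make the one-delta answer concrete, here is a small numerical check (my own sketch, not from the talk) in the Paley-Wiener case, where K(0, 0) = δ: the extremizer φ = (K(0, x)/K(0, 0))² satisfies φ(0) = 1, and its integral is 1/K(0, 0) = 1/δ:

```python
import numpy as np

delta = 1.0  # test functions phi have Fourier transform supported in [-delta, delta]

def K0(x):
    # Paley-Wiener reproducing kernel K(0, x) = sin(pi*delta*x)/(pi*x); K(0, 0) = delta.
    return delta * np.sinc(delta * x)

# The extremizer should be a multiple of the kernel: phi = (K(0, x)/K(0, 0))^2.
x = np.linspace(-400.0, 400.0, 800001)
dx = x[1] - x[0]
phi = (K0(x) / K0(0.0)) ** 2

value_at_zero = phi[len(x) // 2]  # the grid midpoint is x = 0
integral = np.sum(phi) * dx       # Riemann sum on a long interval

print(value_at_zero, integral)    # phi(0) = 1, integral close to 1/K(0,0) = 1/delta
```

For δ = 1 the integral comes out close to 1, matching the lower bound 1/K(0, 0).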
And I want to consider the set of elements x in the Hilbert space whose inner product with v₁ is at least 1 in modulus, and whose inner product with v₂ is at least 1 in modulus. The minimal norm of x over this set is something you can calculate in terms of the two vectors v₁ and v₂. So this is just a little geometric lemma in a Hilbert space: given two vectors, the set of points whose inner products with both vectors are at least 1 in modulus has a minimal element, and you can prove that the minimum has this value here. And this sort of geometric lemma is exactly what applies to our situation. You start the solution again: a non-negative test function φ in this space is a square, φ = |F|². Then the claim that φ at the points t and −t is at least 1 says, remembering that F(t) is just the inner product of F with the reproducing kernel K(t, ·), that the inner product of F with K(t, ·) is at least 1 in modulus, and the inner product of F with K(−t, ·) is at least 1 in modulus. So the vectors v₁ and v₂ in the lemma are just these two reproducing kernels. The lemma then pretty much gives us that the minimal norm squared is 2 over the norm of the first vector squared, which is K(t, t), plus the inner product of one with the other in modulus, which is K(t, −t). And equality can happen here as well. So you have a perfectly nice solution for these two problems in terms of the reproducing kernels; of course, when t = 0 you get back the answer to the previous problem, 1/K(0, 0). So you have the solution of all five problems, for these five measures, in terms of the reproducing kernels.
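In symbols, the lemma and its application read roughly as follows; this is my reconstruction from the spoken description, so treat the exact display as a sketch rather than a quotation of the slide:

```latex
% Geometric lemma: in a Hilbert space H, for vectors v_1, v_2 with \|v_1\| = \|v_2\|,
\min \left\{ \|x\|^2 : |\langle x, v_1\rangle| \ge 1,\ |\langle x, v_2\rangle| \ge 1 \right\}
  = \frac{2}{\|v_1\|^2 + |\langle v_1, v_2\rangle|}.

% Application with v_1 = K(t,\cdot), v_2 = K(-t,\cdot), using \langle F, K(w,\cdot)\rangle = F(w):
\min \int \varphi = \min \|F\|^2 = \frac{2}{K(t,t) + |K(t,-t)|},
% which reduces to 1/K(0,0) at t = 0.
```

Two sanity checks: with v₁ = v₂ = v the formula gives 1/‖v‖², the one-constraint answer, and with v₁ ⊥ v₂ it gives 2/‖v‖², as expected for two independent constraints.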
Now you may say: Emanuel, you haven't done anything if you don't show me what these reproducing kernels are. And I agree, you're absolutely right; so far I have done the soft analysis part. There is a catch: the hard part of these problems is actually to compute the reproducing kernels. This is what we did in this paper: we computed the five reproducing kernels for these five spaces, in two situations for any δ, and in three more complicated situations for δ between 0 and 2; you will see that it gets very complicated when the support gets bigger. As I mentioned, the difficulty is finding these reproducing kernels, and I am just going to wave my hands a little bit here: essentially, you plug the kernel into the equation you want to solve, differentiate a few times, and you get a certain N by N system of equations to solve at the end, where N is essentially of the order of δ. The bigger δ, the bigger the system of equations you have to solve. It is a linear system, but the coefficients are functions, so it gets a little complicated from the technical point of view; it is computational, though, and for large δ you could use computers to systematize it. So, here is just an idea of the reproducing kernels. For the unitary symmetry, this is the theorem of Paley and Wiener, and the kernel is well known: sin(πδ(z − w̄)) / (π(z − w̄)). This is classical and, of course, not discovered by us; it is due to Paley and Wiener. And for the orthogonal symmetry, for any δ, this is the reproducing kernel.
So you see, if you want the answers for the one-delta problem and the two-delta problems, all you have to do is plug in 0, 0 or t, t here and plot the graph; this is how we plotted the graphs I showed you at the beginning. For the other three symmetries it gets more complicated. Here is a description for the symplectic symmetry: when δ is between 0 and 1, this is the reproducing kernel, and for δ between 1 and 2 you see that it starts to get much more complicated. You have to define constants that depend on δ and auxiliary functions that depend on w, and the reproducing kernel becomes a big expression in these auxiliary functions of w and these little constants depending on δ. And it is unavoidable that it is complicated: for those five measures that I presented at the beginning, once you pass 1 the Fourier transform has a discontinuity, and things really do get complicated. Okay, so this was all for the first part. I'll just comment a little on the lowest zero problem, because it is a complementary problem; it is slightly different. Let me give you an overview in five minutes. The complementary problem is the following: I now want to find the minimal height of a zero. If you take all the L-functions in a family with conductor q, and you ask for the minimum height of a zero, normalized appropriately, I want to compute this minimum as q goes to infinity. I want to find upper bounds for it: I want to say, for example, that the minimal zero is at height no more than 0.25 of the average spacing, something like this. This problem was considered by Hughes and Rudnick in 2003 for the case of Dirichlet L-functions, and the idea is interesting as well: the idea is to consider a test function φ of the following form.
Fix a, and consider a test function φ(x) = (x² − a²) G(x), with G even and non-negative and with the support of Ĝ contained in [−δ, δ]. Now do the one-level density sum for this function φ. Below the height a is where this function φ is negative; from a onwards, φ is non-negative. So if the sum comes out negative, we can conclude that there must be a zero before the height a: if all the zeros were after the height a, the object on the right-hand side would be non-negative, it could not be below zero. So as long as the right-hand side is negative, you can certainly conclude that the lowest zero has to be below the height a. And the right-hand side is where you use your one-level density estimate to say that the sum equals the integral of this test function φ against the measure. The test function φ was of the form (x² − a²) G(x), and remember that G, being a non-negative function, is a square, G = |F|². So if I plug in G = |F|² here and rewrite things, moving the a² G term to the left-hand side and dividing, the condition is equivalent to saying that a² has to be bigger than the integral of x² G against the measure divided by the integral of G against the measure. And with G a square, this is just the L² norm of xF squared divided by the norm of F squared in this Hilbert space. So you get a very interesting problem in a Hilbert space as well: over F in your Hilbert space, you want to minimize the norm of xF, which is another function in the same Hilbert space, divided by the norm of F. Remember, these Hilbert spaces are Hilbert spaces of entire functions: you have a function F(z) of fixed exponential type.
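The rearrangement just described can be written out in one line; this is my transcription of the step, with dμ denoting the limiting one-level density measure and ‖·‖ the norm of the Hilbert space in question:

```latex
% If the one-level density sum is negative, there must be a zero of height < a, since
% \varphi(x) = (x^2 - a^2) G(x) \ge 0 for |x| \ge a. Passing to the limit measure d\mu:
\int (x^2 - a^2)\, G(x)\, d\mu(x) < 0
\iff
a^2 > \frac{\int x^2\, G(x)\, d\mu(x)}{\int G(x)\, d\mu(x)}
     = \frac{\| x F \|^2}{\| F \|^2},
\qquad G = |F|^2 .
```

Minimizing the right-hand quotient over F then gives the best admissible a, which is the Hilbert space problem described next.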
If you multiply by z you don't change the exponential type, so zF is a function in the space too, and you want to find the minimal value of that right-hand side. If zF does not belong to the space, the quotient is infinity, but you're not interested in that; you're interested in the instances where zF belongs to the space, since you want to minimize. And this also has a nice solution in terms of the reproducing kernels that we found: the solution of this extremal problem is the smallest positive real zero of the function here on the slide, the real part of (1 − ix) K(i, x). To explain this would take me a little more time, but I just want to show you the graph that you get for the height of the lowest zero. For example, with support 2, this is what Hughes and Rudnick proved: you have a low-lying zero at height at most one quarter of the average spacing. But if you have the SO(even) symmetry, for example, you get 0.21, and if you have the symplectic symmetry, you get 0.38 times the average spacing of the zeros. The solution of this problem comes from the theory of de Branges spaces of entire functions. I don't have too much time to explain what it is about, but in a nutshell, de Branges spaces are spaces of entire functions constructed as follows. You are given a so-called Hermite-Biehler function, a function E such that |E(z)| beats |E(z̄)| for z in the upper half-plane, and the de Branges space is the space of entire functions with a certain norm on the real line, bounded, and satisfying some technical conditions: F/E and F*/E, where F*(z) is the conjugate of F(z̄), have bounded type and non-positive mean type in the upper half-plane.
I just want to say that de Branges spaces are the prototype of these reproducing kernel Hilbert spaces of entire functions. In each such space, write the Hermite-Biehler function E as A(z) − iB(z), the combination of two real entire functions A and B; then the space is a reproducing kernel Hilbert space, and the reproducing kernel is given by this expression. The Paley-Wiener space occurs when you take the function E(z) to be the usual e^{−πiz} (or e^{πiz}, depending on the sign convention). So these spaces are a generalization of the Paley-Wiener space. Whenever you have a de Branges space, it is a reproducing kernel Hilbert space, and you have a reproducing kernel given in terms of the Hermite-Biehler function E. But you can also go the other way around: if you start with a Hilbert space that you have proved is a reproducing kernel Hilbert space, and you have found the reproducing kernel K, sometimes you can reverse-engineer and prove that this Hilbert space is in fact a de Branges space; you find the function E in terms of K, as opposed to finding K in terms of E. You can go both ways, and this is what we used to establish this theorem. And why is the de Branges space structure nice for the proof? Because a de Branges space gives you a generalization of the Plancherel theorem. For functions with Fourier transform compactly supported in the interval [−1/2, 1/2], the L² norm squared of the function is just the sum of the squares of the function at the integer points. The de Branges formula is a generalization of this: you take the sum of the squares at certain interpolation nodes, but the interpolation nodes are the zeros of a companion function. In the case of the Paley-Wiener space, this companion function is just sin(πx), whose zeros are the integers.
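For reference, the kernel formula alluded to here is the classical de Branges formula; I am supplying the standard display, which should match the slide. Writing E = A − iB with A and B real entire functions:

```latex
% Reproducing kernel of the de Branges space H(E), with E = A - iB Hermite-Biehler:
K(w, z) = \frac{B(z)\,\overline{A(w)} - A(z)\,\overline{B(w)}}{\pi\,(z - \overline{w})} .

% Paley-Wiener case: E(z) = e^{-\pi i z}, so A(z) = \cos \pi z and B(z) = \sin \pi z, giving
K(w, z) = \frac{\sin \pi z \cos \pi \overline{w} - \cos \pi z \sin \pi \overline{w}}
               {\pi (z - \overline{w})}
        = \frac{\sin \pi (z - \overline{w})}{\pi (z - \overline{w})} .
```

The second line shows how the classical Paley-Wiener kernel mentioned earlier drops out as the special case E(z) = e^{−πiz}.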
That is how you get the classical formula, via Poisson summation: the norm squared of a band-limited function is the sum of the |f(n)|² over the integers. Here you get a general formula. And what you do is the following: you start with the L² norm of the function in this space, written over the interpolation nodes given by the roots of the companion function A, the even one. You multiply up and down by the square of the node, you remove the first zero of the function A to get an upper bound while keeping the second zero and the rest, and then the node squared times |F|² at the nodes is just the norm of the function xF. So in one slide we are using some heavy technical machinery here, and I don't expect anyone who has not seen this before to actually understand it at first sight. All I am saying is that using these Plancherel-type formulas you get a very easy solution to your problem: you wanted to minimize this quotient, and the minimal value of this over this is at least c₀². Just to conclude this proof that I showed you: we discovered a posteriori, after we had written this paper, from feedback from two French colleagues, that one of their students had already considered the same problem and solved it with different methods. This was a paper of Damien Bernard in 2015, which solved the same problem of estimating the first zero in these five families, but using techniques from orthogonal polynomials, doing everything from scratch. I just want to call your attention to the fact that it is a 52- or 53-page paper to do these things from scratch, whereas the solution coming from this de Branges space machinery is smoother. But of course, you have to have computed the reproducing kernels first; that is the hard part. There is no free lunch here. Okay, so I think I will stop here. Thank you very much for your attention this afternoon.
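The classical instance of this Plancherel-type formula, ‖f‖₂² = Σₙ |f(n)|² for f with Fourier transform supported in [−1/2, 1/2], is easy to check numerically; here is a quick sketch (my own illustration, not from the talk) with a shifted sinc:

```python
import numpy as np

def f(x):
    # Band-limited: the Fourier transform of np.sinc is the indicator of [-1/2, 1/2],
    # and shifting by 0.3 only multiplies it by a phase, keeping the same support.
    return np.sinc(x - 0.3)

# Sum of squares over the integers (truncated; the tail decays like 1/n^2).
n = np.arange(-50000, 50001)
sum_squares = np.sum(f(n) ** 2)

# L^2 norm squared by a Riemann sum on a long interval.
x = np.linspace(-400.0, 400.0, 800001)
norm_sq = np.sum(f(x) ** 2) * (x[1] - x[0])

print(sum_squares, norm_sq)  # both close to 1
```

In a general de Branges space the integers get replaced by the zeros of the companion function, but the shape of the identity is the same.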