Thank you. So I would like to thank the organizers for running this seminar series. It's really great. I wanted to talk today about some research I started about a year and a half ago. This really just started with me playing around with ideas and seeing what I could do: thinking about the Riemann hypothesis, not really getting any ideas how to solve it, but just messing around, and with enough work this kind of evolved into a result. So I just wanted to share this with you. I wanted to start my talk with the Riemann zeta function. Everyone here, I'm sure, knows what it is. The Riemann zeta function, as you know, is defined by this series, and also as an Euler product; it has a pole at s equal to one and no other pole. We know about the zeros at the negative even integers. The Riemann hypothesis says that all the remaining zeros lie on the half line inside the critical strip; equivalently, it is the statement that zeta of s is non-zero if the real part of s is bigger than one half. Okay, so I started thinking about this in terms of Farey fractions. An old result of Franel and Landau gives an equivalence of the Riemann hypothesis with a statement about the distribution of Farey fractions. So one thing I learned, and I don't know why I never learned this before when I was studying Farey series: Farey series are neither series, nor were they invented by Farey. They go back to a mathematician named Haros in 1802. The Farey series of level n is the collection of all the fractions in [0, 1] whose denominator is bounded by the natural number n. These are all written in reduced form and ordered from least to largest.
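As a quick concrete illustration (my own sketch, not from the talk; the function name `farey` and the use of Python's `fractions` module are just my choices), the Farey series of level n can be generated directly from this definition:

```python
from fractions import Fraction
from math import gcd

def farey(n):
    """The Farey series of level n: all reduced fractions a/b in [0, 1]
    with denominator b <= n, ordered from least to largest."""
    fracs = {Fraction(0, 1), Fraction(1, 1)}
    for b in range(2, n + 1):
        fracs.update(Fraction(a, b) for a in range(1, b) if gcd(a, b) == 1)
    return sorted(fracs)

# F_5 = 0, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1
print(farey(5))
```

Since `Fraction` automatically reduces and the set removes duplicates, the gcd filter is only there to make the "reduced form" condition explicit.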
And so the Franel-Landau theorem says: if you look at the sequence of fractions in a particular Farey series like this, and you compare the k-th fraction to the number k divided by the cardinality of F_n, both sequences start around zero and end up at one. So you can compare them, and the sum of the absolute value of the difference, over k from one to the cardinality of F_n, satisfies a bound which is n to the one half plus epsilon, for any given epsilon. This kind of bound is equivalent to the Riemann hypothesis; the only unconditional bound would have n to the power one here. Okay, so we're essentially thinking about how these fractions increase from zero to one, compared to something which increases in a linear fashion from zero to one. And we can view the statement about the Riemann hypothesis in terms of local discrepancy. We take any real number alpha between zero and one and compare alpha to the proportion of Farey fractions in the interval from zero to alpha among the overall number of fractions in the set; if the Farey fractions are evenly spread out, we should roughly capture proportion alpha of them, so you would intuitively think this difference should be reasonably small. And then the Franel-Landau theorem simply translates into a statement about these local discrepancies. Okay, Niederreiter, in 1973, studied the absolute discrepancy, which is the maximum of these local discrepancies. He showed that D_n, as a function of n, has an order of growth which is the cardinality of F_n to the minus one half. Later, Dress proved the surprising result that this absolute discrepancy, for a given n, is exactly equal to one over n. And just so you know, the cardinality of the Farey series asymptotically approaches three over pi squared times n squared.
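Dress's formula is easy to test numerically. The sketch below is mine; it uses the convention that the counted Farey fractions are the ones in (0, 1], which is what makes the clean identity D_n = 1/n come out, and it computes the supremum of the local discrepancies exactly with rational arithmetic (the sup is attained at the jump points, so it suffices to check interval endpoints):

```python
from fractions import Fraction
from math import gcd

def farey_positive(n):
    """Farey fractions of level n in (0, 1], sorted."""
    fr = {Fraction(1, 1)}
    for b in range(2, n + 1):
        fr.update(Fraction(a, b) for a in range(1, b) if gcd(a, b) == 1)
    return sorted(fr)

def discrepancy(n):
    """Absolute discrepancy: sup over alpha of |count([0, alpha]) / |F_n| - alpha|."""
    xs = farey_positive(n)
    m = len(xs)
    sup, prev = Fraction(0), Fraction(0)
    for k, x in enumerate(xs):
        # on [prev, x) the count is k, so the local discrepancy peaks at an endpoint
        sup = max(sup, abs(Fraction(k, m) - prev), abs(Fraction(k, m) - x))
        prev = x
    return sup

for n in range(1, 9):
    assert discrepancy(n) == Fraction(1, n)   # Dress: D_n = 1/n exactly
```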
So Niederreiter's result is consistent with this, but this is obviously more precise. Okay. One of the results that Franel and Landau established back in 1924 also essentially relates to these local discrepancies. They showed that if you can bound the sum of these discrepancies by n to the sigma zero, for some sigma zero bigger than a half, that's equivalent to a bound on the second moment of these discrepancies, and it's also equivalent to a bound on the summatory function of the Möbius function: the sum of the Möbius function up to x satisfies a bound x to the sigma zero. So for a given sigma zero bigger than a half, that's equivalent to the statement about the distribution of the Farey fractions. For this talk, I just wanted to name this weak version of the Riemann hypothesis. I know I'm not the first one, but we'll call RH(sigma zero) the hypothesis that the Riemann zeta function has no zeros in the half-plane sigma bigger than sigma zero. Maybe we can't prove the full Riemann hypothesis, but maybe someday we'll show non-vanishing in a strip to the left of the line real part of s equal to one. And my paper relates to a statement about a weak form of the Riemann hypothesis. Okay, what I looked at are Farey fractions with square-free denominators. So instead of taking the entire ensemble of Farey fractions, we focus just on the ones for which the denominator is a square-free number. Can we say that these are also evenly distributed, relative to this weak Riemann hypothesis? That is, can we get an analog of the Franel-Landau result in the case where the denominators are square-free? Here's a little terminology: I decided to write Phi sub n for the fractions that lie in the Farey series of level n but don't lie in the Farey series of level n minus one.
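In code (my own notation, with `new_at_level` a hypothetical name), the new fractions at level n are exactly the a/n with gcd(a, n) = 1, so there are phi(n) of them:

```python
from math import gcd

def new_at_level(n):
    """Phi_n: fractions in the Farey series of level n but not of level n - 1,
    i.e. the reduced fractions with denominator exactly n."""
    return [(a, n) for a in range(1, n + 1) if gcd(a, n) == 1]

def euler_phi(n):
    """Euler's totient, computed naively."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

assert len(new_at_level(12)) == euler_phi(12) == 4
assert new_at_level(12) == [(1, 12), (5, 12), (7, 12), (11, 12)]
```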
So these are just the fractions whose denominator is equal to n, and then we demand that the numerator be co-prime to n. So the cardinality of this set is just the Euler function phi of n. Then I look at the ensemble of Farey fractions whose denominator is square-free, so mu squared of n equals one. This notation n ~ N is common; I think you've all seen it. So, by an abuse of terminology, I'll sometimes write n ~ N to mean that little n is bigger than capital N and less than or equal to 2N, so n lies in the dyadic interval that starts at capital N. This shouldn't cause any problems; I know this is common notation. Just so we're all on the same page, it's not too hard to get an estimate for the cardinality of this set, and the main thing I wanted to mention is that this set also grows like N squared times some constant. So now I can show you the theorem that I proved. First off, say you have sigma zero, a real number, which I'm going to assume is less than one, and suppose that the weak Riemann hypothesis holds for that value of sigma zero, so we have non-vanishing in a strip. In this case, there are constants C zero and epsilon zero with the property that for any epsilon in the interval from zero to epsilon zero, you get an estimate. So here I'm estimating how many of these Farey fractions with square-free denominators hit the interval from zero to a over q. Okay, a over q is a fraction; I'll explain. If these Farey fractions with square-free denominators are uniformly distributed, you'd expect this proportion of them to hit that set, and under this assumption of non-vanishing of the Riemann zeta function, you actually hit the target.
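A quick numerical sanity check (mine, not from the paper) that the count grows quadratically: summing phi(n) over square-free n in the dyadic interval (N, 2N], doubling N should roughly quadruple the count.

```python
from math import gcd

def euler_phi(n):
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def is_squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def card_S(N):
    """|S_N|: number of Farey fractions whose denominator n is square-free
    and lies in the dyadic interval N < n <= 2N."""
    return sum(euler_phi(n) for n in range(N + 1, 2 * N + 1) if is_squarefree(n))

ratio = card_S(400) / card_S(200)
assert 3.6 < ratio < 4.4   # quadratic growth: doubling N roughly quadruples |S_N|
```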
So the number of Farey fractions that hit this interval is what you'd expect, plus an error term which is big O of N to the one minus epsilon. Okay, and this works for fractions whose denominators are prime numbers q, and this value of q can go as large as the power N to the one minus C zero epsilon. Just to say a few things: this cardinality, as I said, grows like N squared, so this error term is giving you more than square-root cancellation. It's a pretty fine distribution. You would expect this, of course, for the ordinary Farey series, but this result also works if you restrict to the Farey fractions that have square-free denominators. And probably one of the main points of this theorem is the fact that these denominators q, these primes, can go pretty high: you can take them almost as large as the number N itself, and you still get a pretty good distribution relative to that. Then the converse theorem says that if you have two constants with the property I just talked about, so the distribution of Farey fractions holds for all primes up to this level, then that also implies the Riemann hypothesis in the form of non-vanishing in a strip. Okay, so these results go both ways. Are there any questions about what this theorem says? [Audience: How important is it that your endpoint is a rational a over q?] Well, I'll show the connection between this and other rational numbers. At least for the technique I was using, I tried to remove the constraint that these denominators are prime, and there are places where my approach to doing this breaks down, so it could well be the case that it matters. I mean, if you assume this, then you get this statement.
And, well, then the converse also holds. I think from this we can also deduce a statement for arbitrary primes, but I'll get to this. Yeah, I didn't answer that very well, let me try again. [Audience: In a sense, your statement is stronger if you have the special fractions a over q, because that's all that appears in the statement.] Yeah, I think that's part of the problem. Part of the problem is that the method I was using breaks down if q isn't prime, for one thing, and there's also a problem with the distribution of Farey fractions when the denominators you compare against are too large; for example, it doesn't work if q is too large, we can't take q all the way up to N, but there also seems to be a problem with having small denominators. The distributions are a little bit off, so it may be that by the end of this talk I'll have answered your question. [Audience: Also, would you expect that the distribution will be very different?] Yeah, when you follow what's happening in this theorem, one of the problems is that at the beginning, where you have these reciprocals of integers, for a while these discrepancies are kind of large, and it's only later on that, sort of on average, you get a bound like this, even under the Riemann hypothesis. So this fraction here tends to shoot over to one faster than these ones do, but eventually everything kind of catches up. Thank you. Okay, so I was going to show you the approach, how I went about doing this. I already know that there are things in this paper that could have been done better, but at some point I just submitted it, so I may come back with version 2.0, which would be a sharper form of all this.
Anyway, one of the geneses of this work was just starting with the observation that you can get an expression for the Riemann zeta function by taking a sum over the positive integers of mu squared of n over n to the s times this function P_s(n), where P_s(n) I define as the sum over the divisors d of n of mu of d times d to the minus s. This works out to a product over the primes dividing n of one minus p to the minus s. So, with a reciprocal here, you'd have a piece of the Riemann zeta function, but a finite piece. This equality follows just by looking at the Euler product expression for the zeta function, taking these terms, writing all the terms after the first one as a geometric series, and then thinking about what that means in terms of these multiplicative functions. So one thing about the zeta function is that the square-free numbers capture everything you need to know about it, through an identity like this; numbers which aren't square-free play a kind of secondary role, at least from this point of view. Okay, so looking at this identity, I started thinking: can we get something similar by taking other expressions which kind of reflect this? I didn't say that very well, but I started looking at functions which are similar to zeta, in the sense that here I'm looking at mu squared of n divided by n to the s times P_s(n), weighted by an arithmetic function g of n. If this function g is identically one, then I recover the Riemann zeta function, but I can put other arithmetic functions in there as well. So, one class of functions: if you have a multiplicative function g, suppose it satisfies the property that g of a prime power p to the k is just equal to g of p, so these values are all the same for any positive k.
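The identity at the start of this discussion, zeta(s) equal to the sum over n of mu squared of n divided by n to the s times P_s(n), is easy to check numerically. Here is my own verification at s = 2, where the partial sum over square-free n should converge to zeta(2) = pi squared over 6:

```python
from math import pi

def distinct_primes(n):
    """Set of distinct prime factors of n, by trial division."""
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def is_squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

s = 2.0
total = 0.0
for n in range(1, 20001):
    if is_squarefree(n):          # mu^2(n) = 1
        P = 1.0
        for p in distinct_primes(n):
            P *= 1.0 - p ** (-s)  # P_s(n) = product over p | n of (1 - p^{-s})
        total += 1.0 / (n ** s * P)

assert abs(total - pi ** 2 / 6) < 1e-3   # zeta(2) = pi^2 / 6
```

The tail of the series beyond n = 20000 is O(1/20000), so the tolerance above is comfortable.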
Well, if you take such a g and plug it into this expression, unfold the Euler product, and compare things, you find that in that case you're just back to an ordinary Dirichlet series with the same function g as the Dirichlet coefficients. Okay, so everything kind of drops out, and this is why, in the case where g is just identically one, this condition clearly holds, since that's multiplicative; this term vanishes, this becomes identically one here, and you just recover the Riemann zeta function. You can also use the same trick with a principal character. So if chi zero is a principal character modulo any q, the same trick works: we have this expression, and we recover the L-function for the principal character. So if you're working with a principal character, this is all great, but nothing new. If you plug in a non-principal character chi, so we look at this sum of mu squared of n times chi of n over n to the s times P_s(n), and unfold this, well, you do pick up the L-function associated to chi, and then there's this kind of correction factor here. So it's not quite an equality, but you can express R(s, chi) in terms of that. You know, this piece of it is regular, and this piece has no poles if the real part of s is bigger than a half. And so even these functions continue analytically to the half-plane sigma bigger than a half, and they have the same zeros there, which is kind of cool. I looked at some other functions. So here I'm going to use the sawtooth function. I define psi of t to be the fractional part of t minus a half, if t is a real number which is not an integer, and I take it to have the value zero at integers. We can consider this as one of these functions, and then, kind of moving away from multiplicative characters, we get this new function, which, I don't know what to call it, I'll just call it the sawtooth.
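The sawtooth just defined, psi(t) = {t} - 1/2 with the value 0 at integers, has the classical Fourier expansion psi(t) = -sum over k of sin(2 pi k t) / (pi k) away from the integers, which is what makes sawtooth Dirichlet series natural here. A quick check of my own:

```python
import math

def psi(t):
    """Sawtooth: fractional part of t minus 1/2, and 0 at the integers."""
    if t == int(t):
        return 0.0
    return (t - math.floor(t)) - 0.5

def psi_fourier(t, K):
    """Partial Fourier series -sum_{k <= K} sin(2 pi k t) / (pi k)."""
    return -sum(math.sin(2 * math.pi * k * t) / (math.pi * k)
                for k in range(1, K + 1))

# the partial sums converge to psi(t) at every non-integer t
for t in (0.3, 0.7, 1.25):
    assert abs(psi(t) - psi_fourier(t, 20000)) < 1e-3
```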
So I looked at this in the paper; this is what I'll use when dealing with these Farey fractions. We take the real number alpha to be a fraction a over q, where q is a prime number. Here's some notation: I use Z_q for Z mod qZ, and Z_q cross for the multiplicative group. And I wanted to look at these kinds of Dirichlet series. Okay, so here's the definition again. One thing: if q divides a, then this term, by periodicity, just becomes zero the way I've defined things, since psi of zero is zero, so in that case you get a function which is identically zero. If a is not in the zero residue class mod q, you can unfold this a little bit, and show that S_{a/q}(s), and I show how to do this in the paper, it's not difficult, has an expression: the sawtooth series is minus one over pi i q times (q minus one), times a sum over the Dirichlet characters mod q which are odd, with chi of a, so the a is related to this numerator, the Gauss sum, L(1, chi), and then R(s, chi), that's the function I looked at before, which looks a lot like L(s, chi) times this extra fudge factor. Okay, anyway, since each of these functions continues to the half-plane sigma bigger than a half, we know that the sawtooth series also continues to that half-plane. Beyond that, well, I won't say. Okay, so I'll try to show how this function rears its ugly head in all of this. So first off, I'm going to take a over q as a fraction, and I'm going to assume q is a prime number for now. Then, with this choice of alpha, a straightforward calculation shows that if you look at just the Farey fractions with denominator equal to n, the number of those that hit the interval from zero to alpha is equal to alpha times phi of n, so the expected number.
And then there's a correction term R of n, which can be written as a sum over the divisors d of n of mu of d times the fractional part of alpha n over d. Now, if n is square-free, we can separate mu of n over d as mu of n times mu of d. And this fractional part here is just off by a constant from the sawtooth, so using the fact that the sum of the Möbius function mu of d over all divisors d of n is zero, we can introduce an extra minus one half in here and turn this into the sawtooth. So we get an explicit formula for the error term in this approximation. Now, if I sum over n in this dyadic interval, over the ones which satisfy the square-free condition, I have a way to estimate the cardinality of the intersection of S_N with the interval from zero to alpha. I get the expected value here, and now my error term looks like g of N, which is this sort of sum: basically just taking this expression for R of n and summing over n in the dyadic interval. The Möbius function here is going to select out the integers n which are square-free. Now we reverse the order of summation and write g of N this way, in terms of M_d(N), which is a sum over the integers in the dyadic interval from N over d to 2N over d, but only those which are co-prime to d. And here's where this kind of function starts popping out, this piece of S. If we look at the infinite series of mu of m over m to the s, but restricted to integers m which are co-prime to d, this series equals one over zeta of s times a correction factor involving P_s of d. And so if we use Perron's formula, we can get an approximate value for this M_d(N), which is a piece of the error term. So we get that M_d(N) looks like this, plus an error term.
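Here is my own exact check of the counting formula, with my sign conventions: for square-free n > 1 and alpha = a/q with q prime and q not dividing n (so that alpha times d is never an integer for d dividing n), the count of Farey fractions with denominator n in [0, alpha] equals alpha phi(n) minus mu(n) times the sum over d | n of mu(d) psi(alpha d).

```python
from fractions import Fraction
from math import gcd, floor

def mobius(n):
    if n == 1:
        return 1
    m, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # square factor
            m = -m
        d += 1
    return -m if n > 1 else m

def psi(t):
    """Sawtooth {t} - 1/2 as an exact Fraction, 0 at integers."""
    return Fraction(0) if t.denominator == 1 else t - floor(t) - Fraction(1, 2)

alpha = Fraction(3, 7)            # a/q with q = 7 prime, as in the talk
for n in (6, 10, 15, 30, 33):     # square-free n coprime to 7
    divs = [d for d in range(1, n + 1) if n % d == 0]
    phi_n = sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)
    count = sum(1 for a in range(1, n + 1)
                if gcd(a, n) == 1 and Fraction(a, n) <= alpha)
    R = -mobius(n) * sum(mobius(d) * psi(alpha * d) for d in divs)
    assert count == alpha * phi_n + R
```

All the arithmetic is done with `Fraction`, so the identity is verified exactly, not just to floating-point precision.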
And we have flexibility in the parameter T; we just have to optimize it at the end. I'll assume T is smaller than capital N, but otherwise I won't say too much about it. Also, technically, when I did the proof I had to assume that capital N was irrational, just so that things work out in a nice, simple way, but that's not really an important assumption. Okay, so anyway, we can approximate M_d(N), which shows up in this expression for g of N, with this integral plus an error term which isn't too bad. And now we want to multiply by this factor and sum over d up to N: so multiply by mu squared of d times psi of alpha d, and sum. Now we have an integral expression for g of N, the error term. Applying the sum, we get this term here multiplied by a partial sum of the sawtooth series, plus an error term which is acceptable for us. So this piece here is just a partial sum of the sawtooth series that I showed you. Okay, and then what I did to deal with this function g of N was to ask: can we take this partial sum and expand it to the full series? I know that the full series is analytic all the way to sigma equals a half. So if I can replace this by the full series, and that's not the only reason, but anyway, the idea is to try to shift the line of integration into the critical strip. And if we know that we're avoiding zeros, so if the weak Riemann hypothesis is true, we can avoid zeros of the zeta function, shift everything over, and get a strong estimate for the error term. Okay, and so that's how the proof of the forward implication of the theorem goes. So just to recap, I have this quantity here which I'm trying to estimate in the theorem.
I've expressed everything in terms of this function g of N, and g of N is given by this integral plus an acceptable error. I just need to show that the integral satisfies this bound, taking an appropriate choice of T. What needs to happen for this to work is that we need a good bound on this function. I used a generalization of a theorem of Blomer to get a strong bound both on the absolute value of the sawtooth partial sum and on the difference between the full series and the partial series. Getting strong bounds on these quantities allows me to replace this function by the completed sawtooth series. And I thought this was a reasonable way to proceed: it makes it clear how things work, and it also opens the door to explicit formulas. So maybe you have zeros of the zeta function close to one, but you can still express this error in terms of the zeros. I don't think I even did that in my paper, but if you replace this expression by the full sawtooth function and then shift past zeros of the zeta function, you're going to get an expression for this error term in terms of zeros of zeta. What the expression gives you is a series; let me just go back here. We end up getting something like this showing up, where we're taking zeros of the Riemann zeta function, but we're evaluating the L-functions of non-trivial Dirichlet characters at the zeros of the zeta function. This is the term that shows up when you do that. You can tell me if you've ever seen examples where you have a series involving non-trivial Dirichlet characters evaluated at zeros of the zeta function. But anyway, that's what shows up. Maybe I just haven't seen it; that's possible too. Okay.
Before I finish this explanation, let me tell you a little bit about this theorem of Blomer, which is really the main workhorse of all of this. So the theorem of Blomer that I used in this paper is one where he was considering square-free numbers in arithmetic progressions: comparing the number of square-free integers in a given arithmetic progression to the expected number of them. And then it's a second moment, over all possible congruence classes, of this error term. He showed that this second moment is bounded by something like x to the epsilon times the quantity x plus the minimum of x to the five thirds times q to the minus one, and q squared. Okay, so this was the starting point for me. What I did was to generalize this: instead of just summing mu squared of n, I was trying to do this with a more arbitrary function. So let's say that we have some multiplicative function, little f, and associated to little f I define a new function, capital F, which is the sum over d dividing n of mu of d times f of d. So little f gives rise to capital F. And so in Blomer's theorem I wanted to replace mu squared of n by mu squared of n times this function capital F of n. Now, if you take as your given multiplicative function the characteristic function of the integer one, that's the identity in the ring of arithmetic functions with respect to Dirichlet convolution. So anyway, if you take little f to be the indicator function of the number one, then capital F is the function which is identically one, and this expression just becomes mu squared. So this generalization, with functions like this, includes Blomer's result as a special case. And for this to work, I needed to make some assumptions.
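To get a feel for the kind of statement in Blomer's theorem, here is a crude numerical experiment of mine (the constants and thresholds are my choices, not Blomer's): count the square-free n up to x in each nonzero residue class modulo a prime q, compare against the expected main term, whose density per such class is (6/pi^2) q/(q^2 - 1), and check that the second moment of the errors is consistent with an x^(1 + epsilon)-type bound.

```python
from math import pi

def is_squarefree(n):
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

x, q = 20000, 7                       # q prime
counts = [0] * q
for n in range(1, x + 1):
    if is_squarefree(n):
        counts[n % q] += 1

# expected count in a class a with q not dividing a:
# density (1/q) * prod over p != q of (1 - 1/p^2) = (6 / pi^2) * q / (q^2 - 1)
expected = x * (6 / pi ** 2) * q / (q ** 2 - 1)
second_moment = sum((counts[a] - expected) ** 2 for a in range(1, q))
assert second_moment < 2 * x          # far below the trivial bound of size x^2 / q
```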
Now, I know some of this may strike you as a little odd, but the result I gave is basically for theta in the window from two thirds to four thirds. I assume that my function little f satisfies a bound of the form m to the minus theta times e to the omega of m, and this should hold for all positive integers m, where little omega of m is the number of distinct prime factors of m. Just to be able to state the result more cleanly, I define x sub theta to be x to the one minus theta plus one, and Z_q is a product over the primes other than q. Let's see, now there's a little typo here: the theorem I'm stating here actually works for all integers q, whether or not they're prime, so this should actually read p doesn't divide q, in both places. This particular theorem doesn't need q to be a prime number to work. Anyway, Z sub q is this expression, which is defined in terms of our function f, and zeta_q of two is just a piece of zeta; I think I'm missing a minus one here, it looks like. Okay, and so here's the theorem. So I'm taking this function mu squared of n times capital F of n, looking at a sum like this over a given arithmetic progression, and comparing the value of that sum to its expected value. And again, it's a second-moment-type bound as we run over all the congruence classes mod q. Maybe it's hard to see what's happening in this bound, but I think if you take the special case where little f is the indicator function of the number one, then this recovers Blomer's result. And I think the main thing going on here is that as long as this theta is in this range, you get something non-trivial. So the four thirds is only artificial in the sense that theta can be larger than four thirds, but then you don't...
You know, this approach doesn't give you anything non-trivial in that case. The main point is that if you're summing this kind of function over an arithmetic progression, it's on average pretty close to its expected value. So there's a good distribution over the integers: this kind of function isn't going to preferentially select one congruence class over another. In this application, I took little f to be the function mu of n over n to the s times P_s(n), and you can work out that capital F of n in this case is the reciprocal of P_s(n). So these functions are pretty simple, and they easily satisfy a bound of this type. So, depending on the location of s, and there's an s here in this definition, so depending on its real part, you can apply the theorem with theta equal to the real part of s. Then you see that this bound holds, and so the result of the theorem holds. And so, as a corollary, if sigma is between two thirds and four thirds, we can bound this partial sum of the sawtooth that I showed you before by this expression. So we get x to the one half times q to the one half, and then some other stuff here. Well, if q is just slightly smaller than x, if it's like x to the one minus epsilon, then you get a full savings of a power of x in this estimate, and the other terms also give you a full savings of a power of x. So this partial sum is very small. Okay, this is basically how we get this. So, all right. In the last couple of slides, I wanted to show you how the opposite direction goes. For this direction, we assume that there are constants C zero and epsilon zero for which this nice distribution happens for all primes up to this bound.
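Before moving to the converse direction, here is a quick check (my own sketch) of the little-f to capital-F construction just described: the indicator of 1, the Dirichlet-convolution identity, gives F identically one, recovering Blomer's setting, and the choice f(d) = mu(d)/(d^s P_s(d)) gives F(n) = 1/P_s(n).

```python
def mobius(n):
    if n == 1:
        return 1
    m, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            m = -m
        d += 1
    return -m if n > 1 else m

def cap_F(f, n):
    """F(n) = sum over d | n of mu(d) f(d)."""
    return sum(mobius(d) * f(d) for d in range(1, n + 1) if n % d == 0)

# special case: f = indicator of 1 gives F identically 1
delta = lambda m: 1 if m == 1 else 0
assert all(cap_F(delta, n) == 1 for n in range(1, 60))

# the talk's choice: f(d) = mu(d) / (d^s P_s(d)) gives F(n) = 1 / P_s(n)
s = 2.0
def P(n):
    """P_s(n) = product over the distinct primes p | n of (1 - p^{-s})."""
    out, d = 1.0, 2
    while d * d <= n:
        if n % d == 0:
            out *= 1.0 - d ** (-s)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        out *= 1.0 - n ** (-s)
    return out

f = lambda d: mobius(d) / (d ** s * P(d))
assert all(abs(cap_F(f, n) - 1.0 / P(n)) < 1e-9 for n in range(1, 60))
```

Both sides of the second identity depend only on the radical of n, which is why it holds for all n, not just square-free n.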
Okay, so for the reverse implication I had some intermediate results, and this is one. This is where exponential sums come into play, along with looking at distributions. So again, Phi sub n is the set of Farey fractions with denominator equal to n. My set S_N is the same as before: the Farey fractions with square-free denominators in a dyadic interval. I also define F of capital N to be all of the Farey fractions of level capital N. And I have this kind of technical theorem, which says that if you have a real number alpha from zero to one, and it's not too close to a Farey fraction whose denominator is pretty small, in other words, l over n lies in this set, meaning its denominator n is no more than basically an epsilon power of capital N, so as long as you're not too close to fractions like this, not in some sort of window like that, then we get a nice distribution: the number of Farey fractions with square-free denominators in the interval from zero to alpha is the expected value plus an error term which is small enough for the theorem. Okay, the proof of this is pretty technical, I use exponential sums and so forth, but it builds on the fact that these kinds of fractions a over q are reasonably well distributed. If we're assuming this condition to begin with, there's enough of a good distribution of these fractions that we can approximate alpha by one of these fractions a over q. So the idea is essentially to replace alpha here by one of these special fractions a over q whose denominator is a prime, and then use what we know from our hypothesis. And you can do that as long as you're not too close to one of these Farey fractions with a small denominator.
So then we take the sum of mu of n over n in the dyadic interval. We know that the Möbius function mu of n is equal to the sum, over all the Farey fractions x with denominator equal to n, of e of x, where this is e to the 2 pi i x, as usual. So it's the sum over the primitive n-th roots of unity. And then we can express the sum as a sum over the set S of capital N of just e of x. Now I define E of u to be the difference between the cardinality of S_N intersected with the interval from zero to u and the expected value, u times the cardinality of S_N. Using partial summation, you can then convert this sum into an integral: it's minus 2 pi i times the integral from zero to one of e of u, that's the exponential function, times this error term E of u, du. And I did a kind of circle-method-type argument, where you split the interval from zero to one into major and minor arcs. Now, I know this is not really how it's done for those types of applications, but I call capital M the union of these intervals I(l over n) from my previous theorem: it's the set of real numbers that lie really close to one of these Farey fractions with a small denominator. So those are the major arcs, and the minor arcs are everything else. And so, if we're away from one of these bad intervals, I can use the previous theorem to get a good bound on this piece. And then for the other piece, well, we just use the fact that it's small. So I use a kind of stupid bound on E of u; I don't have a really sharp bound on this, but I can put in something pretty dumb, and that's enough, because the measure of the set capital M is small enough. Okay, and so, at the end of the day, we get a really sharp bound on the sum, and it's actually with a power savings, because we're getting a power savings in this term. And that's how you can produce a zero-free strip for the Riemann zeta function.
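The identity used at the start of this step, mu(n) equal to the sum of e(a/n) over 1 <= a <= n with gcd(a, n) = 1 (the sum over the primitive n-th roots of unity, i.e. the Ramanujan sum c_n(1)), is easy to verify directly; a sketch of mine:

```python
import cmath
from math import gcd

def mobius(n):
    if n == 1:
        return 1
    m, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            m = -m
        d += 1
    return -m if n > 1 else m

def mu_from_roots(n):
    """Sum of e(a/n) = exp(2 pi i a / n) over 1 <= a <= n with gcd(a, n) = 1."""
    z = sum(cmath.exp(2j * cmath.pi * a / n)
            for a in range(1, n + 1) if gcd(a, n) == 1)
    assert abs(z.imag) < 1e-9     # the sum is real
    return round(z.real)

assert all(mu_from_roots(n) == mobius(n) for n in range(1, 80))
```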
All right, so that's it.