Yeah, so all my work, or everything I'm talking about today, is going to be joint work with Mark Shusterman. So I'll begin by talking about a very classical problem in number theory, the twin prime conjecture. One way to state it is that there are infinitely many natural numbers n such that n and n plus 2 are both prime. And if you look at the first thousand numbers and count how many of these pairs are both prime, it certainly seems like there are a lot of twin primes, especially among very small numbers, where it seems like almost every prime is a twin prime. They do thin out slowly as you go further, but it really seems like there's an infinite number. Despite that, we have no idea how to prove this. And despite that fact, one very natural thing to do is to try to formulate even stronger and more general conjectures that imply the twin prime conjecture. One example of this is due to de Polignac, who conjectured that for every nonzero even h, there are infinitely many n such that n and n plus h are both prime. This is of course false for odd h, because there's only one even prime. So this is a nice generalization, but it's not very precise. Just saying there are infinitely many primes of a certain form is not very informative, because we'd like to know where these primes are: how far along the number line do you have to look to find a certain number of twin primes, or a certain number of pairs n such that n and n plus h are both prime? For this problem there is also a conjectural answer, a more precise conjecture, due to Hardy and Littlewood. They conjectured that for a nonzero integer h, the number of n up to x such that n and n plus h are both prime is asymptotic to x over (log x) squared: if you divide the number of such n up to x by x over (log x) squared, the ratio should converge, as x goes to infinity, to an explicit constant C_h.
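As a concrete sanity check (a minimal sketch; the cutoff 10^5 and the function names are illustrative, not from the talk), one can count these pairs by brute force and compare against the x over (log x) squared heuristic:

```python
from math import log

def primes_up_to(n):
    """Sieve of Eratosthenes: sieve[k] is True iff k is prime."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return sieve

def count_pairs(x, h):
    """Count n <= x with n and n + h both prime (Polignac-style count)."""
    sieve = primes_up_to(x + h)
    return sum(1 for n in range(2, x + 1) if sieve[n] and sieve[n + h])

# Twin primes (h = 2) up to 10^5, versus the crude x / (log x)^2 shape
# (the constant C_h is omitted, so only the order of magnitude is meaningful).
x = 10 ** 5
print(count_pairs(x, 2), x / log(x) ** 2)
```

Note that `count_pairs(x, 1)` returns 1 for any x at least 2, matching the remark that odd shifts fail because there is only one even prime.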
So this constant is very simple to write down, but I'm not going to do it in this talk, just because it's not particularly important what the exact constant is; we do know it's totally computable. You should think of this conjecture as a manifestation of the idea that the primes behave randomly. The prime number theorem says there are about x over log x primes up to x, so you can think of each number up to x as having a 1 over log x probability of being prime. And you can think of these events as independent; this is the Cramér random model of the primes. So n has a 1 over log x probability of being prime, and n plus h has a 1 over log x probability of being prime, so the probability that both are prime is 1 over (log x) squared, and that's where the x over (log x) squared asymptotic comes from. Now, there was not very much progress made on any of these conjectures until a lot of progress was made in a very short span of time. The breakthrough work was by Yitang Zhang, who proved there are infinitely many pairs of primes at distance at most about 70 million. That is, he proved that the Polignac conjecture is true for at least one h between 2 and 70 million. The first Polymath project improved the upper bound to 4,680. Then Maynard came in with new ideas and lowered the upper bound to 600. And then there was a Polymath project to combine the ideas of all three of the prior works, and they got it all the way down to 246, which is still the best currently known. So, because we're kind of stuck on proving these conjectures, what I want to do is change the rules of the game: I want to consider variants of these problems where, instead of working with integers, we work with polynomials over finite fields. So let me begin by briefly reviewing finite fields. For any prime p, the integers mod p form a field. But there are also extensions of that field, for q any power of p.
There's a unique field F_q with q elements, which is an extension of Z mod p. Basically for all my results, I'm going to fix a field F_q and look at polynomials in one variable t with coefficients in F_q. This is a ring, and it turns out to be very closely analogous to the integers. For almost every concept in number theory we can define using the integers, there's an analogous concept we can define using polynomials over finite fields, and for almost every conjecture or theorem over the integers, there's a corresponding conjecture or theorem for polynomials over finite fields. The main difference is that some of the things that are conjectures for integers are theorems for polynomials over finite fields. One simple example: inside the integers we have the positive integers, or natural numbers, and a natural analog of that for polynomials is the polynomials whose leading coefficient is one, also known as monic polynomials. I'm going to denote this with a plus, and I'll be working with monic polynomials through most of the talk. In particular, we can define prime polynomials: these are monic polynomials, other than one, which have no monic polynomial factors except one and themselves. I've just taken the definition of the primes and replaced positive integers everywhere in the definition with monic polynomials. This replacement is often relatively straightforward, almost a mechanical process: you replace the definitions, and the same kinds of properties carry over. An example of the kind of property we'll look for over a finite field is the prime number theorem for polynomials, which tells us that the number of monic polynomials of degree d that are prime is asymptotic to q to the d over d: if we divide by q to the d over d, the ratio converges to one as d goes to infinity. So this is the analog of the prime number theorem.
The total number of monic polynomials of degree d is q to the d, because there are d freely varying coefficients and q choices for each coefficient. So q to the d is playing the role of x here, and d is playing the role of log x. Just to get our bearings, let's make things concrete and consider an example. Let's work in the smallest finite field, F_2, and look at polynomials of degree 2. There are 2 squared, so four, monic polynomials of degree 2, since we have just two choices for each coefficient: t squared, t squared plus 1, t squared plus t, and t squared plus t plus 1. The first three of these we can factor, so they're not prime. For t squared and t squared plus t, the factorizations are clear. For t squared plus 1, we use the fact that 2 equals 0 in this field: t squared plus 1 equals t squared plus 2t plus 1, which is (t plus 1) squared. So the one remaining polynomial, t squared plus t plus 1, is prime. And this is the general method for counting primes of degree d: you multiply together all your lower-degree polynomials, and whatever remains is prime. You can generalize this kind of analysis to other finite fields. If you look at F_q[t], there are q squared monic polynomials of degree 2. q of them are perfect squares, because there are q monic linear polynomials to take the square of. And there are q choose 2 products of two distinct monic linear polynomials, again because there are q monic linear polynomials. So we can just subtract, and the number of remaining polynomials is q times (q minus 1) over 2, which is approximately q squared over 2. This is an example of the prime number theorem, and you can prove the general case either by this kind of inclusion-exclusion counting, or you can use the existence and uniqueness of the higher-degree finite fields.
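The degree-2 computation just described can be checked by brute force. This is a minimal sketch (representations, helper names, and the choice of test fields are mine): polynomials are coefficient lists, lowest degree first, and irreducibles are found by striking out all products of lower-degree monic polynomials, exactly as in the talk.

```python
from itertools import product

def poly_mul(a, b, p):
    """Multiply two polynomials (coefficient lists, lowest degree first) over F_p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def monic_polys(d, p):
    """All monic degree-d polynomials over F_p."""
    return [list(c) + [1] for c in product(range(p), repeat=d)]

def irreducible_monics(d, p):
    """Monic degree-d irreducibles: strike out every product of lower-degree monics."""
    composite = set()
    for d1 in range(1, d // 2 + 1):
        for a in monic_polys(d1, p):
            for b in monic_polys(d - d1, p):
                composite.add(tuple(poly_mul(a, b, p)))
    return [f for f in monic_polys(d, p) if tuple(f) not in composite]

# Over F_2 there are four monic quadratics and exactly one prime, t^2 + t + 1:
print(irreducible_monics(2, 2))   # -> [[1, 1, 1]], i.e. 1 + t + t^2
# The count of monic irreducible quadratics over F_p is p(p-1)/2, as in the talk:
for p in (3, 5, 7):
    assert len(irreducible_monics(2, p)) == p * (p - 1) // 2
```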
The roots of these prime polynomials generate the higher-degree finite fields, and you can use that to count them. So we have the analog of the prime number theorem, and that lets us state the correct analog of the Hardy–Littlewood conjecture. We're now going to count the monic polynomials f of degree d where f is prime and, in addition, f plus h is prime, for a fixed nonzero h. Because we think the probability that one polynomial is prime is 1 over d, the probability that two polynomials are both prime should be 1 over d squared, so the predicted asymptotic is q to the d over d squared. We divide by that, and we predict that the ratio converges to some explicit constant C_h, which again I won't write down, although if you really want to know, I can tell you. But here we now have a theorem: this conjecture is true under some assumptions on q. It works for any nonzero h; the assumptions are only on q. We need q to be odd, so we can't work in characteristic two, and we need q to be greater than 685,090 times p squared. Let me say something about how you should think of this restriction. q is always a power of p. If q is p, this condition is never satisfied; if q is p squared, this condition is never satisfied; but if q is any higher power of p, it is satisfied with only five remaining exceptions. So there are a lot of finite fields which satisfy this. But this is entirely a technical restriction; I don't think it reflects anything about how the primes actually behave for these smaller values. Did anyone have questions about this statement? [Moderator: We have some questions in the chat, but your co-author is handling them very well.] Yeah, let me know if anything should be repeated. Cool. So let me mention, as is appropriate, the prior work on this question.
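Small fields are of course far below the theorem's threshold q greater than 685,090 p squared, but one can still watch the count track q^d over d^2 empirically. A minimal sketch (my own; it restricts to degree 2 with a constant shift h in F_p, and uses the standard fact that for odd p a monic quadratic t^2 + bt + c is irreducible exactly when its discriminant b^2 - 4c is a nonsquare):

```python
def twin_quadratic_count(p, h):
    """Count monic quadratics t^2 + b t + c over F_p (p odd) with both f and
    f + h irreducible, via the discriminant criterion: irreducible iff
    b^2 - 4c is a nonsquare mod p."""
    squares = {x * x % p for x in range(p)}
    count = 0
    for b in range(p):
        for c in range(p):
            d1 = (b * b - 4 * c) % p          # discriminant of f
            d2 = (b * b - 4 * (c + h)) % p    # discriminant of f + h
            if d1 not in squares and d2 not in squares:
                count += 1
    return count

# Degree d = 2 over various odd p, shift h = 1; the heuristic shape is
# q^d / d^2 = p^2 / 4 (again omitting the constant C_h).
for p in (5, 13, 101):
    print(p, twin_quadratic_count(p, 1), p * p / 4)
```

Interestingly, over F_3 the count with h = 1 is exactly zero, a reminder that the constant C_h can degenerate in tiny fields outside the theorem's range.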
So if you only care about the existence of infinitely many such pairs of primes, and you don't need the precise asymptotic, then this was previously known, with a restriction on h and with q greater than 105, in a paper by Castillo, Hall, Lemke Oliver, Pollack, and Thompson. This was based on applying the ideas of Maynard to polynomials over a finite field, and there's a key trick used to get this particular result which they credit in their paper, so I wanted to mention that as well. Even earlier, for h a constant, Hall produced infinitely many such pairs by some kind of explicit construction. The other way you can switch up the problem, which takes it a little further from the original problem over the integers, is to fix the degree d and let q go to infinity, instead of fixing q and letting d go to infinity. That case was handled by Bender and Pollack as long as q is odd, and by Carmon as long as q is even. These results all use fairly different methods, both from our work and from each other. So let's talk about another classical problem in number theory, which is Landau's problem: to show there are infinitely many primes of the form n squared plus 1. Again, this is a very hard problem, and again we can generalize it to more general polynomials. The generalization is Bunyakovsky's conjecture, which gives a nice set of conditions on a polynomial g that ensure that g(n) takes a prime value infinitely many times. There are some reasons you can write down for why a polynomial would fail to take prime values infinitely often, and the conjecture is that as soon as you avoid those listed reasons, it does take infinitely many prime values. And again, we can make a more precise conjecture, which counts the prime values in a given interval. Here the more precise conjecture was made by Bateman and Horn.
This works for any non-constant polynomial g. We look at the natural numbers n less than x where g(n) is prime, and we conjecture that this count is asymptotic to x over log x times some explicit constant C_g, which may be zero if g fails the Bunyakovsky conditions. And this problem is very hard. If you look at the case where the degree of g is one, you can think a little, unravel the definitions, and see it's the same thing as Dirichlet's theorem on primes in arithmetic progressions. But as soon as the degree of g is greater than one, there is no case of this which is known. The closest thing we have is some results on multivariable polynomials: with more variables you have more freedom, and it's easier to find the primes. Perhaps the most important is the result of Friedlander and Iwaniec, where instead of n squared plus 1 you look at m squared plus n to the fourth, and they show this polynomial takes infinitely many prime values. This is a very striking result, because the number of natural numbers representable in the form m squared plus n to the fourth is very small: the number of such numbers up to x is about x to the three-quarters. So they're much rarer than, say, numbers in arithmetic progressions, or the numbers in other sets where primes had previously been found. And again, we can make an analog of this for polynomials over finite fields. We take a polynomial g in a variable x with coefficients in F_q[t]; if you want, you can think of it as a two-variable polynomial in t and x. We're going to count the monic f of degree d such that when we plug f in for the x variable in g, we get a prime polynomial in t. And we'll make the same kind of conjecture: we conjecture the count is asymptotic to q to the d over d times an explicit constant. But there's one difference, one very special phenomenon about polynomials over finite fields that we don't see in the integers.
This has to do with an assumption that is actually crucial here: that g is not a polynomial in x to the p. So among the exponents of x appearing in g, at least one had better not be a multiple of p. I'll talk about why we need that assumption later in the talk. And again, we have a theorem in a special case. We restrict to quadratic polynomials, and for simplicity to polynomials of the special form x squared plus delta, for delta in F_q[t]. Then the conjecture is true as long as p is odd and q is greater than 2 to the 10 times 3 squared times e squared times p to the fourth. You did not mishear a variable e there; I really do mean 2.718... So again, this is something that's true for infinitely many finite fields, but maybe not all the ones we want to consider yet. And again, this is a technical restriction that I don't think reflects the actual behavior of the primes over the smaller fields. Let me also pause here for questions on this result, although maybe Mark will just answer all the questions. Then let me mention prior work, where there are some recurring themes. Again, if you're only interested in there being infinitely many prime values, and not in the asymptotic, then for certain values of g, like a power of x minus a constant, there are results of Pollack, which involve some kind of clever construction where you raise t to a large power, so the number of primes you get is much less than the expected asymptotic. And again, if you fix the degree d and let q go to infinity, there are some results: results by Pollack and by Entin in special cases, and Kowalski also looked at some higher-degree analogs of this. So now I want to say something about how you might try to prove these results, both in the integer setting and in the polynomial setting, and this will also lead into the third result I want to talk about. And that has to do with the parity problem.
The basic summary is that this is one of the most important obstructions, or ideas, in analytic number theory. The basic idea is that if you want to tell whether a number is prime, you should at least be able to tell whether it has an odd number of prime factors, because primes all have an odd number of prime factors, namely one. And the way to test whether a number has an odd number of prime factors that's particularly nice for what we're going to be doing is the Mobius function. The Mobius function is minus one raised to the power of the number of prime factors — so one if that's even, minus one if it's odd — as long as all the prime factors are distinct, so there are no repeated prime factors. As soon as you have a repeated prime factor, we just declare the Mobius function to split the difference, and it's zero. So what does the parity problem mean in practice? It means that to prove a result counting primes of the form n squared plus 1 with n up to x, you had better be able to estimate the sum of Mobius of n squared plus 1 — you had better be able to estimate how many such numbers have an even versus an odd number of prime factors. And to count twin primes with n up to x, you had better know the sum of Mobius of n times Mobius of n plus 2, which is essentially the same thing as Mobius of n times (n plus 2). These sums are an obstruction to proving these theorems in the integer case, and they're also an obstruction for polynomials over finite fields. But I was a little glib with my summary, because the justification was that prime numbers have an odd number of prime factors — well, they also have a number of prime factors congruent to one mod three, and congruent to one mod seven, since they have only one prime factor. But there's actually a reason that we specifically care about an odd number of prime factors, and don't care nearly as much about the number of prime factors mod three or mod seven.
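The two sums just mentioned are easy to explore numerically. A minimal sketch (my own code; the cutoff 2000 is arbitrary) computes the Mobius function by trial division and then the two normalized parity sums, both of which are expected, but not proved, to tend to zero:

```python
def mobius(n):
    """Mobius function: (-1)^(number of prime factors) if n is squarefree, else 0."""
    count = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # repeated prime factor
            count += 1
        d += 1
    if n > 1:
        count += 1                # one remaining large prime factor
    return (-1) ** count

# The two parity-problem sums from the talk, normalized by x: cancellation
# means these should be small compared to 1.
x = 2000
s1 = sum(mobius(n * n + 1) for n in range(1, x + 1)) / x
s2 = sum(mobius(n) * mobius(n + 2) for n in range(1, x + 1)) / x
print(s1, s2)
```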
The reason is that there are these lovely identities which relate the Mobius function to a function counting primes. If you sum Mobius of d times log d over all divisors d of n, and take a minus sign, you get the von Mangoldt function of n, which is a function that basically counts primes: it's zero if n is not a prime power, and if n is a power of a prime p, it's log p. So this function is closely related to the function which is one if n is prime and zero if not. It's a little different, because you have the log factor and the prime power terms, but in almost any practical problem it's not very hard to get rid of the prime power terms and of the log factor. So being able to sum the von Mangoldt function over a given set is as good as being able to count primes in that set. And any time you're summing the von Mangoldt function, you can use this identity to get a Mobius sum. The first case of that, where d equals n, is like the case I wrote down above, where we just take our prime sum and turn it into a Mobius sum. So that's why the parity problem really appears, and we want to know how these Mobius sums behave. The good news is that we expect all of these Mobius sums to cancel: as soon as you have a non-constant polynomial g, if we sum Mobius of g(n) over n less than x, the sum should be a lot smaller than x. In particular, if we divide by x, the ratio should go to zero as x goes to infinity. The bad news is that we don't know how to prove this conjecture, except when the degree of g is one, where it's a variant of Dirichlet's theorem on primes in arithmetic progressions. For higher degree there are some very strong partial results, mainly in the cases where g splits as a product of linear factors, so g(n) is a product over i from 1 to k of (n plus h_i). This we know if k is two or if k is odd, and it also requires some additional averaging over x.
So you don't just take one value of x and divide; you take, say, one, two, four, eight, up through the powers of two, and then average again over the different powers of two, and then you know the averages converge to zero. This is the result of a series of papers: breakthrough work of Matomäki and Radziwiłł, then their work together with Tao, then a paper by Tao alone, and then Tao's work with Teräväinen; the combination of all that work gives these results. And then for any product of linear factors, no matter the degree, there is a recent result of Teräväinen: we may not know what the limit is, but at least we know the limsup is not exactly the maximum possible value — I believe that is also with logarithmic averaging. So there is at least a little bit of cancellation there. But these results are far from what we would need to actually start proving the twin prime conjecture or any of the other conjectures. Now, this conjecture also has an analog over finite fields. We have to begin by defining the Mobius function in this setting, and we define it to be minus one raised to the number of prime factors if the prime factors are distinct, and zero otherwise. Then we can make the analog of Chowla's conjecture: if g is a polynomial in x with coefficients in F_q[t], and we sum Mobius of g(f) over f monic of degree d, that sum should cancel — when we divide by q to the d, the ratio converges to zero as d goes to infinity. And again, we need to put in this strange additional restriction that g is not a polynomial in x to the p. Maybe one way of stating the restriction that makes it more similar to the integer case is to say that the derivative of g with respect to x is non-vanishing, because the derivative of x to the p with respect to x is zero in characteristic p.
Then it would be the same assumption you make in the integer case: non-constant there becomes non-vanishing derivative here. But it's a little strange — why should we consider the derivative? I will explain. [Moderator: Excuse me, there is a question for you from Asvin. Asvin, will you please unmute and ask away?] [Asvin: Hi, I'm not sure if I'm audible.] [Moderator: Yes, please go ahead. We hear you.] [Asvin: Does Chowla imply Bateman–Horn or the twin prime conjecture?] No, it doesn't. We will use a form of Chowla's conjecture to prove Bateman–Horn and a twin prime conjecture, but we will use a very strong form of it, which is stronger than what I have stated, but luckily is equal to what we proved. And we will need other ideas in addition. So there's not a simple implication, but it's sort of the first step — or you could also think of it as the last step. [Asvin: Thank you.] Yeah, and I'll say shortly what's going on in the case when g is a polynomial in x to the p. So again, this is something we can prove in a special case, and here our weird inequality on q looks a little more reasonable. If q is a power of a prime p, we again need to assume that p is odd, and we need to assume q is greater than 4 times k squared times p squared times e squared, where k is the degree of g in x, and e is again 2.718... So as the degree of g in x grows, we need larger and larger finite fields to prove the statement, but the growth is pretty mild: as we go up the extensions of a given F_p — F_p, F_p squared, F_p cubed, F_p to the fourth, F_p to the fifth — the k we can handle increases exponentially. [Question: A couple of weeks ago — a couple of months ago, it all seems the same now — I heard you speak about the quadratic case, and at that time you were very pessimistic about higher degrees. We spoke about the quadratic case of the prime result.]
So at that time, we had proved the higher-degree case of Mobius, but we hadn't done a higher-degree case for primes. [Question: So this is still the state of the art?] Yeah, this is the same thing I was talking about last time. I maybe didn't emphasize the higher-degree case there, because I was talking about the hard analytic number theory part of the argument, as was appropriate for that seminar; here I'm going to take a broader overview, as is appropriate for this one. So this is the same work, the same state of the art. For Mobius, we can do arbitrarily high degree polynomials, as long as the finite field is large. For counting primes, we're restricted to polynomials of degree two. And for twin primes, we're restricted to only two primes rather than longer tuples. This is because our strategy of reducing from primes to Mobius stops working so well when the degree gets larger: you're factoring, dividing into two different factors, and if you have two factors that are both very large, you don't know what to do. So, okay, I'm going to talk about the prior work which is most relevant to understanding what we did, which is the work of Conrad, Conrad, and Gross. This work is devoted to answering the question: why is there this restriction that g should not be a polynomial in x to the p? What happens if g is a polynomial in x to the p? Conrad, Conrad, and Gross showed that in this setting, Mobius of g(f) is a periodic function of f. What I mean by a periodic function of an integer is a function that depends only on the integer's congruence class modulo some fixed modulus. And what I mean by a periodic function of a polynomial is that it depends only on the congruence class of f mod some fixed polynomial — though I also let it depend on the congruence class of the degree of f mod four, for technical reasons.
So when you sum Mobius of g(f) in this case, you're summing a periodic function, and sums of periodic functions have fairly straightforward behavior: if you're summing over a very long interval, you're hitting each congruence class about the same number of times. So if the sum over congruence classes vanishes, your sum cancels very beautifully, and if the sum over congruence classes does not vanish, your sum will not cancel. Conrad, Conrad, and Gross evaluated the sum, and they showed that sometimes it cancels and sometimes it doesn't — both behaviors really occur. So Chowla's conjecture, which is a prediction based on the random behavior of Mobius, fails here: Mobius is really behaving in a non-random way in this special case, and that makes Chowla sometimes false. One way to think about theorem three is that it handles the remaining cases, the ones not covered by Conrad, Conrad, and Gross, and in all of those we show Chowla's conjecture remains true. But this understates the relationship between what we're doing and what they did, because we're really doing a reduction to the case they studied. The strategy we're going to use is to substitute f equals r plus s to the p, where r and s are two new variables, and show cancellation in s. And when we sum in s, by their result — or a slight variant of it that we prove — we'll be summing a periodic function. But we won't be summing a periodic function over a very, very long interval, so we can't use the most classical techniques for proving cancellation in sums of periodic functions. It still turns out to be possible to show cancellation in these periodic function sums, using methods that wouldn't work for our original sum. And again, there are prior results for this problem, I think only in the q to infinity version, so for fixed degree: it was done by Carmon and Rudnick when g is a product of linear terms.
And there's actually a very close relationship, in the q to infinity setting, between the Mobius and the Bateman–Horn results: Pollack, Entin, and Kowalski solved this problem in exactly the same cases as they solved the Bateman–Horn problem in the q to infinity setting. So in the remaining time, I'm going to try to say something about the proof of these theorems. The proof has three different steps, which I'm going to explain in sort of reverse order. The first one I want to explain uses the most classical methods from analytic number theory — it's all about manipulating sums and estimating the different terms in your sums. The second will use elementary algebraic manipulations of polynomials. And the last will use non-elementary algebraic geometry methods. So I hope everyone will find at least one of these approaches interesting — although maybe that's too much to ask, because there are a lot of people here, but hopefully many of you will find some of them interesting. The strategy for proving theorem one and theorem two — our results about the primes — is to use our results about Mobius. We're going to do exactly what I foreshadowed earlier: we take our prime counts and turn them into sums of the von Mangoldt function, which here we define to be zero if a polynomial is not a prime power, and the degree of pi if it's a power of a prime polynomial pi. And then we're going to use the identity which relates the von Mangoldt function to a sum of the Mobius function over the divisors of the polynomial, to turn our von Mangoldt sums into more complicated Mobius sums.
I'm going to explain this process for theorem two, the result about prime values of quadratic polynomials in f, because this step of the argument is a little simpler in that case, although other steps get more complicated. So we're going to sum, over monic f of degree d, the von Mangoldt function of f squared plus delta, and we immediately rewrite that, using the identity, as a sum over monic g dividing f squared plus delta of Mobius of g, weighted by a degree factor. And then we restrict the sum to one possible degree of g at a time: we attack the sum for each value m of the degree of g separately, and then we sum those contributions up to get our total estimate. The reason that's a very good idea is that we need to apply different techniques for different values of m: when the degree of g is small, we'll use a different strategy than when the degree of g is large. For small m, for very small degrees of g, this is the most classical range, and we're using nothing new: you can open up this sum, look at the residue classes of f modulo g, and by doing that you can transform this sum into some kind of Euler product. You end up with exactly this constant times q to the d over d. So the small m range, if you add it up, gives exactly the main term that we expect in the limit. And what that tells us is that we need to prove that in the large m range — every other range — the sum cancels, goes to zero, so that the small m contribution dominates. For very large m — in particular, you can think of m as almost twice the degree of f, so the degree of g is very large and the degree of (f squared plus delta) over g is very small — we can manipulate the sum a little to get a sum of Mobius of a polynomial in another variable.
And when we do that, that's exactly the kind of sum we bounded in theorem three, our Chowla statement. So once we transform the sum into a Mobius sum in another variable, we plug in our estimates from that theorem. The actual estimates we proved are significantly stronger than what I stated in theorem three, but I didn't write them down because the statement is more complicated: we're not just showing the sum goes to zero, we're showing it goes to zero quite fast, with a power savings, and there's also uniformity in the quadratic polynomial. Using those two facts, we get cancellation not just in the sum for a single quadratic polynomial, but also if we sum over a bunch of quadratic polynomials, which we end up needing to do because of how we're manipulating the sum. So using this quantitatively strong version of Chowla's conjecture, we're able to handle the large m range. But there's a problem: there's a range of medium m that we can't handle using either method — the m that are greater than d, the degree of f, but less than (1 plus epsilon) times the degree of f, for a small constant epsilon. For this range we need a completely different idea, and the idea is a trick due to Hooley, which is to use classical results in the theory of quadratic forms. The kind of result you want to use, in the integer setting, is that a number divides n squared plus 1 for some integer n if and only if the number is properly represented as a sum of two squares. In general, we can show our polynomials divide f squared plus delta if and only if they are represented by certain quadratic forms of discriminant minus 4 delta. So we'll use that to re-express this as a problem about Mobius sums over quadratic forms.
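The integer version of Hooley's input can be checked directly. A minimal sketch (my own; "properly represented" means a sum of two coprime squares, which is the right notion here) verifies the equivalence for all moduli below an arbitrary cutoff:

```python
from math import gcd, isqrt

def divides_n2_plus_1(m):
    """Does m divide n^2 + 1 for some n, i.e. is -1 a square mod m?"""
    return any((n * n + 1) % m == 0 for n in range(m))

def properly_sum_of_two_squares(m):
    """Is m = a^2 + b^2 with gcd(a, b) = 1 (a proper representation)?"""
    for a in range(isqrt(m) + 1):
        b2 = m - a * a
        b = isqrt(b2)
        if b * b == b2 and gcd(a, b) == 1:
            return True
    return False

# Hooley's input, in the integer setting: the divisors of the numbers n^2 + 1
# are exactly the numbers properly represented by the form a^2 + b^2.
for m in range(1, 300):
    assert divides_n2_plus_1(m) == properly_sum_of_two_squares(m)
print("checked all m < 300")
```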
And then we take one of the variables of the quadratic form and apply our Möbius cancellation estimate in that variable. So this is the basic strategy: we need to prove a strong version of Theorem 3, and we need some additional arguments, but we do eventually get our twin prime statement, or rather our quadratic Bateman–Horn statement in degree two. If you want to know about twin primes, the argument is basically the same, but a little different. We have two different primes, so we sum two different von Mangoldt functions, and so we have two different divisors g here, with two different degrees. Instead of three ranges, we have more ranges, because each divisor can have small, large, or medium degree. And it's the range where both divisors are of medium degree that's the trickiest for us. In that case, we use some classical analytic number theory arguments of Fouvry and Michel to handle the final range. So we've reduced the problem to a sum just of the Möbius function, and let me explain something about how that goes. As I mentioned earlier, for the Möbius sum, if we plug r + s^p into g, we get a function of s, and Conrad, Conrad, and Gross showed this is a periodic function of s, with an explicitly computable modulus. We don't want to treat it as an arbitrary periodic function; we want to describe it as a kind of function that's meaningful in number theory. And what the periodic function turns out to be is this: we have a quadratic character to some modulus m, and we compose that quadratic character with a polynomial W in s, with coefficients that are polynomials mod m. So you plug s into some polynomial and then you evaluate the character. Over the integers, this kind of function, a character composed with a polynomial, shows up all the time in number theory. And it's showing up here, but in a completely different way.
So let me explain a little of how to prove that kind of identity, and first, what the basic strategy is once we have it. The basic strategy is to write f as r + s^p; we can choose a nice set of r so that each f occurs exactly once as r + s^p. So we get a sum over r and over s of the Möbius function of g(r + s^p), which by the identity becomes a sum over r and s of a quadratic character of a polynomial in s. And then the remaining task, once we prove the identity, is to get cancellation in these sums of a quadratic character of a polynomial in s. So let me say where this identity comes from. For simplicity, I only want to consider the case where the starting polynomial g is trivial, so we'll only be looking at the Möbius function of r + s^p itself. To start the analysis, I'll give the formula for any polynomial, not just r + s^p. The tool we'll use is the quadratic character of F_q: the multiplicative group of units of F_q is cyclic, so there's a unique nontrivial character χ₂ taking values ±1. If f has degree d, there's a formula of Pellet, which says the Möbius function of f equals the quadratic character of the discriminant of f, up to a factor (−1)^d. I'm going to ignore factors depending only on the degree in this analysis, because in the proof we basically can. You can prove this formula using Galois theory: you relate the Möbius function to the action of Frobenius on the roots, as a permutation, and it's related to the sign of that permutation; and the sign of an element of the Galois group acting on the roots is detected by whether the discriminant is a square. That's how you prove Pellet's formula.
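Pellet's formula can be checked by hand in the smallest nontrivial case. The sketch below (my illustration, monic quadratics only, not the general Galois-theoretic proof) verifies μ(f) = (−1)^deg f · χ₂(disc f) for every monic quadratic over F_7:

```python
# Hands-on check of Pellet's formula mu(f) = (-1)^deg(f) * chi2(disc f)
# for all monic quadratics f = x^2 + b x + c over F_7 (an odd prime field).
P = 7

def chi2(a):
    """Quadratic character of F_7, with chi2(0) = 0."""
    a %= P
    if a == 0:
        return 0
    return 1 if pow(a, (P - 1) // 2, P) == 1 else -1

def mu_quadratic(b, c):
    """Mobius function of f = x^2 + b x + c over F_7, via root counting."""
    roots = [x for x in range(P) if (x * x + b * x + c) % P == 0]
    if len(roots) == 2:
        return 1      # two distinct linear factors
    if len(roots) == 0:
        return -1     # irreducible quadratic, a single prime
    return 0          # double root: f is a square, mu vanishes

for b in range(P):
    for c in range(P):
        disc = (b * b - 4 * c) % P
        assert mu_quadratic(b, c) == chi2(disc)   # (-1)^2 = 1 in degree 2
print("Pellet's formula holds for all monic quadratics over F_7")
```

In degree 2 the sign (−1)^d is +1, so the formula reduces to μ(f) = χ₂(b² − 4c), matching the familiar fact that a quadratic is irreducible exactly when its discriminant is a non-square.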
And after you have that formula, we use some very classical algebraic identities relating discriminants of polynomials to derivatives and resultants of polynomials, all these very classical things. The first identity is that the discriminant of f is the resultant of f with its derivative, up to sign. The resultant of two polynomials is the product of the values of one polynomial at the roots of the other, and if you write out the values of the derivative at the roots of f and multiply them, you get the discriminant, up to some sign that I don't care about. So if you plug r + s^p into this formula, we get (−1)^d times the quadratic character of the resultant of the derivative of r + s^p with r + s^p. And here's the key thing that makes everything work: when we differentiate r + s^p, the chain rule gives p times s^{p−1} times ds/dt for the second term, but p is zero in our finite field, so the derivative is just dr/dt. Then we use this lovely symmetry property of resultants: the product of the values of one polynomial at the roots of the other is the same, up to some boring sign, as the product of the values of the second polynomial at the roots of the first. So we're taking the product of the values of r + s^p at the roots of dr/dt. But dr/dt is fixed, so that product only depends on s modulo dr/dt, and it's multiplicative in r + s^p: if you multiply two polynomials, you multiply their values at each root. So χ₂ of the resultant of a polynomial with dr/dt is exactly a quadratic character of that polynomial modulo dr/dt, and that gives us a formula of the form I've written at the top of the screen. For the general case, you need to do this argument and then a little bit more work.
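The "p is zero in the field" step is worth seeing concretely. This small check (my example, with an arbitrary choice of s, not from the talk) confirms that over F_5 the derivative of s(t)^5 vanishes identically, so (r + s^p)′ = r′:

```python
# The characteristic-p fact the proof leans on: in F_p[t], the derivative of
# s(t)^p is p * s^{p-1} * s' = 0, so d/dt (r + s^p) = dr/dt. Checked over F_5.
P = 5

def polmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def polpow(a, k):
    out = [1]
    for _ in range(k):
        out = polmul(out, a)
    return out

def derivative(a):
    """Formal derivative in F_P[t]; coefficients listed lowest degree first."""
    return [(i * c) % P for i, c in enumerate(a)][1:]

s = [2, 0, 1, 3]                  # s(t) = 2 + t^2 + 3 t^3, an arbitrary example
sp = polpow(s, P)                 # s(t)^5 in F_5[t]
assert all(c == 0 for c in derivative(sp))
print("d/dt of s(t)^5 is identically 0 over F_5")
```

Equivalently, s(t)^p = s(t^p) coefficient-wise by the Frobenius, so only exponents divisible by p appear, and each one is killed by the factor of p in the formal derivative.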
You use some elementary algebraic geometry, the fact that any two polynomials which have the same roots with the same orders of vanishing agree up to a constant: you write both sides of the identity as polynomials and check that they have the same vanishing. So the last thing is that we need to bound this sum. One thing we could try is to take the analogy back to classical number theory and say: I have a quadratic character of a polynomial, and I'm summing it over s, and if s is small, this behaves like summing over a short interval, like summing over small natural numbers. There are all kinds of methods, like the Burgess bound or the Pólya–Vinogradov method, for bounding these character sums over short intervals. The problem is that our interval is too short for any of those methods to apply profitably. So we actually can't do that, except when p equals 3, where you can get a little bit of juice out of it. So instead of applying these classical, more or less elementary methods of number theory, we're going to apply some very non-elementary approaches to bounding character sums. We're going to think of this not as a short-interval sum for polynomials over finite fields, but as a complete sum in many variables over the finite field. So you want to think of each coefficient of s as its own variable lying in the finite field, and we have a many-variable complete character sum over the finite field. And Deligne proved a very general bound for such character sums using étale cohomology methods, as part of his proof of the Riemann hypothesis over finite fields. So we have to use Deligne's machine to make our result work. It's a very powerful result, but it doesn't come for free.
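The kind of square-root cancellation that complete character sums enjoy, and that Deligne's machinery generalizes to many variables, can be seen numerically in one variable via the Weil bound. This sketch (my own illustration with a sample cubic, not the many-variable sums of the talk) checks |Σₓ χ₂(g(x))| ≤ (deg g − 1)√p:

```python
# One-variable illustration of square-root cancellation in complete character
# sums: Weil's bound gives |sum over x in F_p of chi2(g(x))| <= (deg g - 1) sqrt(p)
# for squarefree g; here g(x) = x^3 + x + 1 (squarefree mod the primes below,
# since its discriminant -31 is nonzero mod each of them).
from math import sqrt

def char_sum(p):
    def chi2(a):
        a %= p
        if a == 0:
            return 0
        return 1 if pow(a, (p - 1) // 2, p) == 1 else -1
    return sum(chi2(x ** 3 + x + 1) for x in range(p))

for p in [101, 499, 1009, 4999]:
    s = char_sum(p)
    assert abs(s) <= 2 * sqrt(p)      # (deg g - 1) sqrt(p) with deg g = 3
    print(p, s, "vs 2*sqrt(p) =", round(2 * sqrt(p), 1))
```

A naive bound would only give |sum| ≤ p; the point of the heavy cohomological input in the talk is to get this kind of √(field size) saving for complete sums in many variables at once.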
So to use Deligne's results to get a sufficient bound for the sum, you have to estimate certain cohomology groups. And it turns out the cohomology groups that appear involve some nice geometry. What you end up seeing when you analyze this sum is an arrangement of a bunch of hyperplanes in this high-dimensional space, and you're interested in the complement of this hyperplane arrangement, the complement of the union of the hyperplanes. You want to calculate the cohomology of that complement, not just the ordinary cohomology, but cohomology twisted by a quadratic local system, a quadratic representation of the fundamental group. And the key thing we want to show is a cohomology vanishing theorem. This is actually something very similar to what was studied in characteristic 0 by Cohen, Dimca, and Orlik. So we can adapt these characteristic-0 methods to the finite field setting to show our cohomology vanishing result, which then, by applying this very heavy machinery of Deligne, gives us a very strong bound for this character sum, which we then sum over r to obtain our bound for Chowla's conjecture, which is strong enough to imply all the others. Thank you for inviting me to speak.