So I'm going to talk about a new topic in number theory. I would like to end this series of lectures with something that brings together a bunch of things we discussed before, but I have no idea what topic, if any, might do that. So instead I'm just going to talk about one more thing that seems interesting, and is maybe the most interesting from a purely number-theoretic perspective. So I'm going to speak about my joint work with Mark Shusterman on a series of old conjectures in number theory: the twin primes conjecture, the Chowla conjecture, quadratic Bateman–Horn. And I want to begin by talking about the Chowla conjecture, which is a conjecture about the Möbius function, which is defined, let's say for integers, as μ(n) = (-1)^r if n is (up to sign) a product p_1 ··· p_r of r distinct primes, and 0 otherwise. So if n cannot be written in that form — which we know from the unique factorization theorem is equivalent to n having a prime square divisor, i.e. not being squarefree — we get 0. So it's roughly -1 raised to the number of prime factors, except that for many purposes it's convenient to zero out all the numbers that have a repeated prime factor, rather than count prime factors with multiplicity. And the conjecture is: if d_1 through d_k are distinct integers, then the sum over natural numbers n less than x of μ(n + d_1) ··· μ(n + d_k), divided by x, should converge to 0 as x goes to infinity. This statement is a reflection of the philosophy that the Möbius function should behave like a random function — or at least its sign should behave like a random sign, like a coin flip, independently for each number. Whether μ is 0 versus nonzero is not random: that's a periodic phenomenon, or a series of periodic phenomena, with periods 4 and 9 and 25 and so on. But the sign part should be totally random.
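To make the conjecture concrete, here is a small numerical sketch — the function names are mine, and this is purely an illustration of the statement, not anything from the proof:

```python
def mobius(n):
    # μ(n) = (-1)^r if n is a product of r distinct primes,
    # and 0 whenever n has a squared prime divisor.
    r = 0
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # repeated prime factor: not squarefree
            r += 1
        p += 1
    if n > 1:
        r += 1  # leftover prime factor
    return (-1) ** r

def chowla_average(x, shifts):
    # (1/x) * sum_{n <= x} μ(n + d_1) ... μ(n + d_k), which the
    # conjecture predicts tends to 0 for distinct shifts d_i.
    total = 0
    for n in range(1, x + 1):
        term = 1
        for d in shifts:
            term *= mobius(n + d)
        total += term
    return total / x
```

Already at modest x the k = 2 correlation average is small, consistent with the coin-flip heuristic and its square-root cancellation.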
And so if we determined the sign of each of these values independently, their product would also behave like a coin flip, and the law of large numbers would indeed give this limit. This conjecture is not known — it's unknown for k greater than 1. The k = 1 version is basically equivalent to the prime number theorem. There is, however, a version of it that's known with additional averaging in x: if you average the left side over x = 1, 2, 4, up to a power of 2, and then take the limit as that power of 2 grows, you do get convergence to 0. This is known for k = 2 by work of Tao, and for k odd by work of Tao and Teräväinen, all building on the breakthrough of Matomäki and Radziwiłł. So this is a conjecture focusing on the randomness of the Möbius function in an additive way. The Möbius function is a multiplicative function — it behaves nicely when numbers multiply — and here we're instead plugging in sums of numbers and seeing what we get. Another way of describing it: it's a conjecture about the local randomness of the Möbius function. In this conjecture, n is typically much larger than d_1 through d_k, so we're looking at very nearby numbers and saying that Möbius should behave independently among those. And if you knew this conjecture — or maybe a slight variant of it — you would get this kind of local limit: the Möbius function behaves randomly. But Chowla actually made a more general conjecture than this, which is not purely additive but a hybrid of addition and multiplication — a polynomial. So the polynomial Chowla conjecture says: if f is a non-constant integer polynomial, then the limit as x goes to infinity of (1/x) times the sum over natural numbers n less than x of μ(f(n)) should be 0. So again, if μ were random, this sum would certainly exhibit cancellation by the law of large numbers.
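The polynomial case can be explored numerically the same way — here a sketch for f(n) = n² + 1 (an illustration under the randomness heuristic; names mine):

```python
def mobius(n):
    # μ(n) = (-1)^(number of distinct primes) if n is squarefree, else 0
    r = 0
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor
            r += 1
        p += 1
    if n > 1:
        r += 1
    return (-1) ** r

def poly_chowla_average(x):
    # average of μ along the values of f(n) = n^2 + 1; polynomial
    # Chowla predicts this tends to 0 as x grows
    return sum(mobius(n * n + 1) for n in range(1, x + 1)) / x
```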
And so the conjecture is that Möbius behaves sufficiently like a random function. There's something a little bit tricky here: if the polynomial f has a repeated irreducible factor as an integer polynomial, then this sum behaves non-randomly — but it behaves non-randomly because almost all the terms are 0, since they all have a square factor. So the sum cancels in the non-random case too. And this polynomial version is much harder, even with additional averaging, because you can't take advantage of the local, additive structure of the sum and play it off against the multiplicative structure of Möbius. It's harder to get hold of any kind of structure. It is known for degree equal to 1 — it's essentially a version of Dirichlet's theorem on primes in arithmetic progressions, because a polynomial of degree 1 is just summing over an arithmetic progression, and you can relate these Möbius sums to sums over primes. Oh, yes, sorry — yes, degree k. So let's consider an analog for F_q[t]. And here, for the first time, it's going to be really important that F_q is a general finite field, and not just the prime field Z/p. So we first need to define the Möbius function, and we'll define it in the same way: μ(f) = (-1)^r if f = α π_1 ··· π_r for α a unit in F_q and π_1 through π_r distinct primes — distinct monic irreducible polynomials — and 0 otherwise. Before, we had a polynomial in Z[x]; the coefficients should now be polynomials in F_q[t], which means we essentially have a polynomial in two variables, an element of F_q[t][x]. So the conjecture is for f in F_q[t][x] — and instead of non-constant, we impose a slightly stronger condition: we would not like our polynomial to be a polynomial only in x^p. It should have some exponent of x that's not a multiple of p.
And say the degree of f in x is k. Then the limit as n goes to infinity of (1/q^n) times the sum over monic polynomials f of degree n of μ(g(t, f)) should equal 0 — let me call my big polynomial g so that I don't have to keep saying f. And so this is just a perfect analog of the conjecture we had over the integers: we're summing over polynomials of a given degree instead of integers of a given size, and we still think the Möbius sums should converge to 0. The only difference is the slightly stronger condition on the polynomial, and that's for a very good reason: by work of Conrad, Conrad, and Gross, the conjecture is known to sometimes be false when this condition is not satisfied. I know this also came up in your second lecture, but could you remind me again why you replaced "less than x" with "degree equal to n"? — So it doesn't super matter. I could have stated the integer one with a dyadic interval from x to 2x and it would be an equivalent statement, and I could have stated this one with degree less than or equal to n and it would be an equivalent statement. It just seems slightly cleaner to write: it requires the fewest symbols to write "less than x" in that case, rather than putting lower and upper bounds, while here it seems simpler to use equality. If I did less than or equal to n, then the number of polynomials to divide by would be q^n + q^(n-1) + ··· + 1, which is a slightly more complicated thing to divide by. Is the general principle the same in both cases? — Exactly, yeah. Can you give the simplest example where it's false, something like a polynomial in x^p? — OK, I'll tell you the mechanism rather than give an example, because this will come up. The theorem is that for any polynomial g in x^p, the function f ↦ μ(g(t, f)) is periodic: it only depends on the congruence class of f modulo something.
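Over F_q[t] everything is finite and checkable by brute force. Here is a sketch for the prime field F_3, with polynomials represented as coefficient tuples (the representation and names are mine). One known exact fact gives a clean test: since the zeta function of F_q[t] is 1/(1 − qu), the full Möbius sum over monic polynomials of degree n is −q for n = 1 and exactly 0 for every n ≥ 2.

```python
from itertools import product

# Polynomials over F_p as coefficient tuples, constant term first;
# the zero polynomial is the empty tuple.

def trim(c):
    c = list(c)
    while c and c[-1] == 0:
        c.pop()
    return tuple(c)

def polydivmod(a, b, p):
    # long division over F_p (b must be nonzero)
    a = list(trim(a))
    q = [0] * max(len(a) - len(b) + 1, 0)
    inv = pow(b[-1], p - 2, p)  # inverse of leading coefficient of b
    while len(a) >= len(b):
        shift = len(a) - len(b)
        coeff = a[-1] * inv % p
        q[shift] = coeff
        for i, bc in enumerate(b):
            a[shift + i] = (a[shift + i] - coeff * bc) % p
        a = list(trim(a))
    return trim(q), tuple(a)

def monic(p, d):
    # all monic polynomials of degree d over F_p
    return [low + (1,) for low in product(range(p), repeat=d)]

def mobius_poly(f, p):
    # μ(f) = (-1)^r if monic f is a product of r distinct monic
    # irreducibles, and 0 if some irreducible divides f twice.
    f = trim(f)
    r, d = 0, 1
    while 2 * d <= len(f) - 1:
        progress = True
        while progress:
            progress = False
            for g in monic(p, d):  # smallest-degree divisors are irreducible
                quot, rem = polydivmod(f, g, p)
                if rem == ():
                    if polydivmod(quot, g, p)[1] == ():
                        return 0  # repeated irreducible factor
                    f, r, progress = quot, r + 1, True
                    break
        d += 1
    if len(f) - 1 >= 1:
        r += 1  # what's left is a single irreducible
    return (-1) ** r
```

The conjecture concerns the subtler sums μ(g(t, f)) for a nonlinear g, where no such exact evaluation is available.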
And then, yeah — it's clear that even if your periodic function does cancel, you can compose it with a linear polynomial to make it not cancel. Any polynomial in x^p gives something periodic; most of these periodic examples will cancel, but you can compose with something else to make them not cancel. So the mechanism is this exceptional periodicity. So the theorem of myself and Mark is: if the size q of the finite field is a power of p, the conjecture is true as long as q > 4 k² p² e². OK, so this is a somewhat strange statement. First — you didn't misread a variable — e is 2.718, et cetera, and p is the characteristic. So both the size of the finite field and the characteristic show up, and then, kind of unsurprisingly, the problem gets harder as the degree grows, so the degree of the polynomial enters into our formula. And so how does the criterion work? It never holds over the prime field F_p, because p is always less than p². It never holds over the finite field F_{p²} either. But for all other prime powers it's going to hold, with finitely many exceptions for a given k. So there are a lot of fields where our theorem applies, even if there are infinitely many where it doesn't. Why do you say q is a power of p? — Oh, well, I need to explain that p is a variable: the prime that q is a power of. So q is the size of the finite field, and it's a power of some prime, and I'm saying, let p be the prime which q is a power of. Yeah, if I were giving a series of lectures only on this theorem, I would start at the very beginning by saying, let q be a power of p, and let F_q be the field with q elements. But because I've been talking about finite fields F_q the entire time, I'm only now introducing p, the characteristic, which has been hidden before. Oh, I see, you did use it before. Never mind. OK, mostly hidden. Oh, yes, right — that's a very good point.
I did use it right up there — that's a very good point. I should have said that p is the characteristic up there. And then the only other comment I have on this is that the 4 is probably removable by being slightly more careful, but it's just a constant, so nobody really cares. And so I want to say something about how this is proven, and then I will describe the applications of the statement — or really of a more precise, uniform form of it — to prime counting problems in F_q[t]. And so, unlike in all the previous problems I've talked about, I'm not going to take this sum and convert it into geometry in the most straightforward possible way — interpreting the set we're summing over as a space and the function as a sheaf on that space, directly calculating the cohomology of that sheaf, proving cohomology vanishing, and bounding Betti numbers. What I'm instead going to do is use a trick that converts the sum into a sum of simpler sums, and apply geometric methods to each of those simpler sums. And you can also think of this geometrically: I have a problem about the cohomology of some space, and I'm slicing that space up into smaller spaces whose cohomology I can calculate directly, and getting information about the cohomology of the bigger space only indirectly. But first we have to figure out the geometric meaning of the Möbius function. So the key identity for doing that is that the Möbius function of a polynomial f — for simplicity, assume it's monic, say of degree n — is μ(f) = (-1)^n χ(Δ(f)). So here Δ is the polynomial discriminant, and χ is a quadratic character: a non-trivial homomorphism from the multiplicative group of F_q to ±1, and we set χ(0) = 0. So it's essentially like a Legendre symbol.
And so this gives a geometric interpretation, because the discriminant is a polynomial function of the coefficients of f, and the quadratic character we can construct from a sheaf associated to the double cover given by taking a square root. So why is this true? Well, first, both sides are 0 if and only if f has a repeated prime factor, which happens if and only if f has a repeated root. Otherwise, the polynomial f has n distinct roots in the algebraic closure of F_q, and the prime factors correspond to orbits — orbits of Frobenius, that is: Frobenius acts on the roots, and I'm considering the orbits of that action. Because Galois orbits always correspond to irreducible factors, and Frobenius generates the Galois group, the number of prime factors is the number of orbits. And (-1) raised to the number of orbits of a permutation is always (-1)^n times the sign of the permutation. Then, for any polynomial, the sign of any element of the Galois group can be calculated from its action on the square root of the discriminant of the polynomial: the sign of Frobenius equals Frobenius of the square root of the discriminant divided by the square root of the discriminant, which equals the discriminant raised to the power (q - 1)/2, which is χ of the discriminant. Oh, yes — that's a very good point: this only works for p odd. Yes, in fact, every result I state today will be for p odd, probably. So you certainly could use this formula to geometrically study the sum directly. I guess another way of saying it: χ(Δ(f)) is the number of elements y in F_q such that y² = Δ(f), minus 1. Because if Δ(f) is a perfect square other than 0, it has two square roots, which is 1 plus 1.
If it's not a square, it has zero square roots, which is 1 minus 1. And if it's 0, it has one square root, which is 1 plus 0. So the sum over monic f in F_q[t] of degree n of μ(g(f)) — if you assume the degree of g(f) is constant, which is kind of easy to rig up — equals (-1) to that degree times [the number of pairs (f, y), with f in F_q[t] monic of degree n and y in F_q, such that y² = Δ(g(f)), minus q^n]. So you count pairs of a polynomial and a square root of the discriminant of that polynomial plugged into something, and then you subtract off the expected main term of q^n, and that's what you're bounding. So we would need to calculate the cohomology of the variety with equation y² = Δ(g(f)). The problem is that we don't really know how to calculate the cohomology of this variety. It's some messy thing — it could have complicated singularities that seem to depend on the polynomial g in a complicated way. So that isn't what we do. What we instead do is use the periodicity observation made by Conrad, Conrad, and Gross — we use our own variant of it. So we want to use the fact that μ(g(r + s^p)), as a function of s, depends only on s modulo some polynomial m depending on r and g. So let me explain where this periodicity is coming from. And I'll first do it in the simplest case, where g is just the identity polynomial — so μ(r + s^p) itself. Well, it's (-1) to some degree times the character χ evaluated at the discriminant of r + s^p. So what is the discriminant of a polynomial? It's the product of all the differences between the roots, squared. And what that means is that, up to some sign, for a general polynomial, Δ(f) is the resultant of f with the derivative of f.
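For monic quadratics the key identity can be verified by brute force: with n = 2 it reads μ(t² + bt + c) = χ(b² − 4c). Here is a sketch over small odd p (p odd, as required above; the function names are mine), computing μ directly from the roots rather than from the discriminant:

```python
def legendre(a, p):
    # quadratic character on F_p: χ(0) = 0, χ(square) = 1, χ(non-square) = -1
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def mobius_quadratic(b, c, p):
    # μ(t^2 + b t + c) over F_p, computed directly from the roots:
    # two distinct roots -> product of two distinct primes -> +1,
    # no roots -> irreducible -> -1, one (double) root -> not squarefree -> 0
    roots = [x for x in range(p) if (x * x + b * x + c) % p == 0]
    if len(roots) == 2:
        return 1
    if len(roots) == 0:
        return -1
    return 0

def identity_holds(p):
    # check μ(f) = (-1)^2 χ(Δ(f)) for every monic quadratic over F_p
    return all(mobius_quadratic(b, c, p) == legendre(b * b - 4 * c, p)
               for b in range(p) for c in range(p))
```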
Or, alternatively, of the derivative of f with f. The resultant of two polynomials is the product of the values of one polynomial at the roots of the other polynomial. And if you take the value of the derivative of f at one of the roots of f, you get the product of the differences of that root with all the other roots. So multiplying over all the roots, we get all the differences between roots, each one appearing twice — just like the discriminant, up to some sign depending only on the degree. And this is, again up to some sign, the same thing as the resultant in the other order: the product of the values of f at the roots of the derivative of f. So the Möbius function of r + s^p is plus or minus some boring sign times χ evaluated at the resultant of r + s^p with the derivative of r + s^p. That derivative is the derivative of r, plus the derivative of s^p, which is p s^(p-1) s′ — and that's 0, because p is 0 in characteristic p. So this depends only on s modulo r′, and it depends on s modulo r′ in a very nice way. In particular — at least for r squarefree — you can take a p-th root of r modulo r′, and you get the resultant of (r^(1/p) + s)^p with r′, which is just equal to the resultant of r^(1/p) + s with r′, raised to the power p. And when you plug that into χ, since p is odd, you don't really care whether you raised it to the power p or not. So you get a shift of the resultant with r′, and that's basically a shift of a Dirichlet character modulo r′. So in kind of the simplest case, we end up with Dirichlet character sums. And there are tools in analytic number theory to bound sums of these kinds of Dirichlet characters in a non-trivial way — in particular, we're summing these Dirichlet characters over all polynomials of a given degree, which behaves like an interval in the integers.
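The vanishing of (s^p)′ is the crux of the periodicity, and it's easy to see in coordinates — a quick sketch in characteristic 3 (helper names mine):

```python
p = 3

def polymul(a, b):
    # multiply coefficient lists over F_p (constant term first)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def deriv(a):
    # formal derivative over F_p: coefficient i*c in degree i-1
    return [(i * c) % p for i, c in enumerate(a)][1:]

s = [1, 2, 1]       # s(t) = 1 + 2t + t^2 over F_3
sp = [1]
for _ in range(p):  # s(t)^p
    sp = polymul(sp, s)
```

Every coefficient of (s^p)′ picks up a factor of p, so the derivative of r + s^p is just r′ — which is why μ(r + s^p) only sees s modulo r′.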
So you can apply techniques from analytic number theory for bounding sums of Dirichlet characters over an interval. Unfortunately, these techniques are not very helpful unless p is very, very small — if p is like 3 or something, you can do something — because otherwise the interval you need to sum over is too short. I should explain what we're doing here. We're observing that the sum over monic f in F_q[t] with deg f = n of μ(g(f)) can be written as (1/q^⌊n/p⌋) times the sum over monic r of degree n and over s in F_q[t] with deg s < n/p of μ(g(r + s^p)). Because we choose s of degree less than n/p, the degree of s^p is less than n, so adding s^p to r won't affect its degree; and then every f can be expressed as r + s^p for a unique value of r, for every s. So if you divide by the number of values of s, you get the same sum. And so to get cancellation in the original sum, we just need to get cancellation in the sum over s. But even in the sum over s it's hard to get cancellation by traditional analytic means: the degree of r is n, the degree of s is less than n/p, and the modulus of this character, r′, has degree n - 1. So this is something like summing a Legendre symbol (a + s over b) for all s of size less than b^(1/p). And we can't show any cancellation in such a sum in the integer context unless p is at most 4, where we can use the Burgess bound. So only for very small p do we have a hope of showing cancellation by arithmetic methods — as p gets larger, the sum we're trying to show cancellation in gets shorter. But for geometric methods, things are much better. So let me describe the geometry.
So, as a polynomial in the coefficients of s, this resultant of r^(1/p) + s with r′ is a very nice polynomial: it's a product of linear factors, which are basically — up to shift — the values of s at the roots of r′. That's just by the definition of the resultant: the values of one polynomial at the roots of the other. So if, for this polynomial, we do the same trick of setting y² equal to the polynomial, then bounding the sum over s in F_q[t] with deg s < n/p of χ(Res(r^(1/p) + s, r′)) is roughly the same as counting F_q-points on the variety y² = Res(r^(1/p) + s, r′), where the right side is a product of linear factors. So another way to say this: we have a branched double cover of affine space — affine space of dimension ⌊n/p⌋ — which is branched at an arrangement of hyperplanes in affine space, and we're trying to calculate the cohomology of this double cover. An equivalent way of saying this: we're trying to calculate the cohomology of the complement of this hyperplane arrangement, twisted by a one-dimensional locally constant sheaf — the locally constant sheaf of rank one that corresponds to this double cover. And an arrangement of hyperplanes is not nearly as scary a geometry as the vanishing locus of the discriminant of a general polynomial. And in fact, the same kind of trick works in general: the discriminant of g(r + s^p), as a polynomial in the coefficients of s, is a product of linear factors, obtained by evaluating s at finitely many points. So how can we check this is true in general? Well, we can use the fact from algebraic geometry that a polynomial is basically determined by its vanishing set. Or more precisely: a polynomial is a product of powers of irreducible factors, and you can't tell the powers just from looking at the vanishing set, but you can tell what the irreducible factors are.
So it suffices to prove that its vanishing set is a union of hyperplanes which are all of the right form — each the set of all s taking a particular value at a particular point — because any polynomial that vanishes on such a union of hyperplanes is a product of linear terms. And well, how does this work? Δ(g(r + s^p)) = 0 if and only if g(r + s^p) has a repeated root at some point, which we'll call a. And this happens if and only if g(r + s^p) and its derivative with respect to t both vanish at a. But this derivative, by the chain rule, is ∂g/∂t evaluated at r + s^p, plus r′ times ∂g/∂x evaluated at r + s^p — so, crucially, this derivative doesn't depend at all on the derivative of s. So if we plug in a, what we get is: this occurs if and only if g(a, (r + s^p)(a)) = 0 and (∂g/∂t)(a, (r + s^p)(a)) + r′(a) · (∂g/∂x)(a, (r + s^p)(a)) = 0. So this is two equations in the two variables a and (r + s^p)(a) — or you can think of it as equations in a and s(a). And generically, these two equations will have finitely many solutions, because the two curves they define don't share a component. And in the case where they do share some curve, you can check that's a degenerate case, and you still end up getting something periodic there. So there are finitely many pairs of values (a, s(a)) where this can vanish, which is exactly what we claimed in terms of this union of hyperplanes. So I want to talk next about the geometry of hyperplane arrangements — what makes them nice, and which ones are bad. So normally, if we have a variety, or a branched cover of a variety, we would study its singularities, and we would like it to be smooth, or at least to have as few singularities as possible. That makes it easier to calculate the cohomology.
An arrangement of hyperplanes, in some sense, has a singularity wherever two or more of the hyperplanes intersect — that's where you'll see singularities of the double cover branched along the arrangement. So in that sense, our variety has very bad singularities. But for hyperplane arrangements, that's not the best notion. What makes a hyperplane arrangement nice is if the hyperplanes intersect generically, or transversely. So the singularities of a hyperplane arrangement are the points of non-transverse intersection. What that means is: m hyperplanes intersecting in a space of codimension less than m. If two hyperplanes intersect in a space of codimension two, three hyperplanes in a space of codimension three, and so on, that is generic — that's what would happen for a random collection of hyperplanes. So whenever this doesn't happen, and there's excess dimension of the intersection, that is non-generic behavior that makes the behavior of the cohomology potentially more subtle. And for our hyperplane arrangement, this happens only at finitely many points. So why is this true? This is again true for some elementary reasons involving the geometry of polynomials. Our hyperplanes all have the form s(a) = b. So if you have m of them, you get s(a_1) = b_1, ..., s(a_m) = b_m, for (a_1, b_1) up to (a_m, b_m) distinct pairs. So we're looking at an intersection of hyperplanes as the set of solutions to this kind of system of equations. If a_i is ever equal to a_j for i not equal to j, then b_i will have to be unequal to b_j, because our equations are distinct — we don't want a redundant set of equations — and then there are no solutions at all. Otherwise, we have a problem of polynomial interpolation: we've fixed the value of our polynomial at m distinct points, and now we want to know how many polynomials satisfy that.
And the general feature of polynomial interpolation is that the equations are linearly independent: we can choose a polynomial of degree less than m passing through any of these m points. So the solutions have codimension m — unless m is greater than the dimension of our space of polynomials, which is ⌊n/p⌋, in which case the solution set is 0-dimensional. If you have a space of polynomials and you're fixing their values at a bunch of points, the only way the dimension can be larger than you expect is if the dimension is very small — 0 — while your expectation was even smaller, i.e. you expected it to be negative. And so the only points that can be singularities are these 0-dimensional solution sets, of which there are finitely many, because there are finitely many linear equations on this list, and finitely many subsets of them. So we have a hyperplane arrangement, and with this natural notion of singularities for hyperplane arrangements, it has very mild singularities: it's simple normal crossings away from finitely many points. And this is even true at infinity — if you extend the hyperplanes projectively to infinity, then there will be no points at infinity where they fail to be simple normal crossings. Or no, it's not quite true, because the parallel ones intersect each other at infinity. But other than that, they'll be simple normal crossings. And so we apply general methods to estimate the cohomology of the complement of a hyperplane arrangement with coefficients in a rank-one locally constant sheaf. The locally constant sheaf is coming here from splitting the cohomology of this double cover into two pieces: one which is invariant under the operation sending y to -y, and the other which has its sign flipped by that operation. And the sign-flipped part is the cohomology of a rank-one sheaf on the complement of this hyperplane arrangement.
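The linear-independence claim above is just the invertibility of a Vandermonde system: m evaluation conditions at distinct points impose m independent linear conditions on polynomials of degree < D whenever m ≤ D, and can never impose more than D. A sketch over F_11 (the rank routine and names are mine):

```python
p = 11

def rank_mod_p(rows, p):
    # row-reduce over F_p and return the rank
    rows = [list(r) for r in rows]
    nrows, ncols = len(rows), len(rows[0])
    rank = col = 0
    while rank < nrows and col < ncols:
        piv = next((i for i in range(rank, nrows) if rows[i][col] % p), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col] % p, p - 2, p)
        rows[rank] = [x * inv % p for x in rows[rank]]
        for i in range(nrows):
            if i != rank and rows[i][col] % p:
                f = rows[i][col] % p
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

D = 6  # the space of polynomials of degree < 6 has dimension 6

def conditions(pts):
    # one row per condition s(a) = b: the row of monomial values at a
    return [[pow(a, j, p) for j in range(D)] for a in pts]
```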
And so such sheaves are kind of easy to parameterize. Around each hyperplane, the monodromy is given, in the complex setting, by some complex number — in the ℓ-adic setting, by some root of unity — and those numbers are totally independent. So for any tuple of numbers, one associated with each hyperplane, we get a sheaf. And then we can prove cohomology vanishing results under this kind of genericity hypothesis on the hyperplanes, and also some kind of mild genericity hypothesis on the monodromy of this local system — because vanishing would certainly not be true if your rank-one locally constant sheaf were just the constant sheaf, for example. And so we wrote two papers about this. In the first paper, we used a result that was very specialized to a specific form of hyperplane arrangement. In the second one, we used a slightly more general method, which we adapted from work of Cohen, Dimca, and Orlik: they worked over the complex numbers, and we adapted their method to characteristic p, generalizing it in some ways and specializing it in others. And so proving this cohomology vanishing result gives you a bound for the count of points on this variety, hence a bound for this sum; and then summing that bound over r gives us the bound we want for the Möbius sum. We should take a break, and then I'll talk about primes. So, a very general statement: if you have X an affine variety, X̄ a projective compactification, then we have j from X to X̄, an open immersion. And if you have K a perverse sheaf on X, then the cohomology with compact supports with coefficients in K maps to the usual cohomology with coefficients in K, and this sits in a long exact sequence involving the cohomology of the boundary of the compactification with coefficients in the pushforward of K. The compactly supported cohomology vanishes in degrees strictly less than 0, and the usual cohomology vanishes in degrees greater than 0.
So the simplest case is when the cohomology of the boundary vanishes: this boundary contribution at infinity vanishes, the two are equal, and so they vanish in every degree except 0. So that's a very powerful vanishing statement — this is what I was using in the work with Templier. But there's also a weaker statement: if j_* K is supported at the boundary on a set of dimension less than or equal to d, then by a semi-perversity argument the boundary cohomology will vanish for i greater than d, I think, and the long exact sequence then forces the usual cohomology to vanish for i greater than d + 1. So as long as this pushforward is supported on a very low-dimensional set, you still have a pretty good bound on the cohomology. And so for the hyperplane arrangement, this seems like a reasonable strategy. You have a sheaf on an affine space; there's a very natural compactification, which is projective space; you can push forward to that, and you can hope to show vanishing using the monodromy around the hyperplane at infinity inside this projective space. Unfortunately, you have a problem: sometimes there's no monodromy around the hyperplane at infinity in projective space. And in our problem that happens roughly half the time, so it's quite a frequent event. And so the trick is to do a switch: you pick one of your other hyperplanes, and you let the affine variety be the complement of that hyperplane in projective space — because the complement of any hyperplane in projective space looks like affine space. So you apply this with X the complement of a well-chosen hyperplane in this projective space of dimension ⌊n/p⌋. And you do some kind of local calculation using your knowledge of the singularities: where you have transversality, that makes it easy to calculate; where you don't have transversality, it's only finitely many points — just a zero-dimensional set.
So the most important thing is that you need the monodromy around your chosen hyperplane to be non-trivial: if we're writing y² equal to a product of linear factors, we need to pick one of the linear factors appearing with odd exponent. And the other thing you need to check is that we don't have any other parallel hyperplane. Since the hyperplanes are of the form s(a) = b, we want an a that appears only once in our list of equations, so that there is no other hyperplane parallel to ours. And then you need to check that such a hyperplane exists for most values of r, which you can check under some genericity condition on g. And you can force this genericity condition on g: it doesn't always hold, but it holds after some substitution. So there are some details in rigging things to make sure this condition happens. But the main thing is: the hyperplane should be isolated — no parallel ones — and should have this non-trivial monodromy. And since writing this, we came up with a different method which should handle some non-isolated cases. But you definitely need a hyperplane with non-trivial monodromy, because if there's no hyperplane with non-trivial monodromy, there's no cohomology vanishing: you just have the cohomology of the hyperplane arrangement itself, and that has some compactly supported cohomology in degree twice its dimension, which is an enormous degree. So the proof must somewhere take advantage of the monodromy being non-trivial, and we're taking advantage of it really only at one hyperplane, which is kind of a convenient thing to do. So: a very classic conjecture in number theory is the twin primes conjecture, which says there are infinitely many n with n and n + 2 both prime. Oh, sorry — n and n + 2.
And there's a generalization of it, due to de Polignac, which might in fact have been the original statement, which says that for d even, there exist infinitely many n with n and n plus d both prime. And a more refined version of this was conjectured by Hardy and Littlewood. As always in number theory, we don't really just want to prove something is infinite; it's much better to know how many there are in a given interval. So you can take, for d not 0, the set of n less than x with n and n plus d both prime. If you take the size of this set and divide by x over (log x) squared, this should converge, as x goes to infinity, to some constant c_d depending on d. And this conjecture can be motivated by the philosophy of randomness of primes. If we think of each number n as being prime with probability 1 over log n, then the probability that n and n plus d are both prime is approximately 1 over (log n) squared, and summing over n less than x, that's approximately x over (log x) squared. And the constant c_d comes from the fact that we have to adjust, for example, for the fact that almost all primes are odd, and almost all primes are not multiples of three: we adjust for congruence conditions modulo small primes. And so c_d is a product over prime numbers, and it depends on which primes divide d. Yes, there is a need for it, yes. And this is something we can conjecture an analog of in F_q[t]. So for d not 0 in F_q[t], take the limit as n goes to infinity of the cardinality of the set of monic f of degree n with f and f plus d both irreducible; if we divide this by q to the n over n squared, this should equal some c_d, which will be expressed as a similar infinite product. And the theorem of myself and Mark is that this conjecture is true for q greater than, almost, like 700,000 p squared. So the quality of the constant has degraded somewhat from the Chowla bound.
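The function field count is small enough to check by brute force in toy cases. Here is a minimal sketch of my own (the tiny field F_3 and degree 4 are illustrative choices, far below the theorem's range of q): enumerate monic polynomials, test irreducibility by trial division, and count pairs f, f + 1 that are both irreducible, to compare against q^n / n².

```python
from itertools import product

P = 3  # illustrative small field F_3; not the paper's range of q

def poly_mod(f, g):
    """Remainder of f modulo g over F_P; polynomials are low-to-high coefficient tuples."""
    f = list(f)
    dg = len(g) - 1
    while True:
        while f and f[-1] == 0:   # strip leading zeros
            f.pop()
        if len(f) - 1 < dg:
            return tuple(f)
        shift = len(f) - 1 - dg
        coef = f[-1]              # g is monic, so no inversion is needed
        for i, c in enumerate(g):
            f[shift + i] = (f[shift + i] - coef * c) % P

def monic_polys(n):
    """All monic degree-n polynomials over F_P, as coefficient tuples."""
    for lower in product(range(P), repeat=n):
        yield lower + (1,)

def is_irreducible(f):
    n = len(f) - 1
    return not any(
        not any(poly_mod(f, g))   # zero remainder means a proper divisor
        for d in range(1, n // 2 + 1)
        for g in monic_polys(d)
    )

n = 4
irred = [f for f in monic_polys(n) if is_irreducible(f)]
print(len(irred))  # 18, matching the count (3^4 - 3^2)/4 of monic irreducibles of degree 4

# twin analog with d = 1: count f with f and f + 1 both irreducible
plus_one = lambda f: ((f[0] + 1) % P,) + f[1:]
twins = sum(1 for f in irred if is_irreducible(plus_one(f)))
print(twins, P**n / n**2)  # compare the twin count against q^n / n^2
```

The ratio of the two printed numbers is the finite-size analog of the constant c_d; of course nothing at this scale proves anything, it just makes the normalization q^n / n² concrete.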
But other than the constant, it still holds for all but finitely many finite fields, except for the prime fields and the quadratic extensions of prime fields. And so this is a linear statement, and just like for Chowla, there is a kind of nonlinear analog for polynomials. So the conjecture is: let g be an irreducible polynomial. Well, first — isn't this conjecture false when q equals 2 and d equals 1? Oh, c_d will just be 0 in that case. Yeah, so right. So here also, if d is odd, then c_d is 0; you don't need to say d even, yeah. Yeah, it's a product of local factors. And then, yeah, one should put in a statement giving a criterion for the constant to be nonzero, because it's kind of interesting. Yeah, but the conjecture is that it's nonzero when there's no local obstruction, because c_d is encapsulating the local obstructions to irreducibility. So if you take the limit as x goes to infinity of the size of the set of natural numbers n less than x with g of n prime, divided by x over log x, this should again be some constant depending on g, yeah. So here, did you prove just that c_d exists, or did you prove the value of c_d? Yes, the value of c_d is what you expect. It is a very precise statement, so the value of c_d is what you expect, and it comes with some explicit error term, reasonably good, of power-savings type. OK, great question. So what is known toward a conjecture like Bateman–Horn? It's pretty simple: it is unknown for every single polynomial except for linear polynomials. The only thing we can do is, if you have a multivariate polynomial, then you can sometimes count prime values; in particular, the polynomial n squared plus m to the 4 is known, due to Friedlander and Iwaniec, to take infinitely many prime values. But there's not a single one-variable polynomial that's not linear that's known to take infinitely many prime values.
So here, c_g is not 0 if the leading coefficient of g is positive and no prime p divides g of m for all integers m. So unless there's some kind of obstruction to getting infinitely many primes — the first condition is only an obstruction because we don't count negative numbers as primes — you should get infinitely many primes under this conjecture. And I can explain the formulas for any of these constants, if anybody wants to know how you write them down; they're all based on the same principle. So what's the F_q[t] conjecture? We take g in F_q[t][x] irreducible, and not in F_q[t][x to the p], for the same reason. The limit as n goes to infinity of the number of monic f in F_q[t] of degree n with g of f irreducible, divided by q to the n over n, should be some constant depending on g. And the theorem here: for p odd and q greater than 2 to the 10 times 3 squared times e squared times p to the 4, this conjecture holds for monic polynomials of degree 2. I again mean Euler's number, yes. That's a great question. So Euler's number actually comes not out of the cohomology vanishing step here, but out of the Betti number bounds. The Betti number bounds we get, which in the simpler cases we know are actually sharp, are expressed using binomial coefficients. And if you estimate binomial coefficients using Stirling's formula and try to see when they beat an exponential, then Euler's number pops up. So yeah, if you really optimized everything, the formula would be slightly different, but it's a very small savings for a great increase in the complexity of the formula. So we can think of this as including the analog of things like the conjecture that there are infinitely many primes of the form n squared plus 1, for example, because that's a monic polynomial of degree 2. And the method here should work for non-monic polynomials of degree 2; it would just have been a little more complex to prove, and we didn't feel like proving it.
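To make the shape of these constants concrete, here is a numerical sketch of my own for the classical example g(n) = n² + 1, using the standard Bateman–Horn form of the constant (a product over primes of (1 − ρ(p)/p)/(1 − 1/p), where ρ(p) counts roots of g mod p); the cutoffs and the sample size N are arbitrary illustrative choices, not anything from the talk.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, b in enumerate(sieve) if b]

def rho(p):
    """Number of roots of x^2 + 1 modulo p (brute force for clarity)."""
    return sum(1 for x in range(p) if (x * x + 1) % p == 0)

PRIMES = primes_up_to(10_000)

# truncated singular series C = prod_p (1 - rho(p)/p) / (1 - 1/p)
C = 1.0
for p in PRIMES:
    C *= (1 - rho(p) / p) / (1 - 1 / p)

def is_prime(m):
    """Trial division; valid for m < 10^8 since PRIMES goes up to 10^4."""
    for p in PRIMES:
        if p * p > m:
            return True
        if m % p == 0:
            return m == p
    return True

# empirical count of n <= N with n^2 + 1 prime, against the (C / deg g) * N / log N shape
N = 3000
count = sum(1 for n in range(1, N + 1) if is_prime(n * n + 1))
print(C, count, (C / 2) * N / math.log(N))
```

The point is only to show the principle: each local factor adjusts the naive 1/log-density prediction by how often g has a root mod p, and ρ(p) is 0, 1, or 2 here depending on whether −1 is a square mod p.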
The paper was kind of already long enough. So both of these results are proven using our work on the Mobius function as a key ingredient. And to explain them, I need to explain the relationship between primes and the Mobius function. The relation becomes a bit nicer to express in terms of the von Mangoldt function. I'll write this just in terms of polynomials, although the same thing was defined much earlier for numbers. This is a function which takes non-zero values on primes and also on prime powers: it is equal to deg pi if f equals pi to the r for pi prime and r greater than 0, and 0 otherwise. Over the natural numbers, it has the same definition with a log instead of a degree. And then counting primes of a given form — for example, prime values of g of f — is almost exactly the same thing as summing the von Mangoldt function over g of f, because the prime power contributions are very easy to rule out: it's very easy to bound how many prime powers appear and throw those away. And once you've summed away the prime powers, you only divide by the degree to get back the prime counting function. So almost any time in analytic number theory, problems about the von Mangoldt function are equivalent to problems about counting primes. But the von Mangoldt function is a little bit analytically nicer because, for example, if I have a monic polynomial f, I can write lambda of f as the sum over monic polynomials g dividing f of the Mobius function of g times the degree of f over g. This is a kind of inclusion-exclusion over the different prime factors of f, and you can check that it is non-vanishing exactly for prime powers. Another interpretation of the identity: the von Mangoldt function gives the coefficients of the logarithmic derivative of a zeta function, Mobius gives the coefficients of the inverse of the zeta function, and the degree gives the coefficients of the derivative.
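Since the same identity holds over the natural numbers with a log in place of the degree, it is easy to verify numerically. A minimal sketch of my own, checking Λ(n) = Σ_{d | n} μ(d) log(n/d) for small n:

```python
import math

def mobius(n):
    """mu(n): (-1)^k if n is a product of k distinct primes, 0 otherwise."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # repeated prime factor: zero out
            result = -result
        p += 1
    if n > 1:
        result = -result          # one remaining prime factor
    return result

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^r for a prime p and r > 0, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

# verify the Moebius inversion identity Lambda(n) = sum_{d|n} mu(d) log(n/d)
for n in range(2, 200):
    rhs = sum(mobius(d) * math.log(n // d)
              for d in range(1, n + 1) if n % d == 0)
    assert abs(von_mangoldt(n) - rhs) < 1e-9
print("identity verified for 2 <= n < 200")
```

The polynomial statement is the same computation with deg in place of log, summed over monic divisors.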
And so this is an expression of the idea that the logarithmic derivative is equal to the derivative divided by the original function. And the exact analogous thing is true more classically for natural numbers. So any time I have a counting problem about primes, I can reduce it to the von Mangoldt function, and then using this identity, the von Mangoldt sum reduces to a more complicated sum with Mobius. It's simpler to do the reduction in the Bateman–Horn case, even though that case is more difficult overall. So this count of primes — f of degree n with g of f prime — is roughly the sum over monic f of degree n of the von Mangoldt function of g of f, times 1 over roughly 2n in the monic degree-2 case, because each prime will contribute 2n here. And that is equal to the sum over monic f of degree n of the sum over monic h dividing g of f of the Mobius function of h times 2n minus the degree of h. And so the strategy is to break the sum up depending on the degree of h. For terms where the degree of h is less than or equal to n, it makes sense to fix h and sum over f. So we fix h and sum over f with h dividing g of f, which is basically a congruence condition on f — a congruence condition on f mod h. And so we're counting solutions to a congruence, and because the degree of h is less than the degree of f, the number of solutions is proportional to the number of residue classes that solve it. And this is something that's multiplicative in h: we can use the Chinese remainder theorem to split it up as a product over primes dividing h, and the Mobius function is also multiplicative. So we end up with a multiplicative function, and it matches our predicted constant c_g. So these terms give c_g. If you think about just the case where h is 1, mu of h is 1 and every f satisfies the condition, so this gives the main term of size q to the n.
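The fixed-h step has an easy integer analog that can be seen numerically: the number of n ≤ x with h | n² + 1 is about ρ(h)/h times x, where ρ(h) counts the residue classes that solve the congruence, and ρ is multiplicative by the Chinese remainder theorem. A small illustration of my own, with g(n) = n² + 1 as the running example:

```python
def rho(h):
    """Number of residues r mod h with h dividing r^2 + 1."""
    return sum(1 for r in range(h) if (r * r + 1) % h == 0)

x = 10**5
for h in [2, 5, 13, 10, 65]:
    # exact count of n <= x satisfying the congruence, vs the density prediction
    count = sum(1 for n in range(1, x + 1) if (n * n + 1) % h == 0)
    print(h, count, rho(h) * x / h)

# rho is multiplicative: for coprime h1, h2, rho(h1 * h2) = rho(h1) * rho(h2)
assert rho(65) == rho(5) * rho(13)
```

The same structure is what makes the small-degree-h terms of the polynomial sum assemble into an Euler product, i.e. into the constant c_g.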
The terms where h has small degree give adjustments to the main term, but they're all relatively easy to calculate. There is no more — well, sort of; there'll be a little bit more, yes. If the degree of h is larger than n — at least, say, larger than 1 plus delta times n for delta small — then we want to do the opposite thing: we want to fix g of f over h. So make a new variable, which I'll call w, equal to g of f over h, and then sum over f where w divides g of f. And we get a Mobius function now of g of f over w. And w dividing g of f is a congruence condition on f modulo w; the solutions form an arithmetic progression. So we can reparameterize the solutions in terms of a new variable: f equals w times f-star plus b. And if we plug w f-star plus b in here, we get a polynomial in f-star. So we end up with Mobius of a polynomial in f-star, and then we apply our Chowla result. What's missing is some tiny range in the middle where the degree of h is approximately equal to n, and here we have to do something different. The trick we use involves the classical number theory of quadratic forms. What you can show is that there exists f with h dividing g of f if and only if h is represented by certain quadratic forms. The first case of this: a natural number divides n squared plus 1 for some n if and only if that natural number can be written as a sum of two squares — or more precisely, a sum of two relatively prime squares. And you can control the vector you plug into the quadratic form to represent your number in terms of f. So we had to redo, in the setting of polynomials over finite fields, a bunch of the classical theory of quadratic forms. And what you end up with is a sum of Mobius evaluated at some kind of quadratic form.
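The first case mentioned here is easy to confirm by brute force. A small verification sketch of my own (the range is an arbitrary choice): the set of m dividing n² + 1 for some n coincides with the set of m that are a sum of two coprime squares.

```python
from math import gcd

LIMIT = 300

# m divides n^2 + 1 for some n; it suffices to test n over one period mod m
divides_n2_plus_1 = {
    m for m in range(1, LIMIT)
    if any((n * n + 1) % m == 0 for n in range(m))
}

# m is a sum of two *coprime* squares a^2 + b^2 with gcd(a, b) = 1
sum_two_coprime_squares = {
    m for m in range(1, LIMIT)
    if any(a * a + b * b == m and gcd(a, b) == 1
           for a in range(int(m ** 0.5) + 1)
           for b in range(a, int(m ** 0.5) + 1))
}

assert divides_n2_plus_1 == sum_two_coprime_squares
print(sorted(divides_n2_plus_1)[:10])
```

Coprimality matters: 9 = 0² + 3² is a sum of two squares but not of two coprime ones, and indeed 9 never divides n² + 1.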
But then you end up with some weird extra terms: you end up with additive characters applied to the ratio of x and y. This comes from the fact that knowing whether h divides g of f for some f, in terms of quadratic forms, is simpler; but knowing whether h divides g of f for f of a particular degree, in terms of quadratic forms, gets a little bit more complicated. And so you express this in terms of sums that look similar to our Chowla-type sums, except there is an additive character in there. So we have to run the argument proving Chowla's conjecture in a greater level of generality, where you allow certain twists — in this case, a twist by an additive character of some kind of residue at infinity of a ratio of two polynomials. In the geometric setting, that turns out to mean studying the same cohomology groups, but for a somewhat more complicated sheaf: we're twisting by a sheaf that may be wildly ramified, because every time there's an additive character, we get a sheaf with monodromy of order p, and that gives wild ramification. But — sorry, it's x inverse mod y — because it's x inverse mod y, there's no wild ramification at infinity; there's only wild ramification at some of these hyperplanes. So we're in a more general situation, but the geometry is pretty similar: we still have the same set of hyperplanes and the same transversality, and we can run the same kind of argument. And for the twin primes conjecture, the argument is simpler, except we have two primes now. So you need to apply this identity twice, and you have two different divisors whose degrees vary, so you have to split up into more cases. And again, there's a middle case, and to deal with it, we used a purely number-theoretic argument going back to something done by Fouvry.
And maybe in the last few minutes, I'll say what is the way to think about the von Mangoldt function and this identity geometrically. We didn't use this geometrization of von Mangoldt for this problem — we just reduced to Mobius — but the geometric perspective on von Mangoldt is helpful for other problems. So the affine space A^n parameterizes monic polynomials of degree n. Inside it, we have the space parameterizing square-free polynomials, which is the same thing as the configuration space Conf_n of n unordered points in A^1. And the roots define an n-to-1 cover of Conf_n, which gives a homomorphism from pi_1 of Conf_n to S_n. And so representations of S_n give locally constant sheaves on Conf_n. And these sheaves extend naturally to A^n. There are two different ways you could try to extend a sheaf. You could try the very naive way: you could say, I just learned sheaf theory, I just learned about pushforwards — let's just push forward my sheaf from Conf_n to A^n. Or you could try to be very sophisticated: I learned about derived categories, and then I learned about perverse sheaves, and I learned that the best way to extend a perverse sheaf is by taking the intermediate extension. But these two give you the same answer in this case. So interestingly, either one gives the same extension. When you extend in that way, you get sheaves which are also nice from a number theory perspective: their trace functions give arithmetic functions. So the Mobius function corresponds to minus 1 to the n times the trace of Frobenius on the sheaf corresponding to the sign representation, for exactly the following reason.
The trace of Frobenius on the sheaf corresponding to the sign representation is nothing but the sign of Frobenius, which records the parity of the number of orbits of Frobenius. And if you extend the sheaf corresponding to the sign representation to A^n, its trace function will vanish everywhere on the complement of Conf_n, matching how the Mobius function vanishes. And the von Mangoldt function is the sum from d equals 0 to n minus 1 of minus 1 to the d times the trace of Frobenius on wedge d of the standard representation — the n minus 1 dimensional standard representation of S_n. So why does this work? What are we doing? We're evaluating the characteristic polynomial of Frobenius at 1: the trace on the d-th wedge power is, up to sign, the d-th coefficient of the characteristic polynomial, and taking the alternating sum, we end up with the value of the characteristic polynomial at 1. So this is 0 if and only if Frobenius has 1 as an eigenvalue on the n minus 1 dimensional standard representation, which happens if and only if Frobenius has 1 as an eigenvalue at least twice on the n-dimensional permutation representation, which happens if and only if it has at least two orbits, because you get a 1-eigenspace for each orbit. And so this will vanish unless you have one orbit. If you have one orbit, the characteristic polynomial is t to the n minus 1 on the permutation representation, so it's t to the n minus 1 over t minus 1 on the standard representation, and evaluating at 1 you get n. And then, perhaps miraculously, extending this sheaf in the natural way from square-free polynomials to all polynomials gives the kind of extension of the prime-counting function that number theorists have found most convenient to work with.
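This orbit computation can be checked numerically with an abstract permutation standing in for Frobenius. A sketch of my own over S_4 using numpy: the alternating sum of traces on the wedge powers of the standard representation equals the characteristic polynomial at 1, computed here by dropping one eigenvalue 1 (the trivial summand of the permutation representation) and taking the product of 1 minus the remaining eigenvalues.

```python
import numpy as np
from itertools import permutations

def char_poly_std_at_1(perm):
    """det(1 - sigma) on the (n-1)-dim standard rep: remove one eigenvalue 1
    (the all-ones vector) from the permutation matrix and multiply out 1 - lambda."""
    n = len(perm)
    P = np.zeros((n, n))
    for i, j in enumerate(perm):   # permutation matrix sending e_i to e_{perm[i]}
        P[j, i] = 1.0
    eigs = np.linalg.eigvals(P)
    eigs = np.delete(eigs, np.argmin(np.abs(eigs - 1)))  # drop the trivial eigenvalue 1
    return np.prod(1 - eigs).real

n = 4
for perm in permutations(range(n)):
    # count orbits of the permutation
    seen, orbits = set(), 0
    for i in range(n):
        if i not in seen:
            orbits += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    # von-Mangoldt-like behavior: n on a single orbit (an n-cycle), 0 otherwise
    expected = n if orbits == 1 else 0
    assert abs(char_poly_std_at_1(perm) - expected) < 1e-8
print("char poly at 1 equals n * [single orbit] for all of S_4")
```

For an n-cycle the eigenvalues are the n-th roots of unity, and the product of (1 − ζ) over the nontrivial ones is exactly n, matching the (t^n − 1)/(t − 1) computation in the talk.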
And this identity, from the representation-theoretic perspective, comes from a relationship between these wedge power representations and the sign representations: if you induce the sign representations from a product of smaller symmetric groups, you get a wedge power of the permutation representation, which you can split as a sum of two wedge powers of the standard representation. And by taking a suitable linear combination of those, you can isolate a single wedge power of the standard representation. And for some arithmetic problems — in particular, the problem of summing these functions in arithmetic progressions of square-free modulus — you can work conveniently at this level of generality of an arbitrary representation of S_n: you can calculate the cohomology twisted by an arbitrary representation of S_n, and so you can obtain results for quite general arithmetic functions in this way. OK, thank you for inviting me to speak here. So, you've talked a lot about problems which you take from the rational numbers to the finite field setting. Are there problems which are new in the finite field setting, or problems which don't carry over — is there actually sometimes a difference between the two somehow? There are differences, certainly. So, for example, this phenomenon we saw, that the Mobius function of certain polynomials becomes periodic, is not something that we believe happens over the integers. And it's also easy to create differences if you don't properly account for things like the fact that the place at infinity is non-archimedean in the function field case instead of archimedean.
So, talking about questions with an origin in the function field setting that don't come from classical number theory: I think it's somewhat subtle, because if a question had an origin completely unrelated to classical number theory, it's not clear it would be a number-theoretic question — you could say it's just a general question about the algebra of finite fields, or the geometry of curves over finite fields, maybe. But one example I can think of: there are questions where you take a number-theoretic question, transfer it to the function field context, and then modify it in a way that makes sense in the function field context but doesn't make sense over number fields. An example of this is any of these kinds of statistics questions, where we ask about the statistics of number fields. In function fields, those correspond to statistics of curves that are covers of a fixed curve, like degree-d covers of P^1 or something like that. But when you're considering curves, it's not so natural to look at degree-d covers of P^1; it seems more natural to look at all curves of a given genus. So any statistical question you can ask for number fields, you can ask an analogous statistical question in function fields on average over all curves of a given genus — which unfortunately will probably be really hard, because we do not have any idea how many curves of a given genus there are over F_q. We have no idea what the growth rate of the number of smooth curves of genus g over F_q is as g goes to infinity with fixed q. Well, we have some upper bounds and lower bounds, but there's a pretty big gap between them. So that's one kind of question. And then some things are easier: there are some problems where number theorists would study them over a given number field, but you would never think to ask for uniformity as you vary the number field.
But in the function field context, it is kind of natural to look for uniformity as you vary the field, and those questions can potentially be more tractable. Yeah, I think that's the kind of question that makes the most sense — one you can't carry back to the number field context, because you can't just take an arbitrary number field of a given genus; that doesn't really make sense.