Thanks for the introduction. So before I get to sums of three cubes, let me briefly review the situation for sums of two squares. If we fix an integer k ≥ 2, it's well known which non-negative integers n are sums of k squares. For example, there's a criterion going back to Fermat for sums of two squares, and it implies that the density of the set of sums of two squares is zero — though only barely: the number of representable integers up to n is roughly a constant times n/(log n)^(1/2). That's due to Landau. So that's a borderline case. After that, Gauss — or I guess Legendre, maybe — determined which integers are sums of three squares, and there the density is positive. And once you go to four or more squares, every integer is represented, so in particular the density for sums of four or more squares is one. So basically the point is that the more variables you have, the easier it is to solve these equations, because there is more room to find solutions. Another way to say this: the critical case is where the number of variables equals the degree, so k = 2. The equation x^2 + y^2 = n has, if you average over n, a constant number of solutions on average, by an elementary geometry-of-numbers argument. The fancy way to say this is that the equation x^2 + y^2 = n is log Calabi–Yau.

Let me briefly take an algorithmic point of view on these topics. The algorithmic difficulty of solving x^2 + y^2 = n is maybe comparable to another famous two-variable equation: the factoring equation, where you try to find non-trivial solutions x, y with xy = n. And just like factoring, in special cases this equation can be solved efficiently, in polynomial time: if n is a prime that can be represented as a sum of two squares, then you can actually find those x and y efficiently using, say, continued fractions or elliptic curves or other tools — this is due to the references at the bottom of the slide (a small sketch follows below). Building on this sum-of-two-squares algorithm for primes, Rabin and Shallit were able, at least heuristically, to give an efficient algorithm for solving the three-squares equation x^2 + y^2 + z^2 = n for all n for which solutions exist. But this is only possible because there are many more solutions than in the critical case x^2 + y^2 = n. Okay, those last two points were just for context.

So now I want to move on to the main topic, which is Diophantine problems, especially cubic ones, in three and six variables. I'll explain why these two numbers of variables: basically, we'll have statistical results in three variables and full, non-statistical results in six variables. For simplicity, I will work over a function field K of the form F_q(t), the rational function field in one variable, where F_q is a finite field. And since I'm doing cubic problems, it will be convenient to assume that the characteristic of this finite field is bigger than three: so q is a prime power whose prime is bigger than three.
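To make that algorithmic aside concrete, here is a minimal Python sketch of the Euclidean-algorithm method for primes (Hermite–Serret, a special case of Cornacchia's algorithm); the function name and the test prime 1009 are just illustrative choices, not from the talk.

```python
from math import isqrt

def two_squares_prime(p):
    """Write a prime p = 1 (mod 4) as x^2 + y^2 in polynomial time
    (Hermite-Serret / Cornacchia; a minimal sketch)."""
    assert p % 4 == 1
    # Find t with t^2 = -1 (mod p), via a quadratic nonresidue a.
    a = 2
    while pow(a, (p - 1) // 2, p) != p - 1:
        a += 1
    t = pow(a, (p - 1) // 4, p)
    # Euclidean algorithm on (p, t): the first remainder below sqrt(p)
    # is x; then y = sqrt(p - x^2) is forced to be an integer.
    u, v = p, t
    while v * v > p:
        u, v = v, u % v
    x, y = v, isqrt(p - v * v)
    assert x * x + y * y == p
    return x, y

print(two_squares_prime(1009))  # 1009 = 15^2 + 28^2
```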
Right, so many classical problems over the integers or rational numbers have natural analogues over function fields. In particular, the integers are analogous to the polynomials inside K, namely F_q[t]; this is the ring of integers of the fraction field K. The advantage of working over a function field is that there are more tools available. For example, the Riemann hypothesis is proven over function fields for arbitrary L-functions, basically by Deligne's resolution of the Weil conjectures. These L-functions are valuable for Diophantine problems because they glue together a lot of local data — say, point counts modulo different primes. A famous example of their Diophantine relevance is the Birch and Swinnerton-Dyer conjecture, BSD for short, which connects the Hasse–Weil L-function L(s, E) of an elliptic curve E to its group of rational points E(K), for any elliptic curve over a global field such as K. The Hasse–Weil L-function, which I'll say more about on the next slide, packages all the local point counts into a single object, while the group E(K) is about global, actual rational points over K. The BSD conjecture is that the order of vanishing — the analytic rank — of L(s, E) at the central point equals the algebraic rank of the group E(K). So it's a local-to-global principle.

Here is how these L-functions are defined; for simplicity, let me focus on the case of interest in this talk. Say we have a smooth projective hypersurface H over the function field K. Concretely, the simplest examples, if you like, are diagonal equations a_1 x_1^3 + ... + a_n x_n^3 = 0 — say a homogeneous cubic equation in n variables, though any other degree is also fine — where the coefficients a_1 through a_n are nonzero elements of K. Now, at every prime of the function field: if you scale a prime by a scalar in F_q it doesn't change the arithmetic there, so to normalize things, primes will basically correspond to monic irreducible polynomials for me. For each monic irreducible polynomial π, you can study the geometry of H reduced mod π in a suitable sense, and that gives the local factor of the Hasse–Weil L-function, which I'll call L_π(s, H). The cleanest case to define this is at primes of good reduction: if π does not divide the discriminant of H — in other words, if H is non-singular modulo π — then L_π(s, H) is a product of factors of the form one over (one minus something), where the number of factors is a classical Betti number depending only on H and not on π; call it b(H). The "somethings" are the Satake parameters α_{j,π}, and in this case they all lie on the unit circle, by the Weil conjectures. Concretely, they come from point counts: you look at the number of points of H over F_{π,r}, the degree-r extension of the residue field at π.
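In symbols, the good-reduction local factor just described is, in one common (unitary) normalization,

$$L_\pi(s, H) \;=\; \prod_{j=1}^{b(H)} \bigl(1 - \alpha_{j,\pi}\,|\pi|^{-s}\bigr)^{-1}, \qquad |\pi| := q^{\deg \pi}, \quad |\alpha_{j,\pi}| = 1,$$

with b(H) the Betti number mentioned above and the α_{j,π} the Satake parameters.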
You then compare this to the number of points of a projective space of the same dimension — that's what you'd expect it to look like — and the error term is what's controlled by the Weil conjectures and their resolution. If you do a square-root normalization, dividing by the square root of the size, then by the Grothendieck–Lefschetz trace formula you get (−1)^(dim H) times a sum of Frobenius eigenvalues α_{j,π}^r. This formula works for every degree-r extension of the residue field, and that determines the numbers α_{j,π}; I've normalized so that they all have size one. So it's quite analogous to the classical Riemann zeta function. With more work you can also define the local factors at the primes where π divides the discriminant, but this defines them at almost all primes, so let me focus on that. Okay, so these are the local factors, and the global Hasse–Weil L-function is just what you get by multiplying all of them together, so it incorporates all the primes π we saw on the last slide.

Another, more global way to think about L(s, H) is that it captures the geometry of a higher-dimensional variety — call it H^tot — which is basically what you get from H by treating t as an additional variable. For example, if H is the hypersurface a_1 x_1^3 + ... + a_n x_n^3 = 0 with a_1 through a_n elements of K — they lie in F_q(t), so they're basically rational functions of t — then you can also consider all pairs (x, t) satisfying a_1(t) x_1^3 + ... + a_n(t) x_n^3 = 0. That's a key advantage of working over the function field: the additional variable t lets you connect the global L-function to geometry as well. And this geometry sheds light on the global L-function, which is something we're still missing in the number field, or classical, case.

Anyway, I want to talk more about families of L-functions rather than individual L-functions. This is because, in practice, L-functions don't live in isolation: they naturally live in families. Conjectures of Katz–Sarnak, further developed by Keating–Snaith and others — informed by the geometry of this H^tot that I mentioned — suggest that as your hypersurface or variety varies, its L-function L(s, H) should resemble the characteristic polynomial of a random N × N matrix A from some classical group, where N is roughly of size log of the discriminant of H. Here the discriminant of H is some rational function of t, so some element of K, and the absolute value I use is q to the degree of the discriminant; so all I mean is that N is roughly the degree of the discriminant as a polynomial in t. So this N measures the complexity of H, and in natural families, as H varies, you would expect this characteristic polynomial to behave randomly. That's the random matrix theory prediction.
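In symbols (my paraphrase, in the same unitary normalization): the square-root-normalized point counts from the start of this discussion read

$$\#H(\mathbb{F}_{\pi,r}) \;=\; \#\mathbb{P}^{\dim H}(\mathbb{F}_{\pi,r}) \;+\; (-1)^{\dim H}\,|\pi|^{\,r \dim H/2}\sum_{j=1}^{b(H)} \alpha_{j,\pi}^{\,r}, \qquad r = 1, 2, \dots,$$

which determines the α_{j,π}; and the random matrix prediction is that, up to trivial and bad local factors, the global L-function has the shape of a characteristic polynomial,

$$L(s, H) \;=\; \prod_\pi L_\pi(s, H) \;\longleftrightarrow\; \det\bigl(1 - A_H\, q^{\frac12 - s}\bigr), \qquad A_H \in U(N), \quad N \approx \deg \operatorname{disc}(H),$$

with A_H equidistributed in the relevant classical subgroup as H varies in a natural family.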
And this has been greatly enhanced by subsequent work: Conrey, Farmer, Keating, Rubinstein, and Snaith developing the integral moments conjecture, then Conrey, Farmer, and Zirnbauer on the ratios conjecture, and then others — for example, Andrade and Keating have extended this to the function field setting. This gives the moments and ratios conjectures, and basically their advantage is that they are arithmetically a lot friendlier to work with than the original random matrix theory predictions, even though the original ones were the impetus for these newer conjectures.

So here's one classical, famous application of this kind of random matrix theory prediction, basically due to Katz–Sarnak — it's in their Bulletin paper on the subject. I'm going to state it using the ratios conjecture, so building on the CFKRS and CFZ papers I just mentioned and subsequent works as well. So assume a ratios conjecture — a random matrix theory type prediction — for a certain ensemble of L-functions (a toy numerical illustration follows below). Say we're interested in the family of elliptic curves E_n defined by x^3 + y^3 = n z^3. For n ≠ 0 this is an elliptic curve over K: it's a homogeneous cubic equation in three variables, and it has the rational point (1, −1, 0). We can look at its Hasse–Weil L-function L(s, E_n), as on the previous slides, and assume the random matrix theory predictions for what these L-functions look like as n varies — in this case we're specifically interested in zeros near the central point, so in the ratio L′/L of these L-functions as n varies. If you assume this, then you get some really nice consequences: the Birch and Swinnerton-Dyer conjecture, as well as Goldfeld's conjecture on the equidistribution of ranks — namely that half of these elliptic curves should have rank zero and half should have rank one — hold for, maybe not all, but a proportion one of the curves E_n.

So this is an application of L-functions to rational points, and it proceeds basically through the structure of these elliptic curves — for example, the work of Gross–Zagier and Kolyvagin proving special cases of the Birch and Swinnerton-Dyer conjecture. There's nothing really special about the cubic twist family E_n I mentioned here; similar results hold for most families of elliptic curves, so it's a really robust result. But in each case, the group structure, and the fact that the equation is homogeneous — so that it's really about rational points — are essential for this kind of argument to work. And the main topic of today is what happens when I change the n z^3 to a minus sign, so the equation becomes x^3 + y^3 + z^3 = n. That makes the problem much harder, because now it's really about integral points, and there seems to be no convenient structure like BSD or anything like that. But we'll still prove something that looks kind of similar to this result.
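As a toy numerical illustration of this kind of random matrix modeling (my own sketch, not from the talk — and note that the symmetry type relevant to the family E_n is orthogonal rather than unitary): sample Haar-random unitary matrices and look at their eigenvalue statistics, which play the role of statistics of normalized L-function zeros in a family.

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-random n x n unitary: QR of a complex Ginibre matrix,
    with the standard phase fix on the diagonal of R."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(0)
n = 20  # plays the role of N ~ deg disc(H), the complexity of the L-function
# Eigenvalues lie on the unit circle, like normalized zeros of L(s, H).
samples = [np.sort(np.angle(np.linalg.eigvals(haar_unitary(n, rng))))
           for _ in range(200)]
spacings = np.concatenate([np.diff(s) for s in samples]) * n / (2 * np.pi)
print("mean normalized spacing:", spacings.mean())  # close to 1, with repulsion
```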
But yeah, as I said, this is going to be less direct, because there's no structure like BSD that we can pass through. So instead of working directly with L-functions connected to the Diophantine equations we're actually interested in — the sum of three cubes equations x^3 + y^3 + z^3 = n — we need to work with some fancier varieties. I'll say more about these as we go on, but let me define them right now. They are the L-functions L(s, V_c) of a certain very explicit family of varieties: the hyperplane sections V_c of the Fermat cubic hypersurface X in six variables, x_1^3 + x_2^3 + ... + x_6^3 = 0, sliced with c · x = 0 — by c · x I just mean the usual dot product c_1 x_1 + c_2 x_2 + ... + c_6 x_6. These are basically hypersurfaces, because once you slice a hypersurface by a linear equation, that's basically another hypersurface after a change of variables. So the same definition of Hasse–Weil L-functions I gave earlier applies here. As I said, these L-functions really make sense only for smooth varieties, so we want some discriminant to be non-zero; but most tuples c = (c_1, ..., c_6) have non-zero discriminant, so this is a small point.

So the point is: assuming that the random matrix theory type prediction — the ratios conjecture — holds for this new ensemble of L-functions L(s, V_c), and specifically for the negative second moments 1/(L(s_1, V_c) L(s_2, V_c)), you can show that x^3 + y^3 + z^3 = n is soluble in F_q[t] for a positive density of n. I'll state precisely what this ratios conjecture says on the next slide, but the idea is that it's a random matrix theory prediction for this family. You don't get full density — that's work in progress; we just wrote up this simpler version first. It should be possible to get density one as well, and there are no local obstructions in this case.

One more thing: the current result also goes in a different direction — it's more robust, in that you could solve your equations in a Waring-type situation where you restrict your variables to be monic or something like that, which is the analogue of requiring your variables to be positive. In these smaller sets, density one is simply not true. As I said, if you want density one, it should be possible with more work, but you do need to allow your variables to be arbitrary polynomials, not just monic ones. I wanted to talk about the positive density result today because it's cleaner, but the methods should also give density one with more work. Our paper on this positive density result should come out tomorrow, I think, and we'll work on density one after that.

Okay, great. So let me say a little more about the ratios conjecture.
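Collecting the definitions just given in one display: the varieties are the hyperplane sections

$$V_c \;:\quad x_1^3 + \cdots + x_6^3 \;=\; 0 \;=\; c_1 x_1 + \cdots + c_6 x_6 \quad \subset \;\mathbb{P}^5, \qquad c = (c_1, \dots, c_6), \;\; \Delta(c) \neq 0,$$

and the relevant statistic is the negative second moment, the average over c of

$$\frac{1}{L(s_1, V_c)\,L(s_2, V_c)}.$$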
The precise hypothesis involves a standard arithmetic factor A(s_1, s_2), which is absolutely convergent in the region of interest past the critical line. Let me not say much about that — it's not really the main content of this conjecture; it's the well-understood part. The main thing is this: when you study one over an L-function, you have to be careful, because the L-function has zeros on the critical line, so you have to shift away from it. Let's say we're on the line 1/2 + β, where β is positive — basically think of β as a small positive constant, though you could also let it vary with the complexity parameter Z. Then what we need is an asymptotic formula for the mean value of 1/(L(s_1, V_c) L(s_2, V_c)). What it looks like — and this is basically standard in the ratios conjecture — is a bunch of polar terms P(s_1), P(s_2), ζ(s_1 + s_2). These all really live near the line 1: s_1 + s_2 is on the line 1 + 2β, 2s is on the line 1 + 2β, s + 1/2 is on the line 1 + β. So these are really well-understood objects, but they do capture some interesting log-type behavior and so on. And then we also need to include an error term. For our application, we need a particular dependence on β — a power saving like q^(−6βZ) — but this is actually weaker than what the ratios conjecture predicts; the ratios conjecture certainly predicts this and more. It predicts that for any β — say bigger than 1/Z, though you could also just think of β as a small constant — you can take the power saving to be q^(−ωZ) for some small fixed constant ω > 0 independent of β. So what we assume certainly falls under the ratios conjecture.
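In loose symbols — my paraphrase of the hypothesis, not the precise statement from the slides — it has the shape

$$\frac{1}{\#\{c\}} \sum_{c \,:\, \Delta(c) \neq 0} \frac{1}{L(s_1, V_c)\,L(s_2, V_c)} \;=\; A(s_1, s_2)\,\Phi\bigl(P(s_1),\,P(s_2),\,\zeta(s_1 + s_2)\bigr) \;+\; O\bigl(q^{-\omega Z}\bigr), \qquad \operatorname{Re} s_i = \tfrac12 + \beta,$$

where Φ collects the polar terms just described (all effectively living near the line Re = 1), Z is the complexity parameter of the family, and ω > 0 is a fixed constant independent of β.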
The next thing I want to discuss is some recent developments on the ratios conjecture. There was a recent breakthrough in homological stability — by Bergström, Diaconu, Petersen, and Westerland last year, and then this year, like last week, by Miller, Patzt, Petersen, and Randal-Williams on the arXiv — resolving the i-th moments conjecture for quadratic Dirichlet L-functions. These are a little simpler, but they're still of the same geometric type as the L(s, V_c) I'm interested in. The moments conjecture is the opposite situation: instead of L's in the denominator, you have L's in the numerator — L(s, χ_d)^i or something like that. They resolve this with a uniform power saving independent of i, which is like getting the ω independent of β here. But they do this under a restriction on q: q has to be large enough in terms of the moment i in question. I think Dan Petersen will be giving a talk later this semester and can say much more about this, but let me mention some of the ingredients — not all — in the proof: exponential Betti bounds, as opposed to, say, super-exponential Betti bounds, and a linear stability range for the cohomology of certain symplectic local systems.

Sarnak calls this sort of thing "topology beyond the Weil conjectures", and I kind of like that, so I wrote it here. Basically, the point is that it uses monodromy groups and monodromy representations in an even deeper way than in the past. Deligne, in his proof of the Weil conjectures, used monodromy groups over certain families in an essential way, and Katz, for example, has built on that beautifully to tackle other questions — computing moments of Kloosterman sums and things like that. On the other hand, the notion of monodromy group used in those examples is much coarser than what goes into these new results of Miller, Patzt, Petersen, and Randal-Williams, which really bootstrap from deep automorphic results of Borel on the cohomology of arithmetic groups. So I think it's a really exciting new direction.

Let me briefly mention two things before moving on. One is that I got an email from Diaconu earlier this week, suggesting that better Betti bounds — say polynomial or sub-exponential — might help remove this "q large in terms of i" restriction. I don't know the status of that; presumably exponential is the best known at present, but it would be very interesting to determine the best possible Betti bounds. The final thing I want to briefly mention, since we're interested in the ratios conjecture: I have a short semi-expository note on the arXiv from earlier this week explaining how this breakthrough can be extended to ratios of these quadratic L-functions. Say you have i L-functions in the numerator and j L-functions in the denominator; again you need to restrict the shifts in the denominator, say Re s ≥ 1/2 + β. Then, as long as q is big enough in terms of i, j, and β, you can do this with a uniform power saving independent of i, j, and β — analogous to the moments result being independent of i for q large. And this is exactly the kind of result that would suffice: it would prove the conjecture on the last slide for large enough q. But it remains a very interesting open problem to prove the necessary Betti bounds and stability. This is very new territory — the difficulty isn't really clear — but I think it's very interesting.

Maybe a brief comment about the proof of this result: it uses the Lefschetz trace formula to re-express these statistics of L-functions as an alternating sum of traces of Frobenius on certain cohomology groups. Then you need two things to control these cohomology groups. One: you would like to understand the groups near the top very well — that's what the stability range does. The others you just want to bound, and those you handle using Deligne's resolution of the Weil conjectures plus the exponential Betti number bounds on the sizes of the cohomology groups. Combining those two ingredients is what makes it work.
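Schematically, the mechanism just described: the family statistic is rewritten via the Grothendieck–Lefschetz trace formula as

$$\text{(statistic)} \;=\; \sum_{i \geq 0} (-1)^i \,\operatorname{tr}\!\bigl(\mathrm{Frob}_q \mid H^i_c(X; \mathcal{F})\bigr), \qquad \bigl|\operatorname{tr}\bigl(\mathrm{Frob}_q \mid H^i_c\bigr)\bigr| \;\leq\; q^{i/2} \dim H^i_c,$$

with X the relevant moduli space and $\mathcal{F}$ the relevant local system: the groups with i near the top are understood via the stability range, and the remaining ones are bounded using Deligne's weight bound (the inequality above) together with the exponential Betti bounds on dim H^i_c.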
Let me also mention that this is all for a geometric family, like these quadratic Dirichlet L-functions; but for harmonic families, Will Sawin has also basically proven most of the ratios conjecture — with this kind of q restriction — for certain harmonic families, for example Dirichlet characters with fixed conductor or something like that. So it seems the ratios conjecture is starting to come within reach for more general families as well, but that remains to be done.

Okay, so let me return to what I've done with Browning and Glas. I want to give some background on sums of three cubes. Let me focus on the integer case for now, because it's more classical and better known; but I imagine the situation over F_q(t) should be similar — these kinds of problems are usually not outrageously easier over function fields; they're still very sparse and difficult. Anyway, Mordell in 1953 suggested that producing large "general" solutions — not in the special parametric families of solutions that you can sometimes find — to the sum of three cubes equation should be as hard as, say, proving that π is normal. So maybe it's just some impossible statistical problem. Specifically, he asked whether 3 can be written as a sum of three cubes in more ways than the two small solutions that had been known for a long time. And as I said, these solutions are supposed to be very rare, because this equation is log Calabi–Yau. Beautifully — and this has been talked about in a past Number Theory Web Seminar, I think by Sutherland — Booker found a solution for 33, which was very difficult because there's no structure, basically. And 42 was even harder, but he did it with Sutherland. And then 3, finally settling Mordell's question, was even harder still and required a bit of luck. Anyway, I just wanted to illustrate the difficulty.

So we're going to concentrate not on individual equations but on statistics. Let me briefly recap the statistical idea. It's an analogue of the familiar linear algebra trick: a linear map between two finite-dimensional vector spaces of the same dimension that is as injective as possible has to be surjective. We do this a lot in analysis via the second moment method, the statistical analogue of this trick, using Cauchy–Schwarz to double the number of variables in a problem. For example, if F is a cubic form in four variables with non-zero discriminant, then Browning and Vishe proved Manin's conjecture for the doubled hypersurface F(x) = F(y) — so eight variables — away from a concrete exceptional locus. What this means, basically, is that the fibers of F are controlled in mean square, which is saying that F is as close to injective as possible on average. So this is the second moment method: if you set up a suitable counting function and use Cauchy–Schwarz in the right way, this shows that the set of n for which the representation function r_F(n) is non-zero has positive density. The first moment is easy to compute; the second moment is upper bounded by the Browning–Vishe result. This is a classical strategy, and we're going to do something similar.
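The second-moment trick in one line: writing r_F(n) for the representation function, Cauchy–Schwarz gives

$$\Bigl(\sum_{n \leq X} r_F(n)\Bigr)^{2} \;\leq\; \#\{n \leq X : r_F(n) > 0\} \cdot \sum_{n \leq X} r_F(n)^2,$$

so as long as the second moment is at most a constant times (first moment)²/X — and the second moment is exactly a count of solutions to F(x) = F(y), which is where the Browning–Vishe-type input enters — a positive density of n ≤ X are represented.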
So if you want to produce sums of three cubes, it is enough to control a second moment, and that second moment corresponds to a six-variable equation: a sum of three cubes equals a sum of three cubes. Let me briefly mention that the best unconditional results over the integers are due to Vaughan and Wooley; but I will concentrate on the F_q[t] case, where better results are known, and I will explain why. Specifically, for convenience let me change variables and move everything to one side, replacing (u, v, w) with (−u, −v, −w). So we'll be interested in the six-variable cubic form F, a sum of six cubes, and specifically in N_F(B), the number of x = (x_1, ..., x_6) with all degrees at most B and F(x) = 0. We want to count solutions, and we would like to show that N_F(B) is at most on the order of q^(3B).

Let me first discuss conjectures on this topic, and then results. The basic prediction is the Hardy–Littlewood prediction: N_F(B) should grow like a product of local densities times q^(B(6−3)) — six is the number of variables, three reflects how difficult it is to solve a cubic equation. So q^(3B) is the expected order; the subtle point is the constant in front (a toy illustration of the local densities follows below). In the Hardy–Littlewood prediction it's a simple constant: just the product of local densities measuring the biases of the equation at the different places. But Hooley showed, at least over the integers, that this prediction is actually false: there are solutions not accounted for by the Hardy–Littlewood prediction. Namely, there are also structured, or linear, solutions: if you want to make F — the sum of six cubes — vanish, one way is to pair up the variables and make the pairs cancel. There are also on the order of q^(3B) solutions of that kind. Hooley, Manin, and others have suggested that perhaps you should just add up the two terms — the Hardy–Littlewood "randomness" prediction plus the structured linear solutions — plus an error term. The difficulty of this conjecture, which is reflected in the fact that the two main terms are of completely different natures yet of the same order of magnitude, is that it lies beyond the classical circle method. If you write the count as an integral and then apply pointwise Weyl-type estimates, you cannot even in principle succeed, because of the square-root barrier: you can't do better than square-root cancellation pointwise. But the Kloosterman method, which I will discuss soon, expresses the point count dually: it harmonically decomposes the difficult part in another way, which opens the door to progress — it no longer relies on pointwise Weyl bounds, and it really exploits a lot of averaging. Let me also mention that we're only going to be interested in second moments, that is, in this six-variable point count; for higher moments and other statistics, see the work of Desiree, Eckhart, and Lendro — I think there has been a talk on this before.
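Here is that toy illustration of local densities (my own sketch, over Z/p rather than F_q[t], with a hypothetical helper name): the local factor at p compares the number of solutions of x_1³ + ... + x_6³ ≡ 0 (mod p) to p⁵. Cubing is a bijection when p ≡ 2 (mod 3), so the density there is exactly 1, while genuine biases appear at p ≡ 1 (mod 3).

```python
from collections import Counter

def local_density(p, n=6):
    """#{x in (Z/p)^n : x_1^3 + ... + x_n^3 = 0} / p^(n-1): the local
    factor at p in a Hardy-Littlewood-type singular series (toy version)."""
    cubes = Counter(pow(x, 3, p) for x in range(p))  # value distribution of one cube
    dist = {0: 1}
    for _ in range(n):  # n-fold additive convolution mod p
        new = Counter()
        for a, ca in dist.items():
            for b, cb in cubes.items():
                new[(a + b) % p] += ca * cb
        dist = new
    return dist[0] / p ** (n - 1)

for p in [5, 7, 11, 13, 31]:
    print(p, local_density(p))  # exactly 1.0 at p = 2 (mod 3); biased at p = 1 (mod 3)
```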
Okay, so what is known? Using this method, which builds on work of Hooley and Heath-Brown, it's known that N_F(B) is at most q^((3+ε)B). So we're pretty close to what we want, which is q^(3B); here ε can be anything. And what this uses is GRH for the L-functions mentioned earlier, associated to the hyperplane sections V_c.

Here, roughly, is the Kloosterman method. These L-functions first came up in work of Hooley over the rationals, but I'll explain it over the function field instead. Basically, you use the circle method: you write your point count as a certain Fourier integral, then break up the circle you're integrating over into arcs. Don't worry too much about the parameters, but the arcs are approximated by Farey fractions a/r, where r has bounded size, |r| ≤ q^(3B/2). Then you average over these arcs as much as possible. In particular, you can average over the numerators a associated to any given denominator r; that already helps a lot, but we want to go further and also average over r. The point is that you express this difficult counting question as an average of certain nicer quantities over moduli r up to some size. And instead of summing over x of size q^B, you now have a dual sum over c of size up to q^(B/2), which is shorter — there's a Poisson summation argument here somewhere. Eventually, the multiplicative quantities you get are connected to 1/L, as I'll expand on on the next slide, and this can be bounded under GRH for Re s bigger than one half. One difficulty is that these L-functions come coupled with other subtle factors related to the discriminant I mentioned earlier — related to the singularities of these hyperplane sections V_c. The near-optimal bound, with its 3 + ε, eventually follows from this method. But a difficulty in removing the ε is that there are many different sources of ε in this argument: it's not just one thing that has to be fixed; several things have to be resolved.

More specifically, what this method does is write the point count N_F(B) as a dual sum over c up to size q^(B/2) and over moduli r up to size q^(3B/2), involving oscillatory integrals I_{c,r} and exponential sums S_{c,r} — complete exponential sums, roughly Fourier transforms built out of a F(x) + c · x. The point is that S_{c,r} is multiplicative in r, and it relates to the geometry of the varieties V_c that I mentioned earlier. Before proceeding, let me briefly mention that, as is well known, the central term c = 0 leads to the Hardy–Littlewood prediction — but not to the other main term conjectured by Hooley, Manin, and so on. So something extra needs to be extracted from the other parts of this expression, and more. As I said, in the S_{c,r} you can see F(x) and c · x, so they're related to the geometry of the intersection of those two.
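Schematically — up to normalizing factors, my paraphrase — the dual expansion has the shape

$$N_F(B) \;=\; \sum_{\substack{r \ \text{monic} \\ |r| \,\leq\, q^{3B/2}}} \;\; \sum_{|c| \,\leq\, q^{B/2}} I_{c,r}\, S_{c,r}, \qquad S_{c,r} \;=\; \sum_{a \bmod r}^{*} \; \sum_{x \bmod r} \psi\!\left(\frac{a\,F(x) + c \cdot x}{r}\right),$$

with ψ a nontrivial additive character and the star restricting a to residues coprime to r; square-root cancellation in the 7 summation variables (a and x_1, ..., x_6) suggests the normalization |r|^(−7/2) S_{c,r} discussed next.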
And Hooley noted that if you properly normalize these exponential sums by square-root heuristics — so |r|^(−7/2) S_{c,r} — then this resembles the coefficient of one over an L-function. Where this comes from: if you look at this normalized sum, you can relate it to a point count by direct computation, and when you apply the Lefschetz trace formula there's a sign (−1)³ — and that's why you get a Möbius-type object, 1/L rather than L. This uses the fact that these varieties are odd-dimensional. And as I said, you can use GRH to get square-root cancellation over r — there are other things to worry about too, but let me not explain those right now. What they get is the 3 + ε bound unconditionally, because GRH is known over function fields by Deligne.

But there are other sources of ε, and in particular we haven't really discussed the locus where the discriminant is zero. It turns out you can unconditionally prove that the locus Δ(c) = 0 gives you the actual full second main term conjectured by Hooley and others. The reason, in a nutshell: whereas we previously saw that these normalized sums are typically of size one when the discriminant is non-zero mod π, if the discriminant is zero then you typically have a bias — these sums are bigger, by roughly a factor |π|^(1/2), than you would expect. And somehow that bias adds up, through the circle method, to give you exactly the main term conjectured from the special linear solutions of the six-variable equation. Ultimately this bias comes from results of Beauville on certain quadric bundles — we use this twice, actually: once for quadric surfaces, once for conics.

So what remains is to analyze the locus where Δ(c) is non-zero, and there we would basically like to use the ratios conjecture. A difficulty is that L-functions in the function field case carry a bit less information: you don't have tools like partial summation available in any natural way. But a beautiful symmetry observation lets you factor out the integral in dyadic ranges. What that leads to is estimating sums Σ_B(N_0, N_1): by dyadic decomposition, with the integral factored out, you split the moduli r into two parts — the singular moduli, which divide the discriminant, and the others, which are coprime to the discriminant — and these behave rather differently. What Glas and Hochfilzer showed, to start, is that each of these individual dyadic sums is of size at most q^((3+ε)B). What we do is remove the ε and also improve this estimate in two other aspects: we get decay in the bad-modulus size parameter N_0, and also in the other parameter 3B/2 − N_0 − N_1. When these are tiny, we mostly use the ratios conjecture and other L-function techniques to get the desired cancellations; in general, we need other ideas, based on the size and factorization of discriminants, which is a new input. Let me say this briefly, because I may be short on time.
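To spell out the normalization from the start of this discussion: for good π (that is, π not dividing Δ(c)), unwinding S_{c,π} by orthogonality and applying the Lefschetz trace formula gives, schematically,

$$\tilde S_{c,\pi} \;:=\; |\pi|^{-7/2}\, S_{c,\pi} \;=\; -\sum_{j} \alpha_{j,\pi} \;+\; O\bigl(|\pi|^{-1/2}\bigr),$$

minus the Frobenius trace on the middle cohomology of the threefold V_c — which is precisely the π-th Dirichlet coefficient of 1/L(s, V_c) to first order; the minus sign is the (−1)³ just mentioned.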
For example, one of the ingredients we use besides ratios: we prove an analogue of Sarnak's density hypothesis, to handle certain failures of a Ramanujan-type conjecture — that is, of square-root cancellation. The ingredients in that include work of Hooley establishing square-root cancellation not just for what the Weil conjectures give you, namely smooth varieties, but even for mildly singular varieties; and also work of Busé and Jouanolou on discriminants, Poonen on the squarefree sieve, and Ekedahl on the geometric sieve.

And then, the way we apply the ratios conjecture: with some work — applying Poincaré duality and such — you can approximate these normalized exponential sums, or rather their Dirichlet series at good primes, well by 1/L. But you need to go even further than Hooley did: you need to introduce a further correction factor, related to Chebyshev's bias and things like that. This is the P(s) we saw earlier when I formulated the ratios conjecture. The point is that when you plug in the ratios conjecture and do a suitable averaging and contour-shifting argument, you eventually get a log-free square-root cancellation bound for the normalized exponential sums S̃_c. And that's one of the reasons we can remove the ε from the Glas–Hochfilzer bound — but we need the other ingredients too.

So let me summarize briefly. Here's the main result again: we assume the ratios conjecture — a standard random matrix theory type prediction — for this family of Hasse–Weil L-functions, and we get positive density for sums of three cubes. The proof combines different biases and cancellations that go beyond the Weil conjectures. There's the Beauville-type result that gives local biases on the boundary Δ = 0; then a lot of other work to establish cancellations near the boundary; and then you also need global cancellations when the discriminant is close to squarefree and of maximal size — there we have no other tools, and we need the ratios conjecture. And probably behind it, as Sarnak might say, there should be some kind of topology beyond the Weil conjectures. So let me stop there. Thank you very much for listening.