So I'm going to start today's lecture by finishing what I talked about yesterday, giving some idea of the proof. And when I'm done with that, I'll move on to a totally different topic: a number of other questions related to automorphic forms. Well, let me first remind you what we were doing. We had a variety X inside A^n defined by a polynomial f of degree d. We fix a_1, ..., a_n in k, where the associated projective point is in the vanishing locus of the leading term of f. And then we define Mor_e(A^1, X) to be the variety, or scheme, parameterizing tuples g_1, ..., g_n of polynomials of degree at most e solving f — so f(g_1, ..., g_n) = 0 — and with leading coefficients a_i. You mean the coefficients in degree e? Yes — the coefficients in degree e; they could vanish rather than literally be leading. So tuples of polynomials solving this equation are the same thing as morphisms from A^1 to X. And by bounding their degree, I've given myself a finite type moduli space. And by fixing the leading coefficients, it turns out I make that moduli space very slightly easier to study. So our goal is to compute the compactly supported cohomology of this moduli space over an algebraically closed field, with coefficients in the ℓ-adic numbers, for i large. And I said last time how large an i we would be able to do it for. The idea of the proof is, rather than using the cohomology to count points, we're going to first estimate the number of rational points, the number of F_q-points, and then geometrize the same techniques. So let me go over here first. How can we count this number of points? Well, I want to take the counting problem we have and split it into an easy part and a hard part. So we were counting tuples g_1, ..., g_n in F_q[t], of degree at most e, with f(g_1, ..., g_n) = 0, and with leading coefficients a_i.
So I want to express that, in a way that's kind of silly, as a sum over tuples satisfying most of those conditions — degree at most e, leading coefficients a_i — of the function that is 1 if f(g_1, ..., g_n) = 0 and 0 otherwise. Roughly speaking, the set we're summing over is a set that I understand pretty well. It's very easy to count its elements: I have e free coefficients for each of the n polynomials, and each takes q values, so this set has size q^{ne}. And there are lots of simple functions which I know how to sum over this set — for example, functions that only depend on the congruence class of a polynomial modulo some polynomial of small degree. So the general strategy is to express the function I want to sum in terms of such simpler functions. And there is a very general tool that we use repeatedly in analytic number theory when we have this kind of function that's 1 if an equation is satisfied and 0 otherwise, which is to detect by characters: use the fact that the average of the characters of a group at any element is 1 if the element is trivial and 0 otherwise. So let me write this. It's a sum over the same set — g_1, ..., g_n in F_q[t], degree at most e, leading coefficients a_i — of 1/q^{de} times a sum over characters. What group does f(g_1, ..., g_n) lie in? It lies in the space of polynomials of degree less than de, because the g_i have degree at most e and f has degree d, so you get degree at most de, but the leading coefficient vanishes. And that space has q^{de} elements, which accounts for the factor 1/q^{de}. I'm going to parameterize the dual group using linear maps: alpha ranges over linear maps from P_{de} to F_q, where P_{de} is the space of polynomials of degree less than de — just a vector space over F_q — and psi is a character of the additive group of F_q, a nontrivial homomorphism.
So I'm taking a linear form on this vector space, an element of the dual space, and I'm evaluating it on f(g_1, ..., g_n), which I'll write as a dot product, so I get an element of the base field F_q; then I apply this character psi and get a complex number. And when I sum that complex number over all the alpha: if f(g_1, ..., g_n) is 0, this dot product is always 0, so each term is 1, and I'm just summing 1 — I get q^{de}. But if f(g_1, ..., g_n) is not 0, the sum cancels. It's the same basic algebraic fact we use in Fourier inversion and all kinds of things: orthogonality of characters. Using it, I have rewritten my sum, and now I want to do the standard move: whenever I have two sums in number theory, I always want to try to reverse the order of summation and see if that helps. So you get 1/q^{de} out front, a sum over alpha in the dual of P_{de}, of the sum over g_1, ..., g_n in F_q[t], degree g_i at most e, leading coefficients a_i, of psi(alpha · f(g_1, ..., g_n)). And I'll call this inner sum S_e(alpha). It depends on some other stuff, but I'll suppress the dependence on other things. So we've transformed the counting problem into a sum of these functions S_e(alpha). And it's very much not obvious that we've improved the situation at all — that it's easier to understand these functions S_e(alpha) than our original sum. But it turns out to be the case, and a big part of the reason is that we can now analyze S_e(alpha) by different techniques for different alpha. For some alpha, it's not hard to calculate the value of this sum. In particular, for alpha = 0 the dot product is just 0, each term is 1, and the sum is just q^{ne}. So alpha = 0 is very easy to study.
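The orthogonality fact underlying this detection by characters is easy to check numerically. Here is a minimal sketch over a prime field F_p (p = 7 standing in for q): averaging psi(alpha * x) over all alpha gives the indicator of x = 0.

```python
import cmath

p = 7  # a small prime, standing in for q

def psi(x):
    """A nontrivial additive character of F_p: (F_p, +) -> C^*."""
    return cmath.exp(2j * cmath.pi * (x % p) / p)

def indicator(x):
    """Detect x == 0 in F_p by averaging psi(alpha * x) over all alpha."""
    return sum(psi(alpha * x) for alpha in range(p)) / p

# The average is 1 when x is 0 mod p and 0 otherwise.
for x in range(p):
    expected = 1.0 if x % p == 0 else 0.0
    assert abs(indicator(x) - expected) < 1e-9
```

The same computation works with any finite abelian group in place of F_p; only the character changes.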
And the alpha = 0 term already gives us the most naive expectation. We have de equations and ne variables, so we might expect there to be q^{ne - de} solutions, and that's exactly what we get from the alpha = 0 contribution to the sum. More generally, consider alpha such that alpha · f depends only on f mod h, for h of small degree. What's one way of constructing linear forms on a space of polynomials? You take some fixed polynomial h, reduce your polynomial mod h to get an element of the quotient space, and take any linear form on that quotient space. That's a way of constructing very special linear forms, different from a general linear form. And for those the sum is easy to evaluate, because this polynomial mod h only depends on the g_i mod h. So you get a sum over residue classes mod h; if h factors into primes, the sum factors over the primes, and you end up with sums over finite fields. And you can apply Deligne's theorem to these sums over finite fields: you can evaluate them fairly explicitly, you can bound them, and you can relate them to counting points — solutions of f over finite fields. So for such alpha, we evaluate S_e(alpha) explicitly, and these give the main term in our sum. In fact, the constant we expect in Manin's conjecture — the constant we get as a product over primes involving counts of solutions mod each prime — will come from these alpha of this very special form. And then for all other alpha, we bound S_e(alpha). So if alpha is special, we can calculate the sum, but the sum is often very large — in the worst case, alpha = 0, the sum is as big as it could possibly be, q^{ne}. If alpha is not special, if it's general, if it's not of this form, then the sum is not easy to calculate, but at least it's probably small.
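The whole identity — counting via characters, with the alpha = 0 term supplying the naive main term — can be seen in a toy model with plain variables instead of polynomial tuples. This is a sketch over F_5 with a sample quadratic form standing in for f; nothing here is specific to the form chosen.

```python
import cmath
from itertools import product

p = 5
n = 3

def f(x):
    """A sample ternary quadratic form, standing in for the polynomial f."""
    return (x[0]**2 + x[1]**2 - x[2]**2) % p

def psi(x):
    return cmath.exp(2j * cmath.pi * (x % p) / p)

# Direct count of solutions f = 0 in F_p^n.
direct = sum(1 for x in product(range(p), repeat=n) if f(x) == 0)

# The circle-method count: (1/p) * sum over alpha of S(alpha), where
# S(alpha) = sum over x of psi(alpha * f(x)).
def S(alpha):
    return sum(psi(alpha * f(x)) for x in product(range(p), repeat=n))

circle = sum(S(alpha) for alpha in range(p)) / p
assert abs(circle - direct) < 1e-6

# The alpha = 0 term alone is p^n, so its contribution (1/p) * S(0) is
# the naive main term p^(n-1): one equation cutting down n variables.
assert abs(S(0) - p**n) < 1e-9
```

For a nondegenerate ternary quadratic form the direct count is exactly p^2, matching the main term, with the other alpha contributing the (here vanishing) error.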
So we just try to show an upper bound on the absolute value. And then by combining the exact calculations for some alpha with bounds for the remaining alpha, we get our counting estimate, which has a main term and an error term. So why is this called the circle method? This is the circle method — the basic strategy of introducing these characters, switching the order of summation, splitting up the set we're summing over, and applying different techniques on different pieces. Why "circle"? Well, this was developed over the integers, where you'd be counting solutions to problems like: x_1, ..., x_n in Z, |x_i| at most some bound B, f(x_1, ..., x_n) = 0. So you'd end up with a sum where the key term, the complicated function you're trying to sum, is 1 if f(x_1, ..., x_n) = 0 and 0 otherwise — but now the equation is over the integers. You're trying to detect that something is 0 over the integers by characters, so you use characters of the dual group, and the dual group of the integers is the circle. So this ends up being an integral over alpha in the circle, R mod Z, of e^{2 pi i f(x_1, ..., x_n) alpha} — a character evaluated at an integer, a perfect pairing between the integers and the circle. And then you switch the order of summation and break the circle up into various arcs. That's the classical circle method. Here, instead, we have a pairing between a vector space over a finite field and its dual vector space, and we switch the order of summation and break up that dual vector space. If you're working with the ring of integers of a more general number field, do people do something like the circle method where the circle is replaced by the Pontryagin dual of the ring of integers? Exactly — it would be a torus.
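The integral over the circle detects vanishing of an integer exactly, since the integral of e^{2 pi i n alpha} over R mod Z is 1 iff n = 0. A discrete stand-in makes this checkable in code: replacing the circle by Q-th roots of unity detects n = 0 exactly for any |n| < Q.

```python
import cmath

def detect_zero(n, Q):
    """Discrete stand-in for the integral over the circle:
    (1/Q) * sum_{a mod Q} e^(2*pi*i*n*a/Q) equals 1 iff Q divides n.
    For |n| < Q this detects n == 0, just as the exact integral
    over alpha in R/Z of e^(2*pi*i*n*alpha) does."""
    return sum(cmath.exp(2j * cmath.pi * n * a / Q) for a in range(Q)) / Q

Q = 100
for n in range(-20, 21):
    expected = 1.0 if n == 0 else 0.0
    assert abs(detect_zero(n, Q) - expected) < 1e-7
```

This is the same orthogonality-of-characters fact as before, now for the group Z/Q approximating the pairing between Z and the circle.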
Yeah. And notice I'm doing it here using a subset of the polynomial ring. You could do what I'm doing using the full polynomial ring, where the dual would be some kind of profinite group, and it would be equivalent to what I'm doing. So, to understand what this means geometrically, we need a geometric interpretation of this function S_e(alpha), because the whole circle method is about studying this function. We'd like to interpret it as a sheaf. And there are two parts to interpret: the sum, and the character. The sum we interpret by a highly general method. The Grothendieck–Lefschetz fixed point formula says that if you have a map of varieties pi: X -> Y and a sheaf F on X, then for y in Y(F_q), the sum over i of (-1)^i times the trace of Frobenius on the stalk of R^i pi_! F at y equals the sum over x in pi^{-1}(y)(F_q) of the trace of Frobenius on the stalk of F at x. So if I have a function expressed as the trace of Frobenius on the stalks of a sheaf, then summing that function over the fibers of a map can be expressed using the derived compactly supported push forward of the sheaf: I get another sheaf — really a complex of sheaves — which computes the sum. And here we're summing over an affine space, which is very easy to express as the fiber of a map: we just have a projection map. So the main thing needed to geometrically interpret S_e(alpha) is to construct a sheaf whose trace of Frobenius is given by this psi function. And that is done by Artin–Schreier theory. This function is a polynomial function on some big affine space whose coordinates are the coefficients of all these polynomials, and you can use that polynomial to build a covering of affine space.
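The decategorified content of the fixed point formula — pushforward computes fiberwise sums of trace functions — can be sketched on finite sets. This toy model (the sets, the map pi, and the function t are all made up for illustration) replaces a sheaf by its trace-of-Frobenius function.

```python
# Decategorified picture: a "sheaf" on a finite set X is represented by its
# trace-of-Frobenius function t: X -> C.  The compactly supported pushforward
# along pi: X -> Y then has trace function  y -> sum of t over the fiber.

X = [(g, a) for g in range(5) for a in range(3)]   # toy product space

def pi(x):
    """Projection to the second factor, like the lecture's projection map."""
    return x[1]

def t(x):
    """An arbitrary 'trace function' on X."""
    g, a = x
    return g * g + a

def pushforward_trace(y):
    """Trace function of the pushforward of t: sum over the fiber pi^{-1}(y)."""
    return sum(t(x) for x in X if pi(x) == y)

# Summing the pushforward over Y recovers the total sum over X,
# mirroring the Leray/compactly-supported-pushforward bookkeeping.
assert sum(pushforward_trace(y) for y in range(3)) == sum(t(x) for x in X)
```

The geometric statement is of course much stronger — it packages all the sums S_e(alpha) at once into a complex of sheaves — but the trace-function shadow is exactly this.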
So y^q - y = alpha · f(g_1, ..., g_n) defines a finite étale covering of A^{de} x A^{ne}. The A^{de} coordinates are the coordinates of alpha, the A^{ne} coordinates are the coefficients of g_1, ..., g_n, and y is the one new coordinate I'm using to define the covering. And this finite étale covering has Galois group F_q, because if you add any element of F_q to y, you get another solution of this equation — y^q - y is preserved. So that gives us a homomorphism from the étale fundamental group of A^{de} x A^{ne} to F_q, which we can then send to the ℓ-adic numbers using psi. And so this gives us a one-dimensional representation of the étale fundamental group, which defines a sheaf, which we'll call L_psi(alpha · f(g_1, ..., g_n)). You have to take an extension of the ℓ-adic numbers for that. Oh, yes — very good point. And it's actually very important for the argument that I do that: I don't want to choose an ℓ such that the p-th roots of unity are all contained in Q_ℓ. And then, if you unravel the definitions, the trace of Frobenius on the stalk at a point will equal the value of the psi function at that point. That's just because of what Frobenius does: it sends y to y^q, which by this equation is the same as adding alpha · f(g_1, ..., g_n). So you then take this sheaf and push it forward along pi, the projection from A^{de} x A^{ne} to just A^{de}, and this gives you a complex of sheaves on that affine space. And what you can guess from this identity up here is that the cohomology of this complex of sheaves recovers the cohomology groups we want, up to some shifting and Tate twisting. And once you have that statement in mind, you can prove it using the theory of the ℓ-adic Fourier transform, which is a geometrization of the classical theory of the Fourier transform, and which uses exactly these Artin–Schreier sheaves.
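The basic Artin–Schreier mechanism is concrete enough to verify by hand in a small field. The sketch below works in F_9 = F_3[u]/(u^2 + 1) (a choice made for this illustration) and checks two facts behind the covering: the equation y^p - y = a has solutions exactly when the trace of a vanishes, and when it does, the p solutions form a torsor under F_p.

```python
p = 3
# F_9 = F_3[u]/(u^2 + 1); u^2 + 1 is irreducible mod 3.
# Elements are pairs (a, b) meaning a + b*u.

def add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def mul(x, y):
    # (a + b*u)(c + d*u) = (ac - bd) + (ad + bc)*u, using u^2 = -1.
    a, b = x; c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def frob(x):
    """Frobenius y -> y^p on F_9."""
    r = (1, 0)
    for _ in range(p):
        r = mul(r, x)
    return r

F9 = [(a, b) for a in range(p) for b in range(p)]

def artin_schreier_count(a):
    """Number of y in F_9 with y^p - y = a."""
    return sum(1 for y in F9
               if add(frob(y), (-y[0] % p, -y[1] % p)) == a)

def trace(a):
    """Tr from F_9 to F_3: a + a^p."""
    return add(a, frob(a))

# Solutions exist iff Tr(a) = 0, and then there are exactly p of them
# (adding any c in F_p to a solution gives another: the F_p-torsor structure).
for a in F9:
    assert artin_schreier_count(a) == (p if trace(a) == (0, 0) else 0)
```

In the lecture the covering uses y^q - y over the whole parameter space A^{de} x A^{ne}, with Galois group F_q; the toy above is the fiberwise picture for the p-power map.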
So basically it suffices to calculate the cohomology of A^{de} over k-bar with coefficients in this R pi_! L_psi. Because a lot of you might not find these Artin–Schreier sheaves so intuitive, I'll point out that there's a different way of doing it. Instead of taking the cohomology of affine space twisted by this alpha, you can consider just the cohomology of the hypersurface where this dot product is 0. If you think about it, X is defined by this linear system of equations, the alpha · f(g_1, ..., g_n), so there's a family — a high-dimensional linear system — of these hypersurfaces. And if you take the cohomology of the base parameterizing the hypersurfaces with coefficients in the cohomology of the hypersurfaces, the Leray spectral sequence gives you the cohomology of the total space, and you can relate the cohomology of the total space back to the cohomology of X — the total space is like some blow-up of affine space at X, that kind of thing. So in both cases you're led to this problem of calculating cohomology over the space parameterizing linear forms. And the only reason that's easier is that we can break that space into different subsets and apply different techniques on different pieces. In particular, A^{de} admits a stratification into closed subsets A^{de}_m. One way to express it: A^{de}_m is the closure of the set of linear forms obtained by evaluation at m points. Evaluating at any point gives you a linear form; evaluating at m points, you can take any linear combination of those linear forms. The set of all linear forms on polynomials you get that way is locally closed, and I'm considering the closure — so maybe my points can collide or go off to infinity. And it has a nice description in terms of the rank of some matrix being small. For m = 0, this only picks up the 0 form.
Then for m = 1 it picks up a two-dimensional space of linear forms evaluating at points, for m = 2 a four-dimensional one — larger and larger closed subsets. And for m less than or equal to e, we can calculate the stalks of R pi_! L_psi on A^{de}_m explicitly. Using the fact that alpha is a linear form that only depends, say, on the values at m points, you can express this sheaf as a tensor product of sheaves coming from each of those points, and use the Künneth formula — the cohomology of a tensor product is the tensor product of the cohomologies — to get a bunch of simpler cohomology problems, and a very explicit answer for what the cohomology can look like. And not only can we calculate the stalks, we can give a nice description of the sheaf on the strata, like A^{de}_m minus A^{de}_{m-1}. And these are the inputs to a spectral sequence: any time you have a space with a stratification, you get a spectral sequence computing the cohomology of the space from the cohomology of the strata. We calculate the cohomology of each of these strata by fairly formal manipulations of this sheaf, and that feeds the spectral sequence. And then at points not in A^{de}_e, we prove a vanishing result for the cohomology in high degrees. The vanishing result uses a technique called Weyl differencing, which takes these exponential sums, multiplies them by their complex conjugates, and reduces the exponential sum problem back to a counting problem. So what we did is write down a geometric version of a Weyl differencing argument that reduces the cohomology vanishing problem to a problem of bounding the dimension of some variety. Then we use the Lang–Weil bounds to reduce that to a counting problem, and for the rest of it we follow the numerical argument exactly. The degrees of what? Oh, yeah.
So the trivial bound — I shouldn't say trivial; the bound from cohomological dimension — is that, since this is a variety of dimension ne whose cohomology we're taking, the cohomology vanishes for i greater than 2ne. And what we prove is vanishing above 2ne minus something linear in m, linear in how far into the stratification we are — I'm not going to get the constant right, but it's something explicit like m times n over 2^{d-1}. So as m grows, the strata get higher and higher dimensional, but the cohomology appears in smaller and smaller degrees, because our cohomological dimension result gets better and better. Oh — Weyl differencing. And it's this geometric form of Weyl differencing that leads to the funny condition on ℓ that I mentioned yesterday, that ℓ should have even order modulo the characteristic. Weyl differencing involves taking the complex conjugate of the exponential sum and using the fact that the complex conjugate has the same size. Doing that for character sheaves is tricky without independence-of-ℓ results, unless ℓ is such that the complex conjugate of this character can be obtained by an automorphism of Q_ℓ-bar. So if psi is such that the inverse character — the complex conjugate character — is conjugate to psi by an element of the Galois group of Q_ℓ, then we can run this Weyl differencing argument. Sufficient independence-of-ℓ results would remove that assumption, of course. It is known in some settings that you have independence of ℓ, but not what you'd need for individual sheaves, which is kind of difficult. Yeah, exactly. But for pure perverse sheaves — if you restrict to pure perverse sheaves, can you still do it? I don't think so, because I have no reason to believe that these push forwards are perverse. OK, OK.
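The elementary identity behind Weyl differencing — multiplying an exponential sum by its complex conjugate and changing variables, which lowers the degree of the phase by one — can be checked directly over a prime field. A sketch with a cubic phase (the specific f is arbitrary):

```python
import cmath

p = 11

def psi(x):
    return cmath.exp(2j * cmath.pi * (x % p) / p)

def f(x):
    return x**3 + 2 * x          # a degree-3 phase

S = sum(psi(f(x)) for x in range(p))

# Weyl differencing: |S|^2 = sum over x, y of psi(f(x) - f(y)); substituting
# x = y + h gives a sum over h of sums with phase f(y+h) - f(y), which has
# degree 2 in y -- one lower than f, so each inner sum is an easier
# (here: quadratic Gauss) sum.
differenced = sum(psi(f(x + h) - f(x)) for h in range(p) for x in range(p))

assert abs(abs(S)**2 - differenced.real) < 1e-6
assert abs(differenced.imag) < 1e-6
```

Iterating this reduces any polynomial phase to a linear one; the geometric argument in the lecture is a sheaf-theoretic version of the same squaring step, which is where the complex conjugate of psi — and hence the condition on ℓ — enters.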
Another geometric question: you consider m distinct points, but could you instead consider Hilbert schemes of degree-m subschemes — because I'm thinking the closure doesn't matter? But can you recall for me whether the Hilbert scheme is the closure of the part where you have distinct points? It is, yeah — I'm on a curve, so it is the closure. Ah, on the curve. So they're quite simple, OK. Yes. And the reason I expressed it as a closure is that the closure also allows the points to go off to infinity. There's not a very simple way to express what happens as the points go off to infinity, but just saying "the closure" is pretty simple, and that's good enough. Let me mention the explicit criterion also, because it's very simple. You take the matrix whose first row is alpha evaluated at the polynomials 1, t, up to t^m; whose second row is alpha at t up to t^{m+1}; and so on, with the last row running from alpha(t^{de-1-m}) up to alpha(t^{de-1}). The rank of this matrix is at most m if and only if alpha lies in A^{de}_m. In other words, what we want is for the sequence alpha(1), alpha(t), ..., alpha(t^{de-1}) to satisfy a linear recurrence of length at most m. That's the definition. So does anyone have further questions about this? Yeah, just a small one: you stated something over a general field, but then in the proof it was over finite fields. Oh, yes. The point is that you can use spreading out to go to the finite field case and use this Artin–Schreier stuff. So if you're in characteristic zero, you can always choose a characteristic where the whole argument works, and you reduce to a nice finite field.
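The rank/recurrence criterion is easy to see in an example. A linear form built from evaluation at m points gives a sequence alpha(t^i) that is a linear combination of geometric progressions, so it satisfies the length-m recurrence whose characteristic polynomial has the points as roots, and every (m+1) x (m+1) Hankel minor vanishes. A sketch with m = 2 (the points, weights, and prime are arbitrary choices for illustration):

```python
p = 13
points = [3, 5]          # evaluate at m = 2 points of F_p
weights = [2, 7]         # an arbitrary linear combination of the evaluations

def alpha(i):
    """alpha applied to the monomial t^i, for the linear form
    alpha(g) = 2*g(3) + 7*g(5) on polynomials over F_p."""
    return sum(w * pow(x, i, p) for w, x in zip(weights, points)) % p

# Characteristic polynomial (t - 3)(t - 5) = t^2 - 8t + 15 gives the
# length-2 recurrence  alpha(t^{i+2}) = 8*alpha(t^{i+1}) - 15*alpha(t^i).
for i in range(10):
    assert alpha(i + 2) == (8 * alpha(i + 1) - 15 * alpha(i)) % p

# Equivalently, every 3x3 Hankel minor of the sequence vanishes mod p,
# i.e. the Hankel matrix of the sequence has rank at most 2.
def hankel_det3(i):
    M = [[alpha(i + r + c) for c in range(3)] for r in range(3)]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0])) % p

assert all(hankel_det3(i) == 0 for i in range(6))
```

Colliding points correspond to repeated roots of the recurrence (derivative-of-evaluation forms), which is why the closure still admits the same rank description.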
And then if you have cohomology vanishing in all the finite field specializations of a point — or in most of them — that gives you cohomology vanishing or cohomology isomorphism results at the generic point. Would this be the Whitney stratification for the push forward, or would these just be local systems on the open loci? So, no: even for m greater than e, there's no hope of calculating the Whitney stratification. We just can't describe this sheaf precisely enough to know exactly where it's lisse; we only know it vanishes in certain degrees. For m less than or equal to e, I can describe a stratification where it's lisse. It's a little more refined than this one. When two points collide, here it doesn't matter, but you'll see a singularity of this sheaf. Actually, when m is less than e, part of the proof is that while A^{de}_m is the closure of this set, the sheaf actually vanishes away from the set itself: it vanishes at linear forms lying in the closure of the forms obtained by evaluation at points but not themselves obtained by evaluation at points. So a stratification on which it's lisse would look like: forms evaluating at m points, then the closure, then forms evaluating at m - 1 points, then the closure, and so on — so twice the number of strata. However, in characteristic p the notion of a Whitney stratification is not really well-defined, so I can only exhibit a stratification on which it's lisse; I can't impose the Whitney condition, or at least it's not a meaningful condition. Is there a way to guess this — if one didn't think of A^{de}_m the way you've stated it, wouldn't it be natural to stratify the space into a Zariski open where the sheaf is lisse, then look at the complement, then a Zariski open of that, and so on?
Yeah — certainly if you did that, looking at the high-degree cohomology groups first, you would get the same stratification, because I know the high-degree cohomology groups vanish outside of these specific strata. So if I handed you the sheaf and told you what the stalks were at different points, you would be able to recover the stratification from the sheaf, at least the part of the stratification with small m. So it is a natural stratification to consider when studying the sheaf. But let me say one thing about what this means in very geometric terms. If you think about the version with hypersurfaces — for each alpha, you consider the hypersurface where this equation is 0 — then what the stratification means for the hypersurfaces is that for small values of m, those strata correspond to hypersurfaces with enormously large singular loci; they have huge singularities. For the hypersurfaces at large values of m, we can show the singular locus has smaller dimension, and the fact that the dimension is smaller lets you control the cohomology by general results on hypersurfaces. So another way of saying what we're doing: we have our variety, and we compare its cohomology to that of all these hypersurfaces. Some of them are reasonably smooth, and you can bound their contribution to the cohomology using general bounds for hypersurfaces in terms of their singularities. The others are singular, and for those you have to really explicitly calculate the cohomology, which you do by separating variables — ultimately reducing to the explicit computations from before. So I now want to talk about something different, which is automorphic forms, or more specifically modular forms.
And what I've decided to do, to avoid getting drowned in notation when I talk about this, is to say most things just in the case of classical modular forms — the case of GL_2, more or less — and then gesture at what happens in general. So I want to consider really classical modular forms: holomorphic functions f from the upper half plane to the complex numbers. A modular form of weight k and level N satisfies f((az + b)/(cz + d)) = (cz + d)^k f(z) for (a, b; c, d) in Gamma_0(N), which is the subgroup of SL_2(Z) subject to the condition that c = 0 mod N. And there's also a growth condition, which is annoying: (cz + d)^{-k} f((az + b)/(cz + d)) should be bounded as Im z goes to infinity, for (a, b; c, d) not just in the subgroup but in the whole group SL_2(Z). There are a lot of interesting number-theoretic questions about modular forms and their generalization, automorphic forms — that's probably an understatement. And I guess I should say what Hecke operators are; I don't know if I need to, but I will write the formula in a moment: the Hecke operator T_{p^n} acts on f by a sum over a double coset of Gamma_0(N). The most important question is that there should be some relationship between modular forms and Galois representations: some vector space of modular forms should be isomorphic to a vector space of functions on certain Galois representations, an isomorphism satisfying some expected properties. A lot of cases of this conjecture are known over the rationals. But over function fields even more progress has been made, in particular for the analog of classical modular forms. So over the function field F_q(C), with C an algebraic curve over F_q, there should be an isomorphism between a space of automorphic forms on GL_2 over F_q(C), and a space of functions on two-dimensional Galois representations, from the Galois group of F_q(C) to GL_2.
And in this case, this was proven by Drinfeld, and Drinfeld did it using geometry — in fact, using two different kinds of geometry. He found two different ways to interpret automorphic forms in the function field context geometrically. So Drinfeld developed two geometric interpretations — I guess three, really: one using just vector bundles, one using Drinfeld modules, and one using shtukas. I'm not going to talk about the Drinfeld modules and shtukas ones today; I just want to mention them because those are the ones very similar to the geometry that appears when we try to study automorphic forms over the rational numbers. For example, the modular curve has geometry similar to a moduli space of Drinfeld modules, so those behave in the way people are most familiar with if they've seen the theory over the rationals. The moduli space of vector bundles behaves in a way that's really different. Even though vector bundles are really simpler objects — it's not very hard to define a vector bundle, and it's easier to get your head around what they're doing than a Drinfeld module or a shtuka — they're stranger from the perspective of the classical theory, in particular because their moduli spaces have these enormous dimensions. But that's the same phenomenon we see in all these analytic problems: the spaces of interest are very high-dimensional.
And I'm bringing up this conjecture on the equivalence of some vector space of automorphic forms with some space of functions on Galois representations because, building on Drinfeld's work, people started to study it in the geometric setting, and it took on an incredible life of its own in the geometric Langlands program. People found a geometric analog — in the same way that this calculation of the cohomology of the mapping space is a geometric analog of counting points — some kind of equivalence of categories between a category of sheaves on a moduli space of vector bundles and a category of sheaves on a moduli space of Galois representations, or representations of the fundamental group, at least. What is very high-dimensional here? It's not high-dimensional. OK — so the moduli space of vector bundles has very high dimension as the genus of the curve grows. Whereas these other moduli spaces don't grow as the genus of the curve grows or as the level grows, but the moduli of bundles does. So what I want to talk about is: what about other questions, in particular analytic questions, about modular forms? Can there be geometric approaches to analytic questions about modular forms, in addition to algebraic ones? And the answer is yes — I think there are a lot of interesting ways to connect the analytic theory of automorphic forms to geometry. So I'm going to talk about two problems in the analytic theory of modular forms. The first, which I expect most of you will have heard of, is the Ramanujan conjecture. The second, which I expect a lot fewer of you will have heard of, is the sup-norm problem. I'll spend a little time — hopefully not too much — talking about each one, what a geometric approach to it would be, and what kind of geometry you see, while trying not to go into too much detail. So first, the Ramanujan conjecture — what is that?
So we have these Hecke operators acting on modular forms. The p^n-th Hecke operator applied to f, evaluated at z, is — I think, if I normalize it by p^{n(k-1)/2} — a sum over matrices (a, b; c, d) running over the double coset Gamma_0(N) (p^n, 0; 0, 1) Gamma_0(N), modulo Gamma_0(N), of the same kind of term as before, (cz + d)^{-k} f((az + b)/(cz + d)). It comes from the fact that this twisted action of SL_2(Z) on the upper half plane really comes from a twisted action of SL_2(R), so in particular elements that lie in GL_2^+(R) but not in SL_2(Z) act on it, and from these elements we produce an operator. And by choosing to sum over a set that's left and right invariant by Gamma_0(N), we make it send forms of level N to forms of level N. So p is not dividing N? Yes, for p not dividing N. And the conjecture is that the eigenvalues of T_{p^n} on cusp forms — forms where, instead of being bounded as Im z goes to infinity, this quantity goes to 0 as Im z goes to infinity — are, I think if I've normalized it right, bounded by n + 1. And the Ramanujan conjecture is known for classical modular forms, and the way it's known is via the Langlands correspondence. You take a modular form that's an eigenfunction of these Hecke operators, and from it construct a Galois representation where the eigenvalues of the Hecke operators are related to the traces of Frobenius on this Galois representation. Then you apply Deligne's theorem to bound the eigenvalues of Frobenius on the Galois representation, and you thereby bound the Hecke eigenvalues. This is a powerful technique, but we haven't currently made it work in all cases.
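For weight k = 12 and level 1 the Hecke eigenvalues are the Ramanujan tau values, and the bound "normalized eigenvalue at p is at most 1 + 1 = 2" reads |tau(p)| <= 2 p^{(k-1)/2} = 2 p^{11/2}. This can be checked for small primes by computing tau from the product expansion Delta = q * prod (1 - q^n)^24:

```python
# Compute tau(n) from Delta = q * prod_{n>=1} (1 - q^n)^24 and check the
# Ramanujan bound |tau(p)| <= 2 * p^{11/2}  (weight k = 12, normalization
# by p^{(k-1)/2} = p^{11/2}, eigenvalue bound n + 1 = 2 at primes).

N = 50  # truncation order of the q-expansion

# Coefficients of prod_{n=1}^{N} (1 - q^n)^24, indexed by exponent.
coeffs = [0] * (N + 1)
coeffs[0] = 1
for n in range(1, N + 1):
    for _ in range(24):              # multiply by (1 - q^n), 24 times
        for i in range(N, n - 1, -1):
            coeffs[i] -= coeffs[i - n]

def tau(n):
    """tau(n) = coefficient of q^n in Delta = q * prod (1 - q^m)^24."""
    return coeffs[n - 1]

# Known values: tau(1) = 1, tau(2) = -24, tau(3) = 252.
assert tau(1) == 1 and tau(2) == -24 and tau(3) == 252

for p in [2, 3, 5, 7, 11, 13]:
    assert abs(tau(p)) <= 2 * p**5.5
```

Deligne's theorem guarantees this bound for all primes; the computation above is of course only a consistency check, not a proof.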
So what I want to talk about is a technique developed by myself and Templier for proving cases of the Ramanujan conjecture which doesn't rely on Langlands reciprocity at all and only involves a little bit of functoriality. And it's based on a kind of different perspective. Will what you speak about apply to any reductive group? Yes, it will apply to any reductive group. May I ask a quick naive question? There's no Archimedean component here, but is there an analogous Selberg conjecture? I mean, I think it would just be the Ramanujan conjecture again, because there's no Archimedean component. You're referring to the Selberg eigenvalue conjecture? In that case, is there also something about the Archimedean component of the representation, analogous to the non-Archimedean components, since it's always a tensor product of local representations, which is also something like the Ramanujan conjecture? Yes. So they can all be formulated in terms of the automorphic representation as saying that the local representations are all tempered, which means that the matrix coefficients of the representations, viewed as functions on the local group, are almost in L^2. If you want the detail of what "almost" means there: in the modern understanding it's interpreted purely representation-theoretically. To clarify, I meant the classical statement about where the eigenvalues lie. Yes, and the answer is yes, it has an interpretation there. So I thought it was a good point to raise, because my method doesn't work, at least not currently, for the Ramanujan conjecture at ramified primes, for showing temperedness at ramified primes. It's specifically a method about eigenvalues of Hecke operators, and so it will only work at unramified primes.
So the observation I want to make about the eigenvalues, and I'll make it first in the classical setting since the same thing works over function fields, is that because there are algebraic relations between these eigenvalues, the statement is equivalent to a number of statements that are seemingly weaker. It's actually equivalent to say the eigenvalue of T_{p^n} is bounded by c(n + 1) for any constant c. That's because you can relate powers of T_p to T_{p^n}: by bounding the eigenvalues of T_{p^n}, you bound the eigenvalues of powers of T_p, and hence the eigenvalues of T_p. And then you amplify: you get a better bound by doing that, so you can get rid of the constant c. So I want to take advantage of that. I want to observe that the Ramanujan conjecture holds for all forms of weight k and level N if and only if the trace of T_{p^n}^2 on the forms of weight k and level N, the cusp forms I guess, is bounded by c(n + 1)^2 for some constant c. Why is this true? Well, the "only if" direction is just because this operator has real eigenvalues, so each square is positive, and if I can bound the trace, I bound each individual eigenvalue. The direction the other way is because, under the Ramanujan conjecture, each term is bounded by (n + 1)^2, and the number of terms is some constant, so you automatically get a bound of this form just from the trivial bound: the trace is just the sum of the squared eigenvalues. So we can interpret the Ramanujan conjecture for this whole family of forms at once as a statement about the trace of this Hecke operator. And that is a statement more amenable to a geometric interpretation, because you don't have to worry about an individual automorphic form; you can just geometrically interpret the space of functions. What about the dimension of the space of cusp forms? I missed this part.
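The "only if" direction above is just linear algebra: a real symmetric operator has real eigenvalues, so the trace of its square dominates the square of each eigenvalue. A minimal numerical sketch, with an arbitrary symmetric stand-in matrix rather than an actual Hecke operator:

```python
# Toy illustration (not an actual Hecke operator): for a real symmetric
# operator T, tr(T^2) = sum of lambda_i^2, so a bound on tr(T^2) bounds
# every individual eigenvalue.  The matrix below is an arbitrary stand-in.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def max_abs_eigenvalue(A, iters=500):
    # power iteration; adequate for a small symmetric stand-in matrix
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    num = sum(v[i] * sum(A[i][j] * v[j] for j in range(n)) for i in range(n))
    den = sum(x * x for x in v)
    return abs(num / den)

T = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]           # symmetric, so eigenvalues are real (4, 2, 1)

lam = max_abs_eigenvalue(T)
tr_sq = trace(mat_mul(T, T))    # equals the sum of squared eigenvalues
assert lam ** 2 <= tr_sq + 1e-9
```

Bounding the trace of the square therefore bounds the largest eigenvalue, which is the whole point of passing from individual forms to the family.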
So if you know it for each form, in order to get the bound you still need the dimension of the... Ah, the dimension is fixed. Yes, so the constant depends on k and N. And you could replace the exponent: if you got a better bound, you would immediately see the gap with exponential growth, yeah? Yes, if (n + 1)^2 is replaced by a cube or whatever; there's a gap between polynomial growth and exponential growth. And in fact, the way our method works, it doesn't exploit that gap, but it's possible that there is a way to do it. So what we want to do is interpret modular forms geometrically and then understand the trace geometrically. Also, let me say it like this: there are many ways to define modular forms in the setting of function fields. There's the adelic way, which is: you take your field F of functions on a curve, you look at GL_2 of the adeles of that field, you quotient by GL_2(F), you quotient by some compact subgroup, and you look at, for example, L^2 functions on this. But for anyone for whom that definition is not intuitive, let me give a concrete model: we're going to replace the upper half plane by GL_2 of the field of formal Laurent series in the variable T^{-1}, divided by GL_2 of the ring of formal power series in T^{-1} and also by scalars. This can also be described as the set of vertices of the Bruhat–Tits tree. So if you take a hyperbolic plane and let the curvature go to infinity, you get an infinite tree; we're replacing the upper half plane by an infinite tree, or at least its vertices. And the modular forms will just be functions invariant under Γ_0(N), which is the set of matrices (a, b; c, d) in SL_2(F_q[T]) with c congruent to 0 mod N. So you just have functions invariant under this group; you could also view them as functions on the quotient.
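To make the tree picture concrete, here is a toy count of how fast balls grow in a (q+1)-regular tree, which is the combinatorial shape of the Bruhat–Tits tree just described; the function name is mine:

```python
# Toy bookkeeping for the tree replacing the upper half plane: for PGL_2
# over F_q((1/T)) the vertices form a (q+1)-regular infinite tree.  We count
# vertices in the ball of radius r by generating level sizes; the closed
# form is 1 + (q+1)(q^r - 1)/(q - 1).

def ball_sizes(q, radius):
    sizes = [1]                  # just the root vertex
    frontier = 1                 # number of vertices at the current distance
    total = 1
    for r in range(1, radius + 1):
        # the root has q+1 neighbours; every later vertex contributes q new ones
        frontier = (q + 1) if r == 1 else frontier * q
        total += frontier
        sizes.append(total)
    return sizes

assert ball_sizes(2, 3) == [1, 4, 10, 22]   # 3-regular tree
assert ball_sizes(3, 2) == [1, 5, 17]       # 4-regular tree
```

The exponential growth of these balls is one reason the quotient graph in the next step has a simple infinite part going off to a cusp.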
And the quotient of this tree by the group will be a graph. This graph will have some kind of complicated-looking finite part, and then it'll have a simpler part, an infinite part going off to infinity. And for eigenforms, the definition of a cusp form is very simple: it's just a form where the function eventually becomes 0 on this infinite part, so it has finite support. So there's a definition of modular forms in this setting that is similar to the classical definition. But it also admits this very nice geometric interpretation, because this quotient set can always be interpreted as the set of vector bundles of rank 2 on a curve over F_q with extra structure. So in this very concrete case, Γ_0(N) \ GL_2(F_q((T^{-1}))) / GL_2(F_q[[T^{-1}]]) times scalars would be equal to the set of, I guess this would be PGL_2-bundles on P^1 over F_q: rank 2 vector bundles with a rank 1 sub-bundle over the vanishing locus of the polynomial N, modulo line bundles, that is, modulo twisting by line bundles. So we're looking at vector bundles on this curve, in this case P^1, with some extra structure: at this divisor, this closed subscheme, the vanishing locus of N, we're fixing a rank 1 sub-bundle on the subscheme. And then we're modding out by something: if you twist by a line bundle, it counts as the same. And so this is the same thing as the F_q points of some moduli space; in this case it would be Bun_2(N), the F_q points. So automorphic forms are going to be functions on the F_q points of this moduli space. And the Hecke operator, an operator taking functions on this space to functions on this space, has a kernel, which is a function on the square of the space, Bun_2(N)^2(F_q).
So for the Hecke operator, you're taking a function on bundles, and you're producing a new function by summing over bundles related to the original bundle by taking a sub-bundle, or a subsheaf, whose quotient is supported at one point. And you can express that as the action on a space of functions of basically a matrix, which is given by some function on the square of the space. And we're just trying to bound the trace. So this function has a geometric interpretation: there's a moduli space of pairs of vector bundles with a map between them whose cokernel is supported at one point, and it maps to the square of the moduli space of bundles, and we're just taking the pushforward of the constant sheaf. Yeah, let me write that down. So the Hecke kernel is π_! Q_ℓ for π the map from the moduli space of pairs of bundles v_1, v_2 and a map f from v_1 to v_2, where the cokernel of f has length n and is supported at a point p, and this map sends (v_1, v_2, f) to the pair (v_1, v_2) in the space of pairs. So we can express the Hecke operators as coming from this kernel. This kernel is coming from the trace of Frobenius on the sheaf, which comes from some variety, or some stack, mapping to the product Bun_2(N) × Bun_2(N). And this is all very, very explicit geometry. So we need to show the trace is not too large, which means we need to prove some kind of cohomology vanishing result for this sheaf. Yeah. Well, I can just write the trace of the square in terms of the original traces. So this is a strategy, but there's one really big problem with it, which is that we can only hope to prove the bound we want if Ramanujan holds for all forms in our space of forms. We're taking the trace over the whole space of forms, so we can only get this bound if it holds for every single form.
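The statement that the Hecke operator "has a kernel" on the square of the space can be sketched in finite terms: an operator on functions on a finite set X given by a function K on X × X. The set and kernel below are arbitrary stand-ins, not an actual Bun_2(N):

```python
# Toy version of an operator defined by a kernel: (T f)(x) = sum_y K(x,y) f(y).
# Then tr(T) = sum_x K(x,x) and tr(T^2) = sum_{x,y} K(x,y) K(y,x), which is
# the quantity the geometric argument bounds.  X and K are stand-ins.

X = range(4)
K = [[1, 2, 0, 1],
     [2, 0, 1, 0],
     [0, 1, 1, 3],
     [1, 0, 3, 2]]              # symmetric kernel, like a Hecke kernel

def apply_kernel(K, f):
    return [sum(K[x][y] * f[y] for y in X) for x in X]

tr = sum(K[x][x] for x in X)
tr_sq = sum(K[x][y] * K[y][x] for x in X for y in X)

# sanity check: composing the operator with itself gives the same trace
f_basis = [[1 if i == x else 0 for i in X] for x in X]
tr_sq_direct = sum(apply_kernel(K, apply_kernel(K, f_basis[x]))[x] for x in X)
assert tr_sq == tr_sq_direct
```

This is exactly the sense in which "the trace of the square can be written in terms of the kernel" without ever diagonalizing.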
So the problem is that I have set up this space of forms to include both cusp forms and Eisenstein series, and Eisenstein series are known not to satisfy Ramanujan. So to have even a hope of proving the conjecture, I have to get rid of the Eisenstein series, and I have to do it in a geometrically nice way: I have to make a kernel which only sees the cusp forms and doesn't see the Eisenstein series. But I don't know how to just subtract the Eisenstein series directly, so I do something more drastic: I impose a local condition that forces all forms to be cuspidal. Rather than choosing functions that are invariant under a subgroup like this, you can choose functions that are equivariant for a non-trivial character. So the kind of thing you can do is look at matrices (a, b; c, d) where a and d are congruent to 1 mod T and c is congruent to 0 mod T, and then impose an identity like (a, b; c, d) · f should not equal f, but should equal f times ψ(b + c/T), let's say taken mod T. So this is not quite Γ_1; the subgroup I'm choosing is a little bit smaller than Γ_1, because I'm setting both a and d to be 1 mod T. This is an Iwahori subgroup, and I'm choosing a generic character of the Iwahori subgroup in that case. And this is going to force my forms to have a local representation which is called a simple supercuspidal representation. And it's a theorem that if your local representation is supercuspidal, the form is cuspidal. And this condition on the form is a condition I can detect with the kernel: I just take the Hecke kernel, put this ψ into its definition, and I produce a Hecke kernel that only sees forms satisfying this twisted invariance property. And on the geometric side, that corresponds to changing this definition, so there's some local condition on the map at a specific point, and then putting an Artin–Schreier sheaf into the geometry.
And so I'm pushing forward not the constant sheaf but an Artin–Schreier sheaf. In the general case, you'd be pushing forward something like an intersection cohomology sheaf coming from the geometric Satake isomorphism, tensored with an Artin–Schreier sheaf. So it's very easy to put this kind of condition into the geometry by twisting by an Artin–Schreier sheaf. And then you have a hope of the theorem being true, because you've thrown away all the things which are not expected to satisfy Ramanujan; you have only things which you believe satisfy Ramanujan. But you still need to prove something about the Hecke kernel. And so the key theorem is that the Hecke kernel, when it's twisted by ψ, comes from a perverse sheaf. And it turns out, if you calculate the bound you get from perversity, you get a bound for the trace in terms of a power of q times some Betti number. Is it pure? Yeah, the proof shows it's pure. What happens is, you have some π_! of some L_ψ and you have a π_* of some L_ψ, and there's a natural map between them, and the proof shows it's an isomorphism. So that gives you the answer: it's a pure perverse sheaf. You have a pure perverse sheaf on a product, on some kind of Bun_2 squared, a product of two moduli spaces of vector bundles with extra structure, or G-bundles with extra structure. And so this gives you a bound for the trace. Yes, we're still using the Weil conjectures, but not applying them to the Langlands parameters; we apply them to some totally unrelated, or seemingly unrelated, sheaf. Or rather, maybe I should say a very indirectly related sheaf. This gives a bound of the form: the trace of T_{p^n}^2 on our space of modular forms is bounded by the dimension times some constant depending on n. And if you look at the power of q that appears from the weights, it's exactly the one you expect: the bound you get is exactly the trivial bound you would get under the Ramanujan conjecture.
So you get the expected dimension of the space of forms times the bound for an individual form, and the power of q that shows up in our bound is exactly the power of q that shows up in the dimension of the space of forms. However, we have no idea what the Betti numbers of the sheaf that show up in our bound are. They could have some totally arbitrary dependence on n; for all we know, they could be much, much worse than exponential in n. So we get an estimate that has good uniformity in q, the size of the finite field, but absolutely none of the uniformity we want in n, the degree of the Hecke operator. And so there's a trick, which is to use cyclic base change. There's a theory that, given a modular form over F_q(C), will produce modular forms over any cyclic field extension, and we're going to apply it to a specific cyclic extension: F_{q^e}(C), an extension of the base finite field. And the Hecke eigenvalues on the two sides can be related. So we apply our bound over a much larger finite field; that gives a bound for the trace up there, which gives a bound for the eigenvalues up there, and from that we can deduce a bound for the eigenvalues down here. And using that, you can amplify: you can turn a bound that looks weaker into a bound that looks stronger, and that gives you a bound of the form you want. It replaces your bound with the dimension times (n + 1)^2, using the algebra of how these Hecke eigenvalues are related. So. And do you get all the cusp forms? You said you choose a particular condition that is sufficient to ensure you have a cusp form, because you control the representation with a specific supercuspidal at one place; so this is not everything? Correct. The representation that I wrote down, this condition, is a special case; there are other similar conditions that work for us that I didn't write down. But it is absolutely not everything.
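The amplification step can be caricatured as follows. Suppose (an assumption for this sketch) the normalized eigenvalue is α + α^{-1} for a Satake parameter α, so that over the degree-e extension it becomes α^e + α^{-e}. Then any bound C that is independent of e forces |α| = 1, however enormous C is, because you can take e-th roots:

```python
# Toy version of base-change amplification.  The Satake parameters are
# assumed to multiply to 1 (a normalization made for this sketch), so the
# normalized eigenvalue over F_{q^e} is alpha^e + alpha^(-e).

def eigenvalue_over_extension(alpha, e):
    return alpha ** e + alpha ** (-e)

C = 10.0 ** 6                      # huge, but independent of e

# If |alpha| > 1, the extension eigenvalues eventually exceed any fixed C...
alpha_bad = 1.2
assert any(abs(eigenvalue_over_extension(alpha_bad, e)) > C
           for e in range(1, 200))

# ...while taking e-th roots of the bound C recovers |alpha| <= 1 in the limit:
e = 10 ** 9
assert C ** (1.0 / e) < 1.0 + 1e-6
```

So a trace bound whose constant is terrible in n but uniform in the field size, applied over larger and larger constant-field extensions, still pins the individual eigenvalues down to the Ramanujan range.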
So we define in the paper a specific class of local representations that our method works for, and we can prove the conjecture for every representation that at one place satisfies that condition. But there are tons of cusp forms that don't satisfy it. In particular, there are tons of cusp forms that are unramified at every single place, and our argument fails for those. That's one caveat to our theorem. And the other caveat that appears in the statement of the theorem is that cyclic base change is not a theorem for arbitrary groups over function fields, and we need to use it. I mean, we're working in the case of general split reductive groups beyond GL_n, because the Ramanujan conjecture is known for GL_n; it's a theorem of L. Lafforgue. So we need cyclic base change for these general groups, and that's not proven. The people who are experts believe it can be proven by the known methods, but I think still nobody has done it. So the theorem is the Ramanujan conjecture at unramified places, subject to these two caveats: we have this local condition, and we need to assume cyclic base change. Sorry, but if you have some quaternion algebra, there are no Eisenstein series; are all forms cusp forms there? Yes. No. So, I thought about this. The problem is that we need everything to be true geometrically. For quaternion algebras, if you extend the base finite field, then it's no longer a quaternion division algebra, so it no longer has this property that every form is a cusp form. And because we're trying to prove a geometric theorem, the geometric theorem can only be true if the Ramanujan conjecture is true not just for all forms in our family, but for all forms in the analogous family you get when you extend the base finite field. So quaternion algebras don't work.
And in fact, it's a theorem that there are no groups that keep that property, the property that every form is cuspidal, that is, that they have no proper parabolic subgroups, when you extend the finite field in this way. You mean no non-trivial semisimple group? Which kinds of groups do you consider there? Yeah, OK. Maybe a way of saying it is that every reductive group over F_q(C), the field of functions on a curve over a finite field, becomes quasi-split over an extension of the base finite field alone. So it will always develop a Borel subgroup as you go to an extension of the base finite field specifically. It may not become split, but it's at least quasi-split. And so where exactly do you run into problems when you have ramification? What do you mean, when I don't have ramification? No, when you do. Well, it depends on the kind of ramification. I mean of the representation. So basically the method involves treating forms in families. I don't treat one form at a time; I treat a family of forms cut out by local conditions. And there are some restrictions on the local conditions that are needed for the method to work. And the only bad restriction is at one place: there's one place where I need to have a sufficient amount of wild ramification, and if I don't have that in the local condition, the method doesn't work. And you can see the method can't work just by thinking about the statement, because the theorem I would be trying to prove is not true if I put a weaker local condition at that place. But I will say at least a few words about the proof of this geometric theorem. So what about this map π that we are using to get the perversity? This Hecke correspondence, this moduli space of pairs of bundles with a map between them, is studied a lot in the geometric theory, but people usually think about a slightly different map.
People usually think about the map from this space to the moduli space of one bundle or to the moduli space of the other, sending (v_1, v_2, f) to just v_1 or just v_2. And that map is a kind of projective morphism. The fibers aren't smooth, but their singularities are independent of the fiber: it's a locally trivial fibration, a very nice map. But the map π is a different map. It goes from the Hecke correspondence to the product of the moduli of bundles with itself, so to the moduli of pairs of bundles, and it has a different character. In particular, it's not so hard to see that π is affine. If I have two vector bundles, the space of maps between them is an affine space, and this is true relatively over the moduli space of vector bundles, and the space of maps with this cokernel property is a closed subset of that affine space. So π is an affine morphism. And so π_! of an Artin–Schreier sheaf will be pure perverse if it's isomorphic to π_*. So we have the cohomology with compact supports and the ordinary cohomology. The ordinary cohomology is semi-perverse because π is affine; the cohomology with compact supports is, so to speak, co-semi-perverse; if they're isomorphic, then it's perverse. And the same argument works for purity. And whenever we want to show that cohomology with compact supports is the same as ordinary cohomology, it's very helpful to look at an explicit compactification. We want to say that these differ by a contribution of the boundary of the compactification, the divisor at infinity; if we can show that contribution is zero, we'll be done. So we need to define a compactification of the Hecke correspondence, and we're going to do this in a fairly naive way. Like I said, maps from v_1 to v_2 are vectors in some affine space, a vector space of maps, and there's an obvious way of embedding that affine space into a projective space. That gives you a projective compactification.
And there are some more sophisticated ways to define compactifications of this Hecke correspondence. There's a Drinfeld–Lafforgue–Vinberg compactification, I think it's called. But the naive way is sufficient for us. And then basically we have to show some Rj_* L_ψ vanishes locally on the boundary of that compactification. So the way the compactification works is we're letting a map between v_1 and v_2 go to infinity in the vector space of maps. And the way that projective geometry always works, things that go to infinity are just ordinary things up to scaling, so we end up with maps up to scaling. But what can happen is that the maps, which are supposed to have rank two, can degenerate as they go off to infinity and start having lower rank: they have rank one. So you end up studying, locally, these lower-rank maps between bundles, and you need a local model to study the local behavior of the sheaf and to check that something vanishes. And there are, I think, different approaches to finding local models of this compactification of the Hecke correspondence. We did it in a way that I thought was kind of fun: we used the space to model itself. If you think about a Hecke correspondence in the classical setting, the Hecke correspondence is itself a modular curve: a pair of elliptic curves with a map between them of degree p is itself a point of a modular curve. And so there exist Hecke correspondences from the Hecke correspondence to itself. The same thing is true in our setting: this compactification has Hecke correspondences, that is, we use Hecke correspondences from the compactification to itself.
And since the Hecke correspondence is basically a smooth morphism, at least generically, this lets you see that a bunch of pairs of points in this space related by the Hecke correspondence are locally isomorphic in the smooth topology, because you have a space mapping to both of them smoothly. And so this sheaf we have to study can be computed smooth-locally, so it takes the same value at a bunch of different points. Using this, you can start at any point in the space and follow a chain of Hecke correspondences to a place where the calculation is very easy. What you do is follow the Hecke correspondences to a point that's very, very high up in the cusp. When you go up high in the cusp, your vector bundles become very unstable, and that means they have a lot of automorphisms. And then those automorphisms really help you. You can use the automorphisms to show vanishing of the stalk: you show the stalk is invariant under the automorphisms, and also that it's not invariant under the automorphisms unless it's zero, and you derive a contradiction unless it's zero. So in summary, we use geometry that's analogous to things you see in the classical theory of modular curves: we have these Hecke correspondences between Hecke correspondences, we have these points high in the cusp. But it's also kind of weird: points high in the cusp don't develop all these automorphisms in the classical setting, but they do in the geometric setting. Does anyone have questions about that? So there is no analog of this in the classical case of modular curves; this method doesn't give you any new insight there? Yeah, I don't think so. It might, with sufficient brilliance, suggest something you could do, but it would have to be very different from what we did. I have no idea. So one comment is that I thought about what this compactification is.
So I thought about what this compactification is in the setting of the classical modular curve. And there is an analogous definition, but it's the furthest thing from a compactification you could imagine, because the map that we're trying to compactify is already compact. The relevant map is like the map from X_0(p) to X(1) × X(1), this kind of map in the classical setting, which is a finite morphism, so it's already compact. And when we try to compactify it by the same method, the thing we have to add is actually a higher-dimensional variety. I think it becomes a three-dimensional manifold or something, I don't remember exactly. So it's already compact and we're adding something that's much bigger. It's very different from the usual notion of a compactification, and it probably doesn't really have any meaning. Certainly all the étale cohomology stuff we're doing has no meaning in the classical setting, but even this compactification doesn't seem to have meaning in the classical setting. However, when I thought about this stuff, Bun_G on the Fargues–Fontaine curve had not been studied at all, or had been studied only a little bit. So maybe there is some meaning to some of this in the setting of Bun_G on the Fargues–Fontaine curve. I have no idea. May I ask another question, which is maybe a little bit naive? There was a theorem that came out last year by Harris and Ciubotaru, which shows, in this setting, I believe for split reductive groups over a function field, that if Ramanujan holds at one place, it holds at all places. Do you get anything by an implication of their theorem? Yeah, so one observation you can make in the classical setting, and it's easier to state it this way: if Ramanujan holds at one unramified place, it holds at every unramified place.
Oh, it has to be unramified in their work, too? Yeah, I think so. That follows from Langlands parameters and Deligne's theorem in a very easy way: even though you can't yet use Langlands plus Deligne to prove Ramanujan directly for all forms, you can show that if it holds at one place, it holds at another place. They have to be unramified or not? Yeah, they have to be unramified, because at the ramified places we don't have a good understanding of how to relate the automorphic side to the Galois side. We have the Genestier–Lafforgue theorem, but it's not very explicit. So it might be that, well, I mentioned that fact to Harris and that's why it's in the paper, I'm not sure. It might be that some simple observations like that, observations I made while working on this that I thought were much less interesting than the main result, turned out to be kind of helpful in other settings around Langlands; for example, some appear in my recent paper with Harris, and it might be related to that. But yes, OK, let me make a comment about that. The Ramanujan conjecture for groups other than GL_n is not true if you only restrict to cusp forms; there's an additional restriction of genericity. And in our statement we have the additional restriction from this local condition, but it's not obvious why the local conditions we write down imply genericity, or whether they imply genericity. So we put in our paper some explanation of why our theorem should follow from the Ramanujan conjecture and the general philosophy of Langlands, since it's not obvious from the way we stated it that our theorem in fact follows from that.
And that chapter, which is kind of disconnected from the rest of the paper, contains some ideas that have also been helpful elsewhere. So in the last few minutes, let me set up what I'm going to talk about next time: a completely different analytic problem about modular forms, one of the most naive analytic questions you could ask. So f is a function from the upper half plane to the complex numbers. Let's not think of it as an automorphic representation, let's not think of it as anything more sophisticated than that; let's just think of it as a function. How big is f? In particular, we can normalize it so that its average size is fixed and ask: does it stay mostly close to its average size, or does it sometimes get very large and other times very small? So what I'm asking is: how big is the infinity norm, the sup norm, the max, given that some normalization, say the L^2 norm, is one? In the classical setting, because |f| is not quite invariant under Γ_0(N), you have to be a little more careful. The right quantity is |f(z)| times the imaginary part of z raised to the power k/2, and you take the max of this over z in the upper half plane. The absolute value of f alone is not invariant under the modular group; if you apply the right element of the group, it gets enormous. But if you normalize it by multiplying by the correct power of the imaginary part, it is invariant under the modular group, so it defines a function on the modular curve. If f is a cusp form, this function goes to zero at the cusps, so this sup is well-defined for cusp forms f. And you can ask how big it is as a function of, say, the level of the modular form: how rapidly can this grow with the level, with the average size of f fixed? So that's the question people have studied, and you can study the analog in the function field context.
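In symbols (my notation, following the normalization just described):

```latex
% Sup-norm problem: f a cusp form of weight k and level N, normalized so
% that \|f\|_2 = 1.  The Gamma_0(N)-invariant size function is
\|f\|_\infty \;:=\; \sup_{z \in \mathbb{H}} \; |f(z)|\,(\operatorname{Im} z)^{k/2},
% and the question is how fast \|f\|_\infty can grow as the level N \to \infty.
```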
And what I'm gonna talk about next time is how the analog of this in the function field context is related to some, I think, very nice geometry involving this moduli space of bundles, the moduli space of Higgs bundles that maps to it, the nilpotent cone in that moduli space, and how they relate to each other. But I won't spend the whole time talking about that. If you are not a fan of the automorphic forms stuff, after that I'll talk about my next topic, which is the distribution of Galois groups and the distribution of class groups, their relation to the geometry of Hurwitz spaces, and also to questions in probability theory on categories. Yeah, so I think I'll stop here. Does it make sense, to see if there's an obstruction, or how the constant changes, to not just look at T squared but to look at T fourth, T sixth, and see how C_n changes with respect to that? So the problem is I have no idea what C_n looks like. Doesn't it follow from the explicit sheaf that you have? Yes, but it's not explicit in such a way that I can calculate the Betti numbers; it's very, very far from that. In fact, I mean, it is something explicit, but here's the algebra: powers of T_{p^n} can be rewritten in terms of the other T_{p^m}, and the way the sheaf theory works, it kind of does that for you automatically. So you don't gain anything by trying to be clever and working with higher powers instead of a specific T_{p^n}. It's the same; you'd be getting the same Betti numbers either way. Yeah, I just currently do not have a great idea for how to evaluate them. In particular, I'm pretty sure that they grow at least exponentially in n. In general, there are cases when I really know how to exactly calculate the Betti numbers of something relevant in analytic number theory.
And usually the true growth rate is exactly exponential in the dimension of the variety; when you have a very good understanding of the Betti numbers, it's exactly an exponential function. And so probably that would happen here as well, at least exponential, because this is related to the dimension. So it would be at least exponential, and exponential would not be good enough to conclude without amplification. And it's not the only case where the Betti numbers are exponential and where, if they were sub-exponential, you could use that to get something interesting. There's also a setting in Hurwitz spaces where the Betti numbers are provably exponential; if they were sub-exponential, that would be cool. Is the bound for an individual form? Do you have to have an eigenform? Well, yeah, it's for an eigenform that you want to give a bound. OK, so it's not... OK. But yeah, the theorem is in the function field context. But yes, if it's not an eigenform, you actually can't give a useful bound in terms of the L^2 norm, because of what you can do: you could say, I'm just going to choose an eigenbasis and take the linear combination where the coefficient of each eigenform is the complex conjugate of its value at a given point. And if you do that, you will get a form that's highly concentrated at one point and very spread out everywhere else. So the game is showing that eigenforms are not like that. There are multiple kinds of arithmetic methods to do that, and there's also a geometric method.
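The concentration trick just described is Cauchy–Schwarz, and it can be sketched numerically: among L^2-normalized combinations of an orthonormal basis, the one whose coefficients are the (here real, so conjugation is trivial) values of the basis elements at a chosen point maximizes the value there. Everything below is a random stand-in for an actual eigenbasis:

```python
# Toy version of the concentration trick: in an orthonormal eigenbasis
# phi_1..phi_m, the unit-norm combination whose coefficient at phi_i is the
# conjugate of phi_i(z0) maximizes |f(z0)|, by Cauchy-Schwarz.  This is why
# a sup-norm bound is only meaningful for individual eigenforms.

import math
import random

random.seed(0)
m, npts = 5, 8
# fake values phi_i(z_j) of a stand-in basis at finitely many sample points
phi = [[random.gauss(0, 1) for _ in range(npts)] for _ in range(m)]
z0 = 3                                   # the distinguished sample point

def value_at_z0(coeffs):
    return sum(c * phi[i][z0] for i, c in enumerate(coeffs))

def l2_normalize(coeffs):
    n = math.sqrt(sum(c * c for c in coeffs))
    return [c / n for c in coeffs]

best = l2_normalize([phi[i][z0] for i in range(m)])   # the conjugate trick
peak = value_at_z0(best)                              # = sqrt(sum phi_i(z0)^2)

# no other unit-norm combination beats it, by Cauchy-Schwarz
for _ in range(1000):
    other = l2_normalize([random.gauss(0, 1) for _ in range(m)])
    assert abs(value_at_z0(other)) <= peak + 1e-9
```

So without a genuine input about eigenforms, nothing rules out a unit-L^2 form that peaks like sqrt(dimension) at a single point, which is the obstruction the lecture describes.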