Okay, so the official title is Automorphic Forms on Higher Rank Groups. There will obviously be some overlap with other lectures, but that's probably not a bad thing. So Philipp in his lectures has focused on the group SL(2,R). In any case, SL(2,R) is the underlying group for classical automorphic forms, and there are many ways to generalize it, due to certain special isomorphisms. One can view SL(2,R) as the connected component of SO(2,1), and then the natural generalization is SO(n,1). Of course you can generalize even more and look at SO(p,q), but let's not be too general. Another way to look at SL(2,R) is as Sp(2,R), or depending on your taste Sp(1,R), and then the natural generalization is Sp(2n,R). Or, as Emmanuel suggested, you can view it as PGL(2,R)^+, the plus meaning positive determinant, and then the natural generalization is PGL(n). Now SO(n,1) has rank one, so strictly speaking it is not a higher rank group, but in some other sense it is: it has rank one over R, but over Q_p it may have much larger rank, so perhaps it does qualify as something of higher rank. Sp(2n,R) has rank n, and PGL(n) has rank n − 1. I will mostly focus on PGL(n), but I will also mention some things about hyperbolic spaces and about the symplectic group and Siegel modular forms. So let me start with hyperbolic spaces. We are interested in automorphic forms on SO(n,1), and to get started we need some coordinates. So we start with the Iwasawa decomposition of SO(n,1): abstractly, it is a product of three groups N, A, and K. K is the maximal compact subgroup, and it is the easiest to describe: a block matrix with a 1 and then SO(n), so it is obviously isomorphic to SO(n).
And then A determines the rank, and you can easily see it is a rank one group: it depends on one parameter, and the rest is the identity matrix. It is isomorphic to SO(1,1), or perhaps its connected component of the identity. N is a bit harder to describe. It is easier to describe the corresponding Lie algebra; N is then just the exponential of that Lie algebra. So N consists of matrices of the form I + n + n²/2 — the beginning of the Taylor series of the exponential, and it turns out the rest vanishes, because n³ = 0. Here n — this n is a matrix, not the integer n; take a different font — is a matrix of dimension n + 1: it has a vector of dimension n − 1 here, the same vector again there, the same vector with a minus sign, and the rest is 0. So the vector lies in R^{n−1}, and N is isomorphic as a group to R^{n−1}. There are other ways to choose coordinates. Here I am assuming that my quadratic form is something like −x² + y² + z² + w², that is, diag(−1, 1, 1, …, 1) — or maybe the signs are distributed differently — but in any case a form of signature (1,n). Often the quadratic form with one hyperbolic plane and then the identity matrix is used instead; this also has signature (1,n), and then everything looks a little bit different. Hyperbolic space is H^n = SO(n,1)/SO(n) — perhaps with the connected component of the identity — and that is a natural generalization of the upper half plane: if n = 2, this is just the usual upper half plane.
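To make the claim about the terminating Taylor series concrete, here is a small numerical sketch (an editor's illustration, not from the lecture; the exact placement of the vector inside n is a convention, chosen here so that the matrix lies in the Lie algebra of SO(Q) for Q = diag(−1, 1, …, 1)):

```python
import numpy as np

def nilpotent(u):
    """One convention for the Lie algebra element attached to u in R^(n-1):
    u appears in the first row/column and (up to sign) in the last row/column."""
    m = len(u)                  # m = n - 1
    X = np.zeros((m + 2, m + 2))
    X[0, 1:m+1] = u             # first row:  u
    X[1:m+1, 0] = u             # first col:  u
    X[1:m+1, m+1] = u           # last col:   u
    X[m+1, 1:m+1] = -u          # last row:  -u
    return X

u = np.array([1.0, 2.0, -0.5])                    # a point of N ~ R^(n-1), here n = 4
X = nilpotent(u)
Q = np.diag([-1.0] + [1.0] * (len(u) + 1))        # quadratic form diag(-1, 1, ..., 1)

# X lies in the Lie algebra so(Q): X^T Q + Q X = 0
assert np.allclose(X.T @ Q + Q @ X, 0)

# X^3 = 0, so the exponential series stops: exp(X) = I + X + X^2/2 exactly
assert np.allclose(np.linalg.matrix_power(X, 3), 0)
g = np.eye(len(u) + 2) + X + X @ X / 2

# the resulting group element preserves the quadratic form
assert np.allclose(g.T @ Q @ g, Q)
```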
And if n = 3, this is what Akshay introduced yesterday: hyperbolic 3-space. Again, there are several models of hyperbolic space. There is the hyperboloid model: the set of all (x₀, …, x_n) in R_{>0} × R^n such that x₀² − x₁² − ⋯ − x_n² = 1, and you have a natural action of SO(n,1) on this hyperboloid. Maybe more familiar, if you have grown up with the usual upper half plane, is the upper half space model. The hyperboloid model has n + 1 coordinates but one equation; here we have n coordinates, and we just require the last coordinate to be positive. In the case n = 2 this is just the usual complex plane picture. And there is a third description in terms of Clifford algebras, which is sometimes quite useful, and let me spend a little bit of time defining the relevant objects. The Clifford algebra C_n is an algebra of dimension 2^n: as a vector space it has dimension 2^n, and a vector space basis is given as follows. Take n elements i₁, …, i_n and form the power set, which obviously has 2^n elements, and interpret each subset as a product. This is a slight abuse of notation: strictly speaking, the basis is R ⊕ Ri₁ ⊕ ⋯ ⊕ Ri_n, and then all products Ri₁i₂, Ri₁i₃, and so on. So it's a product and not really a subset, but it's clear how to do this. This defines the Clifford algebra as a vector space, which is not particularly interesting; it's an algebra, so we need some multiplication relations. And I called the generators i because they play a similar role to the i of the complex numbers.
So in particular, we have i_j² = −1 for all j, and i_a i_b = −i_b i_a for a ≠ b. This is what you know from the Hamilton quaternions, and in fact it turns out that the Hamilton quaternions are a special case of this. There are some more relations — these alone don't determine everything — but the multiplication table is known. As examples: C₀ is just the real numbers, C₁ is the complex numbers, and C₂ is the Hamilton quaternions: C₂ = R ⊕ Ri ⊕ Rj ⊕ Rij, with ij = k. OK, so inside C_{n−1} we have an important vector space V_{n−1}: the space generated by all basis elements of degree at most 1, that is, R ⊕ Ri₁ ⊕ ⋯ ⊕ Ri_{n−1}. That's a vector space of dimension n inside the Clifford algebra C_{n−1} of dimension 2^{n−1}. And I can view the upper half space H^n — the H up there should be H^n — as sitting inside this vector space, in a very natural way: in the upper half space model I have n coordinates, the last of which is positive, and I simply map (x₀, x₁, …, x_{n−1}) to x₀ + x₁i₁ + ⋯ + x_{n−1}i_{n−1}. So the map does nothing, really, and you can view the upper half space as sitting inside a Clifford algebra. In the familiar case n = 2 it sits inside C₁ = C — the upper half plane is part of the complex numbers — and, as Akshay mentioned yesterday, for n = 3 the upper half space can be viewed as sitting inside the quaternions. Now the important question is: how does the group SO(n,1) act on this? This is obvious in the hyperboloid model, but not so obvious in the upper half space model, and it can be very well described in this Clifford algebra model. There exists a set, which I call SV — V stands for Vahlen, who introduced this theory more than 100 years ago — SV(C_{n−2}), which is a certain subset of 2×2 matrices with entries in the Clifford algebra C_{n−2}, acting on V_{n−1}.
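The multiplication on basis elements is determined by the two relations just stated, and one can sketch it in a few lines (an editor's illustration; basis elements are encoded as sorted tuples of generator indices, the empty tuple being 1):

```python
def mult(S, T):
    """Multiply basis elements of the Clifford algebra C_n.
    S, T are sorted tuples of generator indices; returns (sign, sorted tuple).
    Uses only the relations i_j^2 = -1 and i_a i_b = -i_b i_a for a != b."""
    word = list(S) + list(T)
    sign = 1
    i = 0
    while i < len(word) - 1:
        if word[i] > word[i + 1]:
            # anticommute two distinct generators, flipping the sign
            word[i], word[i + 1] = word[i + 1], word[i]
            sign = -sign
            i = max(i - 1, 0)
        elif word[i] == word[i + 1]:
            # i_j^2 = -1: remove the pair, flip the sign
            del word[i:i + 2]
            sign = -sign
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(word)

# In C_2 we recover the quaternions: i = (1,), j = (2,), k = ij = (1, 2)
assert mult((1,), (1,)) == (-1, ())        # i^2 = -1
assert mult((1,), (2,)) == (1, (1, 2))     # ij = k
assert mult((2,), (1,)) == (-1, (1, 2))    # ji = -k
assert mult((1, 2), (1, 2)) == (-1, ())    # k^2 = -1
```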
It acts by fractional linear transformations: if I take a suitable 2×2 matrix g — not all such matrices are allowed, I have to take a certain subset — with entries in C_{n−2}, then g·z = (az + b)(cz + d)^{−1}. Now this algebra is highly non-commutative, so the order does make a difference: I cannot write (az + b)/(cz + d). Here g = (a b; c d) ∈ SV(C_{n−2}) and z ∈ V_{n−1}. And this makes sense: a, b, c, d are elements of C_{n−2}, but of course I can embed C_{n−2} into C_{n−1}, and then I know what the product with an element of C_{n−1} is. And it turns out — which is not easy to see, you have to show it — that cz + d is always an invertible element of this algebra. Not everything in this algebra is invertible, but this element always is. [Question: does the definition of SV require invertibility?] Yes, probably — the definition is a bit more complicated, but that is certainly one of the requirements. The definition of SV is rather complicated in general, but I can give it in simple cases. So here are some basic examples. SV(C₀) is simply SL(2,R): there is no extra assumption beyond determinant 1. The same holds for SV(C₁), which is just SL(2,C). But SV(C₂) is already a bit more complicated: it is the set of 2×2 matrices g = (a b; c d) with entries in the Hamilton quaternions such that ad* − bc* = 1, and ab* and cd* lie in V₂. I'll explain this in a second. And what is star? Star is the involution mapping x + iy + jz + kw to x + iy + jz − kw. So in particular star is the identity on the vector space V₂, because V₂ is defined by the vanishing of the last coordinate.
And it changes the sign of the last coordinate. OK, let's make a reality check. This turns out to be a group — what is its dimension? Well, each entry has four real degrees of freedom, because these are quaternions, so we start with dimension 16. Each of the conditions ab* ∈ V₂ and cd* ∈ V₂ says that one coordinate vanishes, so each subtracts one dimension; and here you have a quaternion that has to equal 1, which subtracts four more dimensions. So in total: 16 − 1 − 1 − 4 = 10 degrees of freedom over the reals. And the dimension of SO(4,1) is 10, so this is good. OK, so the most interesting case, I guess, is n = 3 — that's the case Akshay mentioned yesterday. Well, perhaps the most interesting case is n = 2, but I'm supposed to talk about something other than n = 2. So then H³ = SL(2,C)/SU(2), and as we have seen in the upper half space model, typically the coordinates are (x, y, r) ∈ R³ with r > 0. And you can view this as a Hamilton quaternion with vanishing last coordinate — it sits inside the quaternions with last coordinate zero. OK, any questions? If you want further reading, especially on hyperbolic 3-space, but also on hyperbolic n-space and this Vahlen group and Clifford algebras and so on: basically everything by Elstrodt, Grunewald, and Mennicke. They have several three-author papers, and you find everything in great detail in their works. There is also their famous book, which treats hyperbolic 3-space in complete detail — you find everything you want to know, hopefully, in that book, so that's certainly the most important reference. Hyperbolic n-space is treated in their research papers.
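As a sanity check in this familiar case SV(C₁) = SL(2,C), one can verify numerically that the fractional linear action really maps the upper half space inside the quaternions to itself: the k-part of (az + b)(cz + d)^{−1} vanishes and the j-part stays positive. This is an editor's sketch; quaternions are represented as arrays (w, x, y, z) meaning w + xi + yj + zk, and the matrix g below is just one hand-picked element of SL(2,C):

```python
import numpy as np

def qmul(p, q):
    """Hamilton quaternion product."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qinv(q):
    """Quaternion inverse: conjugate over squared norm."""
    return q * np.array([1.0, -1.0, -1.0, -1.0]) / np.dot(q, q)

def embed(c):
    """Embed a complex number a + bi as the quaternion a + bi + 0j + 0k."""
    return np.array([c.real, c.imag, 0.0, 0.0])

def act(g, z):
    """Fractional linear action z -> (az + b)(cz + d)^{-1} in the quaternions."""
    a, b, c, d = (embed(t) for t in g)
    return qmul(qmul(a, z) + b, qinv(qmul(c, z) + d))

z = np.array([0.3, -1.2, 0.7, 0.0])   # x1 + x2 i + r j with r = 0.7 > 0
g = (1, 1j, 1 + 1j, 1j)               # det = 1*1j - 1j*(1+1j) = 1, so g is in SL(2,C)
w = act(g, z)
assert abs(w[3]) < 1e-12              # k-part vanishes: w lies in V_2 again
assert w[2] > 0                       # j-part stays positive: w is in upper half space
```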
OK, so the Archimedean theory for general n is very similar to the case n = 2, simply because this is a rank one group. In particular there is one Laplace eigenvalue, one spectral parameter, and one has similar bounds towards Ramanujan. There is a Kuznetsov formula, which has a very similar shape to the original Kuznetsov formula, and you can find this in many works — in work of Reznikov, of Miatello and Wallach, and there is also a long paper by Cogdell, Li, Piatetski-Shapiro, and Sarnak. So the Archimedean theory is fairly similar to the classical case. Hecke theory is quite different. There are Hecke operators: if you take an arithmetic subgroup, you can define Hecke operators in the usual way. But the theory is a bit different, because over Q_p, SO(n,1) may have large rank — not necessarily, but depending on p it can. And as Akshay said yesterday, Hecke operators, if they exist, change the picture completely, so Hecke theory is a very important part, and here it is more complicated: SO(n,1) over Q_p may have rank ⌊(n+1)/2⌋ (Gauss bracket). Already in the case n = 2 you see 3/2, which is 1.5, but with the Gauss bracket it's still 1. For n = 3, though, the rank can already be 2. And if you have a given rank, then at least morally — and in some sense very precisely — the Hecke algebra is generated by as many elements as the rank says. So for n = 3 the rank can be as large as 2, and you can see this in the classical picture, viewing these as automorphic forms over an order in an imaginary quadratic field: there are ramified primes (not interesting), split primes, and inert primes.
And if you have split primes, then you get two Hecke operators, one for each prime above p. Example: n = 3; if the rational prime p decomposes as p = 𝔭𝔭̄, then one gets two Hecke operators T_𝔭 and T_𝔭̄. This happens for half of the primes; if p is inert, then of course you get only one. OK, so why is this interesting? I mean, you can define whatever you want — but does it have any arithmetic significance? Certainly the case n = 3 has a lot of arithmetic significance, because these are automorphic forms over an imaginary quadratic field. But what about higher n, higher hyperbolic spaces? Here is an arithmetic example, associated with theta series and quadratic forms. If you have a positive definite quadratic form, you can easily write down a generating series for the representation numbers, and you get a theta series: because the form is positive definite, the representation numbers are finite, only non-negative numbers are represented, you have no problems with convergence, and you get a modular form on the upper half plane. If you have an indefinite quadratic form, this doesn't really work: the representation numbers are infinite, and you can potentially represent negative numbers, so it's not clear how to define a theta series for an indefinite form. Siegel developed the theory. So let Q be an integral quadratic form of signature (n,1). Somehow we want to define a theta series attached to this quadratic form, but the naive generating series doesn't work. So Siegel introduces the so-called majorants: a majorant of Q is a positive definite — say, a positive definite symmetric real matrix, in fact an (n+1) × (n+1) matrix —
namely a matrix R satisfying (OK, I continue over here) R Q^{−1} R = Q. And one can show that if you have one majorant, you can easily write down all of them: if R is one such matrix, then all of them are of the form gᵀRg for g ∈ SO(Q), the special orthogonal group attached to the quadratic form Q — that means gᵀQg = Q. (It's not really a conjugation, but close.) OK, and now we are ready to define the theta series attached to Q. It is a function of two arguments: one in the upper half plane and one in SO(Q), given by a sum over integral vectors h of dimension n + 1. What you would like to write is something like e(hᵀQh · z), with e(w) = e^{2πiw} — that is what you would do if Q were positive definite, and then there would be no g — but since Q is not positive definite, this doesn't make sense; it doesn't converge. If you just take the x coordinate it's still OK, and for the y coordinate you do something different: for x you take Q, and for y you take gᵀRg for a fixed majorant R. So you fix your favorite majorant and set Θ_Q(z, g) = Σ_h e( (hᵀQh)·x + i·(hᵀgᵀRg h)·y ), where z = x + iy is in the usual upper half plane and g ∈ SO(Q). This is the so-called Siegel theta series, and it turns out it is a modular form in both variables. In z it is a usual modular form on the upper half plane with respect to some congruence subgroup which depends on Q — Q has a certain level, and you have to mod out by a certain congruence subgroup. And it's also a modular form in g.
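Here is a small numerical illustration of the majorant mechanism (an editor's sketch, not from the lecture), using the simplest example Q = diag(1, 1, −1) of signature (2,1), for which R = I is a majorant since Q^{−1} = Q; the truncation box in the theta sum is chosen by hand:

```python
import numpy as np
from itertools import product

Q = np.diag([1.0, 1.0, -1.0])    # integral form of signature (2,1)
R = np.eye(3)                    # a majorant: R Q^{-1} R = Q, since Q^{-1} = Q here
assert np.allclose(R @ np.linalg.inv(Q) @ R, Q)

def boost(t):
    """An element of SO(Q): a hyperbolic rotation in the (x2, x3)-plane."""
    c, s = np.cosh(t), np.sinh(t)
    return np.array([[1, 0, 0], [0, c, s], [0, s, c]])

g = boost(0.8)
assert np.allclose(g.T @ Q @ g, Q)                    # g preserves Q
Rg = g.T @ R @ g                                      # the transported majorant
assert np.all(np.linalg.eigvalsh(Rg) > 0)             # still positive definite
assert np.allclose(Rg @ np.linalg.inv(Q) @ Rg, Q)     # still a majorant

def theta(z, g, H=8):
    """Truncated Siegel theta series: sum over h in the box |h_i| <= H of
    e( (h^T Q h) x + i (h^T g^T R g h) y ), with e(w) = exp(2 pi i w)."""
    x, y = z.real, z.imag
    Rg = g.T @ R @ g
    total = 0.0 + 0.0j
    for h in product(range(-H, H + 1), repeat=3):
        h = np.array(h, dtype=float)
        total += np.exp(2j * np.pi * (h @ Q @ h * x + 1j * (h @ Rg @ h) * y))
    return total

# converges fast: the y-part is controlled by the positive definite form g^T R g
val = theta(0.1 + 1.0j, g)
```

The point of the computation is exactly the lecture's point: the indefinite form Q handles the x-variable, while the positive definite majorant gᵀRg makes the sum converge in the y-variable.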
In g, the second argument, it is a modular form for something isomorphic to SO(n,1), because Q has signature (n,1). In the first variable it's a usual automorphic form on the upper half plane; but if you keep the first variable fixed, you get a nice, interesting automorphic form on SO(n,1). So it is modular in z and in g, and there is some arithmetic significance attached to these forms on hyperbolic spaces. OK, any questions? [Question: is this analogous to an Eisenstein series?] Well, in what sense? It is certainly not cuspidal, so in that sense it has to do with Eisenstein series; and if you keep g fixed and view it as a function of z, you can probably decompose it into Eisenstein series. [Question: in what sense is it a modular form in g? Don't you need some discrete subgroup?] Oh yes — same as with z: you have to mod out by a suitable discrete subgroup, which depends on the arithmetic of Q. There is some level, and you mod out by some congruence subgroup in both variables. [Question from the back: how are the Hecke operators indexed? In SL(2), Hecke operators are indexed by p^n; in SL(2,C) by prime ideals. How are they indexed in general?] Good question — I don't know in general. At least in this case I worked it out, and it turns out they are indexed by quaternions, that is, by matrices: the expression ad* − bc* is called the quasi-determinant, and if the quasi-determinant equals n, a real number n, then this corresponds to the n-th Hecke operator, just as in the classical case. The point is that a priori the quasi-determinant could be any Hamilton quaternion, but then things don't commute, so you have to take something from the center — and the center here is just the reals.
So at least in this description, and for the case of C₂, the Hecke operators are parametrized by quasi-determinant equal to n. I'm actually not sure how they are parametrized in general, and I doubt you'll find this anywhere in the literature. [Question: in the SL(2) case the Hecke algebra is a polynomial algebra in the operators T_p, which one can work with; can we at least find the polynomial algebra structure somewhere?] Well, if you forget this picture with the Vahlen algebra and just go back to SO(n,1), then of course this is a well-known group, and you can read everything in Satake's original paper. You can just write down the double cosets with their respective representatives; but if you want to decompose the double cosets into single cosets, that is a complete nightmare in general. Anyway, I think very, very few explicit results are in the literature other than the classical cases n = 2 and n = 3. OK, any other questions? So this was just a very brief introduction to hyperbolic spaces, to give you an idea of some definitions so that you at least know how to start. Equally briefly, I would like to discuss the symplectic group and give a few basic definitions, and after that we move on to PGL(n). OK, so symplectic groups. Let me first define the symplectic group — and there is great confusion here: some people call this Sp(2) and some call it Sp(n); I call it Sp(2n), but if you don't like that, feel free to call it Sp(n). These are all matrices M ∈ SL(2n,R) such that MᵀJM = J, where J is the mother of all symplectic matrices: J = (0 I; −I 0), with identity blocks of dimension n. And you can write this out explicitly: these are all matrices (A B; C D) in block notation, where A, B, C, D are again matrices of dimension n, such that ADᵀ − BCᵀ = I,
ABᵀ = BAᵀ, and CDᵀ = DCᵀ — in other words, these products are symmetric. This is the usual block notation that you find in most of the literature; capital letters always denote matrices of dimension n. OK, and there is an upper half space, which I also call H, but it's not the H we had for the hyperbolic spaces: H_n is the set of all matrices Z = X + iY, n×n matrices, but now over C, such that Z is symmetric and Y is positive definite. You can embed this into the symplectic group via the matrices (I X; 0 I) — the same thing you know from the upper half plane — and (V 0; 0 V^{−1}), where V is the square root of Y: Y is positive definite, so you can take a square root, and V is the unique symmetric matrix with VᵀV = Y. This is the usual way one embeds the complex numbers into SL(2,R). And if I call this group G, then H_n is just the quotient G/K by a maximal compact subgroup. G acts on this upper half space in the usual way: a matrix (A B; C D) acts on a point Z — which is in fact a matrix — as Z ↦ (AZ + B)(CZ + D)^{−1}. Again you have to be careful with the order, because matrices are not commutative anymore. As usual we take a discrete subgroup: for instance Sp(2n,Z), but we don't have to. And this comes with an inner product, which is just what you would guess: you integrate f₁(Z) times the conjugate of f₂(Z) over the quotient of the upper half space by Γ, against the invariant measure dX dY / (det Y)^{n+1}. The classical case is n = 1, and then you just recover the usual thing. These are the analogues of Maass forms; if you have holomorphic Siegel modular forms of a certain weight, then you have to include (det Y)^k for a suitable power k, as usual.
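The equivalence between the defining relation MᵀJM = J and the block conditions can be checked numerically; the sketch below (an editor's illustration) builds a sample symplectic matrix out of exactly the generators just described — translations by a symmetric X, the diagonal embedding of V, and J itself:

```python
import numpy as np

n = 3
I = np.eye(n)
Z = np.zeros((n, n))
J = np.block([[Z, I], [-I, Z]])

def translation(X):
    """Generator (I, X; 0, I) with X symmetric."""
    return np.block([[I, X], [Z, I]])

def dilation(V):
    """Generator (V, 0; 0, V^{-T}); for symmetric V this is (V, 0; 0, V^{-1})."""
    return np.block([[V, Z], [Z, np.linalg.inv(V).T]])

X = np.array([[1.0, 2.0, 0.0], [2.0, -1.0, 3.0], [0.0, 3.0, 0.5]])   # symmetric
V = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 1.0], [0.0, 0.0, 3.0]])    # invertible

M = translation(X) @ dilation(V) @ J     # a product of symplectic matrices
assert np.allclose(M.T @ J @ M, J)       # so M is symplectic

# the block conditions from the lecture:
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]
assert np.allclose(A @ D.T - B @ C.T, I)   # AD^T - BC^T = I
assert np.allclose(A @ B.T, B @ A.T)       # AB^T symmetric
assert np.allclose(C @ D.T, D @ C.T)       # CD^T symmetric
```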
So this all looks very similar to what you probably know from the classical case, except that all numbers are now matrices; formally it is very similar in many respects. So why is this interesting? Again, you can generalize as much as you wish — but why? Here is the motivation, and it comes again from quadratic forms. Why do we want to study Siegel modular forms? Assume you are given A, an n×n integral matrix, symmetric, positive definite, and even — by even I mean that the diagonal entries are even; an even integral matrix is an integral matrix with even diagonal. And pick a positive integer m ≤ n and a matrix T of dimension m, symmetric and positive definite, with half-integral entries and integral diagonal, and study the representations of T by A. What does this mean? Well, if T happens to be a number — if m = 1 — this is what you usually do: you want to know in how many ways a given number can be written as, say, a sum of four squares. But you can just as well ask in how many ways a given quadratic form, say a binary quadratic form, can be written as a sum of four squares. So this is not representation of numbers by forms, but representation of forms by forms — lower dimensional forms — with the special case m = 1 being just numbers. A priori you can pick any m between 1 and n. So we define the representation number: r_A(T) is the number of m×n integral matrices G — in the classical case m = 1 this is a usual vector — such that ½ G A Gᵀ = T. And because A is positive definite, this is a finite number. You can encode these representation numbers into a theta series: Θ_A(Z) = Σ_T r_A(T) e(tr(TZ)) — you need the trace to go back to numbers — where Z lives in H_m.
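The representation numbers r_A(T) can be computed by brute force in small cases. This editor's sketch takes A = 2·I₄, the even form for which ½GAGᵀ = GGᵀ, so that m = 1 recovers sums of four squares and m = 2 counts representations of binary forms; the search bound is chosen by hand, which suffices for these small T:

```python
import numpy as np
from itertools import product

def rep_number(A, T, bound):
    """r_A(T) = #{ G integral, m x n : (1/2) G A G^T = T }, by brute force
    over entries in [-bound, bound] (enough here since A is positive definite)."""
    m, n = T.shape[0], A.shape[0]
    count = 0
    for entries in product(range(-bound, bound + 1), repeat=m * n):
        G = np.array(entries).reshape(m, n)
        if np.array_equal(G @ A @ G.T, 2 * T):   # (1/2) G A G^T = T, kept integral
            count += 1
    return count

A = 2 * np.eye(4, dtype=int)   # even: the diagonal entries are even

# m = 1: representations of the number 4 as a sum of four squares
# Jacobi: r_4(4) = 8 * (sum of divisors of 4 not divisible by 4) = 8 * (1 + 2) = 24
assert rep_number(A, np.array([[4]]), 2) == 24

# m = 2: representations of the binary form x^2 + y^2, i.e. T = identity;
# these are pairs of orthogonal vectors +-e_i in Z^4: 8 * 6 = 48
assert rep_number(A, np.eye(2, dtype=int), 1) == 48
```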
OK, and it turns out that this theta function has nice transformation properties: Θ_A(γZ) = det(CZ + D)^{n/2} Θ_A(Z) for γ with lower block entries C, D in some congruence subgroup Γ of Sp(2m,Z). [Question: is it the determinant?] Yes, otherwise it makes no sense. So it turns out that Θ_A is a Siegel modular form of weight n/2 and degree — or genus — m. So there is a natural motivation to study such objects: they are connected to representations of quadratic forms by quadratic forms. We have seen that many of the formulas look exactly the same as in the classical case, but there are others that don't, and many things become more complicated. For instance, there is typically a formula relating the imaginary part of γZ to the imaginary part of Z, but here it looks much more complicated: Im(γZ) = (CZ + D)^{−ᵀ} · Im(Z) · (CZ̄ + D)^{−1}. Of course, if everything is numbers, you recognize the usual formula Im(z)/|cz + d|²; but here you can't really simplify like that, because of non-commutativity, and you can imagine that explicit formulas become very ugly in this way. OK, there is a usual fundamental domain for Sp(2n,Z) acting on H_n, which in some sense looks very similar to the well-known case n = 1; one needs Minkowski reduction theory. (This was the motivating example, and from now on I call the genus n — it just so happened that the genus above was m, which is perhaps pedagogically not optimal.) Minkowski reduction theory tells you that there is a fundamental domain in which the coordinates X_ij of the matrix X are bounded by 1/2, and Y is Minkowski reduced, which means that the off-diagonal entries are bounded by 1/2 times the corresponding diagonal entries, and the diagonal is non-negative — in fact strictly positive.
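The claimed formula for Im(γZ) is easy to test numerically (an editor's check; the sample symplectic matrix is built from J and a translation so that the C-block is nonzero):

```python
import numpy as np

n = 2
I = np.eye(n)
O = np.zeros((n, n))
J = np.block([[O, I], [-I, O]])

# a point of the Siegel upper half space: Z = X + iY, Z symmetric, Y positive definite
X = np.array([[0.3, 0.1], [0.1, -0.2]])
Y = np.array([[2.0, 0.5], [0.5, 1.0]])
Z = X + 1j * Y

# a symplectic matrix with C != 0: J times a translation by a symmetric matrix
S = np.array([[1.0, 2.0], [2.0, -1.0]])
M = J @ np.block([[I, S], [O, I]])
assert np.allclose(M.T @ J @ M, J)

A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]

Znew = (A @ Z + B) @ np.linalg.inv(C @ Z + D)

# Im(gamma Z) = (CZ + D)^{-T} Im(Z) (C Zbar + D)^{-1}
lhs = Znew.imag
rhs = np.linalg.inv((C @ Z + D).T) @ Y @ np.linalg.inv(C @ Z.conj() + D)
assert np.allclose(lhs, rhs)

# and the action indeed stays in the upper half space: Im(gamma Z) > 0
assert np.all(np.linalg.eigvalsh(Znew.imag) > 0)
```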
And we have √3/2 ≤ Y₁ ≤ Y₂ ≤ ⋯ ≤ Y_n, and the determinant of Y is roughly the product Y₁⋯Y_n. In other words, the off-diagonal is, at least as far as the determinant is concerned, rather negligible. [Question: but you just described conditions satisfied in the fundamental domain?] That's right — not everything satisfying these conditions is in the fundamental domain; this is a Siegel set containing the fundamental domain. The exact fundamental domain has not been worked out except for the case n = 2: Gottschling, in his thesis 50 or 60 years ago — one of Siegel's last students — wrote down the exact inequalities. Thirteen conditions, or whatever. [Nineteen?] That's what I remember — OK, maybe it's 19. Still a manageable number, but pretty large. So there is a finite number of conditions for Sp(4); but for higher genus I don't think anyone has ever worked out exact conditions for the fundamental domain. [Question: but it is a finite number?] Probably, yes. But as in the classical case, nobody really needs that: if you have a nice Siegel domain and you know everything is inside it, then everything is OK. OK — I'm supposed to stop soon anyway, but let me quickly say something about the Fourier expansion, because that is something very important in the SL(2,R) case, and it turns out that the Fourier expansion for Siegel modular forms is much less useful. It exists, of course: a Siegel modular form has a Fourier expansion of the following type. One sums over symmetric, positive definite (or perhaps positive semi-definite), half-integral matrices T, with some coefficient: f(Z) = Σ_T A(T) e(tr(TZ)). And it turns out that the coefficient A(T) satisfies certain symmetries — for a modular form of weight k, there is a factor (det U)^k:
A(UᵀTU) = (det U)^k A(T) for all U ∈ GL(n,Z) — invariance by units, if you want. But the expansion is less useful for n ≥ 2. In particular, the Fourier coefficient A(T) has no direct connection to Hecke eigenvalues. That is something we are very much used to — Fourier coefficients being just Hecke eigenvalues — but it fails as soon as the degree is not 1: if the degree is 2 or more, there is no direct connection between Fourier coefficients and Hecke eigenvalues. So the Fourier expansion is really much less useful. One has the Hecke bound: A(T) is bounded by (det T)^{k/2}, where k is the weight. And the conjecture is that A(T) is bounded by (det T)^{k/2 − (n+1)/4 + ε} — so for n = 1 you can subtract one half — provided f is not a lift; I will explain tomorrow what I mean by a lift. This is not known, except in the case n = 1 for holomorphic forms. What is known is the Hecke exponent minus some δ, but δ is tiny, of size O(1/n), and we are actually expecting a saving linear in n. So there are lots of things to do: if you want to work on this, there are lots of open questions. OK, I guess I have to stop now — that's all for today. Any questions or comments? [Question: for hyperbolic space, do we know the exact connection between Hecke eigenvalues and Fourier coefficients?] I don't know — certainly in the case n = 3, but beyond that, I don't know. There hasn't been much analytic number theory on these spaces, and I think it is now time to introduce the methods of analytic number theory for these types of automorphic forms. Yes?
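For the theta series of the earlier example the Fourier coefficients are exactly the representation numbers r_A(T), and the invariance under units can be observed directly; here (det U)^k = 1 automatically, since U ∈ GL(m,Z) has det U = ±1 and the weight k = n/2 = 2 is even. An editor's brute force check:

```python
import numpy as np
from itertools import product

def rep_number(A, T, bound):
    """r_A(T) = #{ G integral, m x n : (1/2) G A G^T = T }, by brute force."""
    m, n = T.shape[0], A.shape[0]
    count = 0
    for entries in product(range(-bound, bound + 1), repeat=m * n):
        G = np.array(entries).reshape(m, n)
        if np.array_equal(G @ A @ G.T, 2 * T):
            count += 1
    return count

A = 2 * np.eye(4, dtype=int)        # even positive definite form; weight k = n/2 = 2
T = np.eye(2, dtype=int)
U = np.array([[1, 1], [0, 1]])      # a unit in GL(2, Z), det U = 1
TU = U.T @ T @ U                    # = [[1, 1], [1, 2]], an equivalent form

# invariance of the Fourier coefficients: A(U^T T U) = (det U)^k A(T) = A(T)
assert rep_number(A, T, 1) == rep_number(A, TU, 1) == 48
```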
[Question: would there be any consequences of analytic interest for this conjecture?] Well, if n = 2, then this is the Ramanujan conjecture, right? And certainly there is some interest in the Ramanujan conjecture. Whenever you work with the Fourier expansion and you want bounds, it is certainly good to know the best possible bounds for the coefficients. The conjecture is true on average, in some mean square sense, but individually it is not known, and I think there is some fundamental interest in knowing the best possible bounds. One of the problems is that the conjecture is in fact wrong for certain forms that come from lower dimensional symplectic groups — but I'll discuss this tomorrow. [Comment: this is the way to study representations of forms by forms.] Yes, right — the coefficients go into the error term. You have Siegel's mass formula for the main term; the cuspidal contributions — this is for cuspidal forms, where the theta series are modular forms — go into the error term, and if you can bound the error term, that is certainly a good thing. It would, in principle, give results for a problem more general than the one I discussed, namely the representation of a quadratic lattice of rank m by a quadratic lattice of rank n; and as was explained yesterday, there has been progress in that case. [Comment: there is a very nice conjecture of Böcherer for Sp(4) relating the Fourier coefficients to twisted central L-values of the spinor L-function.] Yes — so the coefficients certainly have some intrinsic meaning, but they are not so much related to Hecke eigenvalues; rather, perhaps, to other arithmetic objects. [Question: is it possible to say something about the spinor L-function?] I will briefly mention L-functions tomorrow.
But my plan is not to go into too much detail — just to give you an overview of the objects we are dealing with, so that you at least get an idea of how to start if you are interested in working on these things.