So we are very happy to have Dalimil Mazáč from the Institute for Advanced Study for this next seminar in the Quantum Encounters series. And many of us know Dalimil for his work on the conformal bootstrap. And actually that's what we invited him to talk about, but apparently he has some work in which, even though it's based on conformal bootstrap techniques, it's actually something maybe even more exciting for some people, about automorphic forms and their spectra. So Dalimil, please, it's all yours. Okay, great. Thanks a lot, Slava. And thanks for inviting me to give this talk. It's a great pleasure. What I'll be telling you about today is based on some upcoming work done in collaboration with Petr Kravchuk and Sridip Pal. And there were also some very nice, interesting papers related to what I'll be telling you about today: the first one by Hinterbichler and Bonifacio from the summer of last year, and another one by Bonifacio himself from the summer of this year. Okay, so what is the basic idea? The main point of this work is to use some ideas and techniques from the bootstrap to prove new rigorous bounds on the spectrum of the Laplacian, or more generally of automorphic forms, on hyperbolic surfaces, okay? So let me set it up mathematically. For the rest of this talk, X is going to be a Riemann surface, which I'll take to be compact without boundary. And we're going to choose some hyperbolic metric on this Riemann surface. So g_ab is a hyperbolic metric. That means the scalar curvature, which I'll define as one half of the Ricci scalar, is minus one; I'm going to normalize it to be minus one. And with this data, we can define the Laplace operator on X with this metric. The Laplace operator is just the usual thing, contracting covariant derivatives with the inverse metric, and then you can study the Laplace equation.
So there's a natural spectral problem, which is to find the complete set of eigenfunctions and eigenvalues of the Laplacian, with eigenvalues going from zero to infinity. Okay, so the surface is compact, so the Laplacian has a discrete spectrum starting at zero. The zero eigenvalue is just the constant function, there is some nonzero gap to the first eigenvalue, and then there is a discrete spectrum going off to infinity. Okay, so this is a very interesting problem, to try to understand these spectra, and it has been studied by many mathematicians, with lots of exciting works. So what am I adding today? Today I will introduce a new technique for proving constraints on the spectrum of eigenvalues, and in principle also on the eigenfunctions, but I'll focus on the spectrum of eigenvalues. In particular, I'm going to prove some upper bounds on lambda one. This problem is very analogous to proving upper bounds, in the context of the conformal bootstrap, on the gap in the spectrum of local operators. And it will turn out that the upper bounds I'll prove today are actually saturated by some interesting surfaces, for example the Bolza surface, the Klein quartic and so on, or at least nearly saturated. We can't be quite sure whether it's exactly saturated or just nearly saturated. Okay, so why is this interesting? Well, one basic point is that there is no analytic formula for these eigenvalues. It's not an integrable problem, so there's just no way to solve in closed form for these eigenvalues. And this is a reflection of the fact that the Laplace equation is basically the Schrödinger equation corresponding to quantizing the geodesic motion of a particle, a free particle on the hyperbolic surface, and that is known to be a classically chaotic problem. There's ergodicity, mixing, and there's just no way to solve everything analytically.
And you can say that the corresponding quantum version, this Laplace equation, is one of the simplest examples of a quantum chaotic model. I mean, the details depend on the type of the surface, but I just want to leave it there: it's basically a quantum chaotic model. But usually the chaos shows up in the high energy spectrum, things like random matrix statistics and so on; you can study them by looking at eigenvalues of very large index, very large eigenvalues. Today the focus will be on the low energy spectrum, like lambda one, lambda two, and so on. And this low energy spectrum is also important in various parts of pure math. For example, one of the most important conjectures in this field is Selberg's one-quarter conjecture, which says that for a class of surfaces, namely those which come from quotienting the upper half plane by congruence subgroups of SL(2,Z), there is a lower bound on lambda one, which is one quarter. So Selberg's conjecture, which is itself related to various results in number theory, says that for X coming from a congruence subgroup of SL(2,Z), lambda one is at least one quarter. But okay, today I'm certainly not going to prove Selberg's conjecture; instead I'm going to prove some upper bounds on lambda one. So just to give you a sample result, here is something that we'll show. If you consider genus-two closed hyperbolic surfaces, then there is a moduli space; it's a three-complex- or six-real-dimensional moduli space. And one thing that we'll show is that there is a universal upper bound on lambda one over this genus-two moduli space. So for all genus-two hyperbolic surfaces, there is an upper bound on lambda one. Our bound is that lambda one is less than 3.8388977, which you can compare to the previous best known bound, due to Yang and Yau. So Paul Yang and Shing-Tung Yau proved in 1980 that lambda one for genus-two surfaces must be smaller than or equal to four.
And now you can also compare lambda one to various surfaces. There is a special hyperbolic surface of genus two, known as the Bolza surface. It's the most symmetric one in the entire genus-two moduli space, and for that one, lambda one is equal to 3.8388873. So our bound is pretty close to being saturated by the Bolza surface; it differs in the fifth decimal place. In the rest of the talk, I'll explain to you how to get this bound, and I'll explain more generally the method for getting these bounds. So that's the mathematical motivation, but there's also a physics motivation, which is why I originally started thinking about this. And that's, roughly speaking, that the method I'll explain is very close to the usual conformal bootstrap. In the conformal bootstrap, we have a funny situation: we have some infinite set of equations for infinitely many unknowns, and we don't really know how to solve them. In more than two dimensions, the only solutions of those bootstrap equations that we have really nailed down are free theories. And we don't really know how to solve all the equations analytically, or to arbitrary precision, if you want to go beyond free theory, say to solve the 3D Ising model using the bootstrap. So instead it would be nice if we had some kind of mathematical machine which automatically outputs solutions of the conformal bootstrap, rather than trying to solve the equations one by one. And as I'll explain, these hyperbolic manifolds are a toy model for this situation. For each hyperbolic manifold, you can define a set of correlation functions, you can define local operators, you can expand the correlation functions in conformal blocks for the SL(2,R) group. And you can say that each hyperbolic manifold is a manifest solution of this slightly modified set of bootstrap equations.
So maybe by studying this problem, we can learn more about the usual conformal bootstrap. But this is of course much more speculative. And finally, maybe another possible motivation is a secret dream of mine to get mathematicians more interested in the conformal bootstrap. So maybe mathematicians can help us solve the 3D Ising model if they realize that what we are doing is interesting and makes sense. Anyway, that's enough of the general blah, blah. Let me just go and set up the mathematical problem. All right, just... Unless there are... Where is the exact value coming from? Which exact value? Oh, yeah, good. So this is not an exact value. You can get the value for the Bolza surface by numerically solving the Laplace equation on that surface, using the finite element method or something like this. So it is not known to arbitrary precision. Okay, so now a big part of this talk will be a review of standard mathematical results about hyperbolic surfaces. So mathematicians, please bear with me, but it will also set up the problem for us. Okay, so the point is to think of X as a quotient of the upper half plane. The upper half plane is of course parameterized as usual by x and y, where x is an arbitrary real number and y is positive. And there is a natural metric on the upper half plane, (dx^2 + dy^2)/y^2, normalized so that the scalar curvature is minus one; it's the flat metric times one over y squared. And the central role in this talk will be played by the group of orientation-preserving isometries of the upper half plane, which is PSL(2,R). So the group of orientation-preserving isometries, which I'll denote G, is PSL(2,R): the set of two-by-two real matrices with unit determinant, so a, b, c and d real, quotiented by the center, which is plus or minus the identity, since minus the identity acts trivially.
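As a quick aside on the normalization just mentioned, the curvature of the metric (dx^2 + dy^2)/y^2 can be checked symbolically. This is an editor's sketch (the variable names are mine, not from the talk), using the standard formula K = -e^(-2 phi) (phi_xx + phi_yy) for the Gaussian curvature of a conformal metric e^(2 phi)(dx^2 + dy^2):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
phi = -sp.log(y)  # conformal factor: ds^2 = e^(2 phi) (dx^2 + dy^2) = (dx^2 + dy^2)/y^2

# Gaussian curvature of a conformal metric: K = -e^(-2 phi) * (phi_xx + phi_yy)
K = -sp.exp(-2 * phi) * (sp.diff(phi, x, 2) + sp.diff(phi, y, 2))
print(sp.simplify(K))  # -1
```

With this normalization the curvature is constant and equal to minus one, matching the convention used throughout the talk.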
And the way this acts: when you write z for a point in the upper half plane, z = x + iy, then z gets mapped to (az + b)/(cz + d). So these matrices act as usual by Möbius transformations. Okay, and this means you can think about the upper half plane as the quotient of G by the maximal compact subgroup K. So K is going to be the maximal compact subgroup, which is just SO(2,R) inside PSL(2,R). So K consists of matrices—let me denote an element of K by k theta, parameterized by an angle: cos theta over two, sine theta over two; minus sine theta over two, cos theta over two. Theta can of course be restricted to lie between zero and two pi, because you're quotienting by minus the identity. And this subgroup fixes z = i, and it sort of rotates around z = i. That's why the upper half plane is the quotient of G by K. And it will also be useful to parameterize the elements of G. So small g will usually denote an element of PSL(2,R), and using the Iwasawa decomposition you can parameterize an element of G by three numbers—of course, G is a three-dimensional manifold. Let's parameterize it like this: x and y give a point in the upper half plane, and there's a circle. So it's a product of a translation, a scaling and a rotation. As a smooth manifold, PSL(2,R) is just a product of the upper half plane and a circle, right? The circle corresponds to the maximal compact, so that's just K. And finally, there is a Haar measure, which is fixed up to overall rescaling to be this; it's the bi-invariant measure on PSL(2,R), okay? Okay, so what is the surface now? Well, the surface X is just a quotient of the upper half plane by some discrete subgroup of PSL(2,R). So gamma will be a subgroup of PSL(2,R) which is discrete, so it has no accumulation points inside PSL(2,R), and I'm also going to restrict to compact quotients.
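Before moving on, the Iwasawa parameterization can be sanity-checked numerically. This is a sketch of mine (the helper names are not from the talk): writing g = n(x) a(y) k(theta) as a product of a translation, a scaling and a rotation, the rotation fixes i, so g maps the point i to x + iy, recovering the upper-half-plane coordinates:

```python
import numpy as np

def n_x(x):                      # translation along the boundary
    return np.array([[1.0, x], [0.0, 1.0]])

def a_y(y):                      # scaling; maps i to i*y
    return np.array([[np.sqrt(y), 0.0], [0.0, 1.0 / np.sqrt(y)]])

def k_theta(theta):              # rotation; fixes z = i
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, s], [-s, c]])

def mobius(g, z):                # action on the upper half plane
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

# Iwasawa decomposition: g = n(x) a(y) k(theta)
g = n_x(0.7) @ a_y(2.5) @ k_theta(1.1)

print(mobius(k_theta(1.1), 1j))     # ~ i: the rotation fixes i
print(mobius(g, 1j))                # ~ 0.7 + 2.5i: g carries i to x + iy
print(round(np.linalg.det(g), 12))  # 1.0, as required for (P)SL(2,R)
```

So the three numbers (x, y, theta) really do coordinatize the group, with (x, y) a point of the upper half plane and theta the circle direction.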
So H mod gamma will be compact, which in particular implies that it has finite volume. That will be important for having a vacuum in the spectrum. But I'll also allow X to be an orbifold, which means the following in terms of gamma. You can classify elements of PSL(2,R) into hyperbolic, elliptic and parabolic. The parabolic elements lead to cusps at infinity, so they would lead to non-compactness; hence gamma has no parabolic elements. If you want X to be smooth, without orbifold points, then gamma should only contain hyperbolic elements. But I'm also going to allow gamma to contain elliptic elements. Elliptic elements are rotations by some angle, which in this case needs to be two pi divided by some integer, and they lead to orbifold points. So gamma can contain hyperbolic and elliptic elements. If there are no elliptic elements, it's a smooth surface; if there are elliptic elements, it's an orbifold. And what we want to do is to solve the Laplace equation in the upper half plane. So f(x, y) is a function on the upper half plane which is invariant under gamma—for all elements (small gamma will denote an element of this discrete subgroup), f is invariant—and the Laplace operator in the upper half plane takes the following form. So we are solving this equation, with this Laplace operator, in the space of smooth functions on the upper half plane which satisfy this invariance condition. Now, let me describe some simple examples. The simplest examples are probably the triangle groups. To define a triangle group, you start with the upper half plane and you draw three geodesics—geodesics are just semi-circles ending on the boundary of the upper half plane. And you set it up in such a way that there is a triangle, and then you can define gamma to be generated by rotations around the vertices A, B, C—this is A, this is B and this is C.
So the rotations which map this picture into itself are rotations by twice the angle at each vertex. And this gives a Fuchsian group if the angles alpha, beta and gamma are each pi divided by an integer, so you have some integers n1, n2, n3, okay. And what is the surface? Well, the fundamental domain for gamma is two copies of this triangle—you sort of need to reflect this triangle once—and then you glue the triangles together along the edges, and the vertices A, B and C become orbifold points of order n1, n2, n3, okay. And, by the way, this only makes sense when the area of the triangle is positive, which is the case when one over n1 plus one over n2 plus one over n3 is smaller than one. And then you can ask yourself, what is the smallest such triangle? The smallest such triangle in terms of area is the one corresponding to (n1, n2, n3) = (2,3,7). You're asking which (n1, n2, n3) gives the smallest area, subject to this condition—that's (2,3,7). And the next smallest one is (2,3,8). So here is a picture of the (2,3,8) case. You can see the blue and the white are the (2,3,8) triangles. The fundamental domain for gamma is just two triangles glued together like that. And you have orbifold points here, here and here, and this side is identified with that one. Okay. Okay, so that's an orbifold. Then if you want to get a smooth manifold, you need to have genus at least two, and then you get a six-real-dimensional moduli space of genus-two surfaces. And the most symmetric element of this moduli space is the Bolza surface, whose fundamental domain is over here.
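As an aside on the minimality claim for (2,3,7): by Gauss-Bonnet, the hyperbolic triangle with angles pi/n1, pi/n2, pi/n3 has area pi times the angle defect 1 - 1/n1 - 1/n2 - 1/n3, so finding the smallest triangles is a quick enumeration. A sketch of mine, not from the talk:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Area of the triangle with angles pi/n1, pi/n2, pi/n3 is pi * defect,
# where defect = 1 - 1/n1 - 1/n2 - 1/n3 (Gauss-Bonnet);
# a hyperbolic triangle needs defect > 0.
candidates = []
for n1, n2, n3 in combinations_with_replacement(range(2, 60), 3):
    defect = 1 - Fraction(1, n1) - Fraction(1, n2) - Fraction(1, n3)
    if defect > 0:
        candidates.append((defect, (n1, n2, n3)))

candidates.sort()
print(candidates[0])  # (Fraction(1, 42), (2, 3, 7)) -- the smallest triangle
print(candidates[1])  # (Fraction(1, 24), (2, 3, 8)) -- the next smallest
```

The (2,3,7) triangle has defect 1/42, which is the smallest positive value of the defect over all integer triples, and (2,3,8) with defect 1/24 is indeed next.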
This is the fundamental domain for the Bolza surface. And it just corresponds to gluing together a certain number of these triangles—I forgot exactly how many there are, but yeah, these are still (2,3,8) triangles, and you just glue the right number of them. So we get a smooth surface of genus two, which comes from gluing (2,3,8) triangles. And for genus three, the most symmetric point is the Klein quartic, which comes from gluing (2,3,7) triangles. So in some sense the Klein quartic is even more symmetric than the Bolza surface, because it comes from gluing smaller triangles. Okay, very good. So that's the end of the setup, and I'll start describing the bootstrap approach, unless there are some questions about the basics. Okay, so I guess given any particular surface, it's presumably not hard to compute the spectrum numerically. So are you saying the difficulty is that if you just compute the spectrum numerically, it's hard to understand the general picture of the spectrum over the moduli space—is that the thing? Yeah, well, I mean, it's not entirely trivial to go to arbitrary precision; you need to do quite careful numerics on a computer. So first of all, it's non-rigorous, because you're solving things numerically, and I don't think there is an efficient way to go to arbitrary precision. But even without that, this doesn't allow you to prove some general upper bound on, say, lambda one over the entire moduli space, because you would need to explore the entire six-dimensional moduli space, or the twelve-dimensional moduli space of genus three, if you wanted an upper bound on lambda one. But can you give some intuition? For example, some parts of the moduli space for this higher genus correspond to some handles becoming very long, and I presume in those parts of the moduli space the gap is probably going to go to zero.
So which parts of the moduli space are the most interesting? Yeah, the most interesting ones are where you don't have any long handles. In some sense, the intuition is that the most symmetric point would maximize the gap, and that's indeed what seems to be happening, which is what the bound I'm proving suggests. Okay. Thanks. But I guess the most interesting thing about it is that it's a new method, which maybe will allow you to learn more general bounds on the spectrum. We are focusing on an upper bound on lambda one, but you still get infinitely many constraints on the spectrum, and who knows what people can learn. Okay. By the way, it's Thibault Damour speaking. The Cheeger inequality—will it play any role? It talks about the first eigenvalue, no? I'm actually not aware of that inequality, but it will not play any role. Okay. Yeah. I mean, I've probably seen it at some point, but maybe you can tell me at the end what the Cheeger inequality says. All right. So what is the setup for the bootstrap? Well, in the bootstrap we basically need two things. We need to have some Hilbert space which is a unitary representation of some nice semisimple Lie group, which in this case is going to be PSL(2,R). And we also need to have a notion of some kind of product on the Hilbert space which is invariant under the action of the group. We will have both of these two things here. So let me describe it. First, we need some Hilbert space which is a unitary representation of PSL(2,R). The first guess for a Hilbert space—and this one doesn't turn out to be a representation of PSL(2,R)—would be to just take the space of L2 functions on the surface. Okay. But of course this is not a representation of PSL(2,R), because this H mod gamma is the same thing as the double quotient of G, where you quotient on the right by K and on the left by gamma. And by quotienting on both sides, you've broken the whole symmetry.
So this is not a representation of PSL(2,R), of course, but a way to make this into a representation of PSL(2,R) is to enlarge it a little bit, or quite a lot, and consider the space of L2 functions on G mod gamma. So now G acts on G mod gamma by right multiplication, and that makes this V into a unitary representation of PSL(2,R). So this is the Hilbert space for the analog conformal field theory, so to say. Of course, it's much less complicated than in a full-fledged conformal field theory, but there is some pretty close analogy nevertheless. Okay, so what are the elements of this Hilbert space? Let f be some element of this V. What is f? Well, f is a function from the group to the complex numbers such that it's invariant under left multiplication, right? So for all gamma in capital Gamma, f is invariant. And we can define the following norm on V: it's just the integral, with respect to the Haar measure, over this group manifold—I'm going to usually call this G mod gamma a group manifold. It's some three-dimensional manifold, which is a principal K-bundle over the surface; the fibers are the circles which we quotiented by before, but now this G mod gamma is a three-dimensional manifold. So the norm is just the integral of the norm squared of f with respect to the Haar measure, and we take the f's such that this is finite, okay? Those are the elements of the L2 space. And this is a unitary representation of G, because for each g-tilde you can act on f by right multiplication, and this right multiplication preserves the norm, because you have an invariant measure for the integration. So this makes V into a unitary representation of G. It's certainly not irreducible, but it's nice and unitary. And by the way, V includes V0, where V0 was the space of functions on the surface—those are just the elements of V which are right-invariant under K. So you can decompose V according to how K acts.
So K acts by right multiplication still, and it's just a circle group, so you can decompose V in terms of the charge with which SO(2) acts. And let's call the space with charge n, Vn. Then V0 is just what I said before: V0 is just functions on the surface. And more generally, Vn is the space of L2 sections of a bundle—I guess they're called (n,0)-forms. So the elements of Vn are functions on the upper half plane which are not invariant under gamma but covariant: they transform with a factor of (cz + d) to the minus 2n. So if (a, b; c, d) is an element of gamma, then f(z) transforms like this. So somehow this V combines functions on the surface with a natural extension: sections of bundles of things that transform like forms. And the virtue of this is that while PSL(2,R) doesn't act on V0, it acts on the whole thing. If you act with raising and lowering operators of PSL(2,R), you can move between the different Vn's. Now what we are going to do is decompose V into irreducible representations of PSL(2,R) and see that they correspond to various automorphic forms, including the Maass forms, which are just the Laplace eigenfunctions, but there will also be holomorphic forms inside V. But maybe let me stop now to see if there are any questions about what I've said so far. Yeah, I mean, if you don't understand something, you should really ask, because this is super crucial. So the basic idea is that—the most important thing is that G acts on V and it can move you between the different Vn's. So you have generators, the usual generators L0 and L plus or minus one. L0 is just the generator of rotations, so it measures the charge: L0 f just tells you what this integer n is, and L plus one and L minus one take functions and increase or decrease the value of n. They are just some differential operators on this group manifold.
No—these L plus or minus one and L0. Okay, so if there are no questions, let me proceed with decomposing V into irreducible representations of PSL(2,R). So we want to decompose V as a direct sum of unitary irreducible representations of PSL(2,R). These were classified a long time ago, and there is a nice complete list. The first one is the trivial representation. The next one is the principal series, which I'm going to combine with another one called the complementary series; for our purposes they will look identical. And finally, there is the discrete series and its conjugate. The trivial representation and the principal series are real representations, while the discrete series is complex and its dual is the conjugate series—that's why they come in pairs. Okay, and now each of these three items, one, two and three, corresponds to some object on the surface. I want to explain what these objects on the surface are, and we'll see that they exactly correspond to the so-called automorphic forms: the Laplace eigenfunctions, and the holomorphic modular forms in the case of the discrete series. I mean, of course, all of this is very standard—I'm just reviewing standard material, and the new part is coming right after that. All right, so let's start with the trivial representation, just the one-dimensional trivial representation of PSL(2,R). It corresponds to constant functions—constant functions on G mod gamma, this three-dimensional manifold, which is the same thing as constant functions on the surface. This representation appears exactly once inside V, and that's because there is just a one-dimensional space of constant functions. And the constant functions appear because, well, first of all, G acts transitively on this G mod gamma, so only constant functions can appear, and the volume is finite, so any constant function is L2 normalizable.
So basically there is only one vacuum in this Hilbert space; it appears exactly once, and that's a consequence of the finiteness of the volume of the manifold, okay? Now, the principal and complementary series. These representations are labeled by a single parameter, and I'm going to denote them collectively as P lambda, P standing for principal, and I'm not going to make a distinction between the principal and complementary series—principal and complementary just correspond to different values of lambda. For principal, lambda is greater than or equal to one quarter, and for complementary, lambda is between zero and one quarter. These are infinite-dimensional representations which have exactly one vector of each charge under the rotations. So we have L0 vn = n vn, and L plus or minus one acting on vn is some square root of something that's not too important times vn plus or minus one. So L plus or minus one moves you between different n's, as I already said many times. But now let's imagine that we have such a representation. Yeah, I should say that lambda enters here: it's the Casimir. So the quadratic Casimir is lambda. It enters inside this expression, and it's going to be convenient to write it as delta times (1 minus delta). So the principal series corresponds to delta equal to one half plus i t, and complementary to delta between zero and one half. Okay, now you see that there is a vector inside the principal series which has zero charge under the rotations, which means it's just a function on the surface, right? If something has zero charge under the rotations, it doesn't depend on the circle direction of this group manifold, so it's literally just a function on the surface. So let's take v0 inside P lambda, which is embedded inside our Hilbert space, okay?
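One quick way to see where the parameterization lambda = delta(1 - delta) comes from: on the upper half plane the Laplacian, with the sign convention that makes the spectrum non-negative, is -y^2 (d^2/dx^2 + d^2/dy^2), and the power function y^delta—not gamma-invariant, but the basic building block—is an eigenfunction with exactly this eigenvalue. A symbolic check (my sketch, assuming that sign convention):

```python
import sympy as sp

x, y, Delta = sp.symbols('x y Delta', positive=True)
f = y**Delta  # building-block eigenfunction (not Gamma-invariant)

# Hyperbolic Laplacian on the upper half plane, sign chosen so eigenvalues >= 0
lap_f = -y**2 * (sp.diff(f, x, 2) + sp.diff(f, y, 2))

eigenvalue = sp.simplify(lap_f / f)
print(sp.simplify(eigenvalue - Delta * (1 - Delta)))  # 0, i.e. eigenvalue = Delta(1 - Delta)
```

So lambda = delta(1 - delta) is just the Laplace (equivalently Casimir) eigenvalue in this parameterization, symmetric under delta going to 1 - delta.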
So this v0 is the same thing as a function on the surface—a function on the upper half plane which is gamma-invariant—but it also needs to be an eigenfunction of the Casimir, because it sits inside an irreducible representation. So that means that, well, you can work out what the Casimir is as a differential operator on the surface, and it's just the Laplace operator. Okay, so this f is just an eigenfunction of the Laplace operator with eigenvalue lambda, where lambda is the parameter of the principal series. And the same thing holds for the complementary series, just in a different range of lambda. So, in other words—first of all, I should say that it's a one-to-one correspondence: not just that if you have a principal series you'll find an eigenfunction of the Laplace operator, but every eigenfunction of the Laplace operator arises from a principal series in this way. So there's a one-to-one correspondence between direct summands inside V which are isomorphic to principal series and eigenfunctions of the Laplacian. In other words, the task of decomposing V into irreducibles contains a subtask which is the same thing as finding the spectrum of the Laplacian. They are literally almost equivalent problems. Well, why are they not equivalent? They are not equivalent because there is also the discrete series, and the discrete series corresponds to holomorphic modular forms on the surface. So the discrete series is like the principal series, but delta is now a positive integer. It's a lowest-weight representation. The lowest-weight vector can be denoted v delta, and it corresponds to some function which now transforms with a non-trivial factor: it transforms like a modular form under gamma, of weight 2 delta. But now it's a lowest-weight representation, so L1 annihilates v delta. This is like a primary operator of a conformal field theory.
And well, this turns out to be equivalent to the statement that f is holomorphic. In other words, f is a holomorphic modular form for the group gamma. And the conjugate representation of the discrete series is just the highest-weight representation: there's a vector of weight minus delta which is the highest weight, and this one corresponds to something anti-holomorphic—it is just the conjugate of f, right? So these things come in pairs, f and f-bar. They are both covariant with this factor, or maybe with a z-bar for f-bar, and f is holomorphic, f-bar is anti-holomorphic. So to summarize, decomposing V into irreducibles looks like this. There is the trivial representation. Then there is a sum over Laplace eigenfunctions on the surface—a sum over some index i, where the lambda i are the eigenvalues—a sum of principal series with those labels. And finally there's a sum over discrete series, where the discrete-series summands correspond to linearly independent modular forms of weight 2 delta i. And one nice thing is that the degeneracy of a given discrete series can be computed from topology: the Riemann-Roch theorem tells you the number of these linearly independent forms. So the number of independent discrete series with parameter delta is given by this formula—g is the genus of the surface, then there is a sum over the orbifold points and their orders, and there is a small correction at delta equals one. So these count weight-2-delta modular forms. In particular, for delta equals one we get holomorphic one-forms, and the number of linearly independent one-forms is just the genus. Okay, I'm going a little slow, but okay. That's a summary of the talk so far. This formula is basically all you need to remember. Are there any questions about that? I have a question.
So since the inner product here is different from the radial quantization inner product in one-dimensional CFT, is there some analog of the unitarity bound on delta or lambda? So there is a unitarity bound, but I described it, right? The discrete series is unitary when delta is a positive integer, and the principal and complementary series when delta is in this range. Okay, so this is the complete list of unitary representations of PSL(2,R). The representations which look most like the representations we are used to from CFT are the lowest-weight ones, like the discrete series. But in CFT, delta is a continuous parameter, which is because there we don't really deal with PSL(2,R) but with the universal cover of PSL(2,R). In this case the circle direction is not decompactified—it's compact—so delta must be an integer for the discrete series. Okay, I think that answers it. Okay, so now let's define the correlation functions. A correlation function is just the projection from V to the trivial representation, and the way to do it, if you have some element of the vector space, some function on the group manifold, is just to integrate over the group manifold with respect to the Haar measure, right? So this is clearly a G-invariant map from V to the trivial representation, and we can think of it as a correlator in the CFT. Okay, so if you feed into it a constant function, you get some nonzero value, but if you feed in an element of an irreducible representation which is not the trivial representation, you just get zero. So the one-point functions of operators vanish unless the operator is the identity, as usual. Okay, but now finally we can talk about the OPE. The OPE is going to be the thing which constrains the spectrum.
Right now, I didn't really describe any constraint on the spectrum, but the idea is that there is a G-invariant product on V, or on the set of smooth vectors in V, which imposes very strong constraints on the possible spectra that can appear in this decomposition. Right, so what we want is to have some invariant map, a bilinear map from V times V to V, which is G-invariant. And there is essentially a unique way to do that, and that's just to send the two functions to the pointwise product of the two functions. So f1 and f2 are functions on the group manifold, and if you take their pointwise product, you still get a function on the group manifold, and the map is clearly G-invariant. It's bilinear and it's got all the properties that you want. There's some small subtlety, which is that this only works for smooth vectors. So if you take two general L2 functions, their product may not be L2, but if the functions are smooth, then their product is also smooth, and it's also L2-normalizable because the surface is compact. Okay, so it's a product defined on a restriction of V. Okay, and now the point is that, well, for PSL(2,R), there are only finitely many trilinear invariant maps. Another way to say it is that if you consider invariant maps from the product of two representations to a third representation, there is only a finite-dimensional space of such invariant maps. In fact, it's usually one-dimensional, and in the cases that I'll look at it is one-dimensional, which means that, well, let's take two functions and let's imagine that f1 is now inside R_i and f2 is inside R_j, inside V, okay, so. Why are you writing R_i^infinity? Because I thought that each R_i only consists of infinitely differentiable functions. No, that's not quite true. So the K-finite vectors inside R_i consist of infinitely differentiable functions, but R_i is infinite-dimensional and a general vector need not be smooth.
Infinite-dimensional, okay, fine, fine, fine, okay, sorry, sorry about that, yeah. Yeah, but it's okay, yeah. I'm going to suppress the infinity superscript from now on; just imagine that everything is smooth. Okay, and now we can just decompose this using the direct sum decomposition from earlier. So it's some infinite sum over the irreps appearing earlier with some structure constants. So these C's are structure constants labeled by, well, I should maybe write this as C_ij with a capital index I. And because of this invariance, the dependence of the f-tilde on f1 and f2 is completely fixed by the conformal symmetry, by PSL(2,R). So if f1 ranges over R_i and f2 ranges over R_j, then this f-tilde is completely fixed by invariance, and the only thing which is not fixed is this number. So this will be the case if the space of such maps is one-dimensional, which will be the case for us; in general, it could be some finite-dimensional space. Okay, now to describe these maps, let me restrict to the discrete series. So let me define some coherent states which will look like local operators, coherent states for the discrete series. So let f_i be the lowest-weight vector in this discrete series representation, thought of as a subspace of V. And what we can do now is define something which looks very much like a local operator. So we can introduce an auxiliary parameter w, which plays the role of a complexified one-dimensional spacetime coordinate. And you can just act with this group element, the exponentiated translation, on the lowest weight. Okay, so this is exactly like a local operator in a 1d CFT. This is the local operator at the origin, and we act with the translation to get the local operator somewhere else, at some value w. And this is inside V if w is in the unit disk. As usual, a local operator corresponds to a normalizable state if the operator is inserted inside the unit sphere.
And similarly, you can define Ō_i, which will be a similar translate of the highest weight. So let me just write it first. Right, so f was some holomorphic modular form; f̄ is its conjugate, so it's an anti-holomorphic modular form, and it's an element of the highest-weight representation. And now you can act with this exponential of L_1 to get a local operator corresponding to the conjugate discrete series. And this is inside V if now w is outside of the unit circle — the unit disk, yeah. Which means that the O's and Ō's will not touch each other: all the O's will be inserted inside the unit disk, all the Ō's outside of the unit disk. And the point is that now the conformal generators act on these O's in the same way as we are used to from the usual CFT literature. So for example, L_{-1} acts as a derivative on both O and Ō; the action on Ō is the same as the action on O. The action of L_0 is w d/dw plus delta_i acting on O(w). The action of L_1 is again the standard w squared d/dw plus two delta_i w acting on O(w). Okay, so now we can just use invariance to constrain everything. Okay, now one can also describe the OPE. So suppose we take the OPE of two discrete series representations. It turns out that the only thing this can map into is a discrete series representation. So it's also lowest-weight, and there's a lower bound on delta_3, which is delta_1 plus delta_2. And there's a corresponding OPE: O_i(w_1) is some function on the group manifold, O_j(w_2) is another function on the group manifold, and the product means that we are taking the pointwise product of those two functions on the group manifold. And the claim is that this can be expanded using these O_k's for the other discrete series, with structure constants, where k runs over all the discrete series such that delta_k is greater than or equal to delta_i plus delta_j.
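As a quick consistency check, the three differential-operator actions just quoted close into an sl(2) algebra. A minimal sympy sketch (the sign convention is mine: with the operators exactly as written on the board, one finds [L_m, L_n] = (n − m) L_{m+n}):

```python
import sympy as sp

w, delta = sp.symbols('w delta')
f = sp.Function('f')(w)

# Differential-operator action on O(w), as stated in the talk:
# L_{-1} = d/dw,  L_0 = w d/dw + delta,  L_1 = w^2 d/dw + 2 delta w
L = {
    -1: lambda g: sp.diff(g, w),
    0:  lambda g: w * sp.diff(g, w) + delta * g,
    1:  lambda g: w**2 * sp.diff(g, w) + 2 * delta * w * g,
}

def commutator(m, n, g):
    """[L_m, L_n] acting on a test function g."""
    return sp.expand(L[m](L[n](g)) - L[n](L[m](g)))

# With this sign convention the algebra reads [L_m, L_n] = (n - m) L_{m+n}
for m in (-1, 0, 1):
    for n in (-1, 0, 1):
        if -1 <= m + n <= 1:
            lhs = commutator(m, n, f)
            rhs = sp.expand((n - m) * L[m + n](f))
            assert sp.simplify(lhs - rhs) == 0
print("sl(2) algebra closes")
```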
So the OPE is non-singular, because the power that appears here is delta_k minus delta_i minus delta_j, which is non-negative. This holomorphic OPE is non-singular, and there's the contribution of the lowest-weight representation coming at delta_k. Sorry, Dalimil, the f_i's are functions on which manifold? Everything is a function on G mod Gamma; it's a three-dimensional manifold. Yes, but when you are doing this exponential of w L_{-1}, you wrote, yes, these f_i's and the f̄_i's, they are at a specific point. So f_i is some specific function on G mod Gamma, which is basically the holomorphic modular form with some extra prefactor, forget about the prefactor. And now you act with this group element, and this group element just sort of rotates the function on the group manifold. So once you've acted with this group element, you can no longer sort of factor out the fiber dependence; you can no longer think of this as a function on the surface in any sense for general w, but it's still a function on the group manifold, on G mod Gamma. Okay, thank you. And now I'm taking two of them, multiplying them together and expanding, using the direct sum decomposition, and the claim is that only lowest-weight things can appear, thanks to representation theory. If you do the same thing for lowest weight times highest weight, then this can map into the trivial representation and the principal series. And this is how the principal series is going to appear for us. Okay, so that's all I want to say about this. Now you can compute correlation functions, right? So you can compute the two-point function: the two-point function of two lowest weights is zero, because there is no singlet appearing in the OPE of two lowest weights, but you can compute a correlator of O_i with Ō_j, which now contains a singlet. And this, thanks to SL(2,R) invariance, needs to look like (w_1 minus w_2) to the power minus two delta_i.
And similarly for the three-point function: the three-point function of lowest weight, lowest weight and highest weight will look like the usual three-point function, with the OPE coefficient being just this C_ijk defined here, the same as in the OPE, and there's the usual triple product of (w_1 minus w_2), (w_1 minus w_3), (w_2 minus w_3) to some powers, okay. So my time is almost up, but I'm almost done. The thing that we are going to use to constrain the spectrum is the four-point function, as usual in the bootstrap. The specific four-point function that I'm going to focus on is of the following form; let me first write it down. There'll be a four-point function of O, O, Ō, Ō at four different points, okay. So now what's going on? Now we can use the product to take a product of O_i O_j, and of Ō_k Ō_l. So we can take these products, and that means that in this OPE channel, we're only going to get a contribution from the lowest-weight representations. So only things fixed by topology: the number of these things is fixed by the Riemann-Roch theorem, and it's under control. But we can also take the product in a different way. We can take a product of O_j with Ō_k, and in that channel we are going to find the identity and the principal series. So we are going to find things which depend on the spectrum of the Laplacian, and in this way we are going to constrain the spectrum of the Laplacian. So in the S channel, we have something which is kind of known, and in the T channel, we have the thing that we want to constrain. Known, but not fully known, because of the OPE coefficients. Exactly. I mean, the OPE coefficients depend on the point in the moduli space, for example. If you knew all the holomorphic OPE coefficients, then you would know the entire Laplace spectrum thanks to this bootstrap equation, right?
You could just expand this correlator in the S channel, and if you knew all the structure constants for the modular forms, you would just expand it in the cross channel and recover the entire Laplace spectrum. So one thing which has not appeared so far are the conformal blocks. Are they going to appear? Yeah, they are about to appear. Yeah, sorry, I'm a little slow. So let's assume that all of the deltas of these O_i, O_j, O_k and O_l are the same, and let's just call it delta. Now this correlator, this four-point function, let me first write it in a way which manifests the symmetry. It's really a function of a cross-ratio of the four points. So chi is the cross-ratio, the only cross-ratio, the one that we all know and love. And well, what I was saying before implies that G(chi) can be expanded in two different ways. In the S channel, we have a sum over discrete series, in such a way that delta_m is greater than or equal to two delta. And there are structure constants C_ijm times the conjugate of C_klm, times a known function. So this function is fixed by the conformal symmetry, and it's just the 1d conformal block. So this G_delta(chi) is a known function which is just some 2F1: chi to the delta times 2F1(delta, delta; 2 delta; chi). Okay, so that's the S channel, and you're summing over holomorphic things. In the other channel, the expansion looks as follows. There's the usual prefactor that translates between the S and T channels, there's a contribution of the identity, which is just one, and then there is a sum over the Laplace spectrum with some other structure constants. These are the structure constants corresponding to the OPE of D times D-bar into principal series states. These things depend on the specific surface; let's call them C-tilde. And the contribution comes with a known function again, a function fixed by conformal symmetry, which is just the T-channel conformal partial wave. So this H(lambda, chi) is another 2F1.
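The S-channel block quoted here is the familiar SL(2) block, so it's easy to check numerically. A small mpmath sketch verifying the OPE-limit behavior G_δ(χ) ≈ χ^δ (1 + (δ/2)χ + …) as χ → 0 (the function name is mine):

```python
import mpmath as mp

def block(delta, chi):
    """s-channel SL(2) block: G_delta(chi) = chi^delta * 2F1(delta, delta; 2 delta; chi)."""
    return chi**delta * mp.hyp2f1(delta, delta, 2 * delta, chi)

delta = 6
chi = mp.mpf('0.001')
# OPE limit: the hypergeometric series starts as 1 + (delta/2)*chi + O(chi^2)
approx = chi**delta * (1 + delta / 2 * chi)
assert abs(block(delta, chi) / approx - 1) < 1e-5
print(block(delta, chi))
```

A useful closed-form check: for δ = 1 the block degenerates to G_1(χ) = −log(1 − χ), since 2F1(1, 1; 2; χ) = −log(1 − χ)/χ.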
It's a 2F1 with parameters built out of delta and the principal series dimension Delta, evaluated at chi over (chi minus one). Okay, so it's a sum of the T-channel conformal block and its shadow. And the reason you need that is, well, I should say lambda is the same thing as Delta times (one minus Delta), and the shadow dimension is Delta-bar equals one minus Delta. And one way to see that this is the partial wave which appears in the T channel is that G(chi) can be expanded around chi equals zero, and it's a holomorphic function with a nice holomorphic Taylor series around chi equals zero, and both the S and T channels should manifest this. So in particular, the T-channel blocks that appear here must be holomorphic around chi equals zero. And that's why you need to take the combination of a block and its shadow such that the sum is holomorphic around chi equals zero: the logarithm drops out in that case. Okay, so now I'm about to finish with the punchline. Maybe I just need three or four more minutes, is that okay? So let's just take these two formulas and equate them to each other, because they are the same thing in some range of chi. And in particular, they are the same in a power series expansion around chi equals zero. So let's just take this equation, which is what we would call the bootstrap equation, and expand it around chi equals zero. Now, the terms in the power series expansion around chi equals zero correspond to just inserting some K-finite vectors in here, because all of these things are some exponentials with w appearing in the exponential. So expanding around chi equals zero is the same thing as expanding the exponential to some finite order and taking that descendant. So expanding to some finite order in chi around chi equals zero is like inserting a K-finite vector into each of these slots. So in particular, it's a perfectly well-defined thing.
You can take a four-point function of that and you can expand it in two different channels. This is slightly different from the usual bootstrap, where we cannot expand both channels around chi equals zero. But if you do that, well, what do you get? So for example, let's expand to this order, two delta plus three. Everything starts at two delta, so the third order is two delta plus three. Then you find an equation which only contains contributions from the principal series. Okay, so there's a way to take a linear combination such that the discrete series drops out, and you find this equation. So remember, delta is the external dimension, which is the weight of the modular form that we are using. And there's some cubic polynomial of lambda which takes this form. Okay, now I'm just setting i, j, k and l to be equal to each other, to be the same operator, and this is the equation you get. So it's a sum over the structure constants squared times some cubic polynomial of lambda, which is parameterized by delta. And now let's plot this polynomial. It looks like this. It has a zero at the origin, lambda equals zero, so the identity doesn't contribute, and there's some negative region over here. Right, so what this equation tells us: this equation can only be satisfied if there is at least one Laplace eigenvalue in this range, because otherwise you would get a sum of positive terms. In particular, you get an upper bound on the first Laplace eigenvalue, which is this: lambda_1 must be smaller than or equal to this lambda_gap, which is this number. Well, you can figure it out; so let's say delta equals six. Delta equals six is the lowest delta which is always present on all hyperbolic surfaces. In particular, it is the gap in the discrete series spectrum for the [2,3,7] triangle orbifold. So let's just use delta equals six, because then we get a universal bound on all hyperbolic surfaces.
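The logic of the bound can be sketched numerically: any functional that vanishes at λ = 0 and is strictly positive beyond its largest root forces at least one eigenvalue at or below that root. The cubic below is purely illustrative — its coefficients are made up and are not the actual functional from the talk:

```python
import numpy as np

# Hypothetical cubic functional P(lambda) with P(0) = 0, a negative dip,
# and P > 0 beyond its largest root (illustrative coefficients only).
coeffs = [1.0, -50.0, 200.0, 0.0]          # P(x) = x^3 - 50 x^2 + 200 x
roots = np.roots(coeffs)
lambda_gap = max(r.real for r in roots if abs(r.imag) < 1e-10)

# The bootstrap equation says sum_i c_i^2 * P(lambda_i) = 0 with c_i^2 >= 0.
# If every lambda_i were above lambda_gap, every term would be positive and
# the sum could not vanish; hence lambda_1 <= lambda_gap.
P = np.poly1d(coeffs)
assert P(0) == 0                            # the identity drops out
assert all(P(x) > 0 for x in np.linspace(lambda_gap + 1e-6, 2 * lambda_gap, 100))
print(f"lambda_1 <= {lambda_gap:.4f}")
```

In the actual computation the polynomial comes from the order-(2δ+3) term of the crossing equation, and higher orders feed into the semidefinite program mentioned next.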
Then you can compute this root, and the root is just equal to this value: it's 45.507, blah, blah, blah. And you can then compare it to the smallest surface, which is just the [2,3,7] triangle surface, the [2,3,7] hyperbolic orbifold, for which lambda_1 is 44.88835. So it's pretty close. It's not quite there, but that's just because we only expanded to finite order, to the third order. And you can expand to any order you want, and then you can set up a systematic linear or semidefinite program. We did this using SDPB, thanks to David Simmons-Duffin and Walter Landry. And what you get when we expand to like order two delta plus, I think, 35 or so, is an upper bound using this logic which is lambda_1 smaller than or equal to 44.8883537. So to the number of decimal places to which we solved this Laplace equation on the [2,3,7] triangle numerically, we get a perfect agreement. Okay, so that's when we set i, j, k, l to be the same and delta equals six. And finally, how do we get a bound on the moduli space of arbitrary genus? The way to fix the genus is to use the delta equals one one-forms. So the O_i, O_j, O_k and O_l will correspond to the delta equals one one-forms, with i, j, k and l ranging from one to g. So we are considering the set of equations coming from expanding this object, where i, j, k and l all range from one to g, over all the linearly independent holomorphic one-forms on the surface. And in this way, we get a universal bound on the entire genus-g moduli space. The only difference is that instead of this positivity, we are imposing positive semidefiniteness of matrices, which is very standard in the conformal bootstrap literature. And what you get is this. So for genus two, our bound is lambda_1 less than or equal to 3.8388977, where the previous bound was lambda_1 less than or equal to 4, due to Yang and Yau.
And the Bolza surface, which is the most symmetric surface of genus two, has lambda_1 equal to 3.8388873. We did it for any genus, but so far the agreement is best at genus three, where our bound, using three different external operators, is lambda_1 less than or equal to 2.6785, while the previous best known bound, which was actually published last year, 2020, is due to Ros, who showed that lambda_1 is bounded by 2.7085. So it's pretty close to our bound, but you can compare it to an actual surface, the most symmetric surface in the genus-three moduli space, which is the Klein quartic, whose lambda_1 is 2.67, and we don't know the following decimal places. So again, to this order, our bound agrees with the most symmetric point in the moduli space, okay. Oh, by the way, I should also say that in the cases when the bound is saturated, you can extract not just the first eigenvalue, but in principle all the eigenvalues, right? If the bound is saturated, this linear functional must vanish on the entire spectrum. So you get this structure of double zeros. Now these kinds of functionals are also closely related to the magic functions; they are in spirit the same thing as the magic functions from the solution of the sphere-packing problem. Here this is just lambda_1, this is lambda_2 and lambda_3, and we compared it for the triangle: we extracted in this way the first five eigenvalues, and they agree to at least three decimal places with the direct numerical solution. So that's it. And there are some natural future directions, like generalizing to non-compact surfaces, where there's a continuous spectrum from Eisenstein series, or generalizing to higher-dimensional hyperbolic manifolds.
Or think about arithmetic surfaces, in which case there is an enhanced symmetry due to the Hecke operators. But yeah, I think maybe most importantly, we should think about what this can teach us about actual CFTs. What I explained to you is that for each hyperbolic surface, there is a natural bootstrap problem which is a small deformation of the standard conformal bootstrap of 1d CFTs. And it has the property that every hyperbolic manifold manifestly satisfies all the equations in this modified bootstrap problem. So I think the most important challenge is to figure out if there is a natural construction of CFTs along these lines, or some similar lines, such that the construction automatically outputs objects which manifestly satisfy all the standard bootstrap equations. But that's of course very ambitious, and maybe a long-term goal of this research program. So thank you, and sorry for running over time. Thanks a lot, Dalimil. Thank you, Dalimil. We do have time for questions in spite of being over time. So please go ahead, guys and girls. Maybe I'll ask Dalimil: how do the computation time and space grow with the number of digits that you want to compute? Like, to compute another digit, how much more would you need to run your program? So I think basically what happens is that, first, well, the bound converges very quickly. At first it's some kind of polynomial approach, but as soon as the bound gets to the vicinity of the correct answer, it converges exponentially fast. But actually, I forgot to say this, but it seems like our bounds are converging to a number which is strictly above the actual value. So for example- Thank you, that was actually my second question: where does it converge? I should have emphasized it. So we believe that these digits over here, the 8388977, have all converged, and this is strictly above the actual value for both.
Which is known to good precision. But of course, there's no contradiction there. I mean, the bound is satisfied, and we are only using a single correlator. So the idea is that you could use as many correlators as you want, maybe throw in some holomorphic two-forms as external operators, various things. And probably if you try the right thing, you'll be able to improve these digits using the bootstrap as well. But you're going to have to do something slightly more complicated than what we did. I see. And is this computation faster than a finite element method, to get the same number of digits? Well, using our expertise in the bootstrap, it was faster, but we are not experts at that; we haven't really tried to optimize our computation of the Laplace eigenvalues. So these digits you can get in a couple of minutes, I think, on a cluster. Actually, I did the computation on my laptop, and it took me maybe a day at most to get all these digits, probably much less. Yeah, I don't exactly remember. Thanks. I have a question. So since you used the delta equals one operators to produce the bound lambda_1 less than 3.8-blah-blah-blah, I want to ask: given an arbitrary subgroup Gamma, does this delta equals one operator always exist? A good question. Yeah, it doesn't always exist, but it always exists when the surface has at least genus one. So there's a formula for the number of these delta equals one forms, and it's exactly equal to the genus. So if the genus is zero, they don't exist; if the genus is one or greater, they always exist. Oh, okay. Their number is equal to the genus. So that's how we were able to fix the genus in our problem. That's basically the only way that we are fixing the genus, and it's what allows us to put a general bound on the entire fixed-genus moduli space. Maybe some questions from mathematicians? Yeah, I have a question, Maxime. Are you a mathematician? Yeah, Maxime. Yeah, Maxime. Oh, Maxime. Yeah, yeah, yeah.
In this picture of the magic function, at lambda_2 — does it mean that lambda_2, lambda_3 will be degenerate? I'm asking: why is it a double zero as opposed to a single zero? Yeah, yeah, yeah. No, it's not related to degeneracy. It's just related to the positivity properties that are involved. I didn't explain the linear programming or semidefinite programming setup that we use, but basically what we are doing is: we take this equation, we expand it to some finite order, so we get some set of polynomials of lambda which need to sum to a positive thing when you sum them over the spectrum. And now we want to take such a linear combination which is positive above lambda_gap. I see. If it's positive above lambda_gap, and there is a saturating solution with that lambda_gap, then the only way this is consistent is if the polynomial actually vanishes at all of these points. Yeah, it needs to vanish and it's non-negative, so it needs to have a zero of even order, right? It cannot cross below zero. Am I making myself clear? Yeah, yeah, but does it mean that lambda_2 appears twice in the spectrum? Well, it's not related to that. I see, yeah, yeah. It is true that, for example, for the symmetric surfaces like Bolza and Klein, there's a degeneracy in the spectrum, because these surfaces have some automorphism group, and the eigenfunctions come in irreducible representations of this automorphism group, which can be more than one-dimensional. So there is a degeneracy, but this degeneracy cannot be directly seen by staring at this function. This double zero is not related to that. Okay, thanks. Sorry, could I ask? Sure, sure. Yeah, thanks for the talk. I have been thinking along similar lines about the relation between Maass forms and conformal bootstrap problems.
So yeah, first I just wanted to clarify your setup. It seems to me that for automorphic forms, there are papers of Reznikov and Bernstein — I know you probably know about this, right? There's this setup which is called a strong Gelfand formation, right? So within the lattice of subgroups, you have like a diamond, and along each edge you have sort of the strong Gelfand property, right? And then when you take an automorphic representation, you go along these edges, and you can go in two ways, and this produces a Rankin-Selberg type identity, which is sort of your bootstrap equation, right? So it seems to me that it's the same here. Am I correct? So I'm not familiar with the work that you're mentioning. Ah, okay. Does it involve — is it in the context of arbitrary gamma, or is it arithmetic gamma? It's of course arbitrary, right? I mean, that's the most interesting case, right? There are papers of Bernstein and Reznikov from like 15 years ago which discuss these consistency conditions. The only thing which they did not do is, of course, numerics, right? Okay, that makes me a bit sad. But it's maybe not exactly what you have, because what they did — what I remember exactly is that they do external Maass forms, right? Oh, okay. And they mention that you can do external discrete series, but they didn't bother to do it, maybe. Right. Yeah. What was the reason? But maybe it's not precisely the same. Okay, yeah. But it's very close. Okay, okay. Yeah, but I think it has only triple products — this is my memory. No, no, no, it has quadruple products. I mean, there are quadruple products, yeah. It's a paper of Reznikov alone, I think. Okay, yeah, I'll take a look. Thanks. Yeah, okay.
Yeah, and maybe — so that's just a comment, and then I also had a question. So is it correct that with this you sort of add systematics and efficient numerics, right, in your work, or is there more to it? Yeah, I think — well, in his work, he didn't realize that there is this PSL(2,R) symmetry. Yeah. He was working with functions on the surface, using the algebra of covariant derivatives and the fact that the manifold is hyperbolic — you know, what is the contraction of two covariant derivatives, what is the commutator. So basically, our main contribution is realizing that in his setup there is a conformal symmetry. Oh, okay, there is this PSL(2,R) symmetry. Mm-hmm. Yeah, sure. Yeah, and so you don't kind of produce interesting new analytic functionals, you know, related to L-functions of the surfaces and so on, do you? Or, I mean — well, I think these surfaces only have L-functions when they are arithmetic, right? Otherwise you can't really talk about L-functions, at least — I mean, okay. You know. Yeah, yeah. Well, we haven't constructed exact analytic functionals for these. As I said, it looks like the bounds are very close, but they're not exactly saturated. Sure. So the optimal functional does not really know about the surface itself; it only knows about it within some good approximation, but not exactly. Okay, thanks. Okay, thanks. I'm sorry, Misha — so you knew about this, but you didn't do it. Yeah, because — I mean, somehow, you know. Well, shame on you. Most of us didn't know about this, so we didn't do it; you knew about this but didn't do it. Is it that you only like analytic functionals? Without numerics it's not so interesting, but if you can do numerics, of course that's... But actually, I have a question, Dalimil.
So these people who proved bounds — Yau, and this person in 2020, I forgot the name... Ros. Ros, right. So how did they do it? Did they use some sort of similar techniques, or completely different ones? As far as I can tell, they used completely different techniques. For example, Yang-Yau: they take the surface and they imagine they have some holomorphic map to the sphere — this map always exists. And then, you know, they integrate some kind of functional of that map over the surface and do some integration by parts, things like that, I think. And it allows them to put bounds on the Laplace eigenvalue. Basically, the Laplace eigenvalue comes from acting with the Laplace operator and using the fact that the absolute value of the gradient is positive. So I think things like that are used in the Yang-Yau proof. But that bound is not precise enough to be saturated. Well, I think the Yang-Yau bound is saturated at genus two by a different metric. The Yang-Yau bound doesn't use hyperbolicity; it's a general upper bound on lambda_1 for a general Riemannian metric, a bound in terms of the area of the surface. Of course, without that you don't have any general upper bound, because you can make the surface arbitrarily small, and then lambda_1 goes to infinity; but if you normalize the area, then there's an upper bound. As far as I can tell, their techniques are very different. Any other questions? We have time for one or two. Yes. So the Cheeger and Buser inequalities — they give both lower and upper bounds for lambda_1, which use the minimal length of a geodesic cutting the surface in two. Okay. Yeah, I remember. You remember. Okay. So it's probably rough compared to what you do. Yeah, it's rough, but you might still compare in some cases. Yes, yes, yes.
Thanks. Yeah, well, it will be interesting to know, when your paper is out, what the mathematicians who really work on these things think. Yeah, yeah. So what I always find surprising — for me it's very surprising, and quite amazing, I guess the miracle for me, just like for the Ising model, is how close you can get to the actual answer by using some finite subset of the bootstrap constraints. Also for the Ising model, we had known the bootstrap equations for a long time; people just hadn't bothered to check what they give. Yes. So here again, for some amazing reason, you just get very close. So it's really interesting. Yeah, I agree. To me it looks very interesting. Of course, the consumers, the mathematicians, will judge eventually. When I spoke to some mathematicians here at the Institute, they were quite excited. But I mean, they also really love hyperbolic surfaces here, so maybe they're a bit biased. Okay. Well, if there are no further questions, then let me stop the recording.