Okay, so I will start my last lecture. Today I will talk about the recent work of Zhiwei Yun and Wei Zhang, their so-called higher Gross–Zagier formula over a function field. As I tried to explain last time, the Chowla–Selberg formula should be considered as a kind of special case of the Gross–Zagier formula. "Special case" does not refer to the theorem itself or the proof itself: when you prove Gross–Zagier, you prove that one generating series equals another generating series, and in a very degenerate case that identity implies the Chowla–Selberg formula. At the end of the last lecture, Emmanuel asked why the Gross–Zagier formula is true. At the very beginning, in the 80s, we believed Gross–Zagier is true because BSD is true. But the Chowla–Selberg formula is a different story: Chowla–Selberg is about the second term of a Taylor expansion, not the first term, while BSD is always about the first term. That already indicates that the Gross–Zagier formula probably has nothing to do with the leading term; it is simply an expression for a Taylor coefficient. You will see this clearly in today's lecture: one should hope for a formula explaining every derivative. So the work of Yun and Zhang is significant in two directions. First, they proved a formula for all the higher derivatives. Second, they introduced a new method, the relative trace formula. In the classical proof of the Gross–Zagier formula we used the Weil representation. The Weil representation is not as powerful as the trace formula, but it is more effective: once it works, everything is precise. With the trace formula you have to do a lot of analysis, but once it works, it works for a general group. So the method I explain today should be the future method for number theory, more so than the Weil representation. Of course, Yun and Zhang work over a function field.
Over a function field they have a new technique available on the geometric side of the trace formula: in fact, they prove the identity by proving an identity between perverse sheaves. That is the new technique, and it does not work in the number field case. They also assume everything is unramified, because they want the moduli stack of shtukas to be a smooth stack over F_q. Okay, so we start with some notation. I start with X over F_q, a smooth, geometrically connected curve, and we take a double cover ν: Y → X, which we assume is unramified. Of course the statement still makes sense in the ramified case — I checked with Wei last week, and Yun and Zhang have already pushed everything to square-free conductor, but they said they need some time to finish the manuscript, so we will not talk about the new result; we stay with an unramified double cover. By Riemann–Hurwitz we then have g_Y − 1 = 2(g_X − 1); in particular g_X cannot be 0, because I assume Y is connected. Standard notation: F = F_q(X), E = F_q(Y). Then A denotes the adeles of F, the restricted product of the F_v with respect to the O_{F_v} over the places v — in the function field case, the places are the closed points of X. And η is the quadratic character of A^×/F^× with values in ±1 associated to the extension E/F by class field theory. So that is pretty much standard notation. Then you have the L-function L(s, η) = Σ_a η(a) q^{−s·deg a}, where a runs over the effective divisors on X (effective divisors, not ideles). You can write it pretty explicitly: L(s, η) = ζ_Y(s)/ζ_X(s), the zeta function of Y divided by the zeta function of X.
By the Grothendieck–Lefschetz trace formula, L(s, η) = det(1 − Frob·q^{−s} | V), where V is the quotient H^1(Y, Q_ℓ)/H^1(X, Q_ℓ), and the dimension of V is 2g_Y − 2g_X = 2g_X − 2. So this L-function is actually a polynomial in q^{−s}; it is a pretty simple object. The question we want to understand: what is the Taylor expansion of L(s, η) at s = 1? By the functional equation it is equivalent, and in fact easier, to work at s = 0, so that is what we will do. This is a pretty interesting question, and there are two classical cases. The first is L(η, 0): this is just the special value, and it is the class number formula, L(η, 0) = #Jac_Y(F_q)/#Jac_X(F_q). Then, of course, L′(η, 0) should be given by something like a Chowla–Selberg formula. In this case you might hope to proceed the same way as in the classical Chowla–Selberg formula, but I am not really sure, because we do not have a Dedekind eta function here. In fact I am not even sure how to prove this without using the Gross–Zagier formula: in the classical case, on a modular curve, there is the Dedekind eta function to compute with. So even this first-derivative case is unclear by direct methods. OK, so this is question one. And as I said last time, everything should be a special case of a higher Gross–Zagier formula. This higher Gross–Zagier formula relies on the construction of a moduli space over a base of dimension r — that is the new concept introduced in their paper. Classical arithmetic geometry is over a number field, or over the function field of a curve, so the base is one-dimensional, and with a one-dimensional base you can only take the first derivative. If you want the r-th derivative, you introduce more.
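To make the two special values concrete, here is a small numerical sketch. It treats L(s, η) = Σ_n a_n q^{−ns} as a polynomial in q^{−s}; the values q = 3 and the coefficients a = [1, 1, 3] are invented for illustration (they are not from the lecture), but the two identities checked — the class number value at s = 0, and the fact that each s-derivative produces exactly one factor of log q — hold for any such polynomial.

```python
import math

q = 3.0                      # hypothetical base field size
a = [1.0, 1.0, 3.0]          # hypothetical coefficients: L(s) = sum a_n q^(-n s)

def L(s):
    return sum(an * q**(-n * s) for n, an in enumerate(a))

# Closed form: L^{(r)}(0) = (-log q)^r * sum_n a_n n^r, so the ratio
# L^{(r)}(0) / (log q)^r is rational -- each derivative eats one log q.
def deriv_closed(r):
    return (-math.log(q))**r * sum(an * n**r for n, an in enumerate(a))

# sanity check of the closed form against a central difference (r = 1)
h = 1e-6
assert abs((L(h) - L(-h)) / (2 * h) - deriv_closed(1)) < 1e-4

# class number formula in this model: L(eta, 0) = sum a_n = P_Y(1)/P_X(1),
# i.e. #Jac_Y(F_q)/#Jac_X(F_q), since P_Y(t) = P_X(t) * L(t)
print(L(0))   # -> 5.0
```

The point of `deriv_closed` is exactly the normalization that will reappear later: dividing the r-th derivative by (log q)^r always lands in the rationals.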
You need a base of higher dimension. That is a very interesting phenomenon. So from now on, let us go to the new thing; we assume r is even. Now we introduce shtukas. Definition: let S be a scheme over X^r; that really means S is a scheme over F_q together with a morphism (x_1, …, x_r): S → X^r. Then a rank-n shtuka (a GL_n-shtuka) on S is a diagram of morphisms of vector bundles on X × S: inclusions E_0 ↪ E_1 ↪ ⋯ up to E_{r/2}, then the inclusions change direction down to E_r, and the last one, E_r, you want isomorphic to σE_0. Let me explain the notation. The E_i are rank-n vector bundles on X × S. The successive subquotients — E_i/E_{i−1} or E_{i−1}/E_i, depending on whether i ≤ r/2 or i > r/2 — are line bundles supported on graphs: the quotient at the i-th step is isomorphic to (Γ_{x_i})_* R_i, where Γ_{x_i}: S → X × S is the graph of x_i, and the R_i are line bundles on S. So that is the first condition. The second: σE_0 means (1 × Frob_S)^* E_0 — on X × S I take Frobenius in the S-direction, but not in the X-direction. Everything is over F_q, so it is the q-power Frobenius there, not on X. This is called a shtuka. Why this is interesting is, of course, the morphisms: a morphism φ of shtukas is a morphism of diagrams, which we will not describe in detail. To show you why this is an interesting object, I give two examples. Example: r = 0, so I do not have any modification. [Question: Cartesian?] It just means commutative; in fact you probably do not really need any Cartesian condition here. [No, you don't need it.]
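Assembled from the board notation above, the diagram defining a rank-n shtuka can be typeset as follows (a sketch of the definition as I have reconstructed it from the lecture):

```latex
\[
\mathcal{E}_0 \hookrightarrow \mathcal{E}_1 \hookrightarrow \cdots \hookrightarrow
\mathcal{E}_{r/2} \hookleftarrow \mathcal{E}_{r/2+1} \hookleftarrow \cdots \hookleftarrow
\mathcal{E}_r \xrightarrow{\ \sim\ } {}^{\sigma}\mathcal{E}_0
  := (1 \times \mathrm{Frob}_S)^{*}\mathcal{E}_0 ,
\]
each $\mathcal{E}_i$ a rank-$n$ bundle on $X \times S$, with successive quotients
line bundles supported on the graphs $\Gamma_{x_i} \colon S \to X \times S$:
\[
\mathcal{E}_i/\mathcal{E}_{i-1} \cong (\Gamma_{x_i})_{*} R_i \ (i \le r/2),
\qquad
\mathcal{E}_{i-1}/\mathcal{E}_i \cong (\Gamma_{x_i})_{*} R_i \ (i > r/2),
\]
where the $R_i$ are line bundles on $S$.
```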
If I have a modification, I probably do not even need it. [There follows some discussion of the indexing: the inclusions go up for i ≤ r/2 and change direction after r/2, so each step is indexed by one i; and of the ranks: the E_i are all vector bundles on X × S of the same rank n — only the successive quotients are line bundles.] So r = 0 is a very interesting case, because there are no modifications at all; only the last datum survives, and the isomorphism E_0 ≅ σE_0 is exactly a descent datum. So the shtuka stack Sht^0_n is essentially the classical automorphic space GL_n(F)\GL_n(A)/GL_n(O_A) — more precisely, the groupoid of F_q-points of the stack of rank-n bundles. And r = 1 does not arise, since r is even. So let us take the case n = 1. Then all the E_i are line bundles, and a morphism between line bundles is determined by a divisor; here all the divisors are determined by the graphs Γ_{x_i}, so in the end everything reduces to one equation — you do not get many equations. In fact, without writing down all the details: roughly speaking, for n = 1 the stack Sht^r_1 fits into a Cartesian square, Sht^r_1 → Pic_X on top, X^r → Pic_X on the bottom, where the right vertical map goes Pic_X → Pic_X (I will identify it in a moment) and the bottom morphism sends (x_1, …, x_r) to O_X(Σ_{i ≤ r/2} x_i − Σ_{i > r/2} x_i).
Let me see — yes, the first sum is over i ≤ r/2 and the second over i > r/2, so the divisor has degree 0. [Question: what is the map to Pic_X here, the vertical one?] From Pic_X to Pic_X you send a line bundle L to σL ⊗ L^{−1}: you take the Frobenius twist, then tensor with the inverse. That is what makes the square Cartesian. And the kernel of this morphism is nothing else than Pic_X(F_q): the fixed points of Frobenius are exactly the F_q-points. [Is the map really from Pic, or from Pic^0?] From Pic, because E_0 is just some line bundle; you do not care which line bundle — there is no degree condition. So this tells you that Sht^r_1 is a principal homogeneous space over X^r under the group Pic_X(F_q). This is already a very interesting object: it gives you a geometric construction of an étale cover of X^r, for arbitrary r. So even for n = 1 it is an interesting object. But we will restrict everything to the case n = 2; the case n = 1 is now pretty clear. In this case we introduce Hecke correspondences. For any effective divisor d on X, we define the Hecke stack Sht^r_2(h_d). It is a stack, and it fits into a correspondence: Sht^r_2(h_d) maps to Sht^r_2 × Sht^r_2, compatibly with the morphisms to X^r. It classifies morphisms of shtukas φ: E → E′ such that the divisor of det(φ) equals d. As I said, a morphism between two shtukas gives morphisms between all the underlying vector bundles, and by the compatibility of the diagrams, the determinant divisor does not depend on which vector bundle in the chain you choose.
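The identification of the fibers with Pic_X(F_q) is Lang's theorem. A toy model of the Lang map on a finite cyclic group, with "Frobenius" taken to be multiplication by q (a purely illustrative, hypothetical stand-in for Pic^0 — not the actual geometry), shows the kernel-and-torsor structure:

```python
# Toy model of the Lang map on A = Z/nZ with "Frobenius" phi = multiplication
# by q (hypothetical stand-in; the real statement is about Pic_X over F_q).
# Lang map: x |-> phi(x) - x.  Its kernel is the set of fixed points of phi,
# i.e. the "F_q-points", so every nonempty fiber is a torsor under them.
q, n = 3, 12
kernel = [x for x in range(n) if (q * x - x) % n == 0]
fixed  = [x for x in range(n) if (q * x) % n == x]
assert kernel == fixed                   # kernel = "A(F_q)"
image = {(q * x - x) % n for x in range(n)}
assert len(image) * len(kernel) == n     # each fiber is a kernel-torsor
print(len(kernel))                       # -> 2
```

One caveat on the toy: over F̄_q the Lang map on a connected group is surjective, whereas in this finite model the image can be a proper subgroup; only the kernel/fiber statement is being illustrated.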
[Question: you did not explain what this Cartesian property is. Would you explain?] Maybe commutativity is really the definition: a morphism of shtukas just means a commutative diagram — you map each diagram to the next, making everything commute, with the last arrow compatible with the Frobenius twist of the first. Forget about Cartesian; you do not really need it. The point is that the modifications have rank 1; if I did not have that rank-1 restriction I would worry, but with it there is nothing to worry about. So you get this Hecke operator; that is the usual construction. For our study today we actually want to quotient by another thing: we only work with G = PGL_2. So we only study Sht^r_G := Sht^r_2 / Pic_X(F_q). The point is the following: in the defining diagram the only non-trivial datum is the last arrow, the one with the Frobenius, and if you twist the whole diagram by a line bundle already defined over F_q, nothing changes — so Pic_X(F_q) acts. Then I also quotient the Hecke stack: Sht^r_G(h_d) := Sht^r_2(h_d) / Pic_X(F_q), by the image of that action. [What does the quotient mean here?] All the morphisms between bundles descend when you quotient by line bundles, so the correspondence descends; that is what it means. So anyway, once we pass to PGL_2, the correspondence becomes self-dual: there is an involution switching the two projections.
It is like for elliptic curves: if you have φ: E_1 → E_2, you have the dual morphism coming back. If you do not work with PGL_2, when you come back you have to modify by something, so you cannot easily define the dual morphism; with PGL_2 you can, and that is the case we consider. OK, so these are some easy things about shtukas. Of course, the major theorem about these stacks is the following. Theorem (Drinfeld for r = 2, Varshavsky for general r): the stack Sht^r_n is a Deligne–Mumford stack, locally of finite type over F_q. In fact it has a countable filtration: because we are dealing with vector bundles, you can attach a stability condition, and using stability you can cover it by schemes of finite type. But unlike the usual moduli of vector bundles on a curve, where the unstable boundary is small, here the boundary can go on forever — the boundary is still very big — so the whole stack is not of finite type; it is of infinite type. The second statement: the morphism Sht^r_n → X^r is smooth — smooth, let me see, proper? No, not proper, sorry, it cannot be proper — smooth of relative dimension r(n − 1). The interesting case for us is n = 2: then Sht^r_2 → X^r has relative dimension r, so Sht^r_2 is a smooth stack of dimension 2r over F_q. This gives you a hint that you can define intersection theory on this thing. [Question: you don't mention Lafforgue in this statement?] No, this has nothing to do with Lafforgue. [Not even the smoothness?] No, this is Varshavsky's theorem.
I mean, Lafforgue's work builds on top of this; the statement itself is purely Varshavsky's theorem. OK. And for people who care about the Langlands program, here is the difference with what the Gross–Zagier formula needs: people who care about the Langlands program only care about the generic fiber over X^r. We actually care about the whole stack over F_q, and this difference costs a lot of trouble. For the generic fiber you know, via the Langlands program, how to decompose the cohomology; but for this Gross–Zagier formula we want to understand the cohomology of the whole stack over F_q, and that is the major difficulty in this problem. In other words, we care about the integral model, not just the generic fiber. OK, so these are the basic concepts about shtukas. Next I want to talk about CM shtukas — in their terminology, Heegner–Drinfeld cycles — and the geometric kernel. In the first few lectures we talked about three different kernels: the geometric kernel, the analytic kernel, and the spectral kernel, and the theorem is proved by showing all three kernels are equal to each other. So first, the Heegner–Drinfeld construction; that is the easiest thing to see. We have ν: Y → X. This gives a morphism from the rank-one shtuka stack of Y over Y^r to the rank-two shtuka stack of X over X^r, just by pushforward: if you have line bundles on Y × S, you push forward and get rank-two vector bundles on X × S. That is the typical CM situation. And then you can quotient both sides by Pic_X(F_q), the whole picture quotiented by the same group.
So you get θ: Sht^r_T → Sht^r_G, where T is the torus (Res_{Y/X} G_m)/G_m — the restriction of scalars from Y to X, divided by G_m — so Sht^r_T is the rank-one stack for Y quotiented by Pic_X(F_q). But you get more than this morphism, because you also have the morphism of bases Y^r → X^r, so you can put everything in one picture over X^r and Y^r. The notation for the fiber product is too complicated; the main thing I am going to study is the two morphisms, now both over Y^r: Sht^r_T → Y^r has relative dimension 0, and the base change M := Sht^r_G ×_{X^r} Y^r → Y^r has relative dimension r. So let me rewrite the picture: write Z := Sht^r_T, so I get the diagram Z → M → Y^r. Here M is a stack of dimension 2r, Z is a stack of dimension r, and Y^r has dimension r. In fact Z is a proper stack, because its relative dimension over Y^r is 0. [Question: I don't understand the T. The first stack is over Y, so the group should be over Y also; by the n = 1 description I would just get Y^r. Z for me should be Y^r. I don't understand why you have this restriction of scalars.] No — you do take the quotient, but by Pic_X(F_q), not by Pic_Y(F_q). [OK.] So Z is a principal homogeneous space over Y^r under Pic_Y(F_q)/Pic_X(F_q); it is a Galois cover of Y^r with this group, because you cannot quotient by all of Pic_Y. So this is Z, the Heegner–Drinfeld cycle; we are going to compute with it. And as I said, Z is proper, because of this finiteness, so Z defines a compactly supported class. Let me write down some notation.
So θ_*[Z] lives in Ch_{c,r}(M), the Chow group with compact support in dimension r of M. That is the good thing: it is compact. That is one reason, I guess, they study the case where Y → X is non-split. Presumably one could take Y to be two copies of X: then you would look at shtukas which are direct sums of two line bundles — a perfectly good subspace — but that cycle would no longer be compact. I asked them why they do not also compute this even more trivial case, Y = two copies of X, and they said there is some problem with the compactifications; we are not going to describe that. So in particular we can define the self-intersection number (z, z). Since the two factors are the same cycle, this really means the following: it is equal to the degree of c_r of the relative tangent bundle of M over Y^r restricted to Z. You can define it this way, and defined this way it looks like a Faltings height. So the main theorem, the one related to Faltings heights — which we should maybe call the higher Chowla–Selberg formula — is by Yun and Zhang, probably from last year. One of the very surprising theorems is that they give a formula for the derivatives of the L-function: they prove (z, z) = (2^{r+2}/(log q)^r) · L^{(r)}(η, 0). Here r is even, but that is enough, because this function does not have the odd Taylor coefficients — is that right or not? OK, so this is the result we are interested in. As I said, the Chowla–Selberg formula is always a special case of a Gross–Zagier formula; so what does the higher Gross–Zagier formula do? [Question: do we have some information on (z, z)? Do you know its sign?]
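The right-hand side of this higher Chowla–Selberg formula can be made concrete with hypothetical data: writing L(s, η) = Σ_n a_n q^{−ns}, one has L^{(r)}(η, 0) = (−log q)^r Σ_n a_n n^r, so the (log q)^r in the denominator cancels and the right side is an explicit rational number. A sketch (q and the a_n are invented for illustration):

```python
# Right-hand side of (z, z) = 2^(r+2) / (log q)^r * L^(r)(eta, 0),
# for a hypothetical L-polynomial L(s, eta) = sum_n a_n q^(-n s).
# Since L^(r)(0) = (-log q)^r * sum_n a_n n^r, the log q factors cancel.
q = 3
a = [1, 1, 3]                        # hypothetical coefficients

def rhs(r):
    assert r % 2 == 0                # r is even in the theorem
    taylor = sum(an * n**r for n, an in enumerate(a))  # = L^(r)(0)/(-log q)^r
    return 2**(r + 2) * taylor       # (-1)^r = 1 since r is even

print(rhs(0), rhs(2))                # -> 20 208
```

This is only bookkeeping, of course — the content of the theorem is that this rational number is an intersection number on Sht^r_G.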
The sign? Oh, it is positive, by the Riemann hypothesis — by GRH, which is a theorem in the function field case: if you take the Taylor expansion at this point, every coefficient on this side is non-negative. So yes, it is always positive. For higher Gross–Zagier, which we are going to explain, we also need a formula not only for (z, z) but for (z, h_d * z), where h_d is the Hecke operator attached to an effective divisor d. Roughly speaking, the picture is: you have the Hecke correspondence and you have the cycle z; you pull z back to the correspondence, then push forward — that is the cycle h_d * z. In fact, even to prove the higher Chowla–Selberg formula you cannot argue directly: you have to prove the higher Gross–Zagier formula first. So let me give the strategy of the proof — or rather, the ingredients; maybe I should call them ingredients, not a strategy. The first ingredient: these intersection numbers (z, h_d * z) can be computed in a cohomology group, namely in V := H^{2r}_c(M ⊗ F̄_q, Q_ℓ(r)), where ℓ is a prime different from p — just by the cycle class. [Just by the cycle class?] Yes, the cycle class. That is something very unlike the number field case: there, presumably, the analogue would be some arithmetic cohomology with real coefficients, and I do not know how to compute Arakelov intersection theory purely using cohomology. But in the function field case you can do that; that is the advantage. [Question: at the very beginning of the talk you introduced a V as a quotient of first cohomology of the curves — is it the same?] Oh no, not the same; that earlier one is easier. This is a question of middle-degree cohomology of M. This is the key object.
So a major part of the paper is to study the structure of V under the Hecke algebra — Q_ℓ (or Q̄_ℓ, whatever) adjoined with all these Hecke operators h_d, for d effective divisors. The main theme of the proof is the following. The main result is a decomposition V = V_Eis ⊕ V_0: an Eisenstein part plus a complement that we call the cuspidal part. Presumably the best thing you would want is to show that V is modular — like in the Langlands program, a modularity, Taniyama–Shimura type of statement: you would want V to be modular under the Hecke algebra. Unfortunately they could not prove that. But they proved something not far from it: V decomposes into an Eisenstein part and a cuspidal part, and V_0 is finite-dimensional over Q_ℓ. This is actually very nice: even if I do not know V is modular, the non-Eisenstein part is not a very big space — and V_0 really is modular in the end. The Eisenstein part, which I am not going to explain, is infinite-dimensional; so the larger part is infinite-dimensional, and the remainder is finite-dimensional. But do not be surprised that V_0 is finite-dimensional. You might think: this looks like studying P^1 over a curve, where the cohomology should be finite-dimensional. But here the Eisenstein part is infinite-dimensional because, as I said, the stack is exhausted by stability strata and is not of finite type; each stratum you add contributes Eisenstein classes, but nothing cuspidal — the cuspidal part is concentrated in the stable core. [Question: if one defines the stable locus, is its intersection with the Eisenstein part trivial?] No, not trivial. But as I said, with the stability stratification, each stratum only introduces Eisenstein classes, and the cuspidal part sits in the core, so you do not worry too much.
So although at the very beginning this moduli stack is scary, because it is not of finite type, when you study the cohomology you find that the most interesting part is the stable locus; everything else only introduces Eisenstein series. That is the good news. The second ingredient is a consequence of the first: the above decomposition allows us to translate the computation of (z, h_d * z) to divisors d of sufficiently big degree. So you do not need to compute at d = 0 — d = 0 is the case we care about for the Chowla–Selberg formula, but we do not compute it directly. It suffices to compute for enough big d's: because V_0 is finite-dimensional, computing finitely many big d's pins everything down. And for big d we know how to compute, while for small d we do not: to translate everything into perverse sheaves, into algebraic geometry, one has to define certain auxiliary moduli spaces, and these moduli spaces have better properties when deg d is big than when d is small. That will be the main effort of the computation. Another important point is that, under this decomposition, the action of the Hecke algebra on V_0 is almost semisimple, so at least formally I can write the following. Define I(d) := (z, h_d * z). Then I have a decomposition z = Σ_π z_π + z_Eis, where π runs over the irreducible automorphic representations of G which are spherical (everywhere unramified), and the Eisenstein term — which I will not write separately, and which really includes continuous spectrum — I will pretend can be absorbed into the formal sum. Once you have such a formal sum, you can compute the geometric kernel.
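The logic of "compute I(d) for finitely many big d, then solve for the individual pairings" is, once finitely many π contribute, just linear algebra. A toy sketch with invented eigenvalues λ_π(h_d) (all numbers hypothetical; in the actual proof the finiteness comes from dim V_0 < ∞):

```python
# Toy version of the finite-dimensionality argument: suppose only 3
# representations contribute, with hypothetical eigenvalue rows
# lam[d][pi] = lambda_pi(h_d), and we can compute I(d) for 3 big divisors d.
# Then the pairings (z_pi, z_pi) are recovered by solving a linear system.
lam = [[2, 5, 1],
       [3, 1, 4],
       [1, 2, 2]]                 # hypothetical lambda_pi(h_d); full rank
z_true = [7.0, 0.0, 3.0]          # hypothetical pairings (z_pi, z_pi)
I = [sum(l * z for l, z in zip(row, z_true)) for row in lam]  # "observed" I(d)

# solve lam * z = I by Gauss-Jordan elimination with partial pivoting
M = [row[:] + [b] for row, b in zip(lam, I)]
n = len(M)
for c in range(n):
    p = max(range(c, n), key=lambda r: abs(M[r][c]))
    M[c], M[p] = M[p], M[c]
    for r in range(n):
        if r != c and M[c][c]:
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
z_rec = [M[i][n] / M[i][i] for i in range(n)]
assert all(abs(x - y) < 1e-9 for x, y in zip(z_rec, z_true))
```

The design point mirrored here: varying h_d varies the coefficient row, and enough independent rows determine each (z_π, z_π) without ever computing I(0) directly.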
So here is what the geometric kernel does. We know h_d * z_π = λ_π(h_d) · z_π, where λ_π(h_d) is the Hecke eigenvalue, determined by the Satake parameters of π — in other words, essentially a coefficient of the L-function of π. Remember, for an elliptic curve the local factor is 1 − a_p p^{−s} + …, and λ is the a_p part, something like that: the coefficients of that particular L-series. So this gives the decomposition I(d) = Σ_π λ_π(h_d) (z_π, z_π). This is a very nice formula: it decomposes the geometric kernel as a linear combination of the more primitive quantities (z_π, z_π), with coefficients λ_π(h_d). This is very important because eventually I will change my h_d; when I change h_d, the coefficients change. So, roughly speaking, the collection of numbers I(d), for all d, gives you concrete control of the numbers (z_π, z_π): if I know everything about the I(d), then I can say something about each (z_π, z_π). That is the Gross–Zagier-type statement I want. So this is the geometric kernel. Do you want to take a five-minute break? OK, a five-minute break; then I have two more sections to go. So, continuing: I said I would introduce three kernels, and they prove all three kernels are equal to each other — that is the way the theorem is proved. The second one is more analytic: zeta integrals and the analytic kernel. It is just like the proof of Gross–Zagier: you have a geometric quantity that you are trying to show is analytic. This is pretty similar. So we start with G = PGL_2, and we start with a test function f: a smooth function with compact support on G(A).
From f you define K_f(x, y) = Σ_{γ ∈ G(F)} f(x^{−1} γ y), an automorphic function in two variables, defined in a very simple way. It is the kernel representing the regular action of the Hecke algebra on L^2(G(F)\G(A)): if φ is in the space L^2(G(F)\G(A)), then the action is (ρ(f)φ)(x) = ∫_{G(A)} f(y) φ(xy) dy — just right translation. And you can rewrite this: if you unfold the integral, putting the sum over γ in the middle, it is easy to see that this equals ∫_{G(F)\G(A)} K_f(x, y) φ(y) dy. Strictly speaking, I should say something about complex conjugates if we think of the L^2 inner product, but let us forget that detail, since I will not really use it. This implies that we have a spectral decomposition, at least a formal one: K_f(x, y) = Σ_φ (ρ(f)φ)(x) · φ̄(y), the sum over an orthonormal basis {φ}. Formally — because, first of all, it may not be convergent, and in some sense the spectrum is not really a sum; this is just to give the idea. So you have such a decomposition. This is usually called the automorphic kernel, and in fact, in the theory of automorphic forms, these are essentially the only automorphic functions that can be constructed easily; we do not know how to construct others. This is probably the cheapest way to construct automorphic forms: given any f, you construct one. And the interesting thing is that these kernels include all the automorphic forms — the reason being that they are Hecke operators.
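The kernel identity ρ(f)φ = ∫ K_f(·, y) φ(y) dy can be checked in a finite toy model, with a finite abelian group standing in for G(A) and a subgroup standing in for G(F) (all choices here are hypothetical, written additively, with sums replacing integrals):

```python
# Finite toy model of the automorphic kernel.  Hypothetical setup:
# G = Z/12 plays the "adelic group", H = {0, 4, 8} plays the "rational"
# subgroup, and for f on G and an H-invariant phi on G:
#   (R(f) phi)(x) = sum_g f(g) * phi(x + g)        (right translation)
#   K_f(x, y)     = sum_{gamma in H} f(-x + gamma + y)
# Claim (the unfolding): (R(f) phi)(x) = sum_{y in H\G} K_f(x, y) * phi(y).
n = 12
G = range(n)
H = [0, 4, 8]
reps = [0, 1, 2, 3]                       # coset representatives for H\G

f   = {g: (g * g + 1) % 7 for g in G}     # arbitrary test function
phi = {g: (g % 4) ** 2 for g in G}        # H-invariant: depends on g mod 4

def R_f_phi(x):
    return sum(f[g] * phi[(x + g) % n] for g in G)

def K(x, y):
    return sum(f[(-x + gamma + y) % n] for gamma in H)

for x in G:
    assert R_f_phi(x) == sum(K(x, y) * phi[y] for y in reps)
print("kernel identity holds")
```

The unfolding step in the check is exactly the one in the lecture: substitute y = x·g, break the sum over G into cosets of H, and use the H-invariance of φ.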
I mean, every automorphic form basically appears in such a K_f; as you change f, you get all the information you want. So now we introduce the zeta integral — which actually needs to be regularized, but I am not going to talk about that. Define J(f, s) to be the double integral over [A] × [A] — where A is the diagonal torus in PGL_2, the matrices diag(a, 1), and in automorphic form theory [A] always means A(F)\A(A) — of K_f(x, y) |x|^s |y|^s η(y) dx dy. So both x and y are diagonal elements, the integration carries the s-power twist — that is pretty familiar — and I twist the second variable by η. There is no deep reason you could not twist by something else, but twisting by η has its own purpose, as we will see; this is a Jacquet-style relative trace formula period. If I plug in the spectral decomposition, it tells you immediately what I am doing: in terms of the spectral decomposition we get J(f, s) = Σ_φ ( ∫_{[A]} (ρ(f)φ)(x) |x|^s dx ) · ( ∫_{[A]} φ̄(y) |y|^s η(y) dy ). These two factors are the typical Hecke integrals; we typically write the first as Z(ρ(f)φ, s) and the second as Z(φ, η, s), with the extra character. So basically the product is a product of two L-functions: if φ comes from an automorphic representation π, the first is essentially the L-function of π and the second is the L-function of π twisted by η, and their product is exactly the base change of your GL_2 L-function from F to E. That is the reason for the extra η. In fact, for us, we want something much simpler than the general case: we introduce the analytic kernel J_r(d).
That is the analytic kernel. It is defined by taking the derivative d/ds to the r-th power and then evaluating at the center (I write s = 1, but be careful about the normalization) of J_{h_d}(s). Let me recall what h_d is: h_d is a function in C_c^infinity(G(A)), the characteristic function of the set of integral matrices, g in M_2(O_A) intersected with G(A) such that the divisor of det(g) has degree d. Something like that; it is just the characteristic function of that set, a smooth function. So here is the main theorem of the proof, the main identity: I_r(d), the geometric kernel I constructed, which is basically the intersection pairing between the Heegner-Drinfeld cycle and its translate by h_d, equals (log q)^{-r} J_r(d). This makes sense because the intersection number is a rational number, while each derivative in s produces a factor of log q, so you have to divide by (log q)^r. That is the identity they prove. The example we care about is d = 0, because that is the thing we care about. When d = 0, h_0 is the characteristic function of GL_2(O_A), and you can go back and calculate J_{h_0}(s) precisely; it is not really hard. It comes out as a combination of L(eta, 2s) and L(eta, -2s) with explicit powers of q as coefficients. This gives you J_r(0): up to explicit constants it is L(eta, 0) for r = 0, a power of q times the derivative L^{(r)}(eta, 0) for r > 0 even, and 0 otherwise. So you can compute everything, and this gives you the higher Chowla-Selberg formula: if you prove this identity, just take d = 0 and you are done. But as I said, you cannot prove this identity for d = 0; you can only prove it when d is big. That is the interesting part.
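A tiny sanity check of the parity claim above: if J_{h_0}(s) has the shape Z(2s) + Z(-2s) for some series Z (the exact coefficients were not on the board, so the coefficients below are arbitrary stand-ins), then it is even in s, and every odd-order derivative at the center vanishes. A minimal sketch, with hypothetical names:

```python
# Taylor coefficients of s -> Z(2s) + Z(-2s), given those of Z.
# If Z(u) = sum_n c_n u^n, then Z(2s) + Z(-2s) = sum_n c_n (2^n + (-2)^n) s^n,
# and 2^n + (-2)^n = 0 for every odd n: the function is even in s.
def symmetrized_coeffs(z_coeffs):
    return [c * (2 ** n + (-2) ** n) for n, c in enumerate(z_coeffs)]

z = [3.0, -1.5, 0.25, 7.0, -2.0]  # arbitrary stand-in for the series Z
f = symmetrized_coeffs(z)
# every odd Taylor coefficient (hence every odd derivative at s = 0) vanishes
assert all(f[n] == 0 for n in range(1, len(f), 2))
```

This is only the parity bookkeeping behind "J_r(0) vanishes for odd r"; identifying the even-order terms with derivatives of L(eta, s) is the content of the computation in the lecture.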
So actually, even if you only want to prove Chowla-Selberg and do not care about Gross-Zagier, you still end up doing Gross-Zagier anyway, for arbitrary, but big, d; then you use a modularity argument, because the relevant space V_0 is finite dimensional. OK. So now we want to do something more precise. [Question: will you give an idea of how it is proved? You have an intersection number which is decomposed into a sum.] I will explain; that is my next section. So let me talk a little bit about the spectral decomposition, which we already did for I_r(d). Remember, I_r(d) is the sum over pi of lambda_pi(h_d) times (z_pi, z_pi). Let us see why. The point is that this f = h_d is spherical: it is bi-invariant under G(O_A). So we only care about the decomposition of functions on G(F)\G(A)/G(O_A); that is just the space of spherical forms, forms with no level at all, which is pretty easy. This space decomposes as a direct sum, possibly an infinite direct sum, of lines C phi_pi with pi automorphic for PGL_2 and spherical, plus maybe an Eisenstein part, Eisenstein series. In other words, maybe let me just forget about that part, I do not want to make it confusing; this sum could also include some continuous integration inside. Here phi_pi is a unit vector in pi invariant under G(O_A); that invariant line is one dimensional, so you just pick a vector whose L^2 norm is 1 (you have many choices). Then rho(f)phi_pi equals lambda_pi(f) phi_pi, and this lambda_pi is the same Satake parameter, the Hecke eigenvalue, I introduced before.
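In display form, the spherical decomposition just described reads (continuous/Eisenstein part suppressed, as in the lecture):

```latex
% Spherical automorphic forms decompose into Hecke eigenlines:
\[
  \mathcal{A}\bigl(G(F)\backslash G(\mathbb{A})/G(\mathcal{O}_{\mathbb{A}})\bigr)
  \;=\; \bigoplus_{\pi\ \mathrm{spherical}} \mathbb{C}\,\varphi_\pi
  \;\oplus\; (\text{Eisenstein part}),
\]
\[
  \rho(f)\,\varphi_\pi \;=\; \lambda_\pi(f)\,\varphi_\pi ,
  \qquad \|\varphi_\pi\|_{L^2} \;=\; 1 .
\]
```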
And so this gives you K_f(x, y) = sum over pi of lambda_pi(f) phi_pi(x) phi_pi-bar(y). Very interesting, very simple decomposition. So bring everything into the zeta integral you saw at the beginning: in this case you get a very simple expansion, J_f(s) = sum over pi of lambda_pi(f) times L(pi_E, s), the base change to E evaluated at s. So I get this decomposition from everything there. In particular, if I take f = h_d and take the r-th derivative, I get J_r(d) = sum over pi of lambda_pi(h_d) times the r-th derivative L^{(r)}(pi_E, s) at the center. I am a bit confused about my normalization of s: I put 1 here, but be careful, this really means the central point; I have lost track of whether my normalization of the L-function puts the center at 1/2 or 1, because presumably everything I am doing is at the center of automorphic L-functions. So this is great: we have this identity. But recall the other identity, I_r(d) = sum over pi of lambda_pi(h_d) (z_pi, z_pi), and I know these two are equal for arbitrary (large) d. On each side I have many such coefficients, and one can show that having this for all d together is sufficient to conclude the big theorem, the main theorem of Yun and Zhang's work, which I call the higher Gross-Zagier formula: (z_pi, z_pi) equals some constant, which I will not write down, times (log q)^{-r} times the r-th derivative of L(pi_E, s) at the center. OK? As for Christoph's question about what you can conclude: this already gives a very interesting phenomenon. One corollary of GRH, and GRH is known here for cuspidal forms on GL(2) over function fields, is that (z_pi, z_pi) is not 0, in fact (z_pi, z_pi) is bigger than 0, in the following sense.
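Putting the two spectral expansions side by side, the comparison sketched above is (constants hedged; the center written as s = 1 following the normalization remark):

```latex
\[
  I_r(d) \;=\; \sum_{\pi} \lambda_\pi(h_d)\,(z_\pi, z_\pi),
  \qquad
  J_r(d) \;=\; \sum_{\pi} \lambda_\pi(h_d)\,\mathscr{L}^{(r)}(\pi_E, 1),
\]
% matching coefficients over all large d yields the higher Gross--Zagier formula:
\[
  (z_\pi, z_\pi) \;=\; c_\pi\,(\log q)^{-r}\,\mathscr{L}^{(r)}(\pi_E, 1),
\]
with an explicit constant $c_\pi$ not written down in the lecture.
```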
If r is at least the order of vanishing of L(pi_E, s) at the center, then (z_pi, z_pi) is bigger than 0; so in particular z_pi is not 0. That is a very strong consequence, because in algebraic geometry one big difficulty for us is the construction of algebraic cycles. This formula tells you that, apart from the first few of them, which are governed by the BSD conjecture, everything else will be positive. So now the big question for us is: how can we use these cycles to go back and study arithmetic? I think Yun and Zhang got some formula, some information about the compatibility with the BSD conjecture: they can show that the class of z_pi somehow lives in the determinant of the Selmer group, because that is the regulator. That is something they can show, but only at the level of cohomology. One thing to be really careful about: pi here is spherical, and there is no elliptic curve that is spherical, that is, unramified, everywhere. So the only example we can think of is probably a Shimura curve. For a Shimura curve you take the function field analogue, the family of Shimura curves: if you have a Shimura curve over Q ramified at p, with p not dividing the discriminant, this gives a parametrized family, and the L-function of that family can be studied by this method. And one thing I want you to note is that z_pi is a cycle constructed on the moduli stack of shtukas. We do not know how to turn this cycle into a cycle on your abelian variety or anything like that. It is unlike the modular curve case, where we have the modular curve mapping to the elliptic curve, so any point of the modular curve gives you a point of the elliptic curve. Here, no: the relation between an elliptic curve over a function field and the moduli stack is more complicated; there is not simply a Jacobian-type map from one to the other. So you do not expect your elliptic curve to be parametrized by the moduli stack when r is bigger than 1. For r equal to 1 there is something you can do.
For r bigger than 1 there is no known relation. I mean, if you assume the Tate conjecture there is a relation, but the Tate conjecture is not proved, so we do not know the relation between the cycles on the stack with r legs and the elliptic curve over the function field. This is one drawback. OK, so this is how we get a higher Gross-Zagier formula. I introduced the geometric kernel and the analytic kernel, and presumably you want to prove the theorem by comparing them; that is the shape of the statement. But in their paper they do not really compare the two numbers directly. When you do Gross-Zagier, you prove such things by computing local intersection numbers: you compute both sides and find they are the same. There is another approach, proposed by Wei Zhang, called the arithmetic fundamental lemma; it is the same idea, you compute intersection numbers. Here, from the very beginning, you do not compute intersection numbers; you compute the intersection in cohomology. That means you have to use some other tools. So the last thing I want to introduce is some perverse sheaves. The strategy of the proof of the formula is sheaf-theoretic, so I will introduce some perverse sheaves and Lefschetz numbers. OK. So now take d an effective divisor, and require deg d bigger than max(4g - 3, 2g). You see, it already requires d big; you cannot prove the small-d cases directly. So they actually prove this identity, the identity under this assumption first, by using the Lefschetz trace formula. What I really mean is that they interpret both sides as the Lefschetz number of some perverse sheaf on some geometric base. [Question: it means you have a cohomological correspondence on a certain complex, and the decomposition corresponds to the decomposition into local factors, meaning the local intersections at the fixed points?]
[Question, continued: so you have a cohomological correspondence on a certain complex, and then some formula of Verdier type which expresses the Lefschetz number in terms of local terms at the fixed points; is it exactly that decomposition?] Right. But we are not going to build the whole complex of this thing here; I will just write down the sheaf they use on both sides. They show that both sides are the Lefschetz number of one perverse sheaf; that is what I am going to show. The general construction is very complicated, but the last step is not difficult to explain. So, OK, let me write down some notation. Let d be an effective divisor of degree d. They introduce a couple of stacks. First there is X_d; we just fix notation: X_d is the d-th symmetric power of X, Sym^d X = X^d/S_d, so it is just the moduli of effective divisors of degree d. Inside it you have X_d^0, the locus of divisors without multiplicity: divisors D = sum of x_i with x_i not equal to x_j, no multiplicity at all. And then you get another, bigger one, hat-X_d: the stack of pairs (L, s), where L is a line bundle of degree d and s is a section of L. If s is nonzero you get a divisor back, but here we do not require s to be nonzero or not; it is just a stack. You can stratify it by s = 0 and s not equal to 0: it is basically X_d joined with Pic_d(X), the union of two stacks, one open and one closed, because s = 0 is a closed condition, so there is a closed stratum and an open stratum. This is basically the geometrization of the matrix entries of the Hecke algebra. Each element of the Hecke algebra is a matrix with entries x_11, x_12, x_21, x_22; you geometrize the matrix: each entry can be represented by a divisor, and you represent divisors by line bundles with sections. So what I am doing on the blackboard is just a geometrization of the Hecke algebra. OK, so hat-X_d is an Artin stack, not a Deligne-Mumford stack. The dimension equals d.
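In symbols, the stacks just introduced (my transcription of the blackboard notation):

```latex
\[
  X_d := \operatorname{Sym}^d X = X^d/S_d
  \quad (\text{effective divisors of degree } d),
  \qquad
  X_d^0 := \Bigl\{\,\textstyle\sum_i x_i \in X_d : x_i \neq x_j\,\Bigr\},
\]
\[
  \widehat{X}_d := \bigl\{(L, s) : \deg L = d,\ s \in H^0(X, L)\bigr\}
  \;=\; \{s \neq 0\} \,\sqcup\, \{s = 0\}
  \;\cong\; X_d \,\sqcup\, \operatorname{Pic}_d(X),
\]
% the first stratum open, the second closed, since s = 0 is a closed condition.
```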
Oh, no, sorry: the Picard part has dimension g - 1, because as a stack it accounts for the automorphisms, and the sections contribute the rest, so the total is d. So then we talk about the covering. Now I draw the picture here. You have X_d^0 inside X_d, mapping to hat-X_d. Then I have Y_d and Y_d^0: Y_d is Sym^d Y, with a morphism Y_d to X_d, and Y_d^0 sits over X_d^0. And then I have (Y^d)^0 here, with its morphism down; this means the d-fold product with all the diagonals taken out. The composite morphism (Y^d)^0 to X_d^0 is Galois, and the group can easily be written down. You see, first you go down in two steps: one step is the symmetric power, quotient by S_d; the other is the two copies, the double-cover direction in each factor, which is mu_2 to the d-th power. So the Galois group is the semi-direct product: let me write Gamma_d = mu_2^d semi-direct S_d, where S_d is the symmetric group. The central morphism, from Y_d to X_d, is always well defined: it just pushes forward effective divisors; if you want the version without multiplicity, that is the one with the 0. [Question: is the first one well defined?] Oh, I see what you mean. That one is not defined everywhere; it is only well defined after removing the diagonal. The only one that is defined everywhere is the one on the left. Yes, this one is defined, this one is well defined, and that one is not; you are completely correct. So we just remove the diagonals: we define these maps by pulling back and removing the diagonals, the images of the diagonals. OK. So then I will define some sheaves; the covering allows us to define some representations. Let me explain. For each i between 0 and d we can construct a subgroup Gamma_i = mu_2^d semi-direct (S_i times S_{d-i}): the same semi-direct product, but with S_i times S_{d-i}. OK, that is all I am doing; it is just like cutting the matrix.
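A small numerical sketch of the group theory just set up. The index computation [Gamma_d : Gamma_i] = binom(d, i) is just the dimension formula for an induced representation, which is what makes the rho_i defined below have dimension binom(d, i); the function names are mine:

```python
from math import comb, factorial

# Gamma_d = mu_2^d semi-direct S_d has order 2^d * d!, and
# Gamma_i = mu_2^d semi-direct (S_i x S_{d-i}) has order 2^d * i! * (d-i)!.
def order_gamma_d(d):
    return 2 ** d * factorial(d)

def order_gamma_i(d, i):
    return 2 ** d * factorial(i) * factorial(d - i)

# dim Ind_{Gamma_i}^{Gamma_d} chi_i = [Gamma_d : Gamma_i] for a character chi_i
def dim_rho(d, i):
    return order_gamma_d(d) // order_gamma_i(d, i)

d = 5
assert order_gamma_d(d) == 2 ** 5 * 120
# the induced representations rho_0, ..., rho_d have dimensions binom(d, i)
assert [dim_rho(d, i) for i in range(d + 1)] == [comb(d, i) for i in range(d + 1)]
```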
So I just break my d copies of the product into i copies and d - i copies. I define a character chi_i from Gamma_i to {plus or minus 1} by the following: chi_i(epsilon_1, ..., epsilon_d; sigma, tau) is just the product of the first i signs, epsilon_1 times ... times epsilon_i, and trivial on everything else. This is a character; the reason it is invariant under the semi-direct product is that you take the product over the first i factors, and the S_i part only permutes those factors, so nothing changes. So this is a character of Gamma_i. Then I define a representation, an induced representation: rho_i is the induction of chi_i from Gamma_i up to Gamma_d; sorry, my notation with all these gammas is not very good. You can show that rho_i is irreducible. Great. So, since Y_d^0 over X_d^0 is a Gamma_d-covering, this gives you a sheaf: it defines a local system, write it K_i^0, on the open set X_d^0. Now what do you do? You extend it to a sheaf on X_d: write K_i for the middle extension of K_i^0 to X_d; in the notation of the Beilinson-Bernstein-Deligne book this is j_!* applied to K_i^0. I am not going to explain what the middle extension is; it is just the extension such that the local cohomology has the minimal support you can have. So you get this sheaf. And now we define the big sheaf: we define L_r to be the direct sum over j from 0 to d of K_j, each with multiplicity (d - 2j)^r. How can I write this down? The base d - 2j may not be positive, which is what I worried about, but r is a positive even number, so I am OK. So this defines a perverse sheaf on X_d, hence on hat-X_d.
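In symbols (middle extension written j_!* as in Beilinson-Bernstein-Deligne; the multiplicity (d - 2j)^r is how I read the board):

```latex
\[
  \chi_i \colon \Gamma_i = \mu_2^d \rtimes (S_i \times S_{d-i}) \to \{\pm 1\},
  \qquad
  \chi_i(\varepsilon_1,\dots,\varepsilon_d;\,\sigma,\tau)
  \;=\; \prod_{k=1}^{i} \varepsilon_k ,
\]
\[
  \rho_i \;:=\; \operatorname{Ind}_{\Gamma_i}^{\Gamma_d}\chi_i
  \ \rightsquigarrow\
  K_i^0 \ \text{(local system on } X_d^0\text{)},
  \qquad
  K_i \;:=\; j_{!*}\,K_i^0 \ \text{on } X_d ,
\]
\[
  L_r \;:=\; \bigoplus_{j=0}^{d} K_j^{\,\oplus\,(d-2j)^r}
  \qquad (r \text{ even, so } (d-2j)^r \geq 0).
\]
```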
So that is the main object in the proof, the following sheaf. So far everything I defined depends only on d; it depends on the degree of the divisor d, and it is the sum over j from 0 to d of K_j with multiplicity (d - 2j)^r. [Question: what does the multiplicity mean, you take that many copies?] Just the rank: you take (d - 2j)^r copies in the direct sum. [Question: what is the relation between i and j?] No relation; j is just the summation index. [Question: so if j equals d, is it minus d?] If j equals d, the base is -d; but r is even, so it is safe. You get this big sheaf. OK, so now the interesting thing happens. This is the sheaf; it will be constructed on (two copies of) the enlarged divisor stack. Then finally, let d, I mean, go back to our d there, and write A_d for, I am sorry, this is stupid notation, maybe let me write it down here: I mean the affine space. So A_d is the affine scheme over F_q whose F_q-points are Gamma(X, O_X(d)), the global sections of this line bundle; it is a vector space, and I take the associated affine space. Then this A_d maps to hat-X_d times hat-X_d over the base; oh yes, I am sorry, the fiber product is over Pic_d(X). So this is the obvious map: a point of the fiber product means one line bundle with two sections, because each factor is a pair of a line bundle and a section. Here the line bundle is O_X(d), and the two sections come from a, something like a itself and the canonical section. So you are just embedding everything in there. Yes, so once you have this, you can finally define the Lefschetz number, the Lefschetz kernel, or whatever you want to call it.
If you like, call it T_r(d), defined to be the sum over a in A_d(F_q) of the trace of Frobenius on the stalk of L_r at a-bar: trace(Frob_a, (L_r)_{a-bar}). That is the number you get. So Yun and Zhang prove their theorem, their identity, by proving two identities: I_r(d) = T_r(d), and T_r(d) = (log q)^{-r} J_r(d). These are the two things to prove; the comparison cannot be proved directly, so they prove this one and that one. [Question: what is L_r?] L_r is just the L defined up there; you are right, it is just built from the local systems there, I mean, it is a perverse sheaf. I use the subscript r because I want to record the derivative order. So let me see what else I want to say. Well, what do they prove? You can imagine that for each of the two sides they construct a sheaf; they construct two sheaves on this X_d cross X_d, and then they show that these two sheaves, both of them perverse, are isomorphic on some open part. Then by continuity, meaning the middle extension property, they must be isomorphic to each other. So in the end they show everything by exhibiting isomorphisms between these sheaves. That is the end of their proof. As for generalization, if you want to prove this at higher level, you see the difficulty clearly: the method needs the moduli stack of shtukas to be smooth over F_q, it needs an integral model. Right now, I think they recently extended the result to the case of ramification with square-free conductor. So you allow a level structure, and with square-free conductor they can do it. But I presume square-free is a crucial condition, because if the conductor is not square-free we do not have a modular interpretation of the moduli stack. Square-free is the only case you can do; it is like a Shimura curve, for example.
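To recap the proof structure just described, the trace sum and the two identities:

```latex
\[
  T_r(d) \;:=\; \sum_{a \,\in\, A_d(\mathbb{F}_q)}
    \operatorname{tr}\bigl(\operatorname{Frob}_a,\ (L_r)_{\bar a}\bigr),
\]
\[
  I_r(d) \;=\; T_r(d),
  \qquad
  T_r(d) \;=\; (\log q)^{-r}\, J_r(d),
\]
% together these give the main identity I_r(d) = (log q)^{-r} J_r(d) for all large d.
```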
In the classical case, when a Shimura curve has bad reduction at p with p exactly dividing the level, that is, p square-free in the level, we know what is going on. When p is not square-free, we know the generic fiber, we know it has a covering, but we do not know how to extend everything to the special fiber; we do not have a modular interpretation. I guess one can probably figure out, along the lines of what is in the proof, what each component looks like, but one does not know the multiplicities, so one cannot really construct a semi-stable model of this kind of thing. OK, so maybe I should stop here. [Question: your I_r(d) and T_r(d), are they Lefschetz fixed-point expressions?] Yes, I_r(d) should be a Lefschetz fixed-point expression, and the other one as well. For J_r(d): if you write it all out, there is this eta and so on; the eta can be transferred to a sheaf if you write everything down. That is why I call it geometrization. Actually that side is even clearer, because in the summation over matrices you can translate the entries of the matrices into sections of line bundles; you have four line bundles there, and you can write the double integral as a sum of trace numbers over the points of a stack. That one is actually the clearest; if you read their paper, it is probably done in the first few pages. [Question: but in a Lefschetz fixed-point formula like this, you have fixed points, here A_d(F_q), and there should be local terms; you need them all to equal 1. In some sense there is a statement you have to prove: either you resolve the problem using Deligne's conjecture, which was proved by Fujiwara, or you prove directly that these local terms are 1. So what do they prove exactly in this case? Do they use Deligne's conjecture, twisting by a high power of the Frobenius, as proved by Fujiwara?]
No, they use Varshavsky's treatment, not Deligne's conjecture. [But he also proved it.] Yeah, definitely, he proved it. They use Varshavsky's result, but they probably do not need the really serious result of Fujiwara, because I think in the case they use, everything is proper: when d is big, the situation involves proper morphisms of finite type. So probably in that case they do not need Fujiwara's result; even the classical case, even SGA 4 1/2, should be enough. I think something goes wrong when it is not proper; that is the source of the problem. OK, thank you.