So let me recap what happened last time. If S is an affine derived scheme, and we are working under the almost finite type assumption, we said that there is the category QCoh(S), and that it can be recovered from its category of compact objects, the latter being Perf(S). Then we also introduced another category; let me write it here. There were some statements made at the end of last time, and it wasn't clear whether they are true, so I'll repeat what's relevant. IndCoh(S) is the ind-completion of the category Coh(S) of coherent complexes. We always have a functor that I denote Psi: it is the ind-extension of the embedding of Coh into QCoh. And if S is what we call eventually coconnective, which just means that the structure sheaf has finitely many cohomologies, then the functor Psi has a left adjoint. This left adjoint comes from the embedding of Perf into Coh: the assumption tells you that the structure sheaf is coherent, and so Perf embeds into Coh. So that was one thing we said. Additionally, we said that... no, this part is completely evident: it's easy to see that the objects of Perf are indeed compact, and the only thing to check is that QCoh is compactly generated; but that's easy, it's generated by O. Yes, that was fine. (Someone mentioned the paper of Bondal and Van den Bergh; I looked, and it didn't seem to be in the context we are speaking about. We don't need any of this: we'll do everything for affine schemes and then cover everything by affines, so we don't need any of the non-trivial facts about non-affine schemes.) So now, let's write it here: if S is eventually coconnective... sorry, quasi-smooth.
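Schematically, the recap above can be summarized as follows (a sketch in the lecture's notation; the subscript S is only for emphasis):

```latex
\[
\mathrm{QCoh}(S)\simeq \mathrm{Ind}\bigl(\mathrm{Perf}(S)\bigr),
\qquad
\mathrm{IndCoh}(S):=\mathrm{Ind}\bigl(\mathrm{Coh}(S)\bigr),
\]
\[
\Psi_S\colon \mathrm{IndCoh}(S)\longrightarrow \mathrm{QCoh}(S)
\quad\text{(ind-extension of }\mathrm{Coh}(S)\hookrightarrow \mathrm{QCoh}(S)\text{)},
\]
and, when $S$ is eventually coconnective (so that $\mathcal{O}_S$ is coherent and
$\mathrm{Perf}(S)\subset \mathrm{Coh}(S)$),
\[
\Xi_S\colon \mathrm{QCoh}(S)\rightleftarrows \mathrm{IndCoh}(S)\,\colon\Psi_S,
\qquad \Xi_S\dashv \Psi_S.
\]
```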
So we can introduce intermediate categories: for every N which is a conical Zariski-closed subset inside the thing that we called Sing(S), we can define the category IndCoh_N(S), which was by definition Ind(Coh_N(S)). This sits in between QCoh(S) and IndCoh(S), related to them by pairs of adjoint functors, and we're going to use these functors today. Okay, that was one thing we did; that was roughly the first part of the lecture. In the second part of the lecture we talked about prestacks and stacks, so let me give you a digest of what happened. (A question: doesn't each inclusion come with more adjoint functors? Yes, there are three, but the third is a bad guy: it's not continuous, and we don't consider such things unless really necessary. Functors that don't commute with arbitrary direct sums are, for reasons I will actually mention in the course of the lecture, bad guys for the kind of operations we will be performing, so I try to stay away from them unless I really have to.) Okay, so in the second part of the lecture, we said that if Y is a prestack, we can attach to it a category QCoh(Y) of quasi-coherent sheaves by taking the limit over affines that map to Y. Over this category we don't have much control in general. We can define perfect complexes the same way, as a limit of Perf's. What is no longer true is that perfect objects are compact in there: that fails for arbitrary prestacks. We also said that if Y is an algebraic stack, then instead of taking all affines that map to Y, it's enough to take those affines that map smoothly. And if Y is algebraic, you can also assign to it IndCoh(Y) by the same procedure: the limit, over affine schemes that map smoothly to Y, of their IndCoh's. Moreover, to answer a question that was asked: IndCoh(Y) is actually equivalent to Ind(Coh(Y)) if Y is quasi-compact with affine stabilizers.
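In formulas, the categories just recalled are (a sketch; here N is a conical Zariski-closed subset of Sing(S)):

```latex
\[
\mathrm{IndCoh}_{N}(S):=\mathrm{Ind}\bigl(\mathrm{Coh}_{N}(S)\bigr),
\qquad
\mathrm{QCoh}(S)\simeq \mathrm{IndCoh}_{\{0\}}(S)
\hookrightarrow \mathrm{IndCoh}_{N}(S)
\hookrightarrow \mathrm{IndCoh}(S),
\]
\[
\mathrm{QCoh}(\mathcal{Y}):=\lim_{\substack{S\to \mathcal{Y}\\ S\ \mathrm{affine}}} \mathrm{QCoh}(S),
\qquad
\mathrm{IndCoh}(\mathcal{Y}):=\lim_{\substack{S\to \mathcal{Y}\ \mathrm{smooth}\\ S\ \mathrm{affine}}} \mathrm{IndCoh}(S)
\quad(\mathcal{Y}\ \text{algebraic}).
\]
```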
And if Y is quasi-smooth, then for every N inside Sing(Y) you can again attach the category IndCoh_N(Y) by the same procedure: the limit over affine schemes that map smoothly to Y, all of which are quasi-smooth themselves because Y was quasi-smooth. And again, a comment on what I was asked about earlier: the objects of Coh_N are compact in IndCoh_N, but they are not known to generate in general. If N is all of Sing(S), then IndCoh_N equals IndCoh and compact generation follows from the statement above; but if N is, say, zero, it is not known in general. (Can you explain? It means that you really need plenty of coherent sheaves with the given singular support, which is perhaps not clear. Well, it's easy to construct them; the issue is generation. In the first statement, the one for nice stacks, we in fact prove it in somewhat greater generality: without the derived structure, for usual algebraic stacks. One can approximate quasi-coherent sheaves by coherent ones; in the Noetherian situation this is quite easy to do, and everything here is Noetherian. So does the comparison of IndCoh and Ind(Coh) go much beyond this approximation argument? Yes: there is one essential thing that one needs to prove. I wasn't planning to go into it in this talk, but let me just say once that it's a kind of cool theorem: under this assumption, what you have to prove is that the functor of global sections is continuous, i.e. that O is compact. And it's not obvious. It's not just the basic properties of sheaf cohomology: we are on stacks, and there it's not obvious, because cohomology is a limit in general, so you have to commute a limit with a colimit, and that does not come for free, because of the unbounded guys.)
Yeah, so the problem is at minus infinity: you have to bound things. But more is true: the global sections functor is actually of bounded cohomological dimension. (Is there a counterexample without affine stabilizers? It's in my paper with Drinfeld; I'm blanking on whether we give one. I believe there is a counterexample when the stabilizers are not affine; I'm forgetting what the counterexample is, I'd have to look it up at my leisure.) Okay, so that was a somewhat technical discussion; let me just say that this is what we know. But for Y equal to LocSys, it will be true that for any N, IndCoh_N(Y) is compactly generated. (And do you know that the compact objects are exactly Coh_N(Y) in general? Yes, in general: we always know that these are the compacts.) Maybe let me say something about compact generation. I can imagine that for many people all this discussion about compacts and non-compacts is boring, but for somebody who's lived with it for a long time it's very... I wake up at night and think about compactness of objects. (A question: usually one only needs cohomology of bounded things, and here it seems essentially unbounded; is there a way to set the theory up purely in terms of bounded things?) Kind of no, and that's why I took the ind-completion, and I will give more evidence for this. As I said in my previous lecture, for a compactly generated category, the datum of C is equivalent to the datum of its compact objects. On the one hand, compact objects retain all the information. But once you start considering functors: a left adjoint may send compacts to compacts, while its right adjoint will not normally send compacts to compacts.
And it becomes very inconvenient to restrict yourself to the subcategory of compact objects. So for that reason I really want to ind-complete. It's technical, but remember what I said about negative numbers: it's like 5 minus 7, and you don't want to keep saying "5 minus 7", you want to say "minus 2". Passing to the ind-completion is like saying: okay, I'll allow the negative numbers and just work with them. Okay, so now the time has come to discuss this guy, LocSys. So what is it? Before I give the definition of LocSys, let me first define Bun_G. What I'll say now is not at all surprising, it's kind of routine, but let me go through it nonetheless. More generally, suppose I have a target stack Y and a source stack X. I can consider the prestack of maps from X to Y; by definition, a map from S to the mapping stack is a map from S times X to Y. No surprises here, but this is partly why one wants prestacks: in general these guys will not be algebraic stacks or anything, but it's just very convenient to have them, because even though they're arbitrary prestacks, you can still calculate tangent and cotangent spaces to them, and you don't want to discard them. They're convenient to have. So Bun_G is by definition the mapping stack from whatever you're taking Bun_G of to the classifying stack of the group. Okay, that's what Bun_G is. Now let me say what LocSys is. For this it's convenient to introduce yet another guy. Again, no surprises, but it's very convenient language. If you have any prestack X (in this case my X will be the curve I'm working with), I'll attach to it another prestack called X_dR. Namely, a map from a test scheme S to X_dR is by definition a map from S_red to X. So this realizes Grothendieck's idea of crystals. One application of this definition is the following.
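The two functor-of-points definitions just given read (a sketch; S ranges over test affine schemes):

```latex
\[
\mathrm{Hom}\bigl(S,\ \mathrm{Maps}(X,\mathcal{Y})\bigr):=\mathrm{Hom}\bigl(S\times X,\ \mathcal{Y}\bigr),
\qquad
\mathrm{Bun}_G:=\mathrm{Maps}(X,\ \mathrm{pt}/G),
\]
\[
\mathrm{Hom}\bigl(S,\ X_{\mathrm{dR}}\bigr):=\mathrm{Hom}\bigl(S_{\mathrm{red}},\ X\bigr).
\]
```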
We will not really go into it right now, but let me just mention it. We defined QCoh for an arbitrary prestack; in particular, we can take it for X_dR. The claim is that this is nothing else but D-modules. Whether this is a definition or a theorem with content depends on whether you already know what D-modules are: we know what D-modules are on smooth schemes, in which case it becomes a theorem. But if X is a singular scheme, how do you define D-modules? You go through the nightmare of first defining them for affines via Kashiwara's theorem and then gluing. Don't do it: define them this way, and then prove Kashiwara's theorem in this context. So this gives you the one-shot definition. (Is it the right category? Yeah, I'll say that in a moment: it's the category of quasi-coherent D-modules on the whole singular variety.) So suppose you just want to work classically, with X a singular algebraic variety, say. Here is the statement. Assume that X is affine; then one does Zariski gluing. So, to answer the first question: if X is affine, D-mod(X) is the full subcategory of D-mod(X~) consisting of objects with support on X, where X is closed embedded in X~ and X~ is smooth. That's how you define it via Kashiwara's theorem. Now the question is: what is D-modules on X~? Well, X~ is affine and smooth, so there is the ring of differential operators, and we take the derived category of all modules over this ring. (But you could also consider it in terms of sheaves: quasi-coherent sheaves of modules, or arbitrary ones; these should be equivalent. I don't even want to consider arbitrary sheaves, I never do. So again, in the affine case, I take the ring and take modules.)
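The one-shot definition and the classical Kashiwara-style one, side by side (a sketch; here X~ is a smooth affine scheme containing X as a closed subscheme):

```latex
\[
\mathrm{D\text{-}mod}(X):=\mathrm{QCoh}\bigl(X_{\mathrm{dR}}\bigr),
\]
\[
\mathrm{D\text{-}mod}(X)\simeq
\bigl\{\,M\in \mathrm{D\text{-}mod}(\widetilde{X})\ \big|\ \mathrm{supp}(M)\subset X \,\bigr\},
\qquad
\mathrm{D\text{-}mod}(\widetilde{X}):=\Gamma(\widetilde{X},\mathcal{D}_{\widetilde{X}})\text{-}\mathrm{mod}.
\]
```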
(But in the usual algebraic setting one proves that the sheaf-theoretic version is equivalent. For some constructions it's important, because you can use the resolutions adapted to it; you can do it in different ways. I don't want to do that: that's how I define it. In my personal experience, in my lifetime, I've never seen a use in algebraic geometry for non-quasi-coherent things. Okay, so that's my official definition. Is it equivalent? I don't know in general; it's definitely equivalent on D-plus. Things might go wrong on D-minus; I just don't know. And then you glue over a Zariski cover, using the fancy language. Yeah, and then I glue it like this.) So this is the de Rham prestack, and that was one application. And now, LocSys of anything: LocSys_G is the mapping stack from X_dR to pt/G. This is the official definition of LocSys. (So what exactly is a D-module on a stack? For example, on a reasonable stack, the quotient of a scheme by a group? Okay, first let me give a definition. There are two definitions, but they are tautologically equivalent. I'm answering Laumon's question. If Y is a stack, we already have two definitions on the blackboard: on the one hand, D-modules on Y can be defined as quasi-coherent sheaves on Y_dR; that's definition one. Or you can glue: define it as the limit, over schemes S mapping to Y, of D-mod(S). Those two are tautologically equivalent. By the way, Y doesn't have to be a stack here: Y can be an arbitrary prestack, in which case you take arbitrary maps; if Y is smooth, it's enough to consider smooth maps. So these are the two definitions. Now, in some particular cases, if Y is the quotient of a scheme by a group H, one can give a slightly more explicit description.)
So it's what you would call the H-equivariant derived category on Z. Okay, so this is my LocSys. (Pardon me? No, there is no ring D. So what about operations like tensor product over D? There is no such thing as tensor product over D; there's the operation of tensor product over O. QCoh of anything is a tensor category; over D is something else. De Rham cohomology will go there, but it's not a tensor structure.) All right. So one proves that if X is proper, then LocSys_G is an algebraic stack. Let me not go into the proof; I just cannot do everything. There is a section in my paper with Arinkin where we do these things in detail. (Is this compatible with the analytic picture, maps from pi_1? Well, it's a stack, and the points are what you think they are. I'm just saying it this way to show how to set up the definitions so that everything makes sense within derived algebraic geometry: I want LocSys to be a functor. Does X need to be a proper derived scheme? It doesn't matter, because once I take X_dR, X_dR doesn't feel any of the derived structure. Yes, I work in characteristic zero. Yes, it's my paper with Arinkin; I think it's either section 10 or 11, I forget with the renumbering.) So this thing is genuinely derived. This was Laumon's question earlier: the construction produces an object of derived algebraic geometry; you prove that it's an algebraic stack, but you can ask whether it is classical or not. Namely, when you smoothly cover it by an affine scheme, does the structure sheaf of that affine scheme have lower cohomologies? The answer is that in general it does. Sometimes it does not: if the group is semisimple and X is a curve of genus greater than one, it happens to be classical. But in general it's derived, and you will need these derived guys, because in the process of the proof you use the LocSys stack for a Borel or for the maximal torus, where it is necessarily derived. (But not too badly?)
It's always quasi-smooth, which is what I'm going to say now; we'll see that in a moment. It's quasi-smooth, but it's derived: you cannot get away with classical stuff. Yes. So now let me state a proposition: if X is a smooth proper curve, then LocSys_G is quasi-smooth. Let me give you a token calculation of how you do these things. Well, for this to even make sense you first have to prove that it's an algebraic stack, but that you prove by some very general arguments; this you prove by a very concrete argument: you compute something. What do you compute? You have to compute the tangent spaces at points and see which cohomologies you get. We actually gave a rigorous definition, within derived algebraic geometry, of the tangent space. So let me first do it in the general case of maps, where X and Y are just anything, but within the locally almost of finite type setting. Even in this generality you can compute tangent spaces completely. If you fix a point sigma, then the tangent space will be this: the global sections over X of the pullback of the tangent complex of Y. If you think about it, this is exactly what you would expect, and this is what you get within derived algebraic geometry. So again, no surprises; the point is just that it makes rigorous sense. In particular (and it's a great exercise: if you want to acquire practical knowledge of how derived algebraic geometry works, just do it), if Y is pt/G, we obtain the following description for Bun_G: it's Gamma of... Okay, but now let's remember: what is the tangent space? I take this point of this stack, so it makes sense to talk about the tangent space to the stack at this point. Who knows what it is, apart from those of you who already know? Maxim, you are excluded. But who knows?
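The tangent space formula just stated, for a point sigma of the mapping prestack:

```latex
\[
T_{\sigma}\,\mathrm{Maps}(X,\mathcal{Y})\;\simeq\;\Gamma\bigl(X,\ \sigma^{*}\,T_{\mathcal{Y}}\bigr),
\qquad \sigma\in \mathrm{Maps}(X,\mathcal{Y}).
\]
```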
Shifted by one: you take the Lie algebra, twist it by your bundle, and shift homologically by one. And similarly, now take X to be X_dR: we still map to pt/G, but from X_dR, and the tangent to LocSys, again for any X, is what's known as de Rham cohomology. So in particular, let me prove the proposition. You've got this complex, and all you have to see is that it doesn't have cohomologies higher than number one. But here we're dealing with a local system on a curve, shifted homologically by one. On a curve you have cohomologies up to degree two; when you shift by one, you get up to degree one, period. That's quasi-smoothness. So it's just the fact that the de Rham cohomology of a local system on a curve lives in degrees from zero up to two. I did this just as a demonstration: in derived algebraic geometry, some things you prove by waving your hands, and some things you prove in a very concrete and hands-on way. At the end of the day, when you have to compute something concrete, it comes down to something very, very concrete. (Yes, but that does not give you an easy way to see that, for example, if the genus is bigger than one then it's a locally complete intersection or something? No, that you also prove, but it doesn't give that; it doesn't give the classicality. The fact that it's classical you prove some other way, I think using the Hitchin map or something; it's in that paper. No, but in some sense it's not completely miraculous. If you really want to see more precisely what it means: classicality of prestacks is usually hard. You arrive at something like LocSys a priori as a derived object, and then you ask whether it is classical, and these questions are harder.)
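Specializing, the calculation just sketched is (with g_P, resp. g_sigma, the adjoint twist of the Lie algebra; X a smooth proper curve):

```latex
\[
T_{P}\,\mathrm{Bun}_G\simeq \Gamma\bigl(X,\ \mathfrak{g}_{P}\bigr)[1],
\qquad
T_{\sigma}\,\mathrm{LocSys}_G\simeq \Gamma_{\mathrm{dR}}\bigl(X,\ \mathfrak{g}_{\sigma}\bigr)[1],
\]
\[
H^{i}_{\mathrm{dR}}(X,\mathfrak{g}_{\sigma})\neq 0 \ \text{only for}\ 0\le i\le 2
\ \Longrightarrow\
H^{i}\bigl(T_{\sigma}\,\mathrm{LocSys}_G\bigr)\neq 0 \ \text{only for}\ -1\le i\le 1,
\]
which is exactly the quasi-smoothness.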
(All right, but in the end your goal is to prove a theorem that D-modules on Bun_G and ind-coherent sheaves on LocSys are equivalent, and neither of those two categories is concrete in some sense... Somehow it's fine. No, I understand, but at some point, how much does it mean? Okay, let's see.) In particular, I can now describe what Sing(LocSys) is. What did we say it was, for any quasi-smooth prestack? It's a point sigma of your prestack together with an element of H^{-1} of the cotangent space. So let's understand this H^{-1} of the cotangent. We take H^1 of the tangent, and that was what? That's H^2 of my curve with coefficients in the local system obtained by twisting the Lie algebra by my local system. Now I want to dualize; let me lift this to the cotangent. Well, I have Verdier duality on the curve, and what this becomes is H^0 with coefficients in the dual Lie algebra twisted by the same sigma. But now I'm going to use the Killing form. (What if the group is not reductive? You're very right: this is where I stop if the group is not necessarily reductive, and we'll actually be dealing with parabolics in a moment. So this is where I stop for a non-reductive group; if the group is reductive, I use the Killing form to identify the dual with g. Thank you, Maxim. It's fine, because for what we want to do it will only matter up to homothety, up to dilations.) Okay, and why did I perform this manipulation? For the following reason. I just want to draw your attention to what this thing is. If you have a local system, it has a group of automorphisms, and this H^0 is just its Lie algebra in the classical sense. So I want to write the pair as (sigma, A), where A, by slight abuse of language, I call an endomorphism of sigma.
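The dualization chain just performed (Verdier duality on the curve, then the Killing form for G reductive):

```latex
\[
H^{-1}\bigl(T^{*}_{\sigma}\bigr)
\simeq \bigl(H^{1}(T_{\sigma})\bigr)^{\vee}
\simeq \bigl(H^{2}_{\mathrm{dR}}(X,\mathfrak{g}_{\sigma})\bigr)^{\vee}
\simeq H^{0}_{\mathrm{dR}}\bigl(X,\ \mathfrak{g}^{*}_{\sigma}\bigr)
\;\simeq\; H^{0}_{\mathrm{dR}}\bigl(X,\ \mathfrak{g}_{\sigma}\bigr),
\]
the last isomorphism via the Killing form; an element A of this H^0 is a horizontal, i.e. infinitesimal, automorphism of sigma.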
By which I mean an element of the Lie algebra of the group of automorphisms: an infinitesimal automorphism. I just want to call it that. (So you took H^1 of g_sigma, and this H^0 of the dual twist? Yes: the passage from one line to the other is just dualization. These are dual to each other; by definition this is the dual of that, and it happens to be the dual of the other as well. But in particular, if you take this H^0, can you put it in a family in some way? Is there a total space? Yeah, but it's not obvious from the beginning. Correct, indeed; you're absolutely right: the top cohomologies are easy to put in a family, but not these, so it is a bit mysterious.) Well, the totality of these pairs is my Sing. Okay, and now I can give the crucial definition. Right, notation: Nilp is the closed subset of Sing(LocSys) where we require A to be nilpotent. You can understand this in many different ways: for example, A is a section of this bundle, so it has a value at every point, and it's enough to require that it be nilpotent at just one point, because it's horizontal with respect to the connection. Or it's globally nilpotent; however you want to think about it. (I don't give this definition for non-reductive groups; we'll see non-reductive groups in a moment. I give the definition for the purposes of Langlands.) And finally, geometric Langlands is the following conjecture: that D-mod(Bun_G), which we actually defined, so we actually know what we're talking about, is supposed to be equivalent to IndCoh_Nilp of LocSys for the dual group. Okay, so I'll give you an example of how this works, and of where this is actually a theorem.
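In this notation, the definition and the conjecture just stated read:

```latex
\[
\mathrm{Nilp}:=\bigl\{(\sigma,A)\in \mathrm{Sing}\bigl(\mathrm{LocSys}_{\check G}\bigr)\ \big|\ A\ \text{nilpotent}\bigr\},
\]
\[
\mathrm{D\text{-}mod}\bigl(\mathrm{Bun}_G\bigr)\;\simeq\;\mathrm{IndCoh}_{\mathrm{Nilp}}\bigl(\mathrm{LocSys}_{\check G}\bigr).
\]
```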
(You're not necessarily on a curve? No: on a curve, yeah. Langlands has to do with curves.) (But there is more to say, because if you don't characterize the equivalence... Oh yeah, of course: the two sides could be equivalent for accidental reasons. That's what I started with in my first talk. I wrote a hundred-page paper where I just listed all the things you want from this equivalence, and in my first talk I said I'd only mention one of them, namely compatibility with parabolic induction.) (Is the equivalence given by a kernel? We work in the context of cocomplete categories and continuous functors, and in this context any such functor is given by a kernel. So it is given by a kernel; but to understand this kernel is as difficult as to understand the conjecture itself. Do you expect the kernel to be nice? I don't know what "nice" means. There is definitely a kernel; for example, you can restrict it to open substacks on both sides. But this kernel is not cohomologically bounded, so I don't know in what sense it is nice. It's a kernel. Drinfeld has thought much more about this kernel: he was expecting it to have very nice properties and wanted to just use them to prove the conjecture; I don't know to what extent that works. But to make the link with what was done, for example, for GL(n), where there is a kernel... yes, it would be nice to have a link. That link can be characterized differently: that link is Whittaker compatibility.) So let me just say that there is a bunch of things you want this equivalence to satisfy. Since I want to go in the direction of singular support, let me not go through them; the thing that is really relevant to singular support is this parabolic induction.
Somehow all the other stuff is, in a sense, more interesting, because it deals with the cuspidal part of the category; but the cuspidal part doesn't see the singular support problem. So the core, the meat of the singular support story... okay, let me just say it. I think I said in my first lecture that, as with automorphic forms, cuspidal automorphic forms are the most mysterious, but they don't present analytical problems, and here we're dealing with the analysis. That's why we're dealing with Eisenstein series: we just want these guys to converge, somehow. Okay, so let me mention a few things. Remember that inside IndCoh_Nilp we have QCoh sitting there: QCoh is IndCoh_N with N equal to zero (its compact objects are the perfect complexes). Here the singular support is required to lie in Nilp, but I can take the subcategory of objects with singular support zero, and that's our QCoh, as a full subcategory of IndCoh_Nilp. So I'm considering this embedding now, and under the conjecture there corresponds to it a full subcategory of D-mod(Bun_G); I'll call it "temp", for tempered. Assuming the conjecture, you can intrinsically characterize what this subcategory is: if the conjecture is true, you can say what this will go to. You do it in terms of Hecke functors. I will not do it right now; I'll be very happy to do it in question time. It's kind of cool that you can say geometrically what temperedness means. (And on the automorphic side, is there anything with the nilpotent cone, the characteristic variety in the D-module sense? You can try restricting the singular support there and see what matches. That I don't know. It's a great question; I just don't know.)
So let me mention another thing that is cool in the story. As I said, this kind of theorem holds for LocSys: this IndCoh is Ind of Coh, so the compact objects are Coh_Nilp. Now, as we shall see in the sequel (rather, as I shall state in the sequel), on an arbitrary quasi-smooth stack, if you take the category of coherent complexes with a given singular support condition, there is a Serre duality equivalence: I get the same category, but with an "op". Well, if we believe this conjecture, this is supposed to go over to D-modules on Bun_G: there must be an equivalence between the category of compact D-modules and its opposite. Can anyone guess what this equivalence is? Make a wild guess: take the compact objects of the category of D-modules and try to map it contravariantly, and involutively, to itself. (Left modules to right modules? That has to do with left versus right D-modules, but we're talking about something else. Let me take some other answer, because what you're trying to suggest has a name, a mathematician's name attached to it. Who knows what I mean? Nobody? So: if I take D-modules on an algebraic variety, those which are bounded with coherent cohomologies, I claim this category has an involution. Yes, it's called Verdier duality. But you don't have the ring D here... What do you mean? Verdier duality is something intrinsically defined. One way to define it in the classical setting is RHom into D, but in the context in which I defined D-modules, Verdier duality still makes sense. Even on a stack? Yes. Let me just think for a second whether I can do it really easily. Do you need the business of going from left to right modules? No, no, you don't need any of that; just give me one moment.)
I'm just trying to think if I can do it really easily. Okay, so, just as an aside: Hom from D(F1) to F2 is the de Rham cohomology, over Y, of the tensor of F1 and F2; that's what Verdier duality is in D-mod(Y). Okay, but that's not the problem. So you want to say that the duality is Verdier duality, but this happens to be completely wrong. Let me just say one word: it is Verdier duality only modulo a very cool correction that was invented by Drinfeld. Here is what happens. Let me describe how compact objects of D-mod(Bun_G) look; here we're really getting into the analysis. The problem is that Bun_G is not quasi-compact. Well, we know this from automorphic forms: the fundamental domain. So compact objects in D-mod(Bun_G) look as follows: you take a quasi-compact open, call it U, you take a compact object F_U on U, and you shriek-extend. Now there is a problem: the shriek extension of D-modules in the non-holonomic setting is not always defined; it may or may not be defined. So there is a theorem that there are enough such U's and F_U's for which these extensions are defined, and that they compactly generate the category. It's a result with some content. (Does this mean "supported on U" in some sense? Well, U is open; it means extension by zero from U.) It's easy to show that on any non-quasi-compact algebraic stack, compact objects must be of this form: the star-fibers outside of a quasi-compact open must vanish; you necessarily have that. It's not true that for an arbitrary non-quasi-compact algebraic stack the category is compactly generated: you might not have enough of these guys. But it so happens, specifically for Bun_G, that reduction theory tells you there are many of them, and the category is actually compactly generated. (You said that j-lower-shriek is not necessarily defined? Yes: as you know, for non-holonomic D-modules, j_! may not be defined. You mean the shriek in the D-module-theoretic sense? Yes.)
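Schematically: the pairing characterizing Verdier duality, and the shape of the compact generators (a sketch; the tensor operation on D-modules is left unspecified here, as in the lecture):

```latex
\[
\mathrm{Hom}\bigl(\mathbb{D}\,F_{1},\ F_{2}\bigr)\;\simeq\;
\Gamma_{\mathrm{dR}}\bigl(Y,\ F_{1}\otimes F_{2}\bigr),
\]
\[
\mathrm{D\text{-}mod}(\mathrm{Bun}_G)^{c}
=\Bigl\{\, j_{!}\,F_{U}\ \Big|\ j\colon U\hookrightarrow \mathrm{Bun}_G\ \text{quasi-compact open},\ F_{U}\ \text{compact},\ j_{!}F_{U}\ \text{defined}\,\Bigr\}.
\]
```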
(So usually it's defined as the dual of j_* of the dual? Exactly: the left adjoint may not be defined. j_! is the left adjoint to restriction, and this left adjoint may not be defined. The left adjoint in the quasi-coherent sense? Again, I am in my setting, but yes, it is in the quasi-coherent setting if you wish. And so there is no good local description, because it is not always...? Yes. I'm saying it's not defined for every U and F_U, but sometimes it is defined. For cuspidals it will be fine.) Let me comment on this: I'll say one thing and then go back to cuspidals. A parallel phenomenon to "not always defined" is the following: you can take j_*, and if F_U was compact, j_*(F_U) may no longer be compact; in fact, it may no longer be coherent. What was coherent becomes merely quasi-coherent: when you do j_*, you obviously lose finite generation. It just so happens that Bun_G is structured in such a way that there are many U's for which you do not lose finite generation. But whatever happens, if you apply Verdier duality to j_!(F_U), what you get is j_* of the Verdier dual of F_U. And this is a problem if you believe in anything like the naive duality statement: j_!(F_U) was compact, you apply Verdier duality, and the result is no longer compact. These j-shriek extensions that are compact go to things that are no longer compact. So the naive statement just has no chance of being true: Verdier duality does not send compact guys to compact guys. So what Drinfeld invented: he introduced another explicit integral operator, which he called miraculous duality. (Is it given by adjunction? No, it's not by adjunction; it's given by a kernel, and it has an analogue in the function-theoretic setting.) He introduced a functor, given by an explicit kernel, that sends star extensions to shriek extensions. It's very cool stuff.
That said, let me go back to cuspidals. For cuspidal objects one proves the following: there exists a particular open subset such that anything cuspidal is what's called clean, meaning it is both the *- and the !-extension. Also, this miraculous duality acts as the identity on cuspidals. So it does nothing to cuspidals, but it does something very weird to Eisenstein series. Namely—call the miraculous functor Mir—there are two versions of the Eisenstein functors, one with a shriek and another with a star, and Mir sends Eis_* to Eis_! but for the opposite parabolic. So this is Mir. He introduced this magic trick. All right. [And so the conclusion is that which functor corresponds to...?] The composition of Verdier duality and Mir. [What is Mir?] There are two versions of the Eisenstein functors, because you can pull back with a star and push forward with a shriek, or pull back with a shriek and push forward with a star. Mir turns one version of Eis into the other version, but for the opposite parabolic. It's a very weird version of the functional equation, if you wish. Okay, so I guess I'm still running behind time; I still haven't covered the material of my first talk. I don't know what's going on. So let's have a break. No, it was supposed to contain everything, if I had covered all that I wanted to cover. Okay, next hour I'll speak twice as fast, okay? Joking. Singular support started because there was a bug in the naive formulation, and this bug was most visible with Eisenstein series. Let me try to explain how this bug gets corrected via this theory. I will give exactly half of the explanation, and the other half will be next week. So we are considering LocSys_P mapping to LocSys_M; I call this map q^spec. So we consider the following functor, the spectral Eisenstein series: we pull back and then push forward. Okay, and here there's a dangerous piece of notation, this upper shriek.
It shouldn't be taken for granted, but please do for now; we'll spend some time next week explaining what it is. The !-pullback is a theory with content, so we'll talk about it. What we want is for this to send IndCoh_Nilp(LocSys_M) to the right place. Well, to do this I have to talk a little bit about the Sing of this guy. So how does Sing(LocSys_P) look? Well, a point of it is a local system with respect to P—I'll call it σ_P—together with an element A. A is supposed to be an element of H^0 of p* twisted by σ_P, but of course, as Maxime noted, there's no Killing form on P; there is a Killing form on G. So p* identifies with g modulo the unipotent radical of P; the Killing form on g does this. So A will be an element of H^0 of g/u_P twisted by σ_P. And notice that inside g/u_P there is p/u_P, and that is m, the Levi. So inside here there is m, twisted by σ_P. And inside that you can take the nilpotent elements of m. And this is what I call, for the parabolic, Nilp. [I wanted to ask about independence of the choice of the parabolic.] Choice of the parabolic? You mean within the conjugacy class? [No: how much does this functor depend on which parabolic you choose, with the same Levi?] With the same Levi—oh yeah, it definitely depends. There's a beautiful story about that; I'd be happy to talk about it maybe later. So Nilp is these guys: nilpotent elements inside the Levi. Okay, let me just say—and I'll explain it next time—that this functor, which in itself is suspicious, sends IndCoh_Nilp(LocSys_M) to IndCoh_Nilp(LocSys_P). So, if you wish: Proposition, parts A and B. Part A is that this functor does this, and not only on IndCoh—it also preserves compactness. These are different statements: preserving compactness has nothing to do with singular support.
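A sketch of the identifications just described (my reconstruction of the blackboard; the subscript σ_P denotes the twist by the local system σ_P):

```latex
\mathrm{Sing}(\mathrm{LocSys}_P)\;=\;
\big\{(\sigma_P,\,A)\ :\ A\in H^0_{dR}\big(X,\ (\mathfrak{g}/\mathfrak{u}_P)_{\sigma_P}\big)\big\},
\qquad
\mathfrak{p}^{*}\;\simeq\;\mathfrak{g}/\mathfrak{u}_P
\ \ \text{(via the Killing form on } \mathfrak{g}\text{)};
\]
\[
\mathfrak{m}\;=\;\mathfrak{p}/\mathfrak{u}_P\;\subset\;\mathfrak{g}/\mathfrak{u}_P,
\qquad
\mathrm{Nilp}\;=\;
\big\{(\sigma_P,\,A)\ :\ A\in H^0_{dR}\big(X,\ \mathfrak{m}_{\sigma_P}\big),\ A\ \text{nilpotent in}\ \mathfrak{m}\big\}.
```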
Compactness can be measured in the entire category, and all part A says is that this functor maps coherent objects to coherent objects, and that has to do with the fact that this map is itself quasi-smooth—in particular, of finite Tor dimension. It pulls back coherent to coherent. Part B is that p^spec_* sends this category to IndCoh_Nilp(LocSys_G) and preserves coherence. Again, preservation of coherence has nothing to do with singular support: it can be measured on all of IndCoh, and all it says is that it sends Coh to Coh, and that is just because this morphism is proper. It's just properness. The interesting thing is what it does to singular supports, and I'll actually show where this comes from. So if I don't run out of time, we'll explain this today and that next time. Let me say a bit more. I wanted to explain why we think this is the right formulation, and not something else. I wanted to explain that this is the minimal fix for the conjecture if you want it to be non-self-contradictory with Eisenstein series. So, if you wish, the main theorem of our paper with Arinkin is the following. Take the Eisenstein functors, but apply them not to the enlargement but to the smaller category—do them naively, apply them to QCoh(LocSys_M). You get something smaller; you will land in IndCoh_Nilp(LocSys_G). Do it for all parabolics and then see what this generates inside IndCoh. The claim is that it generates exactly IndCoh_Nilp. All parabolics, including G itself, obviously. So in a sense it's a minimal fix. [But here you have fixed a Levi, or not?] I'm running over conjugacy classes of parabolics. [No, but when you say you have a map...] No, the Levi is always the quotient for me. Always the quotient. Inside g/u_P it's p/u_P. [But the link between the Nilp of M and of P and of G is presumably completely geometric?] You mean, where does my definition come from? There is a geometric link between them.
[Are you asking where this comes from?] Yes, there is a geometric sense behind it, and I will explain it. Okay, so now I want to do an example, P^1, and how things look there—in which case this statement is a theorem. Okay, question to the audience: how does LocSys_G on P^1 look? Who knows? I mean as a derived stack; G reductive—it doesn't matter, any connected group actually. So how many local systems are there on P^1? Yes. So what do you think the stack is? And what are the automorphisms? Well, it's not just that. The claim is it's a point, but the way it looks is this: you take my favorite derived scheme, pt × pt over the Lie algebra g, and divide by G. It has a non-trivial derived structure. Again, I'll be very happy if you ask me how to prove this—afterwards, in question time. It's a lovely hands-on exercise in derived algebraic geometry. [G is acting?] Yeah, it's acting. [The adjoint action on g, and of course acting trivially on the points?] Yes. [So on g this is the adjoint action?] Yes. [And the Lie algebra is viewed as an affine space?] Yes. [You take the derived product?] You take the derived fiber product; it's acted on by G, and then it's a stack quotient. [But you can write a complex, for the algebra of functions?] I can, yeah—I'll do it very explicitly in a moment. So let me just state that LocSys_G(P^1) looks like this, and again, I'll be happy to prove it for you; it's very nice. [You like this formulation, but...] No, there is a reason. Okay, so Langlands... [That means that if you trivialize—there is a stack above, where you kill the G?] Yes. That stack is just local systems where you trivialize the fiber at one point. By the way, this is not completely canonical; it actually depends on the choice of a point.
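The claim on the board, written out (a sketch of my reading: the fiber product is derived, over the Lie algebra g viewed as an affine space, and G acts by the adjoint action):

```latex
\mathrm{LocSys}_G(\mathbb{P}^1)\;\simeq\;\big(\mathrm{pt}\times_{\mathfrak{g}}\mathrm{pt}\big)\big/G,
\qquad
\mathcal{O}\big(\mathrm{pt}\times_{\mathfrak{g}}\mathrm{pt}\big)
\;\simeq\; k\otimes_{\mathrm{Sym}(\mathfrak{g}^{*})}k
\;\simeq\; \mathrm{Sym}\big(\mathfrak{g}^{*}[1]\big).
```

This is where the non-trivial derived structure lives: the Koszul complex Sym(g*[1]) is the explicit complex of functions promised in the lecture.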
It depends on choices on P^1. But in any case, Langlands... So that's what it says: IndCoh_Nilp. But we have been through this Koszul duality game already. So this category is the following: you take the algebra Sym(g[-2])—g shifted homologically by two to the right—you take modules, you take singular support in the nilpotent cone, in the sense of commutative algebra, and you take the G-equivariant category. Okay, so unfortunately I placed it badly—let me temporarily remove it, because I want to put something here. I want to put my favorite IndCoh, without a support condition, connected by that pair of adjoint functors. This will be Sym(g[-2])-modules, no support condition. Again, this is the obvious embedding: the thing with support sits inside everything. And this is the right adjoint, namely: take sections with support. And inside here we have QCoh, and that is the stuff supported at zero. By the way, the reason such an equivalence exists: well, there was experimental evidence of Bezrukavnikov–Finkelberg, independently, and if you look closely, you arrive exactly at this formulation. Okay, now here comes the question. [Sorry, this is...?] This is the Langlands dual side. So here comes the question: I want to understand what this equivalence actually does. On the geometric side, I'll take the following very concrete object. You know that Bun_G on P^1 contains pt/G as an open substack—this is the semistable locus; being the trivial bundle is an open condition. Take the following thing on the left: j_! of your constant D-module. Now what? This deserves a better place; I don't know what to do—I'll erase this theorem. And I'll tell you where it goes under Langlands: to the diagonal morphism of the point. After all, before you divide by G, it's a derived scheme, blah, blah, blah; it has one point. Just map this point in. Let me call it i; it's the diagonal map.
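The blackboard diagram of categories, as I understand it (a sketch; Ξ is the obvious embedding and Ψ its right adjoint, "take sections with support"):

```latex
\mathrm{QCoh}\big(\mathrm{LocSys}_G(\mathbb{P}^1)\big)
\;\subset\;
\mathrm{IndCoh}_{\mathrm{Nilp}}\big(\mathrm{LocSys}_G(\mathbb{P}^1)\big)
\;\underset{\Psi}{\overset{\Xi}{\rightleftarrows}}\;
\mathrm{IndCoh}\big(\mathrm{LocSys}_G(\mathbb{P}^1)\big)
\;\simeq\;
\mathrm{Sym}(\mathfrak{g}[-2])\text{-}\mathrm{mod}^{G},
```

with IndCoh_Nilp corresponding to modules with singular support in the nilpotent cone of g, and QCoh to those supported at 0.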
By the way, I'm currently testing your level of alertness. Something on the left of what's written on the blackboard makes no sense. Can you tell me what makes no sense? Yeah, I claim this is very ambiguous—you can actually understand it in many different ways. And this is what I said early on: you can really get confused when you use these functors Ξ and Ψ, the embedding of QCoh inside IndCoh versus realizing it as a quotient. So here is one way. I want this object to live here; that's where it's supposed to live. But let's see what meanings I can assign to it. On the one hand, after all, it's a quasi-coherent sheaf: I took the direct image of a quasi-coherent sheaf. So on the one hand, I can understand it as an object of QCoh. And this is the wrong answer, because this would have corresponded to something tempered, and this guy is not tempered. So that's not what we want; under that reading, the object does not belong to this category. Here's another possibility. You would agree that this guy is a coherent sheaf. [Coherent?] Sure. It lives in Coh, therefore in IndCoh. But it doesn't belong to the subcategory IndCoh_Nilp. So the way to understand it: you interpret it as a coherent sheaf and apply this right adjoint. By the way, as a coherent sheaf it corresponds to the structure sheaf, to Sym; it's not supported on the nilpotent cone. You have to apply this right adjoint. [But what does it mean? On the source you have a regular stack; on the target, a derived stack.] Exactly. [But they are essentially the same classically.] Yes. And I'll draw a diagram in a moment where I think it will become clear; I'll write it in a second. I'm saying that when I write i_*—and I'll do it, I'll go there in a moment—it's ambiguous. I don't want to understand it in QCoh; I want to understand it in IndCoh and then apply the right adjoint. So the answer: j_! of the constant sheaf goes to this thing—take the skyscraper, apply the right adjoint.
So now we'll destroy my conjecture; we'll arrive at a contradiction in mathematics. Well, not in mathematics, but with my conjecture. Great. Okay, so how does this thing look? As I said, let me go to the Koszul dual side. You took the structure sheaf and then you took cohomology with supports on the closed subvariety. And you're not getting something coherent, right? Imagine a smooth variety: take cohomology with supports at a point, and you get a kind of delta function—not the skyscraper, the whole delta function. It's not coherent. Agreed? Again: take a smooth variety, take a point. [Pardon?] No, no: coherent means finitely generated cohomology, and this cohomology is not finitely generated. So: if it had been here, it would be compact; once you apply this right adjoint, it stops being compact. So this is no longer compact. But if the conjecture is true—well, I earlier described how compact objects look. So I took this guy, the constant sheaf. What's the problem? Yes: on the classifying stack, the constant sheaf is not compact. The classifying stack has, you know, the equivariant cohomology of the group, in infinitely many degrees. So the constant sheaf is no longer compact; this guy is not compact here. So: no contradiction. Now, let me modify. Instead of doing this—[So what was your conjecture?] No, it was a joke; I was saying that we ran into a contradiction in mathematics. I said that this guy on the right-hand side is not compact. But note that the guy on the left is not compact either, because on the classifying stack the constant sheaf is not compact. Instead, you can modify: consider the map from the point to the classifying stack, take the constant sheaf on the point and push it forward to the classifying stack. It's something whose fibers are the cohomology of the group. And this is compact now. [Pardon?]
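The "delta function" phenomenon in the simplest case, S = A^1 = Spec k[x] and the point {0} (a standard computation, not from the lecture, included for concreteness):

```latex
R\Gamma_{\{0\}}\big(\mathbb{A}^1,\mathcal{O}\big)
\;\simeq\;\mathrm{fib}\big(k[x]\to k[x,x^{-1}]\big),
\qquad
H^1_{\{0\}}\big(\mathbb{A}^1,\mathcal{O}\big)\;\simeq\;k[x,x^{-1}]\big/k[x].
```

The local cohomology sits in degree 1 and is not finitely generated over k[x]: it is quasi-coherent but not coherent, exactly the loss of finite generation being described.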
And what will it correspond to on the other side? So let me say: j_! of π_! of k. It will correspond to something which is supposed to be compact. I don't know how to describe it on the geometric side, but I know how to describe it here: this is the structure sheaf of the nilpotent cone. Somewhat surprisingly, I'm kind of killing off the equivariant cohomology of the group. So in terms of this equivalence, this Koszul duality, it will be the structure sheaf of the nilpotent cone. All right. [And so for each Harder–Narasimhan stratum there is such a guy?] Yes—I am blanking right now on their descriptions on the spectral side. [That means there is a connection to the Kazhdan–Lusztig polynomials?] Okay. So here is my piece of ignorance, but maybe it's not just mine: as far as I know, the Kazhdan–Lusztig polynomials on Bun_G of P^1 are not known. [But there should be something similar on the other side.] So let me say that I think this is not well investigated. We understand this equivalence, but there is a bunch of things that remain open—for example, interpreting the Kazhdan–Lusztig polynomials in terms of the Langlands dual group. I don't think this is known. Okay. So I'm half an hour behind my schedule. There were two things I was planning to do, but I think I can only do one. One was the actual definition of singular support: so far there is only this point-wise definition, but there is a more powerful definition in terms of Hochschild cohomology. That's option one. Option two would be to discuss this lower-star business and explain this statement—you would have to accept the existence of singular support, and I'd explain where such things come from, why the direct image sends this to this. What are the preferences? [That means there is a link between the two nilpotent cones?] Yes. [Okay, so that should not be so difficult.] Okay, maybe I'll explain this first. [Pardon?] No, I don't need Lurie's book.
We are pretty much done with Lurie's book—you need a little of it at the beginning, and then you can put it aside. So the setup: just schemes. [A quasi-smooth scheme?] No: schemes, locally of finite type. What I want to define: on the one hand, we have f_* on QCoh. I claim there exists another functor, on IndCoh, that goes alongside it. I'll define what this functor is. It's kind of stupid; for some reason it works and makes the entire thing work. Before I define this functor, I'll tell you what property it has and what property it does not have. The only thing is that I should have written this line below. So this will be the functor that I will define, and this is the functor that we already have. The property it will have is that this diagram commutes—remember, Ψ is the right adjoint. The property it will not have: well, let me say it. If the schemes are eventually coconnective, we also have the functors Ξ in the opposite direction, and that diagram will not commute. So let me just cross it out. And that's one point where one can get really confused. So you see, there are two ways to go from quasi-coherent sheaves on S_1 to ind-coherent sheaves on S_2: you can take the quasi-coherent direct image and then embed, or you can embed and then take the ind-coherent direct image. And these are different, and that's what we saw with this delta function. And you can even turn it around and you get two maps. All right, so now let me tell you how this functor is defined. It's very stupid, unfortunately. Remember, IndCoh is the ind-completion of Coh; so I just have to define a functor from Coh(S_1) to IndCoh(S_2). And it's defined as follows: I embed Coh into quasi-coherent sheaves, but I remember that it lands in the plus part. Then I apply the usual push forward; I land in the plus part of QCoh(S_2). And then I remember: all right, these guys were actually equivalent. And then I embed.
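Schematically, the definition just given (a sketch; f : S_1 → S_2, and at the end one ind-extends from Coh(S_1) to all of IndCoh(S_1)):

```latex
f^{\mathrm{IndCoh}}_{*}\Big|_{\mathrm{Coh}(S_1)}:\quad
\mathrm{Coh}(S_1)\;\hookrightarrow\;\mathrm{QCoh}(S_1)^{+}
\;\xrightarrow{\ f_{*}\ }\;\mathrm{QCoh}(S_2)^{+}
\;\simeq\;\mathrm{IndCoh}(S_2)^{+}
\;\hookrightarrow\;\mathrm{IndCoh}(S_2).
```

The middle equivalence between the bounded-below parts of QCoh and IndCoh is what makes this "stupid" definition possible.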
[What does the plus mean?] Oh, it's standard notation: bounded—how do you say it—bounded from below. Okay, so now let me comment on where such things come from. If you have a map from S_1 to S_2, well, there's a codifferential; there is this diagram, just taking the dual of the differential of the map. In particular, you can take H^{-1} and then define a map, a correspondence if you wish, like this. And we call this map Sing(f). So here is the theorem. There's one theorem for star-pushforwards, and there will be a parallel theorem for shriek-pullbacks that I'll discuss next time. Fix N_1 in Sing(S_1) and N_2 in Sing(S_2). Now assume the following: if you take the image under this correspondence of N_1, it is contained in N_2. A kind of estimate from below: the N_1 thing is smaller than something that has to do with N_2. Then the claim is that f_* sends IndCoh_{N_1}(S_1) to IndCoh_{N_2}(S_2). [So should one substitute the full IndCoh, with no support condition?] No—see my conventions. It's this new funny F-lower-star; it does care about the support condition, in this case support in N_1. These are full subcategories, and it maps one into the other. [Sorry, Maxime?] No, that's the thing: it does not send compact objects to compact objects, but it maps a given subcategory to the other. In my application the map is actually proper, so in addition it maps Coh to Coh. [So you wrote something more general than you need?] But I need it anyway, for many different things. So again, let me disambiguate, as Maxime just remarked: this functor is defined a priori from IndCoh with all supports on S_1 to IndCoh with all supports on S_2; the claim is that it sends this subcategory to that subcategory. It's a kind of estimate from above. [And for pullbacks?] It's exactly the same with H^0 replaced by H^{-1}. Yeah, yeah.
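The shape of the statement, as best I can transcribe it (hedged: my reconstruction; Sing(f) is the map induced on singular supports by H^{-1} of the codifferential of f):

```latex
\mathrm{Sing}(f):\ S_1\times_{S_2}\mathrm{Sing}(S_2)\;\longrightarrow\;\mathrm{Sing}(S_1);
\]
\[
\text{if}\quad
\mathrm{pr}_{\mathrm{Sing}(S_2)}\Big(\mathrm{Sing}(f)^{-1}(N_1)\Big)\;\subseteq\;N_2,
\quad\text{then}\quad
f^{\mathrm{IndCoh}}_{*}\big(\mathrm{IndCoh}_{N_1}(S_1)\big)\;\subseteq\;\mathrm{IndCoh}_{N_2}(S_2).
```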
So there are these kinds of similarities, but I don't know how to make them more precise. Now, it's a great exercise: apply this theorem to deduce the proposition. You need point A too, but point A we'll do next time. So, exercise: deduce Proposition part B from the theorem. [Inaudible exchange about the geometry involved.] Yeah, yeah: if you take something in the parabolic which is nilpotent modulo the unipotent radical, then it was nilpotent. Again, in the end it's all very concrete. All right. So some time remains; I can try to say how we actually define singular support. This point-wise definition is not really manageable, so one has to give a more robust definition. So, do people still have energy for 15 minutes? We can call it a day. No? Okay, let me do it; just feel free to go to sleep. [No, I'll survive.] All right. So the framework is this, and for this framework you actually don't need higher categories—it's kind of triangulated. So T is a triangulated category, and A is a positively graded commutative algebra that maps to the graded center of T. What I mean by this is that for every homogeneous element a of degree n, you get a natural transformation from the identity functor of T to the shift by 2n, and these commute with each other for different a's. Yes, and they commute with natural transformations—it maps to the center, so it's natural: if you have a functor from T to T', the diagram commutes. So consider Spec A, and for each homogeneous element a, we'll denote by Y_a its locus of zeros. Also, all this grading is in the plain sense, not the super sense. We think of A as evenly graded—you see, a of degree n gives the shift by 2n. Think of it as evenly graded; it's a matter of convention. So what I'll do: I'll define a category for each a. I'll denote it, let's call it, T_{Spec A − Y_a}.
These will be things living on the open subset away from the zeros of a. Namely, t belongs to this subcategory if the map a: t → t[2n] is an isomorphism. It's a subcategory. I wrote it in such a weird way, with it standing on the right, because it will admit a left adjoint. And we get what you could call an exact sequence of categories. Namely, you define things with support on Y_a to be those objects that get killed by this functor. And you really should think of these as objects supported set-theoretically on Y_a, and this is its right orthogonal. In fact, you can write down this left adjoint explicitly; call it Loc_a. [Pardon?] Yeah, my category is cocomplete. So Loc_a(t) is defined as follows: it's the homotopy colimit—which is usually not defined in triangulated categories, but is defined when you take it over the natural numbers. In other words, you're taking... yeah, these animals are not rare; it's like a house mouse. Okay, so this gives the subcategory of objects supported on the zeros of a. Now, if you have N, a conical Zariski-closed subset of Spec A, you define T_N as the intersection, over all homogeneous a such that N is contained in Y_a, of the subcategories T_{Y_a}. So that's how you define support. And what you want to see is that it really behaves like support for quasi-coherent sheaves. There's one thing to check here—it's not completely tautological: that a certain containment is an equality. You don't need any extra assumptions; it just follows as is, but it's the first thing that is not completely automatic—you do something to prove it. And then this gives you a well-defined notion of support. So now, what do we apply this to? So S is quasi-smooth. [But T_{Y_a} is not a subcategory of something—what does intersection mean?] Oh, they are subcategories. T_N is... [No, in your diagram...]
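The explicit left adjoint and the resulting support condition, written out (a sketch; a is homogeneous of degree n, acting as t → t[2n]):

```latex
\mathrm{Loc}_a(t)\;=\;\mathrm{hocolim}\Big(t\xrightarrow{\ a\ } t[2n]\xrightarrow{\ a\ } t[4n]\xrightarrow{\ a\ }\cdots\Big),
\qquad
T_{Y_a}\;=\;\ker\big(\mathrm{Loc}_a\big),
\qquad
T_N\;=\;\bigcap_{a\,:\,N\subseteq Y_a} T_{Y_a}.
```

The hocolim over the natural numbers is one of the few homotopy colimits that already makes sense in a triangulated category, which is why no higher-categorical input is needed for this definition.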
Because on the diagram—sorry, this is really bad notation—this arrow is a quotient and this one is an inclusion; these are really the things with support. So T is, if you wish, the triangulated category corresponding to IndCoh. IndCoh was this DG thing; we don't care about that anymore—we take the corresponding triangulated category. Now, who is A? Before I define A, I'll define what comes before A: B. Sorry, I'm joking. So for this definition we don't even need Hochschild cochains; we just take Hochschild cohomology as is. Now, let's see what we have in there. In the 0th Hochschild cohomology: one thing you know is that functions on your variety map to Hochschild cochains, so take the 0th cohomology of that, and the 0th cohomology of your functions maps to the 0th Hochschild cohomology. Another thing that you know—well, you need to know a little bit about how Hochschild cochains look—is that if you take sections of this shifted tangent complex, that maps to Hochschild cochains of S. In particular, H^1 of the tangent complex maps to the second Hochschild cohomology, and it maps as a module over the functions. And as a result, if you have this and this and you take the symmetric algebra of this over that, this is by definition the algebra of functions on my Sing. So we obtained: Sing(S), this classical scheme—this is my A. So in this story you haven't really used Hochschild cochains, but you will use them. And see, there are different degrees of heaviness with which you can use them. For most applications—for example, if you want to compare with my point-wise definition—you only need Hochschild cochains as an associative differential graded algebra. In fact, Hochschild cochains have more structure: they form what's called an E_2-algebra. And you can get away without using it.
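Putting the maps just described together (a sketch of my reading, for S affine quasi-smooth):

```latex
H^0\big(\Gamma(S,\mathcal{O}_S)\big)\;\longrightarrow\;\mathrm{HH}^0(S),
\qquad
\Gamma\big(S,\,H^1(T_S)\big)\;\longrightarrow\;\mathrm{HH}^2(S);
\]
\[
A\;:=\;\mathrm{Sym}_{H^0(\Gamma(S,\mathcal{O}_S))}\,\Gamma\big(S,\,H^1(T_S)\big)
\;=\;\mathcal{O}\big(\mathrm{Sing}(S)\big).
```

So A is evenly graded, with degree-n elements shifting by 2n, and maps to the graded center of IndCoh(S), which is exactly the input the abstract formalism above requires.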
It's much more convenient to use it for some applications, but you don't actually need to know what E_2-algebras are to develop the theory. All right.