Thank you for the introduction and for the invitation. So the plan is as follows: to give some motivation, some constructions following Borcherds, and then my results. I'm just going to use the slides for motivation so that I can get the ideas across quickly. OK, so let me start by saying this. If you like geometric representation theory or equivariant elliptic cohomology or anything vaguely related to that, and you're a young person, which is a technical term meaning three years into your PhD or up to five years after graduating, then you should apply for the AMS Mathematical Research Community. I'm organizing that with some other people. It's in June, and the focus will be on research. There'll be a few talks, but mostly we'll divide people into groups based on their expertise. We'll have some reading groups online ahead of time so you can learn more; you don't need to be an expert already. And the AMS has lots of funding for this, so if you participate in a Mathematical Research Community and you're working on a collaboration, they have funding for the groups to get together in the coming two or three years. So it's a good opportunity, and you should apply if you qualify, and you should encourage your young people to apply. OK, good thing I put that slide there, because I would have forgotten. And the deadline to apply is February 15th. OK, so let me start with the motivation. Here's the context that I'm working in. We have three categories of objects which are of interest to people, some of whom are at this conference: vertex algebras, chiral algebras, and factorization algebras. These are factorization algebras in the sense of Beilinson and Drinfeld. So what are the relationships between them? Chiral Koszul duality says that chiral algebras on a space X are equivalent to factorization algebras.
And I have some references there: that's due to Beilinson and Drinfeld over curves, and Francis and Gaitsgory in higher dimensions. All right. And vertex algebras are equivalent to translation-equivariant chiral algebras on A^1, the affine line. That was known to Beilinson and Drinfeld, but it was first written down carefully by Huang and Lepowsky. And if you want to preserve your equivalences here, I'd better stick the same modifiers on the right. So we have three categories which are equivalent; they're just different perspectives. Okay. So here's my notation for these categories. And in 1999 Borcherds, who was trying to make vertex algebras more accessible to a wider audience, or trying to explain that the definition is well-motivated and natural, came up with this other definition, which he calls (A,H,S) vertex algebras. So A, H, and S are some input data that we're going to discuss in some detail in the next section. And when he chooses a specific example, which I've denoted by S_B for Borcherds, he constructs a functor that goes from (A,H,S_B) vertex algebras to vertex algebras. And then, so this is a paper from 1999, he constructs certain examples of vertex algebras in this way, so lattice vertex algebras he can construct, but he can't construct everything. He asks: can we construct the Virasoro? How far is this from being an equivalence? So I was trying to answer these questions, and I realized that we should modify his input data. So I call this S_{A^1}, and then there's a notion of translation equivariance, and I can produce a functor from there to chiral algebras. And it's not an equivalence, but it has a section, okay? So every factorization algebra gives rise to one of these funny vertex algebras, but there are more of these funny vertex algebras, which get collapsed to the same element downstairs. And then there's a map from my category to Borcherds' category.
So in particular, his functor is essentially surjective, but it's not fully faithful. So why do I care about this? We have three perfectly good categories upstairs; why am I introducing more complications? So here are some motivating questions that Borcherds was asking. These don't explain why we're interested in studying them, but they state some of the questions that I was looking at. So: how far is this data from being an equivalence, and can we construct well-known vertex algebras in this category? Here's the reason that I started looking at this paper: I was talking to Dominic Joyce, and he constructed the following example. Let's start with a C-linear abelian category satisfying some conditions, so for example modules over a quiver, or coherent sheaves on a nice variety, and then you form the moduli stack of objects in that category, and then you look at its homology. And what Dominic shows is that it has the structure of a graded vertex algebra. You have to shift the grading to make it work out, but he builds all the structure of a vertex algebra, and he actually does this through Borcherds' structure. So he constructs an (A,H,S) vertex algebra, and then he just applies Borcherds' machine, and he gets a vertex algebra. And so if one understands better how Borcherds' definitions are related to the geometric picture, maybe one can understand better how the vertex algebra structure on this geometric object has a geometric origin, because in Dominic's preprint it's just a lot of formulas, like when you open a vertex algebra book, and so you don't really see what the geometry is. So the question is, can we use the geometric approach to understand this better? [Audience:] Could you indicate what the conditions are? I can't remember off the top of my head, but I can show you; we can look up a reference after, yeah, sorry. Okay, right, so that was the question that motivated me to read this paper of Borcherds', which, to be honest, I had never heard of, okay.
And another question: Borcherds' paper is actually called "Quantum vertex algebras". And one of the advantages of his funny definition is that it generalizes very naturally to the quantum setting. So he's going to define some category, and vertex algebras are going to be commutative ring objects in that category, and quantum vertex algebras are braided ring objects in that category. So you have an R-matrix, and then you have the usual braiding conditions. And so if you can understand how to relate Borcherds' category to factorization algebras, then you can just transport those definitions over. So the answer to this question, I'm not going to have time to get into it, but in brief, the answer is that you can adapt his definition, but it's not clear if it's useful yet; I haven't proven anything about it. One thing that I've learned is that there are at least three different definitions of quantum vertex algebra, and it's not clear which is the correct one. So this might not be a helpful thing, but it is a thing. Okay. And let me just review, since I'm not sure what people's backgrounds are, just so you have an idea of the flavor of the categories upstairs. So recall that a chiral algebra on X, a variety, is a D-module equipped with a Lie bracket which lives on X squared. So it's a map of D-modules, and it satisfies the axioms of a Lie bracket. And a factorization algebra on X consists of a bunch of left D-modules on copies of X. So for any finite set, you take the product of that many copies of X, and you have a sheaf on there, and you have two kinds of compatibility conditions. The first is the Ran condition, which says that if you restrict something on many copies of X to the diagonal, you get the thing on the fewer copies of X; so there's an example, the diagonal inside of X squared. And factorization isomorphisms, which say that if you restrict away from the diagonal, what you have on X squared, for example, splits into what you have on two copies of X.
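For readers who want the formulas behind this verbal sketch, here is a standard way to write the two definitions, my reconstruction following Beilinson and Drinfeld, not verbatim from the slide:

```latex
% A chiral algebra on a curve X: a (right) D-module A with a bracket living on X^2,
\[
  \mu \colon j_* j^* (\mathcal{A} \boxtimes \mathcal{A}) \longrightarrow \Delta_! \mathcal{A},
  \qquad j \colon X^2 \setminus \Delta \hookrightarrow X^2, \quad
  \Delta \colon X \hookrightarrow X^2,
\]
% satisfying antisymmetry and the Jacobi identity.
% A factorization algebra: left D-modules B_I on X^I for each finite set I, with
\[
  \Delta^*_{(I \twoheadrightarrow J)}\, \mathcal{B}_I \;\cong\; \mathcal{B}_J
  \qquad \text{(restriction to diagonals, the Ran condition)},
\]
\[
  \mathcal{B}_{I_1 \sqcup I_2}\big|_{U} \;\cong\;
  \big(\mathcal{B}_{I_1} \boxtimes \mathcal{B}_{I_2}\big)\big|_{U}
  \qquad \text{(factorization away from the diagonal, } U = \{x_{i_1} \neq x_{i_2}\}\text{)}.
\]
```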
So you don't need to write these down or memorize them or anything, but I just want you to get the flavor of the sort of geometric objects that we're working with. Okay. So let me just leave that slide there so you can remember what we're working towards, and now I'll get into the details. Okay. Can people see the board, or should we try to turn the light on? I don't know how to do that. Okay. So these constructions are mostly following Borcherds' paper, but I'm going to have sort of geometric interludes where I say: if you're a geometer, you look at this and you say, this is what the algebra is telling us. So let me introduce the following notation. We have two categories. The first category is the category of finite sets and functions between them. And the second category, which is a bit more interesting, is finite sets with equivalence relations, or partitions. And then morphisms have to be maps between the finite sets which preserve inequivalence. So if two elements are equivalent under the equivalence relation in the domain, they can become inequivalent under the map, but it can't go the other way around. So let me tell you a geometric way to think of this. Let's fix X a separated scheme. A lot of Borcherds' definitions and results work in positive characteristic, but let me not worry about that, because there the whole picture doesn't fit together. So let's just assume we're over C. Okay, then what I can do is define a functor X^bullet from this category of finite sets into schemes. It sends an object, so that's a finite set I, to the product of X with itself indexed by I. And it sends a morphism, let's call it alpha from I to J, to a morphism of schemes phi_alpha from X^J to X^I, and what that does is it takes the coordinates indexed by J, and we use alpha to tell us which coordinates to keep. So if alpha is a surjection, this is a closed embedding, and if alpha is an injection, then this is a projection onto some factors.
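In symbols, the functor described on the board can presumably be written like this (a sketch, with the contravariance made explicit):

```latex
\[
  X^\bullet \colon \mathrm{Fin}^{\mathrm{op}} \longrightarrow \mathrm{Sch}, \qquad
  I \longmapsto X^I,
\]
\[
  (\alpha \colon I \to J) \;\longmapsto\;
  \phi_\alpha \colon X^J \to X^I, \qquad
  \phi_\alpha\big((x_j)_{j \in J}\big) = \big(x_{\alpha(i)}\big)_{i \in I}.
\]
% If alpha is surjective, every coordinate x_j appears among the x_{alpha(i)},
% so phi_alpha is a closed embedding (e.g. alpha: {1,2} -> {1} gives the diagonal X -> X^2);
% if alpha is injective, phi_alpha is a projection onto some of the factors.
```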
Okay, so what's the point? The point is that we can extend this to a functor on the new category, as follows. We're going to send I together with its equivalence relation to the open subscheme U_I, which consists of points in the product such that x_{i_1} is not equal to x_{i_2} unless i_1 and i_2 were equivalent. And the condition on morphisms: so alpha from I to J is just a map of sets, but if I and J have equivalence relations, then it preserves inequivalence if and only if the following diagram works. We have our phi_alpha, and inside of here we have U_J, and inside of here we have U_I, and the condition is that if you apply phi_alpha to something from U_J, you end up in U_I. So now we have a functor on this new category. Okay, so that was our geometric interlude, and now let's go back to what Borcherds tells us. So he constructs the following category. It's a category of functors, depending on FinInequiv, A, T, and S. And what these are going to be, I'll explain the details, but the idea is that we want functors, let's call them V, from this category into A, and for us A is always going to be vector spaces. So let me keep track of some choices that we're making along the way. These definitions work in more generality, but Borcherds' examples and theorems only hold in these cases, so I don't want to spend time on the things we don't need. Okay, so we're going to have functors from here to here, and they're going to be equipped with, okay, let me put this in quotation marks, an "action" of a coalgebra. So what is an action of a coalgebra? I'm going to have to tell you what we mean by that. So T is the coalgebra, and then an action, not in quotation marks, of an algebra. Where do that coalgebra and algebra live? They're both functors; I'm going to explain in a second. Okay, so more precisely, in our case, we need to construct this coalgebra T.
And the first thing we do is set H equal to polynomials in one variable. This is another choice which we're making, and the reason I'm introducing the notation H is because the input data was called (A,H,S), and this is our H, but we use it to build T. Okay, so we have this ring, and we make it into a coalgebra by saying that the comultiplication of this element is the usual thing. And now we can define a functor T from finite sets into vector spaces that sends I to the tensor product of I copies of H. So that's just the same as polynomials in I variables. And this is a coalgebra because we can extend the comultiplication, and this also gives us the functoriality. So I didn't worry about inequivalence: if I want to say what happens to a set with an equivalence relation, I just forget the equivalence relation. Okay, so that was T. And we say that T acts on V if we have a bunch of maps which look like action maps, except that of course they can't satisfy the usual action axioms. So we have a different compatibility condition, which I don't want to state in general, but I'll say in a second what happens in our examples. Okay, so now we can... [Audience:] Everything's commutative, so... Yeah, but still, a coalgebra would usually coact on something. I mean, I don't know if "action" is really the right word, but this is what Borcherds uses, and this is the condition that he has. And in the examples that we'll see, it really is natural to think of it as a left action. So, okay. [Audience:] I think the comment is that coalgebras normally coact. Yes. [Audience:] Which would mean maps from V to T tensor V. That's why I put it in quotation marks: this is an action, it's not a coaction. Okay. And now, this is the source of the S. So here's the algebra object. This should be S, and now it's important that it's defined on this whole category with inequivalences, and it lands in Vect. And it should be an algebra object as a functor.
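The "usual thing" for the comultiplication is presumably the one making the variable primitive, which is what lets T act by derivations later; a sketch of my reading of this step:

```latex
\[
  H = \mathbb{C}[t], \qquad
  \Delta(t) = t \otimes 1 + 1 \otimes t, \quad \varepsilon(t) = 0,
  \qquad \text{so} \quad
  \Delta(t^n) = \sum_{k=0}^{n} \binom{n}{k}\, t^k \otimes t^{n-k},
\]
\[
  T \colon \mathrm{Fin} \longrightarrow \mathrm{Vect}, \qquad
  T(I) = H^{\otimes I} \cong \mathbb{C}[\, t_i : i \in I \,],
\]
% extended to finite sets with equivalence relations by forgetting the equivalence relation.
```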
So here we're using that Vect is a tensor category to induce a tensor structure on the category of functors into Vect. So this is an algebra object, and it should have an action of T, and we have a specific example of T here. And T should act by derivations; that's the consequence of this condition, star. So if I take an element of T and act on a product of things in S, then I should get the usual formula. Okay. So let me give some examples. The first example is in Borcherds' paper, and he doesn't call it this, but I'm going to call it S_B. It sends a set I with an equivalence relation to the following algebra: whenever i_1 and i_2 are not equivalent, we have their difference and its inverse. So this is an algebra, and it has an action of T by derivations in the natural way. That's probably why he picked that notation. Okay. Other questions about that? And it's a functor, and for it to be a functor, it's important that maps from I to J preserve inequivalence. Otherwise we might get a zero here, and then we'd have a problem, but we don't have to worry. Okay. So let's modify this example to get the version that I have in the slide up there. This is example two. So S_{A^1} sends the set I to: we adjoin all the variables, and then we invert the differences that we had before. So the observation is, what are these? These are functions on A^1... sorry, on A^1 to the I, without some diagonals. And that's exactly this U_I we had over there. And so we can generalize this. Here's the generalization, for X in A^1, so any open affine curve with a fixed global coordinate: then I can take S_X, which sends I to functions on U_I. And the functoriality comes exactly because we have this map here, and so we can pull back functions. And the global coordinate is what allows us to say how these guys act. Okay, so now we have all of the pieces.
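Written out, the three examples of S are, as far as I can reconstruct from the description above (a sketch, not copied from the board):

```latex
\[
  S_B(I, \sim) \;=\; \mathbb{C}\big[\, (x_{i_1} - x_{i_2})^{\pm 1} \;:\; i_1 \not\sim i_2 \,\big],
\]
\[
  S_{\mathbb{A}^1}(I, \sim) \;=\;
  \mathbb{C}\big[\, x_i : i \in I \,\big]\big[\, (x_{i_1} - x_{i_2})^{-1} : i_1 \not\sim i_2 \,\big]
  \;=\; \mathcal{O}\big(U_{(I,\sim)}\big) \quad \text{for } X = \mathbb{A}^1,
\]
\[
  S_X(I, \sim) \;=\; \mathcal{O}\big(U_{(I,\sim)}\big)
  \quad \text{for } X \subseteq \mathbb{A}^1 \text{ open affine,}
\]
% with T acting through the global coordinate, by the derivations d/dx_i in each variable.
```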
And so a functor from finite sets with equivalence relations to vector spaces, which has an action of T and a compatible action of S: that is an object of this category. Let me denote it by C with inequivalences; that's this category of functors. So now, I said we want to look at ring objects in here, so we need to say what the monoidal structure is. There are actually two things we can do, and this is kind of analogous to the fact that if we look at D-modules on the Ran space, we have two natural ways of making that into a monoidal category. So here's the naive thing. If I have two objects, so these are two functors, then I can say that V tensor over S with W sends I to V(I) tensor over S(I) with W(I). So this is an S(I)-module, and T(I) can act because we use the comultiplication to split it up into two factors, and then it acts on each factor. And everything preserves the functoriality, so that gives us an object again. [Audience:] Maybe you already said this, but do we want to think of S as being commutative always? For getting vertex algebras, yes. But in the quantum setting, it could be braided commutative. And Borcherds also has a comment: A should be a symmetric monoidal category, and he has a comment about what would happen if you took a braided monoidal category. I think you can make the definitions, but maybe you can't say anything about them. Or nothing has been said. OK, so that was the kind of boring thing to do, and here's the singular thing. This is where the term "singular commutative ring" comes from. So the singular tensor structure is the following. We're going to define it by saying that Homs in this category from the funny tensor product of V and W into a third object Z are given by, maybe I need more space, compatible families of maps from V(I) tensor over C, or more generally whatever the tensor is in your category A, with W(J), to Z of I disjoint union J.
And this is just my notation to indicate that I'm taking the disjoint union of I and J, imposing that I keep the equivalence relation on I and the equivalence relation on J, and elements of I are not equivalent to elements of J. And then all the structure from S and T tells you what the compatibility condition should be, and then you can show that this is representable. So you get an object, V tensor W, in here, and that's called the singular tensor product. So, a remark: we have the forgetful functor from finite sets with equivalence relations to finite sets without, but we can do another thing. We can embed functors which are only defined on finite sets; let's call that category C. We can embed that inside our C with inequivalences in the following way. Let's send an object V to the functor V bar, and this functor is going to send I, so now I has an equivalence relation, to: I forget the equivalence relation, so I just have a finite set, so V knows what to do with that, and S knows what to do, and then we tensor, because S can also handle the equivalence relation. So every object in here can be viewed as an object in here, but if you take two objects in here and tensor them, you end up in here; you don't get an object back. So even if V and W are in C, this tensor product is strictly in here. [Emily:] Does this tensor product come from some sort of Kan extension or something? The formula with the compatible families looks like... is it Day convolution? Yeah, I mean... I think, I mean, it's really trying to do what the chiral tensor product does for factorization algebras, right? The name "singular tensor structure" means that you're allowing poles on diagonals, and you're taking disjoint unions here. But I'm not sure; probably you can...
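The defining property of the singular tensor product, as described above, can be written as follows (a sketch; the tilde notation for the product is mine):

```latex
\[
  \mathrm{Hom}\big(V \,\widetilde{\otimes}\, W,\; Z\big)
  \;=\;
  \Big\{\ \text{compatible families}\quad
  V(I) \otimes_{\mathbb{C}} W(J) \longrightarrow Z(I \sqcup J)\ \Big\},
\]
% where I \sqcup J carries the equivalence relation restricting to the given ones on I and
% on J, with no element of I equivalent to an element of J; "compatible" means compatible
% with the S- and T-actions and with the functoriality in I and J. One then shows this
% functor of Z is representable, and the representing object is V \widetilde{\otimes} W.
```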
Okay, so the geometric thing to keep in mind in this situation, when you have the case of S_X: what is V(I)? So V in this category C is global sections; each V(I) is the global sections of a D-module on some affine thing. And so it corresponds to some D-module on X^I, because we have the action of functions, and then T gives us the action of the derivations, and then there are maps between them. And when we look at V bar in the inequivalence category, it just says that we should also remember the restrictions of V(I) to the various U_I's. But this guy already knows about the restrictions too. So it's not more data; it's just that our list looks longer. Okay, so finally we can make the definition. An (A,H,S) vertex algebra is a singular commutative ring object V in C. So I want it to live in this full subcategory. So what the multiplication map tells me is that I should have something from the singular tensor product of V with itself, but I can only say that it's going to land in V bar. So in particular, if I take I and J to each be a point, and I call the points 1 and 2, then I have a map from V(1) tensor over C with V(2) to V bar of {1,2}. Okay, and that's just V of {1,2} tensored over functions on X squared, where I allow poles. So that's the multiplication map. And then we can spell out what we mean for it to be associative and commutative and so on. And the reason that we did all this work is because we have the following theorem. Given an (A,H,S_B) vertex algebra, that's S_B right there conveniently, let's call it V: if I apply this functor to the one-point set, which I'll call 1, so this is a vector space, then it has a natural structure of vertex algebra in the normal sense. So this is how we get this theta_B from that diagram, the blue arrow, from VA_{A,H,S_B} to VA. Other questions so far? So that's the end of the exposition, kind of, and now I can say what is new. Actually, I'm going to begin with some more results of Borcherds'.
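In the two-point case the multiplication map reads, as far as I can reconstruct it (a sketch, with the diagonal Delta in X squared):

```latex
\[
  \mu \colon V(1) \otimes_{\mathbb{C}} V(2)
  \longrightarrow
  \overline{V}\big(\{1,2\},\, 1 \not\sim 2\big)
  \;=\;
  V(\{1,2\}) \otimes_{\mathcal{O}(X^2)} \mathcal{O}\big(X^2 \setminus \Delta\big),
\]
% i.e. the product of two elements is allowed to have poles along the diagonal x_1 = x_2.
% This is the structure from which the theorem extracts an honest vertex algebra on V(1).
```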
So the results are really asking: how far is this theta from being an equivalence? And the first positive result, again in Borcherds' paper: let L, with its pairing, be an even lattice. Then there exists an object V_L in this category of (A,H,S_B) vertex algebras such that when I apply theta to V_L, I get the lattice vertex algebra. So I think I have a few minutes, so I have some slides that sketch the construction. I don't want to go through the details, but I want you to see a few of the ingredients. So we start with our even lattice, and then we construct this enormous vector space, which you might recognize as the vector space underlying the lattice vertex algebra. And it has a natural multiplication because it's a symmetric algebra, and we define a comultiplication in the following way. And it has an action of this T, and so that gives us an action by derivations. So we have a bialgebra with a derivation. And we define the functor on finite sets by just taking tensor powers of this vector space. I think that upper L on the right should be a lower L. So I have this vector space, and I'm going to tensor it a bunch of times. Okay, so that's an object in C. And now I want to make it into a singular commutative ring. Well, it's already an ordinary commutative ring, because it was an algebra. But we're going to twist that commutative multiplication using what Borcherds calls a bicharacter. So that's a map from V tensor V into, well, this is S_B of the disjoint union of {1} and {2}. So we can generalize those definitions, and it satisfies some axioms. And then once you have a bicharacter, you can twist. So, can people see the bottom row here? This is a general formula: if I have a comultiplication, a multiplication, and a bicharacter, I can twist. So I'm taking the first components of the comultiplications of u and of v, multiplying those together, and then applying r to the remaining two components.
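In Sweedler notation, the twisting formula on the bottom row is presumably the standard one (my reconstruction, not copied from the slide):

```latex
\[
  r \colon V \otimes V \longrightarrow S_B\big(\{1\} \sqcup \{2\}\big)
  \;=\; \mathbb{C}\big[(x_1 - x_2)^{\pm 1}\big],
\]
\[
  \mu_r(u \otimes v) \;=\; \sum u_{(1)}\, v_{(1)} \cdot r\big(u_{(2)} \otimes v_{(2)}\big),
\]
% so the ordinary symmetric-algebra multiplication is corrected by a factor that may have
% poles along the diagonal, which is exactly why the result is only a *singular*
% commutative ring rather than an ordinary one.
```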
And I get a map like this. So that wouldn't give me a multiplication in the ordinary sense, but that's exactly what this map mu should be. So this is a general construction: any time we have a bialgebra with a bicharacter, we can form a singular commutative ring. So the geometric analogue, which I haven't worked through in detail but which I want you to think of, is that if you look at, for example, the classification of lattice chiral algebras, you start with a commutative algebra with derivation, and then you have theta data which twists it and makes different kinds of lattice chiral algebras. So that's what this r is doing, and so we just have to work out how exactly to go from bicharacters to factorizing line bundles. So that's the kind of idea, and the reason that I'm exploring this example is because this is also the construction that Dominic uses. He starts with the algebra coming from the homology of the moduli space of objects, and he defines a bicharacter, and his bicharacter comes from a perfect complex on the product. So I want to sort out those details, but that's where this is going. Okay. So that's a positive result: we have some vertex algebras in the image of theta. Yes, please. Thanks. That's where we are. So here's a negative result; this one is due to me. So if you can construct the lattice vertex algebra, you want to construct it as a VOA; you want to see the Virasoro. So the question is: is there a Virasoro object mapping to this one? And the answer is no. There is no object A in Borcherds' category with a map omega from A into what I called V_L down there such that applying theta_B gives the Virasoro with central charge one mapping into V_L. So this here is for the rank one case, but you can generalize if you want to deal with more notation. Okay, so that's a problem, because that's one of the most important properties of this vertex algebra, and we can't see it in Borcherds' setting. But we can fix it.
So here is a positive result. Okay. Instead of using S_B, we use S_{A^1}, or S_X more generally. The first part is that there exists a functor theta_X from VA_{A,H,S_X} to vertex algebras on X, and once you know that you have a vertex algebra on X, you know, again for X open inside A^1 and by Huang and Lepowsky, that that's equivalent to chiral algebras on X. And so you can just follow through: you get some formula here for the vertex operators, and then you get some formula here. But the second part of the theorem is that the chiral bracket can actually be described directly in terms of the (A,H,S) data, in the following way. So let's say that this was V; then the vertex algebra is V(1), and the chiral algebra is the D-module coming from that, maybe made into a right D-module, let me not worry about that. So the chiral bracket on V(1) can be described as follows. Remember, I thought I put the definition of a chiral algebra back up: we need a map like this, and it should end up over here. The first thing we do is use this mu, this chiral, or rather this singular commutative multiplication. So this should be V(1) tensor V(1), and we have this mu, and mu lands us in V bar of {1,2}. So mu actually starts here, this is what mu does, but then you can extend to get a map like this. So that's the first step. And the second step: we use the short exact sequence for, this is the inclusion of the complement of the diagonal, and then we have the diagonal. So we are here, we have a map here, this is from the short exact sequence. And this last map here comes from the fact that we had a functor: we have a map from V({1,2}) to V(1), which comes from the surjection of {1,2} onto {1}, and that induces this map here. So if you follow through all of these computations, it simplifies into this formula. All right, so now I have that functor. Well, not quite: I have it for a given X, and then I claim that you can
make it translation equivariant. And then here's this thing which I called sigma: taking global sections. So recall that part of the data of a factorization algebra is that you have a bunch of D-modules on copies of X. You take global sections of those, and that gives you a V, a functor. So taking global sections gives a sigma_X from factorization algebras on X to (A,H,S_X) vertex algebras, and it has the property that composing with theta_X gives back the identity on factorization algebras, so sigma_X is a section of theta_X. And so theta_X is essentially surjective. But this example of Borcherds' tells us that there are more things there. So I could take my lattice vertex algebra, make it into a factorization algebra, make it into an (A,H,S) thing, and I get an object there, but it cannot be the same one that Borcherds had, because mine will have a map like this and Borcherds' won't. So that's kind of a problem with Borcherds' category, because it means you could think you've chosen a representative of the object that you want, but it doesn't have the properties. Okay, and so, well, let me just conclude. [Audience:] Do you know what happens with V_L when you go around? You get a different representative for the lattice vertex algebra? Yeah. I mean, what is it? I mean, it's as explicit as the factorization algebra for V_L is, which is great, so we can see how much it's different from Borcherds' example. No, so I do have a question: somehow what's going on is that Borcherds' data gives us a map all the way into here, but we don't care about all of that data, right? So maybe there's a way of adding more morphisms. So the way I've defined morphisms in this category, and Borcherds doesn't define the morphisms, but they're functors, so I said that morphisms should be natural transformations, but maybe they only need to be defined on some associated graded or something like that. So there might be some way of loosening your notion of morphisms so that you can get a morphism here. And basically, because we've chosen these X's to be affine curves, all of the U_I's
are affine, and so all of the time we're just working with global sections. And so he's basically just dropped some of the axioms of a factorization algebra, like some of the morphisms are not required to be isomorphisms. And so I don't really know that there's any advantage to working in that setting rather than just requiring them to be isomorphisms, yeah. [Audience:] Maybe you said this before, but do you know how to see the Virasoro in this VA category, this (A,H,S_B) category? Do you just know how to see it abstractly? I mean, I do now, because I have a section. So I just take my Virasoro as a vertex algebra, or as a chiral algebra, or as a factorization algebra, and then I take global sections. So in this category here, all of the objects, except for Borcherds' example and maybe Dominic's example, the only examples I know are just global sections of factorization algebras. And in those cases, kind of my argument is that for both Borcherds' example and Dominic's example, it would be better if we could find the expression that is global sections of a factorization algebra, which I guess we can, but just by tracing the diagram around. I guess then we could see some geometric, you know, moduli space interpretation. So the last remark that I wanted to make is that I didn't say anything about translation equivariance, but it's pretty clear how you should define translation equivariance, and then all of the functors which I defined on X, I make X equal to A^1, and they are compatible with translation equivariance. And so that's what's going on in that diagram. That's all I wanted to say. [Audience:] Can you help me understand? I'm a little confused by the negative result and the assertion that theta_X has a section. For instance, if you take what's written there and translate it as a factorization algebra, right? So what happens is that V_L does not go to this object; it goes to something else. And so that's what I mean: Borcherds has a perfectly decent representative, but it doesn't have the properties that we wanted it to have. So that's confusing.
[Audience:] Do you know if these functors preserve certain colimits or limits? This sounds like a localization kind of phenomenon, right? Like you have some category, and then you want to force certain things to be equivalences to get factorization algebras. I haven't thought about that; I don't know. My conclusion, like, I read this paper and I was like, I want to understand this, and now my conclusion is: I think I was happy with factorization algebras, but I somehow now have factorization algebras as a full subcategory here. So if I can understand quantum vertex algebras in Borcherds' sense... I know what the definition should be, but I don't know if Borcherds' quantum vertex algebras are interesting. [Audience:] You know it's a full subcategory, not just...? I think so, yeah. Yeah, I think so, not 100% sure. But, any other questions?