This talk will be about the origins of the theory of vertex algebras. It's going to be a bit different from my usual talks, because it was in fact commissioned by Michael Penn, who has a YouTube mathematics channel with a few videos on vertex algebras, and he asked me to provide a video discussing the beginning of vertex algebra theory. So this is a sort of historical account. It's based on my memory of things that happened more than 35 years ago, so I've doubtless misremembered a few things.

First of all, let's give the original definition of a vertex algebra. The original definition is really clumsy and a total mess. It looks like the following. We have some sort of module V, and we have an infinite number of products on V, maps from V × V to V taking u and v to a product u_n(v), where n is an integer. These products satisfy various identities. V has a sort of identity element 1 such that 1_n(w) = 0 for n ≠ −1 and 1_{−1}(w) = w; that's a sort of analogue of the identity. It also has a sort of derivation D with divided powers D^{(i)}, given by u_n(1) = D^{(−n−1)}(u) (where D^{(i)} = 0 for i < 0). And it satisfies some identities. The first of these identities looks like this:

u_n(v) = Σ_{i≥0} (−1)^{n+i+1} D^{(i)}(v_{n+i}(u)).

And the next identity says

(u_m(v))_n(w) = Σ_{i≥0} (−1)^i (m choose i) ( u_{m−i}(v_{n+i}(w)) − (−1)^m v_{m+n−i}(u_i(w)) ).

So this definition raises several questions. First of all, is there a better definition? Because this is obviously rather a mess. And the answer is yes. In fact, two or three years after this definition, Frenkel, Lepowsky and Meurman reformulated the definition in terms of formal power series, which makes it look much cleaner.
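As a sanity check, these two messy identities can be verified on the simplest example of a vertex algebra, which comes up again later in the talk: a commutative ring with a derivation with divided powers. Here is a minimal sketch in Python (my own illustration, not from the talk), taking the ring Z[x] with D = d/dx, and defining u_n(v) = 0 for n ≥ 0 and u_{−i−1}(v) = D^{(i)}(u)·v:

```python
from math import comb, factorial

def sgn(k):
    """(-1)**k for any integer k (Python's ** gives a float for negative exponents)."""
    return -1 if k % 2 else 1

# Polynomials over Z as coefficient lists: [a0, a1, a2, ...] means a0 + a1*x + a2*x^2 + ...
def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def pscale(c, p):
    return [c * a for a in p]

def peq(p, q):
    return all(a == 0 for a in padd(p, pscale(-1, q)))

def Ddiv(p, i):
    """Divided power of d/dx: D^(i)(x^j) = C(j, i) x^(j-i); no factorials appear,
    so everything stays in Z."""
    return [comb(j, i) * p[j] for j in range(i, len(p))] or [0]

def prod_n(u, n, v):
    """The n-th product on Z[x]: u_n(v) = 0 for n >= 0, u_{-i-1}(v) = D^(i)(u) * v."""
    return [0] if n >= 0 else pmul(Ddiv(u, -n - 1), v)

def binom(m, i):
    """Binomial coefficient C(m, i) for any integer m and i >= 0 (m may be negative)."""
    num = 1
    for j in range(i):
        num *= m - j
    return num // factorial(i)

u, v, w = [1, 2, 0, 3], [0, 1, 1], [5, 0, 7]  # arbitrary test polynomials

# First identity: u_n(v) = sum_{i>=0} (-1)^(n+i+1) D^(i)(v_{n+i}(u))
for n in range(-4, 2):
    rhs = [0]
    for i in range(10):  # terms vanish for large i, so a finite sum suffices
        rhs = padd(rhs, pscale(sgn(n + i + 1), Ddiv(prod_n(v, n + i, u), i)))
    assert peq(prod_n(u, n, v), rhs)

# Second identity:
# (u_m(v))_n(w) = sum_{i>=0} (-1)^i C(m,i) (u_{m-i}(v_{n+i}(w)) - (-1)^m v_{m+n-i}(u_i(w)))
for m in range(-3, 1):
    for n in range(-3, 1):
        lhs = prod_n(prod_n(u, m, v), n, w)
        rhs = [0]
        for i in range(10):
            t = padd(prod_n(u, m - i, prod_n(v, n + i, w)),
                     pscale(-sgn(m), prod_n(v, m + n - i, prod_n(u, i, w))))
            rhs = padd(rhs, pscale(sgn(i) * binom(m, i), t))
        assert peq(lhs, rhs)

print("both identities hold on this example")
```

The same check works for any commutative ring with a divided-power derivation; the point is just that both identities really do hold in this simplest case.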
I'm not going to give the cleaner definition, because that's the standard definition which you can find almost everywhere; this is the historical definition. You can rewrite all these operators in terms of formal power series: for each u you form a formal power series in a formal variable z,

Y(u, z) = Σ_n u_n z^{−n−1},

and if you rewrite everything in terms of these formal power series, that cleans it up. So I'm not going to talk much about the better definition. What I'm going to talk about is the obvious question: how did anyone think of this? How did anyone find this weird definition?

Well, the story starts more or less with the Leech lattice. The Leech lattice is a lattice in 24-dimensional real vector space, and it's really famous because its automorphism group is more or less a double cover of Conway's largest simple group. Conway became very famous in the 1960s for discovering new sporadic simple groups from the automorphism group of the Leech lattice. And Conway and his co-workers were doing a lot of calculations with the Leech lattice. One of the things they did was figure out its covering radius, which is √2. That means that if you put a ball of radius √2 around every lattice point, the balls just cover the whole space. For the hexagonal lattice, say, the covering radius would be the smallest radius such that balls of that radius around the lattice points just cover the plane. The points furthest away from the lattice points are called the deep holes.

And Richard Parker noticed that the deep holes of the Leech lattice seemed to correspond in a rather mysterious way to the Niemeier lattices. The Niemeier lattices had been classified a few years before by Niemeier; they're the other even unimodular lattices in 24 dimensions, and he found 23 classes of these, well, 24 classes including the Leech lattice.
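To make the covering radius and deep holes concrete in a low-dimensional case: for the hexagonal lattice with minimal distance 1, the deep holes are the centres of the triangles, at distance 1/√3 ≈ 0.577 from the nearest lattice points. A quick numerical sketch (my own illustration; the grid search is crude, but it lands exactly on a deep hole):

```python
import math

# Hexagonal lattice with basis (1, 0) and (1/2, sqrt(3)/2); minimal distance 1.
B0, B1 = (1.0, 0.0), (0.5, math.sqrt(3) / 2)

def to_plane(s, t):
    """Point with lattice coordinates (s, t)."""
    return (s * B0[0] + t * B1[0], s * B0[1] + t * B1[1])

def dist_to_lattice(p):
    """Distance from p to the nearest lattice point (nearby integer coordinates suffice)."""
    return min(math.dist(p, to_plane(a, b)) for a in range(-2, 3) for b in range(-2, 3))

# Scan a fundamental parallelogram for the deepest point; a 300x300 grid hits
# the triangle centre (s, t) = (1/3, 1/3) exactly.
covering_radius = max(dist_to_lattice(to_plane(i / 300, j / 300))
                      for i in range(300) for j in range(300))

print(round(covering_radius, 3), round(1 / math.sqrt(3), 3))  # both 0.577
```

For the Leech lattice the analogous computation is a famous 24-dimensional calculation, but the definitions are exactly the same.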
And Richard Parker noticed that two or three of these classes seemed to correspond to certain of the deep holes. Anyway, Conway, Parker and Sloane got very excited about this, classified the deep holes, and found there were 23 classes of deep holes. This was a huge, complicated calculation, taking 100 or 200 pages; a real mess. Conway then used this result to give the following very striking result about reflection groups. What he did was calculate the reflection group of the lattice II_{1,25}, or II_{25,1}; there are two conventions for which way round you put the indices. This is an even lattice, and it's unimodular, which means the volume of a fundamental domain is 1, and it lives inside the Lorentzian space R^{1,25}. You can look at its automorphism group, which has a big reflection group as a subgroup, and Conway calculated this reflection group. In fact he found its simple roots, and the simple roots of the reflection group turn out to correspond exactly to the points of the Leech lattice. The simple roots of a reflection group are sometimes called the Coxeter diagram, or sometimes the Dynkin diagram, depending on whether you're a geometer or a Lie algebra theorist. So we have this weird result that the Coxeter diagram, or Dynkin diagram, of this lattice is the Leech lattice.

Well, what on earth does that mean? The Leech lattice isn't a Dynkin diagram; a Dynkin diagram is a little graph with nodes joined by lines. Well, you can turn the Leech lattice into a Dynkin diagram as follows. The points of the Leech lattice correspond to the nodes of the diagram, and you need to draw lines between the nodes. If you've got two lattice points λ and μ such that (λ − μ)² = 4, so they are as close together as possible, you draw no lines between them.
You draw one line between them if (λ − μ)² = 6, and if (λ − μ)² is greater than 6 you draw something more complicated between them. It's not really a double line, because a double line means something else, but anyway, you can turn the geometry of the Leech lattice into a Dynkin diagram.

I was a research student at the time, and Conway gave me some more geometric calculations to do with the Leech lattice, to keep me out of his hair for a bit; in particular he asked me to classify the other, shallower holes, so I went off and did that. In order to do this it's quite useful to use Conway's observation that the Leech lattice is really a Dynkin diagram.

Meanwhile, Kac and Moody had invented Kac-Moody algebras. The usual finite-dimensional Lie algebras such as E8 have certain Dynkin diagrams, and from the Dynkin diagram you can write down the Serre relations, and that gives you back the finite-dimensional Lie algebra. And Kac and Moody noticed that you could take more or less any graph you like; for instance you could take the extended E8 diagram, with nine nodes. If you write down the Serre relations for this, it doesn't give you a finite-dimensional Lie algebra; it gives you a Kac-Moody algebra (the name came later). This particular graph is the Dynkin diagram of the so-called affine E8 algebra, which is more or less E8 over a ring of Laurent polynomials, give or take a central extension.
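The rule for joining the nodes is simple enough to state in code. A tiny sketch (the function name is mine): recall the Leech lattice is even with minimal norm 4, so the squared distance between distinct points is an even integer at least 4. Incidentally, the simple roots of II_{25,1} corresponding to λ and μ have inner product 2 − (λ − μ)²/2, which is why norm 4 means orthogonal roots and norm 6 means a single line.

```python
def dynkin_join(norm):
    """Type of join between the nodes for two distinct Leech lattice points
    lambda, mu, given norm = (lambda - mu)^2.  The corresponding simple roots
    of II_{25,1} have inner product 2 - norm/2, so norm 4 means orthogonal
    roots (no line) and norm 6 means inner product -1 (a single line)."""
    assert norm >= 4 and norm % 2 == 0  # the Leech lattice is even, minimal norm 4
    if norm == 4:
        return "no line"
    if norm == 6:
        return "single line"
    return "heavier join"  # norm >= 8: something more complicated than a double line
```

So `dynkin_join(4)` gives "no line" and `dynkin_join(6)` gives "single line", exactly the rule above.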
And so I'd been fiddling around with the Leech lattice, thinking of it as a Dynkin diagram, and I got this idea: you could form a Kac-Moody algebra by taking the Leech lattice as the Dynkin diagram. So I went and told John Conway about this, and he very kindly told me that yes, they already knew about it; he, Queen and Sloane actually had a paper pointing this out. This is the normal fate of any idea you have in mathematics: it turns out to be either wrong, or trivial, or known already, and this one was known already. Actually, John Conway very kindly added my name to this paper that they'd already written, so I got my name on a paper that I had absolutely nothing to do with.

Anyway, I got rather intrigued by this Lie algebra of the Leech lattice, and started doing some calculations with it to see what was going on. One thing you can calculate is its root multiplicities. The roots of this Lie algebra are elements of the 26-dimensional Lorentzian lattice, and the root multiplicity is just the dimension of the corresponding root space, and you can calculate the dimension for various roots. So if we have a root r, then the dimension of the root space is given as follows. First of all, if r has norm 2, then the dimension is 1; these are just the so-called real roots, so that's kind of trivial.
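The non-trivial multiplicities coming up next are governed by the coefficients of the series Π_{n≥1}(1 − q^n)^{−24}, whose q^k coefficient p_24(k) counts partitions of k into parts of 24 colours. These are easy to generate; a short sketch (my own, just standard power-series arithmetic):

```python
def p24_coefficients(N):
    """Coefficients of prod_{n>=1} (1 - q^n)^(-24) up to q^N, i.e. p_24(0..N),
    the number of partitions of k into parts of 24 colours.  For each n we
    multiply in the factor 1/(1 - q^n) twenty-four times; the in-place update
    c[k] += c[k - n] over increasing k implements the geometric series."""
    coeffs = [0] * (N + 1)
    coeffs[0] = 1
    for n in range(1, N + 1):
        for _ in range(24):
            for k in range(n, N + 1):
                coeffs[k] += coeffs[k - n]
    return coeffs

print(p24_coefficients(3))  # [1, 24, 324, 3200]
```

These are exactly the numbers 24, 324, 3200 that appear below as root multiplicities, since 1/Δ = q^{−1} Π(1 − q^n)^{−24}.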
If the roots have norm 0, then it's also easy to calculate the multiplicity, which is either 24 or 0. What happens is that there are 24 orbits of primitive norm-0 vectors, corresponding to the Niemeier lattices: if you've got a norm-0 vector w, then its orthogonal complement modulo w is a Niemeier lattice. For the orbit corresponding to the Leech lattice the multiplicity is 0, and for the other Niemeier lattices (it's spelled "Niemeier"; it's got too many i's and e's in it) the multiplicity is 24. So the 23 Niemeier lattices other than the Leech lattice give you "23 times infinity" orbits of norm-0 roots of multiplicity 24, since each primitive norm-0 vector w gives roots nw for every nonzero integer n. So that was easy.

Next you can look at roots r such that (r, w) = −1 for a norm-0 vector w. These also turn out to be reasonably easy to calculate, using the theory of affine Kac-Moody algebras, and the result is as follows. Take the modular form Δ(τ) = q Π_{n≥1} (1 − q^n)^{24}. This is a very famous function whose coefficients are Ramanujan's tau function (which is a different τ from the variable, but never mind). Now take 1/Δ: this turns out to be q^{−1} + 24 + 324q + 3200q² + ⋯, and the multiplicities of the roots r with (r, w) = −1 turn out to be these coefficients: 24 if (r, r) = 0, 324 if (r, r) = −2, and 3200 if (r, r) = −4. But that's only for roots having inner product −1 with a norm-0 vector.

I tried calculating the multiplicity for other roots r. If (r, r) = −2, it turns out there are 121 orbits of such roots; 119 of them have multiplicity equal to this number 324, and 2 don't: 2 have multiplicity less than 324. So I got rather excited about this and went around telling people, and, as I mentioned before, any result you find turns out to be either trivial, or wrong, or found by somebody else; and it turns out these results had already been found by Igor Frenkel. In fact he had gone further. First of all, Frenkel had noticed the result that roots of this sort have exactly these multiplicities; but he had also shown that for any root r, the multiplicity is at most p_24(1 − r²/2), where the p_24(n) are the coefficients above. So he showed that these numbers were an upper bound not just for the norm −2 vectors, as I'd found by this huge calculation, but for all roots.

So yes, that was a bit disappointing, discovering that my nice new result was already known, but I went off and tried to study Igor's result. His proof used the no-ghost theorem from string theory, which is a theorem involving vertex operators; so we're finally getting to something related to vertex algebras. There's a lot of nonsense written about string theory on YouTube and the internet, mostly by people who don't seem to actually know what string theory is. String theory, whether or not it turns out in the end to be useful in physics, is a very interesting and rich mathematical theory. The part of string theory used for Frenkel's result is actually a part that's long been abandoned by physicists: this is ordinary string theory in 26 dimensions, which physicists are no longer interested in; they're more interested in superstring theory in 10 dimensions and various other related theories.

So what string theory gives you is a big space V, obtained by taking the group ring (let's work over the complex numbers) of this lattice and tensoring it with a polynomial algebra in an enormous number of variables α(n), for α in the corresponding 26-dimensional real (or complex) vector space and n a positive integer. So here we have a polynomial algebra in infinitely many variables; it's this really huge vector space, and string
theory gives you vertex operators, which are formal power series: maps from V to formal power series, or rather Laurent series, with coefficients in V, and you get such an operator for each u in the lattice II_{25,1}. These were the original vertex operators, which you write down by exponentiating something complicated and normal ordering it, and so on. And we also get vertex operators from polynomials in the α(n); these give you other vertex operators. I was studying Frenkel's work, trying to understand these vertex operators, and it doesn't take long to notice that if you've got a vertex operator for everything in the lattice, and a vertex operator for everything in the polynomial algebra, then maybe you should combine these and have a vertex operator for any element of V. So every element u of V gives you a vertex operator u(x), a map from V to Laurent series with coefficients in V. In other words, you're getting a sort of algebraic structure: a bilinear map from V × V, not to V, but to Laurent series with coefficients in V. So this is sort of the beginning of an example of a vertex algebra.

Well, if we've got a bilinear map, we should ask what identities it satisfies. For each element of V we have a formal power series, and we can ask: are these formal power series commutative? Is u(x)v(y) = v(y)u(x)? There are two answers to this: one answer is no, and the other answer is yes. If you just expand both sides as formal power series in x and y, the coefficients aren't the same. However, if you multiply both sides by a suitable power of (x − y), then, at least when you apply them to a vector w, they become equal: (x − y)^n u(x)v(y)w = (x − y)^n v(y)u(x)w for n large enough. So what on earth is going on here? Well, let me give a simple example of this. If you take the power series 1 + x + x² + x³ + ⋯ and compare it with −x^{−1} − x^{−2} − x^{−3} − ⋯, you
might say these are quite different; the coefficients are obviously not the same. However, if you multiply the first by (1 − x) it just becomes 1, and if you multiply the second by (1 − x) it also becomes 1. So these different formal power series become the same if you multiply them by (1 − x). Something similar, but more complicated, is going on with vertex operators.

Now, once you've discovered this, it's quite easy to write down some identities satisfied by the coefficients of these operators. For example, you see that u(x)v(y), in some sense, has poles at x = 0, y = 0 and x = y. If you think about the variable x, in the x plane there are poles at 0 and at y, and if you integrate around a big contour containing both, it's the same as the sum of the integrals around small contours about y and about 0. If you carry out this integration, it gives you some sort of identity saying that something is equal to something else plus something else, and this more or less gives you the second identity I wrote down at the start; actually it gives you a slightly more general version, which also incorporates the first identity.

So that's how the identity for vertex algebras could have been discovered. In fact, this is not how the identity for vertex algebras was discovered. What I actually did was: I had no idea what was going on, so I spent several weeks and months writing out hundreds of pages of messy identities involving the coefficients of these vertex operators, and just fiddling around with them, and by a very long-winded, roundabout process found the identities they satisfied. I could have saved an awful lot of time if I had understood what was going on and just directly used this contour calculation.

So in particular, one of the vertex algebra operations is u_0(v); you remember there's an operation u_n(v) for every n. And u_0 satisfies the Jacobi identity, or at least one form of it, for Lie algebras. So, in other words, if you set [u, v]
equal to u_0(v), then we almost get a Lie algebra on V. We don't quite, because it turns out that u_0(v) is not equal to −v_0(u), which it would have to be for a Lie algebra. However, if you take the space V and quotient out by the image of D and its divided powers, you find that V/DV is a Lie algebra. And now, if we take V to be the vertex algebra of the 26-dimensional Lorentzian lattice that I started with, we find that V/DV contains the Lie algebra of the Leech lattice, in other words the Lie algebra whose Dynkin diagram is the Leech lattice, the one I started off with. Furthermore, we find that it contains as a natural subalgebra a Lie algebra whose root multiplicities are exactly p_24(1 − r²/2). This is actually a little bit bigger than the Lie algebra associated with the Leech lattice, so Frenkel's observation, that these numbers are an upper bound for the multiplicities, is sort of explained: the Lie algebra of the Leech lattice is contained in a slightly larger Lie algebra whose root multiplicities are given exactly by these numbers. This Lie algebra, incidentally, is the original example of a generalized Kac-Moody algebra; generalized Kac-Moody algebras were defined specifically to understand this particular Lie algebra with these particular multiplicities.

So I want to just mention a few myths about vertex algebras and try to dispel them. The first myth is that vertex algebras are defined over the complex numbers. Vertex algebras can be defined over the integers, and the theory works perfectly well over the integers. Nearly everybody defines them over the complex numbers, but there's really no good reason for this, except that people are nervous. There are plenty of interesting examples over, for example, finite fields. For instance, you can find examples acted on by Chevalley groups: Chevalley groups are defined over finite fields, and they act on vertex algebras over finite fields. And Alex Ryba noticed that there are various examples over finite
fields acted on by sporadic groups. For example, the baby monster acts on a nice vertex algebra over the field with two elements, which can't be lifted up to characteristic zero. So you really shouldn't define vertex algebras only over the complex numbers; it eliminates a lot of interesting examples.

The second myth is about what motivated vertex algebras. The myth is that vertex algebras were motivated by conformal field theory. In fact they weren't motivated by conformal field theory, because I had no idea what conformal field theory was at the time. They did turn out, shortly afterwards, to be connected with conformal field theory, but that was noticed by other people, not by me. And they weren't really motivated by the monster either, at least as far as I was concerned; again, shortly afterwards it turned out they were related to the monster, but the original motivation was not the monster but the Lie algebra of the Leech lattice.

And the third myth is that vertex algebras are a sort of analogue of Lie algebras. They're not: a vertex algebra is not a sort of Lie algebra or a generalization of a Lie algebra; it's a sort of analogue of a commutative ring. In fact any commutative ring is automatically a vertex algebra: we just define a_{−1}(b) to be the product ab in the ring, and all the other products to be zero. More generally, vertex algebras are analogues of commutative rings with a derivation, or more precisely a derivation with divided powers, which you can actually think of as a commutative ring acted on by a formal group. And this is really the best way to think of vertex algebras: a vertex algebra is sort of like a commutative ring acted on by a formal group, except that the ring multiplication is not defined everywhere; it's sort of a meromorphic function of its arguments. So, informally, if we write the action of a group element x on a ring element a as a^x, then you can write expressions like a^x · b^y, and
you can think of this as a function of x and y. In an ordinary commutative ring this would be a regular function of x and y; in a vertex algebra it's a sort of function of x and y that may have poles, so it's not actually defined for all x and y in the group. In particular the ring multiplication itself may not be defined, because the relevant point may lie on a pole. And all sorts of identities which are obvious if you write things in terms of rings acted on by groups turn out to be vertex algebra identities. For instance, an identity like a^x · (b^y · c^z) = (a^x · b^y) · c^z looks completely trivial if you think of it as a statement about a ring acted on by a group, but translated into vertex operators it turns out to be a non-trivial identity.

So that was the original example of a vertex algebra: the vertex algebra of a lattice. It was quite difficult at first to find further examples, so let me list some of the early examples. First of all, we have commutative rings with a derivation; these are the sort of uninteresting examples. Secondly, we have the vertex algebra of a lattice. And perhaps the most famous vertex algebra of all is the one constructed by Frenkel, Lepowsky and Meurman, which appears in their book on vertex algebras that came out a few years later and gives the cleaned-up version of the definition of vertex algebras: they reformulated the definition in terms of formal power series, rather than writing out all the components explicitly, which makes life a lot easier. So this gives us the monster vertex algebra, and the monster vertex algebra led to the moonshine conjectures, which I'm not going to say much more about; that's another story.

And for a long time I didn't really know of any other examples of vertex algebras, and had a hard time getting anybody interested in them. I think there was one
time I gave a talk, and there seemed to be an awful lot of people in the audience, and I thought: finally, people are getting interested in vertex algebras. The answer turned out to be that the title of the talk had been misprinted as "vortex algebras", and a whole lot of people who thought this was something involving fluid dynamics had turned up to hear what a vortex algebra was, and seemed to be a bit disappointed by the talk.

But anyway, eventually Frenkel and Zhu found some more examples of vertex algebras. They found that if you take a highest weight representation of an affine Lie algebra, then this quite often has the structure of a vertex algebra, or is sometimes a module over a vertex algebra; and instead of taking an affine Lie algebra you can also take the Virasoro algebra. So we finally started getting a reasonably large number of examples of vertex algebras. Frenkel, Lepowsky and Meurman's construction of the monster vertex algebra was the original example of something called an orbifold construction. In terms of vertex algebras, what this means is that you take a vertex algebra, take its fixed points under a group action, and then maybe add on something else; and it's very hard figuring out what the something else is. It's still rather difficult to construct orbifolds of vertex algebras, even today, more than 30 years later.

Okay, I think that's more or less a summary of the early theory of vertex algebras. So what's the moral of this, as the Duchess would ask? Well, I guess the moral is: if you find a really bizarre, weird example, such as Conway's description of the reflection group of the 26-dimensional Lorentzian lattice, it's really a good idea to spend a lot of time poking around with it and seeing what turns up. General theories, like, say, generalized Kac-Moody algebras or vertex algebras, don't usually arise because someone was trying to invent a general theory; they arise, very often, because someone was looking
at one particular interesting example and trying to understand it. So if you find anything bizarre going on, you should really try to focus on it.