This is an expository talk on monstrous moonshine. It's based on some old slides I found from a talk in about 1998, so it's definitely a historical talk rather than one based on current research. First I'll just quickly describe what the monster simple group is. The monster simple group is the largest of the sporadic simple groups, and it has this ridiculously large order. Here it's given in factorized form as a product of primes. If you want to write it out explicitly, it's this rather impressive number here. This number is rather more than the number of elementary particles making up the earth, for example, so that gives some idea of its size. It lives as a group of symmetries in various dimensions; in particular, the smallest dimension in which you can represent it as a group of symmetries is this number 196883. The next dimension is 21296876. On the other hand, there's something called the elliptic modular function in complex function theory, which used to be a rather obscure function that most people hadn't really heard of, and it has a power series expansion. The first coefficient is 744, the next is 196884, and so on. One day John McKay, who used to work in finite group theory, decided he would move on to Galois theory, where the elliptic modular function appears. He knew this number 196883 from group theory, and when he started looking at complex function theory he noticed this number 196884. So he had this rather remarkable theorem that says 196884 is 196883 plus one, and he even had a t-shirt with his theorem written on it that he wore around at conferences. McKay told several people about this, and the general reaction of most people was that this was complete nonsense. The point is, there are lots of sporadic groups. They have lots and lots of different dimensions of representations. There are lots of variants of modular functions, which have lots of coefficients.
And if you have a whole lot of numbers, then a few of them are going to be roughly the same as each other just by coincidence. There's a whole area of crackpot nonsense called numerology, where you find meaningless numerical coincidences by looking at lots and lots of numbers and noticing that some of them are about the same. So the general reaction of most people was to dismiss this as meaningless. However, I think it was John Thompson who then noticed there were some other coincidences. For instance, if you take this coefficient of the elliptic modular function, it's the sum of the first three dimensions of representations of the monster. And John Thompson noticed that if you went on, you got more and more coincidences; by this time it was clear that this wasn't a coincidence at all. Then John Conway and Simon Norton took this over and found more and more relations between the monster group and elliptic modular functions, as I'll describe in a moment. And John Conway coined the term monstrous moonshine for this. So that's what this talk is going to be about. Next I'll talk very roughly about the construction of the monster. The existence of the monster was suggested in the 1970s by Fischer and Griess. I think most people thought it was going to be hopeless to construct it, since much, much, much smaller groups required computer constructions at that time. (People have since found hand constructions of most of them.) And the trouble with the monster is that the dimension of its smallest representation is so large that it was way beyond what computers at the time could cope with. Even today, 30 years later, computers have a certain amount of trouble handling matrices of this size. So much smaller groups took very difficult computer calculations, and Robert Griess absolutely astonished everybody by managing to construct the monster by hand. So it was not only far bigger than all the groups people had constructed by computer, but he wasn't even using a computer.
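Going back to Thompson's observation for a moment, the coincidences are easy to check with a few lines of Python. The multiplicities used for the third coefficient below are the standard small decomposition, included as an illustration rather than something read off the slides.

```python
# Dimensions of the smallest irreducible representations of the monster,
# and coefficients c(n) of q^n in the elliptic modular function j - 744.
dims = [1, 196883, 21296876, 842609326]
c = {1: 196884, 2: 21493760, 3: 864299970}

# McKay's observation: 196884 = 196883 + 1.
assert c[1] == dims[0] + dims[1]

# Thompson's observation: the next coefficient is the sum of the first
# three dimensions.
assert c[2] == dims[0] + dims[1] + dims[2]

# And the pattern continues, with small multiplicities.
assert c[3] == 2 * dims[0] + 2 * dims[1] + dims[2] + dims[3]
```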
The reason he was able to do this is that this 196884 dimensional representation of the monster turns out to have a product on it. It's actually quite unusual for small representations of sporadic groups to have algebra structures, so in some sense Robert Griess was really lucky. So Robert Griess constructed this commutative product on this space — I guess he was actually working with the space of dimension 196883 rather than 196884. This product is commutative and doesn't seem to satisfy any other particularly easy identities. And Griess's construction was ferociously difficult. What he did was he split up this 196883 dimensional representation into a sum of three pieces. One piece is the symmetric square of a 24-dimensional space. Another has dimension half the number of norm 4 vectors of the Leech lattice. And the third is a sort of spinor representation tensored with the Leech lattice. The reason he was able to do this is that the centralizer of an involution in the monster has this rather special structure: it's got something called an extraspecial group here with Conway's sporadic simple group sitting on top of it, and these pieces are all closely related to representations of Conway's group. So anyway, he had to construct an algebra product on this, and this was really complicated. For any two of these spaces, he had to construct a product going from those two spaces to the third space, so there are about a dozen different products he had to write down, and then he had to make these all compatible so that the monster would act on them. Then he had to find a non-obvious automorphism that generated the monster. His construction is about a hundred pages long. It's since been simplified considerably, but even today there's no really easy construction of the monster. So, the word moonshine originally means foolish talk or unrealistic ideas, which was partly why Conway chose the term, because the original idea of monstrous moonshine seemed silly.
There's a famous quote by Ernest Rutherford that says anyone who expects to get power from the splitting of the atom is talking moonshine. Moonshine also refers to corn whiskey, especially that produced illegally. So this does have a bit of a problem: if you try searching for moonshine on Google, what you get is a lot of recipes for distilling alcohol rather than talks about the monster. So the monster is a rather large sporadic group, and in order to understand it, we can start by looking at some much smaller groups. The smallest non-abelian simple group is the group of rotations of an icosahedron. It's got 60 symmetries, and 60 is a rather big number to handle. But fortunately you can classify these symmetries into conjugacy classes. The symmetries of the icosahedron fall into five conjugacy classes. You can fix everything; or you can take this axis here and do a one third rotation around it; or you can take this green line here as an axis and do a 180 degree rotation; or you can fix this red axis here and rotate the icosahedron by one fifth or two fifths of a rotation. So altogether, although there are 60 different elements of this group, you can classify them into five different conjugacy classes. We're going to do something rather similar to that for the monster. So the icosahedron has 60 symmetries, five conjugacy classes, and lives in three dimensions. That's small enough to do everything by hand. The monster by comparison has about eight times 10 to the 53 symmetries, so you cannot possibly store all of these even on the biggest computer you can imagine. Fortunately, we don't need to, because the number of conjugacy classes is astonishingly small compared to the size of the group: there are only 194 conjugacy classes. That's a little bit too much to do by hand, but a computer has no trouble with these. Finally, it lives in 196883 dimensions, as opposed to three dimensions.
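The counting of the 60 rotations into five classes can be spelled out explicitly; the class sizes below follow from counting the axes of each type (15 edge axes, 10 face axes, 6 vertex axes).

```python
# Conjugacy classes of the rotation group of the icosahedron (= A5):
#   1  identity
#  15  half-turns about the 15 edge axes
#  20  one-third turns (both directions) about the 10 face axes
#  12  one-fifth turns about the 6 vertex axes
#  12  two-fifths turns about the 6 vertex axes
class_sizes = [1, 15, 20, 12, 12]

assert len(class_sizes) == 5    # five conjugacy classes
assert sum(class_sizes) == 60   # sixty symmetries in all
```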
Well, in 196883 dimensions you obviously can't write down a matrix of that size by hand, but it does more or less fit onto a computer. If you've got a gigabyte or so of storage, then you can quite happily store one element of the monster as a matrix over the field with two elements. Whether you're handling groups on a computer or by hand, you don't write down a multiplication table for any group with order more than about four or five, because it's too ridiculously big. What you do is write down something called its character table. So here's the character table of the group of order 60. On the top row, you list the conjugacy classes of the group; we have five conjugacy classes. Each of the other rows tells you something about an action of the group on a vector space. These are the so-called irreducible representations. The group can act on various vector spaces, and you can sometimes split these vector spaces up into smaller vector spaces. If you can't split one up further, it's called an irreducible representation. And this group of order 60 has exactly five irreducible representations. These numbers here are their dimensions. You can see it lives in three dimensions, which is the obvious action by rotations of the icosahedron, and it also has a few others. The other entries are given by the traces of various conjugacy classes on these vector spaces; obviously the trace only depends on the conjugacy class. So instead of having to write out a 60 by 60 matrix, all you have to do is write out a five by five matrix. And it turns out this gives you most of the information you need about the group A5. Anyway, the monster is actually one of the sporadic groups, so I'll just give some quick background about this. The classification of finite simple groups was finished — well, actually it's not quite clear exactly when it was finished.
It was announced that it was finished in 1983 by Gorenstein, but he'd been slightly misinformed about one of the pieces that still hadn't been finished; that was finally finished off by Aschbacher and Smith in about 2004 or so. Anyway, the classification of finite simple groups is so long that nobody actually knows how long it is. I've seen estimates of 10 or 20,000 pages, but this doesn't include some very long computer calculations that were needed to verify the existence of groups and so on. So nobody really knows exactly how long the classification is. Anyway, it says that all finite simple groups are either cyclic or alternating, which are the ones you come across as undergraduates, or Chevalley groups and various variations of them like Steinberg groups and Ree groups — typical examples of these are matrix groups over finite fields, like general linear and orthogonal groups. And finally, there are 26 sporadic groups left over, of which the smallest is M11 with about 8,000 elements and the largest is the monster, discovered and constructed by Fischer and Griess. These are a real puzzle. Nobody has really come up with a good explanation for why we have these sporadic groups. All the other groups fit into neat infinite families, and these ones just seem to be left over; not only do we not have any easy proof of why they exist, we don't have any simple explanation of why we get them. So the character table of A5 looks like this. The character table of the monster is 194 by 194, and here is a piece of the top left hand corner of it. Maybe if I try zooming in a bit, you might just about be able to see that this entry up here says 196883. This column here gives the dimensions of other spaces that the monster lives in, and these entries give traces of various elements of the monster on those spaces. This is about 3% of the monster character table on this slide.
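The five by five character table of A5 mentioned above can be written down completely, and the standard row orthogonality relations checked numerically. The entries involving the golden ratio belong to the two three-dimensional representations; this is a from-scratch illustration, not code from the talk.

```python
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2  # golden ratio: trace of a 72-degree rotation in 3 dims

# Character table of A5.  Columns are the conjugacy classes, of sizes
# 1, 15, 20, 12, 12; rows are the five irreducible representations.
sizes = [1, 15, 20, 12, 12]
table = [
    [1,  1,  1,  1,       1      ],  # trivial representation
    [3, -1,  0,  phi,     1 - phi],  # rotations of the icosahedron
    [3, -1,  0,  1 - phi, phi    ],  # the other 3-dimensional one
    [4,  0,  1, -1,      -1      ],
    [5,  1, -1,  0,       0      ],
]

# Row orthogonality: (1/|G|) sum_g chi_i(g) chi_j(g) = 1 if i == j else 0.
def inner(a, b):
    return sum(s * x * y for s, x, y in zip(sizes, a, b)) / 60

for i in range(5):
    for j in range(5):
        assert isclose(inner(table[i], table[j]),
                       1.0 if i == j else 0.0, abs_tol=1e-9)
```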
If you want the full character table of the monster, you can find it written out in the Atlas of Finite Groups by Conway and his colleagues. So that's enough about the monster simple group for the moment. Now I want to talk about elliptic modular functions for a bit. The elliptic modular function can be defined as follows. The special linear group in two variables over Z, which is SL2(Z), acts on H, the upper half plane of all numbers tau with positive imaginary part, by these fractional linear or Möbius transformations. SL2(Z) is generated by these two particularly simple transformations: you can take tau to tau plus one, or to minus one over tau. And a modular function is just a function on the upper half plane, preferably holomorphic, that is invariant under this action of SL2(Z). So a modular function just satisfies this equation here. There are no easy examples of modular functions. The simplest example is the so-called elliptic modular function — it's so called because it classifies elliptic curves in some sense. You can actually write it as an invariant of an elliptic curve, and two elliptic curves over the complex numbers are the same if and only if they have the same elliptic modular invariant. There's no particularly easy way to construct it. The simplest way to construct it is as E4 cubed over delta, where E4 is the simplest Eisenstein series, given by this series here, and delta is this sort of interesting product here, whose coefficients look like this. These coefficients are called the Ramanujan tau function, and the tau in the Ramanujan tau function has nothing to do with this tau here. E4 and delta aren't quite modular functions. They're things called modular forms, which satisfy a slightly different form of identity: instead of being invariant under SL2(Z), they transform up to a power of c tau plus d.
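The q-expansion of j = E4 cubed over delta just described can be computed with exact integer power series arithmetic in a few lines, reproducing the coefficients 744, 196884, 21493760 quoted earlier. This is a from-scratch sketch, not code from the talk.

```python
N = 5  # number of q-coefficients to keep

def mul(a, b):
    """Multiply two power series (coefficient lists) modulo q^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def sigma3(n):
    """Sum of the cubes of the divisors of n."""
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

# Eisenstein series E4 = 1 + 240 * sum_{n>=1} sigma_3(n) q^n.
E4 = [1] + [240 * sigma3(n) for n in range(1, N)]

# Delta = q * D, where D = prod_{n>=1} (1 - q^n)^24.
D = [1] + [0] * (N - 1)
for n in range(1, N):
    f = [0] * N
    f[0], f[n] = 1, -1
    for _ in range(24):
        D = mul(D, f)

# j = E4^3 / Delta = (E4^3 / D) / q.  Divide E4^3 by D term by term,
# which is possible since D has constant term 1.
num = mul(mul(E4, E4), E4)
quot = [0] * N
for k in range(N):
    quot[k] = num[k] - sum(quot[i] * D[k - i] for i in range(k))

# quot[k] is the coefficient of q^(k-1) in j(tau):
assert quot == [1, 744, 196884, 21493760, 864299970]
```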
The reason they're called modular forms is that if this exponent here is a two, then that's more or less the same as a one-form that is invariant under SL2(Z). So these are modular forms of weight four and 12, where the numbers four and 12 are this exponent here. So if you take one of weight four, cube it, and divide it by one of weight 12, you get one of weight zero, which means it's just invariant under SL2(Z). So that's roughly where the elliptic modular function comes from. If you want to know what the action of SL2(Z) looks like on the upper half plane, well, Escher drew some rather nice pictures of it. So this is a picture of — well, it's not quite SL2(Z) acting on the upper half plane. It's a slightly different group, and instead of the upper half plane, it's acting on the disc, which is conformally equivalent to the upper half plane. So the action of groups like this on the upper half plane sort of looks like this. All these angels and demons are really the same size, because the metric on this is really a hyperbolic metric, and you can't really embed this in Euclidean space very well; if you embed it in Euclidean space, then things get distorted. So there's a group which takes each angel to any other angel, and a modular function would be something like a function on this disc that is invariant when you transform any angel into any other angel. So here's a rather nice picture of the modular function, from the book by Jahnke and Emde. I think this book is about a hundred years old, and they drew these absolutely amazing pictures of various complicated transcendental functions. And you must remember they were working long before computers were invented. These are not computer graphics; these were all carefully drawn by hand by some rather brilliant draftsmen. So this is not quite the elliptic modular function I was talking about, but it is in fact a very similar function. So this is the upper half plane.
Here we have the real axis, and up here we're going off to infinity. Along the real axis, you can see the function gets very complicated. It's got all these poles — it's got poles here at minus one and one, and roughly speaking, it has a pole at every rational number, where the pole is bigger if the denominator of the rational number is smaller. Actually, these poles are all really the same size, because they can be transformed into each other by elements of the modular group; they just sort of look different sizes because we've had to put them in Euclidean space. The actual elliptic modular function looks a bit different, because it has a pole at i infinity over here, whereas this modular function vanishes at i infinity. And here's another view of it, where this time you're looking from i infinity, and again the real axis is now this line here. So, well, there are quite a lot of elliptic modular functions. The j(tau) that we've been talking about is in some sense the simplest one. First of all, we're using the group SL2(Z), which is one of the simplest groups; there are lots of other groups you could use — you could take a subgroup of SL2(Z) of finite index. And it turns out that the elliptic modular function is actually an isomorphism from the upper half plane modulo SL2(Z) to the complex numbers. What this means is that any other elliptic modular function for SL2(Z) is actually a function of j(tau). So it really is the simplest one, up to doing silly things like adding a constant or whatever. It has some other rather astonishing properties. One rather famous one is that j(tau) is an algebraic integer whenever tau is an imaginary quadratic irrational. A particularly spectacular example of this is when you take the imaginary quadratic irrational to be one plus i root 163 over two. In this case, it's not just an algebraic integer but an actual integer, given by this number here.
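That actual integer is minus 640320 cubed, and e to the pi root 163 is famously within about 10 to the minus 12 of 640320 cubed plus 744. The exact integer arithmetic, at least, is easy to verify; the specific numbers here are the standard published values, not ones read off the slides.

```python
# e^(pi * sqrt(163)) = 262537412640768743.99999999999925...,
# which is within 10^-12 of the integer 640320^3 + 744.
n = 640320 ** 3 + 744
assert n == 262537412640768744

# 640320^3 is divisible by 1000, so the last three digits of n are "744" --
# the constant coefficient of the elliptic modular function is visible
# right at the end of this huge integer.
assert 640320 ** 3 % 1000 == 0
assert n % 1000 == 744
```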
The reason why it's an exact integer is that j(tau) is an algebraic integer of degree equal to the class number of a certain imaginary quadratic field, and the imaginary quadratic field generated by this number here happens to have class number one. It's the biggest imaginary quadratic field of class number one, as was shown by Heegner, and by Stark and Baker, a few decades ago. Anyway, it's exactly this integer. And if you look at the elliptic modular function, it has this power series expansion. Well, this is some really large number, because it's about e to the pi root 163. This is an integer, and this is a really tiny number, because although 196884 is large, q is incredibly tiny, because it's e to the minus something big. So this number here, e to the pi root 163, is very, very close to an integer. If you look at it, you find there's this whole train of nines. You can stick in various other numbers other than 163, and quite often — well, not very often, but occasionally — it turns out to be very nearly an integer. Incidentally, you notice 743.999... is very close to 744, which is the 744 here. And that's not a coincidence, because this number here is divisible by lots of small primes, and in particular it's divisible by a thousand. So you can actually see the constant coefficient of the elliptic modular function in this large integer here. So the moonshine conjectures by McKay and Thompson stated that there should be a natural graded representation of the monster such that the dimensions of the pieces of the representation are given by coefficients of the elliptic modular function. Now, this would be an example of a finite group acting on a graded representation related to modular functions. So this was a really astonishing conjecture. And well, of course you could do this in a stupid way: you could just take a trivial representation of this dimension or something like that.
So to give the conjecture teeth, you've got to say more about this representation. John Thompson suggested that what you should do is look at the so-called McKay-Thompson series, where you take the trace of an element of the monster on this graded representation and see what sort of function it is. What Conway and Norton did was guess what these representations of the monster were, work out these traces from the character table of the monster, and find the astonishing fact that these functions were Hauptmoduls — or at least seemed to be — for all conjugacy classes of the monster. A Hauptmodul is an isomorphism from the upper half plane modulo some group to the complex numbers, for some subgroup of SL2(R). Now, if you take a random subgroup, usually there won't be an isomorphism from this quotient to C, because this thing will be a Riemann surface of genus greater than zero. So what Conway and Norton discovered is that all these functions are associated to quotients that have genus zero, rather than some higher genus. So the question is, can you construct such a representation? Anyway, as I said, Thompson suggested that the trace of any element of the monster on this graded representation should be a Hauptmodul for some group. Here are a couple of examples: for types 2A and 2B, the first few coefficients look like this. For example, here we get 4372 as a coefficient of the Hauptmodul for this group here, which is the normalizer of gamma zero of two, where gamma zero of two is given like that. This turns out to be almost the dimension of a representation of the baby monster. And it's also the trace of an element of type 2A of the monster on the 196884-dimensional space. So these astonishing coincidences keep on going. By the way, an earlier observation about the monster is due to Ogg, who noticed that a prime p divides the order of the monster if and only if the normalizer of gamma zero of p is a genus zero group.
Well, Thompson's conjecture about the monster was proved by Atkin, Fong and Smith, and what they did was a big calculation. You can show that something is a representation of a group just by checking a lot of congruences. So since the representations are given by coefficients of modular functions, what you have to do is check enough congruences between coefficients of modular functions, and then you can show that these things are representations. Well, strictly speaking, that shows they're virtual representations; you also have to check some positivity condition, which is not too difficult, because the coefficients of the elliptic modular function are so huge that it's not that hard to show things are positive. So that sort of verifies the original conjectures, but it leaves you a bit unsatisfied, because it doesn't really explain what's going on. It's just a sort of verification. In particular, it doesn't really give you a very satisfying construction of the representation of the monster; it just sort of says this representation exists because we've checked all these congruences. So Frenkel, Lepowsky and Meurman managed to actually construct an explicit graded representation of the monster using vertex operators, and they were able to show that this graded representation has the right degrees. And this leaves a bit of a puzzle, because we now have two graded representations of the monster. We have one whose existence was shown by Atkin, Fong and Smith by checking a lot of congruences, and we have another graded representation of the monster, constructed by Frenkel, Lepowsky and Meurman, of which we have an explicit construction. However, the problem is we don't know that these two representations are the same. This representation is nice because you can do things like construct algebra products on it, and that representation is nice because it's related to modular functions.
And you would really, really like to know that these two representations are the same. So the problem of proving the monstrous moonshine conjectures is to verify that these two representations are the same. In other words, what you want to do is calculate the trace of elements of the monster on this representation here. This turns out to be easy enough to do for some elements, if they commute with a certain element of order two, but the construction of this representation is so complicated that it seems almost impossible to calculate the traces of other elements of the monster directly. So calculating the trace of elements has to be done indirectly. First of all, the advantage of the Frenkel-Lepowsky-Meurman representation over the one constructed by Atkin, Fong and Smith is that you can find an algebraic structure on it. It's something called a vertex algebra — more precisely, it's something called the monster vertex algebra, because it's acted on by the monster. Now, verifying that it's the same as the Atkin-Fong-Smith representation takes a few steps. First of all, you use string theory in 26 dimensions, and something called the no-ghost theorem in string theory, in order to construct a Lie algebra called the monster Lie algebra that I'll talk about a bit later. This is an example of something called a generalized Kac-Moody algebra, which again I'll describe a bit later. The next step is to use something called the Weyl-Kac denominator formula in order to extract information about the traces of elements of the monster on the Frenkel-Lepowsky-Meurman module. In particular, these traces have a property called complete replicability, which was originally defined by Simon Norton. Finally, you can show that any function that is completely replicable is a Hauptmodul.
The original proof of this was a very messy and ugly calculation, but fortunately Martin, and Cummins and Gannon, managed to greatly simplify this and were able to find a more conceptual proof that completely replicable functions are Hauptmoduls. So finally, we find that the trace of any element of the monster on the Frenkel-Lepowsky-Meurman module is indeed a Hauptmodul. So now I'll talk a bit more about vertex algebras and so on. The question is, what is a vertex algebra? Unfortunately, there is no easy answer to this question. The problem is that it is basically a provable theorem that there are no particularly easy non-trivial examples of vertex algebras to study. Probably the least worst introduction to vertex algebras is the book Vertex Algebras for Beginners by Victor Kac. The title of this book is a bit of a joke — it's not really for beginners at all, but anyway. So here is a vague idea of what a vertex algebra is. It's a sort of commutative ring, except it isn't. Let's suppose you've got a commutative ring acted on by a group. Then we can form expressions like u to the x times v to the y, where u and v are in the ring, x and y are in the group, and u to the x is the action of x on the ring element u. If we fix two elements of the ring, this gives a map from the group times the group to the ring. And a vertex algebra is similar, except that the maps from the group times the group to the ring sort of have singularities as functions of x and y. In particular, they might not be defined when x and y are the identity element of the group. And this is a real problem, because in order to define the ring multiplication, you need to define this when x and y are the identity elements of the group. So it's sort of like a ring, except the ring multiplication isn't defined, in some rather weird sense.
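The trivial case of this picture — an honest commutative ring acted on by a group, with no singularities anywhere — can be made concrete in a few lines. Everything in this sketch (polynomials in t acted on by the integers via translation) is just an illustrative toy, not a real vertex algebra.

```python
from math import comb

# Ring: polynomials in t with integer coefficients, as coefficient lists.
def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# Group: the integers, with n acting by translation t -> t + n.
def act(p, n):
    """p(t) -> p(t + n), expanded by the binomial theorem."""
    r = [0] * len(p)
    for k, a in enumerate(p):
        for i in range(k + 1):
            r[i] += a * comb(k, i) * n ** (k - i)
    return r

# The map (x, y) -> u^x * v^y.  In this trivial example it is defined for
# all x and y -- in particular at x = y = 0, where it recovers the ordinary
# ring multiplication.  In a genuine vertex algebra this map would have
# singularities at special values of x and y.
def uxvy(u, v, x, y):
    return poly_mul(act(u, x), act(v, y))

u, v = [0, 1], [1, 1]                      # u = t, v = 1 + t
assert uxvy(u, v, 0, 0) == poly_mul(u, v)  # ring product recovered at (0, 0)
assert act(u, 2) == [2, 1]                 # t -> t + 2
```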
Instead of the ring multiplication, all you've got are these things — you might think of them as being rational functions on the group, which behave as if they were given by this formula here, except that instead of being a regular function of x and y, it's some sort of strange function with singularities. So there's an easy way to construct lots of vertex algebras: you can just take any commutative ring acted on by a group, and this will give you a vertex algebra. These are the trivial vertex algebras — they're just commutative rings with groups acting on them. The more general vertex algebras have strange singularities all over the place as functions on the group, and there are no finite dimensional examples of them. The point is that once you've got a singularity like a pole of order one, then you can automatically generate singularities of higher and higher order: you can get poles of order two, three, four, and indeed of any possible order. And this means you've got infinite dimensional spaces. So any finite dimensional vertex algebra is automatically just a commutative ring with a group acting on it. Next, we move on to generalized Kac-Moody algebras, which, as the name suggests, are rather like Kac-Moody algebras, only more so. As motivation for generalized Kac-Moody algebras, consider a finite dimensional reductive Lie algebra — for instance, just the Lie algebra of N by N matrices, and we're going to be working over the real numbers. It has the following four properties, all of which are very easy to check for the N by N matrices. First of all, it's got a nice invariant bilinear form, which in this case is given by the trace. Secondly, it's got an involution; you can just take the transpose, or rather minus the transpose or something. Thirdly, it's graded, and there are lots of ways of grading it; for instance, for N by N matrices, you can just grade things by their distance from the diagonal.
And rather obviously, all these graded pieces are finite dimensional, because the Lie algebra is finite dimensional. The involution acts as minus one on the piece of degree zero — that's sort of important. Finally, this bilinear form isn't positive definite, but it becomes positive definite if you twist it by this involution. So it's a sort of slightly twisted bilinear form with a positive definiteness property. For a generalized Kac-Moody algebra, all you do is very slightly weaken these conditions here. You allow the Lie algebra to be infinite dimensional, but it still has to satisfy all these conditions, except that the positivity is only required in degrees n not equal to zero: you allow the bilinear form to be indefinite on the degree zero piece. If you insist that it should be positive definite on the degree zero piece as well, then you get the finite dimensional algebras, and you also get the affine Kac-Moody algebras, which are very widely used. The basic theme of generalized Kac-Moody algebras is that they have many of the good properties of finite dimensional simple Lie algebras. For example, suppose we take SL2 with coefficients that are Laurent series. This is the simplest non-trivial example of an affine Kac-Moody algebra. Now, finite dimensional Lie algebras have a Weyl denominator formula, and the Weyl-Kac denominator formulas for infinite dimensional affine Lie algebras turn out to be well-known identities. For instance, the Weyl-Kac denominator formula for affine SL2 is this formula here, which is the Jacobi triple product identity. There's a sort of historical remark about the Macdonald identities that I want to make. There's a famous paper by Dyson called Missed Opportunities, where he describes how he was looking at identities for powers of eta functions. For the cube of the eta function there is a very nice classical identity, due to Jacobi, I think. And Dyson said that he found similar identities for various powers of the eta function: three, eight, 10, 14, 15, 21, and so on.
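Coming back for a moment to the Jacobi triple product identity mentioned above: it can be checked numerically by truncating both sides, since for |x| < 1 the tails decay extremely fast. The particular test values below are arbitrary.

```python
from math import isclose

# Jacobi triple product identity:
#   sum_{n in Z} x^(n^2) y^n
#     = prod_{m>=1} (1 - x^(2m)) (1 + x^(2m-1) y) (1 + x^(2m-1) / y)
x, y = 0.3, 1.7

lhs = sum(x ** (n * n) * y ** n for n in range(-50, 51))

rhs = 1.0
for m in range(1, 51):
    rhs *= (1 - x ** (2 * m)) * (1 + x ** (2 * m - 1) * y) \
                              * (1 + x ** (2 * m - 1) / y)

assert isclose(lhs, rhs, rel_tol=1e-9)
```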
And he was very puzzled by this and couldn't figure out what was going on. It turned out that this had already been explained by Ian Macdonald, who pointed out that these numbers were just dimensions of Lie groups. So for each dimension of a Lie group, Macdonald found an identity for the corresponding power of the eta function. These are the famous Macdonald identities. And Macdonald's identities were essentially denominator formulas — affine Kac-Moody algebras hadn't been invented at the time, but they were invented very shortly afterwards by Kac and Moody, who pointed out that the Macdonald identities were just the Weyl-Kac denominator formulas for these algebras. However, the monster Lie algebra is also a generalized Kac-Moody algebra, so it has a denominator formula, and the denominator formula is this rather striking identity for the elliptic modular function. Who discovered this identity? It's a bit difficult to sort out, because none of the people who discovered it seem to have published it. Simon Norton and Don Zagier seem to have known about it in the 1980s, and Koike sort of had a proof of it that circulated in a preprint that was never published either, again sometime in the 1980s. So one of these three may have been the first to discover it, but it's difficult to sort out exactly who did it and when, because as far as I know, none of them ever published it for some reason. Anyway, if we compare the coefficients of p to the m, q to the n on both sides, we obtain lots of relations between the coefficients. These relations are quite complicated. For instance, the simplest relation between the coefficients of the elliptic modular function is the following relation between the coefficients of q to the 4, q to the 3 and q to the 1: you see, this number is equal to that number plus the sort of alternating square of this number. So these identities are really quite complicated.
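The simplest of these relations can be verified directly from the coefficients of the elliptic modular function; "the sort of alternating square" of c(1) is (c(1)^2 - c(1))/2, the dimension of the alternating square of a space of dimension c(1).

```python
# Coefficients c(n) of q^n in j - 744.
c = {1: 196884, 2: 21493760, 3: 864299970, 4: 20245856256}

# The simplest relation coming from the product formula:
#   c(4) = c(3) + (c(1)^2 - c(1)) / 2
assert c[4] == c[3] + (c[1] ** 2 - c[1]) // 2
```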
And Norton defined a function to be completely replicable if it satisfied the identities coming from this infinite product. Actually, Norton didn't state it in terms of the infinite product but in terms of some rather complicated recursion relations. Norton and Koike didn't actually write this identity as an infinite product; they both just worked with the rather complicated relations you get by expanding out the infinite product. Anyway, you notice this infinite product looks just like the Weyl denominator formula. The Weyl denominator formula says that an infinite product over the positive roots of a Lie algebra is equal to a sum over the Weyl group. For the monster Lie algebra, the Weyl group has order two, and this side is a sum over its Weyl group, while this side is a product over the positive roots of the monster Lie algebra. Simon Norton also had some generalizations of the moonshine conjectures, called the generalized moonshine conjectures. The original moonshine conjectures gave a representation of the monster. Simon Norton pointed out that for each element of the monster there seems to be a projective representation of its centralizer in the monster on some other space here. So instead of having a modular function for each element of the monster, Simon Norton suggested there should be a modular function for each pair of commuting elements of the monster, and these should give projective representations of centralizers of elements in the monster. And they should satisfy some identities like this. And this is rather nice, because the group SL2(Z) acts on pairs of commuting elements of a group. So SL2(Z) is acting not only on modular functions but on pairs of commuting elements. And Simon suggested that these functions should also be hauptmoduls. At the time I originally gave this talk these conjectures were unproved, although some cases had been done.
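The action of SL2(Z) on pairs of commuting elements is easy to make concrete. Here is a small Python sketch (my illustration, not from the talk), using the cyclic group Z/nZ written additively so that every pair commutes: in one common convention a matrix ((a, b), (c, d)) sends the pair (g, h) to (g^a h^b, g^c h^d), and composing two such actions agrees with acting by the matrix product, so this really is a group action.

```python
# SL2(Z) acting on pairs of commuting group elements.
# For simplicity the group is Z/nZ (additive), where everything commutes:
# the pair (g, h) maps to (g^a h^b, g^c h^d), written additively below.

n = 12  # order of the cyclic group (arbitrary choice)

def act(mat, pair):
    """Apply a 2x2 integer matrix to a pair of elements of Z/nZ."""
    (a, b), (c, d) = mat
    g, h = pair
    return ((a * g + b * h) % n, (c * g + d * h) % n)

def mat_mul(m1, m2):
    """Multiply two 2x2 integer matrices."""
    (a, b), (c, d) = m1
    (e, f), (g, h) = m2
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

# Standard generators of SL2(Z).
S = ((0, -1), (1, 0))
T = ((1, 1), (0, 1))

pair = (5, 7)

# Acting by S and then by T is the same as acting by the product T*S.
assert act(T, act(S, pair)) == act(mat_mul(T, S), pair)

# S has order 4 in SL2(Z), so four applications return the original pair.
p = pair
for _ in range(4):
    p = act(S, p)
assert p == pair
```

Norton's conjecture pairs this geometric action with the modular transformation of the corresponding functions, which is what makes the formulation so natural.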
The first breakthrough was made by Gerald Höhn, who managed to prove the conjectures for type 2A in the monster by a very ingenious way of constructing a representation of the baby monster. Dong and Mason were able to prove them whenever G and H generated a cyclic group, and some of the conjectures were proved for all G and H. Finally, the full conjectures were proved by Carnahan a few years ago, by extending the construction of the monster Lie algebra to construct a large number of other, somewhat more complicated Lie algebras. Here's an example of the generalized moonshine conjectures. If you take the trace of an element of type 2A on the monster vertex algebra, its coefficients look like this. On the other hand, the centralizer of an element of type 2A in the monster is, up to a double cover, the baby monster, and it has this order. So it's pretty huge, not nearly as big as the monster, but still rather formidable. And it has representations of these dimensions here. And if you look at these dimensions, you see they're very similar to the coefficients of this modular function. And it turns out the double cover of the baby monster actually acts on a graded vector space with these dimensions. So the trace of an element of type 2A on the monster gives you a function which is the same as the function given by the dimensions of this representation of the baby monster. So there's this funny connection between representations of the monster and representations of the baby monster. Incidentally, you might guess that this representation of the baby monster also has the algebraic structure of a vertex algebra. However, it doesn't. In particular, there's no nice product from V2 times V2 to itself, as you would need if this were a vertex algebra. Well, I said that you don't get a vertex algebra corresponding to the baby monster; in fact, Alex Ryba observed that you do.
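To make the numerical coincidence concrete, here is a quick Python check (my sketch, not from the slides; the coefficients of the 2A McKay–Thompson series and the baby monster representation dimensions are the standard values quoted in the literature, and the particular decompositions shown are my plausible readings of them):

```python
# Coefficients of the McKay-Thompson series for a 2A element:
#   T_2A(q) = q^-1 + 4372 q + 96256 q^2 + 1240002 q^3 + ...
t2a = [4372, 96256, 1240002]

# Dimensions of the smallest irreducible representations
# of the baby monster.
dims = [1, 4371, 96255, 1139374]

# Each coefficient is a small sum of these dimensions, just as the
# j-coefficients are sums of dimensions of monster representations.
assert t2a[0] == 1 + 4371
assert t2a[1] == 1 + 96255
assert t2a[2] == 1 + 1 + 4371 + 96255 + 1139374
```

This is precisely the same pattern as McKay's original 196884 = 196883 + 1, one level down from the monster to the baby monster.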
Alex Ryba noticed that you don't get a vertex algebra in characteristic zero, but if you reduce mod p, then you sometimes do. So if we take an element of prime order of the monster, then we can take the cohomology of that element acting on the monster vertex algebra, and it turns out that this cohomology actually forms a graded vertex superalgebra. So for some elements of the monster these give you vertex algebras over finite fields, and for other elements they give you vertex superalgebras over finite fields. So we have this strange phenomenon: the characteristic zero representation of the monster gives you a vertex algebra, and if you take cohomology, this gives you vertex algebras over finite fields that cannot be lifted to vertex algebras in characteristic zero. I'll finish by describing two open problems. It says three open problems here, but that's because these slides are out of date, and one of those open problems has actually been more or less solved. So here's a question of Hirzebruch: is there a 24 dimensional monster manifold with Witten genus given by the elliptic modular function? If so, this might give another explanation of the monster vertex algebra. There's been some progress on this problem by Hopkins and Mahowald, who managed to show there was indeed a manifold with this as its Witten genus, but last time I checked, nobody knew how to construct an action of the monster on such a manifold. Lian and Yau pointed out that there are mirror maps for K3 surfaces that seem to be related to monstrous moonshine hauptmoduls. One example is a mirror map given by the inverse of the elliptic modular function. So whether that is related to the monster seems to be open. The third problem, which has actually been more or less solved, is a weird observation by John McKay.
He pointed out the monster has nine conjugacy classes of elements of the form gh, where g and h are elements of type 2A, and their orders are 1, 2, 3, 4, 5, 6, 4, 2, 3. These are exactly the numbers you get as weights on the affine E8 Dynkin diagram, and they're also the dimensions of the irreducible representations of the binary icosahedral group, the double cover of the simple group of order 60. Well, it's not quite clear if this is a coincidence or not, but then you notice the baby monster and the Fischer group, which is more or less the centralizer of an element of order 3 in the monster, have similar properties, except these are related to the E7 and E6 Dynkin diagrams, or possibly the F4 and G2 Dynkin diagrams. It's a bit hard to tell, because F4 is a sort of folded version of E7 and G2 is a sort of folded version of E6. In this case, the baby monster is a {3,4}-transposition group, which means that the product of two transpositions can have order 1, 2, 3, or 4, and the Fischer group is a 3-transposition group, which means the product of two transpositions has order 1, 2, or 3. And these correspond to representations of the binary octahedral and binary tetrahedral groups. These observations were more or less explained in terms of subalgebras of the vertex algebras of these groups by Höhn, Lam, Yamada, and Yamauchi. Finally, after this original talk was given, there have been some very interesting new ideas about umbral moonshine, relating representations of M24 and Niemeier lattices to various mock theta functions. And I think that's a good topic for another talk.