Right, thank you Fernando. I'm delighted to be here. I've never been to Trieste before and it's a lovely city. I'm a little bit jet-lagged but I will make do. So I recognize that the title I offered for this lecture was something like Modular Forms and Representation Theory. That's exactly what this is; I just forgot to change the title. Let me begin by saying that this is an exposition. Unlike a course, where you might go through a series of lectures building up to some theorems, this is going to be a colloquium-style lecture where I want to tell you about some recent theorems, with the idea that if you're interested in the subject you'll know what some of the recent advances are, and I'll point out natural references if you want to pursue them. The two topics I've chosen are very much part of active research, but what I like about them is that the problems they're related to are rooted in famous letters of Ramanujan. In fact, Ramanujan's very first letter to Hardy, the one that showed Hardy Ramanujan's genius, and the very last letter that Ramanujan wrote to Hardy from his deathbed after he returned to India, are both very important in the theory of modular forms and the role of modular forms in representation theory. Roughly speaking, the first letter refers to the Rogers-Ramanujan identities, and the last one refers to something you may have heard about in some of your physics lectures, the theory of Umbral Moonshine. Okay, so I'm going to take a mathematical perspective and I'm going to begin with some photographs, some images. The first part of this talk, which is related to the first letter of Ramanujan, concerns these two famous identities, which, if you haven't seen them before, I'm glad I'm the one to tell you about them.
They're very innocent looking, but those identities are ubiquitous, and in fact, had these identities never been brought into the light, it's quite likely that the theory of Moonshine as we know it today would not exist. So on the left you have the famous passport picture of Ramanujan. On the right you have a photograph of L. J. Rogers, who was a fellow of the Royal Society, and I'll tell you a little bit about their work and what it's led to in recent times. And the last letter of Ramanujan is related to this cartoon; maybe you know it from Disney, Monsters, Inc. It's related to Monstrous Moonshine and the revisitation of the subject in the field which we now call Umbral Moonshine, which is related to the pursuit of a three-dimensional quantum gravity theory. So I can't resist talking about the explicit examples that build this theory. To understand what Hardy saw in Ramanujan's first letter, let's begin with a super simple, famous fact. The golden ratio is this number phi, one plus the square root of five over two, which plays a prominent role in mathematics and culture. We're told that many beautiful works of art, like the Parthenon or the Mona Lisa or cross sections of the chambered nautilus, somehow make use of or envisage the beauty of this number. What I like about one plus the square root of five over two is not that it contains a square root of five, at least for now. What I like is that it can be represented as a continued fraction in this way. That is just a restatement of the fact that the golden ratio is the limit of ratios of consecutive terms of the Fibonacci sequence. And to prove that this number equals one plus the square root of five over two is a very simple calculation, because the number lives inside itself in this denominator. So phi equals one plus one over phi. You cross multiply and you find that phi is a root of this quadratic equation, and by the quadratic formula you get this root. It's an algebraic integer, and it's a unit.
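The self-similarity argument just described is easy to check numerically. Here is a small Python sketch (illustrative, not part of the lecture) that evaluates the truncated continued fraction and the ratio of consecutive Fibonacci numbers; both converge to (1 + sqrt(5))/2.

```python
def phi_continued_fraction(depth):
    """Evaluate 1 + 1/(1 + 1/(1 + ...)) truncated at the given depth."""
    value = 1.0
    for _ in range(depth):
        value = 1.0 + 1.0 / value
    return value

def fibonacci_ratio(n):
    """Ratio of consecutive Fibonacci numbers after n steps."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a

phi = (1 + 5 ** 0.5) / 2  # the algebraic value (1 + sqrt(5))/2

print(phi_continued_fraction(50))
print(fibonacci_ratio(50))
print(phi)
```

All three printed values agree to machine precision, reflecting the fact that phi = 1 + 1/phi.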
So a natural question that people were thinking about in the 19th century, and certainly in the early 20th century, is this very innocent problem. Given that the golden ratio is given by an infinite continued fraction like this, the question is: if we define a power series called r in the variable q, where the powers of q are increasing here in the numerators, is there a theory for its special values? Indeed, if you let q be one, you get minus one plus the square root of five, all over two, from what I described above. But can you substitute any other values of q and actually evaluate this function? It's a very easy problem to state, but it's remarkably difficult to solve. And apart from trivial examples, this problem was completely unsolved in the early 20th century. Believe it or not, this problem is still largely unsolved today, and even what Ramanujan wrote down in his first letter was unsolved until about 10 years ago. So this innocent problem turns out to be the source of a lot of difficult but rich mathematics. So how did Ramanujan end his first letter to Hardy? He ended it with these, the very last three lines in the letter. I give Hardy a lot of credit for wading his way all the way through the letter so that he actually made it to the last three lines. He could have dismissed Ramanujan much earlier on, but he made it to the last three lines. And what did the last three lines say? Well, the left-hand side is just Ramanujan's notation for this function. So up here, this would be like letting q be e to the minus two pi: that would be q, that would be q squared, that would be q cubed, and so on. So what Ramanujan was claiming is that he knew how to evaluate this function, not at an ordinary simple number like, say, one half, which might be a more natural number to choose after you start with one.
He said, forget one half, let's plug in e to the minus two pi, as if that's a natural choice. And what did he discover? He discovered that this function evaluated at q equals e to the minus two pi is equal to e to the two pi over five times this number, which is easily describable in terms of the golden ratio. This number turns out to be an algebraic integer which is also a unit: its multiplicative inverse is also an algebraic integer. He also said, let's evaluate at minus e to the minus pi. And he said that the function turns out to be the fifth root of e to the pi times this number, also easily described in terms of the golden ratio, and hence also an algebraic integer, and in fact also a unit. And Ramanujan ends his letter with: by the way, I know how to evaluate all of these expressions for any positive rational number n. Hardy had never seen anything like that before. If you substitute in q equals one, you get the elementary argument that I offered you. For some reason Ramanujan said it is natural to plug in e to the pi times square roots of natural numbers. Alright, so Ramanujan in subsequent letters offered some further values. And this statement became a very well-known conjecture for almost all of the 20th century, at least for people who worked on Ramanujan's mathematics. It's the statement that these numbers, normalized by a suitable fifth root of the exponential, are always algebraic integer units. And that's startling. How many functions do you know that, when evaluated at an infinite sequence of values, give not only algebraic integers but simultaneously units? Well, you know about the exponential function. The exponential function does that, and that's not very difficult: e to the i theta, where theta is a rational multiple of pi, gives you roots of unity. These numbers aren't roots of unity. They are units, and this function r is far more complicated than the exponential function.
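Ramanujan's first evaluation can at least be checked numerically. The sketch below is illustrative and not from the lecture; it assumes the standard form of the claim from the letter, namely that 1/(1 + q/(1 + q^2/(1 + ...))) at q = e^{-2 pi} equals e^{2 pi / 5} times (sqrt((5 + sqrt(5))/2) minus the golden ratio).

```python
import math

def rr_continued_fraction(q, depth=40):
    """Evaluate 1/(1 + q/(1 + q^2/(1 + ...))) from the bottom up."""
    value = 1.0
    for n in range(depth, 0, -1):
        value = 1.0 + q ** n / value
    return 1.0 / value

q = math.exp(-2 * math.pi)
numerical = rr_continued_fraction(q)

# Ramanujan's claimed closed form (standard statement of the letter's value):
golden = (1 + math.sqrt(5)) / 2
claimed = math.exp(2 * math.pi / 5) * (math.sqrt((5 + math.sqrt(5)) / 2) - golden)

print(numerical, claimed)
```

The two printed numbers agree to double precision, which is of course not a proof, but it shows how startlingly specific the claim is.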
So Hardy knew that Ramanujan had found something. In fact it's those three lines that literally blew Hardy away. And if you saw the short clip from the film a moment ago, this is what Hardy is referring to when he says these formulas defeated me completely; they could only be written down by a mathematician of the highest class; they must be true, because no one would have the imagination to invent them. He had never seen anything like this before. He would have been impressed with just one or two of those evaluations. But then to end the letter with, by the way, I have a method that allows you to compute these for all n, was just inconceivable. Great. So what did Rogers and Ramanujan actually do? It turns out that Rogers wrote down some power series that are related to that continued fraction long before anyone even knew who Ramanujan was; certainly Ramanujan didn't know who Rogers was. It turns out that the continued fraction r is the ratio of two functions, which we will call h and g, which are themselves very special. If you define g to be this strange q-series, and you define h to be this q-series, where each of these terms is shifted by q to the n when you sum over all n, then both Rogers and Ramanujan knew that the ratio h divided by g gives you the continued fraction I started with. But they gave you more. They gave you the fact that both of these series could also be expressed as infinite products over all the non-negative integers n. So there's a lot going on here that's shocking. Why does this sum equal this infinite product? Why does this slightly different sum equal this only slightly different infinite product? Why is this ratio the continued fraction? And on top of that, why would evaluating these functions at those strange numbers give you algebraic integer units? That was very surprising. It all looks so innocent.
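The two Rogers-Ramanujan identities can be verified as truncated power series with a few lines of exact integer arithmetic. This is an illustrative sketch, not from the lecture: g has terms q^{n^2} and h has the shifted terms q^{n^2+n}, each divided by (1-q)(1-q^2)...(1-q^n), and the products run over exponents congruent to 1, 4 and 2, 3 mod 5 respectively.

```python
N = 60  # truncation order: compare coefficients up to q^{N-1}

def divide_by_one_minus_qk(coeffs, k):
    """In-place division of a power series by (1 - q^k)."""
    for i in range(k, N):
        coeffs[i] += coeffs[i - k]

def rr_sum(shift):
    """Sum over n of q^{n^2 + shift*n} / ((1-q)...(1-q^n)).
    shift=0 gives g, shift=1 gives h from the lecture."""
    total = [0] * N
    n = 0
    while n * n + shift * n < N:
        term = [0] * N
        term[n * n + shift * n] = 1
        for k in range(1, n + 1):
            divide_by_one_minus_qk(term, k)
        total = [a + b for a, b in zip(total, term)]
        n += 1
    return total

def rr_product(residues):
    """1 / product over k>=1 of (1 - q^k) for k mod 5 in residues."""
    coeffs = [0] * N
    coeffs[0] = 1
    for k in range(1, N):
        if k % 5 in residues:
            divide_by_one_minus_qk(coeffs, k)
    return coeffs

assert rr_sum(0) == rr_product({1, 4})   # first Rogers-Ramanujan identity
assert rr_sum(1) == rr_product({2, 3})   # second Rogers-Ramanujan identity
print("both identities hold to order q^%d" % (N - 1))
```

Checking finitely many coefficients proves nothing, of course, but it makes the shock of the identities concrete: two very different-looking generating functions agree term by term.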
Well, these products are essentially what we call modular functions. So instead of just letting q be a formal variable, we let it be e to the 2 pi i tau, where tau is a complex number in the upper half plane. Then a modular function is a function that satisfies the following transformation law. We let capital gamma be a subgroup of SL2(Z). A function f from the upper half plane to C is called a gamma modular function if for every matrix a, b, c, d in gamma, the function transforms exactly back to itself under the Möbius transformation naturally associated to the matrix a, b, c, d. Those infinite products are examples of these kinds of functions. And those two strange identities, those two infinite sums equaling those two infinite products, appear throughout mathematics. I actually counted recently in MathSciNet, which is an online resource where you can read reviews of all math research papers, and I think there are over 7000 papers that make use of those two funny identities. They are that famous. What role do those two identities play in modern mathematics? Well, it turns out that you can prove those identities, as Rogers and Ramanujan did, by just formal series manipulation, because they were very clever. But in the early 1980s some representation theorists, most notably Jim Lepowsky and Igor Frenkel at Yale, looked for very different proofs of these identities. They saw some structure from the representation theory of infinite dimensional affine Lie algebras encoded in those two strange identities, and they set out to prove the identities of Rogers and Ramanujan in this way. And if you've ever heard people talk about vertex operators or vertex operator algebras, that entire subject was born because Jim Lepowsky was trying to find a motivated proof of two power series identities. What did they end up doing?
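The transformation law just described uses the Möbius action tau goes to (a tau + b)/(c tau + d) for a matrix in SL2(Z). As a small illustrative check (not from the lecture), the identity Im((a tau + b)/(c tau + d)) = Im(tau)/|c tau + d|^2 shows determinant-one matrices preserve the upper half plane:

```python
def mobius(matrix, tau):
    """Apply the Möbius transformation of a 2x2 matrix to tau."""
    (a, b), (c, d) = matrix
    return (a * tau + b) / (c * tau + d)

tau = 0.3 + 1.7j                       # a point in the upper half plane
S = ((0, -1), (1, 0))                  # tau -> -1/tau
T = ((1, 1), (0, 1))                   # tau -> tau + 1

for M in (S, T, ((2, 1), (5, 3))):     # all three have determinant 1
    image = mobius(M, tau)
    assert image.imag > 0              # the image stays in the upper half plane

print(mobius(S, tau), mobius(T, tau))
```

A gamma modular function is then one with f(mobius(M, tau)) = f(tau) for every M in the subgroup gamma.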
They gave birth to a whole field, which ultimately culminated, in its most technical form, in the proof of the Moonshine conjectures by Borcherds, which won Borcherds the Fields Medal, and I'll describe that in a moment. So those two identities have quite a history. The last three lines of Ramanujan's letter are quite famous. Now, what did I say about the folklore conjecture? Namely, the very last line, where Ramanujan claimed he could evaluate all of those values, at e to the pi root n, and get a special algebraic integer unit. Ramanujan wrote his letter in 1913. That claim was not settled until 1996 by Bruce Berndt and two of his students, Heng Huat Chan and Zhang. Their proof was quite analytic, and two very well known arithmetic geometers, Bryden Cais and Brian Conrad, revisited their proof ten years later, and now we have this theorem, which finally confirms once and for all what Hardy was so surprised about in Ramanujan's letter. So let me just read this to you. If tau is a CM point, which just means that tau is a solution to a quadratic equation a x squared plus b x plus c, where a, b and c are integers and tau is in the upper half plane, then e to the 2 pi i tau over 5 times the continued fraction evaluated at e to the 2 pi i tau is always an algebraic integer unit. This function, by law, because this theorem mandates it, has the ability to produce not only algebraic integers but units. Right. So in view of that work, that's only one function after all, this r. If Rogers and Ramanujan found one function with this special property, you have to ask: did they just find a glimpse of a theory, or is the Rogers-Ramanujan function somehow a magical function, sort of like the exponential function is magical? So the first fundamental problem, which has been around for a hundred years, is basically this. Is there a larger conceptual framework of identities like that?
Now, the way I'm phrasing it here, it'll seem like there's no prayer of having one, but that's one of the points of this lecture. Is there a framework where you can find these strange summatory series which automatically give you infinite product modular functions? That's problem one. And problem two: if there is such a theory, are there certain natural pairs of these identities, just like the h and the g that Rogers and Ramanujan found? Are there certain pairs whose ratios naturally generalize this continued fraction and then have the property that all of their CM values are also algebraic integer units? Right. So this is the kind of thing you do in mathematics. Once you see a beautiful theorem, you ask: is it a glimpse of a theory, or is it the end of a theory? And for a long time it was believed that the Rogers-Ramanujan function couldn't be just one example; it had to be a glimpse of a theory. All right, so it is a glimpse of a theory. In fact, we have, I think it's fair to say, a fairly comprehensive theory. So let me explain to you what's true. Whenever I put quotes around a theorem, I'm just giving you the vague statement; I'm going to make some of these theorems a little bit more precise later. But the theorem is something like this. Let a, b, and c be integers chosen from the set. Okay. And let m and n be any pair of positive integers. So a, b, and c are some fixed triple, like 1, 2, 0, for example, and then you choose m and n freely among the positive integers. What I'm going to explain is how to construct the left-hand side, which is a power series, a sum over lambdas. I'll define this in greater detail in the next few slides. These lambdas will be partitions of integers, and lambda 1 will be the largest summand of each of these partitions. So what I'm going to be doing is summing over all the partitions of integers whose largest part does not exceed a fixed number m.
So for example, when m is 1, summing over all the partitions whose largest part doesn't get bigger than 1 is just a cumbersome way of saying we're summing over the non-negative integers. What we're going to do is sum over all the partitions of all integers where the largest part is constrained by some number, against a shift of what are called the Hall-Littlewood q-series. So in the theory of special functions and symmetric functions, you probably know about the elementary symmetric functions, or the sums of powers of n numbers. The Hall-Littlewood polynomials are related to these; I'll define them in a moment. And the theorem is that for all of these choices of parameters, this is automatically going to be an infinite product modular function. The Rogers-Ramanujan identities, those two that I started with, are just the instances where a, b, and c are these two triples and m and n are both chosen to be one. So notice what we have: four infinite families, four triples of a, b, and c, where for each one of those triples you have arbitrary choices of integers in two directions, and it automatically gives you a modular function. If you start counting, you'll quickly see that for many levels this is a natural way of parameterizing modular functions, that this exhausts them all; we know how to count the number of such modular functions. So let me give you an example. I will define these P's in a moment; it's actually quite simple. If you've never seen the Hall-Littlewood polynomials before, the examples I work out will give you a flavor for how they behave. It's very easy, really. But before I do that, let me just show you what we pick. Suppose we again pick a, b, and c to be these two triples, and instead of letting m and n both be one, now let's let them both be two.
The left-hand side becomes this: we're summing over all partitions of integers into only ones and twos; no threes allowed. And what do we end up getting? Instead of the infinite products in the Rogers-Ramanujan identities, you get these two infinite products. And I think you see my point now. For every choice of m and n, I get different infinite products on the right, and all I'm really doing is modifying how I change variables inside these functions; the parameter m is widening or narrowing the set of partitions I'm summing over, and that's exactly what controls the variation of the modular functions one gets. These two functions were Freeman Dyson's favorites; he wrote about them in his famous 1987 paper, A Walk Through Ramanujan's Garden, and so I like to give this as an example. All right, so let me explain what these functions are. You can build them with your own hands; that's what I like. Unfortunately, the symmetric group starts to get very, very large, so it's hard to work out examples without the help of a computer if you want to see these identities actually come alive on a piece of paper. So what are partitions? These are just non-increasing sequences of positive integers, so lambda 1 is the largest, and we require that they have at most finitely many non-zero terms, so that when you add them up you get a number. The absolute value of lambda will be the size of the partition, and l of lambda will be the number of parts; more precisely, the number of non-zero parts, because I'm going to allow parts of size 0. For each positive integer i, I'm going to let m sub i be the multiplicity, the number of i's in a fixed partition. And just for bookkeeping purposes, I'm going to define m0 to be n minus l of lambda, which keeps track of the number of zeros I concatenate on the end, just so that I don't have to do some messy bookkeeping later. So m0 will be the number of zeros.
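The bookkeeping just described is easy to make concrete. Here is an illustrative Python sketch (not from the lecture) that lists partitions with a bounded largest part, the constraint that appears in the sums, and computes the multiplicities m_i together with the padded-zero count m_0 = n - l(lambda):

```python
def partitions_max_part(total, max_part):
    """Generate all partitions of `total` (as non-increasing lists)
    whose largest part is at most `max_part`."""
    if total == 0:
        yield []
        return
    for first in range(min(total, max_part), 0, -1):
        for rest in partitions_max_part(total - first, first):
            yield [first] + rest

def multiplicities(lam, n):
    """Return {i: m_i}, where m_i counts the parts equal to i
    and m_0 = n - l(lambda) counts the concatenated zeros."""
    m = {0: n - len(lam)}
    for part in lam:
        m[part] = m.get(part, 0) + 1
    return m

# Partitions of 4 with largest part at most 2: 2+2, 2+1+1, 1+1+1+1
print(list(partitions_max_part(4, 2)))  # -> [[2, 2], [2, 1, 1], [1, 1, 1, 1]]
print(multiplicities([2, 2], 5))        # -> {0: 3, 2: 2}
```

With max_part equal to 1, the generator returns exactly one partition per integer, which is the "cumbersome way of summing over the non-negative integers" mentioned above.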
n is not the number of parts; l of lambda is the number of parts, and that's why m0 is convenient to define in this way. So m2 would be the number of twos, and non-increasing means, for example, 5, 4, 3. Yes, yes. So let me just remind you of what Hall and Littlewood did with partitions. Let lambda be any partition where l of lambda does not exceed n. And just for notation, I'm going to let x to the lambda be the monomial x1 to the lambda 1, times x2 to the lambda 2, through xn to the lambda n. So it's easy to see how partitions correspond to monomials. What's the Hall-Littlewood polynomial? It's a polynomial in the variables x1 through xn, where the coefficients are themselves polynomials in q. How is it defined? I take the product, i equals 0 to n, of 1 minus q to the mi divided by this rising factorial, times a trace. Why is it a symmetric function? Because we take the trace of the action of the symmetric group: omega ranges over the symmetric group, and omega, any permutation in Sn, acts naturally on x to the lambda in the usual way. If x1 and x2 are switched, omega of x to the lambda would be x1 to the lambda 2, x2 to the lambda 1, with the others remaining fixed, in the usual way. And if you've never worked these out before, let me work a couple out for you by hand so you see what's going on. If you don't remember the definitions, it doesn't matter, because I'm going to work out some examples and you'll begin to infer the properties. So let's consider the partition which is just the number 2. It turns out that the definition turns into this expression. Remember, we're going to be concatenating lots of parts of size 0.
So to say: we have x1 through xn, lambda 1 will be 2, and all the other lambdas will be 0. That's reflected right here: you have an x1 squared, and this is just all the other x's in the usual way. So I can compute this expression for every n starting at 1. Let's do that and see what I've actually written down. It looks messy, but it's actually really quite beautiful. If you let n be 1, this product is empty, so that's 1; empty products are always taken to be 1. When n is 1, the symmetric group is just S1, so that's just x1 squared. If we let n be 2, still with the single partition 2, this is a little bit more complicated and it becomes this; remember, it's a symmetric function. And when n is 3, if you work it out by hand, you'll see that it is this. And I think you see the formula. It's not a difficult exercise to take this definition and show what this polynomial is for every n. So what are we doing by increasing n? We're adding more and more variables. I'm trying to build a power series, and one way of doing that is by increasing the number of variables. And by increasing the number of variables, what you see, and it's not a difficult lemma or exercise to prove, is that the partition 2 corresponds to the symmetric function which is the sum of the squares of all the variables, plus 1 minus q times the first cross product symmetric function. But what is it that I've really written down? This is the beautiful thing. It's not very hard, but here's the beautiful part: let's now let x1 be 1, let x2 be q, let x3 be q squared, and so on; let x100 be q to the 99. Just replace each xi by q to the i minus 1. If we do that, letting n be 1 gives you 1. Letting x1 be 1 and x2 be q gives you 1 plus q, and I think you see the pattern.
For every n, it's not a difficult exercise to show that this is the sum of the powers of q up to q to the n minus 1. It's probably the nastiest way you would ever want to write down the sums of powers of q, but it's true. All right. So let's now make the identification where we let each of the xi's go to powers of q as I've described. Let's then make the identification that the symmetric function which is the sum of rth powers gets replaced by the geometric series, 1 over 1 minus q to the r. If we make that identification, what I've just walked you through is that this formula, which becomes this after the identification, simplifies to just 1 divided by 1 minus q. So taking the limit as n goes to infinity, this is the hard way of writing 1 divided by 1 minus q. Great. Why am I motivated to do it the hard way? Because everything I just said, if I consider partitions like 2 plus 2, is not any more difficult than what we just did, but the resulting series become more complicated. So for example, let's now take the partition 2 plus 2. If you work out the first few n's, you'll begin to see a pattern. It's not as simple as before; it isn't just the sum of those two symmetric functions. But it's a symmetric function of degree 4, that's easy to see, and if you work it out, you can prove that this is what you get. What do you think I just wrote down? Well, it turns out that if I argue exactly as before, replacing x1 by 1, x2 by q, and so on and so forth, and then express each of these rth-power symmetric functions as 1 divided by 1 minus q to the r, so this right here I replace, as n goes to infinity, by 1 divided by 1 minus q cubed, and that by 1 divided by 1 minus q to the fourth, if I do that for all of them and simplify, what I get is this. You know what that is? That's the second term of the Rogers-Ramanujan series. And you know what?
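The principal specialization worked out above for the partition (2) can be checked by machine. This is an illustrative sketch, not from the lecture, and it assumes the closed form stated a moment ago: P for the partition (2) is the sum of the squares of the variables plus (1 - q) times the sum of the cross products, and substituting x_i = q^{i-1} should collapse it to 1 + q + ... + q^{n-1}. Polynomials in q are stored as coefficient lists.

```python
def add(a, b):
    """Add two polynomials in q given as coefficient lists."""
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out

def mul(a, b):
    """Multiply two polynomials in q given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def monomial(k):
    """The polynomial q^k as a coefficient list."""
    return [0] * k + [1]

def hl_two_specialized(n):
    """sum_i x_i^2 + (1-q) sum_{i<j} x_i x_j with x_i = q^{i-1}."""
    result = [0]
    for i in range(n):                    # sum_i x_i^2 = sum_i q^{2i}
        result = add(result, monomial(2 * i))
    one_minus_q = [1, -1]
    for i in range(n):                    # (1-q) * sum_{i<j} q^{i+j}
        for j in range(i + 1, n):
            result = add(result, mul(one_minus_q, monomial(i + j)))
    return result

for n in range(1, 8):
    poly = hl_two_specialized(n)
    while poly and poly[-1] == 0:         # strip trailing zero coefficients
        poly.pop()
    assert poly == [1] * n                # equals 1 + q + ... + q^{n-1}
print("specialization verified for n = 1..7")
```

For n = 2, for instance, the cancellation is visible by hand: 1 + q^2 + (1 - q) q = 1 + q.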
Just by taking all the partitions that have nothing but 1s in them and doubling every part: I did 2, and I did 2 plus 2 for you. If I then did 2 plus 2 plus 2, or the sum of n 2s in order, I would be getting these series. And similarly, I get the companion Rogers-Ramanujan identity by forgetting to multiply by 2. And that's how it works out. The symmetric group acting on these partitions is exactly how these two identities arise. So now I think you get the point. What is our theorem? Our theorem says: for every partition of every integer, I'm going to build for you a q-series by using the theory of symmetric functions. How do we do that? For each n, we express the polynomial, which is a symmetric function in x1 through xn whose coefficients are polynomials in q, by replacing the rth power symmetric functions by the geometric series 1 divided by 1 minus q to the r. And when I do that, for those triples a, b, and c that I showed you, for every choice of m and n, I automatically get an infinite product modular function. That's the framework we're describing. If you went back and studied some of the classical proofs of Rogers-Ramanujan, you wouldn't see this. Great. So let me give you a precise theorem. Here's just some shorthand notation: a, q, k will be this rising factorial, 1 minus a, and so on. Yes, it's just a counting argument; you get lots of new identities, absolutely. I'm going to tell you what groups these are modular functions on. These functions are all non-vanishing on the upper half plane, which means their divisors are supported at the cusps. And you can count the number of these forms; you can't produce more of the forms than exist. So for shorthand, I'm going to let theta of a and q, and some more complicated thetas of a and q, be these sorts of infinite products, where k goes to infinity.
And these are the kinds of modular functions that were studied by Kubert and Lang in their big book on modular units. Here's one sample theorem. Suppose m and n are arbitrary positive integers. For arbitrary positive integers, when we sum over all partitions whose largest part doesn't get bigger than m, those functions give this infinite product. Kappa is 2m plus 2n plus 1; that's actually the level of this modular function. If I go over exactly the same partitions, but instead of raising q to the size of the partition we double that size here, you get this other modular function. So it moves very nicely as you vary the parameters. Okay. We have a number of families, four infinite families, and let me just normalize them for you. Call these new power series phi. So there are four triples a, b and c, m and n are arbitrary positive integers, and there's an auxiliary power of q that I'm just going to tack on. I'm not going to give you the definition of it, but it's an elementary rational number; in the Rogers-Ramanujan identity it is one-fifth. These are the series we defined a moment ago. And let me tell you what we can prove. I forgot to say early on, and I feel very embarrassed about this: this is joint work with my student Michael Griffin, who is now a postdoc at Princeton, and my friend Ole Warnaar in Australia, whom you may know through his work on special functions and representation theory. So if tau is a CM point, then the following are true. Evaluating any of these series at tau gives you a unit over the ring Z adjoin one over kappa, where kappa is an elementary function of m and n and a, b and c. And these two triples have the property that not only is each of these numbers individually a unit over this ring, but the obstruction to being a unit over Z exactly cancels out in both of them, so that when you take a ratio, those values are algebraic integers and units. What is the second statement?
That is, when m and n are one, it is the theorem of Cais and Conrad, and this gives a different proof of the last line of Ramanujan's last letter. But that's just one example, one choice of the triple and of tau. Here we have two triples and any pair of positive integers, and that's how this works. So let me give you an example. When m and n equal two, here's a CM point, i over three. You could do the wrong thing, so let's do the wrong thing: let's plug i over three in for tau using the first hundred coefficients, and you get these decimal expansions. This one looks a lot like one over the square root of three. It turns out that if you compute these in a different way, these two numbers are not algebraic integers, but they're close. This one is indeed one over root three, so it's a root of three x squared minus one. And this number is a root of this crazy polynomial, not an algebraic integer, as you can see from the leading terms. But when you divide one into the other, the theorem I just showed you says they have to be units when you normalize by the scaling factor square root of three. And in particular, the ratio has to be an algebraic integer unit. And it is, because it's a root of this polynomial, which you can compute in a completely different way using the properties of those functions. The combinatorics of the functions allows you to find these polynomials and prove this, rather than having to resort to decimal expansions. Great. Let me just give you a flavor of how one goes about proving this. We were very excited to prove this. It turns out that the proof is a combination of some representation theory that's more or less state of the art with a revisiting of very old, beautiful ideas of great people. The theorem I'm showing you now probably doesn't look great; it probably looks nasty.
These terms are those Pochhammer symbols I showed before, those rising factorials. So this is a ratio of rising factorial polynomials with lots of crazy parameters; try to write it down even when n equals three and you'll make a mistake. Then you sum, up to some term n, a ratio of those Pochhammer symbols, except this one has multiple factors defined in a similar way, times q to the r. Believe it or not, there's an entire subject called basic hypergeometric series, populated by people who know how to look at an expression like this and see that it's the same as an expression like that. They're really good at seeing transformation laws in expressions that can have 20 or 30 variables. George Andrews, if you don't know the name, is clearly the world's living expert on this kind of thing. This subject goes back to people like G. N. Watson, and Watson proved that this equals this. So if you're lucky enough to have this identity, you're in a position where you might be able to substitute in, change variables, and get something pretty by simplification. Loosely speaking, if you leave a as a free variable and let all of the other parameters, including the limit of summation, go to infinity at the right rates, you get something much prettier. This is what Rogers, and later the Fields Medalist Selberg, proved independently: they proved this much simpler identity by actually explaining how one goes about this limit process. Why did they do that? Because if you let a be one or q, the left-hand side immediately gives you the left-hand side of the Rogers-Ramanujan identities. The question is, what does it give you on the right-hand side? And let me just say that there's a famous identity called the Jacobi triple product, which you could learn in an advanced undergraduate number theory class, and when you plug that in, you get the Rogers-Ramanujan identities.
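The Jacobi triple product just mentioned is itself easy to test numerically as a formal power series. This illustrative sketch (not from the lecture) checks the classical specialization at z = 1, where the triple product says the product of (1 - q^{2n}) times (1 + q^{2n-1}) squared equals the theta series 1 + 2q + 2q^4 + 2q^9 + ...:

```python
N = 80  # truncation order

def multiply(a, b):
    """Multiply two power series truncated at order N."""
    out = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                out[i + j] += ai * b[j]
    return out

# Build the product side: prod_{n>=1} (1 - q^{2n}) (1 + q^{2n-1})^2.
product = [0] * N
product[0] = 1
for n in range(1, N):
    if 2 * n < N:
        factor = [0] * N
        factor[0], factor[2 * n] = 1, -1          # (1 - q^{2n})
        product = multiply(product, factor)
    if 2 * n - 1 < N:
        factor = [0] * N
        factor[0], factor[2 * n - 1] = 1, 1       # (1 + q^{2n-1})
        product = multiply(product, multiply(factor, factor))

# Build the theta series 1 + 2 * sum_{n>=1} q^{n^2}.
theta = [0] * N
theta[0] = 1
n = 1
while n * n < N:
    theta[n * n] = 2
    n += 1

assert product == theta
print("Jacobi triple product verified to order q^%d" % (N - 1))
```

This is the same kind of identity that converts the sum side of the Rogers-Selberg limit into the infinite products of the Rogers-Ramanujan identities.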
Now what's unfortunate about that is it might look like just piecing together various classical techniques that accidentally work out to give you an identity you wish to prove. Where's the theory? It might be frustrating. How do you learn a theory when you're just presented with an example of: if you do A and then you do B and then you do C, you prove the theorem? How do you know to do A, B, and C? Well, for many decades, mathematicians had no idea how to properly develop the theory of basic hypergeometric functions. And that's where the representation theory comes in. People like Macdonald, who you might know from his famous book, and more recently people like Victor Kac, have been developing this theory, and they're really good at it. The transformation laws come from understanding the representation theory of infinite-dimensional affine Lie algebras. And at the very forefront of this, this is where my friend Ole Warnaar and his student found an utterly crazy transformation. Their idea was to build on Rogers and Selberg, and instead of leaving a as a single parameter, they ramified it: they exploded it into n different variables. And they found a way to keep track of exploding that a into n different variables and then follow the technique of Rogers and Selberg. What you end up having to deal with are functions like this. They're kind of like your nightmare Vandermonde-determinant-style expressions that you would first encounter in abstract algebra. And what they proved is this. It's a generalization of one of George Andrews's formulas. This may look absolutely nasty, but I will say that if you study some of this representation theory, it begins to make sense. It's nasty to write down, but conceptually it makes total sense. So what do we do armed with this formula? We let all the parameters except x1 through xn go to infinity.
You do a lot of combinatorics, maybe 20 pages or so. And you are very happy, because you end up proving that a series coming from the Hall-Littlewood polynomials can be transformed into an expression that at least begins to have pieces that will relate to Jacobi theta functions, via the Jacobi triple product. So this is what we proved. And then let me just say we can modify the left-hand side of each theorem, and you get each of our four infinite families. It takes work, but you know you've done it right when the right-hand side magically transforms into identities that people like Victor Kac and Macdonald have been proving for the last 30 years. When you get all the modular units you want to get, then you're relatively confident that you've done a thorough job of exhausting these infinite product expansions. So if you've ever studied, or want to study, the representation theory of infinite-dimensional Lie algebras, this is one reason why you might want to do it: if you want to solve those two fundamental problems that arose from Ramanujan's identities. If you want to learn more, I can certainly share this PDF with you. The paper just appeared in the Duke Math Journal. The introduction, I think, is very well written. I don't recommend that you read the rest of the paper, because it gets very technical, but this would be one reason why you might want to get into that representation theory. Now, developing on a completely different track arising from those two Ramanujan identities comes the famous work that was born out of the chase for classifying the finite simple groups. This is the work on monstrous moonshine, and let me now explain what it has become today.
So it was a long time ago, I was barely alive, in 1973, when Fischer and Griess conjectured that there should be a super large finite simple group, called the friendly giant and later called the monster, which has this order. It took a long time, and Bob Griess in 1982 actually constructed the monster. The construction is still so complicated that it's not fair to say that we can go calculate in the monster group. It's huge; it's very difficult to realize. Bob won the Steele Prize for constructing this group. And where does it live? It lives on top in the classification of finite simple groups. Finite simple groups typically fall into natural infinite families, like the alternating groups or the finite groups of Lie type. Apart from those, it turns out there are some sporadic groups that are related to each other by this picture. The monster sits on top. By the way, this is a very, very interesting story. The M's are the Mathieu groups; other sporadic groups came out of the study of root systems of Lie groups, or from the study of lattices. This group over here, the O'Nan group, we actually just recently finished proving a moonshine for it. It's related to derivatives of L-functions of elliptic curves. That whole story is very, very different; we'll be releasing our paper soon. And the theory of moonshine is much more number-theoretic for these groups than for any of those. But what the theory of monstrous moonshine is about is this group on top, the largest of them. How does it start? It starts innocently enough about 40 years ago with John McKay, a very eccentric number theorist, or algebraist, in Montreal, who cleverly noticed that 196,884 is 1 more than 196,883. But of course, he offered that as a companion to several other observations of a similar type. Here, 21,493,760 is the sum of these three numbers. And this larger number is the sum of these numbers. What are these numbers?
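McKay's observations are pure arithmetic, so they can be checked directly. As a minimal sketch, using the well-known smallest dimensions of irreducible representations of the monster:

```python
# The four smallest dimensions of irreducible representations of the monster.
dims = [1, 196883, 21296876, 842609326]

# McKay's observations: the first few coefficients of the J function
# decompose into these dimensions with small multiplicities.
assert 196884 == dims[0] + dims[1]
assert 21493760 == dims[0] + dims[1] + dims[2]
assert 864299970 == 2 * dims[0] + 2 * dims[1] + dims[2] + dims[3]
print("McKay's observations check out")
```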
These numbers are the dimensions of the irreducible representations of the monster group. By representation theory, you can write down all of the irreducible representations. These are homomorphisms from your group into the general linear group of some complex vector space, and these would be the dimensions of the complex vector spaces. So 1 is the trivial representation, where every element gets mapped to the number 1. It turns out, from group theory or your first representation theory class, that the number of irreducible representations is equal to the number of conjugacy classes. In the case of the monster, there are 194 conjugacy classes. So there are 194 numbers, the smallest being 1, the second smallest being 196,883, up to a very large 194th number, which encode at least the dimensions of these representations. What people were trying to do in the 80s was figure out what the best representation theory for this gigantic group should be. And when they were trying to come up with this, these accidents were observed, where the numbers on the left-hand side turned out to be coefficients of a modular function. A famous one, the first one you ever study; it's in Ahlfors's book on complex analysis. We call it Klein's J function, although I think that attribution is probably not historically accurate. But anyway, the J function is the modular function on SL2(Z), the full modular group, whose Fourier expansion starts out with these terms. This function generates all the modular functions on SL2(Z), and it is for this reason that it's called the Hauptmodul, the simplest one. Q is e to the 2 pi i tau, as it was earlier. And what McKay and Thompson noticed is that these numbers could easily be broken down in terms of the irreducible representations of the monster. Now, at first glance that might look stupid. If I have the number one at my disposal, I can get every positive integer I want in many ways.
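Those Fourier coefficients are straightforward to generate. One standard way (a sketch, not the talk's slides) uses j = E4^3 / Delta, where E4 is the weight 4 Eisenstein series and Delta is the discriminant; the J function of moonshine is then j minus its constant term 744:

```python
# Compute the first Fourier coefficients of the j function via j = E4^3/Delta,
# with E4 = 1 + 240 * sum sigma_3(n) q^n and Delta = q * prod (1-q^n)^24.
N = 8  # number of coefficients to compute

def mul(a, b):
    """Multiply two power series (coefficient lists) modulo q^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def series_inv(a):
    """Invert a power series with constant term 1, modulo q^N."""
    inv = [0] * N
    inv[0] = 1
    for i in range(1, N):
        inv[i] = -sum(a[k] * inv[i - k] for k in range(1, i + 1))
    return inv

def sigma3(n):
    return sum(d**3 for d in range(1, n + 1) if n % d == 0)

E4 = [1] + [240 * sigma3(n) for n in range(1, N)]

# Delta/q = prod_{n>=1} (1 - q^n)^24, truncated
D = [1] + [0] * (N - 1)
for n in range(1, N):
    for _ in range(24):  # multiply by (1 - q^n), 24 times
        D = [D[i] - (D[i - n] if i >= n else 0) for i in range(N)]

# q*j = E4^3 / (Delta/q); coefficient list of j shifted by one power of q
jq = mul(mul(mul(E4, E4), E4), series_inv(D))
print(jq[:4])  # [1, 744, 196884, 21493760], i.e. j = 1/q + 744 + 196884 q + ...
```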
So how striking is it that I could use these 194 numbers, with multiplicity, to add up to the coefficients of J? At first glance, that doesn't look striking. So what is it about this observation that ended up becoming this big theory? Let me explain. It turns out that the 194 representations have dimensions beginning with 1, followed by 196,883, up to this number. The English language probably allows you to read that number out loud, but let me just say it's huge. What makes this so striking is really the combination of two conjectures. The first conjecture will also look a little bit weak to you, but when I explain the second one, you'll understand its importance. Thompson conjectured that there should be an infinite-dimensional graded monster module, called V-natural, which is a sum, from negative 1 to infinity, of monster modules, one for each integer n starting at negative 1. And building on those elementary observations, the dimension of the nth component should be the nth coefficient of J. This conjecture is easily satisfied as stated, because I can always use as many ones as I like to get any number I like. What makes it important is this. What Thompson and others began to believe, and what later became the conjecture called monstrous moonshine that I'll explain in a moment, is that this accident was only a glimpse of 194 accidents. What they said was: instead of just taking the identity and building a formal power series whose coefficients are the dimensions of those modules, let's take any element of the group. You only need to choose one per conjugacy class. So take any one of the 194 conjugacy classes, pick an element in it, and, under the assumption that this infinite-dimensional module exists, take the image of g on the nth graded piece and compute the trace of the representation matrix you get. What you'll get is a power series in q. You'll get 194 of them.
These have to be 194 distinguished functions, and that's what the conjecture of monstrous moonshine is. There is a unique module, V-natural, with the property that for every one of these elements, for every one of the conjugacy classes, there's an explicit genus-zero discrete subgroup of SL2(R), with SL2(Z) the most famous of them, for which the function I just defined is the simplest modular function for that group, a Hauptmodul like J. These groups are hard to find. Around that time, people were studying something called the congruence subgroup problem; people like Serre were interested in it. Finding these groups was very important at one point in time, and people use the Riemann surfaces obtained by modding out by these groups all over mathematics and physics. These are important groups. So for the simplest modular functions on these groups to be in correspondence with a single representation of the monster was a startling thing. You could view that in two different ways. You could ask: what is the most natural representation of this gigantic group? The answer is, it's the arithmetic of 194 modular functions. Or, conversely, you could be interested in these groups and say, well, the mystery behind these groups is really the story of the monster group. Either way you look at it, something very mysterious is going on. So in the early 80s, building on their work on the Rogers-Ramanujan identities, Frenkel, Lepowsky, and Meurman actually built a candidate for the moonshine module, and it was proved to be essentially unique: there cannot be two infinite-dimensional monster modules that could simultaneously satisfy the monstrous moonshine conjecture. That's actually not difficult to prove; it's like an exercise using orthogonality of characters. You can eventually show that at some point you would be dividing by something that doesn't divide evenly into a coefficient. So they constructed it, if it exists, and it turned out to be a vertex operator algebra.
And I'll get to this in a moment. This algebra is exactly what Borcherds souped up, making the monstrous moonshine vertex operator algebra, and he ended up proving the original conjecture, that all 194 of these modular functions encode the representation theory. He won the Fields Medal for this. Okay. Usually when you win the Fields Medal for something, that's the end of the story, and for many years I think people thought it was. The monster is the largest of the sporadic finite simple groups. Maybe there were some loose ends with some of the other groups, for example, like I mentioned, the O'Nan group, but in spirit maybe this was supposed to be the end of it. It turns out that it is not. In 2007, Ed Witten wrote a paper, with closely related work by Li, Song, and Strominger, that I think has been cited something like 6,000 times. If you're a physicist, you probably know this work. Embedded in it is this question. The monster module that satisfies moonshine is conjecturally dual to Witten's 3-dimensional quantum gravity theory. In that duality, each of the irreducible representations, for each of the components in this grading, models a black hole state. And Witten asked: there should be 194 kinds of these black hole states, so as n goes to infinity, how are these states distributed? By the way, this model for 3-dimensional quantum gravity has largely now been debunked; 2007 is like an eternity ago. But this is a beautiful mathematical question. What does it mean to ask how many of each kind of black hole state there are and how they are distributed? Well, there are 194 of them. You could ask, in the decomposition of the nth coefficient, how many ones are there among the 194 kinds of numbers you're adding up? Wouldn't it be disappointing if there were just lots of ones? It turns out that we proved a theorem about this together with John Duncan, my new colleague at Emory.
We proved that this infinite-dimensional module is doing its very best to be what's called a regular representation. We actually proved an exact formula for how many components there are of each type. What does this mean? In words, it means that as n goes to infinity, the frequency of each irreducible component is proportional to its dimension relative to the sum of the dimensions of all 194. In particular, the trivial representation, of dimension one, is the one that occurs by far the least often: the percentage of components that are trivial, as n goes to infinity, is about 10 to the minus 28th of 1%. So it almost does not exist. Asymptotically, there is a very precise limit. Right, and we proved that theorem a few years ago, about the time that three very famous Japanese physicists, Eguchi, Ooguri, and Tachikawa, noticed that in their work on the K3 elliptic genus they had found what's called a mock modular form, which I'll define for you. You can actually find this function in some papers that I had written on mock modular forms related to Ramanujan's last letter. And they noticed that the Mathieu group M24, which is one of the larger sporadic finite simple groups, has a character table, and ordering the dimensions of the irreducible representations gives these numbers. They just noticed that that 45 looks like that red 45, and that 231 looks like that red 231. They noticed that the red numbers down here are among the first few coefficients of this mock modular form. And they speculated that there should be a moonshine arising from this. That was very surprising for a number of reasons. One, this is a mock modular form; it's nothing like the modular functions in monstrous moonshine. These were functions that people were just beginning to understand about 10 years ago. So it came as a big surprise.
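The phrase "regular representation" here has a concrete elementary meaning that can be illustrated on a toy group. In the regular representation of any finite group, each irreducible of dimension d occurs with multiplicity d, so the fraction of irreducible components equal to a given irreducible is d divided by the sum of all the dimensions. A sketch with S3 standing in for the monster (the monster's 194 dimensions are far too large to list here):

```python
# Toy illustration of the "regular representation" heuristic with S3,
# whose irreducible representations have dimensions 1, 1, 2.
dims = [1, 1, 2]
assert sum(d * d for d in dims) == 6  # sum of squares = |S3|, a sanity check

total = sum(dims)
fractions = [d / total for d in dims]
print(fractions)  # [0.25, 0.25, 0.5] -- the trivial representation is rarest
```

For the monster the same ratio, 1 over the sum of the 194 dimensions, is what produces the roughly 10^-28 of 1% figure for the trivial representation.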
Terry Gannon, a mathematical physicist in Alberta, actually proved that there is an infinite-dimensional graded M24 module that satisfies moonshine, just like the J function and the other modular functions satisfy moonshine. And when that was proved, the physicists went to work and tried to understand whether this theorem of Gannon, and that observation of the three Japanese physicists, was a glimpse of a moonshine no one had seen before. What's crazy is that they figured it out. The number 24 is very important in physics and very important in number theory; it comes up in many different ways. It comes up in multiplier systems of modular forms. And sometimes the number 24 comes up for the simple reason that the square of every prime starting at 5 is congruent to 1 mod 24. It's shocking how often that little statement becomes important in number theory. For whatever reason, the physicists recognized that you can view M24 essentially as the automorphism group of what's called the A1^24 lattice. So, on a lark, Jeff Harvey at Chicago, one of the inventors of the heterotic string, together with John Duncan and Miranda Cheng, thought maybe there's something special about 24-dimensional lattices, because 24 is such a magical number. It's so magical that there are exactly 24 isomorphism classes of even, unimodular, positive definite 24-dimensional lattices. They wrote them all down. And what did they find? They found a moonshine. They found that if L is any one of these even unimodular positive definite rank 24 lattices, and there are 24 of them, then the automorphism group modded out by the Weyl group of the root vectors in its ADE root system, the group of symmetries of the roots, is a group that we call an umbral group, and these, by the way, include the Mathieu group M24. This is what they were trying to generalize.
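That congruence about 24 is a one-line exercise: every prime p at least 5 is coprime to 24, and every unit modulo 24 squares to 1. A brute-force check:

```python
# Every prime p >= 5 is coprime to 24, and every unit mod 24 squares to 1,
# so p^2 = 1 (mod 24).  Verify by brute force over a range of primes.
def is_prime(n):
    return n > 1 and all(n % k for k in range(2, int(n**0.5) + 1))

for p in range(5, 10000):
    if is_prime(p):
        assert p * p % 24 == 1
print("p^2 = 1 (mod 24) for all primes 5 <= p < 10000")
```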
They were able to conjecture and compute the first terms of what should be a moonshine for every one of these automorphism groups. What they actually found were candidates for what are called the McKay-Thompson series: the power series for each of the conjugacy classes of each of these groups for this infinite-dimensional module. And this is their conjecture: there is an infinite-dimensional G-module, where G is this automorphism group modded out by the symmetries of the roots, for which, for each conjugacy class, they could prescribe the trace function, just like in the case of monstrous moonshine. What are mock modular forms? Well, they're very much like modular forms. If you want to learn about them, let me just say a few words. Throughout this lecture, tau has been in the upper half plane: x plus iy, where x and y are real. In the Ramanujan setting earlier, tau was a CM point, but here tau is just any point in the upper half plane. For every real number k, there is a Laplacian operator that allows you to study modular forms via the theory of differential equations; it generalizes ordinary analyticity. And a mock modular form is a piece of what's called a harmonic Maass form. These come from things that are very much like modular functions and are generalizations of them. A harmonic Maass form of weight k on a subgroup of SL2(Z) is a real analytic function, say M, that transforms in this way for every matrix in the group. For a modular function, like the J function or the functions in the Rogers-Ramanujan identities, this factor would not be here: M of a tau plus b over c tau plus d would just be M of tau, so k would be zero. But we have this theory for arbitrary real weights k. And to be harmonic just means that the function is annihilated by that weight k Laplacian operator. There's a fundamental theorem, really a structural lemma, that one can prove; it would come up early in a course on the subject.
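For reference, the transformation law and the weight k Laplacian just described can be written out as follows (a sketch using the standard conventions of the harmonic Maass form literature; the talk's slides are not reproduced here):

```latex
% tau = x + iy in the upper half plane, k a real number
\Delta_k := -y^2\left(\frac{\partial^2}{\partial x^2}
                    + \frac{\partial^2}{\partial y^2}\right)
          + iky\left(\frac{\partial}{\partial x}
                    + i\,\frac{\partial}{\partial y}\right).

% A harmonic Maass form M of weight k on a subgroup Gamma of SL_2(Z)
% is real analytic and satisfies, for every matrix (a b; c d) in Gamma,
M\!\left(\frac{a\tau+b}{c\tau+d}\right) = (c\tau+d)^k\, M(\tau),
\qquad \Delta_k\, M = 0,
% together with a suitable growth condition at the cusps.
```

Setting k = 0 and dropping the automorphy factor recovers ordinary modular functions like J, exactly as described above.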
That theorem says that a harmonic Maass form of weight 2 minus k has a Fourier expansion very much like the modular functions, except that if the function is not holomorphic, there will be two components to this Fourier expansion: one part that looks like an ordinary q-series, just like all of our modular forms have, and another part which almost looks like a q-series but is a little bit ugly. Each of its coefficients is decorated by a function in y, not a number; y is the imaginary part of tau. So the Fourier coefficient is still a function of y. And what is a mock modular form? A mock modular form is the simple part, just the q-series part. These are related to ordinary modular forms by certain differential operators. The point is that what the conjecture of Cheng, Duncan, and Harvey says about these 24-dimensional lattices is that the moonshine for them involves these holomorphic parts of weight one-half harmonic Maass forms. A few years ago, Duncan and Griffin and I proved that this conjecture is true. It's certainly not anywhere near as awesome as what Borcherds proved; we didn't invent a new field of representation theory, which is what Borcherds did. People are now trying to do that, but what we did is prove the conjecture. We proved that there is a unique module for each one of those lattices, and it is true that the representation theory of those automorphism groups is dictated by the functions they predicted. Great. What's shocking is that the Mathieu group M12, one of the sporadic groups, is the subject of one of these moonshine theorems. And it turns out that three of its conjugacy classes, just like J, involve mock modular forms that were literally listed, exactly, in Ramanujan's last letter to Hardy.
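One of the functions from that last letter, the third-order mock theta function f(q), is concrete enough to expand by hand. A sketch computing its first few Fourier coefficients directly from the defining series:

```python
# Ramanujan's third-order mock theta function, from the last letter:
#   f(q) = sum_{n>=0} q^(n^2) / ((1+q)^2 (1+q^2)^2 ... (1+q^n)^2)
# Compute its q-expansion as a truncated power series.
N = 7

def mul(a, b):
    """Multiply two power series (coefficient lists) modulo q^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def series_inv(a):
    """Invert a power series with constant term 1, modulo q^N."""
    inv = [0] * N
    inv[0] = 1
    for i in range(1, N):
        inv[i] = -sum(a[k] * inv[i - k] for k in range(1, i + 1))
    return inv

f = [0] * N
n = 0
while n * n < N:
    t = [1] + [0] * (N - 1)
    for k in range(1, n + 1):
        pk = [0] * N
        pk[0] = 1
        pk[k] = 1                            # the polynomial 1 + q^k
        t = mul(t, series_inv(mul(pk, pk)))  # divide by (1 + q^k)^2
    for i in range(N - n * n):
        f[i + n * n] += t[i]                 # shift by q^(n^2)
    n += 1

print(f)  # [1, 1, -2, 3, -3, 3, -5], i.e. f(q) = 1 + q - 2q^2 + 3q^3 - ...
```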
So if you're interested in the distribution of black hole states for multi-centered black holes, if you believe in that, then some of the functions that Ramanujan wrote down in this regime control that distribution. And of course, nobody was talking about black holes when Ramanujan was alive. They were a conjecture; in fact, the singularity was believed to be a mistake. I think that's incredible. Nobody had connected the Mathieu group M12 to anything like this at the time Ramanujan wrote down these functions, and Maass, the man after whom Maass forms are named, had not even yet invented Maass forms; those wouldn't come until 20 years after Ramanujan wrote this letter. So Ramanujan's first letter gave birth to the Rogers-Ramanujan identities in the first part of this talk. If you want to know what his theory there was a glimpse of, it's the explosion of work that came out of Macdonald's work and the work of Kac, and you could view the theorems that we proved as a beautiful crowning corollary to all that difficult work in representation theory. Now we can really understand what Ramanujan wrote down. In the case of his last letter, we now see that it's also related to representation theory. His theory of mock theta functions, even just the examples he wrote down, was already speaking to the representation theory of automorphism groups of rank 24 lattices. It's incredible. And these are theorems that we can now prove because the mathematics has caught up with them. All right, my time is long over. I'm sorry I talked so slowly. I'm not going to say any of this, but let me just say that the executive summary is this. If you're interested in moonshine, it's now a very active field. If you're interested in how black hole states are distributed, analytic number theory offers the tools for answering that. And if you're interested in just the first few formulas that inspired Hardy to invite Ramanujan, the representation theory answers that too. Great, thank you.
No, it's not a naive question. That's like a million-dollar question. Ramanujan left behind three shabby notebooks. The third notebook is almost empty, and the second notebook is an enlarged and edited version of notebook one, so you could argue that he left behind one notebook. I know lots of people who've studied these notebooks for their entire careers, and they still go back a few times every month to mine them. What was his vision? He said that his ideas came to him as visions from a goddess; take that any way you'd like. But all I can tell you is that time and again, someone has figured out something deep behind a formula which was previously considered a mere curiosity: A equals B, I can prove that. When you finally understand why we today might care about A equals B, you begin to see that what looked like some strange analytic fact about these series turns out to be important. I'm going to give a lecture tomorrow where I'll talk about instances of this, and you can ask the same question again, and I can't offer any answers except to say that there's something very special going on. Every once in a while someone comes along in science who propels humanity forward. Most of us work as one of thousands, slowly molding and building theories. Every once in a while someone comes along for whom we ask questions like that; think Einstein, think Newton. And I think Ramanujan definitely deserves his spot in that category. Any other questions? Comments? Solutions? I hope that was interesting. I realize this was not like a class; I hope that was all right. But you now know that those two innocent identities open up a huge field. You could have easily dismissed G over H equaling R as some little thing which you can prove in an hour if you have the right books. It's not hard, but recognizing that it's a glimpse of something this big took many, many people, some of the most important people in the field.
I'm talking about really famous people, whose ideas we use all over mathematics, who were inspired by these shabby notebooks. I like that. I guess we all look forward to the movie and to your talk tomorrow. If we have no further questions or remarks, let's thank Ken one more time.