This video is the first in a series of videos on Vinberg's algorithm and Kac-Moody algebras. It's an expanded version of the Vinberg lecture I gave a couple of days ago. This first video will be mostly on Vinberg's paper. I plan to have later videos covering topics like John Conway's reflection group in 25 dimensions, and discussing some Kac-Moody algebras and automorphic forms related to it. I'll start by introducing Vinberg's paper. Vinberg's paper is one of the papers in this book here. I'll try to remember to put a link to this book, since you can get it online these days. His paper is this one, on arithmetical discrete groups in Lobachevsky spaces. Lobachevsky space is another name for hyperbolic space. I came across this paper many years ago, and I noticed that in the back of it there are these really intriguing diagrams. If you look at these diagrams, well, these are just the Coxeter diagrams of finite reflection groups, which I'll discuss briefly later. But down here there are some new diagrams I hadn't come across before, and I wondered what they were. There are some even stranger ones here: you see these rather impressive, mysterious circular diagrams. So what I'll try to do in this first lecture is explain what these diagrams in Vinberg's paper are. Somehow there's something very compelling about doing deep, high-level mathematics just by drawing these tiny little doodles on a piece of paper. So I'll start by summarising or reviewing some basic material about reflection groups. Here's an example of a reflection group. What we do is take three mirrors; you should think of these as being mirrors in the plane. Here the mirrors are just one-dimensional lines, because I'm working in the plane rather than in three-dimensional space. And if you take a point here, you can reflect it in these mirrors. I'd better get a colour that actually shows up. If you take this point and reflect it, you get another point here.
And then if you reflect it again, you get points here and here and here and here, so you get a sort of kaleidoscope effect. This reflection group can be described as follows. What we do is take a fundamental domain. This region here is called a fundamental domain, and what this means is that every point in the plane is congruent, under some series of reflections, to a unique point in this fundamental domain. For instance, this point here is congruent to this point in the fundamental domain. And the mirrors that bound this fundamental domain are called the simple roots, or rather they correspond to the simple roots. They're not quite the simple roots: strictly speaking, the simple roots are little vectors orthogonal to these hyperplanes. So I really ought to call this the hyperplane of a simple root, but people sometimes call the hyperplane itself the simple root. Coxeter found a very neat way of drawing pictures of reflection groups, by drawing what is called the Coxeter diagram. The Coxeter diagram works like this. You draw a point for each simple root, so these points correspond to simple roots. There are two simple roots, and therefore there are two points in the Coxeter diagram. And you indicate the angle between these simple roots by drawing various lines between these two points. For instance, if there's an angle of pi over 3, or 60 degrees, you indicate this by drawing a single line between the points. So here's another reflection group. This time I'm just going to take two mirrors that are orthogonal to each other, and here a fundamental domain looks like this, and the Coxeter diagram looks like this. There are two simple roots, here and here, so there are two points in the Coxeter diagram, and the angle between them is pi over 2, which you indicate by drawing no lines between the points. There's actually a slight difference between Coxeter diagrams and Dynkin diagrams, which is as follows.
Dynkin diagrams keep track of the lengths of the roots. So if we've got a mirror here, the Coxeter diagram just keeps track of the mirror, but there's also a root perpendicular, or orthogonal, to it, and the Dynkin diagram has some extra information telling you what the length of this vector is. And secondly, for a Dynkin diagram these roots must lie in a lattice. An example of a reflection group whose roots don't lie in a lattice is this one: here I've got five mirrors, and you get a little reflection group with 10 elements. If you draw the roots orthogonal to these, you find that the roots don't lie in a lattice. A lattice is some sort of regular array of points in the plane, like this. If you take the integer linear combinations of these five roots, they're dense in the plane. Now, there are three sorts of reflection groups that we're going to be concerned with, because there are three different sorts of geometries: there is spherical geometry, there is Euclidean geometry, and there is hyperbolic geometry. You can classify the reflection groups in spherical or Euclidean geometry. There are actually some rather nice pictures of spherical reflection groups. Here are some old pictures in the book by Klein and Fricke on elliptic modular functions. This is a very famous 19th-century German book, and recently it was translated into English by Arthur Dupre; I think the AMS is now selling the English translation. So here's an example of a spherical reflection group, where it's been projected into a plane. And here's another example, the icosahedral spherical reflection group. You can even make three-dimensional models of the icosahedral reflection group if you want; it looks like this. Anyway, the spherical reflection groups are classified as follows. They have this rather unimaginative notation of An, Bn or Cn, Dn, E6, E7, E8, F4, G2, and then there are also I2(n), H3 and H4.
These last ones I'm not going to be interested in, because they don't correspond to Dynkin diagrams; they only correspond to Coxeter diagrams. And here there are two Dynkin diagrams, Bn and Cn, which correspond to the same Coxeter diagram, so there are some slight technical differences. If you want to see what the diagrams look like, instead of writing them all out I'm just going to show you the pictures of them in Vinberg's paper. So here is a list of the Coxeter diagrams for spherical geometry. The classification of Euclidean reflection groups is very similar. Here's an example of a Euclidean reflection group: what I can do is just take a lot of mirrors that look like this in the plane, and you see there's a fundamental domain here. Now if you look at one of these points of the fundamental domain, you notice what you're getting: if you just look at the three hyperplanes through this point, you're getting the spherical reflection group that we had a couple of pages earlier. So there's quite a close correspondence between spherical and Euclidean reflection groups. It's not quite one-to-one, because I could take a different vertex, for example. In this case all three vertices give us the same spherical reflection group, but sometimes it's a bit more complicated. So for each spherical reflection group you tend to get one or two Euclidean reflection groups, or sometimes no Euclidean ones. If you want to see pictures of the Euclidean reflection groups, they were also given by Vinberg, and here's his picture of them. You can see they look quite similar to the spherical ones. Here, for example, is a spherical reflection group called F4, and here is a corresponding Euclidean reflection group called F4 with a twiddle on it. And you see that this diagram is the same as that one, except you've added this extra node at the end; all the others are similar.
So you get a Euclidean one by adding one extra node to a spherical reflection group. Klein and Fricke also have a few pictures of some of the Euclidean reflection groups, if I can find them. Here are Klein and Fricke's pictures of three of the Euclidean reflection groups. Now I'm going to give two examples of spherical reflection groups that I'm going to need later. First of all, we can take the lattice In. The lattice In is just the same as Z^n, so it's all vectors (m1, ..., mn) with each mi an integer, and it has the obvious inner product: the norm of a vector, that's the square of its length, is m1 squared plus ... plus mn squared. So you just do the most obvious thing. Then you can ask: what are the simple roots of this? Well, first we have to figure out what the roots are. A root is going to be a vector r such that reflection in the orthogonal complement of r takes In to In. And what does reflection do? Well, it takes a vector v to v minus 2(r,v)/(r,r) times r. This is quite easy to check: it's obvious for vectors that are orthogonal to r, and it's also obvious for r itself, so it's true for all vectors. Notice that we want the image to be in the lattice we first started with, and this will always be true provided the coefficient 2(r,v)/(r,r) is an integer. So we really want (r,r) to be equal to 1 or 2. For more complicated lattices it's possible to have reflections where (r,r) is bigger than 2, but I'm mostly going to be doing what are called unimodular lattices, which means the volume of a fundamental domain is just 1, and these are the easiest ones to deal with. So we can now try and find the roots of In: we just want vectors of norm 1 or 2. Well, there are some obvious vectors of norm 1: we could just take a lot of zeros and then one entry that's plus or minus 1. These have norm 1. Next, we could have vectors of norm 2.
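As a quick sanity check of the reflection formula, here is a small Python sketch (the function names are my own): it reflects a lattice vector in the hyperplane orthogonal to a root, and shows that when the root has norm 1 or 2, the image still has integer entries.

```python
# Sketch of the reflection v -> v - 2(r,v)/(r,r) * r from the lecture.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reflect(v, r):
    # If (r,r) is 1 or 2, then 2(r,v)/(r,r) is an integer for every
    # integer vector v, so the reflection preserves the lattice Z^n.
    c = 2 * dot(r, v) / dot(r, r)
    return tuple(a - c * b for a, b in zip(v, r))

r = (1, 1, 0)    # a norm-2 root of I_3
v = (3, -2, 5)   # an arbitrary lattice vector
w = reflect(v, r)   # still an integer vector; reflecting twice gives v back
```

Applying `reflect` twice returns the original vector, as a reflection should.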
So we take two entries that are plus or minus 1, and these give norm 2. Now we can ask: what are the simple roots of a fundamental domain? This isn't terribly difficult to work out. In two dimensions the reflections look like this, and you can see that for a fundamental domain there are going to be two roots like that. In general, the simple roots look like this: we can take a norm 1 root, which looks like that, then a norm 2 root that looks like this, and another norm 2 root that looks like this, and we go all the way down to this root here. And if you draw the corresponding Coxeter diagram, or rather Dynkin diagram, it looks like this. We draw a point for each simple root. All of these pairs have inner product minus 1, which means we join them by a single line, and for these two the angle between them is pi over 4, which we indicate by drawing a double line; in a Dynkin diagram you also draw a little inequality sign pointing towards the shorter root. So the Dynkin diagram of this is just a line of points with something funny going on at one end of it. The other example we're going to use is the famous E8 lattice. The E8 lattice consists of the following vectors: all points (m1, ..., m8) in eight dimensions with either all the mi in Z, or all the mi in Z plus a half, together with the extra condition that the sum of the mi is even. So roughly speaking, this last condition throws away half the vectors, and then allowing half-integer coordinates adds in the same number of vectors again, so this is sort of the same size as I8. Except it's got the following extra property: the norm of every vector is even. Notice that if you take a vector where all the entries are a half, say (1/2, ..., 1/2), then the norm is a half squared times 8, which is 2, which is indeed even. This is something funny that happens whenever the dimension is divisible by 8.
You can do this sort of trick, and the norm of every vector is even; if the dimension weren't divisible by 8, the norm of this vector would not be an even number. So we can ask: how many roots does E8 have? Well, there are two sorts of roots. First, we can take two entries that are plus or minus 1, and you can see there are 112 of these. Or we can take all entries plus or minus a half, and there are 128 of these. You might think there are 256, because there are two choices of sign for each of the eight entries, but then we've got this condition that the sum of the entries has to be even, which cuts out half of them. So all together there are 240 roots. And the Coxeter diagram, or Dynkin diagram, looks like this: you take 1, 2, 3, 4, 5, 6, 7 vectors like that, and then you have one extra vector there. The vectors are like this: this one here could be the vector (1, -1, 0, ...), this one could be the vector (0, -1, 1, ...), and this one has a lot of entries plus or minus a half, where I can't be bothered to work out where all the signs go; you can fill in the rest for yourself. We can also ask: what is the corresponding Euclidean reflection group? It looks like the spherical E8 reflection group, except you add in translations by the lattice, and the effect is that you add one extra simple root onto the end, like this. So this is the affine E8 Dynkin diagram, and this is the spherical E8 Dynkin diagram. And this diagram contains an awful lot of information. In fact, you can do some quite difficult calculations just by putting your finger over nodes. For instance, take the following question: suppose you take the E8 Lie algebra, whatever that is, something corresponding to this diagram, and ask what subalgebras it has. Well, you can find subalgebras of the E8 Lie algebra, or at least some of them, by taking this affine Dynkin diagram and just putting your finger over various points.
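The root count is easy to verify by brute force. Here is a short Python enumeration (my own code, not from the paper) of the two kinds of norm-2 vectors just described:

```python
from itertools import combinations, product

roots = []
# First kind: two entries plus or minus 1, the rest 0.
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(v))

count_first = len(roots)          # 112 of these

# Second kind: all entries plus or minus 1/2, coordinate sum even.
for signs in product((0.5, -0.5), repeat=8):
    if sum(signs) % 2 == 0:       # the "sum even" condition cuts 256 to 128
        roots.append(signs)

count_total = len(roots)          # 112 + 128 = 240 roots, all of norm 2
```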
So if I put my finger over this point here, I see I've got a line of eight points, which is the A8 Dynkin diagram. And the A8 Dynkin diagram is just the Dynkin diagram of the Lie algebra SL9(R). So we've discovered that the Lie algebra SL9(R) is contained in the Lie algebra E8, just by this trivial operation of hiding a point. Similarly, you can find other subalgebras of E8 by putting your finger over other points. For instance, here we've got the E6 Dynkin diagram, and this is the A2 Dynkin diagram; that gives you another subalgebra of the E8 Lie algebra. So this diagram contains an enormous amount of high-level information, slightly hidden in it. Now we move on to hyperbolic reflection groups. Let's have some examples of these. Well, first of all, here's a rather nice picture in Klein and Fricke's book. This is part of a hyperbolic reflection group in the hyperbolic plane. All these curved lines here are really straight lines in hyperbolic space. They look curved because if you embed hyperbolic space into Euclidean space, you don't really have enough room to do it properly, so the straight lines end up curved due to a sort of optical illusion. The fundamental domain for this reflection group consists of these little triangles here, with angles pi over 2, pi over 3, and pi over 7. The artist Escher also used hyperbolic space in some of his work; I think he learned about it through Coxeter. So here's a typical example of one of Escher's pictures of hyperbolic space, where he's tessellated hyperbolic space with a lot of fish. And again, if you look carefully, you can see there are some reflection hyperplanes: there's a sort of white line here going through all the fish, and if you look very carefully, you can see you can reflect in that. The picture looks as if it's getting...
as if the fish are getting very, very small near the boundary, but they're not really; that's just another optical illusion. You should think of all these fish as really being the same size. And it looks as if it's getting really hot and crowded around here, but it's not at all: it's actually really no more crowded down here than it is up there. Another example of a hyperbolic reflection group, which you've seen if you've covered modular forms, is the group GL2 of the integers. This is just the set of all 2 by 2 matrices (a, b; c, d), where a, b, c, d are integers and ad minus bc is equal to plus or minus 1. This acts on the upper half plane H, which consists of all complex numbers tau = x + iy with y greater than 0. It acts by (a, b; c, d) tau = (a tau + b)/(c tau + d), except you've got to be a little bit careful, because tau might end up having negative imaginary part. So what we need to do is identify tau with its complex conjugate, and then you get an action of GL2(Z) on the upper half plane. You can ask what the fundamental domain looks like, and it looks like this. What you do is take... this is going to be the real axis and this the imaginary axis. Then you take a little circle like this, you take the point one half, and you draw a vertical line up here. The fundamental domain now looks like this. And if you want to know what its Coxeter diagram looks like, it again looks like this. There are three walls of this fundamental domain, so we have three points. Look at the angles: the angles here are pi over 2 and pi over 3, and there's something funny going on here. We draw a single line between these two because there's an angle of pi over 3. I should say which of these points corresponds to which wall: this wall here is going to correspond to this point, this wall here will correspond to this point, and this wall will correspond to that point.
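Here is a tiny Python sketch of the action just described (my own helper function, not standard notation), including the identification with the complex conjugate when the determinant is minus 1:

```python
def act(a, b, c, d, tau):
    """Action of a matrix in GL2(Z) on the upper half plane."""
    assert a * d - b * c in (1, -1)       # determinant must be +-1
    image = (a * tau + b) / (c * tau + d)
    # For determinant -1 the image has negative imaginary part,
    # so we identify it with its complex conjugate.
    return image if image.imag > 0 else image.conjugate()

tau = 0.5 + 2j
s_tau = act(0, -1, 1, 0, tau)    # tau -> -1/tau, determinant +1
m_tau = act(-1, 0, 0, 1, tau)    # determinant -1: tau -> -conjugate(tau)
```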
And now we've got to do something about this angle here, and there's really no good convention for it. Sometimes people indicate it by drawing a very thick line like this, and other people do things like drawing a double line or something; there's no fixed convention. Here the walls meet at infinity. Sometimes you can get mirrors in hyperbolic space that don't meet even at infinity, and then you need to think of something even more complicated to draw here, and again there's really no good convention for this. This reflection, by the way, corresponds to taking tau to minus 1 over tau, Jacobi's very famous imaginary transformation. By the way, if you've done modular forms, you've probably seen a slightly different fundamental domain that looks a bit bigger: if you add in this piece, you get the fundamental domain for SL2(Z). SL2(Z) just has ad minus bc equal to plus 1. The problem is that the fundamental domain for SL2(Z) is not the fundamental domain of a reflection group; these walls here are not mirrors of reflections in SL2(Z). So if you want reflection groups in hyperbolic space, you have to use GL2(Z) rather than SL2(Z). It turns out there are absolutely massive numbers of examples of hyperbolic reflection groups, even in dimension 2. What I'm going to do is look at a particular series of examples studied by Vinberg, which are given by automorphism groups of Lorentzian lattices. Let me explain what these are. A Lorentzian lattice is a lattice, except that instead of being contained in Euclidean space R^n, it's contained in Lorentz space R^{n,1}. This consists of all points (x1, ..., xn | x_{n+1}), where I put a vertical line before x_{n+1} to warn you there's something different going on, and the norm is x1 squared plus ... plus xn squared minus x_{n+1} squared. You need to pay careful attention to this minus sign, which makes everything a little bit weird. By the way, there are two conventions for Lorentzian space.
You can also use R^{1,n}, where you have one plus sign and n minus signs in the metric. Physicists are divided into two groups of people who won't speak to each other, one of whom uses this convention and the other of whom uses that one, because this is, of course, the spacetime of special relativity. I'm going to use the R^{n,1} convention, because it's a little bit easier when you're talking about lattices, as it will make various lattices Euclidean. In fact, if you go further into the theory, you find there are very strong reasons for using this convention. Unfortunately, this means I constantly get confused about which convention I'm using, so there are probably going to be a few sign errors later on in the talk. So how does this tie up with hyperbolic reflection groups? Well, let's draw a picture of Lorentzian space. What you do is draw the vectors v with (v, v) equal to zero, and these form a famous double cone: this is the light cone. If you're doing special relativity, these are the vectors in momentum space that light travels along. Then we can have vectors r with norm greater than zero, and these are spacelike, so they have something to do with points of space. And then we have vectors of negative norm. Suppose I take the vectors of norm minus 1: these form a two-sheeted hyperboloid, with one sheet up there and the other sheet down there. What you do is throw away one of these sheets and keep this sheet here, and it turns out to be a copy of hyperbolic space H^n. The metric on hyperbolic space is very easy: you just take the metric on Lorentz space, which is like the Euclidean metric except for this funny minus sign, and restrict it to the hyperboloid. There it gives you a nice positive definite Riemannian metric, and that makes this into hyperbolic space.
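In this convention the norm is one line of code. Here is a minimal sketch (my own function name), with the last coordinate carrying the minus sign, applied to the three kinds of vectors in the picture:

```python
# Norm on Lorentz space R^{n,1}: x1^2 + ... + xn^2 - x_{n+1}^2.
def lorentz_norm(x):
    return sum(t * t for t in x[:-1]) - x[-1] ** 2

on_cone   = lorentz_norm((1, 0, 1))   #  0: on the light cone
spacelike = lorentz_norm((1, 1, 1))   # +1: spacelike, gives a mirror
timelike  = lorentz_norm((0, 0, 1))   # -1: on the hyperboloid model of H^2
```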
The other nice thing is that if we take a vector r of positive norm here and look at the hyperplane orthogonal to it, we get some sort of hyperplane here; this is going to be r perp. Then this is a mirror: we can reflect in it, and this gives us a reflection of hyperbolic space. We want it to preserve a lattice, so we usually want r squared to be 1 or 2. By the way, there's a funny thing you can do: what happens if you take r squared equal to minus 1 or minus 2? Well, the formula for reflection still works, but what it does is take this copy of hyperbolic space to the other copy. So it doesn't quite give you a reflection of hyperbolic space; what it gives you is a funny sort of automorphism of hyperbolic space where you fix a point and kind of invert in that point, mapping everything to minus itself. Anyway, the first sort of involution gives you reflections and the second sort doesn't, although they're still quite interesting. And we can ask: what is the automorphism group of a Lorentzian lattice L? L might be, for example, the lattice I_{n,1}, which consists of all points (m1, ..., mn | m_{n+1}) with mi in the integers, and the norm, of course, is m1 squared plus ... plus mn squared minus m_{n+1} squared. The automorphism group has a reflection group inside it, and this reflection group has some sort of Coxeter diagram; all right, I should really call it a Dynkin diagram. We can also look at automorphisms of the Dynkin diagram, and in fact we get a semidirect product of the reflection group and the automorphisms of the Dynkin diagram, and this is almost the same as the automorphism group of I_{n,1}. It's not quite the same; for instance, there's a factor of two coming from the fact that the automorphism minus one also acts on I_{n,1}. But the point is that if you find the reflection group and the Dynkin diagram, you've pretty nearly found the automorphism group of I_{n,1}.
And there's a second lattice that Vinberg examined, the lattice II_{n,1}, where the two I's indicate that it is an even lattice. This is constructed in a similar way to what we did for the E8 lattice: we take all (m1, ..., mn | m_{n+1}) with all the mi in Z or all the mi in Z plus a half, and we also want the sum of the mi to be even. This works if n is congruent to 1 modulo 8, and in that case the norms are always even. So this is called an even lattice, because the norm of every vector is even. What Vinberg did in his paper, which we're going to go through, is work out the automorphism groups of these two lattices for certain small values of n. To do this, he used Vinberg's algorithm. So what's Vinberg's algorithm? Well, Vinberg's algorithm is very simple in principle. You take hyperbolic space, which I'm going to draw as a disk, as in Escher's picture, and you take a fundamental domain. So here are three reflections, and here's a fundamental domain. Vinberg's algorithm goes as follows. Step one: you pick a point P in the fundamental domain. Step two: we find the walls of the fundamental domain in order of their distance from P. And that's it; that's more or less what Vinberg's algorithm is. Of course, there are some details to fill in, like: how do you find the walls in order of their distance from P? For this, you need the key point: the angle between walls of the fundamental domain is always at most pi over 2. What this means is that if you're finding the walls in order of distance from P, and suppose you've found walls A1 up to An, then this gives you a constraint on A_{n+1}. So you use these constraints to narrow down the possibilities for the next wall. The reason why the angle between the walls is at most pi over 2 can be seen as follows: suppose you've got two walls that look like this, with an angle here which is bigger than pi over 2.
Well, if it's bigger than pi over 2, then there's going to be another wall like that, which means this wasn't really an angle of the fundamental domain. So that's what Vinberg's algorithm is: we're just going to find all the walls in order of their distance, remembering that the angle between walls is at most pi over 2. Now let's translate this into the language of lattices; that was Vinberg's algorithm for hyperbolic space, and we've now got to see what it looks like for a Lorentzian lattice. So here's our Lorentzian lattice, and here's its light cone, and we've got to pick a point P. Hyperbolic space is going to be, say, the norm minus 1 vectors, and we want to pick a point P in here. This is just going to be a vector P with P squared less than zero: we have to pick some timelike vector. We then need to find the reflection hyperplanes. As we saw in the previous slide, a reflection hyperplane corresponds to picking a vector r with r squared greater than zero; in fact we want r squared equal to 1 or 2, because we're working with a unimodular lattice. And now we want to take the hyperplanes in order of their distance from P. So what's the distance from P to r perp? Well, I don't really care what the exact formula for the distance is. The key point is that it increases with the following quantity: first you normalize r so that it has norm 1, and then you take its inner product with the vector P. As I said, there's a formula for the hyperbolic distance involving this, which I don't really care about; all we need to know is that the distance increases with this value. So what we're going to do is find all the reflection hyperplanes by picking roots r in increasing order of this value.
Now, what's the condition that two hyperplanes meet at an angle less than or equal to pi over 2? This turns out to be the condition that r1 and r2 have inner product less than or equal to zero. So all the conditions of Vinberg's algorithm can be written very neatly in terms of the inner product of Lorentzian space. Let's see how this works for I_{n,1}. Step one: we need to pick our vector P, and we're just going to take P to be the most obvious possible vector of negative norm, which is (0, ..., 0 | 1), so P squared is minus 1. Next, we find the roots r with (r, P) equal to zero, and these just give the reflection group of In. Notice we actually have to make a choice here, because we need to pick a fundamental domain containing P, which is like picking one of the fundamental domains of the reflection group of the lattice In. And remember, this is the thing with Dynkin diagram Bn. We can just pick the simple roots as follows: (-1, 0, ..., 0), (1, -1, 0, ...), all the way up to (0, ..., 1, -1). As you remember, this gives us a Dynkin diagram that looks something like this. So at step zero of Vinberg's algorithm, we just get the Bn Dynkin diagram. Now let's start with n equals 2. For n equals 2, we only get one extra root, which is this one here, and this has norm 1. The Dynkin diagram we get looks like this: you notice that we've got the B2 Dynkin diagram, and we've got one extra root, this one here. So if we go back to Vinberg's paper, you can see that this is the first of those mysterious diagrams he had. Now let's try n equals 3. In this case, the root we found for n equals 2 actually dies off, and instead we get this root here, which now has norm 2. The Dynkin diagram we get now looks like this: first we get this bit here, which you notice, of course, is just the B3 diagram.
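For n equals 2, that extra root can be found by a naive computer search. The sketch below is my own code and my own sign conventions (only checking small coefficients, so it is not a real implementation of Vinberg's algorithm), but it encodes exactly the conditions above for I_{2,1}: norm 1 or 2, nonzero distance from P on the correct side, inner product at most 0 with the walls already found, and ordering by (r,P) squared over (r,r), which increases with the distance.

```python
from itertools import product

def ip(u, v):                        # inner product on R^{2,1}
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

P = (0, 0, 1)                        # controlling vector, P^2 = -1
simple = [(-1, 0, 0), (1, -1, 0)]    # the B_2 roots found at step zero

candidates = []
for r in product(range(-4, 5), repeat=3):
    if ip(r, r) not in (1, 2):
        continue                     # roots must have norm 1 or 2
    if ip(r, P) >= 0:
        continue                     # skip height-zero roots / wrong side of P
    if any(ip(r, s) > 0 for s in simple):
        continue                     # angle with known walls at most pi/2
    candidates.append((ip(r, P) ** 2 / ip(r, r), r))

height, new_root = min(candidates)   # the closest new wall comes first
```

With these conventions the search returns the norm-1 root (1, 1, 1), which has inner product 0 with one B2 wall and meets the other at infinity.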
And then we get one new root, which looks like this; there should probably be an arrow on it. This root actually turns out to be stable, in the following sense: as we increase n, we just keep it. We always get a simple root that looks like this. So sometimes simple roots get killed off, and sometimes they turn out to be stable and you keep them for all n. In fact, the n equals 2 root wasn't really killed off: it kind of turns into this root, in some sense, as you increase n from 2 to 3. So what this means is that the Dynkin diagram is always going to look a bit like this: we always get a piece that looks like this from Bn, and we always get this one extra root here. And for a while, that's all that happens. Again, if you look at Vinberg's paper, you can see that for the next few values, here is 5 less than or equal to n less than or equal to 9, nothing much new happens. However, when you get to n equals 10, something changes. This time you get a new root, which looks like this: one, two, three, four, five, six, seven, eight, nine, ten ones (it's very easy to lose track of how many of these we've got), and then a three. Now you see this has norm 1, because we have 10 times 1 minus 3 squared, which is 1. And this means the Dynkin diagram now looks like this; I hope I've got the right number of bits on it. So we now get an extra root here; this is the root with the three in it that we just wrote down. And there's an interesting new phenomenon here: the number of simple roots is greater than the dimension, and this means in particular that these roots are not linearly independent. Up until now, all the simple roots were linearly independent, as happens for spherical reflection groups, but now we've got this new phenomenon. So for n less than 10, the fundamental domain is in some sense a simplex, but from now on the fundamental domain starts to be more complicated.
And again, this root turns out to be not quite stable, but if we go to n equals 11, we get one, two, three, four, five, six, seven, eight, nine, ten, eleven ones and then a three, and this is now stable: we retain it in all bigger dimensions. For n equals 12 and 13, nothing much new happens; we get a diagram that looks a bit like that. And for n equals 14, we get yet another new root, which looks like this. I'm going to put some dots here, because otherwise I lose track of how many ones I've got. So now we have a root that has a four in it, and this has norm 1. For its Dynkin diagram I'm just going to use Vinberg's paper, because I get it wrong otherwise. So here's the diagram for I_{14,1}. We notice we've got this extra phenomenon: the Dynkin diagram now has a symmetry of order 2, because we can just flip it like that. And from now on, the symmetry tends to increase as you make n bigger and bigger; we usually get more and more symmetries of the Dynkin diagram. This goes on up to n equals 17. So here's the diagram for n equals 16, and you see that this time the symmetry group has gone up to order 4. And here, if you look rather carefully, you see the symmetry has gone up to order 8, I think. At n equals 18, Vinberg and Kaplinskaya gave up doing it by hand, because the number of roots starts to increase rather rapidly, and used a computer. For n equals 18 and 19, there's the following interesting new phenomenon: at about this point, the symmetry group of the Dynkin diagram is almost transitive on the simple roots. It's not quite transitive on the simple roots of the Dynkin diagram, for two reasons. First of all, some roots have norm 1 and some have norm 2; this is the obvious reason why it can't be transitive on them, as the roots may have different norms. There's a second, more subtle reason: the roots may be parity vectors.
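The little norm computations for the roots with k ones and a final timelike entry 3 are worth checking; a one-line sketch in Python (my own helper), using the Lorentzian norm from before:

```python
# Norm of the root (1, 1, ..., 1 | 3) with k ones: k * 1^2 - 3^2 = k - 9.
def norm_ones_then_three(k):
    r = [1] * k + [3]
    return sum(x * x for x in r[:-1]) - r[-1] ** 2

n10 = norm_ones_then_three(10)   # 1: the norm-1 root appearing at n = 10
n11 = norm_ones_then_three(11)   # 2: the stable root appearing at n = 11
```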
And this phenomenon is going to turn up a little bit later in some of the later lectures, so I'll explain what this is. A parity vector is a vector r such that the inner product of r with v is congruent to the inner product of v with itself mod two, for all vectors v. So an example of a parity vector is just a vector that's all ones. Or more generally, if all the entries are odd, then you can see that r satisfies this condition here. Now we want r to have norm one or two. So suppose we take r to be this vector here: I'm going to have a lot of ones, and then a three here, and then a five here. So this has norm one if there are 17 ones here; so r squared equals one. Now if we want all the entries to be odd and we want the norm to be one, this can only happen if n is congruent to two modulo eight, because if a number is odd, then its square is always one mod eight. So there's a modulo eight condition occurring here: if we want norm one roots that are parity vectors, then n has to be congruent to two mod eight. Similarly, if r squared equals two, then this corresponds to n being congruent to three modulo eight. And you notice 18 and 19 are congruent to two and three mod eight. So for these two values of the dimension, we sort of unexpectedly get two sorts of simple roots of norm one, or two sorts of simple roots of norm two. Now we get to n equals 20, and this is where there's a really big change in the behavior. Here it turns out there are an infinite number of simple roots, so Vinberg's algorithm kind of breaks down. Well, it doesn't break down, it just goes on forever. I mean, it will find all the roots, but it will just take an infinite amount of time to do so. Furthermore, the symmetry group of the Dynkin diagram is infinite. So I'm going to give several explanations of why things change so much at n equals 20. Let's first of all give Vinberg's explanation.
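The parity-vector example and the mod-eight condition can be checked directly. This sketch (my own, under the assumption that "all entries odd" characterises parity vectors in the standard basis of I n,1, as stated above) verifies the lecture's example of 17 ones, a three, and a five.

```python
def is_parity_vector(r):
    # in the standard basis of I_n,1, testing (r, e_i) = (e_i, e_i) mod 2
    # against each basis vector shows r is a parity vector exactly
    # when every coordinate is odd
    return all(x % 2 == 1 for x in r)

def lorentzian_norm(v):
    *space, time = v
    return sum(x * x for x in space) - time * time

r = [1] * 17 + [3] + [5]          # 17 ones, a three, then a five
assert is_parity_vector(r)
assert lorentzian_norm(r) == 1    # 17 + 9 - 25 = 1

# odd squares are 1 mod 8, so the n spatial entries contribute n mod 8
# and the time entry subtracts 1; norm 1 then forces n = 2 mod 8
n = len(r) - 1
assert n == 18 and n % 8 == 2
```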
So Vinberg's explanation for why this happens is that there are lattices in dimension 19 with determinant equal to one such that the root system of the lattice L has rank less than the rank of L. And there are two such lattices, and their root systems have Dynkin diagrams A11 + D7 and E6 cubed. So these root systems have rank 18, but the lattices themselves have rank 19. So they're unimodular lattices whose vector space is not actually spanned by their roots. And Vinberg showed that whenever you get a lattice with this property, then it causes an infinite symmetry of the Dynkin diagram. Roughly speaking, if you take the lattice modulo the root lattice, then this tends to act on the Dynkin diagram of a corresponding reflection group. And the point is, if you've got such a lattice L, then you find that L is actually equal to w-perp modulo w, for w some norm 0 vector in I 20,1. So this is going to be a 19-dimensional lattice. And by sort of staring at this, you can see that the lattice modulo the root lattice actually acts as symmetries fixing this vector w. So if this group is infinite, then the Dynkin diagram turns out to be infinite. So that's as far as Vinberg went for the odd unimodular lattices: he worked out the Dynkin diagrams up to dimension 19 and showed that in dimensions above 19 the Dynkin diagram actually becomes infinite. He also did the case of even lattices. So we have II 9,1 and II 17,1, and I guess II 25,1. So let's look at these three lattices. Well, for the first lattice, what we do is write it as E8 plus this little two-dimensional hyperbolic lattice with inner products like that. And that means if we start Vinberg's algorithm, we start off with the Dynkin diagram of E8, just as for the odd lattices we started off with the Dynkin diagram of Bn. And in fact we get not only E8 but, if you think about it, we actually get an E9. So this is the E9 Dynkin diagram of the Euclidean reflection group.
And then if we apply Vinberg's algorithm, we find there's just one extra vector, which is this point here. For obvious reasons, this is usually called the E10 Dynkin diagram. And then Vinberg showed that that's the only extra root you get. So this is Vinberg's Dynkin diagram for this lattice. Now for the 18-dimensional lattice, something rather similar happens, except we start with two copies of E8, because we can write this lattice as E8 squared plus the little two-dimensional hyperbolic lattice. So as before, we get affine E8 squared, and then there's one extra point here. And now if we look at this diagram, we can see several rather interesting things in it. In particular, we can see all the even unimodular lattices of dimension 16. So one even unimodular lattice of dimension 16 is E8 squared, and we see the affine Dynkin diagram of E8 squared if I put my finger over that. But there's a second even unimodular lattice of dimension 16, and we can see it if I put my finger over these two vertices here. If you look at this, you will see it's actually the affine D16 Dynkin diagram; you remember the affine Dn Dynkin diagram looks like this. So in fact, the affine Dynkin diagram of any even unimodular lattice of the right dimension is actually contained in the Dynkin diagram of this Lorentzian lattice. By the way, the same thing works for odd lattices. You may think, well, you can now classify unimodular lattices just by writing down these Dynkin diagrams and spotting affine Dynkin diagrams inside them. Unfortunately, this doesn't really work. The trouble is, except in a few cases like this, the Dynkin diagram just turns out to be too complicated. I mean, as we saw before, it's actually infinite, so good luck writing it down and trying to find all the affine Dynkin diagrams inside it. And in this case here, Vinberg showed that the Dynkin diagram is infinite.
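As a sanity check on the decomposition of these even lattices as E8 pieces plus a hyperbolic plane, one can verify unimodularity by computing determinants of Gram matrices. This sketch (my own; the node labelling of the E8 diagram is an assumption) builds the E8 Gram matrix from its Dynkin diagram and adjoins the hyperbolic plane with Gram matrix [[0, 1], [1, 0]].

```python
def det(m):
    # Bareiss fraction-free elimination: exact determinant of an integer matrix
    m = [row[:] for row in m]
    n, sign, prev = len(m), 1, 1
    for k in range(n - 1):
        if m[k][k] == 0:
            piv = next((i for i in range(k + 1, n) if m[i][k] != 0), None)
            if piv is None:
                return 0
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
        prev = m[k][k]
    return sign * m[-1][-1]

# E8 Gram (Cartan) matrix from its Dynkin diagram: a chain 0-1-2-3-4-5-6
# with node 7 attached to node 2 (arms of lengths 1, 2, 4 from the branch)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (2, 7)]
e8 = [[2 if i == j else 0 for j in range(8)] for i in range(8)]
for i, j in edges:
    e8[i][j] = e8[j][i] = -1

assert det(e8) == 1   # E8 is unimodular

# adjoin the hyperbolic plane with Gram matrix [[0, 1], [1, 0]]
gram = [row + [0, 0] for row in e8] + [[0] * 8 + [0, 1], [0] * 8 + [1, 0]]
assert det(gram) == -1   # det = 1 * (-1): the Lorentzian lattice is unimodular
```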
Well, for odd lattices, Vinberg showed the diagram was infinite by finding a unimodular lattice whose root system has smaller rank than that of the lattice. And in 24 dimensions, there's a very famous lattice with this property, called the Leech lattice. It's unimodular and it has no roots at all. So in particular, the rank of the root system is less than the rank of the lattice in a rather drastic way: the rank of the root system is actually zero. So for this lattice, we can carry out Vinberg's algorithm just as before. And in fact, we get the same thing as before: we get three copies of E8 and an extra point here, but we get lots of other points as well; in fact, an infinite number of them. So that's the summary of Vinberg's paper: he calculated all the Dynkin diagrams of the reflection groups of unimodular Lorentzian lattices in the cases when they are finite. In the next lecture, I'm going to talk about Conway's work on this lattice here. So Vinberg showed the reflection group was infinite, and Conway came up with this absolutely spectacular calculation: he managed to work out what the Dynkin diagram of this is, and it turns out to be more or less the Leech lattice.