All right. The first speaker today is Todd Drumm from Howard University. Nobody's here to introduce me, so I'm introducing myself — that was exciting. All right. I don't have very many euros on me, but maybe for every typo you find, I'll give you a little bit. And I found the first one already: that date is the day I traveled, not the day of the lecture, so I don't know why it's up there. So what we're going to talk about today is Lorentzian geometry. The first couple of slides are going to be very general — general dimensions. After a while, I'm going to disappoint any physicists who are here and go down to basically three dimensions. Most of the next three talks are going to be about three dimensions, and I'll tell you one reason why, though there are others. But the general stuff holds in all dimensions. So we're going to talk about a flat Lorentzian affine space — we'll actually introduce a little curvature later on for fun and excitement. The tangent space is just R^{n,1}. An affine space doesn't come with an origin; you just choose a point to be your origin. We're not going to write this origin very often, but really, when you see a matrix acting on a point, it means the point in relation to that origin. So we're going to make an identification between a point and the vector that points from the origin to that point. All right, so let's go to the next one. There's our tangent space.
I like to write my vectors vertically; I don't know why. Every once in a while, to save space, I'll put a little transpose on them to make you happy. And we have the standard indefinite inner product. My wife doesn't call this an inner product — she calls it something strange. She's an ergodic theorist, so we don't really know where she is; on average, we kind of have an idea. Anyway, the set of matrices which preserve this inner product is going to be O(n,1). And what do I mean by preserving this inner product? I mean that if I multiply both vectors by A, the inner product is unchanged: Au · Av = u · v. I'm going to use a dot for the inner product, and later on — maybe not today, but in the next couple of days — I might introduce different ways to write it, so we're going to be fluid with the notation, and we'll all be singing together soon. SO(n,1) is the subgroup of O(n,1) with determinant one; of course there are elements with determinant negative one which also preserve the inner product. And a lot of the time we're going to work with the connected component containing the identity. Now, these matrices are a little bit hard to write down — not as easy as elements of O(n) — and it's hard to figure out exactly what they are; we'll maybe talk about how to write them in a nicer way. All right. So we have our space. And I can only draw in three dimensions. Every other Thursday I think I can see in four dimensions, but that's usually chemically induced, and we don't want to do that today.
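As a quick numerical sanity check — a minimal sketch, not from the lecture, assuming the signature convention (+, +, −) with the last coordinate as time — one can verify that a standard hyperbolic boost preserves the indefinite inner product, i.e., that it really is an element of O(2,1):

```python
import math

# Signature convention (an assumption; the lecture keeps the notation fluid):
# (+, +, -), with the last coordinate playing the role of time.
def lorentz_dot(u, v):
    """Indefinite inner product on R^{2,1}: u.v = u1*v1 + u2*v2 - u3*v3."""
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

def apply(A, x):
    """Multiply a 3x3 matrix (list of rows) by a vector."""
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

def preserves_inner_product(A, samples, tol=1e-9):
    """Check numerically that (A u).(A v) = u.v on a few sample vectors."""
    return all(abs(lorentz_dot(apply(A, u), apply(A, v)) - lorentz_dot(u, v)) < tol
               for u in samples for v in samples)

# A hyperbolic boost in the (x1, x3)-plane, an element of the identity component.
t = 0.7
boost = [[math.cosh(t), 0.0, math.sinh(t)],
         [0.0,          1.0, 0.0],
         [math.sinh(t), 0.0, math.cosh(t)]]

samples = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 2.0, 3.0]]
print(preserves_inner_product(boost, samples))  # True
```

A generic Euclidean rotation mixing a space and the time direction would fail this check, which is one concrete way to see that O(2,1) is a different group from O(3).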
So I want some terminology that comes from physics to talk about these vectors. Because the inner product is indefinite, a nonzero vector can have inner product zero with itself. The set of vectors whose inner product with themselves is zero we'll call the light cone. That's from physics, right? You can imagine a spark goes off — where does that spark go? It goes out like that in time. Think of the third dimension as time and the other two as space; or if you want to actually think about physics, maybe you want three dimensions of space, or 27 — I don't know what the new number is, I don't follow it. The vectors which point inside the light cone are called timelike, and the ones outside are spacelike, again coming from physics. The idea is that if two points differ by a timelike vector, you could possibly get from one point to the other in time. If you have two points out here, a point here and a point there, you can't get from one to the other in time — you could maybe join up later, but not at the same moment. So points on the outside differ space-like, and points on the inside differ time-like. All right. Now, here's an interesting little tidbit: Lorentzian geometry doesn't by itself define what the future is, or the past. That's another piece of information you need, and you have to go ahead and declare it. You say: I want this upper nappe to be my future, and this lower one to be my past.
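The classification by the sign of the self-inner-product can be sketched in a few lines (same assumed signature (+, +, −) as above; the helper name `classify` is mine, not the lecture's):

```python
def classify(v):
    """Causal character of a vector in R^{2,1}, assuming signature (+, +, -):
    the sign of v.v decides spacelike / timelike / lightlike."""
    q = v[0]**2 + v[1]**2 - v[2]**2
    if q > 0:
        return "spacelike"
    if q < 0:
        return "timelike"
    return "lightlike"     # on the light cone: v.v = 0

print(classify([1, 0, 0]))   # spacelike: points outside the cone
print(classify([0, 0, 1]))   # timelike:  points inside the cone
print(classify([1, 0, 1]))   # lightlike: on the cone itself
```

Declaring a future is then an extra step the geometry does not make for you — e.g., calling a timelike or lightlike vector future-pointing when its last coordinate is positive.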
And we'll talk a little bit about what weird things can happen when you mix up future and past. Maybe they're not physical, maybe they are — we don't know. All right. So one of the big points of all of this, and what I want to push, is that whenever you have Lorentzian geometry, you also have hyperbolic geometry. There's a model of hyperbolic space that sits inside — in fact, we're going to write down a couple more later on, and they all sit inside here — but the one I like in particular is the very physical model of the hyperboloid: you take all the vectors whose inner product with themselves is negative one. There's a negative nappe too — there's a copy of the hyperbolic plane in the bottom there — but the upper sheet is a very nice physical model, and I think probably the most natural model of hyperbolic space. Really nice thing. I'm not going to go through the differential geometry, but the standard inner product restricts to define a metric of constant curvature negative one on this hyperboloid. And geodesics in this model — this doesn't have to be in three dimensions; it works in higher dimensions too — are the intersections of planes through the origin with the hyperboloid. All right. Now, there are other related models. One of them is, of course, the projective model: we think of points as lines through the origin, and we just say every line is a point. We're going to write these sometimes — I don't know how often — as equivalence classes of points: we'll have a vector [v], which represents all the vectors that are a nonzero multiple of that vector.
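The claim that geodesics are plane sections of the hyperboloid can be checked concretely: the curve γ(t) = cosh(t)·p + sinh(t)·u lies in the plane spanned by p and u through the origin, and a short sketch (assuming the (+, +, −) signature convention) confirms it never leaves the sheet ⟨x, x⟩ = −1:

```python
import math

def lorentz_dot(u, v):
    # signature (+, +, -) assumed
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

def geodesic(p, u, t):
    """Unit-speed geodesic through p with tangent u: cosh(t) p + sinh(t) u.
    Its image is the intersection of span{p, u} with the hyperboloid."""
    c, s = math.cosh(t), math.sinh(t)
    return [c * p[i] + s * u[i] for i in range(3)]

p = [0.0, 0.0, 1.0]   # a point on the hyperboloid: p.p = -1
u = [1.0, 0.0, 0.0]   # unit spacelike tangent at p: u.u = 1, u.p = 0

on_sheet = all(abs(lorentz_dot(geodesic(p, u, t), geodesic(p, u, t)) + 1.0) < 1e-9
               for t in (-2.0, 0.0, 0.5, 3.0))
print(on_sheet)  # True: the curve stays on the hyperboloid
```

The algebra behind the check is one line: ⟨γ, γ⟩ = cosh²t·⟨p,p⟩ + sinh²t·⟨u,u⟩ = −cosh²t + sinh²t = −1.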
And we're going to use a little bit of the projective model. But one of the things that's going to come up again is the Klein model, which is like the projective model intersected with — or you can just project onto — the plane where the (n+1)st coordinate is one. What? All right. So do I owe you money now? Okay. So this should really be n+1 — I've dropped an n+1 right there, and that should be an n+1 right there. We owe John a dinner, I guess. It will be a crappy dinner. That's right, we're even — I owe him a cheesesteak; you have to come to Philadelphia for that one. Okay. So we have these models of hyperbolic space that live inside Lorentzian geometry all the time, and we're going to play them against each other a lot as we go further and further in. All right. So, isometries — as I said before, the linear isometries. O(n,1) has four connected components: SO(n,1) itself has two, and the determinant negative one elements give two more — within each determinant you can preserve or reverse the time orientation. And these are isometries: you can think of them all as isometries of H^n, hyperbolic n-space. Okay. Now, an affine isometry — that's something I don't remember seeing early in my career — is: you hit it with a matrix and then you translate. So you take an element (A, v), where A is an element of O(n,1), and then you translate by a vector v. I like the little different scripts; other people don't, they get a little weepy about these things. See? That's a math cow. Okay. All right. So here's a nice little proposition. I like to prove one thing.
I think this is the only thing I'll prove today. Look at an affine transformation x ↦ Ax + b — this works in any dimension, and it doesn't even have to be Lorentzian in any sense. If A doesn't have eigenvalue one, then the transformation has a fixed point. Fixed points are going to be bad for us — maybe they should be good for us; a lot of things should have fixed points — but right now it would be nice to know whether we have one. And it's a really nice little proof: you solve the equation Ax + b = x, that is, (A − I)x = −b. If A doesn't have one as an eigenvalue, then A − I is invertible, so you can solve for x. It's a nice little observation. Well, anyway — I'm not going to say who did it; I wasn't allowed to say who first observed it. I don't know if he first observed it or not, but I'm not allowed to say it. Okay. So now I'd like to restrict things. All this stuff was very general; now I'm starting to talk about three dimensions. Some of this can actually go back to being more general, but let me just start working in three dimensions, because that's where I'll mostly be. Before I do that, let me say one thing about this proposition. Here's where three dimensions is somewhat more interesting — or maybe less interesting — than four dimensions: all elements of O(2,1) have one as an eigenvalue. That's not hard to prove. But in O(3,1), a generic element — you throw a dart at the dartboard and you hit an element of O(3,1), which I do on occasion; you have a dartboard made of elements of O(3,1), everyone has one of these — does not have one as an eigenvalue. So generically...
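The proposition's proof is literally a linear solve, so here is a minimal sketch of it in Python (the helper name `solve_fixed_point` and the example matrix are mine): find the fixed point of x ↦ Ax + b by solving (A − I)x = −b, which works precisely because 1 is not an eigenvalue of A.

```python
def solve_fixed_point(A, b):
    """Fixed point of the affine map x -> A x + b on R^3.
    Solve (A - I) x = -b by Gaussian elimination with partial pivoting;
    a solution exists (and is unique) exactly when 1 is not an eigenvalue of A."""
    n = 3
    # Augmented matrix [A - I | -b].
    M = [[A[i][j] - (1.0 if i == j else 0.0) for j in range(n)] + [-float(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [M[r][k] - f * M[col][k] for k in range(n + 1)]
    return [M[i][n] / M[i][i] for i in range(n)]

# Example: eigenvalues 2, 1/2, 3 -- none equal to 1, so a fixed point exists.
A = [[2.0, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 3.0]]
b = [1.0, 1.0, 1.0]
x = solve_fixed_point(A, b)
print(x)  # [-1.0, 2.0, -0.5], and indeed A x + b == x
```

If A does have eigenvalue 1 — which, as the talk points out, every element of SO(2,1) does — the system can be inconsistent, and that is exactly what leaves room for fixed-point-free affine transformations.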
You're going to get fixed points, so you're not going to get the free and proper actions we're going to talk about on E^{3,1}. So in some sense that's why we're going to restrict ourselves to R^{2,1}: we don't want to worry about those. All right. So now we're in three dimensions. Let me draw — oh, I have chalk, so I'm going to draw a little bit here. Just to remind you: here we have our light cone. And what's going to be important is looking at planes which are perpendicular to a given vector. If you have a vector v which is spacelike, the perpendicular plane is going to intersect the cone, like that. And by the way, that also defines a geodesic in the hyperbolic plane — because the hyperbolic plane is in here, and a plane through the origin cuts out a geodesic. So a spacelike vector in R^{2,1} will actually define a geodesic in the hyperbolic plane. On the other hand, if you're lightlike — I'm going to draw another one; I can draw a bunch of these — a lightlike vector is null with itself, so its perpendicular plane is going to be... oh, that's crap. That's a terrible picture, but we'll get more pictures of it later, so I'm not going to worry about it. It's actually tangent to the light cone: the vectors which are Lorentzian-perpendicular to a lightlike vector form a plane tangent to the light cone, which is kind of an interesting thing that happens. All right. I should have put that model up on a slide. I just want to make sure we understand — as a review, hopefully mostly a review — that a lot of what we're going to do is an interplay between the hyperbolic geometry and the Lorentzian geometry. And in hyperbolic geometry, one of the things I like to understand is how to write an isometry.
And one of the best things about the upper half-plane model of hyperbolic geometry is that we have a really nice description of the isometry group of hyperbolic space: two-by-two real matrices with determinant one. So it's really, really easy to write those down. Now, you actually want to quotient out by plus or minus the identity, so it's PSL(2,R). So we're going to have this relationship, a dictionary, between elements of SO(2,1) and PSL(2,R), and we're going to fill in that dictionary as we go ahead. All right. Okay, so now we have SO(2,1), and I already talked a little bit about this. Take a generic element of SO(2,1) — I want determinant one; we'll start with the easier parts. There's stuff that goes on in the other connected components of O(2,1), but I'm just going to worry about the identity component for the time being. And these all have eigenvalue one. I think I lied before when I said all elements of O(2,1) have eigenvalue one — elements of O(2,1) could have eigenvalue one or negative one. That's why we like to stay in the identity component. Did I lie before? You don't want to say you lied, Todd? I'm a liar and a cheat, a lyin' cheat — but don't forsake me, because I'm not bad, I'm just a little misled. That's from a bad song that I know, that no one else does, actually. All right, so there's a classification of these isometries in SO(2,1). There are the elliptic ones, whose fixed eigenvector is timelike: an elliptic element is really just a rotation. And there's the parabolic ones, whose fixed eigenvector is lightlike. If you look at their orbits on the hyperbolic plane, they'll be horocycles, if you've heard of horocycles before. I love a good horocycle joke. Do you know any horocycle jokes?
I don't know any horocycle jokes either, but I'd like a good one. But the more interesting case, which we're going to hit over and over again today, is the hyperbolic one. So we're going to talk about hyperbolic elements of SO(2,1) — of the identity component of O(2,1). These particular isometries have three positive real eigenvalues: λ, 1, and λ^{-1}, with 0 < λ < 1. I don't know why — I was always taught to like my λ greater than one, but now my λ is less than one. Let me draw it up here; I have this picture on a lot of slides. So you have a hyperbolic element A, and its eigenvector A⁰ for eigenvalue one is going to be spacelike. It's not obvious, but it's not hard to prove. And you're going to have an expanding and a contracting eigenvector: one whose eigenvalue is bigger than one, and one whose eigenvalue is less than one. And a little bit of thought shows that the expanding and contracting eigenvectors have to be null vectors. Why? Because A preserves the inner product: if Av = λv, then ⟨v, v⟩ = ⟨Av, Av⟩ = λ²⟨v, v⟩, so for λ ≠ 1 you must have ⟨v, v⟩ = 0. If the eigenvector were timelike, say, applying A would move it straight up onto a different hyperboloid, and that can't happen. These are going to be A⁺ and A⁻. The expanding and contracting directions are points on the boundary of hyperbolic space, if you want — the light cone is kind of the boundary, and the hyperbolic plane is living in here; it's a tiny little plane today. A⁺ and A⁻ are the eigenvectors. Okay, am I going too fast? I feel like I'm going too fast. All right.
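For the standard boost, the eigenvectors A⁺, A⁻, A⁰ are known in closed form, so the claims above can be checked directly — a sketch assuming the (+, +, −) signature convention; the closed-form eigenvectors are standard facts about this particular matrix, not something the lecture wrote down:

```python
import math

def lorentz_dot(u, v):
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]   # signature (+, +, -)

def apply(A, x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

t = 0.7
A = [[math.cosh(t), 0.0, math.sinh(t)],
     [0.0,          1.0, 0.0],
     [math.sinh(t), 0.0, math.cosh(t)]]

# Eigenvectors of this boost:
A_plus  = [1.0, 0.0, 1.0]    # expanding: eigenvalue e^t  > 1, null
A_minus = [-1.0, 0.0, 1.0]   # contracting: eigenvalue e^-t < 1, null
A_zero  = [0.0, 1.0, 0.0]    # neutral: eigenvalue 1, spacelike

for v, lam in ((A_plus, math.exp(t)), (A_minus, math.exp(-t)), (A_zero, 1.0)):
    Av = apply(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-9 for i in range(3))

# <v,v> = <Av,Av> = lam^2 <v,v>, so lam != 1 forces <v,v> = 0:
print(lorentz_dot(A_plus, A_plus), lorentz_dot(A_minus, A_minus))  # 0.0 0.0
print(lorentz_dot(A_zero, A_zero) > 0)                             # True (spacelike)
```

Note that both A⁺ and A⁻ are future-pointing here (positive last coordinate), which is exactly the normalization the talk is about to choose.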
And by the way, for the same kind of reason, the plane spanned by A⁺ and A⁻ has to be Lorentzian-perpendicular to A⁰: since A preserves the inner product, ⟨A⁺, A⁰⟩ = ⟨AA⁺, AA⁰⟩ = λ^{-1}⟨A⁺, A⁰⟩ — otherwise you'd be multiplying by a constant — so it has to be zero. I can write that down if you want. By the way, feel free to stop me and say, you're confusing me, or just, stop talking. People have used that. My wife asks me that every day — well, my wife doesn't always say that, but my kids really do, every day. But anyway, A⁰ is perpendicular to the plane that goes from A⁺ to A⁻. All right. These are just elements of SO(2,1) — just the linear part, the matrix part. There are also the affine transformations, and we're going to call them elliptic, parabolic, or hyperbolic just according to what their linear part is. Okay, so now we get to have a little fun. There are lots of choices for A⁰, A⁺, and A⁻, right? Lots of choices. We can choose A⁺ and A⁻ to be future-pointing. You can actually do this without a notion of future and past too, just requiring that they're on the same nappe, but let's make it easy on ourselves and assume we have a future. You know, maybe you all have a future; it's unclear whether I do. I do have a past, but we don't want to go there. Okay, and we're going to make the somewhat arbitrary assumption that they have Euclidean length one — because really, how long are those vectors? They're null vectors, so the Lorentzian geometry says nothing about their length.
So talking about the length of a null vector doesn't really make sense, but we're just going to take Euclidean length one. You don't actually have to make that assumption — you just need the vectors to be nonzero — but let's assume it. Okay, did I already mention this? I forget. Not only do you have an inner product; because you're in three dimensions, you also have a cross product. The same defining property carries over: when you take the Lorentzian dot product of the cross product of two vectors with a third vector, you get the determinant of the three vectors — so in particular the cross product is Lorentzian-perpendicular to both factors. And you can choose A⁰ so that the cross product of A⁻ and A⁺ is a positive multiple of A⁰ — most importantly, that it's greater than zero. So what does that mean? What you're defining is an orientation for your space. There are two possible directions for A⁰, right? For A⁺ and A⁻, declaring them future-pointing fixes the vectors — well, it doesn't fix the length, but we fixed the length. But A⁰ could point this way or that way, and there's a particular direction that we get to pick: the one given by the cross product, the right-hand rule. If you've ever taught a right-hand rule question — I taught a little physics, and I loved to see the students all working on their example: which way is that going? — the point is that A⁻ crossed with A⁺ points in the same direction as A⁰. So that defines it. Okay? All right, and we already talked about the plane perpendicular to A⁰, which contains the axis.
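The Lorentzian cross product can be sketched explicitly — assuming signature (+, +, −), it turns out to be the Euclidean cross product with the last component negated, which you can derive from the defining property ⟨u ⊠ v, w⟩ = det[u v w]; the function names here are mine:

```python
def lorentz_dot(u, v):
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]   # signature (+, +, -)

def det3(u, v, w):
    """Determinant of the 3x3 matrix with columns u, v, w."""
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
          - v[0]*(u[1]*w[2] - u[2]*w[1])
          + w[0]*(u[1]*v[2] - u[2]*v[1]))

def lorentz_cross(u, v):
    """Lorentzian cross product on R^{2,1}, defined so that
    lorentz_dot(lorentz_cross(u, v), w) == det3(u, v, w) for every w.
    With signature (+, +, -) it is the Euclidean cross product with
    the last component negated."""
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            -(u[0]*v[1] - u[1]*v[0])]

u, v, w = [1.0, 2.0, 0.5], [0.0, 1.0, 3.0], [4.0, -1.0, 2.0]
c = lorentz_cross(u, v)
assert abs(lorentz_dot(c, u)) < 1e-9 and abs(lorentz_dot(c, v)) < 1e-9
assert abs(lorentz_dot(c, w) - det3(u, v, w)) < 1e-9

# Orientation convention: for the boost's future-pointing null eigenvectors,
# A- x A+ comes out a positive multiple of the neutral eigenvector A0.
A_minus, A_plus, A_zero = [-1.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]
print(lorentz_cross(A_minus, A_plus))  # [0.0, 2.0, 0.0] = 2 * A_zero
```

So the right-hand-rule picture in the lecture is exactly this last line: A⁻ ⊠ A⁺ singles out one of the two possible directions for A⁰.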
All right, so that's a really important point: given those two vectors living in the upper nappe, the future, that determines the direction for A⁰. It's going to come up as a big deal right now. All right, so now we get to define the Margulis invariant. This is going to be one of the more interesting things — we're not going to prove the big theorems about the Margulis invariant, but we're going to define it. Okay. The first thing, before I go any further: if you have a hyperbolic affine transformation g — again, it takes a little bit of algebra, but you can show there's a unique invariant line parallel to A⁰. So if you have such an affine transformation, the line is sitting — I don't know where — somewhere out here. And what g does is move points along this line by translation. Everything on it is moved by translation, which is not hard to see. So there's the line, and all you know is that it's in the direction of A⁰. It takes a little bit of algebra to prove; not so bad to do. And the Margulis invariant is this: you take a point x on that invariant line, and you see how far your affine transformation moves you along that invariant line. How far did we move? Well, we need something of unit length to measure against. What do we have of unit length? A⁰. So we're going to measure (gx − x) · A⁰. Pretty straightforward thing to do, but it has an interesting consequence. This is a length, right? And if you want to think about it: take all of E^{2,1} and mod out by the cyclic group — I think this is not working —
but anyway, mod out by the group generated by this element, and what you're going to get is some sort of filled-in torus, a solid torus. And there's going to be a unique closed geodesic on that solid torus. So what is the Margulis invariant measuring? It's measuring the Lorentzian length of that unique closed geodesic. But here's an interesting point: it could have a negative length. That's going to come up in a second. So the Margulis invariant — and I'll tell you, in a second, a little story Margulis told me about when he came up with it and what he was trying to prove. So another way to think of the Margulis invariant: it's just the signed Lorentzian length of that unique closed geodesic. Okay — oh shoot, that was not that one. So the first thing is that the Margulis invariant is zero if and only if you have a fixed point. In fact, the way you understand this signed Lorentzian length is to restrict x to the closed geodesic — but it works no matter where you are. You can take an x off that line, because the other motion is perpendicular to A⁰, so the inner product just doesn't even notice it. Really, you don't have to worry about which x you pick; the meaning behind the definition of the Margulis invariant comes from picking x on the axis, but you don't really have to do that. Okay? It's a class function — invariant under conjugation. Nice little fact. And here is probably the most interesting property, this one right here: α(Aⁿ) = |n| α(A). Okay? Well, let's draw a couple of pictures before we go any further, because I think this is a good picture to draw.
Okay, so here we have an affine transformation: here's x, and here are A⁺ and A⁻, and the transformation takes x to g(x). Now do the same thing with the inverse. Which way does the inverse move? The opposite direction — I can do that; I'm good at that. But what happens to A⁺ and A⁻ when you take the inverse? Well, one is contracting and one is expanding, so you're switching the expansion and the contraction. When you pass to the inverse, this becomes (A⁻¹)⁺ and this becomes (A⁻¹)⁻. I don't think that's a good picture, but the idea is you're changing the direction. So in particular you change A⁰: the neutral eigenvectors of A and A⁻¹ are negatives of each other. Okay? So no matter what, the sign of the Margulis invariant is unchanged in going from A to A⁻¹. That's wacky. And I'll tell you something even wackier in about a second or two. Okay — whoa, whoa, whoa. Oh, okay. I want to tell you what proper actions are. That's right: you write a few lectures and it's like, oh no, what did I say here? I just want to talk about proper actions for a second, because in the next part we're going to ask which of these are proper actions. What do I mean by a proper action? There's a definition — we'll go through the definitions — but basically it means you have a group which is discrete and acts properly and freely, or, for those of you who learned mathematics not from my advisor, a free and properly discontinuous action. And the nice part is that you get a manifold when you mod out by that group. Okay. All right.
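The two properties just described — the Margulis invariant does not depend on the base point x, and g and g⁻¹ get the same value because both A⁰ and the translation direction flip — can be sketched numerically. This is my own illustration, not the lecture's, with the (+, +, −) signature and the boost's known eigenvectors assumed:

```python
import math

def lorentz_dot(u, v):
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]   # signature (+, +, -)

def apply_affine(A, b, x):
    Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    return [Ax[i] + b[i] for i in range(3)]

def alpha(A, b, A_zero, x):
    """Margulis invariant alpha(g) = (g x - x) . A0.  The choice of x does
    not matter: (gx - x) - (gy - y) = (A - I)(x - y) is perpendicular to A0."""
    gx = apply_affine(A, b, x)
    return lorentz_dot([gx[i] - x[i] for i in range(3)], A_zero)

t = 0.7
A = [[math.cosh(t), 0.0, math.sinh(t)],
     [0.0,          1.0, 0.0],
     [math.sinh(t), 0.0, math.cosh(t)]]
b = [0.3, 1.5, -0.2]
A_zero = [0.0, 1.0, 0.0]        # neutral eigenvector, oriented by A- x A+

# Independence of the base point:
a1 = alpha(A, b, A_zero, [0.0, 0.0, 0.0])
a2 = alpha(A, b, A_zero, [5.0, -2.0, 7.0])
assert abs(a1 - a2) < 1e-9

# The inverse: g^{-1} x = A^{-1}(x - b).  Its expanding and contracting
# eigenvectors swap, so its oriented neutral eigenvector is -A_zero,
# and the Margulis invariant of g^{-1} equals that of g (same sign!).
A_inv = [[math.cosh(t), 0.0, -math.sinh(t)],
         [0.0,           1.0, 0.0],
         [-math.sinh(t), 0.0, math.cosh(t)]]
b_inv = [-sum(A_inv[i][j] * b[j] for j in range(3)) for i in range(3)]
A_zero_inv = [0.0, -1.0, 0.0]
a_inv = alpha(A_inv, b_inv, A_zero_inv, [0.0, 0.0, 0.0])
assert abs(a1 - a_inv) < 1e-9
print(a1, a_inv)  # both 1.5
```

Contrast this with an ordinary signed translation length, which negates under inverse: the orientation convention through the cross product is what makes the sign a genuine invariant of the cyclic group, not just of the generator.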
And in particular, for the Euclidean case there's an old-school result of Bieberbach — I don't even know exactly when; I know he was alive during World War II, or right before, and he might not have been a nice guy all of his life. But basically, for Euclidean motions, you know that the groups giving cocompact proper actions are either Z^n or contain Z^n with finite index. Okay. And then you start to ask: do we have a theorem like this for cocompact affine actions? By the way, that was for cocompact actions — compact quotient. All right. So we have a conjecture by Auslander, which says that if you have a discrete group of affine transformations which acts properly and cocompactly, then the group is virtually solvable. Now, I don't know too much about algebra, but I know one kind of group which is not virtually solvable: free groups. Anything that contains a free group is not virtually solvable. That's what I know. Okay. And the conjecture is true up to dimension six. Fried and Goldman proved it in dimension three, and did a lot more: they proved it for Lorentzian actions too — if the quotient is compact, you actually have this. And Milnor wrote a paper — a beautiful paper; does Milnor write crappy papers? Maybe he does. I don't think so; I think he writes pretty much good papers all the time. Anyway, he said he wondered whether it would still be true if you took away the cocompactness. Okay. All right. So now we'll go on — no, this is a little aside. And the answer to Milnor's question is no, and that's the end result of what we're going to talk about today. Okay. So let's go back — that wasn't an aside. Now we're going to go back to the Margulis invariant. Okay.
Margulis, as he explained it to me one time, was actually trying to prove Auslander's conjecture for Lorentzian geometry — or really the question that Milnor asked; I don't write "conjecture," because Milnor did not conjecture it, he just said, I wonder whether that's true. Margulis was actually trying to prove the actions can't exist. And one of the things he did was prove the opposite sign lemma, which goes like this: if you have two elements whose Margulis invariants have opposite signs, then the group they sit inside cannot act properly. Okay. And he thought he was done — he thought he'd always be able to find elements with opposite signs. And what I'm going to tell you, and what we know, is that we can actually find groups where they're all the same sign. All the same sign. Okay. So that's the opposite sign lemma. All right. So whenever we find these proper affine actions, which we're going to talk about in a little bit — whenever you have a proper Lorentzian affine action — you know that the signs of the Margulis invariants of all the elements in the group are the same. Okay. Now, here are the weird things that happen. You can define the same invariant in higher dimensions: don't think of Lorentzian groups anymore, but increase the signature. Instead of R^{2,1} — the standard metric with two pluses and one minus, the Lorentzian case — take three pluses and two minuses, or four pluses and three minuses. Those groups SO(n+1,n) are all real split; their elements all have one as an eigenvalue, along with expanding and contracting eigenvalues, so it's very similar. So a lot of what we're discussing today can be extended to these higher dimensions — but not the higher dimensions you might want, not SO(3,1), but SO(3,2), SO(4,3), SO(6,5)... I can't do arithmetic today. You can keep going up.
But in every other dimension — so in the (3,2) case and in the (5,4) case — what happens is that when you take the inverse, because of the number of expanding directions and the number of contracting directions, you actually get that the Margulis invariant of the inverse is negative the original: α(A⁻¹) = −α(A). And so by the opposite sign lemma, you can never get counterexamples to the Auslander or Milnor questions in those signatures. So a lot of what you'll see, if you see Margulis or Soifer talk — they're interested in understanding what we're doing here in higher dimensions, in the (4,3), (6,5) cases and everything going up from there. All right. It's a wacky thing. All right. So the first examples of these things, which we're going to talk about in more detail, were constructed by Margulis. I've read the paper; I don't think anybody else should — no, no, it's a hard paper to read. A lot of estimates, a lot of really, really hard things; you don't really see what's going on, okay? And these are proper actions which are not virtually solvable, so there are free groups in there — we're usually going to find free groups which act nicely on this thing. That's what we're interested in finding: free groups that act nicely on this thing. All right. So the next examples were originally done by — well, a long time ago; let's just say it, let's not say who they were done by, but a long, long time ago. Some of us are a little bit older than we used to be. That's right — my future and my past. And the idea is to take what happens on a hyperbolic surface — because all of this is happening on a hyperbolic surface too, right? — and move it up to the affine Lorentzian setting. That's what we want to do. Okay. So what we're going to try to find are proper actions of free groups. Let's make it easy.
We're not going to talk about extensions of free groups — in fact, free groups are pretty much it. Well, there's a triangle group too, a little bit. Okay. Groups that act properly, that act nicely, on E^{2,1} — affine Lorentzian space. And what we're going to do is mimic what happens in the hyperbolic plane. Well, what happens in the hyperbolic plane? How do you make a free group act on the hyperbolic plane? You take an element which takes the red geodesic to the blue geodesic, right? And you can invoke whatever big words you like — Schottky groups, combination theorems — but you get a free group acting freely. Okay. And that's the way we build all these things. The fundamental domain is the region bounded between the four disjoint axes, and the action matches them up, red to red, blue to blue, like that. And we want to take this picture and ask: can we draw a picture like this one dimension up, in Lorentzian geometry? Okay. So let's start to draw it. The problem is that we have to take a line and extend it to some surface of codimension one. But if you extend the lines to planes, the planes are in general going to intersect — you can get two parallel planes, but beyond that there's going to be a lot of intersection, and you're not going to be able to deal with that. It's not going to work, okay? So there's got to be a way to extend this notion of a line to some codimension-one plane or surface. Let's see if we can do that. All right — and here it is: a crooked plane. Of course, everything should be crooked. You take a line and you move it up to a surface, and now you have a crooked plane. All right, so let's see what this crooked plane is doing.
So, a crooked plane is made out of four pieces. There are the pieces inside the light cone — we'll call them the stem. There's this part, and there's another part down there: it's the continuation of the same plane going down. And you can see the hyperbolic geodesic in there: it's the intersection of the stem with the hyperboloid — there you can see the hyperboloid, there's the picture of H^2 in there. So that's easy enough. Then the question is how you connect those two pieces, and you connect them by taking half-planes which are tangent — Lorentzian-perpendicular — along the two null lines, which we call the hinges. I don't use that term very often, but we'll call them the hinges. Those half-planes are the wings, and there's the stem. I got grief from — who was it? — a geometer in Toronto, more on the symplectic side. She said it disturbed her, because stems come from a plant and wings come from an animal, and we were mixing the two. She was very upset about that, and I had to agree she was right. All right. And if you get a 3D printer, you can start to build these things. We're not going to talk about these particular examples, but here you can see: here's a stem on each of these boundary pieces, and there's a wing — stem and wings. This solid is basically an affine triangle — a Lorentzian triangle, or prism if you want — bounded by three crooked planes.
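The pieces just named — stem, wings, hinges, and the spine that comes up later — can be pinned down as follows. This is roughly the Drumm–Goldman convention; the exact choice of half-plane for each wing is a genuine choice, which comes up again below:

```latex
Fix a spacelike vector $v \in \mathbb{R}^{2,1}$ and a point $p$.  The
Lorentzian plane $v^{\perp}$ contains two null lines, spanned by
future-pointing null vectors $x^{-}(v), x^{+}(v)$ labeled by an
orientation convention.  The crooked plane $C(v,p)$ is $p$ plus the
union of:
\begin{itemize}
  \item the \emph{stem}
        $\{\, w \in v^{\perp} : \langle w, w \rangle \le 0 \,\}$,
        the causal part of $v^{\perp}$, whose timelike directions trace
        out the geodesic in $H^{2}$;
  \item the two \emph{hinges}, the null lines $\mathbb{R}\,x^{\pm}(v)$
        bounding the stem;
  \item the two \emph{wings}, one half of each null plane
        $x^{\pm}(v)^{\perp}$, bounded by the corresponding hinge.
\end{itemize}
The \emph{spine} is the spacelike line $p + \mathbb{R}v$, which lies in
the union of the wings.
```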
Okay, and they all fit together, because it's going to tile — I'm going to tell you this. I was at a lecture Bill Goldman was giving — he's actually the one who arranged for these to be made — and he sent them around the audience, and these two young women were working on them and couldn't get them to fit, so they passed them to me, and I just went boom, boom, boom and they all fit together, and they said, that was amazing. I said, I've been thinking about these a little bit too long for that to be amazing. But anyway, if you want to play with these and look at them, they all fit together; you can pass them around. These are what we call crooked planes, and they're kind of the answer to the question of how to make a properly discontinuous action. By the way, if you've got a crooked plane, you've got two crooked half-spaces: one of the things about a crooked plane is that it divides Lorentzian space into two pieces — that piece and the other piece. So we'll call those crooked half-spaces. At some point we just started calling everything crooked, and I can blame Bill for that — he's the one who came up with "crooked." If you want to blame Bill Goldman, feel free to do that. I'm supposed to be nice to my co-authors today; I forgot. All right. So I have a little theorem. Basically, suppose you have a bunch of mutually disjoint crooked planes — crooked half-spaces, really. Think of them as the analogs of the half-planes bounded by the disjoint geodesics in the hyperbolic picture: that geodesic bounds that half-space, that one bounds that one.
So you have these 2n half-spaces, and each generator A takes the complement of one half-space onto the opposite half-space — then the group Gamma acts properly. Okay. So now the question is: how do you find proper actions? We're going to start with a free discrete linear group — in other words, we start with a picture in the hyperbolic plane defined by these 2n geodesics — and we put a crooked plane along each one. And what you have to get used to — you can see it pretty easily just by starting to draw — is that if you have two disjoint geodesics and you draw the corresponding crooked planes through the origin, the only place they intersect is at the origin. Okay. Because of the way you build them, they kind of wrap around each other — you can see it on these models, they're twisting. There's no curvature — well, no real curving — but somehow they're wrapping around each other because of the way these things are put together. All right. And then you separate them. You just say: okay, I've got these 2n crooked planes, and I move them away from each other. Okay? And that moving away does give rise to a proper deformation: once you've moved them apart, you can find an affine Lorentzian transformation which moves one crooked plane to the next. All right. So let's see this picture. Here I have four of them. So there's one crooked plane. There's another crooked plane. Here's a third crooked plane. And there's the fourth crooked plane. And so now I want to be able to separate them.
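The properness criterion being used here can be stated in ping-pong form. This is a paraphrase of Drumm's theorem, with the precise disjointness hypotheses suppressed:

```latex
Let $H_{1}^{\pm}, \dots, H_{n}^{\pm}$ be $2n$ pairwise disjoint crooked
half-spaces in $\mathbb{E}^{2,1}$, and let
$\gamma_{1}, \dots, \gamma_{n}$ be affine Lorentzian transformations
with
\[
   \gamma_{i}\big(\, \mathbb{E}^{2,1} \smallsetminus
   \operatorname{int} H_{i}^{-} \,\big) \;=\; H_{i}^{+},
   \qquad i = 1, \dots, n .
\]
Then $\Gamma = \langle \gamma_{1}, \dots, \gamma_{n} \rangle$ is free of
rank $n$ and acts properly discontinuously, with fundamental domain the
complement of the union of the $2n$ half-spaces.
```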
I want to be able to move them. And you can tell by looking at them that there's a nice place to move them: within the plane of the stem. The stem is defined by a plane, and if you move them by vectors in a certain quadrant of that stem plane, they move away from each other. It takes a little bit of seeing, but one way to think about it is: if I just took these crooked planes and translated this one by a vector that goes that way, and that one the other way, they're all moving away from each other in that plane. So you want to take a vector right along here. Okay. And there's a whole quadrant of such vectors that move them apart. Actually they can be wiggled more than you might imagine, but when you prove things, you want to make it concrete. Right — I'm going way too fast, I'm almost at the end for today, but it's always good to end early; I don't care what people say, an hour, an hour and fifteen minutes — have I only gone 45 minutes? Gosh. Well, anyway. So here's the theorem, and it's nice: given any free discrete group in SO(2,1), I can find a proper affine deformation whose linear part is that group. Let me see if I can say that right: you give me a free discrete linear group, and I can find a group of affine transformations, acting properly, whose linear group is the original one. For every one of those linear groups, I can find a proper affine one. Okay. And the next lecture is going to show you a lot more of these things — draw some pictures, do other things. Okay. So there was a conjecture. It's no longer a conjecture.
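Since the sign of the Margulis invariant keeps coming up, here is a small numerical sketch of computing it for one affine transformation and its inverse. The normalization below (future-pointing null eigenvectors, positively oriented frame) is one common convention among several, so only the *consistency* of the signs, not the absolute sign, should be trusted:

```python
import numpy as np

# Lorentzian inner product of signature (2,1): <x,y> = x1 y1 + x2 y2 - x3 y3
J = np.diag([1.0, 1.0, -1.0])

def dot(x, y):
    return x @ J @ y

def margulis_invariant(g, u):
    """alpha(g,u) = <u, x0(g)> for hyperbolic g in SO(2,1), u the translation.

    x0(g) is the unit spacelike 1-eigenvector, oriented so that
    (x-, x0, x+) is a positively oriented basis, where x-/x+ are the
    future-pointing null eigenvectors for the contracting/expanding
    eigenvalues.  (One common normalization; conventions vary.)
    """
    vals, vecs = np.linalg.eig(g)
    vals, vecs = vals.real, vecs.real
    x_minus = vecs[:, np.argmin(vals)]              # contracting null direction
    x_plus = vecs[:, np.argmax(vals)]               # expanding null direction
    x0 = vecs[:, np.argmin(np.abs(vals - 1.0))]     # neutral eigenvector
    if x_minus[2] < 0:                              # make null vectors
        x_minus = -x_minus                          # future-pointing
    if x_plus[2] < 0:
        x_plus = -x_plus
    x0 = x0 / np.sqrt(dot(x0, x0))                  # unit spacelike
    if np.linalg.det(np.column_stack([x_minus, x0, x_plus])) < 0:
        x0 = -x0                                    # fix the orientation
    return dot(u, x0)

# A boost in the (x1,x3)-plane: eigenvalues e^t, 1, e^{-t}
t = 1.0
g = np.array([[np.cosh(t), 0.0, np.sinh(t)],
              [0.0,        1.0, 0.0       ],
              [np.sinh(t), 0.0, np.cosh(t)]])
u = np.array([0.3, 1.0, -0.2])

a = margulis_invariant(g, u)
# the inverse affine map (g,u)^{-1} = (g^{-1}, -g^{-1} u)
ai = margulis_invariant(np.linalg.inv(g), -np.linalg.inv(g) @ u)
print(a, ai)  # in SO(2,1) these agree: alpha(gamma^{-1}) = alpha(gamma)
```

The agreement of the two values is the point: within SO(2,1), inverting an element does not flip the sign, which is exactly why groups with all signs equal can exist there.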
It basically said: for any free discrete group which acts freely and properly on affine space, you can find crooked planes which bound a fundamental domain. Okay. We called it the crooked plane conjecture — we even named it, Goldman and I. And then it was proved, and I'm not going to tell you how they did it, because they didn't do it the way I wanted. They did it a much more clever, much more insightful way — depressing the living daylights out of me. They're too smart, they're too nice, and they're too hard-working: Danziger, Guéritaud, and Kassel. They were able to prove that for every free proper action you can find a fundamental domain bounded by pairwise disjoint crooked planes. But they don't prove it directly; they prove it by a different method, which relates everything to what's actually happening on the underlying surface — you're actually going to deform the underlying surface. They understand this in a really striking way, and that's some of what we're going to talk about tomorrow. So I went really fast, so we'll stop there. Are there any questions? Because I can go on for a long time. [Question about drawing a crooked plane.] This one or the first one? Yeah, you can't see through that one as much. So I can do this. Where's François? He can draw these better than I can — I've seen him do it, and it's like, gee whiz, how do you draw these things? He says: well, you draw a thousand and it gets easy. So no matter what, I always start with the light cone. Yeah, there are two wings — that's a wing right there, and that's a wing. So you take this — I like drawing it a little bit off-axis like that — so there's your stem, and then you take a wing: a half-plane. This one connects to down here.
There's a half-plane, and over here you take this half-plane, and it goes in the front. A great question: why don't I take the other half-plane? What was the first theorem I talked about? Margulis's Opposite Sign Lemma. One of the things you can prove is that if you pick the right wings — I probably picked the negative convention; I don't remember which — then when you have two crooked planes which are separated, the transformation alpha that takes one to the other has positive Margulis invariant. There's only one set of those choices you can make: if you choose the other one, the crooked planes will always intersect, and you can't move them away from each other. So the choice of which set of half-planes you use to make these separate is equivalent to the choice of the sign of your Margulis invariant — and they all have to be the same. By the way, if you take any other half-plane here, going in the other direction, it's always going to intersect. So the Margulis alpha and the choice of wings are really intertwined. You have to pick a certain direction — I forget which; I think we've changed the conventions over the years, positively oriented or extended, I don't even know the names anymore — but the direction you pick has to be consistent. Does that help? [Question about orbifolds.] You can make orbifolds, to some extent. Virginie Charette has a paper about taking, say, that picture — the red one in the middle — and reflecting across what we call the spine. Because on this crooked plane, not only do we have wings and a stem, we also have a spine: there's a unique spacelike line in there. And if you take a Lorentzian reflection in that spine, it just reverses things.
So do I have those slides back? I want to show you a theorem. Sorry. Okay, so I told you we were only going to deal with the connected component of SO(2,1). Well, O(2,1) has four connected components. In SO(2,1), every element has 1 as an eigenvalue, but the other eigenvalues can be negative. In fact, you get proper examples with such elements, and we'll talk about this more later on. But here's the "proof" of why you can allow these negative eigenvalues. Here we go — it's a really hard proof. What does a negative eigenvalue do? It takes the upper nappe to the lower one: it exchanges the future and the past, really screwing up any physics that might happen to be going on, and you get a picture like this. And if you don't think that's exactly the way I found this out, you're crazy. I had built these by hand a long time ago, and it was like: hey, I did something wrong. Oh, no — I did something right! This is amazing. Because it flips it: such a transformation moves the crooked plane and then flips it. So you can kind of imagine. And what we'll talk about tomorrow — a little foreshadowing — is that even though this reverses the orientation of the underlying surface, the orientation of the three-manifold survives: you get an oriented Lorentzian manifold over an unoriented surface, which I think is kind of weird and disturbing. [Question.] It's a manifold with a free group — you can't really get too many extensions of free groups; yeah, it's hard to do that. If you throw away the free part — if you allow fixed points — you can get certain things.
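To make the negative-eigenvalue point concrete, here's a tiny numerical check. The matrices are my own illustrative choices, not the ones from the slides: an element of SO(2,1) outside the identity component still has 1 as an eigenvalue, but its other eigenvalues are negative, and it swaps the future and past light cones:

```python
import numpy as np

J = np.diag([1.0, 1.0, -1.0])   # Lorentzian form of signature (2,1)
t = 1.0
boost = np.array([[np.cosh(t), 0.0, np.sinh(t)],
                  [0.0,        1.0, 0.0       ],
                  [np.sinh(t), 0.0, np.cosh(t)]])
flip = np.diag([-1.0, 1.0, -1.0])   # half-turn about the x2-axis; det = +1
h = flip @ boost

# h preserves the Lorentzian form and has determinant 1 ...
assert np.allclose(h.T @ J @ h, J) and np.isclose(np.linalg.det(h), 1.0)
# ... with eigenvalues 1, -e^t, -e^{-t}: two of them negative
print(sorted(np.linalg.eigvals(h).real))
# and it sends the future-pointing vector (0,0,1) to the past:
print(h @ np.array([0.0, 0.0, 1.0]))  # time coordinate comes out negative
```

So h lies in SO(2,1) but not in its identity component: it is time-reversing on the cone while still being form-preserving and orientation-preserving on the whole space.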
So what you can do is this: imagine you take this configuration where you have these three crooked planes, separated, and you take the reflection in each of the spines. You get a proper action, but it's not free — each reflection fixes its spine. So you have to throw away those three lines and all their iterates, but you still get a proper action. So you do get orbifolds to some extent. Okay? There are also things about what happens with elliptic elements — that's not really proper actions of free groups, but I know Thierry Barbot does a lot with elliptics, actually gluing along these things. But that's not where I'm going today or tomorrow. Okay? [Question: are these manifolds tame?] These are all handlebodies — that's what François, Fanny, and Jeff proved. So they're all, what, tame manifolds? Yeah, they're all handlebodies. But did they prove this via the crooked plane conjecture — was that the implication that all of them are handlebodies? Oh, right, right — and there's another proof that I'm not going to give; you'll have to ask Fanny, she'll tell you other things that I don't know. All right? We're going to talk a little bit more about that tomorrow — what those parameters are and how to describe them; we'll go through it. Today I just want to start off, build these things, and show you what they look like, as opposed to talking about the whole deformation space. Again, the method that DGK — I just call them DGK when they're not around — figured out can describe so many things; it's really, really clever. But I'll let them describe all of that. We'll show you pictures — the pictures Bill used to show.
If you've seen a lecture by Bill Goldman, you've seen the crazy pictures we have, and I'll show you some of those tomorrow. A free group on two generators — so if you have a free group on two generators... well, this is tomorrow. I'll leave this for tomorrow. Any other questions? [Question.] Yeah, that's all it is — another way to think about these is just Schottky groups, and you have a combination theorem like before. What's a little bit harder to prove is that the translates fill up all of affine space. There could be examples where they kind of run off — I don't know if anybody's ever constructed examples where they don't fill up, but it's hard to construct them. One of the hard parts of all of this is: once you have this fundamental domain, do its translates fill up all of affine space? And it's a lot of estimates — it reminds me of a lot of what Margulis did originally. A lot of estimates: you take a ball and you ask, how much does it get squeezed, how much doesn't it, things like that. And you want to show there are no accumulation points. It takes a while to prove that, but it's not that exciting. It's exciting to know it, but it's not exciting to prove it. All right?