So the title of this lecture is, What's All This Good For? Okay, so I've been busy defining lots of spaces; that's what I did last time and the first part of this time. So there are not going to be any proofs or constructions now. I'm going to tell you what people have been using these spaces for. So the most obvious thing, I mean, why did I make these spaces? I had these groups. I started with the group of outer automorphisms of a free group, then I went on to the group of automorphisms and these groups A_{n,s}, which are kind of generalizations of automorphism groups of free groups. So the most obvious thing I hoped this was good for is to learn about the groups. Okay, so the first thing you can do if you have a nice space with a nice group action, especially if your space is the nicest, namely it's contractible with a proper cocompact action, is use it to find a presentation for the group. So Ken Brown wrote a paper showing: if G acts on a space X which is a CW complex, with the action proper and cocompact, then you can use that to write down a presentation for G in terms of the stabilizers. Actually, you don't even need contractible; all you need is simply connected. So you find a fundamental domain for your action. This is basically a generalization of Bass–Serre theory. You take your simply connected cell complex and look at the one-skeleton; that basically gives you the generators, in the quotient. And then you've got these two-cells, and they're going to basically give you your relations. So that's been done. Armstrong used K_{n,1} to give a presentation for Aut(F_n). But actually, we didn't quite use K_{n,1}, because in order to write this down, you need to know all the stabilizers.
And the stabilizers can get pretty hairy as the rank of the graph goes up. The stabilizer is basically the group of automorphisms of the graph, and that gets more and more complicated. But it turns out that inside here there is a subspace of K_{n,1} which is also simply connected, not contractible, but simply connected, and the stabilizers are very tame. They hardly change at all as n goes up. So given this nice situation, we could write down a presentation for the group. So that's the first thing, maybe the easiest thing, to do with a complex with a nice action. Your space doesn't even have to be all that nice, just simply connected. So then, okay, now we have a presentation. What can we do with the presentation? Well, you could draw a Cayley graph and all that, but the easiest thing to do with a presentation is abelianize it. These are not new results, incidentally. This presentation was new, it had features that other presentations didn't have, but there had certainly already existed presentations for the group, and so in particular abelianizations of the groups were already known. But the point is, and this is in general: if you have a presentation, you can compute the abelianization. The abelianization is an invariant of the group, but it's only the first invariant in a long list of invariants, namely the homology of the group. So what is the homology of a group? Here's one way to define what the homology of a group is, via a theorem of Hurewicz. If G acts properly on a contractible space X, then the homology of the quotient space, X mod G, is an invariant of G. So it doesn't matter what kind of a space you have: as long as it's contractible and G acts properly, when you compute the homology of the quotient, you get a group invariant.
The first homology is the abelianization. And this is one way to define the homology of a group. There are other, more algebraic ways, but this is a fine definition. So we have contractible spaces. I told you that algebraic topologists like contractible spaces with proper actions. They don't really care too much whether it's cocompact, and this is the reason: if you're interested in the algebraic topology of these groups, that's all you need. Whoops, nobody stopped me, even the people that should know. What did I say? I forgot to say something, and in fact this is wrong as stated. Yeah, I said properly; I heard it, somebody whispered it. This is only true if G acts freely on a contractible space. Then the quotient is an invariant. We don't have a free action. But it turns out that Baumslag and Taylor conveniently proved, well, what they proved implies, that all of the A_{n,s} have torsion-free subgroups of finite index. A subgroup certainly acts on the same space as A_{n,s} does, and if the subgroup has no torsion and the stabilizers are finite, then the subgroup acts freely. So the homology of gamma is the homology of K_{n,s} modulo gamma. You can also define cohomology the same way. Okay, that's nice. Right: I've got a torsion-free subgroup gamma of finite index in my group. My group acts properly on K_{n,s}, so gamma acts freely, and I can compute the homology of gamma by using this space. So that's nice. I was really interested in the whole group A_{n,s}, though, but all is not lost. So, remark: if I have a finite group H, and I look at its homology, or cohomology, and instead of using Z coefficients I tensor the chain complex with Q, then I get the homology of H with Q coefficients. I want to use reduced homology to make this a cleaner statement. It turns out that's zero: the reduced homology of H with Q coefficients is zero in every dimension.
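The vanishing of rational homology for finite groups follows from the standard transfer argument. Here is a quick sketch of that argument (my editorial addition, not from the lecture):

```latex
% For a finite group H and the trivial subgroup 1 <= H, the composite of
% the transfer with the map induced by inclusion is multiplication by |H|:
H_i(H;\mathbb{Q}) \xrightarrow{\ \operatorname{tr}\ } H_i(1;\mathbb{Q})
                  \xrightarrow{\ \iota_*\ }            H_i(H;\mathbb{Q}),
\qquad \iota_* \circ \operatorname{tr} = |H|\cdot\mathrm{id}.
% Since H_i(1;\mathbb{Q}) = 0 for i > 0 and |H| is invertible in
% \mathbb{Q}, the identity of H_i(H;\mathbb{Q}) factors through zero,
% so H_i(H;\mathbb{Q}) = 0 for all i > 0.
```

The same argument over Z only shows that the homology is killed by multiplication by the group order, which is why torsion can survive with integral coefficients.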
So heuristically, you can think that if I'm willing to use homology with rational coefficients instead of integral coefficients, then I can't tell whether my group is acting freely or properly. Finite stabilizers, as far as the rational homology is concerned, are the same as trivial stabilizers. And in fact, you can make this precise. This implies, eventually, that the homology of these groups A_{n,s} with rational coefficients is actually isomorphic to the homology of the quotient space with rational coefficients. So I lose some torsion information, but I still get information about the homology of my group, at least with rational coefficients. And the proof of this is an easy spectral sequence argument that I'm not going to subject you to. So if you feel like it, you can sit down, and if n and s are small enough anyway, you can figure out exactly what this space looks like, exactly what the quotient by the whole group is, and figure out its homology. Right. Is the homology of A_{n,s} with rational coefficients related to the homology of gamma with rational coefficients? Of gamma? Not directly, no. But we needed gamma to be able to do this argument somehow. In order to get the actual cohomology, a space whose cohomology with integral coefficients is the same as that of the group, you need a free action. But if you're only interested in rational coefficients, then the homology thinks the action is a free action, so you can actually compute rational homology. Right. I still have a board up there somewhere, and this brings me to the next point. It turns out that the homology of these groups is very mysterious; nobody really understands it at all. But we do understand something about the Euler characteristic, at least as of a week ago; well, parts of it have actually been understood longer than that. So I just want to tell you something. I've been talking about stuff that started in 1920. We finally moved up to 1970 and then up to 1990.
I want to tell you something from last week. Okay? So, homology. For a finite CW complex X, we have an Euler characteristic: the sum over i of minus one to the i times the number of i-cells. And another way to compute it is taking the sum of minus one to the i times the dimension of the i-th homology of the space with trivial rational coefficients. If I were going to use integral coefficients, I would have to say the rank of the homology. But I'm using trivial rational coefficients, so I can say the dimension of the homology. That turns out to be the same number. So, right. I know the homology of A_{n,s} with rational coefficients is the same as the homology of that quotient space. If I take that space, then the sum of minus one to the i times the dimension of the homology of A_{n,s} with rational coefficients is the Euler characteristic of X. So that gives me something: if this number isn't zero, for instance, it tells me that there is some homology, which is not a priori clear. So this is an interesting number, the Euler characteristic of this space, that people have been trying to compute. So, up to date: Morita, Sakasai, and Suzuki computed this for n less than or equal to 11. That's as far as the computer would go, and it took a couple of years. It's negative, and for n equal to 11 it's somewhere around 1,200 in absolute value. So apparently there's lots of rational homology. On the other hand, so far in all the computations that have been done, there's lots of homology, and this is a negative number, so there's lots of homology in odd dimensions. But at this point in time, exactly one homology class in odd dimensions has ever been found. So there's lots of stuff hiding that we don't understand and can't find. There's an open question for you: where is all of this homology? But it turns out that this number, this Euler characteristic, is not actually such a good invariant of groups.
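As a toy illustration of the two ways of computing an Euler characteristic, here is a small sketch (my own example, not from the lecture): the circle, built as a hollow triangle with three vertices and three edges. The alternating sum of cell counts and the alternating sum of rational Betti numbers both come out to zero.

```python
from fractions import Fraction

def rank_over_Q(mat):
    """Rank of an integer matrix over Q, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    rk = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rk, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(len(m)):
            if r != rk and m[r][col] != 0:
                f = m[r][col] / m[rk][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

# Circle as a hollow triangle: vertices v0, v1, v2; edges v0v1, v0v2, v1v2.
# Columns of d1 are the boundaries of the three edges.
d1 = [[-1, -1,  0],
      [ 1,  0, -1],
      [ 0,  1,  1]]

n_cells = [3, 3]                     # numbers of 0-cells and 1-cells
chi_cells = n_cells[0] - n_cells[1]  # alternating sum of cell counts

r1 = rank_over_Q(d1)
betti0 = n_cells[0] - r1             # no boundary map out of dimension 0
betti1 = n_cells[1] - r1             # no 2-cells, so H_1 is all of ker(d1)
chi_betti = betti0 - betti1
```

Here both `chi_cells` and `chi_betti` equal 0, with Betti numbers 1 and 1, as expected for a circle.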
So one nice thing about Euler characteristics is that if you have a fibration and everything is a finite CW complex, then the Euler characteristic of the total space is the product of the Euler characteristic of the fiber times the Euler characteristic of the base. So if you wanted to define some sort of Euler characteristic for groups, you'd like it to have the same sort of feature, where instead of a fibration you think about short exact sequences: kernel, group, and quotient. You would like it to be true that the Euler characteristic of the group is the product of the Euler characteristic of the kernel times the Euler characteristic of the quotient, but that doesn't work; the naive definition is not multiplicative. But Wall said he fixed that. He said: if G contains a torsion-free finite-index subgroup gamma, let's define the Euler characteristic of G to be the Euler characteristic of gamma divided by the index of gamma. Now that kind of makes us happy, because we know how to define the Euler characteristic of gamma: for a torsion-free finite-index subgroup it's just the regular Euler characteristic of a space that we know, more or less. It turns out that this is independent of the choice of gamma and behaves well with short exact sequences. So we could try to find some finite-index subgroup of our A_{n,s}, compute its Euler characteristic, divide by the index, and we'd get a number. But it turns out that Brown comes to the rescue and says you don't have to do all that work. If you have G acting on X, like up there, X contractible now, with the action proper and cocompact, then the Euler characteristic of G, as defined by Wall, is the sum of minus one to the dimension of sigma, divided by the order of the stabilizer of sigma, where the sum is over all orbits of cells sigma. So we don't have to bother with a finite-index subgroup and computing its Euler characteristic and dividing by the index. We can just look at the action of G on our space.
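Brown's formula is easy to evaluate once you know the orbits of cells and their stabilizer orders. Here is a minimal sketch (my own example; the action used is the classical one of PSL(2,Z) on its Bass–Serre tree, not one of the spines from the lecture):

```python
from fractions import Fraction

def equivariant_euler_characteristic(orbits):
    """Brown's formula: chi(G) is the sum, over orbits of cells, of
    (-1)^dim divided by the order of the cell's stabilizer.
    Each orbit is given as a pair (dimension, stabilizer order)."""
    return sum(Fraction((-1) ** dim, stab_order) for dim, stab_order in orbits)

# PSL(2,Z) acting on its tree: two orbits of vertices, with stabilizers
# of orders 2 and 3, and one orbit of edges with trivial stabilizer.
chi = equivariant_euler_characteristic([(0, 2), (0, 3), (1, 1)])
# chi == Fraction(-1, 6), the classical value of chi(PSL(2,Z))
```

The point of the example is exactly the one in the lecture: the answer is a rational number, not an integer, because the stabilizer orders appear in the denominators.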
So in the definition of Wall, the chi of gamma is using the previous definition? Yeah, either one works. Chi of gamma is just the regular, usual definition; it's just the Euler characteristic of that quotient space. Because being torsion free means, you know, you have a space that actually computes the homology of the group. If gamma isn't torsion free, we don't have such a space. I mean, we have a space, but it doesn't compute the homology of the group, just the rational homology of the group. And that's not good enough, it turns out. Okay, so I just want to advertise a recent result. I started at 4, right? Yeah. So anyway, John Smillie and I in 1987, not too recent, used this to get a recursive method for computing the Euler characteristic of Out(F_n). And we did it for n less than or equal to 11, in fact. For n equal to 11, it's approximately minus 2,000. It's a rational number now, not an integer anymore; it's approximately 2,000 in absolute value. And it turns out it's always negative. Everything we computed was negative, and it seemed to be growing pretty fast. Then Zagier computed it for n less than or equal to 100, and he sent us this page with these long, long numbers. They're all very large, getting very big very fast, they're all negative, et cetera. But still we didn't really know exactly what was happening. We couldn't prove, for instance, that it's always negative. So now, as of 2019, we can: the Euler characteristic of Out(F_n) is negative. And asymptotically it grows like a gamma function, a factorial-type function: gamma of n minus three halves, if I get this right, over log squared n, and I think there's a square root of 2 pi in there somewhere too. Okay, so this is a gamma function, which is like a factorial, so this grows almost factorially fast.
The other thing about these Euler characteristics, these Wall Euler characteristics, is that if you have an arithmetic group, it turns out the Euler characteristics you get can be expressed as products of values of zeta functions. So let me just say here: there's a close relation with zeta functions. This connection with zeta functions was known for arithmetic groups for a long time. And it made a big splash in the 1980s that the same thing is true for mapping class groups. That's a theorem of Harer and Zagier: the rational Euler characteristic of a mapping class group is a zeta function value. So it turns out Out(F_n) has the same thing. Okay, I'm done advertising recent results. But the point is, yeah? You said that these Euler characteristics were rational? Rational numbers, yes; here's the definition. So by a special value of a zeta function, you mean after removing some pi? Well, yeah, up to things like that, yeah. So the point is that in order to prove this theorem, we looked at the action on this spine that I defined. So that's another use for the spine. What else can you do with the spine? Well, as a geometric group theorist, you like the fact that the spine is quasi-isometric to the group. So if you want to compute quasi-isometry invariants of your group, you can use that space instead. Oh, yeah, I dropped the eraser, didn't I? Where were we, Euler characteristic? So, QI invariants. For example, what's a quasi-isometry invariant? How about the number of ends of a group? That's a quasi-isometry invariant. So what is that? If you look at the Cayley graph, or any space that's quasi-isometric to the group, how many ends does it have? Well, you look at compact sets. You look at points outside the compact sets.
And you let the compact sets get bigger and bigger and see how many leftover pieces there are. And there's either zero, if the group is finite, or one, or two, or infinitely many, like if you have a free group. And Stallings proved that a finitely generated group has zero, one, two, or infinitely many ends. Okay, so it's easy to see, maybe it's even easier to see with the other space. The spine is quasi-isometric to the group, but so is the bordification, so let's talk about that one for a while. So it's easy to see that this bordification of O_n has one end if n is at least three. In other words, if I fix a base point and take two points that are very far away from it, I can connect them by a path that stays very far away from it. That's what it means to have one end: two points that are far away, I can connect them by a path that stays far away. There's also a notion of higher connectivity at infinity. Having one end is called connected at infinity, and there are higher versions, like simply connected at infinity. It's true that O_n is simply connected at infinity. So what does that mean? It means if I take a loop that's way far out here, way far away from the base point, then I can fill it in by a disk that stays way far away from the base point. That's what simply connected at infinity means. In fact, this bordification of O_n is (2n minus 5)-connected at infinity, and that's what Bestvina and Feighn originally proved using the bordification. So this is a theorem of Bestvina and Feighn. So far I'm just talking about the space, or the group. This actually has implications for the cohomology, too, by work of Bieri and Eckmann. I said what simply connected at infinity means: if you have a loop that's far out, you can fill it with a disk. So you can imagine what k-connected at infinity means: if I have a k-sphere far out, I can fill it in with a ball that stays far away.
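The "remove a big compact set and count the leftover pieces" recipe can be carried out by hand on a finite chunk of a Cayley graph. Here is a sketch (my own illustration, not from the lecture) for the free group F_2: deleting the ball of radius r from the tree leaves one component for each reduced word of length r + 1, so the count 4 times 3^r grows without bound, reflecting the infinitely many ends.

```python
GENS = ["a", "A", "b", "B"]          # generators of F_2 and their inverses
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduced_words(max_len):
    """All reduced words in F_2 of length at most max_len."""
    words, frontier = [""], [""]
    for _ in range(max_len):
        frontier = [w + g for w in frontier for g in GENS
                    if not (w and INV[w[-1]] == g)]
        words += frontier
    return words

def far_components(max_len, r):
    """Delete the ball of radius r from the ball of radius max_len in the
    Cayley graph of F_2 and count connected components of what is left."""
    alive = {w for w in reduced_words(max_len) if len(w) > r}
    seen, comps = set(), 0
    for w in alive:
        if w in seen:
            continue
        comps += 1
        stack = [w]
        seen.add(w)
        while stack:
            v = stack.pop()
            # Neighbors: drop the last letter, or append a non-cancelling one.
            nbrs = [v[:-1]] + [v + g for g in GENS if INV[v[-1]] != g]
            for u in nbrs:
                if u in alive and u not in seen:
                    seen.add(u)
                    stack.append(u)
    return comps

far_components(5, 1)   # 12 = 4 * 3 components
far_components(5, 2)   # 36 = 4 * 3**2 components
```

Running the same computation on the Cayley graph of Z instead would always return two components, which is the sense in which Z has two ends.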
I mean, with a k-sphere, I can fill it in with a (k plus 1)-ball that stays far away. There you go. That's what k-connected at infinity means. Okay, by work of Bieri and Eckmann, this implies that the homology and cohomology of this group satisfy a kind of duality that's like Poincare duality, but not quite. So in Poincare duality you would have a duality between the k-th homology of some manifold and the (d minus k)-th cohomology of the manifold. Well, here I get an isomorphism between homology and cohomology in complementary dimensions, but it's going to have twisted coefficients. Here little d is 2n minus 3, and big D is the dualizing module, the only non-vanishing cohomology of the group with group-ring coefficients. Oops, this isn't quite true as stated; let's take a torsion-free subgroup of finite index. This is true if I put in a torsion-free subgroup. So there's some sort of a duality which is like Poincare duality, except that the homology has coefficients twisted by some module, and the cohomology doesn't. Okay, so not only do you get information about the number of ends and how connected the group is at infinity, you get information about the cohomology as well. There's a duality. Right. What other quasi-isometry invariants do you know? You may know about Dehn functions and isoperimetric functions. I guess Anna didn't really talk too much about Dehn functions this morning, did she? No. So, another quasi-isometry invariant: the Dehn function of a finitely presented group G measures, basically, how many relations you need to apply. If you've got a word in the generators that gives you the identity in the group, it measures how many relations you have to apply to prove that.
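To make "how many relations do you have to apply" concrete, here is a toy sketch (my own example, in a much simpler group than Out(F_n)): in Z^2 with presentation generated by a and b subject to ab = ba, reducing the word a^n b^n a^(-n) b^(-n) to the identity costs n^2 applications of the commuting relation with this strategy, which matches the quadratic Dehn function of Z^2.

```python
def commuting_relator_count(word):
    """Count the adjacent swaps needed to move every a-letter to the left
    of every b-letter; each swap is one application of the relator ab = ba.
    Lowercase is a generator, uppercase its inverse; once sorted, adjacent
    inverse pairs cancel for free."""
    swaps, b_letters_seen = 0, 0
    for ch in word:
        if ch in "bB":
            b_letters_seen += 1
        else:
            swaps += b_letters_seen
    return swaps

n = 5
w = "a" * n + "b" * n + "A" * n + "B" * n   # a^n b^n a^-n b^-n
commuting_relator_count(w)                  # 25 = n**2
```

This particular count is only an upper bound produced by one filling strategy, but it shows the flavor: the Dehn function records the worst-case cost of such fillings over all words of a given length.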
So it turns out that, up to some fuzziness, this is a quasi-isometry invariant of the group. And it can also be thought of this way: if you're thinking about a word as a loop in the Cayley graph, then it tells you how many relators you have to apply to check that that word is trivial, or in other words how many two-cells in the Cayley complex you need to stick in to prove that this loop contracts. So it's a one-dimensional isoperimetric function. So since it's a quasi-isometry invariant, you can calculate it if you have any space that's quasi-isometric to your group. And we have lots of spaces, two spaces at least, that are quasi-isometric to our group. Let's see, Out or Aut? I don't think it matters. Just calculate the Dehn function using K_{n,0}. So how do you do that? Well, K_{n,0} is a simplicial complex, and we know exactly what the simplices are. So take a loop, and what you have to do is define some sort of natural paths from all the points on the loop to one point. Any old paths will do, but it's good to have ones that fit together nicely. And if you choose the paths nicely, you can actually count how many triangles it takes to fill them in. Define paths, count triangles. So that'll tell you how many triangles you need to fill in a loop, and that gives you an upper bound on the Dehn function, which turns out to be exponential in the length of the loop. Oh yeah, I'm sorry, this was Allen Hatcher and myself in 2000-something-or-other; I didn't write it down. And to get a lower bound, you can use a trick: people already knew a lower bound for GL(3,Z). It's exponential. And there's a map from Out(F_3) to GL(3,Z).
Namely, you take a free group and abelianize it; you get a free abelian group, and that induces a map on automorphism groups, and that gives this group. But we're not so interested in groups; we're more interested in spaces. So there's a nice space that GL(3,Z) acts on, called a symmetric space, and of course Out(F_3) acts on this K_{3,0}. The way they proved that GL(3,Z) has an exponential Dehn function is by finding loops in the symmetric space that are very hard to fill. So the trick is to take one of these loops that's hard to fill, find a map from our space to the symmetric space, and lift the loops from that space back to ours. The lifted loops can't be easy to fill up here, because if they were easy to fill up here, you could push the filling down and get a cheap filling down there. But you can't: these loops are very hard to fill down there, so they must be very hard to fill up here. So that's the idea. But that only works for n equal to 3. And then for n bigger than 3, it turns out that you can use the different ways of thinking about outer space, the other descriptions of O_n, to bootstrap the n equal to 3 result to any n. This bootstrapping was done first by Handel and Mosher, and later Martin Bridson and I did another version; that was in 2010. Okay, so I've told you how to compute cohomological invariants of your group, and various quasi-isometry invariants. Oh yeah, one thing I wanted to say: why is this space so good at figuring out things about Out(F_n)? Here's another nice result. So I have an action; let's just take s equal to 0 so I make sure I don't get any details wrong. A_{n,0} acts on K_{n,0}, and it acts by simplicial automorphisms. I showed you that the action doesn't change the graphs at all. It just changes the markings. So any way you think about K_{n,0}, the action doesn't really change the shapes of the simplices; it just changes, kind of, where they are.
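The abelianization map from Aut(F_n) to GL(n,Z) mentioned above is easy to compute in coordinates: you just take exponent sums. Here is a small sketch (my own illustration; the word convention, lowercase for a generator and uppercase for its inverse, is an assumption of the example):

```python
def abelianized_matrix(images):
    """Matrix of the automorphism of Z^n induced by an automorphism of F_n.
    images[j] is the image of the j-th generator, written as a word in the
    generators; entry (i, j) is the exponent sum of generator i in that word.
    Lowercase letters are generators, uppercase letters their inverses."""
    n = len(images)
    mat = [[0] * n for _ in range(n)]
    for j, word in enumerate(images):
        for ch in word:
            i = ord(ch.lower()) - ord("a")
            mat[i][j] += 1 if ch.islower() else -1
    return mat

# A Nielsen automorphism of F_3: a -> ab, b -> b, c -> c.
abelianized_matrix(["ab", "b", "c"])
# [[1, 0, 0],
#  [1, 1, 0],
#  [0, 0, 1]]
```

Such elementary matrices generate GL(n,Z), which is one way to see that the induced map on automorphism groups is surjective.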
So that gives you a map from A_{n,0} to the group of simplicial automorphisms of this spine. This works for any n, and the theorem is that this map is an isomorphism if n is at least 3. That theorem is Martin Bridson and myself. So one reason you can get so much juice out of these spines is that the whole group is encoded in the spine: the group of simplicial automorphisms of this spine is the group. And later Aramayona and Souto proved the analogous thing for the sphere complex: A_n acts on the whole sphere complex, and they proved that that map is also an isomorphism. So you could use either the whole sphere complex, which is a hyperbolic complex, remember, which some people like, or you could use the spine, and in either case those combinatorial objects encode the entire group. And I've got nine minutes to tell you about the metric theory of outer space, and I'm going to do it, a little bit. So Indira objected to me on the very first day that I was describing a topological space instead of a metric space. Well, yeah, there isn't really one obviously nice metric to put on outer space. You could, for instance, put a metric where all these simplices are regular Euclidean simplices. That's a fine metric. I don't think anybody can prove anything about that metric, but it's a fine metric. But on the other hand, there is another metric, called the Lipschitz metric, on outer space O_n. I don't think anybody has studied it for O_{n,s}, not that I know of. Anybody out there studying it for O_{n,s}? Yeah, okay. So: the Lipschitz metric on O_n. The people that have done the work on this use the graph model. So what's the Lipschitz metric? First of all, let me remind you what the Lipschitz constant of a map is. If X and Y are compact metric spaces and you've got a map f from X to Y, then the Lipschitz constant of f is, I've got compact spaces so I can say the max, the max over pairs of distinct points x, x prime in X of the distance between f(x) and f(x prime) divided by the distance between x and x prime. So it's the maximum amount distances can get stretched by this map.
Compact metric spaces. So now, how do I define the distance, supposing I have two points? Let me draw them like this. Here's G; here's G prime. I've got two maps from the rose, to G and to G prime: the markings. Let me call this point X and this one X prime. The distance between X and X prime is going to be, it turns out I can say, the inf of the Lipschitz constants of maps f. And what are these maps f? They're maps f from G to G prime that I can put here; in other words, f composed with g is homotopic to g prime. So that's the definition of the distance between the two points X and X prime. It's not too hard to check that it satisfies the triangle inequality, and that the distance between X and X prime being zero implies that X and X prime are the same point. But it does not satisfy that the distance between X and X prime equals the distance between X prime and X. So it's a kind of a funny metric. This is really easy to see. Let's take two points in outer space, both the same rose with the same marking, but with different edge lengths. Here's one point, and here's another point, same marking, where a is tiny and b is big. So the map here is the identity, and I want the map that minimizes the stretch. Let me try it this way first: b might have to stretch to almost twice its length, but that's as far as anything has to go. So if this is X and this is X prime, the distance between X and X prime is less than two. On the other hand, what happens if you try to measure the distance between X prime and X? Well, this tiny little loop here, which has length epsilon, has to get stretched all the way up to a half. So it's at least one over two epsilon, where this has length epsilon. So the distance between these two points is different if you go in different directions. Shouldn't that say not equal to the distance between X prime and X? No, if the distance is zero, then the points are the same. No, below, the distance between X and X. Oh, yes. Thank you.
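The asymmetry in the rose example is easy to check numerically. Here is a sketch (my own illustration; it assumes, as in the example on the board, that for two roses with the same marking the optimal map sends each petal to itself at constant speed, so the minimal stretch is the worst petal-length ratio):

```python
def stretch(lengths_from, lengths_to):
    """Lipschitz constant of the petal-to-petal map between two marked roses
    with the same marking: the worst ratio of petal lengths."""
    return max(t / f for f, t in zip(lengths_from, lengths_to))

eps = 0.01
x  = (0.5, 0.5)        # rose with both petals of length one half
xp = (eps, 1 - eps)    # same marking: petal a is tiny, petal b is big

stretch(x, xp)   # (1 - eps) / 0.5 = 1.98 < 2: b stretches to almost twice
stretch(xp, x)   # 0.5 / eps = 50 = 1/(2*eps): the tiny loop must blow up
```

So the "distance" from x to x prime stays below two no matter how small epsilon gets, while the distance back blows up like 1/(2 epsilon), which is exactly the failure of symmetry described above.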
Yes. Okay, so what am I going to say about this? Only one thing: Bestvina used this metric to prove a really nice theorem. You could always symmetrize the metric? Yes, you can, but then you lose geodesics. With this asymmetric metric, it is a geodesic metric, but if you symmetrize, it's not. That's an interesting feature. Right, let me tell you what Bestvina did. Well, he did many things with this metric, but one thing he did is use it to find a nice representative g from G to G of an automorphism of F_n. So you identify the fundamental group of G with F_n; that's by a marking, the same marking on both sides. And so you've got an automorphism of the free group, and he showed how to realize it by a really nice map here, called a train track map. Actually, his proof only does it if the automorphism is what's called fully irreducible, and I don't think anybody's really written down more than that. So this is a very nice proof of a result originally due to Bestvina and Handel. It's the beginning of a long story about how to study automorphisms of free groups: if you have a single automorphism and you can find a nice representative, you can figure out lots of things about that automorphism. And, yeah, it's five o'clock. They've proved many, many things about single automorphisms, and groups of automorphisms, and subgroups of automorphisms, using train track maps and various improvements: relative train track maps, improved relative train track maps, completely split relative train track maps, et cetera, et cetera. So they keep getting better and better representatives for automorphisms and proving more and more theorems about the group. So I think I'm going to stop.