So we're going to start gently. I'm going to try to recall basic facts about Fuchsian groups and about Teichmüller space in a way which sets us up to think about Anosov representations. So this is the overview, which I basically said. You can study representations into PSL(2,R). And then it's fairly obvious how you might try to abstract those techniques to study representations into isometry groups of other hyperbolic spaces. We're going to be fancy: instead of calling them isometry groups of other hyperbolic spaces, we call them rank one Lie groups. And then higher Teichmüller theory attempts to generalize these techniques to the setting of higher rank Lie groups, where things get a little bit trickier. And the real reason they get tricky is that these Lie groups are, again, isometry groups of manifolds, but the manifolds they're isometry groups of are no longer negatively curved. They're only non-positively curved. And that causes us a fair amount of trouble. OK, so I feel like I should start by defining H2 for you. But I won't. So we're going to call a representation of pi 1 of S into PSL(2,R) Fuchsian if it's discrete and faithful. And if you have a discrete, faithful representation, then you can form a quotient surface. So maybe here's your naked topological surface S over here. And then you form your quotient N sub rho, which is H2 modded out by rho of pi 1 of S. And then rho gives you an identification of the fundamental group of this surface with the fundamental group of that surface. So that gives you a homotopy equivalence, and in fact a homotopy class of homotopy equivalences. But because these are surfaces, every homotopy equivalence is homotopic to a homeomorphism. So in fact, we get a homeomorphism. OK, but I want to look at this from a different viewpoint. So John talked about not liking differentiation.
I'm going to talk about a viewpoint where we don't even like continuity: talk about Fuchsian groups from the coarse geometry viewpoint. So one basic point is that if you have one of these Fuchsian groups, then that gives you a quasi-isometry from the group into H2. And you construct that quasi-isometry as follows. So over here you have the fundamental group of the surface, which I won't even try to draw. And over here we're in H2, and we have a tessellation of H2, maybe by these octagons. And then we draw a dual graph to that tessellation, with one vertex for every copy of the fundamental domain. And we're going to think of that as a copy of the Cayley graph sitting within H2, or in fact a copy of the group sitting within H2. And then we want to think of this representation as giving rise to a quasi-isometry from the group into H2: you take a group element gamma, you pick a base point x naught, and you identify the group element with the orbit point gamma of x naught. OK, well, for this to make sense, pi 1 of S needs a metric. So for those of you who haven't seen it before, the natural metric that you put on a group is the word metric. That means you choose some generating set, so there's a whole family of metrics on your group. The distance between the identity and some other element is just the minimal word length of a representative of that element. And the distance between two elements, gamma 1 and gamma 2, is just the minimal word length of a representative of gamma 1 gamma 2 inverse. We often think of this in terms of the Cayley graph. So let's do the Cayley graph of the free group, for instance. We put a vertex for 1. And we go out here: over here is A, over here is A inverse, here is B, here is B inverse. And then we keep going: there's AB.
There's AB inverse. There's A squared. There's BA. And you just keep going forever. So you can think of there being a graph associated to your group, where each edge corresponds to multiplying by a generator. And the natural metric on that graph, where you give every edge length 1, induces the word metric on the group. So what I've really drawn, or sort of indicated here, is the Cayley graph stuck inside H2. So now we've got a metric. Now what does it mean to be a quasi-isometry? Well, a quasi-isometry is sort of like an isometry if you have bad eyesight. An isometry means distances are preserved. The first thing you might do to loosen up that notion is go from isometry to bilipschitz: you don't care up to a factor of K. So you might say the distance between f of x and f of y is between 1 over K times the distance between x and y and K times the distance between x and y. But then we don't really care what's happening locally at all, and we're going to allow ourselves an additive constant of error as well. So we just care that in the large, as you make progress in X, if you map over into Y, then you're making a comparable amount of progress in Y. So the most basic example is the inclusion of Z inside R, just as the integers; that's a quasi-isometry. In one direction it's in fact an isometric embedding. And if we also go backwards and map any real number to the least integer above it, say, then that map is a quasi-isometry in the other direction; its multiplicative constant would be 1, and its additive constant would also be 1. But you might also ask: what does R2 look like? R2 looks a lot like the group Z2. So this is the Cayley graph.
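As an aside, the word metric just defined is easy to compute in a free group, since every element has a unique reduced representative. Here is a minimal Python sketch (the function names and string encoding are my own, not from the talk): letters a, b, with capitals for inverses.

```python
# Word metric on the free group F(a, b): a word is a string over
# 'a', 'b', 'A', 'B', with a capital letter denoting the inverse.

def reduce_word(w):
    """Freely reduce: cancel adjacent inverse pairs like 'aA' or 'Bb'."""
    out = []
    for x in w:
        if out and out[-1] == x.swapcase():
            out.pop()                      # cancel x against its inverse
        else:
            out.append(x)
    return "".join(out)

def inverse(w):
    """The inverse word: reverse the string and invert each letter."""
    return "".join(x.swapcase() for x in reversed(w))

def word_distance(g1, g2):
    """d(g1, g2) = word length of a reduced representative of g1 * g2^{-1}."""
    return len(reduce_word(reduce_word(g1) + inverse(reduce_word(g2))))

print(word_distance("ab", "b"))    # -> 1: the vertices ab and b are one edge apart
```

In the Cayley graph picture, this number is exactly the length of the shortest edge path between the two vertices.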
Graph paper is the Cayley graph of Z2, and we have its inclusion into Euclidean space. And now we're bumping that up into the hyperbolic world, where we look at the Cayley graph of the surface group, and we've embedded it inside H2, quasi-isometrically. And there's a general principle at work here, called the Milnor–Švarc lemma, which says that whenever you have a group acting properly and cocompactly by isometries on some reasonable metric space, then you in fact get a quasi-isometry between the group and the metric space. So here we see that Z is quasi-isometric to R. Here we see that Z2 is quasi-isometric to R2. And over here is supposed to be a picture: the fundamental group of a closed surface is quasi-isometric to the hyperbolic plane. OK? And the basic idea is, well, you just take this orbit map. You have gamma acting on your space X, so now this is no longer H2, this is X; it has changed its stripes. And you have some nice action by isometries. You pick some base point x0, and there's gamma 1 of x0, that's gamma 2 of x0, gamma 3 of x0. And that gives you an embedding of the group into there, which you can extend, in fact, to a map of the Cayley graph into there. Now, why do you think that should be a quasi-isometry? For one thing, these edges all have comparable length: each edge has length at most the maximum amount x0 is moved by a generator. So if I choose K to be the maximum distance that x0 is moved by any generator, then right away this map is K-Lipschitz. So you have one direction of the quasi-isometry. And what do we need to know for the other direction? We need to know that if you take a word of big length, then, in fact, its orbit point is far from the origin.
That's roughly proper discontinuity, vaguely said. But we also want linear progress away from the origin. So what do we do? We take any orbit point over here, some general point gamma of x0, and we join it to x0 by a path. Well, the diameter of the quotient is at most some number, call it c. So you divide this path into segments of size c, and at each division point you pick a nearby orbit point. So here you pick maybe gamma 1 x0, this is gamma 2 x0, this is gamma 3 x0, gamma 4 x0, et cetera. So if you divide this into five segments, then near each division point you can find an orbit point, and consecutive orbit points are at most 3c apart: c along the segment, plus c to each of the two chosen orbit points. And since the action is properly discontinuous, there are only finitely many group elements moving x0 within 3c of itself. So each pair of consecutive orbit points, which differ by at most 3c, differ by at most some bounded number of generators, call it r1. So I can get from gamma 1 to gamma 2 in at most r1 generators, from gamma 2 to gamma 3 in at most r1, and from gamma 3 to gamma 4 in at most r1, et cetera. So that tells me that if my orbit point is not very far from my base point, then, in fact, the word length of my element is not very large. This is a very general principle in geometric group theory. And one application of that principle is that when you have a Fuchsian representation, the orbit map is a quasi-isometry. So what's the second key property of Fuchsian representations? Well, it's that when you wiggle a Fuchsian representation a little bit, it remains a Fuchsian representation.
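In the flat example, everything in this chain argument can be made completely explicit. Here is a toy check of my own (not from the talk): Z2 acts on R2 by translations, the orbit map of the base point 0 is just the inclusion, the word metric for the generating set plus-or-minus e1, e2 is the L1 distance, and the quasi-isometry inequality holds with multiplicative constant the square root of 2 and additive constant 0.

```python
import itertools
import math

# Z^2 acting on R^2 by translations, base point x0 = (0, 0).
# Word metric (generators +-e1, +-e2) = L1 distance; orbit map = inclusion.

def word_dist(g, h):
    """Word metric on Z^2: the L1 distance."""
    return abs(g[0] - h[0]) + abs(g[1] - h[1])

def orbit_dist(g, h):
    """Euclidean distance between the orbit points g.x0 and h.x0."""
    return math.hypot(g[0] - h[0], g[1] - h[1])

# Check the quasi-isometry inequality with k = sqrt(2), c = 0:
#   (1/k) * word_dist  <=  orbit_dist  <=  k * word_dist
k = math.sqrt(2)
box = list(itertools.product(range(-5, 6), repeat=2))
for g in box:
    for h in box:
        assert orbit_dist(g, h) <= k * word_dist(g, h) + 1e-9
        assert word_dist(g, h) <= k * orbit_dist(g, h) + 1e-9
print("quasi-isometry inequality verified on the box")
```

Of course, in the flat case there is no proper discontinuity to worry about; the point of the Milnor–Švarc argument is that the same two-sided estimate comes out of any proper cocompact action.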
We saw in François's talk that the Fuchsian representations form an entire component of the representation variety. So what's the general way of seeing this? If you take a quasi-isometric embedding into the hyperbolic plane and you wiggle it a little bit, it remains a quasi-isometric embedding. And if the orbit map of a representation is a quasi-isometric embedding, then, in fact, that representation must be discrete and faithful. Because you can't have an infinite order element which fixes a point; the orbit map would just go bing, bing, bing, bing and stay there. And you also can't have a very large number of orbit points near the origin. So you have to be discrete. OK. So what does this follow from? This is a basic fact about hyperbolic space: if on every segment of length 50 miles you make definite progress, then you make definite progress over your entire trip. So if you're traveling for a million hours, and over every segment of 100 hours you make definite progress, then over the entire million hours you make definite progress. Now notice this is very much not true in Euclidean space. Because what could you be doing in Euclidean space? You could be traveling around a circle. So this is not the boundary of H2; this is a circle in the Euclidean plane now. And if you take a very large circle and look locally, it looks like you're traveling along a line. This is calculus. This is what we spend an entire semester teaching our students: locally you look like your tangent line. So no matter how long a time frame you measure and see that you look like you're making definite progress, you could be circling around and coming back home. You could just be on a very large circle.
So what happens in the hyperbolic plane? I was trying to figure out what to do with this; I don't want to do a whole class on hyperbolic trigonometry. But we know that in the hyperbolic plane, if we walk 100 feet and make a right angle, walk another 100 feet and make a right angle, walk another 100 feet and make a right angle, and walk another 100 feet, about how far will we be from home? Almost 400 feet, right? In the hyperbolic plane. So there's this calculation I like to do to illustrate how bad the hyperbolic plane is. Suppose you're golfing and you're 100 yards from the hole, which we'll think of as 300 feet, and you hit a shot which is only one degree off. How far are you from the hole? In Euclidean space, you're about five feet from the hole: you can estimate that as 300 times 2 pi divided by 360, which gives about five. So what happens if you do this calculation in hyperbolic space? I've drawn this picture, but actually, how far are you from the hole? You're over 590 feet from the hole. So even if you walk 300 feet in hyperbolic space, then turn around and head nearly backwards, turning at a 179 degree angle, you'll end up about 590 feet from home. It's that same basic principle: you made definite progress. Another thing, and this is true in any so-called hyperbolic metric space: if you take a map which is making definite progress, one of these so-called quasi-geodesics, then in hyperbolic space you fellow-travel the geodesic. So if you're going from x to y, and you make (K, c)-quasi-geodesic progress, so you're sort of wandering, but you're making definite progress in definite time, then you have to have stayed near the most efficient possible path.
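Both numbers quoted here can be checked with the hyperbolic law of cosines and a little matrix arithmetic in the upper half-plane model. This is my own back-of-the-envelope script (curvature minus 1, with "feet" as units of hyperbolic distance), not anything from the talk:

```python
import math

# Hyperbolic law of cosines (curvature -1):
#   cosh c = cosh a cosh b - sinh a sinh b cos(gamma)
def third_side(a, b, gamma):
    return math.acosh(math.cosh(a) * math.cosh(b)
                      - math.sinh(a) * math.sinh(b) * math.cos(gamma))

# The golf shot: two sides of length 300 with a 1-degree angle between them.
miss = third_side(300, 300, math.radians(1))
print(round(miss, 1))            # ~590.5: over 590 feet from the hole

# The 179-degree turn is the same triangle: walk 300, turn, walk 300, and
# the included angle is 180 - 179 = 1 degree, so the answer is identical.

# The square walk: four 100-foot legs with right-angle turns, done by
# composing isometries of the upper half-plane in SL(2, R).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def T(t):        # translate distance t along the geodesic through i
    return [[math.exp(t / 2), 0], [0, math.exp(-t / 2)]]

def R(theta):    # rotate by angle theta about the point i
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

g = T(100)
for _ in range(3):
    g = mat_mul(g, mat_mul(R(math.pi / 2), T(100)))

# Standard identity: for g in SL(2,R), cosh d(i, g.i) = (a^2+b^2+c^2+d^2)/2.
dist = math.acosh(sum(x * x for row in g for x in row) / 2)
print(round(dist, 1))            # ~397.9: almost 400 feet from home
```

The square-walk answer is within a few feet of the total path length 400: in the hyperbolic plane, four right-angle turns barely help you turn back.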
But what happens in Euclidean space is that if you're going a long distance from x to y, you could go up here and then over, and that is a square root of 2 quasi-geodesic. On the other hand, it's gotten super far away from the straight line. So you can never tell locally what happens. So here you can imagine you're trying to go from x to y, trying to travel along this geodesic. And suppose you didn't make good progress. That would mean that at some point you have to get very far away and then turn back. And if you look at this, there's some place where you were very inefficient in doing so. If you look at the standard proof of the fellow traveler property, it contains this. And why did I say I only have to look at the ball of radius L about the identity? Well, what's the nice thing about this orbit map? It looks the same at every point, because this is an equivariant picture. So if I look at what the image of my Cayley graph looks like near the identity, and I move to any old orbit point over here, then it looks the same. If I go along a generator here, I have the exact same looking picture; it's an equivariant picture. So that means that if I'm a quasi-isometry on the ball of radius L about the origin, I'm a quasi-isometry on every ball of radius L. So if I want to see what's happening in the whole Cayley graph, I know that if I'm traveling from point X to point Y, then locally I'm always making good progress. So globally, I'm always making good progress. So think about what that means for a representation. If you move your representation a very small amount, that means every generator gets moved a very small amount, which means that every orbit point gets moved a small amount. Now, of course, the problem is that an orbit point which is a million generators away gets moved a million times a small amount. And a million times small could be big. But this says that, well, I only need to check so much.
There's some radius L, which is a million, say, and if I control a million translates, then I'm done. And if I want all the million translates to be within one of where they were before, I just have to adjust each generator so that it's very, very close to my original choice of generator. So if I wiggle the representation just a little bit, I wiggle the orbit map a little bit. And since I only need to look at it on a ball of radius a million, I can wiggle it so little that on the ball of radius a million, it's still very well-behaved. So that's why, if you take something which is a quasi-isometric embedding and you wiggle it a little bit, it remains a quasi-isometric embedding. And this same proof works for representations into the isometry group of any negatively curved space. So in fact, in any sort of hyperbolic space, this will work. So what about Teichmüller space? I think François defined Teichmüller space in a slightly different way. But I want to think of Teichmüller space as a space of representations, not as a space of surfaces. So what he said was that a point in Teichmüller space, so here was François's picture, is my naked surface S with a metric on it. Well, OK, how does that give me a representation? I can instead think of it like this: here I've got my naked surface S, and over here I've got my hyperbolic surface X, which is S with the metric m. And I have a homeomorphism h from S to X. Now X is equal to H2 modded out by gamma, where gamma is contained in the group of orientation preserving isometries of H2, which I can think of as PSL(2,R). So now when I look at h, h star gives me a map from pi 1 of S to pi 1 of X. But pi 1 of X is just its group of covering transformations, which is gamma, and gamma is sitting inside PSL(2,R).
So that's how you can go from a space of metrics to a space of representations. As I wiggle the metric here, I wiggle the group of covering transformations, and hence I wiggle this representation. And oh, I missed something: that's supposed to be modded out. This X means I've modded out by PSL(2,R). Because obviously if I take a representation and I conjugate it by an element of PSL(2,R), then I have the exact same quotient surface. Concretely, when I write X as H2 mod gamma, that means I've chosen some point on the surface, say y naught, which lifts to some fixed point x naught up in H2. But if I instead ask for this other point y1 to lift up to x naught, what happens? I conjugate the representation. So it's like choosing a base point on the surface, and we don't really want to care about what base point we choose on the surface. We only want to care about the conjugacy class of the representation. And so what François said, though I don't think he said it in exactly these words, is that Teichmüller space is a component of this space of representations, and it's homeomorphic to R to the 6g minus 6. Oh, and that's a calligraphic R there; this one is blackboard bold R. And just to remind you, he had these Fenchel–Nielsen coordinates: you take a pants decomposition of your surface along three curves. Then one coordinate is the length of this curve, another coordinate is the length of that curve, and the third coordinate is the length of that curve. And he showed that you can build a pair of pants with boundary lengths L1, L2, and L3, and you build another pair of pants with the matching boundary lengths, and then you just glue them together. But these boundary curves are geodesics, and there are lots of ways to glue them: you can glue them with a twist. And why do you get a real number? Well, you should think about this as clothing being worn by the surface. As I twist around, my clothing is different. I can only twist so far.
So if I wore my clothing twisted around three times, it'd be quite uncomfortable. So it really would be different. Think of this as an imperfect analogy, because there's no topology here. But think about those children's PJs with booties. It's the same clothing. But if you put the PJs on the children and you've twisted the foot around three times, they will scream. It's the same clothing; they're not going to experience it the same way. So it's not just the clothing, it's how you wear it. We're all fashionistas now. So this is one way to think about the twist. And we've already seen that Teichmüller space is open in this representation variety. Why is it closed? Well, this is a basic fact, which the speaker of the hour calls the Margulis Lemma: if you have a limit of discrete, faithful representations, then that limit is also discrete and faithful. So I won't go into that. But let me just tell you, that's a basic and not terribly hard Lie groups fact, which is very general. You may have seen it in the form of Jørgensen's inequality, which is just a quantitative form of it. OK, so you might start generalizing to what I would call somewhat higher Teichmüller theory. This "higher Teichmüller theory" language is not too flattering for those of us who've spent our whole lives working in rank one. It suggests we've all been doing lower Teichmüller theory the entire time. And actually, my office is right between Ralf Spatzier's and Gopal Prasad's offices, and I sort of figure they've been thinking to themselves: Dick, 20 years, two Lie groups. Come on. So I've been trying to impress them by expanding the number of Lie groups I can work in. But you can extend a lot of what we've just said to a rank one Lie group. Well, what is a rank one Lie group? We take one of these real, complex, quaternionic, or octonionic hyperbolic spaces, and we take its isometry group.
So up to passage to covers, that's just what a rank one Lie group is. It's like saying the isometry group of H2 is PSL(2,R); well, there's also SL(2,R), but that's just an index 2 cover. So up to covering space theory, these are the same thing. And all my groups will be torsion free, because torsion is kind of irritating and beside the point; it doesn't make any difference in anything we're doing. So all our groups are torsion free. We say a representation is convex cocompact if the orbit map is a quasi-isometric embedding. So I look at my representation, I pick a base point in my hyperbolic space, I move it around by the image of the representation, and I ask: did that give me a quasi-isometric embedding of my group? If it did, I'm going to call that representation convex cocompact. And as we said before, if your orbits make definite progress, then you certainly have to be discrete, because if you're indiscrete, there are going to be orbit points arbitrarily close to the base point. And it also has to be faithful, because if you're not faithful, then you have some infinite order element which doesn't move the base point at all; so infinitely many group elements land on the base point, which is very far from being a quasi-isometric embedding. And moreover, the argument I gave you, that if you take a quasi-isometric embedding and wiggle it a little bit it remains a quasi-isometric embedding, works in any hyperbolic space, not just in H2. In fact, it works in any space with curvature bounded above and away from 0, in any reasonable sense you want to make of that, for instance CAT(minus 1) spaces; it's a very general argument. So if you take a neighborhood of your convex cocompact representation into your rank one Lie group, all the nearby representations are convex cocompact. So that's a pretty good property.
So that's the property we're going to want to have when we think about what a good representation into a higher rank Lie group is. Oh, speaking of being slow. Well, there are some cautionary tales here. So let's look at Teichmüller space. It was super nice because it was not only an open set of representations, it was also a closed set of representations. Well, that's kind of special. Even in PSL(2,R): various people have talked about Schottky groups, so in H2 you maybe have a Schottky group generated by one element taking this circle to this circle and another element taking that circle to that circle. That's going to be a convex cocompact representation. We see the quasi-isometric embedding very naturally, because you see this nested picture: one disk goes in here, and then that one dives into there, and you see a nice embedding of the Cayley graph of the free group sitting right there in your picture. In fact, it's somehow easier to see than in the surface group case. OK, but you could do something here which is, well, not so much bad as not convex cocompact: what happens if you let the circles touch? Now, I'm an old school guy; I don't call this a Schottky group, though I know a lot of people do. In my world, this is a limit of Schottky groups. And the group generated by those two transformations is still discrete and faithful, but it's no longer convex cocompact. Why is it no longer convex cocompact? Well, look at what the orbit of this element does. This element, taking x naught here, is going to be a parabolic transformation; it preserves a horocycle. And what happens as you make linear progress along the horocycle? If you travel out a distance n along the horocycle, you've only moved about log n in the hyperbolic metric.
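This log n claim is a one-line computation in the upper half-plane model. Here is a quick check of my own, using the standard distance formula for H2:

```python
import math

# Upper half-plane distance:  cosh d(z, w) = 1 + |z - w|^2 / (2 Im z Im w).
def hyp_dist(z, w):
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

# The parabolic z -> z + 1 preserves the horocycle Im z = 1.  Going n steps
# along the orbit moves you distance n along the horocycle, but only about
# 2 log n in the hyperbolic metric, since d(i, n + i) = acosh(1 + n^2 / 2).
for n in [10, 100, 1000, 10 ** 6]:
    print(n, round(hyp_dist(1j, n + 1j), 3), round(2 * math.log(n), 3))
```

So the orbit of the base point under powers of the parabolic makes logarithmic, not linear, progress, which is exactly why the orbit map fails to be a quasi-isometric embedding.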
And we have to make definite linear progress, and log n is not definite linear progress. A lot of us are more comfortable with this calculation when the horocycle is based at infinity. So there's x naught plus 1, and you go all the way out to x naught plus n; that distance is really about log n. So it's not making linear progress. So it's not true in general that a limit of quasi-isometric embeddings is a quasi-isometric embedding. Before, we got away with it, because it is true that a limit of discrete faithful representations is discrete faithful, and a discrete faithful representation of a closed surface group is always a quasi-isometric embedding. So we sort of won there just because we got lucky. So in general, these representations will not form an entire component. If you're trying to generalize Teichmüller space, that might worry you a little bit. But that's something you've got to live with. And another thing you see is that discrete faithful representations need not be convex cocompact; we also saw that in this example. And if you go up from PSL(2,R) to PSL(2,C), things get even worse. The set of discrete faithful representations of a surface group into PSL(2,C) is a closed subset by the Margulis Lemma, whose interior is the set of convex cocompact representations. So far, that doesn't seem so bad. But that set is not even locally connected. This is a result of Ken Bromberg and Aaron Magid. So things get kind of bad. But nonetheless, we persevere. So let's see another way of deforming representations in the rank one setting. How would we actually build a deformation of a closed surface group into PSL(2,C)? Well, here's H2 sitting inside H3; the big guy is H3. An isometry of H2 extends to an isometry of H3, and all is well and good. So I can get a representation into PSL(2,C) just by looking at PSL(2,R) as a subgroup of PSL(2,C).
That's not so interesting. So I want to get a representation which actually deforms away. So one thing you might do: on your surface here, you take a curve c, and that curve c divides your surface into two pieces, S0 and S1. And if I look above this, up in H2, what's the preimage of that closed geodesic? It's infinitely many geodesics. And what I might do is stick H2 inside H3 piece by piece. I take this piece here, X0, and stick it inside that plane. And maybe over here, this piece is X1. But what am I going to do? I'm going to bend by a small angle theta. So I come over here, and there's my angle theta, and I stick X1 in that plane. And then when I come over here, I bend by another angle theta. And I keep going, and I keep going, and I keep going. And since my original representation was discrete and faithful, if I bend a very small amount, the orbit map remains a quasi-isometric embedding, so the representation remains convex cocompact. And you can also see this at the level of the limit set. At first approximation, you saw that circle. But now you've bent it at angle theta there; then you come over here, erase this little segment, and bend at angle theta again; and you get one of these quasi-circles that Pepe was talking about. You just iterate this nasty construction, and its Hausdorff dimension is no longer one. It's become infinite length. It's become a pretty bad set pretty quickly. OK, so that's the geometric picture; let's think about what is going on algebraically. Well, I take my Fuchsian representation, and I have one representation obtained by restricting to the fundamental group of this piece, and another representation restricted to the fundamental group of that piece.
And now I take this axis here, call it L, and I look at a rotation R theta by a small angle theta about L. And what I'm doing is: on pi 1 of S naught, I take my new representation to agree with the original representation rho; and on pi 1 of S1, the new representation is rho conjugated by this rotation. And the key fact here is that R theta commutes with rho of pi 1 of c; it's in the centralizer of rho of pi 1 of c. That's the key point of this construction. So algebraically, I could just have written this down, except we wouldn't really have known what it meant; hopefully we have some picture of what it means now. And by stability, rho theta will remain convex cocompact for small theta. But eventually, it won't be discrete and faithful. Think about what happens if you go all the way around to theta equals pi: your representation is back inside PSL(2,R), but what is its volume? The volume of that representation is 0. This is exactly the sort of picture that Bertrand was talking about, because what we have built is a representation with volume 0. There are different ways of seeing this, but he talked about that one. So somewhere in the middle, you hit a non-discrete or non-faithful representation. Now, a key feature of these convex cocompact representations is that you have what's called a limit map. When I was building this limit set, it was a quasi-circle, but the key thing is that it's the image of a circle. If we think of the surface group as essentially H2, we can think of the boundary of the surface group as being a circle. So this is an embedding of the boundary of the surface group into the boundary of H3. And that's the whole idea of a limit map; it always comes as part of this picture. Or when we do one of these Schottky group constructions, you get an embedding of the Cantor set into the boundary of the space on which you're doing the Schottky construction.
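Going back to the bending construction for a moment, the key commutation is easy to see in coordinates. Here is a minimal numerical sketch (my own choice of matrices, assuming the bending axis has been moved to the geodesic fixed by a diagonal loxodromic):

```python
import cmath

def mat_mul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Put rho(c) = diag(l, 1/l): a loxodromic whose axis L is the geodesic over
# 0 and infinity.  The rotation by angle theta about L is also diagonal,
# so the two commute: R_theta centralizes rho(<c>).
l = 2.0
rho_c = [[l, 0], [0, 1 / l]]
theta = 0.1
e = cmath.exp(1j * theta / 2)
R_theta, R_inv = [[e, 0], [0, 1 / e]], [[1 / e, 0], [0, e]]

comm_error = max(abs(mat_mul(R_theta, rho_c)[i][j]
                     - mat_mul(rho_c, R_theta)[i][j])
                 for i in range(2) for j in range(2))
print(comm_error)       # 0: the two bent halves agree along c

# So rho_theta, defined as rho on pi_1(S0) and as R_theta rho R_theta^{-1}
# on pi_1(S1), is well defined on pi_1(S) = pi_1(S0) *_<c> pi_1(S1),
# while a generic element really is moved by the conjugation:
A = [[1.0, 1.0], [1.0, 2.0]]
bent_A = mat_mul(R_theta, mat_mul(A, R_inv))
print(abs(bent_A[0][1] - A[0][1]))      # nonzero once theta is nonzero
```

The commutation is exactly the amalgamated-product condition: the two partial representations agree on the subgroup generated by c, so together they define a representation of the whole surface group.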
So, more generally, a very quick course on what a boundary is. Take a proper geodesic metric space: proper means that closed balls are compact, closed balls of any radius about any point; and geodesic means that the distance between any two points is exactly the length of the shortest path between those points. Such a space is hyperbolic if every geodesic triangle is delta-thin. So you always see this very non-Euclidean picture: no matter how you draw a triangle, each side is within distance delta of the other two sides. Why does the hyperbolic plane have this property? Because if a point of one side got more than delta from the other two sides, I would have an embedded delta ball inside the triangle, and the area of a delta ball is bigger than the area of a Euclidean delta ball, so it grows at least like delta squared; but a triangle in the hyperbolic plane has bounded area. So right away, we know there must be some delta that works. And this same argument works in hyperbolic space of any dimension, and it works whenever you have curvature bounds, by comparison theorems. So this is a very general notion. And whenever you have a hyperbolic space, you can talk about the set of equivalence classes of geodesic rays. Think about what the boundary of H2 should be: morally, I pick a base point and look at all the geodesic rays emanating from that point, and that space of geodesic rays is naturally the set of all directions to go to infinity. But if you think about something like the Cayley graph of a closed surface group, there are going to be lots of rays which go halfway around an octagon and then keep going; there are going to be rays which go in the same direction but are different. So, in general, we say two geodesic rays are equivalent if they stay within bounded distance of each other for their entire lives.
So the set of all equivalence classes of geodesic rays is the boundary. And there's a general property that if you have a quasi-isometry, then you get a map from boundary to boundary. And the basic idea is that a quasi-isometry takes a geodesic ray to a quasi-geodesic ray, and remember I told you that quasi-geodesics live near geodesics; well, this is also true for rays: quasi-geodesic rays track geodesic rays. So you get a well-defined map of boundaries whenever you have a quasi-isometric embedding. And so right away from that fact, you get that whenever you have a convex cocompact representation of a hyperbolic group into a rank one Lie group, you get an embedding of its boundary into the boundary of the hyperbolic space that G is the isometry group of. Okay. So this is now a pretty flexible notion, and the limit map is something that we care a lot about. And so the goal of higher Teichmüller theory is to take this theory and start dealing with representations into general semisimple Lie groups of higher rank. So let's just say PSLnR for the moment, but for instance Pepe's talk was about PSLnC, which when n is at least three is another higher rank Lie group. Well, why not just use the original definition? The problem with the original definition is that we want the stability property. So we could say: I'm just going to study representations into higher rank Lie groups such that the orbit maps are quasi-isometric embeddings. So if you have PSLnR, you can mod out PSLnR by its maximal compact subgroup PSO(n), and that quotient gives you what's called a symmetric space; it's a manifold of non-positive curvature. And then we identify PSLnR with the group of orientation-preserving isometries of X, maybe call it Xn. And so then, given a representation rho from gamma into PSLnR, we get an orbit map tau rho from gamma into Xn, and we might say, okay, I'm just going to study the representations for which this is a quasi-isometric embedding.
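As a small sanity check on the symmetric space just described (my own numerical aside, not from the lecture): the dimension of Xn = PSLnR / PSO(n) is dim PSLnR minus dim PSO(n), and for n = 2 it recovers the hyperbolic plane.

```python
def dim_symmetric_space(n):
    """Dimension of X_n = PSL(n,R)/PSO(n):
    dim PSL(n,R) = n^2 - 1, dim PSO(n) = n(n-1)/2."""
    return (n * n - 1) - n * (n - 1) // 2

print(dim_symmetric_space(2))  # 2 -- X_2 is the hyperbolic plane H^2
print(dim_symmetric_space(3))  # 5 -- the symmetric space for PSL(3,R)
```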
Well, what's wrong with that? In some sense there's nothing wrong with it, but it doesn't have some of the features we want. And the first one that fails is stability. We would like to say that a good notion of a nice representation should be stable: if we wiggle the representation a little bit, it should still have that property. And this fails right away, because if you wiggle a quasi-isometric embedding of a line into Euclidean space, you can find yourself with a circle. Another way to say this is that a translation occurs as a limit of rotations. So if you have a translation, say z goes to z plus one, you can take bigger and bigger circles and rotate by an amount so that this point moves by really nearly one. And if you let those circles have bigger and bigger radius, then those rotations converge to the translation. And clearly these representations of Z, which send the generator to a rotation, are not nice representations, yet the limit is a translation, which is a perfectly nice representation. So you've got some stability problems. And moreover, Olivier Guichard sharpened this: you can construct representations of the free group on two generators into PSL2R cross PSL2R, which we could stick inside PSL4R if we wanted to, acting on H2 cross H2, whose orbit maps are quasi-isometric embeddings, yet which are limits of non-faithful representations. Okay, so a lot of times, even in the rank one world, representations of Z are quite different from representations of a bigger free group. They're special; we call those elementary, because a lot of things don't quite work for them. But this is a non-elementary representation which still has this bad lack of stability.
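The rotations-converging-to-a-translation picture is easy to check numerically (a minimal sketch of my own; the choice of center iR and angle 1/R is just one convenient normalization):

```python
import cmath

def rotate_about(center, theta, z):
    """Rotate the complex number z about the given center by angle theta."""
    return center + cmath.exp(1j * theta) * (z - center)

# Rotation about iR by angle 1/R moves 0 by roughly arc length 1;
# as R grows, it converges pointwise to the translation z -> z + 1.
for R in (10.0, 100.0, 10000.0):
    print(rotate_about(1j * R, 1.0 / R, 0))  # approaches 1 + 0j
```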
Okay, so let's construct some representations. Let's just think about some examples of representations we might like to handle and which we might like to think of as nice representations. So one class of representations, and this occurred in a couple of talks: think of the isometry group of H2 not as PSL2R but as SO(2,1). And from that point of view it sits inside PSL3R, and you can think of it as the group of automorphisms preserving a disc inside RP2. So you can think of PSL3R as the group of projective automorphisms of the projective plane, and then there's a nice round disc sitting within the projective plane, which is a copy of H2, preserved by your representation. And that quotient object you could think of either as a hyperbolic surface or as what's called a real projective surface: a quotient of a domain in projective space by projective automorphisms, right? I mean, a hyperbolic surface is a quotient of hyperbolic space by hyperbolic isometries. So you could generalize this idea. And we could also generalize our bending construction. So we again take this same picture where we divide a surface up into two pieces, and now I take a one-parameter family in the centralizer of rho naught of pi 1 of c. And then we can define a new representation where every time we cross c we conjugate by something really near the identity which centralizes rho naught of pi 1 of c. So algebraically it's the exact same construction. But geometrically it's doing something different; it's not bending. Some people like to call this bulging: Bill Goldman likes to call it the bulging operation, or projective bending. But you can sort of think of, so let me draw the projective plane like it's a sphere. That doesn't seem so good.
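Concretely (a sketch in my own notation, not spelled out in the talk): if the holonomy of c is diagonalized, its centralizer in PSL3R contains a two-parameter diagonal group, and the direction transverse to the bending direction inside SO(2,1) is the one usually called bulging:

```latex
% If \rho_0(c) is conjugate into the diagonal subgroup, its centralizer in
% PSL(3,R) contains the two-parameter diagonal family
%   \{\, \mathrm{diag}(e^{s}, e^{t}, e^{-s-t}) : s, t \in \mathbb{R} \,\}.
% One direction lies in SO(2,1) (ordinary earthquake/twist deformations);
% the transverse one-parameter subgroup, which varies the middle eigenvalue,
%   b_t = \mathrm{diag}(e^{t}, e^{-2t}, e^{t}),
% is the bulging direction: conjugating across c by b_t pushes the
% invariant disc outward along the axis of c.
```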
But here's some disc; this is the disc which we think of as our projective model for hyperbolic space sitting inside RP2. So maybe I should erase this and draw my identifications so it really looks like RP2. And then when I wiggle my representation, the preimage of c is going to be a bunch of lines, and every time I cross one I'm going to bulge out a little bit and replace the disc by something different, and iterate. What does it mean? Why bulging? Does anyone have a good piece of intuition for why bulging is the right word? Bill Goldman calls it bulging. I think because it's not bending; you're not bending along an angle, you're sort of shearing, and I don't know. Yeah, this is not something I do for a living; I'm faking it here. But nobody wants to help me. I think the point is that when you're doing these constructions, these boundary curves become ellipse-like rather than circles, so they're bulging. But I don't know. Okay, anyway. So you can do this exact same construction whenever you have a cocompact group of isometries of hyperbolic n-space and you have an embedded totally geodesic submanifold. You decompose the manifold along the submanifold, and every time you cross the submanifold, you do this bulging operation: you conjugate by something preserving the plane. Okay, so what Benoist proved is that when you do this, each of these deformations is the holonomy of a convex projective structure. And so what he shows is that with this bulging operation I was drawing here, you end up with a convex subset of the projective plane which is preserved by this group of automorphisms, so I can quotient out by it and I get a so-called convex projective surface. And so this is sort of amazing.
The really amazing thing, well, the fact that this works for small values of t is something we should have expected to be true. The amazing thing is that it works for all values of t: you can bulge as much as you want and it will still remain a convex projective structure. I think that's more surprising. And in fact, he showed that this entire component consists of these convex projective structures. So these are sometimes called Benoist components. I'm not sure they're called Benoist components by anybody but me and my co-authors, but I think it's a good name. Okay, and he also proved that the result of this bulging procedure has nice C1 boundary. So it's quite different from when we did this operation in the Fuchsian setting: there, bending gave us something horribly non-C1; it's not rectifiable, its Hausdorff dimension is bigger than one. When you do this in the projective setting, you get something which is C1, convex, and a rather nice object. And then another first class of representations that people started thinking about were the so-called Hitchin representations. And Hitchin's construction, at least to me when I first learned about it, didn't seem like a very promising way to find anything interesting, but I guess that's why I'm me and Hitchin's Hitchin. So there's an irreducible representation of PSL2R into PSLnR, which is unique up to conjugacy, and it comes from regarding Rn as the vector space of degree n minus one homogeneous polynomials in two variables. And then, if you have an element of PSL2R with entries a, b, c, d, you just send x to ax plus by and y to cx plus dy, and you can see what that substitution does to a homogeneous polynomial; it turns out to give you a linear transformation. So I did an example here; you can just write down what a diagonal matrix does, and you see that the image of a diagonal matrix is represented by this diagonal matrix.
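Here is a quick numerical sketch of this irreducible representation (my own illustration, not from the lecture; the function name and conventions are made up). It builds the matrix of the substitution x to ax + by, y to cx + dy acting on the monomial basis of degree n minus one polynomials, and applies it to a diagonal element:

```python
import numpy as np

def sym_power(A, n):
    """Matrix of the action of a 2x2 matrix A = [[a,b],[c,d]] on the
    n-dimensional space of degree n-1 homogeneous polynomials in x, y,
    in the monomial basis x^{n-1}, x^{n-2} y, ..., y^{n-1}.
    The action substitutes x -> a x + b y, y -> c x + d y."""
    (a, b), (c, d) = A
    m = n - 1
    M = np.zeros((n, n))
    for k in range(n):  # image of the basis monomial x^{m-k} y^k
        # expand (a x + b y)^{m-k} (c x + d y)^k as a coefficient array in x
        p = np.array([1.0])
        for _ in range(m - k):
            p = np.polymul(p, [a, b])
        for _ in range(k):
            p = np.polymul(p, [c, d])
        M[:, k] = np.pad(p, (n - len(p), 0))
    return M

A = np.array([[2.0, 0.0], [0.0, 0.5]])  # hyperbolic element of SL(2,R)
S = sym_power(A, 3)                     # its image in SL(3,R)
print(np.diag(S))  # diagonal entries 4, 1, 1/4: distinct real eigenvalues
```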
So the nice thing about this is that it's diagonalizable with distinct eigenvalues. So if you take anything in PSL2R which is a hyperbolic element, it's going to be conjugate to that diagonal matrix, so its image is conjugate to this guy, which means it's also diagonalizable with distinct eigenvalues. So the representation is pretty nice; it's irreducible, it's various things. Okay, and then what you might do is take a Fuchsian representation into PSL2R and then post-compose it with this irreducible representation and end up in PSLnR. Well, then you just get a copy of Teichmüller space sitting inside the representation variety, and what Hitchin says is: let's look at the component of the representation variety which contains the image of Teichmüller space, okay? And that's called the Hitchin component. He used analytic techniques to show that this is topologically a cell. So topologically it's R to the power (n squared minus one)(2g minus 2), where you see that when n is equal to two, this does give you R to the 6g minus 6, which is good. And he called this the Teichmüller component, the main evidence for this being the topological fact that this component happens to be a cell, okay? And then you might ask yourself, why is that an interesting component? Well, one piece of evidence is that when n equals three, so representations into PSL3R, this is just the same thing as the Benoist component. So this corresponds to the space of holonomies of convex projective structures on surfaces when n is three. But in general, very little was known about the geometric nature of these representations, which is where Anosov representations got their birth. It was François Labourie's attempt to understand these so-called Hitchin representations which gave birth to the theory of Anosov representations, okay?
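The dimension count is easy to verify (a trivial check of the formula above, added by me):

```python
def hitchin_dim(n, g):
    """Real dimension of the Hitchin component for PSL(n,R) and a closed
    surface of genus g >= 2: (n^2 - 1)(2g - 2)."""
    return (n * n - 1) * (2 * g - 2)

print(hitchin_dim(2, 3))  # 12, which equals 6g - 6 for g = 3
print(hitchin_dim(3, 2))  # 16, the PSL(3,R) (Benoist) component in genus 2
```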
So now, I'm maybe halfway through the talk. Let's see if we can get a definition of what an Anosov representation is, at least in a fairly simple setting. So for one thing, to get the original definition we have to think about the geodesic flow of a group. So let's just imagine our group is the fundamental group of a closed negatively curved manifold. Then the geodesic flow of the group is just the geodesic flow of that manifold. So our negatively curved manifold is just going to be a surface for the moment, and we take the universal cover to be H2, and the geodesic flow on the surface lives on the space of all unit tangent vectors. How might we think about that? Well, a unit tangent vector is determined by the geodesic it lies on. I lift up to T1 of H2; now I can record the pair of endpoints, which is a point in S1 cross S1 minus the diagonal, and that's going to give me a geodesic, and then I say how far along the geodesic I am. So I cross it with R. This is the so-called Hopf parameterization of the geodesic flow of H2. And there are various ways you might normalize this, but one canonical thing you might do is take x naught and take the point on that geodesic which is closest to x naught; the tangent vector v at that point is the point (x, y, 0), and then, depending on how you want to do this, you travel t along the geodesic and that's (x, y, t), or it's (x, y, minus t). I always get confused about that myself, in fact, and get myself in trouble.
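In symbols, the Hopf parameterization just described reads (my summary, with the sign convention left open as in the talk):

```latex
% Hopf parameterization of the unit tangent bundle of H^2:
%   T^1 H^2 \;\cong\; \bigl(\partial H^2 \times \partial H^2 \setminus \Delta\bigr) \times \mathbb{R},
% where v corresponds to (x, y, t):
%   x, y = the endpoints of the geodesic through v,
%   t    = signed distance along that geodesic from the point nearest x_0.
% In these coordinates the geodesic flow is simply
%   \phi_s(x, y, t) = (x, y, t + s),
% and \pi_1(S) acts on (x, y) through its boundary action, translating the
% t-coordinate by an additive cocycle.
```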
Okay, so the nice property that the geodesic flow on a negatively curved manifold has is that it's Anosov, which is what gives the representations their name: there's going to be an Anosov property associated to them. It means that you can take the tangent space to T1 of M and split it into three pieces. One piece is the direction of the flow, and of course nothing much is happening in that direction; but then you have two other directions, one which is being contracted and one which is being expanded. This is basic Anosov dynamics. And you can sort of see that in H2: if you're at a point v here, you draw the horocycle based at the forward endpoint y, and you look at that family of tangent vectors; that gives you a segment of tangent vectors, and what's happening when you flow is that they're getting contracted, they're getting closer to each other. And you see, if you lift upstairs and you take that segment in T1 of H2 and you take its tangent space within T1 of H2, that's going to give you E minus or E plus, whichever. Okay, so the key thing, at least for the original definition of an Anosov representation, is the existence of limit maps. And so where do these limit maps go? You start with rho from gamma into PSLnR and you want to say where the limit maps should live, and it turns out there are a lot of different places the limit maps might go, but we're going to talk about projective Anosov representations, and for those, the limit maps go from the boundary of the group into RPn minus one, and you have another limit map from the boundary of the group into the Grassmannian of all n minus one dimensional hyperplanes in Rn.
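In standard notation, the splitting just described is (a reminder of the textbook definition, using norms from any background metric):

```latex
% Anosov property of the geodesic flow \phi_t on T^1 M, for M closed and
% negatively curved: there is a flow-invariant splitting
%   T(T^1 M) \;=\; E^0 \oplus E^s \oplus E^u,
% with E^0 spanned by the flow direction, and constants C, a > 0 with
%   \| d\phi_t |_{E^s} \| \le C e^{-a t}, \qquad
%   \| d\phi_{-t} |_{E^u} \| \le C e^{-a t} \qquad (t \ge 0).
```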
So in general these maps will go into partial flag varieties: you can think of the space of lines and the space of n minus one planes as both being partial flag varieties. So depending on what flavor of Anosov you want, they're going to go into different partial flag varieties. Okay, so you're going to require that these maps be continuous, you want them to be rho-equivariant, and then you also want them to be transverse, which means that xi rho of x direct sum theta rho of y is all of Rn if x does not equal y. And I write it this way because this generalizes well, but what does it mean? This guy's a line, this is a hyperplane; it just means the line doesn't lie in the hyperplane, right? The image of any point doesn't lie in the hyperplane associated to a different point. Okay, so what would it mean to have limit maps of this type? Well, in the rank one setting this is enough right away to imply you're discrete and faithful, in fact, if you think about it, but let me not go into that. But what does this tell you here? So let's make a sort of simplifying assumption. Suppose the image of this limit map spans Rn, which means you take all the lines in the image of the limit map, there are uncountably many of them of course, but you just look at what they span; suppose they span all of Rn. What does this rho-equivariance tell you? Well, you know that the action of the group on its boundary has a lot of hyperbolic dynamics. In particular, if you look at the action of an infinite-order element gamma on the boundary, it has two fixed points, gamma plus and gamma minus, and everything except gamma minus is being sucked into gamma plus when you hit it with larger and larger powers of gamma; and on the other side, everything but gamma plus is being sucked down into gamma minus when you hit it with larger and larger negative powers of gamma.
So you have these north-south dynamics, and by the equivariance these north-south dynamics live on the image of the limit map, because this is an equivariant homeomorphism onto its image. So you have north-south dynamics on the image of the limit map. Well, what does that mean? That means that if you look at xi of the attracting fixed point of gamma: since everything in the boundary of the group is getting sucked into gamma plus, everything on this limit map is being sucked into xi of gamma plus. But rho of gamma is a projectivized linear map, so that means its action on projective space has an attracting fixed point. Well, what does that mean? That means your linear transformation is proximal: it has a unique eigenline of maximal modulus, and that eigenvalue is real. If you think about what the Jordan decomposition looks like: if you had two eigenvalues of equal maximal modulus, then everything would be attracted not to a point but to a line or a circle's worth of points. So, okay, and moreover, since you see the same thing for gamma inverse, you notice that every such element is in fact biproximal, which means there's a unique eigenvalue of maximal modulus and a unique eigenvalue of minimal modulus. So right away we already know a lot about our representation. And in fact this assumption we're making is equivalent to the representation being irreducible, meaning that the group doesn't preserve any proper vector subspace, and there's a theorem of Guichard and Wienhard that if you have an irreducible representation, just the existence of limit maps makes you projective Anosov; but in general we need to add a little bit more. Ooh, okay, I'd better stop.
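A matrix-level sketch of biproximality (my illustration; `is_biproximal` is a made-up helper and the tolerance handling is deliberately naive):

```python
import numpy as np

def is_biproximal(M, tol=1e-9):
    """Check numerically that M has a unique eigenvalue of maximal modulus
    and a unique eigenvalue of minimal modulus, i.e. strict gaps at the top
    and bottom of the sorted eigenvalue moduli."""
    mods = np.sort(np.abs(np.linalg.eigvals(M)))[::-1]  # descending moduli
    gap_top = mods[0] - mods[1] > tol * mods[0]
    gap_bot = mods[-2] - mods[-1] > tol * mods[0]
    return gap_top and gap_bot

# Image of a hyperbolic element under the irreducible PSL(2,R) -> PSL(3,R):
print(is_biproximal(np.diag([4.0, 1.0, 0.25])))  # True
# A rotation block has two eigenvalues of equal modulus, so not proximal:
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.5]])
print(is_biproximal(R))  # False
```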
So I was about to do all the scary stuff, like tell you what a flat bundle is and what a flow parallel to a flat connection is, but maybe I will do that tomorrow.