Okay, so I got about two-thirds of the way through what was supposed to be my first talk, so I'm going to try to finish up the first talk and then give you my favorite part of the second talk, and finish in the hour. So just to remind you where we were: we were about to define a projective Anosov representation, which is a specific type of Anosov representation into PSL(n,R), and the definition starts with the existence of limit maps. These limit maps are rho-equivariant continuous maps from the boundary of the group into RP^{n-1} and into the Grassmannian of (n-1)-planes, and the transversality condition just says that if you take two distinct points x and y, the line associated to x never lies in the hyperplane associated to y. There's one immediate consequence you get: rho(gamma) fixes psi_rho(gamma^+). So right away you've found one eigenline, and in fact you've found a second eigenline — rho(gamma) also fixes psi_rho(gamma^-) — where gamma^+ and gamma^- are the attracting and repelling fixed points of gamma on the boundary. So right away we know we have two eigenlines. And if we also assume that our representation is irreducible, then we know that psi_rho(gamma^+) is an attracting fixed point and psi_rho(gamma^-) is a repelling fixed point for the action on projective space. Another way to say that, from the point of view of linear algebra, is that this matrix rho(gamma) is biproximal: it has a unique eigenline whose eigenvalue has maximal modulus and a unique eigenline whose eigenvalue has minimal modulus. So right away, just from the existence of the limit maps, we have some dynamical information about our representation. In fact, it's going to turn out that if your representation is irreducible, this is enough to get everything we need, but let's talk about the more general situation. And to do that — oh, let me also remind you that we have a geodesic flow.
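The proximality condition above is easy to test numerically. Here is a minimal sketch (my own illustration, not from the talk; the matrices are hypothetical examples): a matrix is proximal if it has a unique eigenvalue of maximal modulus, hence a unique attracting fixed line in RP^{n-1}, and biproximal if its inverse is proximal too.

```python
import numpy as np

def is_proximal(A, tol=1e-9):
    """True if A has a unique eigenvalue of maximal modulus."""
    moduli = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
    return bool(moduli[0] - moduli[1] > tol * moduli[0])

def is_biproximal(A):
    """True if both A and A^{-1} are proximal."""
    return is_proximal(A) and is_proximal(np.linalg.inv(A))

A = np.diag([4.0, 2.0, 0.125])            # det = 1; modulus gaps at top and bottom
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation: both eigenvalues have modulus 1
```

For the rotation R there is no modulus gap at all, which is exactly why elliptic-type elements are excluded by the Anosov condition.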
So U Gamma is the unit tangent bundle, and phi_t was the geodesic flow, and often it's convenient to work with U Gamma tilde, the lift to the universal cover. We can parameterize that as the pairs of distinct points in the boundary, cross R. You can think of a pair of points in the boundary as giving you a geodesic in your space M — where the geodesic is coming from and where it's going — and the R coordinate tells you where you are on the geodesic, and then you have this flow. So phi_t tilde just takes a point (x, y, s), and it's the silliest possible flow: it goes to (x, y, t + s). So that's quite a nice flow; at the level of the universal cover, this is a very easily described flow. So now I'm going to try to unpack some words which I initially found very scary, and which turn out not to be very scary at all, from when I was first reading about Anosov representations. They always start by saying: take the flat bundle over the geodesic flow determined by the representation. And, I don't know, I guess those educated in Europe probably don't find that scary, but to me this sounded disturbing. So here, M was a closed, negatively curved manifold — the curvature is strictly less than zero. So in particular, this geodesic flow is Anosov, which means it has a direction parallel to the flow, a contracting direction, and an expanding direction. So what is the flat bundle associated to a representation? Well, you just take this cover of the geodesic flow, U Gamma tilde — to be even more explicit, remember this was just the unit tangent bundle of the universal cover of M; it was not some weird object — and you cross it with R^n, and then you let the group act: it acts on U Gamma tilde by the group of covering transformations of U Gamma tilde over U Gamma, and it acts on R^n by the representation.
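The lifted geodesic flow described above really is "the silliest possible flow" — translation in the R coordinate. A trivial sketch (points modelled as (x, y, s) tuples, my own toy encoding):

```python
def flow(t, point):
    """Lifted geodesic flow on (bdry x bdry minus diagonal) x R:
    fix the pair of endpoints, translate the R coordinate."""
    x, y, s = point
    return (x, y, s + t)
```

The flow property phi_t o phi_s = phi_{t+s} is immediate from this formula.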
So here you have an action of Gamma on — what is my notation — U Gamma tilde cross R^n, and it's very simple: an element gamma takes a point z and a vector v to the point gamma(z) and the vector rho(gamma)v. That's all it is. And then we quotient out by that action, and we get what's called the flat bundle associated to the representation. And then it turns out these transverse limit maps just give you a splitting of that flat bundle. And what does that mean? That means you get a line bundle, and you get a hyperplane bundle. Well, what's the line bundle? Let's work again up in this cover. The line associated to the point (x, y, t) is psi_rho(x). That's a line, so that gives you a line bundle. And the hyperplane associated to (x, y, t) is just theta_rho(y). So you have a natural splitting: since psi_rho(x) and theta_rho(y) give you a splitting of R^n, this gives you a splitting of your R^n-bundle into a line bundle and a hyperplane bundle. And this is all being done equivariantly, so we can work up at the level of the cover and just quotient out later. As long as everything we did was equivariant, we can just quotient out. Yeah — why aren't these smooth? These are not smooth bundles; they're just bundles. The natural class of regularity here is Hölder. For some people, Hölder is like analytic, but for most of the rest of us, Hölder is like continuous. So these are not that well behaved from a regularity viewpoint, but they are Hölder. And then they say: OK, take the flow parallel to the flat connection on the flat bundle. And that seems even scarier, but in fact it's something very simple. You take your flow on the unit tangent bundle and you lift it up to a flow on U Gamma tilde. And then what is the lifted flow on the flat bundle? Well, at the level of this cover, you just act by the way the flow does.
So you've got a decomposition of the lift of your flat bundle as U Gamma tilde cross R^n. And in the U Gamma tilde factor, you just flow: you take (x, y, s) to (x, y, s + t). And what do you do in the other factor? Nothing. You just leave the vector fixed. So it's the stupid flow, right? It's not some really bizarre flow: up at the level of your cover, you're just flowing along in the base and doing nothing in the fiber. Now of course, when you project downstairs and look at the action of Gamma on this, it's getting twisted by the action of rho(gamma). But up in the universal cover, it's the silliest possible flow and the easiest possible flow to understand. So this is the flow parallel to the flat connection. And by construction, this flow preserves the splitting. Because as you flow along in time, your vector doesn't change; so your line doesn't change; if your vector started in your line, it stays in your line. And the same thing with your hyperplane. So this gives you, in fact, a flow defined on Psi and a flow defined on Theta. So the actual definition of being projective Anosov is: you have a flow, you have a bundle Psi, you have a bundle Theta, you can define a new bundle Psi tensor Theta^*, which is Hom(Theta, Psi), and you say this flow should be contracting on this complicated bundle. And if you do a little bit of tensor manipulation, you see right away — and on the next slide I'm going to give you a really concrete way of saying this without all that tensor analysis, but for the moment it's just sort of abstract nonsense — that in particular the flow is contracting on Psi. Well, let's think about what it means for the flow to be contracting on Psi. What that means is that you take some norm on E_rho. So what does that mean? That lifts to a norm on E_rho tilde, which is just U Gamma tilde cross R^n.
And that's just a continuously varying family of norms on R^n. That's all this is: for each point, I have a norm on R^n, and it varies continuously. And it's equivariant with respect to the action: if you act by rho(gamma), then the norm changes by the action of rho(gamma). So what does it mean for the flow to be contracting on this eigenline? Let's look at what happens to psi_rho(gamma^+). So take some vector v in psi_rho(gamma^+), and let t_gamma be the period of the flow line associated to gamma. That means the orbit associated to gamma sits over (gamma^+, gamma^-) cross R, and when you quotient, that line projects to a preserved flow line in U Gamma associated to gamma — well, let's use better language than flow line; a better word is closed orbit. Oh, yeah — that was keeping me awake. That was helping me keep you awake. Sorry. I'll work on it. I'll try to talk in a dull monotone, a nice peaceful voice, so we can all slumber. Anyway, let's think about this really concretely. Let z be the point (gamma^+, gamma^-, 0) in U Gamma tilde, and take a vector v. If you flow, the point (z, v) just goes to (phi_{t_gamma}(z), v), and phi_{t_gamma}(z) is equivalent to gamma(z). So this is equal to (gamma(z), v). So what does contracting mean? It means the norm of v measured at the point phi_{t_gamma}(z) is less than the norm of v measured at the point z. But we said this point is exactly gamma(z). So what does this tell you? Well, it tells you that the norm of v at phi_{t_gamma}(z) is exactly 1/lambda_1(rho(gamma)) times the norm of v at z — because the norm is supposed to be rho-equivariant. So what does rho do to v?
Equivariance says that v at z has the same norm as rho(gamma)v at gamma(z). And rho(gamma)v is exactly lambda_1(rho(gamma)) v, since v is the attracting eigenvector. So, let me write some of this down, since I got myself tangled: the norm of v at z equals the norm of rho(gamma)v at gamma(z), and this is exactly lambda_1(rho(gamma)) times the norm of v at gamma(z) = phi_{t_gamma}(z). So when you rearrange that, you can pull the lambda_1 out, and you get that the norm of v at phi_{t_gamma}(z) equals 1/lambda_1(rho(gamma)) times the norm of v at z. So that tells you, when you look at it, that lambda_1 is bigger than 1, right? Well, OK, that didn't get us very far — lambda_1 is bigger than 1; we felt like we knew that already. But even more: if you have a contracting flow on a compact space, it's not only contracting, it's uniformly contracting. If you have a flow which is contracting on a compact space, then at any point, if you flow for some amount of time, you contract by a factor of one half. That time varies continuously, and you're on a compact space, so there's some time t_0 such that if you start at any point and flow for time t_0, you contract by one half. And so now if you flow for time 18 times t_0, you contract by a factor of 1/2^18, right? So uniform contraction follows from compactness: U Gamma compact implies the contraction is uniform, i.e. the norm of phi-hat_t(v) at phi_t(z) is at most C e^{-At} times the norm of v at z. So it's being contracted at some exponential rate A. And what that tells you is that, in fact, if we put this all together — I should probably have put this in the slides; I just decided at the last minute this would be a good thing to do —
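The compactness step above ("contract by one half every t_0, hence by 2^{-18} after 18 t_0") is just the standard bootstrap from a fixed-time contraction to an exponential bound. A minimal sketch with hypothetical numbers of my own (t_0 = 1, so A = log 2 and C = 2):

```python
import math

t0 = 1.0
A = math.log(2.0) / t0   # exponential rate extracted from the half-life t0
C = 2.0                  # slack constant covering the partial last step

def contraction_bound(t):
    """Guaranteed contraction factor after time t: one halving
    per completed interval of length t0."""
    return 0.5 ** math.floor(t / t0)
```

Since floor(t/t0) >= t/t0 - 1, we get 0.5^{floor(t/t0)} <= 2 e^{-At}, which is exactly the C e^{-At} bound in the text.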
So if we put this all together, that tells us that the norm of v at phi_{t_gamma}(z) is at most C e^{-A t_gamma} times the norm of v at z. So if we play around, this implies lambda_1(rho(gamma)) is at least (1/C) e^{A t_gamma}. But what about t_gamma? You've got a compact space and you're flowing on it. So what is the relationship between the period of the flow and the word length? Well, they've got to be related to one another: the universal cover of the flow space and the group are quasi-isometric. And so you know that t_gamma is comparable to the reduced word length of gamma. Reduced word length you can define a bunch of ways. One is saying it's the minimal length of a word conjugate to gamma. Another way to say it is that it's the translation length of gamma on its Cayley graph. So let's take that: this is the translation length of gamma on the Cayley graph. So up to choosing a constant, I'm allowed to replace t_gamma with the reduced word length. And again, we don't care about constants — we're coarse geometers today. So this implies that lambda_1(rho(gamma)), the top eigenvalue, is at least (1/C) e^{k ||gamma||}, where ||gamma|| is the reduced word length. And so that implies that log lambda_1(rho(gamma)) is at least k ||gamma|| minus c. And this should sound familiar to us. Think back to Fuchsian groups: what is the log of the top eigenvalue? Well, it's half the translation length on H^2. So for a Fuchsian group, this is saying that the translation length grows roughly as fast as the word length. And we're seeing that same fact now transported up into this abstract setting of projective Anosov representations. They have this same property: translation length — where we're interpreting translation length as log of the top eigenvalue — is roughly word length. And this has a fancy name. It's called well-displacing.
So a representation into PSL(n,R) is well-displacing if the log of the spectral radius — a fancy name for lambda_1 is the spectral radius, which is another thing that used to confuse me at first; I had to teach myself that spectral radius meant top eigenvalue — so if the log of the top eigenvalue is at least k times the reduced word length minus c. And here's the more concrete formulation of "the bundle Hom(Theta, Psi) is contracting": if you put any norm on your flat bundle and you take any point in the geodesic flow, any vector v in the line bundle at that point, and any vector w in the hyperplane bundle at that point, and you flow for time t_0, then the ratio of the length of v over the length of w with respect to this norm is contracted by a factor of one half. So you can see right away — if we didn't know before that lambda_1 is the top eigenvalue — this tells you that the vectors in Psi are growing faster than all the other vectors, so lambda_1 is definitely the eigenvalue of maximal modulus. And so in fact we could have given this as the definition, but I wanted to at least put the fancy definition up there; this is totally equivalent to that. So once you have something which is well-displacing — well, this also turns out, in this setting, to be equivalent to your orbit map being a quasi-isometric embedding. Why is that? Well, the log of the top eigenvalue is coarsely the translation distance on the symmetric space. So this tells you that the translation distance on the symmetric space is roughly comparable to word length. And if you think about it, your Cayley graph is getting embedded in your symmetric space. Your Cayley graph is full of axes, and each axis is quasi-isometrically embedded, because the translation distance along the axis is roughly the translation distance in the symmetric space. So we recovered the fact that our orbit map is a quasi-isometric embedding from this sort of abstract definition.
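Here is a sketch of the well-displacing inequality in the simplest possible case (the matrix is a hypothetical example of mine): for a single hyperbolic element g of SL(2,R), the spectral radius satisfies lambda_1(g^k) = lambda_1(g)^k, so log of the spectral radius grows exactly linearly in the power k, which plays the role of word length in the cyclic group generated by g.

```python
import numpy as np

g = np.array([[2.0, 1.0], [1.0, 1.0]])    # trace 3 > 2, so hyperbolic in SL(2,R)

def log_spectral_radius(A):
    """log of the top eigenvalue modulus, i.e. the 'translation length'."""
    return float(np.log(np.max(np.abs(np.linalg.eigvals(A)))))

base = log_spectral_radius(g)
growth = [log_spectral_radius(np.linalg.matrix_power(g, k)) for k in range(1, 6)]
```

For a genuinely parabolic g the spectral radius of every power is 1, so `log_spectral_radius` stays at 0 while word length grows — which is exactly how well-displacing fails for parabolics, as discussed later.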
Remember, that was something we definitely wanted to be true. We wanted to look at representations which were quasi-isometric embeddings, but we realized that wasn't quite enough. So this is a strengthening of that condition. And again, we saw that if you're a quasi-isometric embedding, you've got to be discrete, because your orbit can't accumulate, and you've got to be faithful, because your orbit can't pile up at any one point. So as a summary: we've got this definition of a projective Anosov representation. It guarantees we're discrete and faithful. It guarantees we're well-displacing, and the associated orbit map is a quasi-isometric embedding. And the image of every element is biproximal. You should think of the image of every element being biproximal like this: if you're used to rank one Lie groups, and you have a representation which is convex cocompact, then the image of every element is a hyperbolic transformation — it can't be elliptic or parabolic. So this is sort of a generalization of that fact: every element has to be biproximal. And one thing which I don't have on here — I should, but I certainly don't want to prove it — is one advantage of this definition: it's in terms of dynamics. It says that we have hyperbolic dynamics. And one of the really crucial things about hyperbolic dynamics is that it's stable: if you wiggle a hyperbolic dynamical system, it remains a hyperbolic dynamical system, in a very broad setting. So in fact, this setup is set up exactly so that you can mimic the sort of technology that goes back to, say, Hirsch, Pugh, and Shub's book, and prove that. And in fact, another formulation of this is that there's a section of the flat bundle whose image is a hyperbolic set with respect to the flow. Well, maybe I have to be a little more careful about that, but something like that is true.
So you know that if you take a bundle and you wiggle it a little bit, and your original bundle had a section whose image was a hyperbolic set, then your wiggled bundle also has a section whose image is a hyperbolic set. So that tells you that if you wiggle a projective Anosov representation, it remains projective Anosov. And there are now many definitions of Anosov representations available, and one of the real advantages of this one is that it's set up so you can apply very standard technology to get stability. And I think that's one of the things Labourie was thinking about when he made this definition. OK, so let's see what this means in a couple of our examples. Well, let's think about these Benoist representations. Remember, I told you that if you take a lattice — a cocompact subgroup of SO(n,1) — and you wiggle it within PSL(n+1,R): the original lattice preserves some round disk, and the new one, Benoist proved, preserves some convex domain. So if you take this wiggling rho_t from Gamma into PSL(n+1,R), where you started with rho_0 from Gamma into SO(n,1) — I'm going to draw the picture where n is 2 — then you get this Omega_t, which is C^1 and strictly convex, and rho_t(Gamma) preserves Omega_t. So this is kind of amazing — it stays C^1, remember. It does not stay at all C^1 if we do the same thing in the rank one setting. So now what is our limit map? Well, here you have an action on this domain, and this domain admits a complete metric called the Hilbert metric. Just like when you take the hyperbolic metric on H^2, the natural boundary of H^2 is its boundary circle; when you take the Hilbert metric on this domain, its natural boundary is the boundary of the domain. So you get an identification: psi_rho goes from the boundary of Gamma to the boundary of this domain. And one thing you should be careful with: this identification is not C^1. It's only Hölder, even though the boundary itself is C^1.
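The Hilbert metric mentioned above is defined by a cross-ratio with the two boundary points cut out by a chord. A minimal sketch (my own illustration, not from the talk) on the one-dimensional convex domain (-1, 1), where the Hilbert metric coincides with the hyperbolic metric, so d(0, y) = artanh(y):

```python
import math

def hilbert_distance(x, y):
    """Hilbert metric on the convex domain (-1, 1): half the log of the
    cross-ratio of x, y with the boundary points a = -1, b = 1."""
    a, b = -1.0, 1.0
    cross_ratio = ((y - a) * (b - x)) / ((x - a) * (b - y))
    return 0.5 * abs(math.log(cross_ratio))
```

On a general strictly convex domain one uses the same formula along the chord through the two points; strict convexity is what makes the boundary intersection points, and hence the distance, well behaved.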
So it's like if you have two Fuchsian groups and you conjugate one to the other: that limit map is only Hölder. In fact, it can't be differentiable at any point. So this is something you often see in this higher Teichmüller theory setting: the image of your limit map may very well be a nice smooth object, but your limit map itself is not smooth. And that can be very important. Now what's theta_rho? So here's psi_rho(x); what should theta_rho(x) be? Well, we have a C^1 submanifold, so there's a hyperplane staring us in the face: the tangent plane. Theta_rho(x) is just the tangent plane to the boundary of Omega_t at psi_rho(x). So these two transverse limit maps are staring us right in the face. And what is the transversality? It exactly reflects the fact that I told you this domain was strictly convex. If you unravel what strict convexity means, it right away implies transversality. So you can see that in this work of Benoist, which was done at about the same time as Labourie's work, a lot of the ideas of Anosov representations occur very naturally. OK. Well, what about these Hitchin representations? What Labourie proves is that, in fact, they're a stronger kind of Anosov: they're Anosov with respect to a minimal parabolic. So if rho is Hitchin, Labourie shows you get a map, maybe psi-hat_rho, which goes from the boundary of Gamma into the space of flags on R^n. A flag is a line, a 2-plane, a 3-plane, a 4-plane, all the way up to an (n-1)-plane. And once you have this, you of course have limit maps into RP^{n-1} — you just take the first factor — and the last factor is a map into the Grassmannian of (n-1)-planes. So the projective Anosov property is just sort of a restriction of the existence of this limit map. And another setting which we haven't talked about, but various people have talked about, is ping pong. Going back to Tits, you can play ping pong in SL(n,R).
And if you start with a finite collection of biproximal elements which are sort of in general position — which means that the attracting line of one should never lie in the repelling hyperplane of another — then, if you pass to high enough powers, the group they generate is projective Anosov. And you can play ping pong on RP^{n-1}, and that ping pong construction produces a Cantor set, and that Cantor set is just the image of the limit map. So all that ping pong machinery applies here. And if you want to get the map into the Grassmannian of (n-1)-planes, you just work in the dual projective space, which you can identify with it: the projective space of (R^n)^* is naturally the Grassmannian of hyperplanes. So you can sort of use this duality and win. So there are these various examples. And so now if you start with a general semisimple Lie group and a pair of opposite parabolic subgroups — so in SL(n,R), that might mean P^+ is the stabilizer of a partial flag and P^- the stabilizer of a dual partial flag; in our case, P^+ was the stabilizer of a line and P^- the stabilizer of a hyperplane — then your limit maps go into the associated flag spaces G/P^+ and G/P^-. SL(n,R) mod the stabilizer of a line is exactly RP^{n-1}, et cetera, et cetera. And then you can run this whole formalism — and I think I'm going to skip over this — but you can talk about the Anosov section of the flow. This is a slightly different formulation, but it's really morally the same thing. So you can do a bunch more examples. For instance, you can redo the rank one theory. In the rank one theory, there's only one parabolic subgroup up to conjugacy, and it's just the stabilizer of a point in the boundary. So in PSL(2,C), this is just the set of upper triangular matrices. This is another thing that really bothered me coming from rank one: the parabolic subgroup does not consist of parabolic elements.
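Here is a rough numerical sketch of the ping-pong heuristic just described (the matrices are hypothetical examples of mine): take two biproximal elements of SL(3,R) whose eigenlines are in general position; for high enough powers, a word in them is again biproximal, because each high power is close to a rank-one projection onto its attracting line.

```python
import numpy as np

def is_proximal(A, tol=1e-6):
    """Unique eigenvalue of maximal modulus, up to a relative gap tol."""
    m = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
    return bool(m[0] - m[1] > tol * m[0])

D = np.diag([4.0, 1.0, 0.25])                 # biproximal, det = 1
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])               # conjugator moving the eigenlines
g = D
h = P @ D @ np.linalg.inv(P)                  # same dynamics, different axes

N = 8                                          # "pass to high enough powers"
w = np.linalg.matrix_power(g, N) @ np.linalg.matrix_power(h, N)
```

The attracting lines of longer and longer such words accumulate on the Cantor set that the ping-pong construction produces in RP^2.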
Still troubling to me. It contains all elements which fix infinity. So the set of upper triangular matrices — you can think of this as all isometries of H^3 preserving the point at infinity. And then it turns out that being Anosov is exactly equivalent to being convex cocompact. So there, we've done nothing new. We just have some fancy formalism for thinking about the objects we had before. But it gives you, again, more confirmation that this is the right way to generalize convex cocompact representations: if you're generalizing convex cocompact representations and you restrict to PSL(2,C), you'd better get convex cocompact things, or you probably don't have the right generalization. And the Hitchin representations are Anosov with respect to the set of upper triangular matrices, which is the minimal parabolic subgroup of PSL(n,R). Another example — another thing which I would guess is an example, though I don't know for sure — is that we saw in Pepe's second talk that he constructed these Schottky groups whose limit set was a Cantor set of projective lines in CP^3. Well, what is that? That means they should be Anosov with respect to the stabilizer of a 2-plane in C^4, and then your limit map would naturally go into the space of projective lines in CP^3, which is G/P^+ in that setting. So maybe that's an example too. It's just sort of striking to me, so it probably should be an example. But then there's also this theorem of Guichard and Wienhard, which I've leaned on heavily in my own work, which says that in the end you can always just think about projective Anosov representations. I mean, you start with any old Lie group, and you start with any old parabolic subgroup, and then you get some representation eta, called the Plücker embedding, of your group into PSL(N,R). And a representation rho from Gamma into G is Anosov if and only if eta composed with rho is projective Anosov.
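A sketch of the simplest embedding of this flavor (a standard fact, but the code and example are mine): the symmetric square SL(2,R) -> SL(3,R), acting on binary quadratic forms in the basis (x^2, xy, y^2). It takes diag(l, 1/l) to diag(l^2, 1, l^{-2}), so hyperbolic elements go to biproximal ones — the pattern the Plücker-embedding theorem exploits in general.

```python
import numpy as np

def sym_square(M):
    """Symmetric-square representation of a 2x2 matrix [[a,b],[c,d]],
    acting on Sym^2(R^2) in the basis (e1^2, e1 e2, e2^2)."""
    (a, b), (c, d) = M
    return np.array([
        [a * a,     a * b,         b * b],
        [2 * a * c, a * d + b * c, 2 * b * d],
        [c * c,     c * d,         d * d],
    ])

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 1.0], [1.0, 2.0]])
```

The homomorphism property sym_square(A) sym_square(B) = sym_square(AB) is what makes this an actual representation rather than just a formula.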
So in some sense, if you're scared of your current Lie group, just embed your Lie group in SL(N,R) in the right way, and you can work there. And this is actually a trick which they make extensive use of in their work. It's quite powerful. And you get all these very general things with this general definition: the representations are discrete and faithful, the orbit maps are quasi-isometric embeddings, the image of each element is proximal — whatever that means with respect to your parabolic subgroup — they're stable, and the action of the outer automorphism group is properly discontinuous. Oh, I forgot to say: when we proved well-displacing — if you think about François's proof of Fricke's theorem, that the mapping class group acts properly discontinuously on Teichmüller space, he exactly used the fact that your representations are well-displacing. If you sort of strip it down, that was the whole proof, plus a little group-theoretic fact: that there is a finite collection of curves whose lengths determine your representation. So you put those two facts together and you have a proof of proper discontinuity. And, well, that proof generalizes to show that the outer automorphism group of any hyperbolic group acts properly discontinuously on any space of Anosov representations. So that's another nice confirmation that what we know from the rank one world continues onward. [Question from the audience.] Yeah, sure. So, well-displacing says translation length grows like word length. So if you have a parabolic element, that fails right away: the translation length is 0, the translation length of the square is 0, the translation length of the third power is 0. So the translation length is growing like 0, and the word length is growing linearly, so they're not growing at the same rate. [Question.] Yeah, well-displacing implies discrete and faithful. And proper discontinuity, yeah.
Well, if you think about it: suppose your representation is not discrete. That means there's some orbit of the origin accumulating at the origin. So then you have an infinite collection of elements which are moving the origin by smaller and smaller amounts — an infinite sequence of elements whose translation lengths converge to 0, while the word lengths of those elements go to infinity. That's bad. But it's even stronger. In particular, it implies every group element is hyperbolic in the rank one setting, but it's stronger than that. If you think about it, well-displacing is equivalent to convex cocompact — that's what it is, in the rank one setting. [Question.] Yes — there's a paper by Delzant, Guichard, Labourie, and Mozes doing that. It's an awful lot of firepower for that result. They do have a very general discussion of the relationship between quasi-isometric embeddings and well-displacing. It's in Bob Zimmer's birthday volume. OK. Or if you take the fiber subgroup of a three-manifold fibering over the circle, that's an example: every element is hyperbolic, but it's not well-displacing, because it's not a quasi-isometric embedding. OK. And now there are a number of different definitions. One thing I should say is that we leaned on the fact that Gamma was the fundamental group of a negatively curved manifold to get our geodesic flow. But Gromov has defined a geodesic flow for any hyperbolic group — it's a little tricky to work with — and then you can make this whole thing go through for arbitrary hyperbolic groups. But there are some definitions now which avoid that, by two teams: one, Guéritaud, Guichard, Kassel, and Wienhard; another, Kapovich, Leeb, and Porti. And they have both developed definitions which avoid the use of the geodesic flow, and even definitions which avoid the use of the limit map. So you can sort of look at things.
So those are the two teams. Well — Fanny and François should feel free to yell at me if I've really oversimplified — but to really oversimplify, the work of the first team involves a study of the Cartan projection, and they get a criterion in terms of the growth of the norm of the Cartan projection, which I would think of as a souped-up version of the well-displacing condition. It's a vast strengthening of the well-displacing condition. Is that fair, Fanny? OK, good. Whereas the Kapovich–Leeb–Porti team works directly with the action on the symmetric space: they look at the action on the symmetric space and the action on its Tits boundary, et cetera. So they end up with similar results but with really different and complementary techniques. So let me now do my favorite part of the second talk in 15 minutes. The second talk was supposed to be about the existence of a metric on higher Teichmüller spaces — in particular, a metric on the Hitchin component whose restriction to the Fuchsian locus is the Weil–Petersson metric. So it's a generalization of the Weil–Petersson metric to the setting of higher Teichmüller spaces. This is joint work with Martin Bridgeman, François Labourie, and Andrés Sambarino. And how this work proceeds is: we take a higher Teichmüller space — a family of representations — and we convert them into a family of flows. So the Hitchin component is converted to a family of flows, and each of these flows is a reparametrization of the geodesic flow on a surface. So now we've replaced a family of representations by a family of flows, or a family of reparametrizations of a fixed flow. But the study of a family of Anosov flows is exactly what the thermodynamic formalism is for. And so now, once we've made this transition from representations to flows, we have this huge toolkit called the thermodynamic formalism, and it allows us to do things like define a metric.
So this metric in some sense measures how different two representations are by how different the two reparametrizations — the two flows — are. Now, I'm not going to tell you all about the thermodynamic formalism, but let me tell you how you take a representation and turn it into a flow. This is a very beautiful idea, which goes back to Andrés Sambarino's thesis. And when I learned this — I was just learning all this stuff for the first time — I thought it was one of those ideas that had been around a hundred years, because it's so natural. It was several months later that I learned this was his idea, and I was quite impressed. So what's the idea of this flow? Super simple. By the way, I'm going to post the slides of my second talk if you want to look at the whole thing, but let me just show you this highlight. So you define a real line bundle over (boundary Gamma cross boundary Gamma) minus the diagonal. This ought to remind you of U Gamma tilde, the unit tangent bundle of the universal cover: it had this structure, (boundary Gamma cross boundary Gamma minus diagonal) cross R. So it should maybe not be so surprising that what I'm going to produce is a reparametrization of the geodesic flow. And now pi^{-1}(x, y), the fiber over a point (x, y), is going to be — well, I want a line, and my limit map gives me a line. So I'm going to choose a vector v in psi_rho(x). This is asymmetric — I could write it more symmetrically, but the asymmetric way of writing it is simpler. So: v in psi_rho(x), and I don't want to choose 0, and I'm going to identify v with -v. So it's a choice of vector in the line, up to sign. Or you can think of it as a choice of norm on the line, whatever you want. And OK, so what is the flow on this space?
Psi t tilde takes the point (x, y, v). Well, I don't really know what to do with x and y, so I just leave them fixed. And what do you do if you have a point in R minus 0? We can really only take v to e to the t times v, or e to the minus t times v. Those are almost the only choices for a sensible flow. So here's our flow. Pretty sure it's e to the t times v; I always have to think twice about whether it's e to the minus t. But those are the only reasonable choices. So we think of this as a flow space where we just flow in the line, preserving the line. And this mimics the flow we looked at before, the one that takes (x, y, s) to (x, y, t plus s); it's the analog of that. Now, there's an action of gamma on F sub rho. And what does it do to (x, y, v)? Well, gamma acts on the boundary of gamma, and then you take v to rho of gamma applied to v. So again, it's acting by the group on the first factors and by the representation on the last factor; that should feel familiar. And the fact that this makes sense, this is well-defined, is exactly because psi rho was rho-equivariant. So the equivariance of the limit map is coming in here. And so now what I want to do is quotient out. But the problem is, I don't know right away that my action is nice. So it turns out that what I first do is show that I've produced a reparametrization of the geodesic flow, and once I show that, then I know the action is proper and cocompact. So there's a little lemma you can show, and it's not hard. I'm not going to do it for you; we don't have time. But there exists a gamma-invariant, or gamma-equivariant would be a better way of saying it, Hölder orbit equivalence. And it's going to be Hölder because, again, that's just the natural setting: if you're used to hyperbolic dynamics, everything is Hölder. That's the best smoothness for everything.
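Written out, under the sign convention the speaker settles on (e to the t rather than e to the minus t), the flow and the group action are:

```latex
\tilde\psi_t(x,y,v) \;=\; (x,\,y,\,e^{t}v), \qquad
\gamma\cdot(x,y,v) \;=\; (\gamma x,\,\gamma y,\,\rho(\gamma)v),
% the action is well defined on \tilde F_\rho because the limit map is
% \rho-equivariant: \psi_\rho(\gamma x) = \rho(\gamma)\,\psi_\rho(x).
```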
So what might be interesting: the Hölder exponent feels like it behaves a little bit like the Hausdorff dimension of the limit set. There are some results like that, but that's, yeah, in its infancy. It's in its infancy. So there's a Hölder orbit equivalence; orbit equivalence means a homeomorphism which preserves orbits. It doesn't preserve the flow: it preserves the flow lines, but it doesn't preserve the flow. And you can write this down. Clearly, at the level of the cover, you're going to take x to x, and you're going to take y to y, and then you've got to choose how to map t to v. So you just have to write down a formula for it; it's not that hard. So once you've got that, that implies that gamma's action on F sub rho is proper and cocompact. And that implies that you get an orbit equivalence between U gamma and U rho, which is F rho modded out by gamma. So this is an orbit equivalence. And then it's general theory: this flow, the geodesic flow, is Anosov, so this other flow will also be Anosov; and if you have two Anosov flows which are orbit equivalent, it turns out one is a reparametrization of the other. So now you know that, in fact, this is a reparametrization. Then you can use g to construct a function f, which is to say that U rho is the reparametrization of U gamma by f. And also, another nice thing here: if you start off with the Gromov geodesic flow, then you don't know that it's Anosov. You don't have this theory of negatively curved manifolds; you don't know you have an Anosov flow, even in the metric sense. But it turns out you can take this construction and use the Anosov property of the limit maps to prove that this flow, F sub rho, is metric Anosov.
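For reference, one common convention for what "reparametrization of a flow phi by a positive Hölder function f" means (my notation, not verbatim from the talk): the new flow travels the same orbits, but the time it takes is rescaled by f,

```latex
% \alpha(z,t) is defined implicitly by accumulating f along the orbit,
\int_0^{\alpha(z,t)} f\bigl(\phi_s(z)\bigr)\,ds \;=\; t,
\qquad\text{and then}\qquad
\phi^{f}_t(z) \;=\; \phi_{\alpha(z,t)}(z).
```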
So this tells you something about the dynamics: the Gromov geodesic flow of a group which admits an Anosov representation is Anosov, which is something we didn't know before, and which we don't know whether it's true for general hyperbolic groups. There's something special about the hyperbolic groups occurring here. No, not natural. No, it's just Hölder. It's just Hölder, yeah. You can sort of write that; you do an averaging procedure. So the point is, if you end up with something which is a reparametrization by a Hölder function, it's differentiable in the direction of the flow. But this map, we don't start out knowing it's differentiable in the direction of the flow. So there's a little bit of abstract dynamical systems which is allowing us to make an upgrade in regularity using the Anosov property; it uses the Anosov property in a crucial way. So rho gives rise to U sub rho, and there exists f sub rho such that U sub rho is U gamma reparametrized by f sub rho. So now we get a map. For instance, the Hitchin component H_n(S) goes into the space of Hölder functions on U gamma, right? You start with a representation, and you end up with a positive Hölder function defined on the flow space. And then there's also something called the pressure function, which is associated to the flow. And there's a little fact: P is analytic, and P of negative h times f rho equals 0 if and only if h is the topological entropy. What? Yeah, right. So for people from the rank one world, Ruelle showed that the Hausdorff dimension of the limit set of a quasi-Fuchsian group varies analytically; this whole project is a big jump up from that. So h is the topological entropy. So this is an analytic function, and this tells you that once you can show that f rho varies analytically, which again is a nasty exercise in the Hirsch–Pugh–Shub technology, then you can show that the topological entropy of this flow varies analytically.
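The entropy characterization just stated, in symbols (P is the pressure function of the geodesic flow on U gamma, and f rho is the positive Hölder function attached to rho):

```latex
P(-h\,f_\rho) \;=\; 0
\quad\Longleftrightarrow\quad
h \;=\; h_{\mathrm{top}}(U_\rho),
% so analyticity of P, together with analytic variation of
% \rho \mapsto f_\rho, gives analytic variation of the entropy.
```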
But what is the topological entropy of a flow? It's exactly the exponential growth rate of the number of closed orbits of length at most t. So you take the log of the number of closed orbits of length at most t, divide by t, and pass to the limit. And the fact that the limit exists at all, where normally you'd just have to take a lim sup, follows from the flow being Anosov. So the topological entropy h is the limit as t goes to infinity of one over t times the log of the number of closed orbits of U rho of length at most t. Equivalently, that's going to be roughly the number of closed geodesics in the quotient symmetric space of length at most t. And then there's some very famous work of Sullivan, which says this topological entropy in the rank 1 case is just the Hausdorff dimension of the limit set. So one thing that comes out of this is that if you take any analytically varying family of convex cocompact representations into any rank 1 Lie group, the Hausdorff dimension of the limit set varies analytically. This was previously only known for surface groups in PSL(2,C), free groups in PSL(2,C), and free products of surface groups and free groups in PSL(2,C). And Samuel Tapie, I believe, proved a C^1 version in a fairly general setting; I'm not sure how general. But this is right. So right away, before we even get to defining the metric, just at this stage, we've got analyticity of everything, analyticity sort of everywhere. And this is just the beginning of the thermodynamic formalism. Now we start hitting the thermodynamic formalism hard. It turns out that the space of pressure zero functions admits a semi-norm, which is a positive semi-definite, or non-negative, form. And we just pull that back by this mapping. So we pull back from this: we take from here, we go in, this gives us a mapping into the space of pressure zero functions, and we have a form there.
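The growth-rate definition of entropy can be illustrated numerically. A minimal sketch, under assumptions: the counts below are synthetic, mimicking the prime orbit asymptotic N(T) ~ e^{hT}/(hT) for an Anosov flow with entropy h, and `entropy_estimate` is my own illustrative helper, not anything from the talk.

```python
import math

def entropy_estimate(orbit_counts):
    """Estimate topological entropy h from counts N(T) of closed
    orbits of length at most T, via h ~ log N(T) / T for large T."""
    T, N = max(orbit_counts.items())  # dict items compare by T first
    return math.log(N) / T

# Synthetic counts mimicking N(T) ~ e^{hT} / (hT) with h = 1
# (the shape of the prime geodesic theorem on a hyperbolic surface).
h = 1.0
counts = {T: int(math.exp(h * T) / (h * T)) for T in (10, 20, 30)}
est = entropy_estimate(counts)
# est approaches h = 1 only slowly as T grows, because of the
# polynomial correction 1/(hT) inside the logarithm.
```

The slow convergence is why one divides by T inside a limit rather than reading h off any finite count.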
We pull that form back onto the Hitchin component, and we then have to show that it's positive definite. So that's nasty. When you want to show positive definiteness, it's kind of nasty, and it involves a lot of trace identities in the end. And this is all a generalization, also a generalization, of Thurston's idea that the Weil–Petersson metric should be the Hessian of the length of a random geodesic. So that's another starting point for this whole thing. But I better stop.
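For completeness, the pressure semi-norm alluded to above, as I recall it from the written account (up to normalization, so treat this as a sketch): for a pressure-zero function Phi with equilibrium state m_Phi, and a tangent direction g with integral zero against m_Phi,

```latex
\| g \|_{P}^{2} \;=\; \frac{\operatorname{Var}(g,\,m_\Phi)}{\,-\int \Phi \, dm_\Phi\,},
% Var(g, m_\Phi) is the dynamical variance of g with respect to the
% equilibrium state; the denominator is positive since \Phi < 0 here.
```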