So I'll start by reminding you where we left off last time. We're considering a stratum of abelian differentials. So the stratum is some set of abelian differentials (X, omega), which we can draw as a collection of polygons with edge identifications. And we have local coordinates that we've written in terms of their real and imaginary parts. And we're thinking about the dynamics of g_t = diag(e^t, e^{-t}), which acts linearly on these polygonal pictures. And our guess was that if (X, omega) and (X', omega') have the same x_j — the same real parts of coordinates — then the g_t orbits converge to each other. So these get close, exponentially fast, as t goes to infinity. We may be assuming something about the g_t orbit, like it doesn't just go off to infinity — something that would be generic. And this guess came from thinking about the picture where we wrote the coordinates as a matrix. And locally, the action was very simple: you're just acting by this 2-by-2 matrix on each coordinate, just the action of GL(2, R) on R^2. And so you see that this contracts the imaginary parts. So somehow it doesn't really matter what the imaginary parts are; they all get sucked in together exponentially fast. Except there's a change-of-coordinate matrix if you come around and come back to where you started. And this is a matrix in GL(n, Z) that's called the Kontsevich–Zorich cocycle. And it's the non-trivial part of the dynamics, because the diagonal action is evidently very easy to understand. It's the linearization of the cut and paste. And you can also think of it as follows. So you have your stratum, and then you have this g_t orbit. So you say maybe it starts off here, and then it comes back close by. And you think about parallel translation of some homology class. So here I'll fix some gamma in H_1 of the surface. And as I act by g_t, I'm changing the metric, but the topology isn't changing.
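Just to make the local picture concrete, here's a toy numerical sketch (my own, not from the lecture): apply g_t = diag(e^t, e^{-t}) to some made-up period coordinates x_j + i y_j and watch the imaginary parts contract while the real parts expand.

```python
import math

def g_t(t, periods):
    """Act by diag(e^t, e^{-t}): real parts scale by e^t,
    imaginary parts by e^{-t}."""
    return [complex(math.exp(t) * z.real, math.exp(-t) * z.imag)
            for z in periods]

# Hypothetical period coordinates of some (X, omega).
periods = [complex(1.0, 0.7), complex(0.3, -1.2), complex(-0.5, 0.4)]
for t in (0.0, 1.0, 5.0):
    moved = g_t(t, periods)
    print(t, max(abs(z.imag) for z in moved))  # decays like e^{-t}
```

The point is that the difference between two surfaces with the same real parts lives entirely in the imaginary coordinates, which this action crushes exponentially.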
So in other words, I can just take this homology class along for a ride. And in fact, I could take a whole basis of homology classes. I can transport them all along the path and see what basis I get. And I get a change-of-basis matrix, which is exactly what the Kontsevich–Zorich cocycle is. So it's a monodromy matrix. Or if you like, you can think of closing up the path, and then it's literally a monodromy matrix, if you're used to thinking about monodromy. OK, so we want to show that the Kontsevich–Zorich cocycle somehow loses out to this very strong exponential term here. It's powerful, but not as powerful as these exponentials. And to do that, we're going to focus on not the whole matrix, but just one homology class. So we're going to take one homology class, parallel transport it around, and see how fast it grows. And although the asymptotics of this don't depend on the metric, our study of this problem will depend on picking a nice metric with good analytic properties. And that metric is called the Hodge norm. And versions of this are used whenever you have a family of Riemann surfaces or a family of algebraic varieties — you're studying variations of Hodge structure. So it's a pretty important thing in general. So the background, maybe, is this. Let's take X, and write H^1(X, C) — first cohomology with complex coefficients — as H^{1,0} + H^{0,1}. This is the Hodge decomposition, valid for all Kähler manifolds. H^{1,0} is the space of abelian differentials, and H^{0,1} is its complex conjugate. So the holomorphic one-forms and the anti-holomorphic one-forms. And I can think about the usual cup product or intersection pairing, which says: I have two cohomology classes omega_1 and omega_2, and I calculate their pairing as (i/2) times the integral of omega_1 wedge the conjugate of omega_2. So this is something topological. This doesn't depend on the complex structure — you can do this for a topological surface.
It doesn't need a complex structure. So just a smooth surface without a complex structure, you can do this for. Well, perfect question. So you can define the pairing on the whole thing, and it's just some topological thing there, right? You can do this even for differential forms that aren't holomorphic. But we can think about what this looks like on H^{1,0}. So for example, I'll claim that if eta is in H^{1,0}(X), then the pairing of eta with itself is greater than or equal to 0, with equality if and only if eta is equal to 0. And actually, you can be more precise: the pairing of eta with itself is what we've been calling the area. So if you represent the abelian differential in terms of polygons, this is just the area. And the reason why: if locally we pick some coordinate where eta is dz, and I write z = x + iy, then (i/2) eta wedge eta-bar is dx wedge dy. So similarly, you can show that this pairing is negative definite on H^{0,1}. So overall, it's a Hermitian pairing of signature (g, g) on this whole vector space. And it's important to realize that as I change the Riemann surface, for example by moving along a g_t orbit, the pairing of course doesn't change, because it's just topology — the surface is not changing topologically. But the subspaces H^{1,0} and H^{0,1} do change, because they depend on the complex structure. So on top of the Hodge decomposition, you have what's called a variation of Hodge structure: you have these two spaces whose direct sum is the whole thing, and they're moving around. And so even if you picked a class in H^{1,0}, when you change the Riemann surface, it would move. So you do have a positive definite inner product on H^{1,0}, but it doesn't immediately help you. But there's a pretty easy way to make it help you. So there is an isomorphism from H^{1,0}(X) to H^1(X, R) that sends eta to the real part of eta. Sometimes this is called the Hodge representation theorem.
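The local computation behind the positivity claim, written out in one display (this is just the one-line check mentioned above):

```latex
\frac{i}{2}\,\eta\wedge\bar\eta
  \;=\; \frac{i}{2}\,dz\wedge d\bar z
  \;=\; \frac{i}{2}\,(dx + i\,dy)\wedge(dx - i\,dy)
  \;=\; dx\wedge dy ,
```

so integrating (i/2) eta wedge eta-bar over X gives the area, which is positive and vanishes only when eta does.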
So for example, the fact that this is onto says that for any real cohomology class, you can find a holomorphic one-form with that real part. Or if you think about it the other way, that this is injective, it says that if the real part of a holomorphic one-form is 0, then the one-form must be 0, which is pretty intuitive. And so we can define the Hodge norm on H^1(X, R) via this isomorphism. So maybe I'll give it a name — h, say. In other words, if I want the Hodge inner product of two real cohomology classes, what I should do is look at the abelian differentials that give rise to them and then pair those. Questions so far? So this gives a family of norms, and it varies — it changes, but it changes nicely as you move the Riemann surface. And there's also a version on complex cohomology. What you do, up to a factor of two that I'll never remember, is you just change the sign on H^{0,1}. So you have an inner product on H^{1,0} and the negative of an inner product on H^{0,1}, and they're orthogonal, so you just change the sign and then you get an actual inner product. But it depends on the Riemann surface structure. So now I want to start thinking about how the Hodge norm varies along the orbit. Oh, and I should say, of course, there's a dual norm on H_1. In the motivation I was actually thinking in terms of homology, because that's more intuitive, but then you actually work in terms of cohomology, and it's no big deal. So I want to think about how the Hodge norm varies as you move along the orbit, and there are two special cohomology classes that we'll start off by considering, namely the real part of omega and the imaginary part of omega. So I want to consider how their Hodge norm varies. And this will be the one thing that's really easy and transparent to do with the Hodge norm, so it's a good warm-up. So I'm going to consider how the Hodge norm of, let's say, the real part of omega varies along g_t(X, omega).
And actually, I'll calculate it exactly. And just to fix the convention, let's say the area of omega is 1. So let's say g_t(X, omega) = (X', omega'). To start off, I want to think about what cohomology class omega' is. And this is actually really straightforward, because remember, we're just acting on the real and imaginary parts. So for example, you could ask: what are the periods of omega'? Well, the periods of omega' are the periods of omega, except you've made the real parts bigger by e^t and the imaginary parts smaller by e^{-t}. So in other words, you've just scaled the real part of omega by e^t and the imaginary part by e^{-t}. So — maybe I should say t instead of prime — the class of omega_t is e^t times the class of the real part of omega, plus i e^{-t} times the cohomology class of the imaginary part. These are the periods of that. So this, you could think, is gamma_i, and then x_j + i y_j is the integral of omega over gamma_j. Yep, yeah — like if you take a differential form and you multiply it by 2, its integral over whatever you're integrating it over increases by 2. Well, so what I'm saying is, we start off not knowing what this class is, but it's going to be easy to figure out. We say: OK, I don't know exactly what this is, but it's some differential form and I know its periods. It's some cohomology class and I know its periods. So which cohomology class has the right periods? Well, it's the cohomology class of omega, except I've scaled the real and imaginary parts, because that is a cohomology class and it has the right periods. I guess maybe a missing piece of this discussion is that H^1(X, C) is the dual of H_1(X, C), so a cohomology class is determined by its periods. OK, so now this is going to be easy. So I want to know the Hodge norm of the real part of omega. So — oh yeah, this should have an i.
Yeah, thanks, Dore. Anybody else found any other horrible typos? OK, so I want to compute the Hodge norm. So what am I supposed to do? Where is it? It's right here. Here it is. I'm supposed to figure out what abelian differential represents that cohomology class and then just take its area — that's what the definition says. So the abelian differential on X_t with real part equal to the real part of omega is — well, here I have an abelian differential, omega_t, and it has e^t times that real part. So I just have to scale it. So it's e^{-t} omega_t. Now, area is preserved by the g_t action — if you want, you can think of it as the determinant of g_t being 1 — so the area of omega_t is the area of omega, which we assumed is 1. And this Hodge norm, we said, is just computing area. So in other words, the Hodge norm of the real part of omega on X_t is e^{-t}. No, it should be e^{-t}: you're supposed to wedge with the conjugate, but then you actually have to take a square root somewhere — the norm is the inner product with itself, and then you take a square root. OK. And so similarly, if I take the imaginary part of omega and ask for its Hodge norm, the Hodge norm is e^t. So we do have some cohomology classes whose size grows very, very quickly. But somehow, they're the expected ones. We completely understand what's going on with the real part of omega and the imaginary part of omega. That's the part where we have a precise, on-the-nose, trivial understanding. And really, the goal will be: if I take some other cohomology class that's not one of those, can I understand how its Hodge norm is changing? So I'm going to denote the Hodge norm on X_t by a norm with subscript t. And we have the following theorem of Forni. Let c be in H^1(X, R). OK, so c is going to be — instead of using homology, I'm using cohomology. And I just want to parallel translate this and see how it grows in Hodge norm.
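Putting the chain of identities from this computation in one place:

```latex
[\omega_t] = e^{t}[\operatorname{Re}\omega] + i\,e^{-t}[\operatorname{Im}\omega]
\;\Longrightarrow\;
\operatorname{Re}\bigl(e^{-t}\omega_t\bigr) = [\operatorname{Re}\omega],
```

```latex
\|\operatorname{Re}\omega\|_{t}^{2}
  = \operatorname{Area}\bigl(e^{-t}\omega_t\bigr)
  = e^{-2t}\operatorname{Area}(\omega_t)
  = e^{-2t},
\qquad\text{so}\qquad
\|\operatorname{Re}\omega\|_{t} = e^{-t},
```

and the same argument applied to the imaginary part gives the Hodge norm e^t.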
And I'm going to let alpha_t in H^{1,0}(X_t) be such that the real part of alpha_t is c. We're not surprised to see this play a role, because this is what shows up in the definition of the Hodge norm. Then d/dt of ||c||_t^2 is -2 times the real part of something called the B form: -2 Re B_{omega_t}(alpha_t, alpha_t), where the B form — sometimes it's written B_omega — is B_omega(alpha, beta) = (i/2) times the integral of alpha beta omega-bar / omega. The derivative of the norm of c, yeah. So c is not changing — I put the subscript in the wrong place, thank you — but the norm is changing. So it's sort of interesting, right? Alpha_t is expected to tell us the norm of c, but it's actually also telling us the derivative of the norm. I don't think B stands for anything. The ratio omega-bar / omega is certainly a Beltrami differential. And Teichmüller theorists will recognize this as the formula for the derivative of the period matrix. And the proof is identical to the proof of the formula for the derivative of the period matrix. So in some sense, they must be totally equivalent, but it's actually easier just to derive this than it is to first derive the formula for the period matrix and then try to get one from the other with the right signs. I'm not even sure anybody's written down an exact way to get one from the other, although they must be totally equivalent. OK, so by the way, you should at least type-check this. The integrand is of type dz times dz times dz-bar over dz. So two dz's cancel, and this is of type dz dz-bar. So this does integrate. So OK, this is some formula in terms of differentials. And I thought about proving it for you, but I decided I'd spare you. The proof is not so hard — I could do it in 10 minutes. It's just a question of, at the end of the class, you might not remember anything other than some computations that seemed unfamiliar to you. It's the sort of proof that, I think, even if you know Teichmüller theory, you've got to go and sit down with for an hour to extract the actual meaning from.
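For reference, here is the statement and the B form in display form (just transcribing the formulas above):

```latex
\frac{d}{dt}\,\|c\|_{t}^{2}
  = -2\,\operatorname{Re} B_{\omega_t}(\alpha_t,\alpha_t),
\qquad
B_{\omega}(\alpha,\beta)
  = \frac{i}{2}\int_{X} \frac{\alpha\,\beta\,\bar\omega}{\omega},
```

where alpha_t is the holomorphic one-form on X_t with real part c, and the type check dz · dz · dz-bar / dz = dz dz-bar confirms the integrand is a 2-form.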
Oh, could I re-derive those formulas? Yeah? OK, so here I was able to give an exact formula for how the Hodge norm is changing. In general, you can't do that. The variation of Hodge structure is considered a very mysterious thing. Somehow, it's always enough to understand the variation of Hodge structure — it encodes everything about how the Riemann surface is changing, but in a very mysterious way. The periods of all abelian differentials, or another way of putting it, how this Hodge decomposition varies: if you know the periods of all holomorphic one-forms, then you know the Riemann surface. That's the Torelli theorem. Yeah, this tells you the change to first order. There is a g_t — X_t is defined to be the Riemann surface you get from g_t. So I'm keeping the convention here that g_t(X, omega) = (X_t, omega_t). So this is when c is some very special cohomology class. So these are two special choices for c, and I want to understand how the norm is changing. And then what's going on here is, now I'm going to pick some other c. It's not the real part of omega. It's not the imaginary part of omega. It's not some linear combination of them. It's just some arbitrary cohomology class. Yeah, so let me redraw the picture. But then c is a cohomology class — yeah, it's independent of the complex structure. The Hodge norm depends on the complex structure; there's a t here. So the definition of this Hodge norm is: you take the holomorphic one-form whose real part is c — with respect to X_t, with respect to the complex structure at time t — and then you compute its area. Question? No question? OK. Other questions? Um, yeah, but somehow it doesn't help to think of it that way, because it's very hard to compute this guy. This is a very analytic formula. I don't really know what alpha_t is, but it's something. I mean, this is not unfamiliar to an analyst, right?
There's some quantity you don't know, but you can understand its derivative in terms of what that quantity is. So if you understand where it starts, and you understand its derivative in terms of where it is, you'll get some estimate on the growth rate. Yeah, and it'll be a super easy one that doesn't even bear writing down, because we're not going to use very much about the growth rate. So this is a good norm to use because we can compute its derivative. And maybe I'll write down one more thing and then we'll continue this discussion, because then it'll be clear why this is so good. So let me write a corollary of this. Rather than the derivative of the norm, it's more helpful to compute the derivative of its log if we're going to be studying exponential growth rates. And d/dt of log ||c||_t is minus the real part of this B form term, divided by ||c||_t^2. So I have some formula for the derivative of the log. And I'm not going to do these computations, because I don't think it would help you to see them, but it's very easy to see that the absolute value of this is less than or equal to 1 — there's some Cauchy–Schwarz argument — with strict inequality if c is not proportional to the special classes, that is, if c is not in the span of the real and imaginary parts of omega. So we have that one cohomology class which grows at rate e^t, which is enough to compete with the e^t in that matrix diag(e^t, e^{-t}). And in fact, if you think about it, it had to be that way, because it balances out with that. But other than that, the derivative of the log is strictly less than 1. And that's enough to tell you you get a smaller-order exponential growth rate. So maybe we'll just recap the discussion. Yeah. I mean, it's obvious that I didn't do the computation, but it's essentially: when does equality hold in Cauchy–Schwarz?
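The growth estimate being alluded to is just integrating the derivative bound. A sketch: if |d/dt log ||c||_t| <= lambda < 1 along the relevant portion of the orbit, then

```latex
\Bigl|\log\|c\|_{T} - \log\|c\|_{0}\Bigr|
  = \Bigl|\int_{0}^{T}\frac{d}{dt}\log\|c\|_{t}\,dt\Bigr|
  \le \lambda T,
\qquad\text{so}\qquad
e^{-\lambda T}\|c\|_{0} \;\le\; \|c\|_{T} \;\le\; e^{\lambda T}\|c\|_{0},
```

a strictly smaller exponential order than the e^T coming from the diagonal action.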
OK, the way it actually ends up happening is through alpha_t being not proportional to omega_t — that's how it shows up in Cauchy–Schwarz. So this condition is very nice because it's independent of t. Yeah, this condition is independent of t. So if you start off with this condition, you keep this condition. Which we already know, because if you start off with c being a multiple of the real part of omega, then that'll continue — I mean, c doesn't change. Yeah, there's an identification provided by this parallel transport. So remember, the picture we have is something like this. I start here, and then I wait until I come back. And as I change the complex structure on X_t, the homology isn't changing. So a homology class or a cohomology class can come along for the ride and be parallel transported. Technically, there's something called the Gauss–Manin connection — a flat connection that allows you to parallel translate. But it's really easy; the name makes it sound more complicated than it is. You parallel translate along the loop. And then when you come back close by, you also have a comparison: on the one hand, you parallel translate, and on the other hand, the bundle is locally trivial. And your goal is to compare those. And actually, that's a really hard problem, and it's not easy to know what to do. So we're doing something sophisticated by saying: we're going to pick the right norm, and we'll instead understand just infinitesimally how the norm grows. No, no — there are other ways of doing this sort of thing, like using Teichmüller theory, but everything uses something a bit sophisticated. I mean, essentially, we're understanding this matrix as the linearization of cut and paste. And except in very, very special cases, it's not super obvious how to do that directly.
OK, so yeah, the recap is: we show that if you take some homology class and you parallel translate it, it doesn't grow very big. So we had this picture that looks like diag(e^t, e^{-t}) times (x_1, ..., x_n, y_1, ..., y_n), and then the Kontsevich–Zorich cocycle. And this is given by just parallel translating. What? Parallel translating just means the homology class comes along for the ride. So let me draw a picture. It's a trivial bundle. So here I have X_0, and here I have, I don't know, X_{1/2} or something. And I got this just by continuously deforming the metric. Well, the homology of the surface hasn't changed. So what's the parallel translate of this curve gamma? It's the same curve. You've changed the metric; the complex structure is changing. Yeah, OK — so the metric and the complex structure are changing, but the topology isn't changing. But we don't know how to do that directly. No. I mean, I'd be happy if it were more elementary also, but this is somehow a subtle point. And actually, often I don't do this when I teach mini-courses, and then I've been getting complaints that say: you make everything seem really easy, and then I try to read papers, and it's a really hard subject, I can't read any of these papers. So I'm trying to show you a little bit of the actual meat. I mean, with a computer, I could approximate it. I don't think it's going to save you anything, because to write down that basis, especially with the real and imaginary parts, you're going to need the period matrix — that basis depends on the period matrix, because that's the complex structure. So then you're going to have a formula with the period matrix in it. And then you're going to have to either show the growth rate of the period matrix along the cycle, which we don't know how to do, or you're going to take the derivative of the period matrix, which is, in fact, what's going on here.
This B form is the derivative of the period matrix. In that formula, you're trying to solve for alpha_t, and the first order term is going to be easy — that's how you get the derivative. That's why you restrict yourself to the first order term: finding the other terms becomes a very global problem. OK. So essentially, we've shown that this matrix — except for the part relating to the real part and the imaginary part of omega, which is sort of obvious — is of smaller exponential order than e^t. When we had this guess about two things getting close, we had the restriction that they have the same real part. So the cocycle is somehow not too big. It's pretty big — it can be exponentially big — but except for the obvious part coming from the imaginary part of omega, it's of smaller exponential order. As long as your geodesic recurs — because if your geodesic went straight out to infinity, you'd always have strict inequality, but the derivative might tend to 1. If your geodesic spends a lot of time in a compact set, then there's some uniform bound on the compact set, and then you get actual contraction. So what you get from this analysis — the way a dynamicist would say it — is that g_t is non-uniformly hyperbolic. Which is saying there are these stable manifolds, the sets with the same real parts, and when you apply the flow, points on them get close together. And the "non-uniform" refers exactly to the fact that if you just go out the cusp, you don't know what the contraction is, because you just have the strict inequality — and that could go to 1 as you go out the cusp. And you should compare: the easiest hyperbolic system to compare to, if you're not a dynamicist, is the action of the matrix (2, 1; 1, 1) on R^2 mod Z^2, which is just the torus. So here you have a contracting eigendirection and an expanding eigendirection. So at every point, you have some expanding directions and some contracting directions.
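For the toral comparison, the expanding and contracting eigendirections can be checked directly — a small numerical sketch of my own:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])  # hyperbolic automorphism of the torus R^2 / Z^2
evals, evecs = np.linalg.eig(A)
lam, mu = max(evals), min(evals)
# trace 3, determinant 1, so the eigenvalues are (3 +/- sqrt(5)) / 2
print(lam, mu)   # ~2.618 (expanding), ~0.382 (contracting)
print(lam * mu)  # determinant 1: the expansion and contraction rates exactly balance
```

Every point of the torus has one direction stretched by the larger eigenvalue and one compressed by its reciprocal, which is the uniform model of the behavior described above.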
Similarly, here it's like having the same real parts: everything gets sucked in together. OK, you actually have to work a little bit harder for non-uniform hyperbolicity — I'm sweeping a few things under the rug. But — no, no, it does follow. It does follow. Is the expanding direction omega, and the other directions contracting? No, no, no. The expanding directions are all of the directions where you change the real parts, because they get hit with the e^t and the Kontsevich–Zorich cocycle isn't as strong as the e^t. The contracting directions are the imaginary parts, because they get hit by e^{-t} and the Kontsevich–Zorich cocycle isn't as strong as the e^{-t}. So anyways, the main point I want to make is that these are very chaotic dynamical systems, the ones that are hyperbolic. So in particular, you get that g_t acting on the unit area locus in a stratum — the locus where all of the surfaces have the same area — is ergodic. So I said before, this has some measure, which is finite. And roughly, ergodicity follows from hyperbolicity via what's called the Hopf argument, which is a standard way to prove ergodicity in this context. OK, so these estimates are due to Forni; they're relatively recent. Ergodicity goes back way before that, due to Masur and Veech independently. And I've actually never read Veech's proof, but Masur at least used Teichmüller theory. As I said, these aren't really such easy problems; it's helpful to have some technology to use. Is it related to Anosov? Yeah, you do have a decomposition of the tangent bundle. But Anosov generally refers to something stronger, where you have uniform hyperbolicity in a very nice way. This is a weaker form of hyperbolicity. Other questions? OK, so that concludes the section of the mini-course on the dynamics of g_t, which, as I said before, is sometimes called the Teichmüller geodesic flow.
The details won't be important for the rest of this course. I'm not going to talk about the Hodge norm again. That was just to give you a flavor of some of the meat that goes into understanding the dynamics. Now I want to talk about, rather than g_t, all of GL(2, R). And I think maybe I'll get right to the point with the recent big theorem, due to Eskin, Mirzakhani, and Mohammadi, which says GL(2, R) orbit closures are manifolds. Yeah, the fact that orbits are manifolds is trivial, because they're just modeled on GL(2, R) — thank you. GL(2, R) orbit closures are manifolds. So you look at the GL(2, R) orbit. The GL(2, R) orbit contains the g_t orbit, which is a very chaotic thing. So usually, g_t orbits on their own are dense. So in particular, usually, GL(2, R) orbits are dense. But they're not always dense. Last time we talked about how, very occasionally, they might be closed, and then the surface satisfies the Veech dichotomy and has very special properties. So this theorem says: the orbit closure might be dense, in which case the orbit closure is everything — that's a manifold. It might be closed — that's a manifold. It might be something in between, but it'll be a manifold. And this is something you should pause to appreciate, especially if you're not in dynamics. Think of the scale from real analysis to complex analysis, where in real analysis, everything you wanted to be true is false — functions are not differentiable at any point, et cetera — and in complex analysis, everything is miraculously true: once differentiable implies infinitely differentiable. Dynamics falls extremely far on the real analysis side. People don't prove theorems like this. On that spectrum, this is a complex analysis style statement. People don't prove things like this in dynamics very often. They show that orbit closures can be fractals of arbitrary Hausdorff dimension. That's a typical statement in dynamics, one that you sort of nod along to. You see this one, and you're like: what is going on here?
And so I'll tell you a little bit about what's going on here. First of all, though, I just want to finish the statement, because it's actually even stronger than this: these are manifolds cut out in local coordinates by linear equations. Yeah, so you have these local coordinates given by edges of the polygons, and you're cut out by linear equations — you're even locally linear. This is a really hard theorem. This was one of the big things for which Mirzakhani was awarded the Fields Medal. So this is a 200-page paper plus a 100-page paper, building on a decade of very active work in two different fields. So I'll give you examples in a minute, but first of all, what inspired them to try to prove this? So yeah, the fact that we said the stabilizers are discrete — we talked about that. It's essentially coming from period coordinates: the fact that period coordinates are coordinates tells you an orbit locally looks like GL(2, R). Well, for example, you could ask: would this be true for the g_t action? What if I just look at g_t orbit closures? And there, the answer is no. g_t orbit closures can be fractals of any Hausdorff dimension. The comparison is that you look at geodesic flow on a hyperbolic surface — on the unit tangent bundle of a hyperbolic surface. And there, it's a theorem that the closure of a geodesic can have Hausdorff dimension anything between 1 and 3. It's a theorem due to — I don't remember who. Anyways, it's been known for decades, I think. So really, dynamics is very complicated, right? Essentially, the reason you expect this in dynamics is chaos, or sensitivity to initial conditions. So you expect that if you change the initial conditions the tiniest bit, that won't change the first chunk of your orbit, but it might change the rest arbitrarily much.
So you sort of expect, sometimes at least, when you have a system like this, that you can very closely prescribe exactly what you want an orbit to do. Not exactly exactly — each point has to be the image of the previous point — but in the large, you can really prescribe what things do. Sometimes. I'm going to defer that question to the end. The question is about period coordinates and how big the domains are. We don't actually have a very good understanding, and I think trying to answer would confuse the issue. So it's a good question, and I just don't have anything helpful I can say in one minute about it. So I want to talk about the inspiration for this work, to help you put it in context. So this is part of a general theory due to Ratner. I'm going to present a special case due to Dani and Margulis. So I'm going to look at SL(3, R) / SL(3, Z). This is a great space that you should be familiar with: it's the space of unit volume tori of dimension 3. And SL(3, R) acts on this by multiplication on the opposite side from the one you mod out by. And I want to think of a particular one-parameter subgroup — a particular R action on this via matrices. And just to pick one, I'm going to pick the upper triangular matrices with 1, 1, 1 on the diagonal and entries t, t, and t^2/2 above it. So hopefully I wrote that right, so that this is a one-parameter subgroup. So this gives an R action on this space. And then the statement is: every orbit closure is a manifold. And more than that, it's actually a sub-homogeneous space — this is what's called a homogeneous space, and the orbit closures are themselves homogeneous. By the way, a technicality: technically this is an orbifold, not a manifold. So the orbit closures aren't manifolds, they're orbifolds. And similarly, strata are orbifolds, so you're not going to get a manifold inside an orbifold. And to be really technically correct, I should say it's an immersed orbifold — it can cross itself. Legal jargon has now been completed. So this is a really important statement. What's special about this?
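You can verify directly that the matrices just written down form a one-parameter subgroup and are unipotent — the t^2/2 entry is exactly what makes the group law work. A quick check (my own sketch):

```python
import numpy as np

def u(t):
    # the unipotent one-parameter subgroup from the lecture
    return np.array([[1.0,   t, t * t / 2.0],
                     [0.0, 1.0,           t],
                     [0.0, 0.0,         1.0]])

s, t = 0.7, 1.9
assert np.allclose(u(s) @ u(t), u(s + t))           # group law: u(s) u(t) = u(s + t)
assert np.allclose(np.linalg.eigvals(u(5.0)), 1.0)  # unipotent: all eigenvalues are 1
print(u(10.0)[0, 2])  # entries grow polynomially in t (here t^2/2 = 50), not exponentially
```

Without the t^2/2 in the corner, u(s) u(t) would not equal u(s + t), so the family would not be a subgroup.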
So I can't replace this subgroup with just anything. If I put some e^t's on the diagonal, this would display the chaotic behavior of geodesic flows, and it would be a total lost cause — it would be quantifiably, extremely, super false. So what's special about this subgroup is that it's unipotent. Unipotent means all the eigenvalues are 1. And in particular, that means there's polynomial growth — the entries you're staring at, like t^2/2, are polynomials, contrasted with an e^t, which would be exponential. OK, so in general, Ratner's theorem says that if you look at any unipotent one-parameter flow on a homogeneous space, then all of the orbit closures are nice. And this is a super important theorem. People use it everywhere — they use it a ton in number theory, physics, algebraic geometry, literally everywhere — because it's applicable whenever you have some sort of symmetry group which is a Lie group. And Lie groups show up everywhere, because the symmetry group of anything is typically a Lie group. And then you can try to understand your object up to symmetries. OK, so that was proven decades ago. What Eskin–Mirzakhani–Mohammadi did isn't the exact analog of that. The exact analog would be orbit closures for the subgroup (1, t; 0, 1) — that's the unipotent subgroup. And that's the real holy grail. That would allow you to clean up all sorts of old problems. People would love to do that. They can't — I mean, there are good mathematicians who've spent 15 years trying to solve that problem. So they needed a little bit more than just the unipotent flow. And correspondingly, their proof is nothing whatsoever like Ratner's, because Ratner's is all about unipotent orbits, and we still understand almost nothing about orbits of (1, t; 0, 1). Not at the level needed for a Ratner-type theorem. I shouldn't say almost nothing — we understand some useful things, but not nearly at the level needed for a Ratner-type theorem. And I think the feeling now is that the (1, t; 0, 1) version is probably false. OK.
So I want to just briefly tell you about another development that was one of the more proximal motivations for the recent attack of Eskin–Mirzakhani–Mohammadi. That was work of Benoist and Quint, who in the simplest case were thinking about the SL(2,Z) action on the torus R^2/Z^2. I'm just going to say this very briefly — I'm not going to write it, because it's pretty far from the main topic. They said: let's think of a random walk on SL(2,Z). Pick your two favorite matrices — don't be stupid and pick one to be a power of the other — and give them each probability a half. Toss a coin over and over again, pick one or the other, and then track the orbit of a point on this torus. What can you say about it? They showed that typically this equidistributes, by studying what are called stationary measures. And this was a really big breakthrough, because it's a rigidity statement not involving unipotence. That actually played a role in this proof. They couldn't use it directly in the main part: the 200-page paper consists of 100-plus pages reducing the problem to a situation where they can apply a more complicated version of the Benoist–Quint argument. Just to give you a flavor: what you end up studying is invariant measures. You don't consider sets, you consider measures — measures are nicer than sets. This is called measure rigidity, and it's extremely abstract. Nowhere will you find a picture of any polygons. All they use about moduli space is some results of Forni, very similar to what we just did, about growth rates as you move along Teichmüller geodesics. So in particular, we still don't know what the orbit closures are. They're manifolds, they're linear manifolds, great — but how many are there? We don't really have a good understanding of that yet. So I have just enough time to give you the easiest example.
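The Benoist–Quint setup is easy to simulate. A sketch under stated assumptions: the two generating matrices below are my own choice (the lecture just says "pick your two favorite matrices, not powers of each other"), and the loose equidistribution check at the end is only a crude empirical illustration, not the theorem:

```python
# Random walk on SL(2, Z) acting on the torus R^2 / Z^2:
# toss a fair coin, apply A or B, reduce mod 1, repeat.
import random

A = ((1, 1), (0, 1))   # a shear; together A and B generate SL(2, Z)
B = ((1, 0), (1, 1))   # its transpose -- not a power of A
random.seed(0)

def act(m, p):
    """Apply an integer matrix to a point of the torus, reducing mod 1."""
    x, y = p
    return ((m[0][0] * x + m[0][1] * y) % 1.0,
            (m[1][0] * x + m[1][1] * y) % 1.0)

# track the orbit of one irrational starting point under coin tosses
p = (2 ** 0.5 - 1, 3 ** 0.5 - 1)
orbit = [p]
for _ in range(10000):
    p = act(A if random.random() < 0.5 else B, p)
    orbit.append(p)

# crude equidistribution check: roughly a quarter of the orbit should
# land in the quarter-square [0, 1/2) x [0, 1/2)
frac = sum(1 for (x, y) in orbit if x < 0.5 and y < 0.5) / len(orbit)
assert 0.05 < frac < 0.45  # loose band; Benoist-Quint predicts ~ 0.25
```

For a rational starting point the orbit would be finite instead, which is part of what makes the dichotomy in their theorem interesting.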
So the easiest example: maybe I'll take this square-tiled surface, which is a 4-to-1 cover of a torus, and I'll act on it by GL(2,R). Maybe I'll shear it a little, and then I'll get just a sheared copy. This no longer covers the same torus, but it covers a different torus — the sheared version. I can act by the same matrix on everything, on both the thing being covered and the thing covering it. And so the orbit consists of 4-to-1 covers of tori. You can use that to see that the orbit is closed, because the space of 4-to-1 covers of a torus branched over one point is closed. So this is a closed orbit, and in particular it's an orbit closure. So supposedly it's defined by linear equations, and at least in this example I should describe them. First of all, label the edges v1, v2, v3, v4, v5. These are local coordinates — period coordinates given by these edges; any other edge is a linear combination of them. So what are the equations? Well, this edge and this edge map down to two edges of the torus which are the same, so if this is going to be a torus cover, I'd better have v1 = v5. And similarly, I'd better have v2 = v3 = v4. So this is the orbit: it's defined by these equations. You can see that if you wiggle this picture, keeping those equations, you're still a cover of a torus. So next time I'll talk about some more examples and also how we study these things. Next time will be a little more like a survey of what we know now — I'll be doing fewer proofs, but I want to give you a flavor of what's going on, of what the cutting edge is. Any questions? Mike? Maybe we can talk about it later. Any other questions? No, that's the input. And the output is ergodicity, which says that averages of a test function — of some observable — over orbits should just be the space average of that function. And the Hopf argument says, well, the averages should be the same along stable manifolds.
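As an aside on the square-tiled example: the key point — that the locus {v1 = v5, v2 = v3 = v4} is GL(2,R)-invariant, because the group acts by the same 2x2 matrix on every edge vector at once — can be sanity-checked in a few lines. The particular edge vectors below are made up for illustration:

```python
# The GL(2, R) action preserves real-linear equations among the edge
# vectors v_i in R^2, since every v_i is hit by the same 2x2 matrix.
import numpy as np

shear = np.array([[1.0, 0.5], [0.0, 1.0]])  # an element of GL(2, R)

v1 = np.array([0.3, 1.0])   # made-up edge vectors satisfying the
v2 = np.array([1.0, 0.0])   # torus-cover equations below
edges = [v1, v2, v2, v2, v1]          # v1 = v5 and v2 = v3 = v4
sheared = [shear @ v for v in edges]  # act on every edge at once

# the sheared surface still satisfies the same linear equations,
# so it is still a 4-to-1 torus cover, staying inside the orbit closure
assert np.allclose(sheared[0], sheared[4])
assert np.allclose(sheared[1], sheared[2])
assert np.allclose(sheared[2], sheared[3])
```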
Points on the same stable manifold sort of get sucked in to the same place, because their orbits behave very similarly. And a reversed argument — considering negative time — shows the averages should also be constant along the unstable manifolds. And then it's like saying: I have a function on R^2, it's constant along horizontal lines and constant along vertical lines, so it must be constant. Probably the best place to learn about this is to Google for a proof that the geodesic flow on a hyperbolic surface is ergodic — that's the situation in which most people, or at least in which I, learned it. You could also try looking up toral endomorphisms. Yes — I meant constant negative curvature, like a hyperbolic surface, an H^2/Gamma. Yep? No, the Hodge norm is gone now from the lecture series. That went into understanding something about the dynamics of g_t. It does play a role here — not directly, they don't talk about the Hodge norm a lot, but they use theorems of Forni which are based one hundred percent on the Hodge norm. That's the only input specific to the situation; other than that, it's one hundred percent abstract ergodic theory.
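The "constant along horizontals and verticals" step is worth writing down once. A toy version of this final step of the Hopf argument, in the notation of the R^2 analogy:

```latex
% If f : \mathbb{R}^2 \to \mathbb{R} is constant along horizontal lines
% and constant along vertical lines, then for any two points:
f(x_1, y_1) = f(x_2, y_1) = f(x_2, y_2)
% (first equality: move along the horizontal line y = y_1;
%  second equality: move along the vertical line x = x_2).
% Hence f takes the same value at every pair of points, i.e. f is constant.
```

In the dynamical setting, "horizontal" and "vertical" play the roles of the stable and unstable manifolds, and the subtlety the full argument handles is that the orbit averages are only defined almost everywhere rather than everywhere.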