This is the following statement: suppose SL(2,R) acts on a probability space and the action is ergodic; then the action is actually mixing, which of course implies everything that we spoke about yesterday, though usually things are built the other way around. The proof uses the Mautner phenomenon and some other facts. Let me just recall what mixing means. If you've seen the definition, fine; if you haven't, this is the definition: for any two functions f1 and f2 in L^2(X, mu), the inner product <g_n f1, f2> converges to (integral of f1 with respect to mu) times (integral of f2 with respect to mu), for any sequence g_n which goes to infinity in SL(2,R). SL(2,R) has a topology because it consists of two-by-two matrices, so "g_n goes to infinity" makes sense. And this is the inner product with respect to mu, of course. Statement clear? I'm not going to give the proof, but I just want to mention the steps. One thing, an exercise that you should do, and it was buried in the last part of the second exercise yesterday, is that SL(2,R) can be written as SO(2) times the diagonal group times SO(2); this is the Cartan decomposition. I think in the third set of exercises this was mentioned for GL(n) over Q_p; this is a simpler one. The proof is that you look at g, you look at g g^T; that's a positive matrix, you can diagonalize it, and that's basically what it is. This goes by the name of the Cartan decomposition. Using it, you can write g_n as an element in SO(2), a diagonal element, and another element in SO(2). Compact factors cannot hurt you, so you can reduce to g_n being diagonal and use an idea like the Mautner phenomenon that we spoke about yesterday. But you need to combine it with one other piece of technology, and what you need is Alaoglu's theorem, that is, compactness of the unit ball with respect to the weak-star topology.
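As an aside, the Cartan decomposition g = k1 a k2 described above can be computed numerically: the singular value decomposition g = U diag(s) V^T does exactly this, up to fixing determinant signs so that both orthogonal factors land in SO(2). This is a sketch; the test matrix g is an arbitrary choice of determinant one.

```python
import numpy as np

def cartan_sl2(g):
    """Return (k1, a, k2) with g = k1 @ a @ k2, where k1, k2 are in SO(2)
    and a is diagonal with positive entries (the Cartan/KAK decomposition)."""
    u, s, vt = np.linalg.svd(g)          # g = u @ diag(s) @ vt, u and vt orthogonal
    if np.linalg.det(u) < 0:
        # det(g) = 1 and s > 0 force det(u) = det(vt); flip a matched
        # column/row pair so both determinants become +1, product unchanged.
        u[:, 0] *= -1
        vt[0, :] *= -1
    return u, np.diag(s), vt

g = np.array([[2.0, 1.0], [1.0, 1.0]])   # an element of SL(2, R): det = 1
k1, a, k2 = cartan_sl2(g)
```

Since det(g) = 1 and the singular values are positive, the diagonal part automatically satisfies a11 * a22 = 1, matching the description of A as the diagonal subgroup of SL(2, R).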
And if you put these together... yeah, what? You can tell me. Do the dimensions match? SO(2) is the circle, so dimension one; the diagonal group is dimension one; and SL(2,R) is three dimensional, and one plus one plus one equals three. As I said, the proof is: you look at g, you look at g g^T. This is a positive matrix; you can diagonalize it, and that gives you k1. The square root of the diagonal matrix gives you a, and then you can read off what k2 is. That's the proof. So that's all nice; let's leave it, though. Now I'm going to finally introduce a homogeneous space. We actually did introduce one yesterday, SL(3,R)/SL(3,Z), but we didn't prove that it has finite volume. So I hope to give a proof in dimension two, a proof that you can then generalize to higher dimensions using induction. As always, we start with the general setting, but in a minute G will be SL(2,R). So G is a locally compact second countable group. Examples are R^n, Z^n, all discrete countable groups, or SL(n,R). A subgroup Gamma is called a lattice if, first, Gamma is discrete, and second, G/Gamma has a finite G-invariant measure. First of all, these assumptions tell me that I have a left G-invariant measure, the Haar measure, which is unique up to scalar. A lattice is a discrete subgroup which is spread throughout your group, so that when you take the quotient, the quotient is a finite measure space. It doesn't have to be compact, and in fact the example we are going to work with is not a compact space, but it carries a natural finite measure. So what are the examples? The example is Z^n inside R^n. This is a lattice. Why? Because R^n/Z^n is the n-dimensional torus, which is compact. It's a compact group, and it carries an invariant measure, namely the Lebesgue measure; if you push it forward to T^n, you get a measure.
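The lattice property of Z^n inside R^n can be seen concretely through covolumes: the covolume of a lattice B Z^n (the volume of R^n modulo the lattice) is |det B|, and it does not depend on the chosen basis, since any two bases differ by an integer matrix of determinant plus or minus one. A minimal sketch, with an arbitrarily chosen change of basis:

```python
import numpy as np

B = np.eye(2)                          # the standard basis of the lattice Z^2
gamma = np.array([[2.0, 1.0],
                  [1.0, 1.0]])         # an element of SL(2, Z): det = 1
B2 = B @ gamma                         # another basis of the same lattice Z^2

covol = abs(np.linalg.det(B))          # covolume of Z^2 from the standard basis
covol2 = abs(np.linalg.det(B2))        # same lattice, different basis, same covolume
```

This is the normalization used below: a lattice is "unimodular" when this determinant equals 1.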
That's an easy example. Still easy, but maybe less immediate: take the group of unipotent upper triangular matrices, with 1s on the diagonal; this is not an abelian group. If you choose the entries from Z inside the group with entries from R, the subgroup is discrete, and the quotient is again a compact manifold, a nilmanifold. This is an important group, and the quotient is a nilmanifold. And the example that we are going to be mainly concerned with is discrete subgroups of SL(2,R): Gamma = SL(2,Z), or Gamma = the fundamental group of a surface of genus g, with g at least 2. These are discrete, and I need to say where they sit to say that: they are subgroups of SL(2,R). They are discrete, and they are lattices. There's a geometric way of seeing this, and I'm going to draw the picture here, but I'm going to give a proof for SL(2,Z) that generalizes to higher dimensions; this proof using the action on the upper half plane doesn't exactly generalize. So remember from your complex analysis course that SL(2,R) acts by Möbius transformations on H^2, where H^2 is the set of pairs (x, y) with y > 0, and it preserves a metric. The metric, the inner product I need to give you at each point, is (dx^2 + dy^2)/y^2. This is the metric of constant negative curvature, and any surface of genus at least 2 has universal cover the unit disc with a corresponding metric, or H^2 with this metric. So the fundamental group acts on H^2; it's the group of deck transformations. It is discrete, and the surface being compact tells me exactly that I have a compact set that I can translate around by elements of Gamma and cover the whole space. That means H^2 modulo Gamma, if Gamma is pi_1 of S_g, is compact; it's your surface. Now what does H^2 have to do with my SL(2,R)?
If you look at this action and let z = i, then you see that the stabilizer of i is precisely the group SO(2), a compact group. So H^2 is nothing but SL(2,R) mod SO(2), and the surface is Gamma \ SL(2,R) / SO(2). The surface was compact, and Gamma \ SL(2,R) is just a compact bundle over a compact space, so it had better be compact. So this quotient is going to be compact, and that shows Gamma is a lattice. Okay, this was fast, but it's okay; we are not going to use it, you just take it as a black box: that is a lattice. Maybe I should draw the fundamental domain for the action of SL(2,Z) on H^2, and then we will prove differently that it is a lattice. This one you have also seen in any sort of complex analysis course that you take: the action of SL(2,Z) takes this domain and tiles the upper half plane. So the region with |Re z| at most 1/2 and |z| at least 1 is a fundamental domain for the action of SL(2,Z), and probably you have checked it using the following facts: the matrix (1 1; 0 1) acting on z sends z to z + 1, so I can translate the domain right and left, and the element (0 -1; 1 0) of SL(2,Z) acting on z is an inversion, z goes to -1/z, so I can bring things downwards, and you can start tiling. That's how one can show this is a fundamental domain, and hence that SL(2,Z) is a lattice. Well, why is it a lattice? Over there, even though I didn't do things slowly and precisely, at least I could hand-wave and say the quotient is compact, so we are happy. This domain is not compact; it's unbounded, it goes up. Why does it have finite measure? Because the measure that SL(2,R) leaves invariant on H^2 is dx dy / y^2, and now you start integrating: the integral is over x between minus one half and one half, and for every x I need to integrate y from sqrt(1 - x^2) up to infinity, dx dy over y^2, and hopefully everybody can show that this is a finite area; in fact it equals pi over 3. So that's another way of seeing that SL(2,Z) is a lattice. But as I said, forget about all of that; here's a proof that SL(2,Z) is a lattice in SL(2,R). I will give another one.
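As a quick aside, the pi/3 area computation above can be checked numerically. The inner integral over y is done in closed form (the integral of dy/y^2 from sqrt(1 - x^2) to infinity is 1/sqrt(1 - x^2)), and the remaining x-integral is approximated by a midpoint rule; a sketch using only the standard library:

```python
import math

def fundamental_domain_area(n=100_000):
    """Hyperbolic area of {|Re z| <= 1/2, |z| >= 1} for dx dy / y^2.
    Inner y-integral is closed form, leaving the integral of
    1/sqrt(1 - x^2) over x in [-1/2, 1/2], done by the midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        x = -0.5 + (k + 0.5) * h
        total += h / math.sqrt(1.0 - x * x)
    return total

A = fundamental_domain_area()   # should be close to pi / 3
```

The exact value is 2 arcsin(1/2) = pi/3, agreeing with the lecture.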
I'll give another one that does not really care much that the dimension was two. And, well, the reason I do it is that I want to identify this space of lattices; it will become clear why I'm spending mine and your time doing it. So yesterday I said that SL(n,R) modulo SL(n,Z) can be naturally identified with the space of certain discrete subgroups of R^n; now we have a name for them, lattices Lambda in R^n, with vol(R^n / Lambda) = 1. Now, in order to show this thing has finite volume, I need to be able to find a good basis for Lambda. You give me a lattice in R^n and I want to find some sort of reduced basis. Of course, even for Z^2, if you ask a sane person, they would choose e1 and e2 as a basis, but if you ask a crazy person, they might choose a different basis, because there are many pairs of vectors that generate Z^2 as a lattice. What I want to do is find, for Z^2 or for any Lambda that you give me, somehow a reduced basis. And I'm going to do this in group coordinates, because then I can give you the formula for the Haar measure on SL(n,R) and tell you to integrate just as I integrated over there. So I'm going to do it in group coordinates. So what are the steps? Step one: you give me a lattice. This is a discrete subgroup; I can look at all its vectors and find a shortest vector. I can always do that because it's a discrete set. So, given a lattice, and now I'll work with n = 2, and of course you can guess what the exercise is: take the coset g SL(2,Z); then g Z^2 is the lattice. So the lattice equals g Z^2, but I don't really want to look at all of that; I want to look at g Gamma e1, with Gamma = SL(2,Z). These are not all the vectors, but they are the primitive vectors. And if I'm looking for a shortest vector, I'd better look among the primitive vectors, because if a vector is twice another one, the one it is twice of is smaller. So I look at g Gamma e1.
This is g applied to the set of primitive vectors in Z^2. There exists some gamma such that, replacing g by g gamma, the vector g gamma e1 satisfies ||g gamma e1|| <= ||g v|| for all nonzero v in Z^2. Is it clear what I'm even trying to do? I am trying to find some little gamma in SL(2,Z) such that g gamma has a simple description. Maybe you gave me g, but in this coset space there is no unique choice of g; I can replace g by any g gamma and it's the same point in the space. So maybe the 2-by-2 matrix that represented g was very tilted: some coordinate was huge, some coordinate was small. I'm trying to find some integral matrix so that if I multiply g by it, the result has a very nice form. What kind of form? In a minute. So I'm trying to replace g by g gamma so that it has a very particular form, and that's how this reduced basis is found. Step one is that I can replace g by g gamma and assume that this holds; call it (*). This already puts a huge restriction on g. Why? Because of, first, an exercise, usually referred to as the Iwasawa decomposition: SL(2,R) can be written as SO(2), times A, the diagonal group, times the unipotent group U that we spoke about yesterday. Okay? I gave it a nice name, and of course it goes by this name, but there's a different name for the underlying process that we all know: the Gram-Schmidt process. You give me a basis, and if you Gram-Schmidt it, this is exactly what you get. So the decomposition is what the Gram-Schmidt process gives you: acting by elementary matrices you can make the basis orthogonal, then you divide by the norms and you get an orthonormal basis. So I have this decomposition. Now the claim is that if (*) holds, then when I write g = k a u, the a is very specific: under (*), if we write g = k a u with a = diag(e^t, e^{-t}), then e^{2t} <= 2/sqrt(3). Here also it is less than or equal; it may be that a few vectors have the same norm. How do I get this?
I know that some vector has minimal norm. This vector had better be primitive, because I can always divide and make it primitive, and the primitive one has smaller norm. The primitive vectors are the orbit of e1 under SL(2,Z), so I replace g by the corresponding element and just assume the shortest vector was g e1 to begin with. Does that make sense? The most important fact that was used is that the lattice is a discrete set, so the norm attains its minimum, and then I just move this minimal vector to the e1 position by moving my g. Now I stop here and say: if I show this, we are done. Why are we done? Let me first say why, and then maybe I'll actually leave the claim to you as an exercise to prove. We are done because u is harmless. u looks like (1 x; 0 1). I can replace it by a translate by an integer: I can multiply by (1 n; 0 1) with n in Z, which I am free to do since that is an element of SL(2,Z), and this is a matrix multiplication even I can do; choosing n, I can guarantee that |x| <= 1/2. There exists some n in Z such that that happens. So what did I show under (*)? We showed there exists some gamma in SL(2,Z), perhaps different from the earlier gamma, such that g gamma can be written as something in SO(2), times a diagonal element with e^{2t} <= 2/sqrt(3) (this particular number is not very important; any number would do, but this is the optimal one), times (1 x; 0 1) with |x| <= 1/2. I'm going to write the first factor as K. The K part is compact, and the x-direction is a compact direction too, but the t-direction is not: the set goes off to infinity there, so it is not compact. This looks much like the fundamental domain picture, where one direction went up; the K was not there because I modded it out, but it is not exactly the same. So I found a representative of every point in this set. This set has a name.
It is the Siegel set with parameters 2/sqrt(3) and 1/2, and the claim is that it has finite volume. Of course the 2/sqrt(3) comes from here and the 1/2 comes from there. So it is all the orthogonal matrices, times the diagonal matrices with this restriction, times the unipotent matrices with |x| <= 1/2. I claimed that SL(2,Z) is a lattice, so I had better be able to show this set has finite volume. This is again another exercise. Recall that the Haar measure of SL(2,R) can be written in these coordinates as follows: there is d theta for the rotation group, then dt for the diagonal group, and ds for the unipotent group (I have used the letter t too many times, so let me use s), except there is a weight, and the weight is e^{2t}. So the Haar measure is e^{2t} d theta dt ds, where I decompose g as (cos theta, sin theta; -sin theta, cos theta) times diag(e^t, e^{-t}) times (1, s; 0, 1). Now a calculus exercise is that this region has finite volume with respect to that measure: the volume of the Siegel set with parameters 2/sqrt(3) and 1/2 is finite, and this shows that SL(2,Z) is a lattice. Another exercise, which is not a calculus exercise, is to prove that SL(n,Z) is a lattice in SL(n,R). The idea is very similar; you need to do induction, and it's a good exercise. Any questions? Yes, there is one unbounded direction, and it's exactly integrability: if you integrate with respect to that weight, it becomes the integral from minus infinity up to some cutoff of e^{2t} dt, an integrable function. Okay, so why did I do this? It's a good question: I want to now state Mahler's compactness criterion. When do I stop? Let's say 20; so I have 20 minutes. Okay, so what is Mahler's compactness criterion? Take SL(2,R) modulo SL(2,Z), which I'm now going to call X_2; in general X_n is the space of unimodular lattices in R^n, meaning that the volume of the quotient equals 1. For any epsilon I'm going to define X_n(epsilon), or X_2(epsilon), to be the set of lattices
Lambda in R^n with covolume 1 such that the infimum of ||v|| over nonzero v in Lambda (we just said this is actually a minimum, so maybe I'll write minimum) is at least epsilon. So it's the set of lattices that don't have short vectors. A corollary of what we just did is that X_2(epsilon) is compact for all positive epsilon. And of course these sets exhaust the space: the space can be written as the union of them, but each one is a compact set. Why? Let's go back here. What does the condition, every nonzero vector has norm at least epsilon, do? It puts a lower bound on t here, to go with the upper bound we had, so it just cuts the Siegel set down to a compact set. Now why do I care? This is a non-compact space, so when you have a flow on it, it's not even clear that the flow is not divergent, that it doesn't go out to infinity. Of course we have Poincaré recurrence, which tells you that almost every point of a flow is going to come back, but for a given particular point this is not at all clear. The action of the group U is a very particular action, and for it you have non-divergence. Before going there, let me give an example where non-divergence fails. Consider T from X_2 to X_2, where T is multiplication by the matrix diag(2, 1/2). I have the space, any subgroup acts from the left, so I can let this one act. Poincaré recurrence tells me that almost every point will come back close to itself infinitely often. But it doesn't tell me what happens if I give you a particular point. So I'm going to give you a particular point, namely the standard lattice x = Z^2. And T^n(x) is divergent: it leaves every compact set. Why is that, and what does it mean to leave every compact set? Let me show you that I can produce lattices with short vectors in this orbit: for every epsilon positive there exists some n_0 such that T^n(x) is not in X_2(epsilon) for all n bigger than n_0. That's what it means to leave every compact set, because of Mahler's compactness criterion.
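The divergence in this example can be seen numerically: applying T = diag(2, 1/2) repeatedly to Z^2 contracts e2 by 2^{-n}, so the shortest vector of T^n Z^2 tends to 0 and, by Mahler's criterion, the orbit leaves every X_2(epsilon). A sketch; the brute-force search over small integer combinations is enough for these diagonal lattices:

```python
import itertools
import math

def shortest_vector_norm(B, rng=5):
    """Min Euclidean norm over nonzero integer combinations p*b1 + q*b2
    with |p|, |q| <= rng, for a 2x2 basis matrix B (columns = basis)."""
    best = float("inf")
    for p, q in itertools.product(range(-rng, rng + 1), repeat=2):
        if (p, q) == (0, 0):
            continue
        v = (B[0][0] * p + B[0][1] * q, B[1][0] * p + B[1][1] * q)
        best = min(best, math.hypot(*v))
    return best

mins = []
for n in range(6):
    B = [[2.0 ** n, 0.0], [0.0, 2.0 ** (-n)]]   # basis of T^n applied to Z^2
    mins.append(shortest_vector_norm(B))         # equals 2^(-n): the vector T^n e2
```

The list of shortest-vector norms is strictly decreasing, which is exactly the "pinching a curve of the torus" picture.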
This phenomenon has a name. But why is that? See what happens to the vector e2: it is contracted, because I'm multiplying it by an eigenvalue less than 1, so the vector e2 just goes to 0. What I'm doing is taking the standard torus, taking this curve, and pinching it to 0 along the orbit. So the orbit goes to a degenerate torus: it's not recurrent, it leaves every compact set. The action of U is a completely different game. Proposition: for a lattice Lambda in X_2, one of the following holds. Either Lambda has a horizontal vector, or there exists some capital T_0, which depends on Lambda, such that for all T bigger than T_0, if I look at the set of times t in the interval [0, T] such that my unipotent u_t (remember what it was: the matrix (1, t; 0, 1)) applied to Lambda lives in X_2(1/1000), this is a pretty random number, some epsilon, then the measure of this set is bigger than 0.9 times T. So not only do I come back to a fixed compact set, I come back a lot: I spend most of my life in a fixed compact set. This is in stark contrast with the previous statement, where the orbit leaves every compact set. For this flow, either something very algebraic happens, you have a horizontal vector, or you are biased towards a compact set. In dimension 2 this is easy enough that hopefully I will finish the proof today. In higher dimensions this is a very nontrivial result due to Margulis, which was one of the ingredients in his proof of super-rigidity; not even this statement, but just infinitely many returns to a fixed compact set. And this was generalized by Dani, and then by Kleinbock and Margulis, with applications to Diophantine approximation. In dimension 2 this is really an exercise, so we are going to do it. Of course I'm not going to prove exactly that; I'm going to prove something that implies it, and connecting the dots is an exercise for you. So the following is the proposition I'm going to prove.
For every interval I, a subset of R, the following holds. Let Lambda be in X_2, let rho be a number that is positive but less than one half, and assume that the supremum over t in I of ||u_t v|| is at least rho for every nonzero vector v in Lambda. Then for all epsilon less than rho/10, the measure of the set of times t in I with u_t Lambda not in X_2(epsilon) is at most (3 epsilon / rho) times |I|. Let me write it and then I'll explain what this means; it's a mouthful. What is it saying? I have my lattice Lambda sitting in R^2, it's got a bunch of points, and I'm looking at my flow u_t and want to see what happens to the lattice. I am looking for a statement like the proposition above, that the lattice comes back to a compact set, but the lattice is not one point, it's infinitely many points sitting everywhere. What this proposition is telling you is: look at the individual points, and of course each individual point thinks of itself as the center of the universe, so pretend that's the case and just follow what happens to it along the orbit, forgetting about the rest of the lattice. I put the condition that the orbits of the individual points grow: if I look at a point and hit it with my unipotent, which by the way moves it along a straight line, because if the point was (x, y), then u_t sends it to (x + ty, y), then I assume the point attains a certain size, its supremum over the interval becomes at least rho. Then I can find times, and lots and lots of them, such that if I look at all the points moving together, none of them is in the small box; that's the statement down here: no point from this infinite collection comes into the epsilon-box. Okay, I just corrected a typo there: the measure of good times should have been bigger than (1 - 3 epsilon / rho) times |I|. Is the statement clear? Now, what's the proof?
The proof really relies on dimension 2, and if you want to generalize it to higher dimensions you need to be more careful. So first of all, I'm going to make my life easy and work with the max norm; the statement, the result, all of it is insensitive to the norm, because all norms are equivalent to each other. And I am going to recall, or point out, the following special property of dimension 2: if Lambda in R^2 is a unimodular lattice, a lattice with covolume 1, then you cannot have two linearly independent short vectors, period. That is, you cannot find v1 and v2 linearly independent, both in the lattice, with norms less than one half. You just can't, because if you did, they would span a fundamental domain, and the fundamental domain would have volume less than 1. And this is very crucial, because I am going to look at the behavior of all points, and this gives me an umbrella: if one vector is shorter than one half, no other independent vector is shorter than one half. The constant one half itself is not important, but I'm going to work with one half anyway. So: I have the interval I. I'm going to cover it with subintervals and use the property above, call it (**), to say that this is a disjoint covering, and then inside these intervals I am going to control when vectors are short. That's the strategy. For every v in Lambda define I_v(rho), we said the constant is not important, but I need a notation, to be the set of times t in my interval such that the vector u_t v has norm less than rho. Let me draw this picture here: I'm working with the max norm, so this is the rho-box, and I'm looking at all the times where my vector, along the orbit, comes into the rho-box. Now (**) implies that I_{v1}(rho) does not intersect I_{v2}(rho) for linearly independent v1 and v2, since rho < 1/2. So the collection of the I_v's gives me a disjoint covering of a subset of I. In particular, what does this imply? This implies that the measure of
the union satisfies: the sum over v of |I_v(rho)| equals the measure of the union of the I_v(rho), and this is at most |I|. That's what disjointness gives: I can take the sum and the measure does not go up. Now, inside each of these intervals, if I have short vectors, what does it mean to be in I_v(rho)? It means some vector became short. Inside each of these intervals I'm going to detect the bad times. What are the bad times? The vector being less than rho does not hurt me; it hurts me when it's less than epsilon, and I'm going to control those times, which sit inside I_v(rho) because epsilon < rho. So within each interval where the orbit of a vector entered the umbrella, I'm going to look at whether or not it hits the dangerous box, and I would like to show that the measure of the bad part is less than the measure of the whole interval, with actually a definite factor. Then I'm done, because each of these intervals has measure controlled as above, and the bad times are contained in the union of the I_v(epsilon)'s. That's what I need to show, so let me write a claim. Oops, this was the whole proof, sorry. So what is the claim? If I_v(epsilon), sorry, is not empty, suppose for some v there is a time where the vector becomes shorter than epsilon, then |I_v(epsilon)| <= (2 epsilon / (rho - epsilon)) times |I_v(rho)|. I can't leave it like this, I need to finish, so, because there was a complaint, this is an exercise; this is where you use the growth condition. But let me sketch the proof: if you're in the epsilon-box, the speed is given by y; just look at what it means for a vector (x, y) to move as (x + ty, y). And because you need to get out of the epsilon-box and reach the boundary at rho, covering that distance, at least one of the end pieces of time needed for getting out to rho will be contained in I. If it's not clear, ask me later. Okay, now let's finish the
proof. The set of times t in I with u_t Lambda not in X_2(epsilon) is contained in, actually equals, the union of the I_v(epsilon), by definition. Now the measure of this union is at most the sum of the measures of the I_v(epsilon), which by the claim is at most (3 epsilon / rho) times the sum of the |I_v(rho)| (using epsilon <= rho/10, we have 2 epsilon / (rho - epsilon) <= 3 epsilon / rho), and this is at most (3 epsilon / rho) times |I| by what I wrote over here. So if you prove the claim, you're done. And the claim rests on a fact about linear functions, and this is actually quite important: if linear functions are small for a while, it takes them quite some time to grow. So I'll finish with this exercise, which is a generalization of that fact about linear functions. Suppose F is a polynomial of degree d, or whatever. Then the measure of the set of times t in I such that |F(t)| < epsilon times the supremum of |F| over I is at most C epsilon^alpha times |I|; there exist C and alpha depending only on the degree such that this happens. This is a generalization: in here we were really just dealing with linear functions, but that's okay. Sorry.
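The exercise above is the "(C, alpha)-good" property of polynomials, and it can be explored numerically: for a polynomial F of degree d on an interval I, the set where |F| < epsilon times sup |F| should have measure at most C epsilon^{1/d} |I|. A sketch; the polynomial, the interval, and the constant C = 4 below are illustrative choices, not the optimal ones, and alpha = 1/d is the standard exponent.

```python
def sublevel_fraction(F, a, b, eps, n=20_000):
    """Fraction of a uniform grid on [a, b] where |F| < eps * sup |F|,
    a proxy for the normalized measure of the sublevel set."""
    vals = [abs(F(a + (b - a) * k / n)) for k in range(n + 1)]
    cut = eps * max(vals)
    return sum(v < cut for v in vals) / (n + 1)

F = lambda t: t * (t - 0.3) * (t - 0.7)   # a degree-3 polynomial, roots in [0, 1]
d, C = 3, 4.0                             # illustrative constant for the bound

checks = [sublevel_fraction(F, 0.0, 1.0, eps) <= C * eps ** (1.0 / d)
          for eps in (0.1, 0.01, 0.001)]
```

Near a simple root the sublevel set shrinks linearly in epsilon, but near a root of multiplicity d it only shrinks like epsilon^{1/d}, which is why the exponent alpha must depend on the degree.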