Okay, so today I have a somewhat ambitious plan. Let's see how far we can get. Especially given Simeon's course next week, I feel it's good to talk about Margulis's thesis idea, this idea of thickening, but we have some unfinished business, so these are the two things I want to do today. Let's recall what the theorem was. This is a theorem of Ratner, and the proof we are giving here is a somewhat simplified version due to Manfred Einsiedler, which he wrote after Ratner's work. The theorem was: any SL2(R)-invariant ergodic probability measure on SL2(C) mod Gamma, where Gamma will be assumed to be a lattice, is one of the following. (If you assume the measure is a probability measure, you don't need Gamma to be a lattice; if you assume Gamma is a lattice, you don't need to assume the measure is a probability measure; but I'm putting both assumptions in.) Either the measure is H-invariant and supported on a closed H-orbit, where H is our copy of SL2(R), or it is the Haar measure. We set up some notation yesterday that I won't recall, and there were two lemmas, one of which I proved and the other I didn't; because I want to get to the proof of the theorem, I'll just draw a picture of its proof rather than give it completely. Lemma one: there exists epsilon positive such that as soon as a set Omega has almost full measure, one of the following holds in the space, let me call it X: either one H-orbit has full measure, or there exist a sequence x_n in Omega and elements g_n = exp(r_n), where the r_n, let me remind you, live in i times sl2(R), are non-zero and go to zero, such that g_n x_n is in Omega. As was remarked, it is somehow more natural to say that there exist points in the support of the measure for which this happens; I'm claiming a little more because I need a little more. There are points in the space for which the ergodic theorem takes forever to hold, and I want to throw those points out.
So what the lemma is saying is not just that there are such points in the support; it's a souped-up version: as long as your set Omega is close enough to full measure, you can actually find these points inside Omega. That's the lemma. We did not prove it, so let me draw a picture of the proof. Take a point p in the support and draw a small box around it. Remember our coordinate system: I have the H-direction here, this is H_delta, the exponential of the ball of radius delta in the Lie algebra of H, and this is the transversal r-direction, the exponential of the ball of radius delta in i times sl2(R). So I have a product structure for a neighborhood, with coordinates in the H-direction and the r-direction. Choosing delta small enough, I can assume this actually embeds in G mod Gamma: because Gamma is discrete, I have a neighborhood of the identity, so around p I have the same sort of structure that I had in G. Now, because p is a point of the support of the measure, this box has positive measure. I take the measure, restrict it to the box, normalize it so that it's a probability measure, and call it nu. So I have a probability measure on this box and I have two directions. The H-directions I understand: my measure is invariant under H, which means moving in those directions costs nothing, so in the H-directions what I see is the Haar measure of H. I can therefore disintegrate nu using Fubini: nu is the integral, over the transversal ball R_delta, of the leaf measures lambda_r, each lambda_r being Haar measure on the H-leaf through r, integrated against some unknown measure sigma on the transversal. This sigma is basically the projection of nu to the transversal direction. So in the leaf direction you have measures you understand, and you integrate them with respect to sigma.
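The Fubini step that comes next, namely that almost-full measure of Omega forces most leaves to be almost full of Omega, is just Markov's inequality applied fiberwise. Here is a toy numerical sketch on a discretized box; the grid size, the random choice of Omega, and the 0.9 threshold are my own illustration, not the lecture's constants:

```python
import numpy as np

# Toy check of the Fubini/Markov step: if a set Omega in a product space has
# measure >= 1 - eps, then the set of transversal points r whose leaf meets
# Omega in at least 90% of leaf measure has sigma-measure >= 1 - eps/0.1
# (Markov's inequality applied to r -> leaf-measure of the complement).
rng = np.random.default_rng(0)

# Model the box as a 1000 x 1000 grid: rows = transversal direction r,
# columns = leaf (H) direction.  Build a random Omega of measure ~ 1 - eps.
eps = 0.01
omega = rng.random((1000, 1000)) > eps    # indicator of Omega

leaf_fraction = omega.mean(axis=1)        # leaf-measure of Omega on each leaf
good = (leaf_fraction >= 0.9).mean()      # sigma-measure of the "good" set R'

assert good >= 1 - eps / 0.1              # the Markov bound holds
```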
Now, the set Omega sits somewhere in this box. I'm going to look at these leaves and see how they intersect Omega. So I define the set R'_delta of transversal points r such that lambda_r of H_delta exp(r) intersected with Omega, which means this leaf intersected with Omega, has most of the lambda_r-measure. Because Omega has almost full measure, again applying Fubini, if epsilon is small enough I can guarantee that sigma(R'_delta) is bigger than 0.8. Sigma is a probability measure, so I have a set that's almost all of the transversal; if Omega were actually the full space, almost every point would satisfy something like this. And now you just choose a density point for R'_delta. A density point means that every small neighborhood of it has positive sigma-measure, and that has one of two consequences. Either there is exactly one leaf with positive measure, in which case conclusion one holds: this is a piece of a leaf, you can move it with H, and that H-orbit has positive measure; by ergodicity it then has full measure. Or, if not, there are leaves of R'_delta accumulating at the density point, and because the H-action respects this product structure, I can find a good point on one leaf and a good point on a nearby leaf, with the transversal displacement given by exp(r) for some small non-zero r. That's the picture: choose a density point for R'_delta, and that's the proof. Then there was lemma two, which said that if you have some unexpected displacement, then you are very happy. Let me write it as a lemma. Lemma two: there exists a full measure set X' in X such that if x is in X' and gx is in X' for some g in the centralizer of U, then mu is g-invariant. That was the second lemma.
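The property of a density point that the proof actually uses is only this: every small neighborhood of it meets the set in positive proportion. A toy check with my own example set, standing in for sigma and R'_delta:

```python
import numpy as np

# Toy density-point check: around a point p in the interior of a
# positive-measure set E, every small neighborhood meets E in positive
# (here full) proportion, which is the property extracted from a
# Lebesgue density point in the proof.
E = lambda x: (0.2 <= x) & (x <= 0.6)     # indicator of E = [0.2, 0.6]
p = 0.4                                   # a density point of E

for delta in (0.1, 0.01, 0.001):
    xs = np.linspace(p - delta, p + delta, 10_001)
    frac = E(xs).mean()                   # proportion of the neighborhood in E
    assert frac == 1.0                    # positive measure in every ball
```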
Now here are some exercises. We want to apply the ergodic theorem for the action of U, U being the unipotent direction in SL2(R). When we start writing the proof I want to note that mu is U-invariant, which it certainly is, and that by the Mautner phenomenon mu is U-ergodic. We are going to use that. But these exercises have nothing to do with the theorem we are trying to prove; they are exercises in measure theory. So, one: assume mu is a U-ergodic measure on X. For any interval I = [a, b], there exists a full measure set X' such that if I look at n times I, meaning [na, nb], and average, that is, take 1/(nb - na) times the integral from na to nb of f(u_t x) dt, this converges to the integral of f d mu for all x in X'. Call this (*). So suppose you have an ergodic measure on a space; Birkhoff tells you that you can average from 0 to T, so from 0 to na and from 0 to nb, and basically what I'm saying is: take the average from 0 to nb, take the average from 0 to na, and subtract. I'm assuming a and b are positive; their sign is of course irrelevant, you can take them to be whatever they are. Now, I need Egorov. The convergence above is pointwise: for a given n, define F_n(x) to be this average, and what Birkhoff tells me is that these functions converge pointwise almost surely. Egorov tells me I can make this uniform as soon as I'm willing to give up a small part of the space. Two: let I = [a, b] be an interval. For every epsilon there exists a compact set Omega_epsilon contained in X', of almost full measure, such that (*) is uniform on Omega_epsilon. What does that mean?
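Exercise one can be tried out numerically on a toy uniquely ergodic flow; the irrational straight-line flow on the circle below is my own stand-in for the unipotent flow, not the lecture's space:

```python
import numpy as np

# Toy illustration of exercise one: the flow u_t x = x + t*alpha (mod 1) on
# the circle with alpha irrational is uniquely ergodic for Lebesgue measure,
# so averages of f over the windows [n*a, n*b] converge to the space average.
alpha = np.sqrt(2)                        # irrational rotation speed
f = lambda x: np.sin(2 * np.pi * x) ** 2  # test function, space average 1/2
a, b, x0 = 1.0, 3.0, 0.1

def window_average(n, num=200_000):
    # 1/(nb - na) * integral_{na}^{nb} f(u_t x0) dt, by Riemann sum
    t = np.linspace(n * a, n * b, num)
    return np.mean(f((x0 + t * alpha) % 1.0))

errs = [abs(window_average(n) - 0.5) for n in (10, 100, 1000)]
assert errs[-1] < 1e-2                    # window average -> space average
```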
It means: for every continuous, compactly supported function f on X and every delta, there exists some N, depending on f and delta, such that if n > N, then the average of f over the interval [na, nb] along the orbit, minus the integral of f d mu, is less than delta in absolute value for any x in Omega_epsilon. Call this (***). So I have uniform convergence on a set of almost full measure, and essentially that's the set I'm going to feed into lemma one; not exactly that set, I need a souped-up version yet. For intervals in R you can choose the endpoints to be rational, and every interval can be approximated by such intervals. So the third part: part two worked for one interval, and I claim I can find one set that works for all intervals. The N is going to depend on the interval too, but I don't care. Three: there exists Omega_epsilon as above such that (***) holds for all intervals. This is again because you can choose a countable family of intervals approximating every interval: the symmetric difference of any interval with a member of the countable family can be controlled, and I'm looking at functions that are bounded. So I have a set on which the ergodic theorem holds uniformly, with a free choice of interval; of course, if you choose a very short interval, you need to wait very long before you get the approximation, but that's how it is. Okay, now the proof of the theorem. Let epsilon be small enough that Omega_epsilon, as in part three of the exercise, satisfies lemma one; that is, I choose a set of uniform convergence whose measure is big enough that I can apply lemma one. Of course; why else would I have set the whole thing up this way? So Omega_epsilon satisfies lemma one. If alternative one in lemma one holds, then alternative one in the theorem holds; this is left to you, essentially all of it is contained in what we did.
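The Egorov step, trading a small piece of the space for uniformity, can be seen on the standard textbook example; the sequence x^n below is my own illustration, not tied to the lecture's setting:

```python
import numpy as np

# Toy illustration of Egorov: f_n(x) = x**n converges pointwise to 0 on
# [0, 1) but not uniformly; discarding the small set (1 - eps, 1) makes
# the convergence uniform on what remains.
eps = 0.01
xs = np.linspace(0.0, 1.0 - eps, 10_000)  # retained set, measure 1 - eps

sup_on_retained = [np.max(xs ** n) for n in (10, 100, 1000)]
assert sup_on_retained[-1] < 1e-4         # uniform off a small set

# but near x = 1 there is no uniform bound: this point is still close to 1
assert (1.0 - 1e-5) ** 1000 > 0.5
```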
There is one thing that is not completely trivial, and it's this: okay, you have an orbit whose measure is one, which is exactly what alternative one in the theorem says, but the theorem also demanded the orbit be closed, and lemma one did not say the orbit is closed. You need to show it. But H modulo the stabilizer of the point carries an H-invariant finite measure, which means the stabilizer of the point intersected with H is a lattice in H, and that implies the orbit is closed. It's an exercise, not a completely trivial one. So we assume alternative two in the lemma holds: there exists a sequence x_n in Omega_epsilon, for which I have the ergodic theorem uniformly for all intervals, and a sequence g_n = exp(r_n) going to the identity but never equal to it, such that g_n x_n is in Omega_epsilon, okay? Again, I'm going to separate out a degenerate case, namely that the r_n land in the centralizer of U. Better to take care of easy cases first, right? Those we can do; up to here, everybody knew how to do it, the rest is what Ratner taught us. So, case one: r_n is in the centralizer-of-U direction. What does this mean? Recall, I erased it, that the r_n live in i times sl2(R): trace-zero matrices with real entries, multiplied by i. When you exponentiate something in the centralizer of U, g_n must look like [[1, i t_0], [0, 1]] for some t_0 that is non-zero. But then lemma two tells me: if there exists some n, call it n_0, such that this happens, then mu is g_{n_0}-invariant, and that means mu is invariant under the closed group generated by H and g_{n_0}; and it is an exercise that this is the whole group G.
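The shape of the centralizer element can be checked symbolically; this is a quick verification in standard sl2 coordinates, my own sketch rather than anything from the lecture notes:

```python
import sympy as sp

# Check: inside trace-zero 2x2 matrices, the centralizer of u = [[0,1],[0,0]]
# is exactly the span of u, and exponentiating i*t0*u gives [[1, i*t0],[0,1]].
a, b, c, t0 = sp.symbols('a b c t0')
u = sp.Matrix([[0, 1], [0, 0]])
r = sp.Matrix([[a, b], [c, -a]])          # general trace-zero matrix

comm = u * r - r * u
# the commutator vanishes iff a = c = 0, i.e. r is a multiple of u
assert comm == sp.Matrix([[c, -2 * a], [0, -c]])

N = sp.I * t0 * u
assert N * N == sp.zeros(2, 2)            # nilpotent, so exp(N) = I + N
g = sp.eye(2) + N
assert g == sp.Matrix([[1, sp.I * t0], [0, 1]])
```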
By "the group generated" I mean the closed group generated by these; that this is G is an exercise. There is no closed subgroup of G containing H except H and G itself: H is a maximal closed subgroup. You can multiply matrices and prove it, or you can think about it and realize that the conjugation action of H on i times sl2(R) is an irreducible representation. So as soon as you have one non-zero vector you get the whole of i times sl2(R), and once you have that, you have the whole group. What is this, sorry? X'? Oh, you mean that X'? Omega_epsilon is a subset of X'. I had this X', the full measure set where the ergodic theorem holds, and inside this full measure set I'm choosing better and better sets. The X' that goes here is the set where, say, exercise one holds; I think yesterday we had it as the set where the ergodic theorem holds for all bounded continuous functions or something. Omega_epsilon is a set where the averages along U certainly equidistribute, in fact equidistribute uniformly, and that's what went into the proof. Okay, so in this case alternative two in the theorem is satisfied and again we are happy. So finally we come to the interesting case: we have these transversal returns to the support of the measure, and the transversal displacements are not in the centralizer; they're somehow in general position. What do we do then? So, case two, the really interesting case: g_n is not in the centralizer of U for any n, passing to a subsequence if necessary. So now, oh yes, I said this but never really put it in writing; let me write it here: Mautner implies mu is U-ergodic, and I'm using that here. Now, there was a picture that I drew yesterday, and I was vague about what we are going to do with it. Let me try to make it a little more precise. I have this point x_n, and I have the point g_n x_n.
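The irreducibility behind the maximality claim can also be checked by a small computation; this is my own verification of the standard fact, conjugating one vector by a few elements of H and checking the span:

```python
import sympy as sp

# Check: the conjugation action of SL2(R) on trace-zero matrices is
# irreducible, so the orbit of a single non-zero vector spans the whole
# 3-dimensional space sl2.
v = sp.Matrix([[0, 1], [0, 0]])           # one non-zero vector

# conjugate v by a few elements of H = SL2(R)
gens = [sp.eye(2),
        sp.Matrix([[1, 0], [1, 1]]),
        sp.Matrix([[1, 0], [2, 1]])]
orbit = [g * v * g.inv() for g in gens]

# flatten each 2x2 matrix to a length-4 row and compute the rank of the span
span = sp.Matrix([list(m) for m in orbit])
assert span.rank() == 3                   # spans all of sl2 (dimension 3)
```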
The strategy is to flow these two points with the unipotent and study how they diverge. And they will diverge, because g_n is not in the centralizer: the two orbits won't stay parallel forever, since staying parallel is exactly saying that g_n commutes with the U-action. So I go to a point u_t x_n and I see what its friend u_t g_n x_n is doing. Let me compute: u_t g_n x_n = (u_t g_n u_t^{-1}) u_t x_n. That's the same thing, and it's what lets me see the divergence: if g_n is not in the centralizer, then u_t g_n u_t^{-1} does not equal g_n, which means the displacement is going to change, okay? So let's rewrite u_t g_n u_t^{-1}. My g_n looked like exp(r_n), so u_t g_n u_t^{-1} = u_t exp(r_n) u_t^{-1} = exp(u_t r_n u_t^{-1}); I can take the conjugation inside the exponential. And here comes a two-by-two matrix multiplication that has given at least one person a Fields Medal, and many prizes to many other people. So let's write it down. u_t is [[1, t], [0, 1]], and r_n looks like i times [[a_n, b_n], [c_n, d_n]]; let me take the i out and just not write it. This is something you can compute; I got a PhD just for doing this. The result is u_t [[a_n, b_n], [c_n, d_n]] u_t^{-1} = [[a_n + c_n t, b_n + (d_n - a_n) t - c_n t^2], [c_n, d_n - c_n t]]. I might be off by some minus sign, but that's irrelevant. What is important is staring at these entries. What do we realize? They are all polynomials in t. This is always true whenever you have a finite dimensional representation and a unipotent matrix: when you conjugate, the matrix coefficients are polynomials in t. Here the diagonal entries are linear, the lower-left entry does not change, and the upper-right entry is quadratic.
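The conjugation formula, and the degrees of its entries, can be verified symbolically; a small check (with the i factor pulled out, as in the lecture):

```python
import sympy as sp

# Symbolic check of the divergence computation u_t r u_t^{-1}: the entries
# are polynomials in t of degree 1 (diagonal), 2 (upper right), 0 (lower left).
t, a, b, c, d = sp.symbols('t a b c d')
u_t = sp.Matrix([[1, t], [0, 1]])
r = sp.Matrix([[a, b], [c, d]])

conj = sp.expand(u_t * r * u_t.inv())
expected = sp.Matrix([[a + c * t, b + (d - a) * t - c * t**2],
                      [c,         d - c * t]])
assert sp.simplify(conj - expected) == sp.zeros(2, 2)

# degrees in t, entry by entry (row-major): linear, quadratic, constant, linear
assert [sp.degree(sp.Poly(e, t)) for e in conj] == [1, 2, 0, 1]
```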
The one-two entry is quadratic, okay? Now I'm going to renormalize. The r_n were quite small, converging to zero. So what did I get? This defines a map phi_n-tilde from t in R to the Lie algebra i times sl2(R), where the element I get at time t is i times this matrix; it's a polynomial. Because the two points were close to each other, I need to wait quite some time before this displacement becomes size one: I started with something of size epsilon and I have a polynomial that's changing, so it takes a long time to reach size one. So I'm going to renormalize and think of it as a map from the interval [0, 1]: define phi_n(t) = phi_n-tilde(K_n t), where K_n is the supremum over all K such that phi_n-tilde of [0, K] is inside the ball of radius one. Why do I do that? This is basically bookkeeping: I have polynomials that I must wait a long time to see become non-trivial, so I just rescale time by this K_n and think of them as polynomials on the interval [0, 1]. What do I know about the phi_n? They are polynomials of degree at most two on [0, 1]. What else do I know? I know phi_n of [0, 1] lies inside the ball of radius one, and, because of the normalization, the supremum over [0, 1] of the norm of phi_n equals one; in fact, by continuity, it is attained at the exit time, so the norm of phi_n(1) is one. They are non-trivial polynomials. And this K_n is finite. Why? Because the polynomial is non-constant exactly when g_n is not in the centralizer, and that's my assumption: g_n is not in the centralizer. So it's a polynomial that's going to change, it's going to grow at some point, it's going to leave the ball of radius one, and I catch it right before it goes out.
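The renormalization can be played out numerically; the quadratic below with tiny coefficients is my own stand-in for the small r_n:

```python
import numpy as np

# Numerical sketch of the renormalization: given a quadratic p(t) that starts
# tiny, K is the first time |p| reaches 1; phi(t) = p(K*t) is then a degree-2
# polynomial on [0, 1] with sup-norm exactly 1, attained at t = 1.
eps = 1e-4
p = lambda t: eps * t + eps * t**2        # |p| increasing, p(0) = 0

# find K with |p(K)| = 1 by bisection (valid here since |p| is increasing)
lo, hi = 0.0, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if abs(p(mid)) < 1 else (lo, mid)
K = 0.5 * (lo + hi)

phi = lambda s: p(K * s)                  # renormalized polynomial on [0, 1]
sup = max(abs(phi(s)) for s in np.linspace(0.0, 1.0, 10_001))
assert abs(abs(phi(1.0)) - 1.0) < 1e-8    # norm one at the endpoint
assert abs(sup - 1.0) < 1e-8              # sup-norm one on [0, 1]
```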
So K_n is finite, and K_n goes to infinity as n goes to infinity, because the g_n were going to the identity. Those two things I know. And the normalization makes each phi_n have sup-norm exactly one on the interval. These three facts imply what? They imply that the phi_n, after passing to a subsequence, converge uniformly to a polynomial phi with phi(0) = 0 and, the way I have set it up, the norm of phi(1) equal to one: I have a sequence of polynomials of bounded degree and bounded norm, which is pre-compact, and their norms stay equal to one, so I can pass to a subsequence and get that. Now the big claim is that my measure is invariant under the exponential of this phi. This is what we're going to prove. There is one other fact about the image of phi that can be proven, but I don't exactly need it, so we'll see how we are doing on time and maybe come back to it. So note that phi is a polynomial from [0, 1] to r = i times sl2(R), and phi(1) has norm one; in particular phi is a non-trivial polynomial. The big claim: mu is invariant under exp(phi(s)) for all s. If I prove this I'm done, agreed? The claim implies the theorem. Why? Because phi takes values in r: when I exponentiate, I get elements outside H, and I said that anything outside H, if I add it to H, generates the whole thing, because exp(phi(s)) lies in exp(i sl2(R)), which is not contained in H. So if I take H together with exp(phi(s)), the group generated is G, and we're done. So this is the claim to prove; but why is it true? Let me first draw the picture of the proof and then we write the epsilons and deltas. Look what I did here: I waited until things are basically size one, that is, until the displacement between this orbit and that one is size one. Let me just look at phi(1).
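The compactness step, bounded degree plus bounded sup-norm gives a convergent subsequence, comes down to Bolzano-Weierstrass on the coefficient vectors. A toy sketch with my own example sequence:

```python
import numpy as np

# Sketch of the compactness step: degree <= 2 polynomials with bounded
# sup-norm on [0,1] have bounded coefficients, so some subsequence converges
# coefficient-wise, hence uniformly on [0,1].
xs = np.linspace(0.0, 1.0, 1001)

# a toy sequence phi_n(t) = sin(n)*t**2 + cos(n)*t, all sup-norms <= 2
coeffs = np.array([[np.sin(n), np.cos(n)] for n in range(1, 2001)])

# Bolzano-Weierstrass by hand: indices where the coefficient pair clusters
target = coeffs[0]
idx = [n for n in range(2000) if np.linalg.norm(coeffs[n] - target) < 1e-2]
assert len(idx) > 1                       # the cluster is hit more than once

# along that subsequence the polynomials are uniformly close on [0, 1]
n0, n1 = idx[0], idx[1]
p0 = coeffs[n0, 0] * xs**2 + coeffs[n0, 1] * xs
p1 = coeffs[n1, 0] * xs**2 + coeffs[n1, 1] * xs
assert np.max(np.abs(p0 - p1)) < 2e-2
```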
I want to show the measure is invariant under exp(phi(1)); that would be plenty good. So this is the time I had to wait, K_n times one, and what I see there is phi_n(1), which converges to phi(1). Because these are polynomials, I know I can wait a little longer, up to (1 + eta) K_n, and the displacement essentially does not change: on that whole stretch it is phi_n(1) plus an error depending on eta. So on this piece the value of the polynomial phi_n is nearly constant. What do I know about my point x_n? I know that over this long interval it equidistributes with respect to the measure: the average from here to here is very close to the integral against mu, and the average along the companion orbit is also very close. If you remember the simple argument for a displacement in the centralizer, this is all I needed, plus the fact that the cost of going from this orbit to that orbit was a constant, g_n. Now it's not constant, but it's nearly constant, so the difference goes into the error, and I get more and more invariance, say epsilon-almost-invariance under exp(phi_n(1)); take the limit as n goes to infinity and you see invariance under exp(phi(1)). That's the proof. Now, I can write it out or I can leave it to your imagination. Write it, okay. We'll show mu is invariant under exp(phi(1)), as I said. So give me any function f that is continuous and compactly supported on X; in particular, it's uniformly continuous. The phi_n converge to phi. This means that for any delta there exists some eta positive such that the absolute value of f(exp(phi(1)) z) minus f(exp(phi_n(s)) z), applied at the same point z, is less than delta, for all s in the interval [1 - eta, 1 + eta] and all n large, okay?
Uniform convergence plus uniform continuity of f implies that. Now you start averaging. The interval I = [1, 1 + eta] is the one to which I apply part three of the exercise; I know that if I expand it far enough I have equidistribution over the expanded averages, and because the K_n go to infinity, I can take n large enough for that. By the choice of Omega_epsilon, there exists some N_0 such that for n bigger than N_0 I have the following inequalities. (A1): the average of f(u_t g_n x_n) over the interval K_n I, normalized of course, minus the integral of f d mu, is less than delta; that's uniform convergence at the point g_n x_n. (A2): the same average, now taken at the point x_n, but with f replaced by f composed with q, where q = exp(phi(1)): since x_n is also a point of uniform convergence, with yesterday's notation this average of f composed with q, minus the integral of f composed with q d mu, is less than delta. (A3): the two averages in A1 and A2 are within delta of each other, because of the estimate above; here I use the identity u_t g_n x_n = exp(phi_n-tilde(t)) u_t x_n, which is what we started with: the definition of phi_n-tilde was precisely the conjugation, and phi_n is its rescaled version. You put these together and you're done: the integral of f d mu and the integral of f composed with q d mu are within 3 delta of each other for every delta, which means mu is q-invariant. Two minutes too early. Any questions? I'll stop there then.