So λ_n^j is the Lebesgue measure of the j-th tower base I_n^j at step n, and r_n^j is the height of the j-th tower. And the very important quantity we had is the Rauzy-Veech cocycle: A_n(i, j) is the cardinality of visits of I_n^j to I_0^i under T, up to time r_n^j. So this is the tool we use to impose the Diophantine conditions on interval exchanges. These matrices A_n are products of d-by-d matrices, and you should think of each of the factors as somehow a multidimensional entry of a continued fraction map: they are really a generalization of the entries of the Gauss continued fraction algorithm. And maybe something which I didn't say yesterday: you can think of A_n as A_0 evaluated at R^n(T). So there is a renormalization map R which takes the IET, and the n-th iterate of this renormalization is the n-th induced map, but rescaled. When I induce, I'm looking at smaller and smaller scales; but if I rescale, I can always divide by the length of the interval so that I go back to unit length. This induced and rescaled IET gives me the renormalization operator. So renormalization is some zooming in and rescaling, and you can think of these matrices as a cocycle: it is actually the Rauzy-Veech cocycle, and actually we were doing an acceleration of the Rauzy-Veech cocycle on the space of IETs. So these matrices are somehow produced in a deterministic fashion by the renormalization dynamics, and this is the part that we are skipping in this course. Of course, you can study this renormalization and the properties of this cocycle; this is a kind of dynamics in parameter space, and that's how you prove full measure of the Diophantine conditions: it comes from the study of this renormalization. I'm just using the renormalization as a tool to study Birkhoff sums, not studying the dynamics of the renormalization in itself.
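To make the continued-fraction analogy concrete, here is a minimal numerical sketch, assuming the simplest case d = 2, where the accelerated Rauzy-Veech matrices of a circle rotation reduce to the elementary matrices of the Gauss algorithm; the function names are mine, and the golden rotation is chosen only because all its entries equal 1.

```python
# Minimal sketch (rotation case, d = 2): for a circle rotation the
# accelerated Rauzy-Veech matrices reduce to the elementary matrices of
# the Gauss continued fraction algorithm; their product records visit
# counts, just as the cocycle matrices A_n(i, j) do for a general IET.

def cf_matrices(alpha, steps):
    """Continued fraction entries of alpha, packaged as 2x2 matrices."""
    x = alpha
    mats = []
    for _ in range(steps):
        a = int(1 / x)
        x = 1 / x - a          # Gauss map: x -> 1/x mod 1
        mats.append([[a, 1], [1, 0]])
    return mats

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

alpha = (5 ** 0.5 - 1) / 2     # golden rotation: every entry equals 1
prod = [[1, 0], [0, 1]]
for M in cf_matrices(alpha, 8):
    prod = mat_mul(prod, M)
print(prod)                    # Fibonacci numbers = visit counts
```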
OK, so yesterday we spent the last hour a little bit in a hurry, but I hope I gave you an overview of how you prove mixing estimates and stretching of Birkhoff sums in the asymmetric case. So today, for the first hour, I want to do the other part of the game: I want to study absence of mixing. We already stated the criterion yesterday, so let me recall it. We will put ourselves (I will get there in a second) in the symmetric case, where my roof function has symmetric logarithmic singularities, and we want to prove absence of mixing for typical IETs. The criterion is due to Kochergin and has been used several times after him. It is the following. We had sets E_n and times R_n, where the E_n are partial rigidity sets, which means that their measure is bounded below uniformly in n, and in a suitable sense R_n is a time along which, on this set, the map approaches the identity. This is what we call rigidity: if it happened on the whole space, we would say that the map is rigid. Actually, for almost every IET you also have full rigidity, but we don't want to use it; we just need partial rigidity, a positive proportion of the space on which the iterate of your map looks like the identity. Partial rigidity, plus the following bound: along these rigidity times, you want to have no stretch. So there is a uniform constant M such that for every x and y in E_n, when I look at the corresponding rigidity time, the Birkhoff sums don't vary too much: |S_{R_n} f(x) − S_{R_n} f(y)| ≤ M. And this implies no mixing. So I want to verify this criterion, and I need two things: I need to produce partial rigidity sets, and I need to prove bounds on Birkhoff sums, no-stretching bounds. Maybe I should write it as no shearing: no shearing along these partial rigidity sets, so that the orbits of x and y don't drift apart, and I don't see the growing discrepancy which gives me shear. OK, so the first part: I want to build the partial rigidity sets and show you where they lie. I'm presenting it in my own words, but it's really Katok's argument: Katok didn't use Rauzy-Veech induction, but essentially this is his argument, and I'm just going to present it with the towers of Rauzy-Veech induction. It's in his paper from the 80s where he proves that no IET is mixing. OK, so let me show you what the sets are in the towers; now we'll see whether we understand our towers picture well, because this is where the tower dynamics is useful. So we have these towers. First of all, for every n I pick the largest tower: pick j such that the area r_n^j λ_n^j is at least 1/d. There are d towers, and one of them is the largest. If the step is balanced, you could take any of them and all the areas would be bounded below; but I'm telling you Katok's argument, and I want to show you that I'm not using anything special here: any induction step would work. OK, say this one is my largest tower; this is my j. Now I induce on the base of this tower: I take this small interval and look at the Poincaré first return map to it. We said yesterday that any induced map of an IET is an IET of at most d + 2 intervals. So the induced IET again has at most d + 2 intervals, and therefore there exists a continuity interval of the induced map, let me call it J, whose length is at least λ_n^j/(d + 2): I divide the base into at most d + 2 continuity intervals and let J be the largest (sorry, I was using the notation λ for the length). So I induce, I pick the largest of these intervals, say this one, and I build the tower over it, here. This tower, I claim, will be my partial rigidity set. And let R_n be the first return time of J to I_n^j: I have an induced map, so this interval comes with its return time to the base. Now I set E_n to be the tower over J, that is, the union of T^i(J) for i from 0 to r_n^j − 1, where r_n^j is the height of the tower. This is the blue set. And I claim that (E_n, R_n) are partial rigidity pairs. OK, so the area: I chose it to be big. What is the Lebesgue measure of E_n? It's r_n^j times the Lebesgue measure of J (sorry, I sometimes switch between notations, sometimes Leb, sometimes absolute value for the Lebesgue measure), and this is at least (1/(d + 2)) r_n^j λ_n^j.
And j was chosen so that r_n^j λ_n^j is at least 1/d; so I'm using this inequality and that one, and together they give me 1/(d(d + 2)). So the area has a uniform lower bound. And now I claim the following. T^{R_n}(J), by definition, is back in the base: when I look at this return time, the orbit may go around many times, but this small interval comes back (maybe I will plot it in a different color), back, I don't know, somewhere here, in the same base. Maybe it overlaps J, maybe it doesn't; it doesn't matter. So these two intervals are certainly close. But I claim more; look at the picture, please. Say I take some other floor in the middle of the tower. What does it do? It goes up, comes back to the base on the red interval, and then it climbs the tower again up to the same level. So each floor goes up, comes back, and climbs to the same height, by construction. So what I claim is that this implies that for every floor i, T^{R_n + i}(J) is contained in T^i(I_n^j). OK? So every floor of the small tower comes back inside the same floor of the bigger tower. And the towers are shrinking: the size of I_n^j goes to zero, and this implies rigidity, because everything comes back very close to itself. Fine? So, by the way, really as a remark: with some version of Katok's argument one can actually prove something a little bit more. Along the same lines, one can prove that there exists a partition of the space into floors of induced towers, and finitely many, at most d(d + 2), return times, rigidity times, one for each partition element. So I can look at the full partition: each tower I can divide into at most d + 2 induced towers, and each of these towers has its own rigidity time, one time at which every floor simultaneously comes back near itself. With this, if you want (I'm not going to do the details), one shows that T is not mixing. If you give me some set A, first of all I can approximate it by floors of towers, so I can believe that my set is almost a union of floors. Then there are finitely many rigidity times, so at least one of them picks up a significant proportion of A; hence a significant proportion of A comes back very close to itself. But if a set has too much self-intersection along a sequence of times, it cannot mix. You can make the epsilon-delta proof, but essentially, if the set A is small enough and μ(A ∩ T^{−R_n} A) is at least a constant times μ(A), then this quantity cannot be of order μ(A)², which is what it should be under mixing. So you can work it out, but this is essentially close to the full proof of absence of mixing for any IET. And, to advertise: I taught in a summer school at ICTP (Davide and another colleague were the TAs), and the lectures of the school are online on YouTube. In the very last lecture I did Rauzy-Veech induction and this proof, including the last part. So you can also watch the last lecture of the ICTP summer school. OK, so we have rigidity sets.
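For intuition, here is a minimal numerical illustration of the simplest instance of rigidity: not the tower construction above, but the rotation case, where along the denominators q_n of the continued fraction convergents the map T^{q_n} approaches the identity on the whole space; the helper names are mine.

```python
# Rotation case: along the denominators q_n of the continued fraction
# convergents of alpha we have ||q_n alpha|| -> 0, so T^{q_n} moves every
# point by less than ||q_n alpha||: full rigidity, of which the tower
# construction above provides the partial analogue for IETs.
from math import sqrt

def convergent_denominators(alpha, n_terms):
    """Denominators q_n of the continued fraction convergents of alpha."""
    qs, q_prev, q = [], 0, 1
    x = alpha
    for _ in range(n_terms):
        a = int(1 / x)
        x = 1 / x - a           # Gauss map
        q_prev, q = q, a * q + q_prev
        qs.append(q)
    return qs

alpha = sqrt(2) - 1
for q in convergent_denominators(alpha, 8):
    dist = min((q * alpha) % 1, 1 - (q * alpha) % 1)   # ||q alpha||
    print(q, dist)              # dist ~ 1/q_{n+1}: T^q is near the identity
```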
So now it's time to do the next part; maybe I leave the rigidity sets where they are. I'll just keep this picture and erase the board. (Sorry, does anybody know where the eraser is? Ah, thank you so much.) So now: cancellations. I want to prove the following proposition. (I prepared my notes, but then I don't use them; maybe I should sometimes, so that I'm sure I don't forget what I want to tell you.) This is the second ingredient that I need, and I will write it as a proposition. First, though, a simplifying assumption, which I should have made earlier: I could write a general formula, but for the whole purpose of today let me assume that the function has a single pair of symmetric singularities, f(x) = −log x − log(1 − x), a symmetric log; otherwise I complicate my notation for nothing. Yesterday we did the one-sided log, and today we do the symmetric pair. So f′(x) is simply −1/x + 1/(1 − x); that's the type of function we are studying. Now the proposition: for almost every IET there exists a sequence n_k tending to +∞ of positive and balanced induction times such that the following holds. Remember, yesterday we studied the derivative and proved that it grows faster than an L¹ function would allow: the Birkhoff sum of f′ was growing like r log r. Now I claim that for every j, for every x in I_{n_k}^j (so for all points in the base; n_k is the step of the induction), and for every 0 ≤ r < r_{n_k}^j (so I'm looking at the Birkhoff sum from the base up to something less than the height of the corresponding tower), the Birkhoff sum of the derivative at x satisfies |S_r f′(x)| ≤ M r + 1/x_0 + 1/y_0, with M uniform. So for a one-sided log this sum grows like r log r; here we are saying that along this special subsequence of times it is bounded by a uniform constant times r, as if the derivative were integrable. Of course, as we learned yesterday, I have no hope to control the closest points, so I need to add two terms; let me write what x_0 and y_0 are. x_0 is the closest visit to 0: the minimum of T^i x over the orbit segment (this distance is just the point itself in my case; I call it x_0 for symmetry). And y_0 is the closest visit to 1: the minimum of 1 − T^i x over the orbit segment. So x_0 is the closest visit to 0, and y_0 is the closest visit to 1.
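Since the statement came out in pieces, here is one compact way to write the proposition; the shape of the bound follows the board, while the exact quantifiers are my reading of it.

```latex
% A compact form of the proposition (my reading of the board):
% f(x) = -\log x - \log(1-x), so f'(x) = -1/x + 1/(1-x).
% For almost every IET T there are balanced induction times n_k -> infinity
% and a uniform constant M such that, for every j, every x in the base
% I_{n_k}^j, and every 0 <= r < r_{n_k}^j,
\[
  \bigl| S_r f'(x) \bigr|
  = \Bigl| \sum_{i=0}^{r-1} f'(T^i x) \Bigr|
  \le M\, r + \frac{1}{x_0} + \frac{1}{y_0},
\]
% where x_0 = \min_{0 \le i < r} T^i x is the closest visit to 0 and
% y_0 = \min_{0 \le i < r} (1 - T^i x) is the closest visit to 1.
```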
So these two I need to trim somehow: if I want to control my Birkhoff sums, these two terms can always spoil the behavior. We will see in a second that we can bound them too. But I want to emphasize that these two visits can come arbitrarily close to the singularities, while everything else has a behavior which is much tamer than the r log r we saw yesterday. So let me try to convince you that this is sufficient; afterwards I will tell you why it is true and how you can prove it. Before doing that, I want to convince you that if we have this, we have no stretch: the second part of the criterion. Again, this is an estimate on Birkhoff sums, not for every time, only for a special subsequence of times which we will have to choose carefully. And even though the derivative has these 1/x singularities, and yesterday we saw that with only one of them the sum grows like r log r, when there are two there is a cancellation effect between the positive and the negative parts, and the log r disappears. And remember, yesterday it was really the log r which was giving the stretch; so this is the no-stretch. Now let me show you how the proposition plus the rigidity sets implies no mixing; let me conclude the argument assuming the proposition. I will do it a little quickly. Take n_k as in the proposition, and take E_{n_k} the corresponding partial rigidity sets. I look inside my set, and I need to prove that for any two points in this set the difference of the Birkhoff sums is bounded. So this is my blue partial rigidity set. First of all, I want to control x_0 and y_0: I don't want the orbit to go too close to the singularities. The first thing you can do is trim: you can trim E_{n_k} to control x_0 and y_0. What I want to do is just remove from my set a kind of pillow: I remove a bit on the right side and a bit on the left side, and I look at the smaller set where the boundary has been trimmed. So consider E′_{n_k} obtained by removing a piece of size proportional to 1/r_{n_k}^j on both sides; let me not write it too formally. And here balance is used, let me explain. This height is r_{n_k}^j, and by balance all towers have comparable proportions: the widths are also of order a constant over r_{n_k}^j. If the tower is balanced, the heights and the lengths all have similar ratios, so I have enough room to remove two pillows (sorry, maybe with a different constant: there is some constant related to the balance and to the number of intervals, and I can find a smaller constant which fits) proportional to 1/r_{n_k}^j from the right and from the left. This implies a control on x_0 and y_0: they are at least something like 1/(C r_{n_k}^j). Points cannot go too close, neither to 0 nor to 1. So what happens? These floors are all disjoint, it's a partition, right? So wherever they sit in [0, 1], I have this little pillow which protects me from 0, and that little pillow which protects me from 1. OK? So I'm just removing the closest points.
I think I can erase this. And now, basically, maybe I should write one keyword, and the keyword is: mean value. So I want to compare points; maybe this is a zoomed-in version of E′_{n_k}, let me draw it like this. So I need to compare points in E′_{n_k}, and basically there are two cases: it's enough to compare x and y in the base, and then to compare y above x. If I can compare any two points in the base and any two points on top of each other, I can control any two points. So first, say x and y are both in the base, and consider S_r f(x) − S_r f(y) (this is r, not R_n). It's basically just the mean value theorem: the difference is a sum of derivatives at intermediate points times the size of the interval. By the proposition this sum of derivatives is at most (M + 2C) r_{n_k}^j, where the 2C term is the control on x_0 and y_0, and the size of the base, by balance, is again something like a constant over r_{n_k}^j. So the difference is at most M′, some big constant. It's the right order: the derivative sum is at most of order r, and the base is of order 1/r. In the other case we have to look a little bit at the tower (I think I'm spending too much time on this; look at the picture). What if x and y are above each other? Maybe I need two colors. If x and y are one above the other, I compare the Birkhoff sum for x, which is something like this, and the Birkhoff sum for y. What does the Birkhoff sum for y do? It starts from y, goes to the top, comes back somewhere in the base, and goes up again to the same height. So the two sums have one piece in common, and the difference is, again, some Birkhoff sum from the base up to some intermediate height; similarly for y above x in the picture. So the difference you again control by mean value: I compare this part of the red orbit with this part of the yellow orbit, and I use the Birkhoff sum of the derivative up to this height. But here is an important point which I really need: I cannot allow myself to have only the Birkhoff sum of the derivative over the full tower; I really need these intermediate times r in the proposition. OK.
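To record the first case of the comparison (both points in the base), here is the chain of inequalities as I understand it; the letters C and C′ stand for the balance constants from the trimming step and are placeholders.

```latex
% First case of the mean value comparison, x and y both in the trimmed
% base (C, C' placeholder balance constants; xi_i lies between T^i x and
% T^i y, which after trimming stay a definite distance from 0 and 1):
\[
  \bigl| S_r f(x) - S_r f(y) \bigr|
  = \Bigl| \sum_{i=0}^{r-1} f'(\xi_i) \, (x - y) \Bigr|
  \le \bigl( M + 2C \bigr)\, r_{n_k}^{j} \cdot \frac{C'}{r_{n_k}^{j}}
  = M' .
\]
% The derivative sum is of order r_{n_k}^j by the proposition (after
% trimming, 1/x_0 and 1/y_0 are each at most C r_{n_k}^j), while |x - y|
% is at most the width of the base, of order 1/r_{n_k}^j by balance.
```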
So now, what do we do? Now I want to prove the proposition (oh, I erased it; that was a mistake, I'm too late, I already erased it), that is, the estimate on the derivatives. I want to give you the heuristic argument for why there are cancellations and for what you need to prove to exhibit them. We need some finer understanding of the equidistribution of the orbit points to get this no-shearing estimate. So what I will do now is give you a heuristic. Yesterday, for the asymmetric log, we also started with a heuristic: we saw where the log r comes from, pretending that all the points are equidistributed, or evenly spaced. So: heuristic for cancellations. I call them cancellations, but maybe I should write heuristic for no shearing, since no shearing comes from cancellations between the positive and the negative part. Let me recall the setting: let me call the function g(x) = 1/x − 1/(1 − x); it's −f′. The main point is that g is not L¹, but something like its principal value is zero. The reason it's not L¹ is that there is a positive explosion like 1/x at 0 and a negative explosion of the same order at 1. And since they are of the same order, if I integrate symmetrically, the integral of g from ε to 1 − ε is 0. So there are cancellations between the two parts, and that's what we need to exploit. To exploit this cancellation I need to know that my orbit is pretty well distributed. If my orbit were exactly an arithmetic progression, it would be perfectly balanced; but it will deviate from an arithmetic progression, and I need to control the deviation quite finely to exploit the cancellations.
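To see the cancellation mechanism numerically, here is a minimal sketch under the idealized assumption that the orbit is a perfect arithmetic progression; the offset delta and all the names are mine.

```python
# Idealized sketch: if the orbit points were an arithmetic progression
# x_i = (i + delta)/r, the two symmetric singularities cancel and the
# Birkhoff sum of g(x) = 1/x - 1/(1 - x) is O(r), not r log r.
import math

def birkhoff_g(points):
    return sum(1 / x - 1 / (1 - x) for x in points)

r = 10_000
delta = 0.3                     # offset of the progression from 0
orbit = [(i + delta) / r for i in range(r)]
print(birkhoff_g(orbit) / r)    # O(1): the log r has disappeared

# one-sided comparison: the sum of 1/x alone does grow like r log r
print(sum(1 / x for x in orbit) / (r * math.log(r)))   # also O(1)
```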
So let me give you the key idea; first, let me introduce some notation. First of all, what I want to prove is the bound from before: |S_r f′(x)| ≤ C r + 1/x_0 + 1/y_0. And, like yesterday, there are two steps. Step one is to consider r equal to the full height of the tower; this is like the special Birkhoff sum case. I will need the intermediate times, but I first prove it for the full tower, and this is the only part I will show you. Step two, like yesterday, is the decomposition into special Birkhoff sums for the other r: if I have an intermediate r, I don't know, up to here, I use full towers of the previous steps to approximate it; I decompose it into a number of full towers of a lower step, then interpolate with full towers of a smaller step, and so on. So I can approximate any other r with previous times. There is a tricky part here too, but I don't want to go into it; I'm happy if I can show you what you do for a special Birkhoff sum over the full height. So take x in the base, and let r be the full height r_n^j. This is somehow the best distributed picture. I have my point, and I look at the orbit; the orbit goes around somewhere. For convenience of notation, since I'm interested in comparing closest visits to 0 with closest visits to 1, let me relabel the points, ordering them by distance. We are considering x, Tx, up to T^{r−1}x. Relabel the points, and let x_i be the i-th distance from 0, so that x_0 < x_1 < x_2 < … < x_{r−1} are the distances from 0 in increasing order: this is x_0, this is x_1, this is x_2, x_3, and so on. And let y_0 < y_1 < … < y_{r−1} be the distances from 1 in increasing order: y_i is the distance of the i-th closest point to 1. Is the definition clear? Basically, I look at my orbit and order it from left to right, and then I look at 1 minus my points and order them from right to left. So somehow I want to cancel the i-th point from one side against the i-th point from the other. Again, if everything were perfectly arithmetic, they would cancel perfectly; but you need to allow, of course, for some discrepancy. So let me tell you what we will show; OK, actually more than a claim. We will show that x_i does not deviate too much from an arithmetic progression. If everything were arithmetic, x_i would be i/r: there are r points, and if they were all equally spaced, the i-th distance from 0 would be i/r. So we will show that x_i = i/r + ε_i/r, where ε_i/r is some error from arithmetic, and I will have to tell you something about this error; and similarly y_i = i/r + ε̃_i/r (I should use a different notation for the second error). And what I will show about these errors, and what is key, is the following: they can be positive or negative, so I put absolute values, and if I sum them weighted by 1/i², the sum Σ_i (|ε_i| + |ε̃_i|)/i² is bounded by a finite constant uniformly in the induction step. We will show this, and I claim that it is sufficient: enough to prove the proposition, step one and step two. The statement, I hope, is clear: I'm looking at the discrepancy from arithmetic, and I want this form of bound. Let me convince you why this is a good bound. Proof of the claim: I want to look at S_r g(x), where g = −f′ is my function. What is this Birkhoff sum? It is the sum from i = 0 to r − 1 of 1/(T^i x) − 1/(1 − T^i x), and, up to rearranging (this is just the definition of the Birkhoff sum), it equals the sum of 1/x_i − 1/y_i: I'm just reordering the elements with my notation. Now let's keep aside 1/x_0 and 1/y_0; I can keep them aside because they are set aside in the result I want to prove. The closest visits I cannot hope to compensate: it could happen that the closest visit to 0 is much closer than the closest visit to 1, and then there is no bound unless you throw these points away. So I just need to worry about the others: I want to estimate the sum from i = 1 to r − 1 of 1/x_i − 1/y_i, and I put everything over a common denominator, (y_i − x_i)/(x_i y_i). Should I call this star? Now I use the claim about arithmetic: if both are close to arithmetic, the main order cancels. When I take the difference y_i − x_i, the i/r cancels, and I'm left with (ε̃_i − ε_i)/r, just the discrepancies over r. What about the denominator? You can remark that, by balance, the orbit points are at least some constant over r apart: all my points belong to floors of the towers, the floors of a tower are disjoint, and two points cannot be closer than the length of the shortest floor of my partition; since the partition is balanced, everything is of order 1/r. This implies that both x_i and y_i are at least some constant times i/r, a lower bound of the right form. If I plug this in, the denominator becomes at least a constant times i²/r². The constant is uniform, so I can take it out.
It's a uniform constant, depending on the balance. And now we are done, because one factor of r cancels, and what remains is a constant times r times the series which I am assuming is controlled; so the whole sum is at most a constant times r. So I get my constant times r bound. That's the idea. Is it clear? This is really the key idea of the cancellations: comparing the i-th visit from one side with the i-th visit from the other, and hoping to have enough control to cancel them. [Question: the orbit is 1/r-spaced because you're using balanced times?] I'm using balance, yes. It's not exactly 1/r-spaced; it's that lower bound.
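Let me also record the computation we just went through in one display; c and C denote the balance constants.

```latex
% The cancellation computation in one display (c, C balance constants):
\[
  S_r g(x)
  = \sum_{i=0}^{r-1} \Bigl( \frac{1}{T^i x} - \frac{1}{1 - T^i x} \Bigr)
  = \frac{1}{x_0} - \frac{1}{y_0}
    + \sum_{i=1}^{r-1} \frac{y_i - x_i}{x_i\, y_i} .
\]
% Writing x_i = i/r + \varepsilon_i/r and y_i = i/r + \tilde\varepsilon_i/r,
% the main terms i/r cancel in each numerator, while balance gives the
% lower bound x_i, y_i >= c i / r, so
\[
  \Bigl| \sum_{i=1}^{r-1} \frac{y_i - x_i}{x_i\, y_i} \Bigr|
  \le \sum_{i=1}^{r-1}
      \frac{ (|\varepsilon_i| + |\tilde\varepsilon_i|)/r }{ (c\, i/r)^2 }
  = \frac{r}{c^2} \sum_{i=1}^{r-1}
      \frac{ |\varepsilon_i| + |\tilde\varepsilon_i| }{ i^2 }
  \le C\, r .
\]
```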
OK, now let me tell you some special cases, maybe I'll put them here: some special cases in which you can prove something quite good about these ε_i. (I hope I'm not lying; I'm trying to oversimplify a little bit, and I hope I don't oversimplify too much.) A special case is when my interval exchange T is a rotation. In this case I take the r_{n_k} to be denominators of the convergents. And in this case you can prove something very special about the y_i and the x_i: using my notation of ordering from the right and from the left, you can prove that the x_i and the y_i are just a shift of each other, y_i = x_i + δ/r for some δ (I write δ/r just so you understand that it's of order 1/r). So there is a really strong kind of symmetry: if you take your orbit and flip it, what you see is a rigidly shifted copy of the same orbit points. The difference between x_i and y_i is a constant, so everything here is the best it can be: the discrepancy is a constant over r. This is in a joint paper, quite old, that I have with Yakov Sinai from when I was a PhD student, and you can prove it by combinatorics of substitutions, word combinatorics: one can understand very well the partitions that arise from rotations when you build these towers, code them symbolically, and the coding is almost a palindrome; when you flip it, it looks the same, and you can prove this very concretely, without induction estimates: it's just a word-combinatorics kind of problem. And there is another case, which I mentioned, where absence of mixing was proven shortly before my general result: a genus two case. It corresponds to five intervals with the permutation 1, 2, 3, 4, 5 → 5, 4, 3, 2, 1, and it was done by Scheglov. In general, if you know what a hyperelliptic Rauzy class is, this is something which is true for hyperelliptic Rauzy classes. Essentially, Scheglov proves a stronger form of control than what we have in the general case: the difference between these two visits is bounded by a uniform constant over r. So it's not a shift, but there is a strongly bounded discrepancy. Scheglov proves it combinatorially, but let me say one keyword: one can understand everything geometrically in these two cases via the hyperelliptic involution. The hyperelliptic involution on the surface gives you a very strong property relating closest visits from the right and closest visits from the left: you can flip your picture, and there is a lot of inner symmetry which helps you. The other special case, which I also proved before the general case, is IETs of bounded type. I haven't defined it, but it basically means that along this positive balanced acceleration the matrix norms are uniformly bounded above; it corresponds to bounded-type rotation numbers, where the entries of the continued fraction expansion are bounded. For those, the picture is already more similar to the general phenomenon: what one proves is that in this case |ε_i| is bounded by something like a universal constant times i^γ, where γ is some constant between zero and one. So there is a power-type deviation. And you see, this is still good, because if I look at the series Σ C i^γ/i² with γ less than one, it is finite. So I can allow a power form of deviation in i, with power less than one. This is much more representative of the typical case. But unfortunately, if the IET is not of bounded type (maybe I do one more thing and then we have a little bit of a break), in the general case we will not be able to have a uniform constant. We will prove something like this: |ε_i| ≤ C_i i^γ, where C_i is a constant which depends on the point, and then there is this power form of deviation. What I can tell you about this constant is that, in some sense, it depends on all the history of the entries of the continued fraction expansion: this constant records the past up to the point where you are. Let me write some kind of expression. Up to a universal constant, C_i is bounded by a sum, over renormalization times n ≥ n_i, of the norms of the matrix products ‖A(n_i, n)‖, each weighted by a factor which is exponentially small in n − n_i (I hope I'm not writing too small). Here n_i is defined as follows: I have the interval (0, x_i), I look at points from 0 to x_i, and I ask for the largest base interval of the induction that I can fit inside; so n_i is the infimum of the n such that some I_n^j is contained in (0, x_i). It's something which tells me the scale of the interval (0, x_i). OK, I think it's a good point to stop. What I want to stress is that these constants, depending on where my point is, don't depend only on the last entry: they depend on all the previous entries, from the last one backward. But what's important is that they do so in a geometric-series fashion. OK, maybe I'll stop here and continue later.
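One hedged way to typeset the expression just described (the exponential rate α and the constant C are placeholders; A(m, n) denotes the product of the cocycle matrices between steps m and n):

```latex
% Hedged reconstruction of the bound on the point-dependent constants
% (C, alpha are unspecified placeholders):
\[
  |\varepsilon_i| \le C_i\, i^{\gamma},
  \qquad
  C_i \le C \sum_{n \ge n_i} \bigl\| A(n_i, n) \bigr\|\, e^{-\alpha (n - n_i)},
  \qquad
  n_i = \inf \bigl\{\, n : I_n^{j} \subseteq (0, x_i) \text{ for some } j \,\bigr\} .
\]
% Matrices far beyond the scale n_i of the point still contribute, but
% with exponentially small weight: the past matters less and less.
```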
And then I'll try to tell you a little bit about the type of Diophantine conditions you need to conclude the proof, and why you get this type of estimate; I will tell you a few more things in a few minutes. OK, so first I realized there are two small errata, two small typos in what I did; let me make two small corrections. First of all, in the last thing that I wrote, the index on the sum appeared as r, but it should be n: n is the parameter over which I'm summing, the matrix products go from n_i to n, where n ranges between n_i and the current time, and their norms grow with the difference n − n_i, which is compensated by the exponentially small weight. So please correct your notes: remove the stray r index. And second: I had this in mind when writing the proof, and I think I created a little confusion when I sketched why the estimates on the Birkhoff sums are sufficient for verifying the criterion. One has to be careful: there are two things which look similar, R_n and r_n^j. R_n is the rigidity time of the partial rigidity: the iterate at which the small interval J of my set E_n comes back to the base. In the criterion, what we want to estimate is the difference of Birkhoff sums up to the time R_n, the return time. But what I discussed while doing the mean value and the two cases was actually the difference over the full height of the tower, r_n^j. And R_n is a priori much bigger than r_n^j. So there is one extra step which I didn't discuss, and maybe I confused you or myself: one needs to decompose R_n into copies of the heights r_{n_k}^j. That's another reason why we need special times at which, essentially, the n_k-th entry is bounded, so that finitely many copies suffice. OK, there is a little extra step which I maybe skipped; since this is recorded forever, it's better that it's recorded correctly. OK. So, this is really the key, this argument about arithmetic progressions and cancellations; this is the main idea you should carry home about the cancellations in the symmetric case. But I want to conclude with two things, to finish this absence of mixing: I want to convince you why you can prove this type of bounds (we said the bounds on the discrepancy from arithmetic are quite complicated, since they depend on the point), at least vaguely, and then tell you which type of Diophantine conditions you need to make everything possible. OK, so let me start with how to prove something like x_i = i/r + C_i i^γ/r. That's what we want to prove, right? Maybe one way to write it is x_i = i/r + O(i^γ/r), where the constant in the big O is C_i, just so as not to write ε_i: the error is estimated like this, with a constant which depends on the point and has that complicated expression. I want to give you a hint of where the power comes from and where this expression comes from. First of all, you want to look at the interval which goes from 0 to x_i: you can convert this spacing estimate into an estimate on numbers of visits. So I look at the cardinality of visits of my orbit to the interval I_i = (0, x_i).
So when I say visits, I mean visits of the orbit I'm considering. By definition, how many visits does my orbit make to I_i? x_i is the i-th distance from zero, so the number of visits to the open interval (0, x_i) is exactly i (whether it's i or i + 1 depends on whether I take the interval open or closed; for example, (0, x_1) open contains exactly one orbit point, so with the open interval the count is i). Good. So i is the number of visits. And how many visits do you expect? Here enters the phenomenon that IETs have power deviations of ergodic averages. First of all: we have an orbit of length r, so the expected number, as in the ergodic theorem, is r times the length of the interval; this is the main term. But if I count visits for an IET, there is an error, and the error has a power form: the number of visits is r times the length plus an error which is like (r times the length)^γ. That's what you can expect for an IET: the number of visits behaves like the ergodic main term plus a power error. And if you have this, you can essentially divide by r: the length of my interval is x_i by definition, so you get i = r x_i + O((r x_i)^γ), that is, x_i = i/r + O((r x_i)^γ)/r. You see what I'm doing? I bring the error to the other side and divide by r. And r x_i is again of order i, so that's where the power form of deviations comes from: this power form of discrepancy from arithmetic comes from a power form of deviations for the number of visits to the interval (0, x_i). And how do you prove this (let me call it red star; I keep calling everything star)? The statement is that red star is essentially equivalent to what I want to prove, and to prove red star, you first consider special intervals. The special intervals are my induction intervals I_m^i for m ≤ k, at balanced steps. For them, the cardinality of visits to an interval of this form is controlled by the cocycle: by definition, these counts are the entries A(m, n)(i, j) of the cocycle matrices, so those you can estimate using your cocycle. And the fact that for these intervals you have a power form of deviations is one of the very first results on interval exchange transformations: the number of visits is r times the length plus an error term which is a power of the main term. This was essentially proved by Zorich in a paper in the 90s, and it goes under the name of power deviations of ergodic averages for interval exchange maps; then Forni proved it in the context of translation flows. So this is essentially power deviation. What I actually need is a slightly more quantitative form of Zorich's result.
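As a sanity check of the visit-counting heuristic above, here is a minimal numerical sketch, assuming the rotation case (where the discrepancy is in fact even better than a power of r); the names are mine.

```python
# Minimal sketch (rotation case): the number of visits of an orbit of
# length r to (0, s) is r*s plus a discrepancy, the "deviation of ergodic
# averages".  For a typical IET the discrepancy is a power (r s)^gamma
# with gamma < 1; for a badly approximable rotation it is even smaller.
from math import sqrt

alpha = sqrt(2) - 1             # rotation number; we follow the orbit of 0
r = 100_000                     # orbit length
s = 0.1                         # target interval (0, s)
visits = sum(1 for i in range(r) if (i * alpha) % 1 < s)
print(visits, r * s, visits - r * s)   # discrepancy << main term r*s
```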
I need an estimate where the constant depends only on the difference between n and m. And for this, as I was already telling you yesterday, there is a kind of Perron-Frobenius argument: if you have Perron-Frobenius, and you use the balanced times to get a definite amount of shrinking of your cone at every step, then you have a quantitative control on these deviations: finitely many steps give you a definite gain in this power, OK? So what I need is a uniform version (a uniform version that I use both for mixing and for absence of mixing) of the result proved by Zorich. This is again step one. And then: approximate I_i by special intervals. Yesterday we saw the decomposition of ordinary Birkhoff sums into special Birkhoff sums; now we do something similar, but in space. We have an arbitrary interval, and we approximate it by special intervals, with a decomposition similar to yesterday's. So you take the largest special interval that fits: some I_n^j will be fully contained here, maybe only one, maybe more, maybe two of them from the partition at step n. Then you have a remainder, which you fill with intervals of the base at step n − 1, and so on: you decompose your interval into intervals which are bases of towers of the Rauzy-Veech induction at balanced times. And when you combine step one over all these levels, that's where you get the strange constants: this is what produces the expression for C_i. How many intervals of one given step do you use? At most the norm of some matrix; here the entries of the matrices come into play, and they produce this complicated-looking constant, OK? What's important is that, if my interval is not a special interval, the discrepancy from the average number of visits depends on all the levels of the Rauzy-Veech induction which enter in approximating it. So some intervals can be worse distributed than others, and this can happen because at some past time the system was badly distributed. But the influence of the past is less and less important; that's what the geometric decay expresses. If I have a huge matrix a million steps ago, it can spoil the equidistribution, but it has to be really huge; while a badly behaved matrix a short time ago can spoil things much more easily. So all the levels affect the constant, but less and less as they recede into the past. That's the meaning of this constant. And this is enough for the cancellations. Just to conclude: what type of Diophantine condition do you need to impose in order to control these deviations? Remember, we have these complicated expressions for the constants, which depend on the past, and what I want from these constants is that when I sum them against 1/i², I get something finite and bounded. I'm not telling you how you get this, but I will tell you the flavor of the Diophantine condition and then conclude. So: the Diophantine condition to prove absence of mixing. My proposition held for almost every IET, and I want to tell you which full measure set I'm talking about. It is based on the following lemma.
Lemma: for almost every IET and for every ε > 0, there exist a constant C and a sequence of balanced times n_k tending to infinity with the following property (sorry, maybe I should have put the quantifiers first). When I look at the matrix at time n_k and go into the past (remember, I'm only using balanced steps), I see, infinitely many times, matrices which grow backward in a controlled fashion, subexponentially: ‖A(n_k − n, n_k)‖ ≤ C e^{εn} for all n. Let's see what this estimate is telling me. Put n = 0: then you have a constant, so the matrix at time n_k should be bounded. And if I go backward, to n_k − 1 and n_k − 2, I'm allowed to grow, because if I imposed that everything is bounded I would have a bounded-type IET, and those have measure zero. But the growth, if I place myself at n_k and look backwards, is subexponential, with an arbitrarily small exponential rate. This is something which almost every IET satisfies. So almost every IET has bounded steps infinitely often; not only that, almost every IET has infinitely many bounded steps at which the past looks tame, where the past is not too bad. If you ever saw the theory of circle diffeomorphisms: Yoccoz, for example, has a condition for the linearization of circle diffeomorphisms of this flavor, of being infinitely often at a good time with respect to the past. These are conditions in the style of recurrence to a good set for the past: you find many moments where the past is good. And this is the flavor of this condition. This lemma is crucial to produce the good times and to control this series of constants; I'm not going to tell you why. The lemma actually has to be applied four or five times, but OK, I'm getting ahead of myself. So, tools for the lemma. Yesterday (Tuesday) we used Avila-Gouëzel-Yoccoz, the estimates on the cocycle, to prove the full measure of the mixing Diophantine condition. Here you actually use much less: you only need log-integrability of the cocycle, namely that the logs of the norms of the cocycle matrix and of its inverse are integrable over the space of IETs (sorry, I said log, but I didn't write log: the integral of the log of this is finite). This is exactly the condition for Oseledets' theorem to apply, right? And this is actually much older: it's Zorich, and I should say Kontsevich also had a fundamental role; Kontsevich and Zorich studied the Lyapunov exponents and proved that this is what you need for Oseledets. So: Oseledets, plus, maybe I would say, a Lusin-type argument, plus Poincaré recurrence. You really need the invertible version: you need to go into the past and say that, for a typical IET, if I look at the past cocycle, the product grows exponentially, because there are Lyapunov exponents, but the n-th matrix in the past grows subexponentially; it's like the n-th term in the ergodic theorem, divided by n, going to zero. You can try to think about this line and prove the lemma. So for almost every IET the growth of the individual matrices in the past is subexponential, but the constant will depend on the IET.
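Let me typeset the condition in the lemma; the shape follows what was said on the board, and the exact quantifiers are my reading.

```latex
% Hedged form of the Diophantine condition: for almost every IET, for
% every eps > 0 there are C = C(eps) > 0 and balanced times
% n_k -> infinity such that
\[
  \| A(n_k - n,\, n_k) \| \le C\, e^{\varepsilon n}
  \qquad \text{for all } 0 \le n \le n_k .
\]
% For n = 0 this says the matrix at time n_k is bounded; looking n steps
% into the past, the products may grow, but only subexponentially.
```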
Then you want to make the constant uniform, so you use a Lusin-type argument, and then, by recurrence, you return infinitely often to the set where the past is good, and this gives you the estimate. (I'm talking to the people who maybe know more; don't worry if you don't.) But this is as much as I will tell you. And what is good about this log-integrability is that it is inducible: if I accelerate a cocycle by looking at return times, it stays log-integrable. So if a cocycle is log-integrable, the induced cocycle is log-integrable, and you can apply the lemma again to the acceleration obtained by looking at the times n_k. I actually need to apply this lemma three or four times in the proof, because every time you decompose time or you decompose space, you need to look at special times which are a subset of the previous special times. I think I told you a lot, and I hope you got the flavor. It becomes very technical to carry out this proof, but the idea is that, to achieve these cancellations, you need quite a fine control of the distribution of your points. And again, you can see that the Rauzy-Veech matrices play a key role, because they allow you to estimate deviations quite well. OK, I think I'm happy. And what happened after? Actually, it's kind of funny, because what I told you in the course so far is already, I would say, almost 10 years old. But a lot more has happened in the past few years, in the past three, four years. So: locally Hamiltonian flows were studied in the 90s; then there was a big gap after the original results of Kochergin and Khanin-Sinai; and then myself, Scheglov especially, and other people started this revival of mixing and mixing properties. (And weak mixing, which I didn't tell you about, was also proved around 10 years ago.) But in the last three, four years there has been an explosion of new results, made possible by some breakthroughs. I want to give you a flavor of what has happened recently: what's beyond? Maybe the nice layout would be two boards, one for what's beyond mixing, and one for what's beyond absence of mixing. Ah, sorry, before I do that, a question which was asked yesterday. We always proved something about almost every IET, but at the beginning I promised statements about almost every locally Hamiltonian flow. I wanted to tell you that every result which we proved for almost every IET produces a full measure set of locally Hamiltonian flows. I told you there is a measure class on the set of locally Hamiltonian flows, but I didn't explain it really nicely, so let me state it as a remark. Believe me that a full measure set of IETs produces a full measure set of flows, with respect to what are called period coordinates. If you have a locally Hamiltonian flow, I recall from the first lecture, it is given by a closed one-form η. So, just to finish off: you can look at the integrals of η over a basis of homology; I will write (∫_{γ_1} η, …, ∫_{γ_n} η), where γ_1, …, γ_n are a basis of the relative homology of the surface, relative to the fixed points of the flow. These are the period coordinates, and the Lebesgue measure pulled back by the period coordinates gives you a notion of zero measure and full measure, a measure class.
And you can see that, once you prove something for almost every IET, you get it for almost every flow: essentially, the lengths of the exchanged intervals are the transverse invariant measure. If you want the details, I refer again to Davide, who wrote everything up nicely: to prove this full measure statement you just need to find a good basis of homology, and you need to do it carefully, so that the basis sees all the minimal components. So we did finish proving what I promised in the first lecture on the classification of mixing and absence of mixing. [Question: you're saying implicitly that the space of all locally Hamiltonian flows...?] Locally Hamiltonian flows, yes: it's this finite-dimensional space, and there are these measures. This is what is sometimes called the Katok fundamental class. You first fix the type of the singularities: you fix the genus, the number of centers, and the number of simple saddles; for a fixed number and type of singularities you have an open set carrying this finite-dimensional Lebesgue measure class. These are your moduli on this open set. And I'm saying that those moduli are what matters for the ergodic properties: once the moduli are generic, almost every, then you find the typical IETs. This was the question someone asked me yesterday, and you're right, I hadn't told you how. OK, so: what's beyond mixing? We already said that beyond mixing we have quantitative mixing: you can make everything that I did quantitative. If you have a locally Hamiltonian flow in the asymmetric setup, and you take two functions G and H which are C¹ with compact support in S minus the fixed points, so supported outside the singularities, then there are estimates for the decay of correlations; this is Davide's paper, published, I think, in 2017. If you look at the integral over the surface of G times H composed with φ_t, it decays like a constant divided by (log t)^γ, where γ is some power. So not only can you prove mixing: you can see that mixing is actually rather slow, like a power of the logarithm. And this is not a bad estimate; it is maybe not optimal in the constant and in the power, but this is really the true form of the decay. You do have mixing for these flows, but everything happens slowly. Why should there be a log? Because everything relies on stretching, and the stretching is logarithmic: this log is reminiscent of the log in the stretch. All the mixing is produced by shearing, and the shearing is logarithmic; that's where the log comes from. OK, so this is essentially clear: it's a refinement of everything that we have done so far.
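In a formula, and hedging, since the precise hypotheses are in Davide's paper, the estimate has roughly this shape:

```latex
% Shape of the quantitative mixing estimate (gamma > 0 some power; G, H
% of class C^1, compactly supported away from the fixed points; I write
% the mean-zero version, which absorbs the product of the averages):
\[
  \Bigl| \int_S G \cdot (H \circ \varphi_t)\, d\mu \Bigr|
  \le \frac{C(G, H)}{(\log t)^{\gamma}},
  \qquad t > 2,
\]
% for G of zero mean: mixing holds, but only at a logarithmic rate.
```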
And you can have multiple mixing. What is multiple mixing? For every k greater than two, you look at the measure of multiple intersections of measurable sets: A_0 intersected with φ_{t_1} A_1, and so on up to φ_{t_{k−1}} A_{k−1}. For mixing you have only two sets; here I take k sets and I flow them. So you say that your flow φ_t is k-mixing if, for every choice of measurable sets, this quantity tends to the product of the measures of the A_i as every difference t_i − t_j of the times grows to infinity. This is k-mixing: instead of two sets, you take k sets and you push them apart. Theorem: for almost every IET and for f with asymmetric log singularities, the special flow, our locally Hamiltonian flow, is k-mixing for every k. This is mixing of all orders, or multiple mixing. There is a conjecture by Rokhlin that mixing should imply mixing of all orders; it is a big conjecture in ergodic theory which is still open. So when you have a class of flows which is mixing, people try to prove that, in that special class, mixing of all orders actually holds.
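Written out, the definition of k-mixing just given reads as follows (with t_0 = 0):

```latex
% k-mixing, as defined above (A_0, ..., A_{k-1} measurable sets):
\[
  \mu \bigl( A_0 \cap \varphi_{t_1} A_1 \cap \dots \cap
             \varphi_{t_{k-1}} A_{k-1} \bigr)
  \longrightarrow \prod_{i=0}^{k-1} \mu(A_i)
  \qquad \text{as } \min_{i \ne j} |t_i - t_j| \to \infty .
\]
% Ordinary mixing is the case k = 2; "mixing of all orders" means that
% this holds for every k >= 2.
```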
So, they need to prove, let me call these correlations C_{F,F}, something like the following: for special functions F, which are coboundaries, you want to prove that the self-correlations are square integrable in time. Am I writing this correctly? Okay. So basically, what you need to do is perform a similar mixing argument by shearing, quantify your shearing, and prove quantitative estimates on the shearing. So again, even if it is not visible, this is, again, a result on shearing and on improving the estimates on mixing via shearing. And here the decay of correlations is actually a power of t; but maybe the power itself is not the point. Sorry, what did I write? I will just write what you need: the correlation function is in L^2, that is, C_{F,F} belongs to L^2(dt). Okay, this is what you need.

Question: what kind of singularities of the flow give rise to this kind of spectrum? Ah, yes, this is maybe interesting to say. It has a smooth flow realization: you basically have a linear flow with a stopping point. And the stopping point, yes, I think it's just a stopping point, but then you can prove this.

Okay. And then, vice versa: spectral theory is certainly one of the things that people in ergodic theory care about. The other result on the spectrum I wanted to mention is about going somehow beyond absence of mixing. All the results so far I think of as being in the spirit of mixing via shearing improvements; here, instead, I want to improve the cancellations that we did today. So what can you try to prove beyond absence of mixing? You can prove singularity of the spectrum. Again, this is fairly old, for rotations, and for functions with a symmetric log singularity; so this is just the symmetric log. Krzysztof Frączek and Mariusz Lemańczyk, I think it's 2003, proved that for almost every rotation number the special flow, opposite to what we had there, has a singular spectrum. Mixing implies that the spectrum is continuous, and there, for those Kochergin flows, actually maybe they can prove even more; I should say: countable Lebesgue spectrum, the maximal spectral type is Lebesgue. Here, on the other hand, you have a singular spectrum.

And here is what I wanted to mention about this result. So, I'm going to take another five to ten minutes to finish, no? I'm taking back the five minutes of the break and maybe the five minutes at the beginning. Someone asked me what rigidity is good for; here is what you need to study. You need to find some special times r_n, which will be the denominators q_n for the rotation, and some sets X_n whose Lebesgue measure tends to the full measure. So not on a partial rigidity set, but on the full space, you want to prove some bounds on Birkhoff sums. And you are allowing yourself some centering: there exists a sequence of constants a_n going to infinity, and these are centralizing constants. So find r_n and the sequence a_n: I allow myself to translate my Birkhoff sums up and down, but the sup over my nice set should be uniformly bounded. So this is a form of what we did today; it's a form of bound on Birkhoff sums, right?
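In symbols, and in my notation (S_{r_n} f denotes the Birkhoff sum of f along r_n iterates), the statement has the following shape: there exist times r_n, sets X_n with Leb(X_n) → 1, centering constants a_n and a uniform constant C such that

\[
\sup_{x \in X_n} \big| S_{r_n} f(x) - a_n \big| \;\le\; C .
\]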
So, today we proved that the difference of the Birkhoff sums at two points x and y of the set E_n is bounded. Okay, here, instead of taking the difference between two points, I take one point as a reference, and that gives me the a_n. So it is somehow a tightness, a uniform tightness, not on a partial rigidity set but on the full space, or on something which is growing to be the full space, okay? And for the singular spectrum they actually need one more ingredient, which is somehow natural given the logarithmic singularity: you need a form of tightness with exponential tails. What is left out of this set has mass which decays exponentially, and this comes from the logarithmic singularities which are intrinsically there. So, apart from an exponentially small mass, everything is in a tight part of the space. So you see that this is a refinement of absence of mixing.

So we have singular spectrum on one side and absolutely continuous spectrum on the other, and on this we are currently working with Adam Kanigowski and Krzysztof Frączek, and Chaika also gave a contribution in one case. We are trying to extend it to IETs: we want to prove that the absence of mixing that I showed you today actually improves to a singular spectrum. What is really hard is that, as I showed you today, the cancellations need balanced times; I can prove cancellations only using balanced times. But here it is crucial that you have full rigidity. So it's like a double game: you want towers which are balanced, for the cancellations, but you want one big tower, rank one, for this control. How do you put them together? There is one case which I'm quite confident we can do. Well, maybe I shouldn't write it, but let's write: work in progress, with Chaika, Frączek, Kanigowski and myself, which is genus two. Why is genus two better? Because I showed you that genus two has very good cancellations which come from hyperellipticity. Those cancellations are very strong, and they also hold when you have one big tower: I don't prove them with balance, I prove them with geometry. So there we have a way out. In the general case we have some ideas; there is a trick to put the two things together, and we will see whether it works or not, but we have hope now.

And the last thing I want to mention, after quantitative mixing, mixing of all orders and results on the spectrum: there is a recent trend which is partially motivated by Sarnak's orthogonality conjecture, but which I think is of independent interest in ergodic theory, and this is disjointness. So, again, I can look at my flow, and I can look at what are called rescalings. If I have a transformation, I can look at its powers; if I have a flow, I can look at the flow with time multiplied by a scalar λ, a real number. This is the λ-rescaling: it's just a linear rescaling of time. And for certain flows, like the horocycle flow, which is an entropy-zero flow, if I rescale the horocycle flow, the geodesic flow intertwines the horocycle flow with the rescaling. So in some examples all rescalings are conjugate to each other, isomorphic to each other. But, and I would like to conjecture it, maybe, my feeling currently is that among parabolic flows the fact that these rescalings are conjugate is kind of a rare phenomenon; you shouldn't expect it. I would like to convince myself that for parabolic flows it is more typical that these two flows are disjoint.
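To fix the notation: the λ-rescaling of a flow is just the linear time change, and the horocycle example I have in mind is the standard commutation relation with the geodesic flow; the sign of the exponent depends on conventions:

\[
\phi^{\lambda}_t \;:=\; \phi_{\lambda t},
\qquad
g_s \circ h_t \circ g_{-s} \;=\; h_{e^{-s} t} ,
\]

so conjugating the horocycle flow h_t by the geodesic flow g_s realizes every rescaling h_{λt} with λ = e^{-s} > 0.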
So, this symbol, orthogonal, here means disjoint in the sense of Furstenberg. And this means that there are no common nontrivial joinings. Again, forgive me if you are not an ergodic theorist who works with joinings; I can give you the definition of a joining later, and the precise definition is also written out below, but let me give you the picture. Joinings are measures on the product which are invariant under the product flow and have the correct marginals. Okay, so this is a notion which was introduced by Furstenberg, and the ergodic theory of joinings has powerful applications. So maybe let me put down the definition of disjointness of rescalings: the flow has the disjointness of rescalings property if and only if, for all λ, or maybe for almost every λ, φ_t is disjoint from φ_{λt}.

Two recent results in this direction. On one hand, a result by Adam Kanigowski, Mariusz Lemańczyk and myself: we proved, in the asymmetric log case, for almost every... ah, sorry, this is for rotations; everything here is for rotations. For almost every rotation number we have, let me call it, disjointness of rescalings. So this property is true for Arnold flows, asymmetric log over rotations: for almost every Arnold flow we have disjointness of rescalings. And disjointness of powers, or of rescalings, is sometimes used to prove instances of Sarnak's orthogonality conjecture. This is a very recent preprint, from October or November of last year. And also Berk and Adam Kanigowski proved the same result, also for rotations, but for the symmetric log. So both symmetric and asymmetric log: again, for almost every α you have disjointness of rescalings. The proofs are very different, though. Their proof for the symmetric log is a refinement which uses something of the flavor we just saw: tightness of Birkhoff sums with exponential tails. By exploiting the exponential tails of this picture, they actually prove even spectral disjointness. The proof by Mariusz, Adam and myself is instead, in some sense, a refinement of the mixing estimates.

Okay, maybe allow me the last three to five minutes; I want to describe the last ingredient. In some sense, mixing of all orders and this disjointness of rescalings in the asymmetric case are both based on some form of quantitative shearing. So they are based on improvements of the shearing mixing estimates we saw, in the form of the so-called switchable Ratner property.
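For completeness, here is the definition of disjointness I promised above, in my notation; this is Furstenberg's notion. Given flows (φ_t) on (X, μ) and (ψ_t) on (Y, ν), a joining is a probability measure ρ on X × Y which is invariant under the product flow (φ_t × ψ_t) and has marginals μ and ν. The two flows are disjoint when the product measure is the only joining:

\[
\rho \ \text{a joining of } (\phi_t) \text{ and } (\psi_t)
\;\Longrightarrow\;
\rho \;=\; \mu \otimes \nu .
\]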
And let me finish with the picture. We proved that if I take two points x and y, they shear, in this asymmetric case, right? So what you can do is wait until this shearing is actually of size one, and say that it took you some time, capital T, to get to shearing one. Then you move one of the orbits, the one which was behind, forward by one, so that you realign the two points. This is what shearing one means: one point is one ahead, so you flow for one more unit of time and you realign them. And then, since the flow shears slowly, these points will start shearing again, but it will take some amount of time before they shear again. So say that I look at when they become ε-sheared.

So first I wait until they are one-sheared, and then I check for how long they stay ε-close. It turns out that in these slowly shearing, slowly divergent, parabolic systems, if it took me time T, I can still hope to stay ε-close for a fixed proportion of that time; the proportion κ will depend on ε. For a given ε, I can find κ and δ such that any two points which are δ-close, apart maybe from a set of measure ε in my space, after they shear by one, will stay ε-close for a positive proportion κT of the time T it took them to shear. And this picture happens infinitely often, for arbitrarily large T.

So this is a quantitative form of shearing, and if you can prove it, you have a lot of consequences on rigidity of joinings; this is the property that Ratner proved. Now, this property is too much to hope for in flows with singularities, because sometimes your two points will run into a singularity and you will completely lose control. But if you don't see the property in the future, you might be happy to see it when you flow backward. So if you allow yourself to either flow forward or flow backward, and you see a picture like this, you have what is called the switchable Ratner property. And this is something that can be proven: they proved it for rotations, and we proved it for IETs. This property automatically allows you to upgrade from mixing to multiple mixing, and it is also key for the disjointness criterion that we developed.

And what do you need to make this quantitative picture effective? You actually need better shearing estimates than what we did; we essentially need to refine the shearing estimates, and if you are interested, I can tell you more. But you also need better Diophantine conditions. We had some bad sets that we had to throw away; for mixing, you were happy to throw these bad sets away at arbitrarily large times, but for this type of property you are only allowed to throw away some part of the space at the beginning, and once you have thrown away your initial part of the space, you need good estimates at the remaining points for all times. So you need to refine even further the Diophantine condition for mixing and get some new full-measure Diophantine conditions for having the disjointness property.

So again, this was very hard, but I hope I gave you a picture that there is a lot going on very recently, beyond what we saw, all based on this type of control of Birkhoff sums. There is a lot of new research going on, and estimates on Birkhoff sums can give you a lot of fine information on spectral properties, combined with some new techniques, okay? So I hope you got a good feeling for this area of research. I thank everybody who was here until the end, and everybody who is watching on the YouTube channel, or will watch it in the future and get to the end; they all deserve a clap, together with the audience, okay? Okay, thanks, I'm done.