translation surfaces and lots of geometric pictures. From today, I think we'll abandon the geometric aspects and move to more combinatorics and dynamics. So first I want to solve one of the exercises, the first exercise that I gave yesterday. Let me remind you that yesterday we were looking at these polygons which have opposite parallel sides glued together by translations. We said that they give us a translation surface, a surface which is locally flat, and we can talk about the linear flow: we move in a fixed direction, using the identifications, and we get a flow which moves in a straight line on the surface. Today we want to do what many people in dynamics do when they have a flow: it is often convenient to pass to a map. We saw it already last week for the square: we can take a section and induce. A section is some interval which is transverse to your flow. It could be a different one, it could be shorter; if the flow is minimal, every transverse segment will be hit sooner or later. Let me show you the solution of the exercise. I said that I want to induce my linear flow on this section over there; the picture in the homework is slightly different. So what happens? If I start from some point on my section, it depends on where the point is. If the point is in this blue area, what happens when I flow? I flow up, B is glued with B, and I come back in the blue area below. If you did the exercise, this is really just showing the solution. If you have a point in the yellow area, for example, it hits C, travels across the gluing of C, and comes back on the image of C. So these colors represent strips of straight lines which travel together and return together. If you draw this map from the section to the section, normalizing the section, let's say, to [0, 1), you see that your map does the following.
It cuts your interval into four parts, the points under A, under B, under C, under D, and each piece comes back translated into a different part of your section. So the map is an isometry on each of these intervals, but it is globally discontinuous: there are four continuity intervals and three discontinuity points. And these are the maps we will treat today. We abandon the polygons and go to the world of interval exchange maps; that's the type of map we will now define. Let me remark that if, for example, you are interested in cutting sequences, and you choose your section nicely, like here, then to understand the cutting sequence of a linear trajectory in the polygon you can reduce to studying the interval exchange map and code a trajectory of this map by its itinerary: you record whether you enter A, B, C or D, and this itinerary gives you the cutting sequence, like in the square. So in some sense cutting sequences can be reduced to a coding of this type of map. That was my motivation, and now let me start with interval exchange maps. IET stands for interval exchange transformation; actually, I think the classical name is transformation, but Yoccoz liked to write "maps", and sometimes one also writes IEM for interval exchange maps. OK, so these are bijective piecewise isometries: they are one-to-one, and they are piecewise isometries of [0, 1). I will give a more formal definition in a second, but first let me fix something: I will fix an alphabet.
So first of all, if I want an interval exchange of d intervals (in our example d was 4), it is actually nice to give names to the intervals with letters. So before I give you the definition, let me say that I will fix an alphabet of d symbols; in our example the alphabet is {A, B, C, D}. And I will call I_alpha, for alpha a letter of the alphabet, the continuity intervals of the map: so I_A, I_B, I_C and I_D. OK, let's do it like this. Definition: T from [0, 1) to [0, 1) is a d-IET, where d is the number of intervals, if there exist intervals I_alpha, alpha varying in my alphabet. Sometimes it is also nice to have names for the endpoints, so I call them u_alpha and v_alpha, and I take the intervals semi-open, closed on the left and open on the right; you could choose the other convention. The conditions are: first, [0, 1) is the union over alpha of the I_alpha, and this is a disjoint union, so it is a partition; second, T restricted to I_alpha, which maps I_alpha to its image, is a translation. So I partition my segment into intervals, and each interval is translated by a certain amount, something of the form x goes to x + delta_alpha, for some displacement delta_alpha. You want, of course, that the images don't overlap; I want a one-to-one map. So the union over alpha of T(I_alpha) is again equal to [0, 1), and this is again a disjoint union. Intervals, each moved by a translation, which together reassemble to form [0, 1). I think it's very, very clear. And let me tell you that you already know one example.
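(An aside from the editor, not from the lecture: the definition above can be turned directly into a short sketch. The function name `make_iet` and the sample lengths are invented for illustration; the two tuples record the order of the letters before and after the exchange.)

```python
# A sketch of the definition: a d-IET is determined by the interval lengths
# (a dict letter -> length, summing to 1) and by the two orders of the
# letters, before ("top") and after ("bottom") the exchange.
def make_iet(lengths, top, bottom):
    u, pos = {}, 0.0                    # left endpoints u_alpha (top order)
    for a in top:
        u[a], pos = pos, pos + lengths[a]
    v, pos = {}, 0.0                    # left endpoints of the images (bottom order)
    for a in bottom:
        v[a], pos = pos, pos + lengths[a]

    def T(x):
        # On I_alpha = [u_alpha, u_alpha + lambda_alpha), T is the
        # translation x -> x + delta_alpha with delta_alpha = v_alpha - u_alpha.
        for a in top:
            if u[a] <= x < u[a] + lengths[a]:
                return x + v[a] - u[a]
        raise ValueError("x must lie in [0, 1)")

    return T

# The 4-IET from the polygon example, (A B C D) over (D C B A),
# with made-up lengths adding up to 1.
T = make_iet({"A": 0.2, "B": 0.3, "C": 0.1, "D": 0.4},
             ("A", "B", "C", "D"), ("D", "C", "B", "A"))
```

For instance, the piece I_A = [0, 0.2) is translated to the right end: T(0.1) lands at 0.9.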
So if you take a 2-IET: 2-IETs are rotations. We already saw last week that I can cut open my circle, and a rotation actually looks like an exchange of two intervals, of lengths 1 minus alpha and alpha. (Yes, 1 minus alpha, thank you. M. is very good at spotting typos; I'm sure many other people are good at spotting typos, but they don't shout out. You have to become as brave as Amy, who always asks when she doesn't understand, or when she wants people to understand.) And let me make a remark, which will be an exercise. The first mistake people make with interval exchanges is to think: oh, but these maps are periodic, because I permute my intervals and then they come back to be the same. No: if the lengths are irrational, certainly not. When you want to take iterates of an IET, you have to be careful that an interval moves, and then might be broken up again at the next iterate. So let me urge you to do this exercise: T^n, that is T composed with itself n times for T an IET, is also an IET, but in general with more intervals; typically the number of intervals is at most (d - 1)n + 1, and I will ask you to verify this count. And let me give you a hint for this exercise. As you take iterates, you chop your intervals into smaller and smaller bits; you really should draw an interval exchange, draw its square and draw its cube to convince yourself. And you will need the following remark about a subinterval J = [a, b) contained in [0, 1); but first, I forgot something, so let me do it now. What are the discontinuity points of my interval exchange? The discontinuity points of T are the left endpoints of the intervals: the set of the u_alpha, alpha in the alphabet, minus {0}.
The u_alpha were my left endpoints; so all left endpoints are break points, except 0. These are the discontinuities of T. Now my remark is: if J, T(J), ..., T^{n-1}(J) all don't intersect the discontinuity set, then T^n restricted to J is continuous. Points travel together unless they land on a break point, where they are cut and split. So the hint, when you want to compute the iterates: consider pre-images of the discontinuities, to find the continuity intervals of T^n. And now I need a little bit more notation. Are you OK so far? To give you a d-IET, I need to tell you two things: the lengths of the intervals, and how I permute them. So to give an IET, you need two data. First, the length datum. The lengths form a vector, a collection (lambda_alpha), where lambda_alpha is the length of the interval I_alpha, and you want this vector to have positive entries which add up to 1. So lambda belongs to the simplex Delta_d: the set of vectors (lambda_alpha) in R_+^d whose entries are positive and whose sum adds up to 1. You just want positive numbers which add up to 1. For example, if you have three intervals, with lengths lambda_A, lambda_B, lambda_C, you want lambda_A + lambda_B + lambda_C = 1, so you are looking at a face of a simplex in R^3; that's what this means. And then you need a permutation of the alphabet. I don't want to set up too much notation, because my time is short, so let me just say I will record the permutation like this.
I will write two rows: in one, I write the letters of the intervals in their order on top, before the exchange, and below I write the letters in their order after the exchange. For example, (A B C D) over (D C B A). I use letters; it's nice that these letters are names for the intervals. They don't have to be in the order A, B, C, D: they could be in any order in the first row and in any order in the second row. And now let me tell you something. When interval exchanges were first studied, people would just write 1, 2, 3, 4 goes to 4, 3, 2, 1. It was actually Yoccoz, the Fields Medalist who also proved lots of things about circle diffeomorphisms, who said: no, no, let's give names to the intervals, and write rows of names that can be scrambled. And it is really nice to have names for intervals, because when we do induction, as we will soon, you want to follow what a single interval does. If you like cutting sequences, for example, you may want to follow what happens to the interval A, and A might move around. So it's nice to use this notation. And now, OK. I want to make an assumption, a standing assumption on the permutation: the permutation is irreducible. I'll tell you what this means; let me give you first a non-example. I don't want something like this: (A B C D) over (C B A D). If I have a permutation where there is a block like this, an initial block of letters which are permuted among themselves (here {A, B, C}), I don't want it. So irreducible means: no initial block of letters, of length less than d, permuted among themselves; unless, of course, it's the whole row. I'm not very formal, but hopefully you understand what I mean. Why don't I want that?
Well, independently of the lengths, if I have a situation like this, I can cut my interval into two parts: the first three letters play by themselves, and I can reduce my interval exchange to smaller ones. It's clear that if I want to say something non-trivial, I should exclude this. [A question from the audience about common discontinuities.] No, that's something different: even if the permutation genuinely mixes the letters, you can still have what is called a connection, a common discontinuity between the two partitions. For example, and it's a very good exercise if you want to try to prove it, you can take two rotations which kind of swap sides; this interval exchange you can prove is minimal, so orbits are dense, so it's not trivial in that sense. By reducible I mean something else: the first letters play with each other, and the remaining letters play with each other. I drew a block of three, but you could have the block {A}, or you could have {A, B}: any initial block which is not everything and which is permuted among itself is what I want to exclude. So what did I want to say? Let me give the definition of when I will say "almost every IET". There is a space of IETs, and "almost every d-IET" means: for any irreducible pi, and for Lebesgue-almost every length vector in the simplex. So what does Lebesgue mean here? There is the Lebesgue measure on R^d, and it induces a Lebesgue measure on the hypersurface where the total length is equal to 1. You put this measure on the hypersurface, and "almost every" means almost every choice of lengths which add up to 1. OK, so first, there's one thing I want to tell you but not prove. This is a theorem.
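(Editor's aside, not from the lecture: the irreducibility condition just discussed can be checked mechanically. The permutation is reducible exactly when some proper initial block of the top row uses the same set of letters as the corresponding initial block of the bottom row; the helper name below is invented.)

```python
def is_irreducible(top, bottom):
    """True if no proper initial block of letters is permuted among itself,
    i.e. top[:k] and bottom[:k] never consist of the same letters for k < d."""
    d = len(top)
    return all(set(top[:k]) != set(bottom[:k]) for k in range(1, d))

# The reducible non-example from the lecture: the block {A, B, C} plays by itself.
print(is_irreducible(("A", "B", "C", "D"), ("C", "B", "A", "D")))  # False
# The reversal permutation of the running example is irreducible.
print(is_irreducible(("A", "B", "C", "D"), ("D", "C", "B", "A")))  # True
```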
You can ask, for example, as we did for rotations: when will orbits be dense? So when will my IET be minimal? And I will give you a result by Keane; it's actually not the strongest form, but it's OK. Theorem 1 (Keane): if pi is irreducible (from now on, even when I don't say it, pi is always irreducible) and the lengths are rationally independent, then T is minimal, which means that for every point in [0, 1), the forward orbit under the interval exchange is dense. If you want, as a corollary: almost every IET is minimal. I don't want to go into this, because I want to go towards other ergodic properties, but this theorem is actually quite elementary to prove, and I can give you a reference: there are some lecture notes of Yoccoz; the proof is about half a page and not too difficult. But I think it's nicer to think about it in terms of flows on surfaces. On the surface you can better see what happens: there are some strips which cannot come back, which cannot close up. I think it's nicer geometrically, and it's not very enlightening to see it at the interval exchange level. Now I want to state two more results about interval exchanges, of which I want to give you a flavor: one I will prove, and for one I will give you a taste of the techniques. Say this is theorem 2. Remember what we did for rotations: we knew, by Weyl's theorem, that they are uniquely ergodic. So it's natural to ask whether these interval exchanges will also be uniquely ergodic. This is a really big result, from the early 80s: it was proved by Veech and, independently, by Masur, I think in two papers in the Annals, back to back, and it was a conjecture of Keane. Theorem 2 (Masur, Veech; Keane's conjecture): almost every IET ("almost every" always in the sense above) is uniquely ergodic.
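(Editor's aside, not from the lecture: unique ergodicity, defined just below, forces Birkhoff averages along every orbit to converge to the Lebesgue average. Here is a minimal numerical illustration in the simplest case, a 2-IET, i.e. a rotation; the choice alpha = sqrt(2) - 1 and the test set [0, 0.3) are made up.)

```python
import math

# For a uniquely ergodic map, the fraction of time an orbit spends in a set
# approaches the Lebesgue measure of the set.  T is the 2-IET exchanging
# intervals of lengths 1 - alpha and alpha, i.e. x -> x + alpha (mod 1).
alpha = math.sqrt(2) - 1

def T(x):
    return (x + alpha) % 1.0

x, hits, N = 0.1, 0, 200_000
for _ in range(N):
    if x < 0.3:          # indicator of the interval [0, 0.3)
        hits += 1
    x = T(x)

print(hits / N)          # close to Leb([0, 0.3)) = 0.3
```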
So we did unique ergodicity last week; we did a lot of ergodicity. Uniquely ergodic means that there exists a unique invariant probability measure. What is the obvious invariant measure for an IET, as for a rotation? Lebesgue, yes. IETs, maybe I should have said it at the beginning, are piecewise isometries, so they preserve Lebesgue measure; you should try to check this. So unique ergodicity says that Lebesgue is the only invariant measure. I don't think we proved it, but we did say that once you are uniquely ergodic, you are automatically ergodic with respect to that unique invariant measure. OK. And I will state yet another theorem, theorem 3. This is also quite an old result, by Katok, also in the 80s, '82 maybe, I don't remember; let's say the 80s. IETs are not mixing. And this is actually not an almost-every statement: IETs are never mixing. No IET is mixing; not almost every, but any. So far, it might look as though interval exchanges really just generalize rotations: rotations also are minimal, uniquely ergodic, never mixing. Actually, IETs are quite different from rotations. Maybe I will just put this as a remark, as an aside, because I don't think we defined weak mixing, but some of you might have heard of it; it's a property in between mixing and ergodicity. It is also true that almost every IET is weakly mixing, which rotations are not. This is a much more recent result, by Artur Avila and Giovanni Forni, from the 2000s. Avila won the Fields Medal, and this is one of the works enumerated for his Fields Medal. And let me also say: there exist IETs which are minimal but not uniquely ergodic. This is not the case for rotations; for rotations we have a dichotomy.
Remember, in lecture one, or maybe lecture two, we proved: either you are periodic, or you are uniquely ergodic. So essentially, as soon as you are minimal, you are also uniquely ergodic. These minimal, non-uniquely-ergodic IETs are rare, they have measure zero, but they still exist. So why did I put these results on the board all together? Because the proofs, apart from Keane's, of theorem 2, of theorem 3, and of these other two results all use renormalization. Well, at least Veech's proof of theorem 2 does; Masur has a more geometric proof. So the main tool is renormalization; let me write it like this: Rauzy–Veech renormalization. And that's the tool I want to explain. Today, and maybe some part of tomorrow, we will set up this renormalization procedure. And then, I promise, I can prove that IETs are not mixing, and maybe say something about unique ergodicity in a baby case, in a special case. There are many more results that you can prove with it; maybe I can say something about deviations of ergodic averages. It is used a lot, even in current results, and it is one of the tools that I have used the most in my own research. Sometimes I don't study linear flows on translation surfaces, but smooth area-preserving flows on surfaces; the Poincaré sections of smooth area-preserving flows on surfaces, in suitable coordinates, are also interval exchange maps, and you can still study a lot of ergodic theory for these area-preserving flows using this tool for IETs. So IETs don't only appear in linear flows; they also appear, more generally, in smooth area-preserving flows on surfaces. OK. I want to make another remark, or maybe a recall, for who was here last week. If I give you a subinterval J contained in [0, 1), I can define T_J, which I will call the induced map of T on J.
So maybe I should say that from now on I always assume pi irreducible, and from now on I will also assume that the lengths are rationally independent, so that I have minimality. OK, so I can define the induced map, also called the first return map. And let me recall it for those who weren't here: T_J is a map from J to J, and T_J(x) = T^{r(x)}(x). What do I want to do? I have a small interval (we should draw the picture first), and I take a point in it. It maybe maps out of my interval under my interval exchange; but my orbit is dense, so at some point I come back. So I want to look at the first return time, the first time I come back, and accelerate my map. (Should I write it again over there? Yes, good point. I did return maps last week, but there are many new people who maybe are less familiar with dynamical systems.) So T_J is a power of T: T_J(x) = T^{r_J(x)}(x), where the return time r_J(x) is the minimum integer n greater than or equal to 1 such that T^n(x) belongs to J. And of course x was in J; T_J is defined on J. So I take a point in the small interval and iterate my map until I'm back in J. And now I have another exercise for you. I really want you to draw some pictures of IETs and play with them a little; otherwise it's not possible to follow. This is why exercises are so important: you need to sit down and do things yourself to really understand. Exercise: for any subinterval J of this form, T_J is again an IET. Maybe you would believe this.
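(Editor's aside, not from the lecture: the first-return map just defined can be computed by brute-force iteration. The helper name `first_return` is invented, and the rotation below is just a convenient choice of a minimal T on which to try it.)

```python
import math

def first_return(T, b, x, max_iter=10**6):
    """First-return map to J = [0, b): returns (T_J(x), r_J(x)) for x in J,
    iterating T until the orbit lands back in J."""
    assert 0 <= x < b
    y, r = T(x), 1
    while y >= b:                  # not yet back in J: keep iterating
        y, r = T(y), r + 1
        if r > max_iter:           # safety net; never reached for minimal T
            raise RuntimeError("no return found")
    return y, r

# Example: induce the rotation by alpha = sqrt(2) - 1 on J = [0, 1/2).
alpha = math.sqrt(2) - 1
T = lambda x: (x + alpha) % 1.0
print(first_return(T, 0.5, 0.05))   # returns after 1 step
print(first_return(T, 0.5, 0.2))    # needs 2 steps
```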
It's an interval exchange map of at most d + 2 intervals. For powers the number of intervals was growing, but here it can only be at most d + 2. And you can try to convince yourself where the extra discontinuities come from when inducing: they are related to the endpoints of J. When you come back, you may be unlucky: at some point your interval comes back and hits an endpoint of J, and then there is an additional break point which doesn't come from the discontinuities of T but from the endpoints. And do you remember, if you were here last week, I asked you to do an exercise: if you take a rotation and induce it on an arbitrary interval, the induced map is an interval exchange of three intervals. You see that in that case you can get one more: you start from a 2-IET and you get a 3-IET. That exercise was preparatory for this. But we also saw last week that when we induced a rotation on carefully chosen intervals, those nice arcs, we actually got a 2-IET again; we got rotations again. So in general you can get d, d + 1 or d + 2 intervals, but if you choose your interval carefully, you can get d. And this is exactly what I want to do. So now I want an induction algorithm which does the following: it produces a sequence I_n, n in N, of nested intervals, I_{n+1} contained in I_n. And they will actually look like this: I_0 will be [0, 1), and then, whereas with the rotation we were chopping from the right and from the left alternately, here we will keep 0 fixed and just shorten our interval from the right. So I_1 is contained in I_0, with 0 as a common endpoint, and so on down to I_n: they will shrink towards 0 from the right. So I will induce on intervals around 0, shrinking from the right.
And these intervals will be chosen so that, by definition, T^(n), which I will write for T_{I_n}, the induced map on I_n, is a d-IET for every n. So I want to choose my intervals smartly so that I again get an IET of exactly d intervals. In some sense, if you were here last week, you remember what we did with the rotation: we were choosing our intervals using closest returns, sometimes looking at the next closest return. They were shrinking to zero, and those were exactly the intervals such that, when inducing, I got a 2-IET, a rotation, at every step; they were all special intervals where inducing gives a 2-IET and not a 3-IET. This algorithm is made to produce exactly those special intervals where you get a d-IET and not a (d + 2)-IET. And I want to do one step today. So set I_0 = [0, 1) and T^(0) = T, and suppose everything is defined at stage n; I want to define stage n + 1. One step. This is what we do. Here is I_n, and here is T^(n), some interval exchange map. When you draw it, draw the last interval very large: I'm going to put A, B, C and a very long D on top, and then D, C, B, A on the bottom, for example; you could have other combinatorics. I will also put up the picture that I have here in the slides, so if you don't want to draw pictures (and maybe it's a good idea now to listen without drawing), you can just look at the picture. So what do I want to do? This algorithm has two cases, and the cases depend on the last interval.
So I want to compare the last interval in my interval partition before the exchange with the last interval after the exchange, and let's give them names: alpha_top and alpha_bottom, or alpha_t and alpha_b. So alpha_t, t for top, is such that I_{alpha_t} is last before the exchange: in my example, alpha_top is D, the last letter is D. And alpha_b, b for bottom, is such that T(I_{alpha_b}) is last after the exchange: so here alpha_bottom is A. Maybe my letters are not so good, but t is the last letter in the top row and b is the last letter in the bottom row. So I look at my intervals after the interval exchange, at their images, and alpha_b labels the interval which, after applying T, is last; look at the picture, A will become last. Another way to say it: T(I_{alpha_b}) contains the endpoint 1. Does it make sense? Is it clear to everyone? Look at this picture: I compare D above with A below, and look which of the two is longer. So you look at the two intervals, you look which one is larger, and then there are two cases. The top case is when the top interval is bigger: lambda_{alpha_t} is longer than lambda_{alpha_b}, and in this case you say that alpha_top is the winner. The winner is the bigger one, yes; in my picture, D wins over A because it's longer. Otherwise, the bottom case: if lambda_{alpha_t} is less than lambda_{alpha_b}, then alpha_bottom is the winner, and the other one is the loser. And then you want to induce; you want to define the next interval. So what do I want to do now? I want to cut off the loser, cut off the shorter one: I want to make my interval shorter by the length of the smaller interval. So let's do it in an example, say the top case. I want to set I_{n+1} to be; maybe let me write it like this.
It's the interval from 0 to the length of I_n minus the length of the loser, which is alpha_bottom: in my picture, I chop off the smaller interval. And I set T^(n+1) to be the induced map. So let's do it in the example, and maybe I'll use the picture above. Now my interval is shorter, so this last part of the space doesn't exist anymore, and I have to look at the first return map to the smaller interval. So what happens? If I look at B, B is back in my space in one iterate: if I move B, I'm back. If I move C, I'm back. If I move the beginning of D, the beginning of D is immediately back. What happens to A? If I apply my interval exchange, A moves out of my space, so I need to apply T another time to come back; and A travels together with D when I apply T again. So in the return map, A goes out and comes back with D, landing at the end of the image of D; I fit it in here. And I really urge you, there is no other way to learn this: stare at this picture and redo it on a piece of paper. So now I have a new interval exchange, and if you want, I can record the new lengths and the new combinatorial data. The permutation has changed: before, (A B C D) went to (D C B A); now (A B C D) goes to (D A C B). There is a combinatorial description of how permutations change, but I don't want to explain it; there is an algorithmic way of understanding it. What I do want to say is that I can keep track of how the lengths change, of the new length vector. Three lengths of the new IET are the same.
A, B and C have the same lengths that they had before, but D has changed. And how has D changed? The old D is now the sum of the new D and the new A, OK? So all the other letters stay the same, and you can record this with a matrix; actually I want a non-negative matrix. So I write the old lengths as a matrix times the new lengths, and you can verify that this matrix is the identity plus an extra entry off the diagonal, in the position given by the winner and the loser. So you have a matrix with ones and zeros which tells you how the lengths change. And I leave you the slide for the other case, the bottom case, which is similar: there the bottom interval is longer, so again I need to cut off the shorter interval, and again I need to induce. On this one maybe you can reflect a little bit more: B is back, C is back immediately, the beginning of A is back immediately, but the end of A goes out, so I need to iterate again; the end of A follows D and is back here. This one is a little bit more complicated, so if you want to really understand, try to look at this picture and redo it yourselves. You have, like, an exercise with its solution. And again you can record the new permutation and the new lengths, and again the old lengths, as a function of the new lengths, are given by a matrix with ones on the diagonal and one extra entry off the diagonal. So I have finished what I wanted to tell you for today. I don't want to go into much more detail of the algorithm, but if you try to redo this exercise, you will play with interval exchanges and with inducing an interval exchange on a subinterval, OK? And then tomorrow I'll try to tell you what you can do with this algorithm and how you can use it.
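(Editor's aside, not from the lecture: the single induction step described above can be summarized in code. The function name is invented; the combinatorial update implements the rule visible in the example, where the loser is re-inserted immediately after the winner in the other row, and the matrix records old lengths = M times new lengths. Rational independence of the lengths excludes ties.)

```python
def rauzy_step(lengths, top, bottom):
    """One step of Rauzy-Veech induction.  Returns (new_lengths, new_top,
    new_bottom, winner, loser).  Only the winner's length changes: it loses
    the loser's length.  (The new lengths sum to |I_{n+1}| < 1; no
    normalization is done here.)"""
    at, ab = top[-1], bottom[-1]            # alpha_top, alpha_bottom
    new = dict(lengths)
    if lengths[at] > lengths[ab]:           # top case: alpha_top wins
        winner, loser = at, ab
        new[at] = lengths[at] - lengths[ab]
        row = [a for a in bottom if a != ab]
        row.insert(row.index(at) + 1, ab)   # loser re-inserted after winner
        bottom = tuple(row)
    else:                                   # bottom case: alpha_bottom wins
        winner, loser = ab, at
        new[ab] = lengths[ab] - lengths[at]
        row = [a for a in top if a != at]
        row.insert(row.index(ab) + 1, at)
        top = tuple(row)
    return new, top, bottom, winner, loser

# The lecture's top-case example: (A B C D / D C B A) with D the longest.
lengths = {"A": 0.2, "B": 0.3, "C": 0.1, "D": 0.4}
new, top, bottom, w, l = rauzy_step(lengths, ("A", "B", "C", "D"),
                                    ("D", "C", "B", "A"))
print(bottom)            # ('D', 'A', 'C', 'B'), as in the lecture
print(new["D"])          # old D minus old A

# The matrix relation: old lengths = M @ new lengths, with M the identity
# plus a single off-diagonal 1 in position (winner, loser).
letters = ("A", "B", "C", "D")
M = [[(1 if i == j else 0) + (1 if (a == w and b == l) else 0)
      for j, b in enumerate(letters)]
     for i, a in enumerate(letters)]
old_again = [sum(M[i][j] * new[letters[j]] for j in range(4)) for i in range(4)]
print(old_again)         # recovers the old lengths [0.2, 0.3, 0.1, 0.4]
```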