Is it on? Okay, maybe we'll start slowly. First of all, I wanted to have a little digression. I wanted to show a picture, because of Amy's question yesterday: I mentioned that there are interval exchange maps which are minimal but not uniquely ergodic. That means there are interval exchange maps whose orbits are dense but not uniformly distributed. For a rotation, if an orbit is dense, it is automatically also uniformly distributed, as we saw last week. Now, this is not a picture of an interval exchange orbit; it's a picture of a billiard. What is a billiard? Maybe I need my pointer. You have a rectangle, and this rectangle has a slit, a barrier in the middle, and you look at the trajectory of a point which moves in a straight line; when it hits the boundary, it reflects. This is just a plot of one trajectory, and I hope that visually you can believe that this trajectory is dense — it goes into all parts of the table — but it does not equidistribute. You see there are some areas which are dark gray and some areas which are white. These are areas where the trajectory spends more time, and areas where it spends less time. So just visually, I think you can believe that this trajectory does not fill the table uniformly with respect to Lebesgue measure. And this trajectory is built using interval exchanges which are minimal but not uniquely ergodic. There is a nice way, which is out of the scope of these lectures, to reduce billiards in polygons to linear flows on translation surfaces: you do something called unfolding, get a translation surface, take a Poincaré section, and these trajectories are related to IETs. So minimal, not uniquely ergodic IETs are behind this picture. This was really a digression. Okay, so now let's go back to our Rauzy-Veech induction.
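Before the induction, it may help to fix in concrete terms what an interval exchange map is. This is a minimal sketch in my own conventions (0-indexed permutation, exact rational lengths) and not code from the lecture:

```python
from fractions import Fraction

def make_iet(perm, lengths):
    """Interval exchange map of [0, 1): cut [0, 1) into d intervals with
    the given lengths and reorder them so that piece i ends up in
    position perm[i] (0-indexed); pieces are translated, never flipped."""
    d = len(lengths)
    assert sum(lengths) == 1
    # left endpoints of the pieces before the exchange
    left = [sum(lengths[:i]) for i in range(d)]
    # left endpoints of the pieces after the exchange
    new_left = [sum(lengths[j] for j in range(d) if perm[j] < perm[i])
                for i in range(d)]
    def T(x):
        for i in range(d):
            if left[i] <= x < left[i] + lengths[i]:
                return x - left[i] + new_left[i]
        raise ValueError("x must lie in [0, 1)")
    return T

# swapping two intervals of lengths (alpha, 1 - alpha) is just the
# rotation x -> x + (1 - alpha) mod 1
alpha = Fraction(2, 7)
T = make_iet([1, 0], [alpha, 1 - alpha])
```

With a rational length like 2/7 every orbit is periodic; the standing assumption of rationally independent lengths is exactly what rules this out.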
I put up this slide just as a reminder. I don't want to go any more into the details of this algorithm, but let me repeat what we have. So, Rauzy-Veech induction. We start from an interval exchange which is given by a permutation and a vector of lengths. There are two standing assumptions: we assume that the permutation is irreducible, and we assume that the lengths of the exchanged intervals are rationally independent. This is somehow our irrationality assumption, and Keane's theorem tells us that under this condition orbits are dense — minimality. And maybe let me make two remarks that I didn't make yesterday, but which some people asked me about in the questions. Actually, before the remarks, let me add some more. So I have this minimality, and then the algorithm gives me a sequence of nested, shrinking intervals, and it gives me a sequence of induced maps which are all IETs. A step of this algorithm is what we did yesterday: you look at the shorter of the two intervals at the end, you cut it and induce, and then you continue doing this. Maybe I have another visual picture of the algorithm; let's put this up. You start from your IET, and then you build smaller and smaller intervals by this procedure of cutting — they are nested and shrinking — and you look at the induced maps on the smaller intervals, which are all interval exchange maps. This is the picture you should have in mind. And now let me make the remarks. The assumption (*) of irrationality actually implies two things. First, it implies that the algorithm never stops: the algorithm is well defined. When could it stop? You remember we were comparing the last two intervals: the length of the last interval on top and the length of the last interval on the bottom. If I had equality — maybe I'll go back again.
If I had equality between the last two intervals, I could not decide which one is shorter. But equality never occurs — maybe I should write it: at no stage n is there ambiguity. There is always a strictly longer and a strictly shorter interval, so I can always continue. I think I gave this as an exercise yesterday: if you have rationally independent lengths, you cannot have connections, that is, orbits of discontinuities which intersect, and this really tells you that there are never equal intervals in the process. And the second thing, which someone correctly pointed out, is that irrationality also gives me that the size of these intervals goes to zero: they shrink to zero. That's going to be important today. I also want to make a philosophical remark. If you were here last week, this will sound very familiar to what I said last Friday. We have this induction procedure where I look at smaller and smaller pieces and induce my IET on the small piece. Associated to this induction procedure, there is a renormalization: a renormalization map. Given T, I define R^n(T), and this renormalization R goes from d-IETs to d-IETs. What do I do? I'll write it in words: R^n(T) is the induced map T_n, rescaled to unit length. The picture is that you have an IET, you go to a small scale and you have an induced map, and then you just zoom back. Maybe let me draw it a little higher. You have this small interval I_n and you have the induced map on it, and you just make your interval have length one again: you open it up to length one. You take the same picture and you multiply by one over the length of I_n, to make it length one. And there you will see R^n(T).
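For two intervals, this cut-and-induce step can be written down explicitly. The following is my own sketch, not code from the lecture: one step subtracts the shorter length from the longer one; recording the step as a matrix expresses the old lengths as a nonnegative matrix times the new ones, and doing all consecutive steps of the same type at once is a step of the Euclidean algorithm, which acts on the ratio shorter/longer as the Gauss map.

```python
import math
from fractions import Fraction

def induction_step(lam):
    """One cut-and-induce step for two intervals: subtract the shorter
    length from the longer one.  Also return the matrix A satisfying
    old_lengths = A @ new_lengths (written out by hand)."""
    a, b = lam
    if a > b:
        return (a - b, b), [[1, 1], [0, 1]]   # a = (a-b) + b,  b = b
    return (a, b - a), [[1, 0], [1, 1]]       # a = a,  b = a + (b-a)

def gauss_map(r):
    """Accelerated induction on the ratio r = shorter/longer: doing all
    consecutive subtractions of one type at once is a step of the
    Euclidean algorithm, i.e. the Gauss map r -> 1/r mod 1."""
    return 1.0 / r - math.floor(1.0 / r)

lam = (Fraction(7, 10), Fraction(3, 10))
new, A = induction_step(lam)
# the matrix bookkeeping recovers the old lengths from the new ones
old = tuple(A[i][0] * new[0] + A[i][1] * new[1] for i in range(2))
```

The golden ratio minus one is a fixed point of the Gauss map, which is the simplest example of a length vector that is self-similar under this induction.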
So every time you have an induction procedure where the induced object is of the same nature as the original object — this time it's an IET — I can just rescale it to length one and get a map, called the renormalization, from the space of IETs to the space of IETs. And it's really like last week: last week we had induced rotations, we were rescaling them, and we had a renormalization on rotations. Actually, if you do this procedure for a rotation, when d is equal to two, you will not get exactly the same thing as last week; you will get a slow version of what we were doing last week. I will give you an exercise: if you try Rauzy induction on two intervals, you will get something which behaves like the Farey map. And if you accelerate — for example, doing all consecutive steps of the same type together — then you will get exactly the Gauss map. So there is a relation: you can think of last week's example as a special case of an acceleration of this induction. And let me say again, before we do a real example, what the philosophy is. The philosophy is always that you can study this renormalization R as a dynamical system, and the properties of the orbit of your IET under R give you information about the dynamics and the properties of T itself. We will see an example today, but I wanted to give you the picture before we go into it. These are the same type of features we saw last week with the Gauss map. So, I would like to do some examples of how this renormalization is useful, but I need one more ingredient, and the next ingredient is towers — the tower representation. Again, if you were here last week, you will recognize something very similar to what we did then; I really did towers for the rotation in preparation for this picture. Okay, so let's look at the induced map and at one induced interval. So given I_n and α, I have one of the exchanged intervals, I_n,α.
What happens if I apply T, my interval exchange map, to I_n,α? My interval exchange map will move my interval out of I_n for quite some time; it will move around, and at some point it will come back — it will come back by the induced map. So let me write it like this: let h_n(α) be the first return time, that is, the minimum k ≥ 1 — and notice I'm applying T — such that T^k(I_n,α) is contained in I_n. So again, applying T, I'm doing the Poincaré map: I don't immediately come back; it takes me some time to come back. And in this setup, the induced map T_n restricted to I_n,α is indeed equal to T^{h_n(α)}, okay? The induced map is really T to the power of the first return time. Now I want to represent the images of my interval, up to the first return, as a tower. So let me say: let H_n,α be the union of T^k(I_n,α) for k going from 0 to h_n(α) − 1, okay? I'm looking at all the images up to the time they come back, and this I call a tower: the tower over I_n,α. And this is a disjoint union — convince yourselves, by the definition of first return, that these sets are disjoint. They are just all these images, but I want to draw them as a tower: I want to draw them stacked up on top of my interval. So this is my letter α, the base I_n,α, and I want to draw the floors of a tower of height h_n(α), stacked up. Okay, what's going on? If you were here last week, we did exactly the same for the rotation. This picture has no meaning other than as a graphical representation: I'm plotting these floors as a tower, and I'm going to use this picture to give a representation of the original dynamics in terms of the induced map.
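The first return time is easy to compute by brute force. Here is a hedged sketch of my own, with the rotation standing in for the simplest IET: for the rotation by 2/7 induced on the base [0, 1/7), every base point returns after exactly 7 steps, so height times base measure is 7 × 1/7 = 1 — one tower filling the whole space, consistent with Kac's formula.

```python
from fractions import Fraction

def rotation(alpha):
    """Rotation by alpha on [0, 1): the 2-interval exchange map."""
    def T(x):
        y = x + alpha
        return y - 1 if y >= 1 else y
    return T

def first_return_time(T, x, left, right, max_iter=10**6):
    """Smallest k >= 1 with T^k(x) in [left, right): the first return
    time of x to the base; it is constant on each subinterval I_n,alpha."""
    y, k = T(x), 1
    while not (left <= y < right):
        y, k = T(y), k + 1
        if k > max_iter:
            raise RuntimeError("no return within max_iter iterates")
    return k

T = rotation(Fraction(2, 7))
# return time of a generic base point to the base [0, 1/7)
h = first_return_time(T, Fraction(1, 20), Fraction(0), Fraction(1, 7))
```

Exact rational arithmetic via Fraction avoids the floating-point boundary issues that would otherwise make the half-open comparisons unreliable.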
So maybe let me remark that if you take the union of the towers, it is equal to everything — the whole space. This is an exercise, something we discussed last week; in general you need some assumption like the one I discussed last week, but in this case it's true: the union of the towers H_n,α over the letters α — one tower per letter — is the whole space. Let me now draw the picture of the whole space. This is I_n; I have intervals of maybe different sizes, and for each interval I draw the tower over it. And I'm going to draw it like this to understand something. Then I have another tower over this interval — and you can put them at distance one, but it doesn't matter, it's just a graphical plot — and one more. So each color is a tower; each is a tower H_n,α of this form. The base is I_n,α and the height is the return time h_n(α) — if you include the base, there are h_n(α) floors. We saw this picture for the rotation, and we saw that we can read a lot of properties of the dynamics from it. But let me first stress again what this picture means. In this picture, how does T act? T moves each floor up by one: by definition, each floor is the image of the previous one. And what happens when I get to the top of the tower? I'm defining the base to carry the induced map T_n. Now I'm saying that this picture of stacked towers represents the whole space, and on the whole space I have the action of T. How does it look? If I take a point in some floor, it goes to the point above it, by definition of how I represented the stacking. And when I get to the top — everybody who was here and did some exercises last week knows this — when I get to the top, T moves me back to the base, according to the induced map.
So at the top floor, T moves back to the base, by T_n. I moved to the side to make some space. Okay, so this picture, you should recognize, is very similar to what we had for rotations. And I claim that a lot of properties of interval exchange maps can really only be proven, I think, using this picture. I would like to give you two examples now: two applications of how this renormalization is useful and how this induction and these towers are useful. But first, it's a good moment to stop. Do you all feel you understand what's going on? Is this tower picture clear? Because last week, for the rotation, the towers weren't clear to everybody at the beginning. Hopefully you've had time now to understand, but if you have a question, stop me before I go on. Is it clear what I'm doing? I'm inducing, and representing the whole map as towers over the induced map. It's actually a very standard procedure in dynamics. I'm doing it in this very special case, but inducing and representing the space as towers is used also in hyperbolic dynamical systems; it's used in many other setups. And these towers are related to finite rank dynamical systems and cutting and stacking — again, you can see the algorithm as cutting and stacking of towers, like we did for the rotation. So it's also quite a general concept in dynamics. We are doing it in this special case, but you might encounter it again if you continue working in dynamics. Okay, and now application one. I want to prove that no IET is mixing. As I said, this is a result by Katok. If I ask you to prove that a rotation is not mixing, I think you all did it last week; it's quite easy to see that a rotation doesn't mix: I take a set, I move it around, and it will miss some other set infinitely often. But if I ask you to show that an IET is not mixing, I personally don't know how to see it directly.
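As an aside, the easy rotation case can even be checked numerically. This is my own illustration, not from the lecture: for the golden rotation and A = [0, 0.1), along denominators of the continued fraction the orbit of A comes back almost on top of itself, so Leb(A ∩ T^n A) stays close to Leb(A) = 0.1 along that subsequence instead of tending to Leb(A)² = 0.01, which mixing would require.

```python
import math

def circle_overlap(a_len, shift):
    """Lebesgue measure of [0, a_len) ∩ ([shift, shift + a_len) mod 1)."""
    s = shift % 1.0
    total = 0.0
    # the shifted interval may wrap around 1, so check both representatives
    for lo in (s, s - 1.0):
        total += max(0.0, min(lo + a_len, a_len) - max(lo, 0.0))
    return total

alpha = (math.sqrt(5) - 1) / 2   # golden rotation
a_len = 0.1                      # A = [0, 0.1), so Leb(A)^2 = 0.01
n = 987                          # a Fibonacci denominator: n*alpha mod 1 is tiny
overlap = circle_overlap(a_len, n * alpha)
# overlap is close to 0.1, far above the mixing prediction 0.01
```

For an IET with more intervals there is no such explicit formula, which is exactly why the tower picture below is needed.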
I don't know how to see it from the dynamics directly, unless you think of the IET in terms of towers. On the towers, I think you see it very well, and it's really the fact that there are finitely many towers which is key to this property. So let me try to sketch it. Take a set A — I want to find a set witnessing non-mixing with respect to Lebesgue measure — with Lebesgue measure of A less than 1/(d(d+2)), where d is the number of exchanged intervals. Why I choose it like this will become clear at the end. Now, these towers, we said, are getting thinner and thinner, because the sizes of the bases shrink to zero, and they cover the whole space. So they give you a partition of the space into floors, and the mesh goes to zero. So, by a density or approximation argument — Lebesgue density points, or just approximating measurable sets — let me say that when n is large, I can assume that A is almost, up to a small error, a union of floors of the towers H_n,α. I have my towers and I can think of A as a union of full floors. It is not exactly, it's almost; but let me ignore this detail — there will be an epsilon of error, and I leave you to fill in the details if you want. So let's say that A is a union of floors, up to epsilon. Then, by the pigeonhole principle, one tower contains a definite proportion of the mass: there exists α such that the measure of A ∩ H_n,α is at least (1/d) times the measure of A. There are d towers, so one of them has to contain at least 1/d of the mass, okay? We fix this α, and now we look at the induced map of the original interval exchange on I_n,α, the base of this tower.
I have a tower which contains a definite proportion of A, and I look at its base, and I induce my interval exchange on this base. Some of you might have done the exercise: if I induce an interval exchange of d intervals on a subinterval, I get an interval exchange of at most d + 2 intervals, right? So the induced map is an IET of at most d + 2 intervals. Hence there exists a subinterval J inside I_n,α which is a continuity interval for the induced map and — I can pick the largest one — which has length at least 1/(d+2) times the length of I_n,α: one of the at most d + 2 intervals of the induced map has to be at least that long. And this interval J comes with a return time: there is an R_n such that T^{R_n}(J) is back inside I_n,α. And now, this is the most subtle point; let me do this picture big. I have my tower, and I have this small interval J which, under T^{R_n}, is back in my base, okay? So now pick some other floor — you should really look at the picture — and pick the little interval above J in that floor. What happens if I iterate T on it R_n times? The copy in the base comes back in time R_n, but once the base has come back, the point keeps flowing up the tower. So what do you know about this interval? It will come back and then flow up to exactly the same height. I don't think I need to write anything more; I want you to think about what happens to this small interval. Let me write the claim and then you can meditate. I claim that for every floor F of this tower, if I look at T^{R_n}(F) ∩ F, the Lebesgue measure of this intersection is at least 1/(d+2) times the Lebesgue measure of F.
If I look at my floor, it contains the small interval above J, and this small interval comes back to the same floor. So the floor self-intersects, at least through this small interval, of relative size at least 1/(d+2). The idea is that inside F you have this small interval of proportion at least 1/(d+2), and this small interval comes back to the same floor, so the whole floor self-intersects in at least that proportion. Maybe I should leave you to meditate on this point, because you really have to stare at the picture of the tower and understand that the small intervals come back at the same time. So the whole floor, when it comes back, self-overlaps on a set of that measure. And from here, going back to A, which was a union of floors, you can deduce that the Lebesgue measure of A ∩ T^{R_n}(A) is at least — again, I don't want to chase all the details — at least 1/(d(d+2)) times the measure of A. Why? A has at least a 1/d proportion of its mass in this tower, and every floor in this tower — in particular the floors of A — self-intersects, when it comes back, in proportion at least 1/(d+2). So the 1/d is the proportion of A in H_n,α, and the 1/(d+2) is the self-intersection of each floor. And now I would like to show you that this contradicts mixing. If T were mixing, the measure of T^{R_n}(A) ∩ A would have to tend to μ(A)² as n tends to infinity — and you should see that as n tends to infinity, my return times R_n are getting larger and larger, so mixing would have to apply. But these two cannot hold together: together they would give that μ(A)² is at least 1/(d(d+2)) times μ(A). Sorry — did I do something wrong?
μ(A)² would be at least μ(A)/(d(d+2)); you cancel one factor of μ(A), and this is the contradiction with the choice of A: I chose A at the beginning small enough, with μ(A) < 1/(d(d+2)), precisely so that this is a contradiction. Okay, so I don't know if I managed to convey all the details, but the picture I want to leave you with is that from this tower picture you can understand a lot of the dynamics of T, and you can see things that you cannot see with the naked eye. And maybe I'll try to convey one more application — application two. This is a baby sketch of unique ergodicity. I want to show that through towers and renormalization you can understand invariant measures. Let me start with an elementary observation. Say that μ is a T-invariant measure, potentially different from Lebesgue measure. Define the following quantities: μ_n(α) is the measure of the interval I_n,α, okay? I look at the μ-measures of the small intervals in my induction. And the first remark: looking at the towers, I can see that all floors of a tower have the same measure, because my measure is invariant. So all floors in the tower H_n,α have μ-measure μ_n(α). The other thing you can remark, from the fact that the towers give you partitions, is that if I know the measure of all the floors of all the towers at every stage, I completely determine the measure. So remark one was: if I know the base measures, I know all the floor measures. And remark two: the numbers μ_n(α), for every α and every n, fully determine μ. This is just because these partitions are generating: the partitions are made of floors which shrink to zero, so you can approximate any measurable set with floors, exactly like I did with A. And now there is a key point. I wish I had brought the slide from yesterday, sorry.
So yesterday, what did we prove? Maybe you proved it, in one of the exercises I gave: we proved that the lengths of the intervals satisfy a matrix relation. You remember, when we do this algorithm, there are matrices given by the algorithm. I don't have the slides from yesterday, unfortunately. When we were doing the algorithm, we were recording how the lengths of the intervals change at each step, and we saw that you have to multiply certain matrices, each of which has ones on the diagonal and one extra entry of one off the diagonal. It takes too long to put the slide up, so let me just go back to one step of the algorithm. There was a relation: in this picture, one interval at step n was the union of two intervals at step n + 1, so its length was the sum of their two lengths, and this gave you one of these matrices — the length vectors satisfy λ⁽ⁿ⁾ = A_n λ⁽ⁿ⁺¹⁾. But other invariant measures have to satisfy the same constraints, because the measure of a union of two disjoint intervals is the sum of the measures. So μ also has to satisfy a similar relation. In particular μ⁽⁰⁾, the vector with entries μ_0(α) — the measures of the base intervals at step zero — satisfies μ⁽⁰⁾ = A_0 A_1 ⋯ A_{n−1} μ⁽ⁿ⁾, okay? So they satisfy the same recursive formulas. Ah, sorry — no, it's like here: I do it inductively, so the previous vector is a matrix times the next one. Ah, no, no, you're right. Thank you so much, Emi. You're right, it should be the same: it starts with the outer matrix and goes through to the inner one. Like this. Thank you, Emi.
Yes, thank you very much, Emi, because I was going to confuse everybody. Okay, yes, it's exactly the same formula with the μ's. Now, this is true for any IET; now I want to focus on a special case. Actually, let me first tell you the key renormalization fact — let me state what we are trying to prove. The following is true: if the orbit of T under renormalization is recurrent — recurrent means that the orbit comes back close to the original T infinitely often — then T is uniquely ergodic. I think this is due to Veech. And this is really a basic example of the renormalization philosophy, you see: properties of the renormalization orbit tell you something about the dynamics of your interval exchange. Recurrence is a property of the renormalization map — I have an orbit which comes back — and unique ergodicity is a property of the point T at which I start renormalizing. I will give you the proof in a baby case: the case where T is periodic under renormalization, that is, R^n(T) = T. Periodic induction basically tells you that T is in some sense self-similar. This is the analogue of rotations whose rotation number is a quadratic irrational, where the continued fraction is periodic: these are periodic points for the Gauss map, and those are the periodic IETs for the renormalization. And that means self-similar. What happens in this case? The matrices only depend on the renormalized map, so if my renormalized map is periodic, I claim that the matrices A_n repeat periodically. They will look like A_0, …, A_{n−1}, and then again A_0, …, A_{n−1}, repeating in blocks, and I will call A the block, A = A_0 ⋯ A_{n−1}. Almost done. I take my formula for μ, and what do I get? I get that μ⁽⁰⁾ — I can write it as, okay, let me just write it like this.
I can write it like this: μ⁽⁰⁾ lies, for every k, in A^k applied to the positive cone. Maybe I should do one more step first; sorry, let me add one step. If I use my formula, what do I get? μ⁽⁰⁾ = A μ⁽ⁿ⁾, taking n the period. But it's also equal to A times A μ⁽²ⁿ⁾, and this is equal to A^k μ⁽ᵏⁿ⁾ — I can expand as long as I want. So for every k, I can write μ⁽⁰⁾ as A^k of some vector, and this vector lies in R₊ᵈ. And now I have to appeal to something which some of you might have seen and some not. This period matrix A has positive entries. It's clear that it has no negative entries, because I'm multiplying matrices with zeros and ones; but actually, when the renormalization is periodic, you multiply enough matrices to see strictly positive entries in the product. And if I apply such a matrix to R₊ᵈ, it sends the positive cone strictly inside itself. So I want to use some kind of contraction principle — or, if some of you know the Perron-Frobenius theorem, you know that such a matrix has a unique fixed direction, a unique fixed point of unit length. In general, you should picture this: my μ⁽⁰⁾ has to stay in A of the positive cone, but it should also stay in A² of it, and also in A^k of R₊ᵈ for every k. By a contraction principle you can prove that this matrix contracts the positive cone; or by a Brouwer fixed point argument — I can actually ask you to prove an elementary form of the Brouwer fixed point theorem, if you want. Actually, there is a fixed line, but I also want my measures to add up to one. So: there exists a unique μ⁽⁰⁾ in the simplex Σ_d such that the equation (*) — this form — holds. And I know that both λ, the length vector, and μ⁽⁰⁾ are solutions, so λ has to be equal to μ⁽⁰⁾.
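The cone contraction is easy to see numerically. A small sketch of my own: the matrix below is the product of the two elementary induction matrices [[1,1],[0,1]] and [[1,0],[1,1]], so it is a genuine period block with strictly positive entries; iterating it and renormalizing onto the simplex squeezes any two starting vectors onto the same Perron-Frobenius direction.

```python
def normalize(v):
    """Project a positive vector onto the simplex (entries sum to 1)."""
    s = sum(v)
    return [x / s for x in v]

def matvec(A, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in A]

# period block A = [[1,1],[0,1]] @ [[1,0],[1,1]]: strictly positive entries,
# so it maps the positive cone strictly inside itself
A = [[2, 1], [1, 1]]
v = normalize([1.0, 0.0])
w = normalize([0.0, 1.0])
for _ in range(60):
    v = normalize(matvec(A, v))
    w = normalize(matvec(A, w))
# v and w agree to machine precision: a unique fixed point in the simplex
```

For this particular block the limiting first coordinate is 1/φ ≈ 0.618, the golden rotation: the self-similar length vector fixed by the periodic induction.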
And this was step zero, but now you can transport it with your equations: it is also true at every other step, by applying the inverses of the matrices. Once the initial vector is determined, everything is determined, by the initial remark: μ is Lebesgue, since they coincide on all tower floors. Okay, that was a long argument, but I wanted to give you at least one example of this renormalization machinery. I gave you two: the towers are helpful to prove things like non-mixing — you can look at the dynamics in the towers — and renormalization is helpful to study invariant measures. And I just want to conclude with a picture; this part I will mostly skip. You can do something else: you can also study Birkhoff sums of functions. So you can take a function — say a piecewise constant function — and plot its Birkhoff sums over an interval exchange. I just want to show you a simulation. You take a point x and you compute f(x), f(T(x)), f(T²(x)), and you plot them on a graph: on the horizontal axis you put the time n, you plot the values of the Birkhoff sums of the function, connect the dots, stop at some n, and maybe rescale everything into the unit square. So I just want to conclude by leaving you this picture. This is what the plot of a Birkhoff sum of a function over an interval exchange looks like. Now look at this picture and tell me: what do you see? What does this look like to you? To me, this looks a little bit like a fractal, doesn't it? It looks a little bit self-similar. So, in another word, renormalization: you can go on to study functions and Birkhoff sums, and renormalization can explain this picture.
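As an aside, the data behind such a plot is a few lines of code. This is my own sketch, with the rotation standing in for the interval exchange and a mean-zero piecewise constant function; by unique ergodicity the sums grow sublinearly, so S_n/n tends to the mean of f, which here is zero.

```python
import math

def birkhoff_sums(f, T, x, n):
    """Partial sums S_k = f(x) + f(Tx) + ... + f(T^{k-1} x), k = 1..n;
    plotting k against S_k produces the pictures discussed above."""
    sums, s = [], 0.0
    for _ in range(n):
        s += f(x)
        sums.append(s)
        x = T(x)
    return sums

alpha = (math.sqrt(5) - 1) / 2                  # golden rotation
T = lambda x: (x + alpha) % 1.0
f = lambda x: (1.0 if x < 0.5 else 0.0) - 0.5   # mean-zero piecewise constant
S = birkhoff_sums(f, T, 0.1, 10_000)
# S[-1] / 10_000 is close to the mean of f, i.e. close to 0
```

Connecting the points (k, S_k) and rescaling into the unit square gives exactly the kind of self-similar graph shown in the simulation.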
Renormalization can tell you that some pieces are made of smaller pieces, which are made of smaller pieces. And what are these basic pieces? They are Birkhoff sums up to the height of a tower. If you plot the Birkhoff sum up to the height of a tower, it gives you a block; but then the towers are cut and stacked, so the Birkhoff sums combine with each other to make the picture. I cannot go into this, but I just want to tell you that there are a lot of possibilities, and pictures like this: when you see this picture, you see renormalization — you see that there is a phenomenon that you can explain. And a lot of the dynamics of interval exchanges, and beautiful pictures like this, can be explained through this machinery. All right, thanks. Thank you. Thank you.