The people who were here last week know exactly how it works. And first of all, I hope you all had a good weekend, that you relaxed, took time off, had some fun and tourism, and that you are ready for a new week. So, how it works: every morning we have three hours of lectures by Hannah, Amy, Hadeem, and myself. Then from two — in the program I think it says two-thirty, but essentially the room is booked from two to four — we have the tutorial sessions. The people who were here last week know this very well, and we already told you a hundred times: you don't learn mathematics unless you try to do mathematics. There is no other way to digest what we are doing than trying to solve some questions yourself. So the tutorial sessions work like this: everybody will be here, we will give out some problems, and you have time to try to solve them. And we will hang around — the TAs, all the TAs, and also us, or at least a subset of us. So please, during the lectures, but also especially during the tutorials, we are here to answer questions. We will go around for any questions about the exercises: we can give hints, we can look at your solutions, but we can also re-explain. If something wasn't clear in the morning, and by trying the exercises you realize there is a concept you don't understand, please ask, ask, ask. We are here to explain; we are spending our time on this, and we want you to understand. We are really very happy if someone tells us "I haven't understood this" — we are very happy to re-explain. I also know that I sometimes tend to speak or go too fast. Last week, I think, people told me that on Wednesday I had a peak where I was super fast, and then I tried to slow down again — but the people who told me that were the TAs, and I would have loved it to be students. Don't be shy. Stefano already told you the whole story — we can even write letters for you — so you should speak up.
No — you should tell us "you are too fast, I'm not understanding." We are really happy if you tell us, and we don't hold it against you; we actually take it as a sign that you want to learn and that we didn't do a good enough job. Okay, so this week, the title of my course is again Renormalization — Renormalization in Entropy Zero Dynamics. It will be completely independent from last week, so the new people can follow independently; but if you were here last week, you will see a lot of analogies with what we did. I bothered you a lot about rotations and how the Gauss map has to do with renormalizing rotations. This week we will essentially see two more examples of renormalization in action, in two parts. Today and tomorrow we will talk about Sturmian sequences, which again have to do with rotations, but we will do the renormalization more at the geometric and symbolic level. Then in the second part, probably from Wednesday on, we will move from the torus to higher genus, and we will talk about interval exchange maps and renormalization in that setup. Okay, so let me first put up the first picture. That's the first picture I want to discuss today, and it will stay there for a while. I'm going to look at the unit square grid. Let's give it a name: let me call Lambda the unit square grid. I hope it's clear what I mean: just draw vertical lines with spacing one and horizontal lines with spacing one. Then I'm going to call L a line in the plane, and I'm going to assume that it has irrational slope. This line makes an angle — we will call theta the angle with the horizontal — and we will assume that the tangent of theta, sine over cosine, is irrational. I'm going to code the intersections of the line with the grid. I'm going to travel along L: you can imagine you start from minus infinity and move with an orientation, as you wish.
And you look at the point traveling along this line, and record — let's do it consistently: record a zero, in red, when you hit a horizontal line, and record a one, in blue, when you hit a vertical line. I think it's clear, but let's have a little animation. For example, here is a finite piece of my line. I start from the lower corner and I travel. I hit a horizontal, I write a zero. I hit a vertical, I write a one. Another horizontal, another vertical. Yet another vertical, so I have two ones. Then a horizontal, it's a zero. Then a vertical, and so on. Okay, that's exactly the question I was expecting. So let me say it like this. I'm getting a sequence — let me call it omega — which is bi-infinite, in the alphabet given by the letters zero and one. You can think of this sequence as a sequence where the index runs from minus infinity to plus infinity, so as a point in the two-sided shift space. Last week we talked about Sigma two plus, the space of one-sided sequences; this is a two-sided sequence. If you want to think of it this way, you need to choose where the index zero is. For example, you can say that you move with time, and omega zero is the first hit after time zero, if you are traveling at some speed. But I don't want to stress what the origin is — this creates some issues — so for today I want to think of omega just as a sequence up to shifts: it doesn't matter where I put the zero. I will just think of it as a bi-infinite string, okay? The question I want to answer fully, between today and maybe early tomorrow, is: which sequences appear in this way? Sorry — yes, this was the question. Okay, good point.
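The coding just described is easy to simulate. Here is a small sketch (the function name, the intercept b, and the particular angle are my own choices for illustration, not from the course): we list the times at which the line crosses vertical and horizontal grid lines and merge them in increasing order.

```python
import math

def cutting_sequence(theta, b=0.0, n_symbols=30):
    """Cutting sequence of the line y = x*tan(theta) + b with the unit grid:
    record '0' at each horizontal grid line, '1' at each vertical one."""
    c, s = math.cos(theta), math.sin(theta)
    # parametrize the line as (t*cos(theta), t*sin(theta) + b), t > 0
    vertical = [(k / c, '1') for k in range(1, n_symbols + 1)]        # x = k
    horizontal = [((m - b) / s, '0') for m in range(1, n_symbols + 1)]  # y = m
    events = sorted(vertical + horizontal)[:n_symbols]
    return ''.join(symbol for _, symbol in events)

# with slope tan(theta) = sqrt(2) > 1 the ones come isolated: no '11' appears
word = cutting_sequence(math.atan(math.sqrt(2)), b=0.3)
print(word)
```

Running it for a slope greater than one, you can check by eye exactly the restriction discussed below: blocks of zeros separated by single ones.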
So first of all, the line is irrational, so it can go through a vertex at most once. And if you hit a vertex, you can record either symbol — it doesn't matter; you can keep both sequences if you want, one where you record zero and one where you record one. So question one is: which sequences in zeros and ones arise? And let's give them a name: let's call these the cutting sequences of the square grid. Once we understand this, we may even ask a second question: given such a sequence, if I tell you that I got it by recording a line, what was the angle of the line? Can you recover the angle — what is theta? We will really fully understand these two questions. So, first question: do all sequences arise as cutting sequences? Can you get any sequence in zeros and ones? Can you give me one sequence which you cannot get? Constant, yes — because the slope is irrational, good point. And if I look at sequences which are not eventually periodic, not eventually constant, can you still find something that you will never see in a cutting sequence? Yeah, this is very good, and it's already quite fine: there will be some bound — you will not see arbitrarily long strings of ones. And, for example, can I see something like zero, zero, one, one in my sequence? You said no — yes, it's no. Why? Can you try to explain why? Let's try to convince ourselves that you cannot see this. Your line has a slope, and either the slope is greater than one or it is less than one. So the angle looks like this or like that. In this first case, where the slope is less than one, I can see one, one, but I cannot see zero, zero — here there is no zero, zero.
Here the slope is greater than one, and you see I can see zero, zero, but I cannot see one, one. So I cannot see zero, zero and one, one simultaneously. It is clear, then, that these sequences form a subset of the space of sequences in zeros and ones — and actually a pretty rare subset: we will see that in some sense they have measure zero, and they have low complexity. I will say more about that later. But first — it's the first day, so we want everybody on the same page — let me make a trivial remark; trivial, I mean, if you have seen it before. You can do what Amy did this morning: project R2 to R2 mod Z2, that is, quotient by the integer lattice. Define two points to be equivalent if they differ by an integer translation. We saw this morning that the square is a fundamental domain for this equivalence relation. And if I draw a square, I should really think of it as a square with opposite sides glued: the horizontal sides are glued because they differ by the vertical translation (0, 1), and the vertical sides are glued because they differ by the horizontal translation (1, 0), okay? I think we have all seen that geometrically this represents the two-torus, so this quotient is also called T2. If I glue two opposite sides of the square I get a cylinder, and if I glue the two ends of the cylinder I get a donut: in this picture, you first glue, say, the vertical sides and get a cylinder, and then you glue the other two sides and get a donut. Okay? And what does the projection of my line look like? If I project this irrational line — I want it irrational — and translate it back, you will see a dense picture: a dense line. So the projection of L is dense. Why is this true?
We will say why in a second; for now, believe it. And what happens when I travel along my line? As I travel inside the square, nothing happens. When I reach the top, I use my identification and come back in at the opposite side, at the bottom — and so on: you use the identifications by these two integer translations whenever you leave the square at its boundary. Okay? So another way to say dynamically what is going on: you can think of this trajectory as the solution of a differential equation, if you want. Look at the following differential equation: x dot of t equals cosine theta, y dot of t equals sine theta. The solutions to this equation give you a flow: phi t of a point (x0, y0) — say this is a point in T2, which is R2 mod Z2 — is just (x(t), y(t)), the solution with initial condition (x0, y0). A very basic definition: phi t starts from the initial condition and solves this differential equation up to time t. And this really just moves you along a line with that slope. I am just traveling along the line, and when I am on T2 I use the identifications: when I go out, I come back on the other side. This phi t is called the linear flow on T2 in, let's say, direction theta — I will call it like this. In a few days we will see linear flows on higher genus surfaces: we will leave the world of the torus and go to surfaces of higher genus. That's why I wanted to make this remark about what the linear flow is. And again, what I am doing is giving a symbolic coding for this — a solution of some flow on the torus. I had an actual picture, but I don't have it here.
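As a tiny sanity check of the definition, one can write the flow down directly — the solution of the ODE is just straight-line motion, reduced mod 1 in each coordinate (the function name is my own choice):

```python
import math

def linear_flow(point, t, theta):
    """phi_t for x' = cos(theta), y' = sin(theta) on the torus R^2 / Z^2."""
    x0, y0 = point
    return ((x0 + t * math.cos(theta)) % 1.0,
            (y0 + t * math.sin(theta)) % 1.0)

# the flow property phi_{s+t} = phi_s composed with phi_t holds on the torus
p = linear_flow((0.1, 0.2), 0.7, 1.0)
q = linear_flow(linear_flow((0.1, 0.2), 0.3, 1.0), 0.4, 1.0)
print(p, q)
```

The `% 1.0` in each coordinate is exactly the identification by integer translations: leaving the square on one side means re-entering on the opposite one.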
And I am recording how I intersect the horizontal and the vertical — on the torus they are two arcs, and I am recording how I cross these arcs. Again, let's slow down a little. If you do dynamical systems you have certainly heard of the Poincaré map, but some people here are new to dynamics, so let me say this. This is an idea which goes back to Poincaré, who was studying flows coming from astronomy, in some sense, from the motion of planets. I will do it in this concrete example, but in general, if you have a flow, you can try to reduce it to a map by picking a so-called section. Let me call it Sigma — I will write it in our special example, but it is a general concept. Sigma, inside T2, will be a section: if you are in dimension n, a section is something of dimension n minus one, like a hypersurface of codimension one. In my precise example I will just pick as section a horizontal circle, which, if you want, is a meridian of the torus: for us, say [0, 1] mod tilde, the interval [0, 1] with endpoints identified — the notation we used last week. And then I just want to define T as the Poincaré first return map. T goes from Sigma to Sigma, and given a point in the section, T of x is the first return. Well, actually I need to assume something about my flow — and I think you won't catch me out here: in my example I don't need any extra assumption, because I am assuming the line is irrational. What will happen — we will see in a second why — is that my line actually comes back to my section, so I can define the first return time. So T of x is phi of r of x, applied to x, where r of x is the infimum of the t greater than zero such that phi t of x belongs to Sigma again: the first moment when I am back at my section. If well defined — and in our case it is, trust me. So, is it clear what happens?
So let me ask — I think I asked you last week — can everybody see the lower corner of the board? Anybody who cannot see it? I can rewrite it. Okay, so maybe we will draw the picture on the other side; I have a ready-made torus here too. So let me recall where we were. I want to say: T of x is phi of r of x, applied to x — the first return, the first time I come back. On the torus, you have this nice section and you travel, travel, travel until you are back, and the map sends the point to the first return point. So in this picture — I think someone will have seen it for sure — what is the Poincaré map of my linear flow on the horizontal section? Yes — and what is the angle? Yes, that's right. So let's do the picture; I want to go slowly because I want to be sure that everybody follows. If I look at this picture, I travel, then I come back. The picture could be like this, or I go out, use the identification, and come back here. So this is x and T of x; or it could be y, and this would be T of y. You see the point is translated, and you can just do some basic trigonometry. This is a right triangle; its height is one, this angle is theta, and I have to compute — but he is already doing it. So if you call alpha the cotangent of theta, that is exactly what geometrically gives you this horizontal translation, okay? So T of x is x plus alpha. And you see very well that if I have a y which is very close to one, when I add alpha I go out the other side — this is the modulo one, okay? So T is exactly the rotation by alpha, modulo one. And one more step: what about my coding? If I look at my coding and I look at the Poincaré map, you can remark the following. Let's take the upper right corner and pull my line down to the section.
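One can check the trigonometry numerically: flowing from a point (x, 0) until y returns to 0, i.e. reaches 1, takes time 1/sin(theta), and the resulting horizontal displacement is cot(theta). A sketch, with names of my own choosing:

```python
import math

def first_return(x, theta):
    """First return of the linear flow in direction theta (0 < theta < pi/2)
    to the horizontal section {y = 0}: flow for time r = 1/sin(theta)."""
    r = 1.0 / math.sin(theta)              # return time to the section
    return (x + r * math.cos(theta)) % 1.0

def rotation(x, theta):
    """The claimed answer: rotation by alpha = cot(theta), modulo one."""
    alpha = 1.0 / math.tan(theta)
    return (x + alpha) % 1.0

theta = math.atan(2.0)   # slope 2, so alpha = cot(theta) = 1/2
print(first_return(0.2, theta), rotation(0.2, theta))
```

Both functions agree, which is exactly the statement T(x) = x + alpha mod 1.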
So actually you can compute what this is: it is alpha and one minus alpha. So let me call this segment I1 — I1 is the interval from one minus alpha to one — and this segment I0, the interval from zero to one minus alpha. The endpoints don't matter; I am not too worried about the endpoints, you can include them or not. So what happens if you are in I0 when you flow — which line will you hit, the horizontal or the vertical? If I have a point that starts in I0, when I flow it, my trajectory hits a horizontal line and I record a zero. If I am in the blue second half, when I flow, I hit a vertical, okay? So I am kind of forgetting where I am on the section and looking at what I hit next — what my flow does between the point where I am and the first return. I just want to convince you that the cutting sequence we were looking at — the sequence for the flow; now let's call it (omega i) and label it — is actually the itinerary under the rotation R alpha of some point, which I will ask you to compute in an exercise this afternoon. So it is the itinerary of an orbit. We were using the notation O for the orbit under the map R alpha of some point x in [0, 1]. And this is actually a bi-sided orbit: my map is invertible, my flow is invertible, so I can also go backward and record where I was in the past. Last week we were using the forward orbit and putting a plus; here it is just R alpha to the i of x, as i ranges over Z, for some x. And this is the itinerary with respect to the partition I0, I1. We saw itineraries last week, but just to set notation if you weren't here: it means that R alpha to the i of x belongs to I omega i for every i in Z. So what are we doing? I am just applying my rotation and recording whether I am in the left or the right interval.
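The itinerary just defined is straightforward to generate — here is a sketch for the forward orbit only, with a function name of my own choosing (which point's itinerary matches a given cutting sequence is left, as in the lecture, to the exercise):

```python
def itinerary(x, alpha, n):
    """Itinerary of the rotation R_alpha with respect to the partition
    I0 = [0, 1 - alpha), I1 = [1 - alpha, 1); assumes 0 < alpha < 1."""
    word = []
    for _ in range(n):
        word.append('0' if x < 1.0 - alpha else '1')
        x = (x + alpha) % 1.0
    return ''.join(word)

# for alpha < 1/2, a point of I1 = [1 - alpha, 1) lands back in I0 = [0, alpha),
# so the ones are isolated
w = itinerary(0.1, 0.4142135, 300)
print(w[:40])
```

Notice the rigid structure already visible: single ones separated by blocks of zeros of only two possible lengths.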
This is analogous to recording whether my line — my flow — hits a horizontal or a vertical. And, for instance, to hit a long block of zeros the line has to be very steep: you can only hit a bunch of zeros in a row if you go very close to vertical. This will become clearer and clearer as we analyze it. Huh? No, no, here there's nothing — here I just wanted to make the connection between this problem and the symbolic coding of a rotation. So, last week Hannah spent a lot of time describing the symbolic coding of the doubling map, and we saw that the full shift on two symbols is semi-conjugate to it — we studied the semi-conjugacy with orbits of the doubling map. You remember? And basically we saw that all possible sequences of zeros and ones appear as itineraries of the doubling map with respect to the first or second half of the interval. The point I want to make is that today or tomorrow I will tell you what the symbolic coding of a rotation is. A rotation is much less chaotic than the doubling map: for example, we saw that it is ergodic, even uniquely ergodic, but it is not mixing. And this is reflected in the coding: there is a lot more structure, and there are many more constraints and restrictions on the symbolic sequences which appear. And we will describe those, okay? So you can think of it as comparing the full shift with whatever appears in this setup, which will be much more rigid, much more constrained, than the full shift. And I just want to advertise that these cutting sequences — I will call them square cutting sequences — are very much studied, and they appear in the literature under many names. We saw that this problem is, equivalently, the symbolic coding of a linear flow on a square, or the symbolic coding of a rotation with respect to some special intervals.
But these sequences are also known under other names. They are known as rotation sequences. They are known as Sturmian sequences — that is the name I prefer; it was introduced by Hedlund and Morse. And I just learned yesterday, preparing this lecture, that the name Sturmian actually comes from Sturm's theorem, which is about the zeros of functions like sine of (2 pi a x + b), and how those zeros sit compared with the oscillations of sine x. They are also known as Christoffel words, Beatty sequences, characteristic sequences, balanced sequences — they appear in many, many parts of mathematics. And they also come up from applications: probably Sturmian sequences — square cutting sequences — were already understood by the Greeks. The reason is that they appear when two planetary motions have irrationally related periods. For example, take the solar year — the rotation of the sun in a year — and the lunar months, the rotations of the moon. You can think of the solar years as your integers and the lunar months as your rotation by alpha, and ask: how many lunar months are there in a solar year? Sometimes there are K and sometimes K plus one — twelve or thirteen, say; I forget the exact values. And that is exactly what we will see happening in these sequences. They also appear in music: you can pick two musical scales which are incommensurably related — usually log two and log three appear — and Sturmian sequences have to do with how the scales intersect each other. Again, this is a digression; I try to give some extras. Everything I write on the board, I try to go through slowly.
Everything which is on the slides you can stop writing, because I will distribute them; there is sometimes a little bit extra. So: you can also characterize these sequences as having the smallest possible complexity among non-periodic sequences, and I want to tell you what this means. If I have a sequence, I can look at its complexity. It is a function P of n, for n an integer: if I fix n, P of n is the number of words of length n that appear in my sequence. You should think of it this way: I give you a sequence and want to know how complex it is. I give you a window of size n, I slide the window along — what do I see in this block of n, what in the next, what in the following — and I record how many different blocks of length n I see, okay? Oh, this is a typo: there should be a "less or equal". If the sequence is periodic, the complexity will be bounded — so in particular less than linear. Actually the complexity function is bounded if my sequence is periodic or eventually periodic, and in fact less than linear implies eventually periodic. For Sturmian sequences: if you are not periodic, the smallest the complexity can be is n plus one, and these sequences have exactly complexity n plus one. I will also ask you to check that if you look at the coding of a typical point for the doubling map, you actually see exponential complexity, like two to the n. So these are in some sense very low complexity sequences — the smallest complexity among non-periodic ones. This is also reminiscent of the fact that the system has entropy zero: if you know entropy, this is a facet of having entropy zero. And what I will do — we start today and finish tomorrow — there is a very nice paper by Caroline Series which gives this geometric characterization, and I often give it to my undergraduate students to read.
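The complexity function just defined is a one-liner to compute on a finite sample (with the caveat that a finite window can only undercount the factors of the infinite sequence; names are my own):

```python
def complexity(word, n):
    """Number of distinct blocks of length n occurring in a finite word."""
    return len({word[i:i + n] for i in range(len(word) - n + 1)})

# a periodic word: bounded complexity
print(complexity('001' * 50, 5))   # 3 distinct blocks of length 5

# the Fibonacci word (substitution 0 -> 01, 1 -> 0), a standard Sturmian example
w = '0'
for _ in range(20):
    w = ''.join('01' if ch == '0' else '0' for ch in w)
print([complexity(w, n) for n in range(1, 6)])   # expect n + 1: [2, 3, 4, 5, 6]
```

The contrast is exactly the one made above: periodic words have bounded complexity, the Sturmian sample sits at the minimal non-periodic value n + 1, while a typical coding for the doubling map would show 2^n.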
I think Hannah knows it, because it was the first thing she read when she did a master's project with me one summer. But if you want a more serious reference on Sturmian sequences, one which goes into these notions of complexity and balance, there is a very nice chapter in the book by Pythéas Fogg, written by Pierre Arnoux. And okay, this is all for the slides; now I am back to the board. So let's look at the elementary restrictions which we can see with the naked eye — we already mentioned some. Let me make these restrictions into a definition and a lemma. Okay, so: definition. A sequence is admissible with value K, for K an integer, if one of the following two holds — let me make it coherent. Either the ones are isolated, which means no "one one" occurs, and the zeros come in blocks, where the blocks of zeros have length K or K plus one. An example is better than any definition. In this example: you have a one, then maybe two zeros, then a one, then maybe three zeros, then another two, another two, another three, and so on. So the ones are surrounded by zeros, and between two ones there are, here, either two or three zeros; in this case K is two. Is the definition clear? Yeah. This case we will call type zero — type zero because there are lots of zeros. The other, symmetric case: the zeros are isolated and the ones come in blocks of K or K plus one symbols; this is type one. And the first lemma is: if omega is a square cutting sequence, then it is admissible. Is the statement clear? Either the zeros or the ones are isolated, and the blocks come with lengths differing by one. Let me sketch the proof; the idea is the following. There are two cases — and we already essentially saw the first part. Let's do first the case where the tangent is greater than one.
So if the tangent is greater than one, geometrically you are, I hope, convinced that you cannot see two consecutive ones: there can be no "one one", by the geometric restriction we saw looking at a square. And what about K? I claim that K is actually the integer part of the tangent — which, by the way, let me note, is the integer part of one over the cotangent, so the integer part of one over alpha. If you were here last week, you will recognize what this is: it is related to the Gauss map. So K is the integer part of one over alpha, the integer part of the tangent. If, for example, my tangent is between, I don't know, three and four, then, if I were starting at a corner, it would mean that I exit between the third and the fourth square, so I hit three times. But I don't start at a corner, I start at some point — and you can convince yourself that by moving the starting point you may occasionally hit four times, but never more than four. And similarly, let me just say something about the other case. If the tangent is less than one, you can reason in the same way, or you can use a flip: reflect in the line x equals y, which swaps zeros and ones, and reduce to the previous case. It is a useful observation. And — of course I've run out of space, but let me put it here — in this case K is actually the integer part of one over the tangent, reasoning in the same way. I would like to get to state what is going on next, and then we will do it tomorrow; let me go a little bit ahead now, and then I will repeat from here tomorrow. Okay, so I need maybe one more step, and then I can state some theorems. Is everybody with me so far? Are you clear about admissible? Okay. Say that omega is admissible of type — let's do one case, the other one is similar — of type zero, with value K.
I'm going to define a new sequence omega prime, called the derived sequence. It is obtained by grouping the blocks and giving them a new name. Since we are of type zero, we will see blocks like a one followed by K zeros, sometimes K plus one zeros. I look at the block "one, then K zeros" — sometimes you can write it as 1 0^K; this is a notation: I write "one zero zero zero" as one times zero to the power K — and I replace each block 1 0^K by a one, keeping the extra zeros. And again, an example speaks better than many words. Again, this is where I don't want to keep track of a marked position zero: if I do this with indexed sequences, I need to worry about where to put the new zero index, and this is delicate — if you really want to know what to do, you have to read our new book. Say my sequence is like before: one, two zeros, one, two zeros, one, three zeros, then another two, then another three. So what do I do? I look at the block "one zero zero" and I call it a one; the next "one zero zero", I call it a one again; another "one zero zero", another one. And when I have a K plus one block, the extra zero I keep as a zero. Then another block gives a one, another block a one, and the extra zero I keep as a zero. Okay, is it clear what I am doing? Now, it might happen that my new sequence is not admissible any more; but if it is again admissible, I can derive it again. So: if omega prime is admissible, derive it again. Let me finish with the statement — and I think I am, no, I'm not over time, I am just right. Theorem: square cutting sequences are infinitely derivable.
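Both notions — admissibility and the derivation step — can be sketched on finite words, with the caveat that on a finite sample the first and last blocks may be truncated, so the sketch only inspects interior blocks and ignores boundary effects (the function names are my own):

```python
def is_admissible(word):
    """Admissible (type 0 or type 1): one symbol is isolated and the interior
    blocks of the other symbol take only two lengths, K and K + 1."""
    for isolated in '10':
        if isolated * 2 in word:
            continue                                  # that symbol is not isolated
        runs = [len(r) for r in word.split(isolated)[1:-1]]   # interior blocks
        if not runs or max(runs) - min(runs) <= 1:
            return True
    return False

def derive(word, k):
    """Type-0 derivation: replace each block '1' + '0'*k by '1'; each extra
    zero (the (k+1)-st zero of a longer block) is kept as '0'."""
    out, i = [], 0
    while i < len(word):
        if word[i] == '1' and word[i + 1:i + 1 + k] == '0' * k:
            out.append('1')
            i += 1 + k
        else:
            out.append('0')       # a leftover zero
            i += 1
    return ''.join(out)

word = '10010010001001001000'     # blocks of 2 or 3 zeros: admissible, K = 2
print(is_admissible(word), derive(word, 2))
```

On this example the derivation produces '11101110' — which, as in the lecture's example, is again admissible (now of type one), so it can be derived once more.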
This is quite magic, because if you pick a sequence at random, it is not going to do this. And even if you pick a sequence which is admissible and you derive it, the result will typically not be admissible: you have to be very special for it to really happen. So the wonder is that each derived sequence is admissible again. Let me make all the statements, and I will start from here tomorrow. Theorem two: let k1, k2, and so on be the sequence of values of the derived sequences. If I have a cutting sequence, I know that it is infinitely derivable by theorem one; so I can derive and look at the values, which are the lengths of the blocks — the original block length, then the next, and so on. Then the tangent will be given by a continued fraction expression in these values: the tangent is k1 plus one over (k2 plus one over (k3 plus ...)) — this if the tangent is greater than one; otherwise you don't have the k1. Last week we spent a lot of time on continued fractions, and many people told me they had all seen continued fractions before. The new people, if you haven't seen continued fractions, can look at the notes from last week, which are online. But you essentially won't need anything beyond this expression, which is called the continued fraction, okay? So this is what we will prove tomorrow. Thanks, okay.
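The continued fraction expression in theorem two is easy to evaluate numerically. A sketch (assuming the tangent is greater than one, so the expansion starts with k1; the function name is my own):

```python
def tangent_from_values(ks):
    """Evaluate k1 + 1/(k2 + 1/(k3 + ...)) for a finite list of values,
    recovering (an approximation of) tan(theta) from the derivation data."""
    value = float(ks[-1])
    for k in reversed(ks[:-1]):
        value = k + 1.0 / value
    return value

# all values equal to one gives the golden ratio, tan(theta) = (1 + sqrt(5)) / 2
print(tangent_from_values([1] * 30))
```

So, at least in principle, the sequence of values k1, k2, ... of the successive derived sequences answers question two: it recovers the angle of the line.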