It's African night tonight — we already know this is not a scientific event but a fun one, with dances and music; it was advertised by Stefano yesterday. I also want to comment on the activity we will have on Wednesday afternoon. People who were here last week know what to expect: we will have a panel discussion, and we will try to answer those of your questions which are not mathematical — about careers, progression, the future. Mathematical questions we are always very happy to receive at any time, especially during the exercise sessions, and you can also ask us to repeat things. Let me stress it: we are really happy to explain. We are here to teach you something, so we are happy to give you everything we can give. You should all have received, in last week's email, the link to the virtual pad where you can ask questions — these were the questions for week one — and you can anonymously write whatever you may want addressed at the panel. Last week the panel was more about the PhD and the early postdoc: happily and successfully finishing a PhD and applying for jobs. Now we are looking a little beyond, also for somewhat older people; the topics would be how to become an independent researcher, how to develop your research agenda, and, for later on, how to combine family and work, for men and women. So if you have specific questions you want answered, please go to the link and write whatever comes to your mind.

Okay, so let me go back to my slides. This is the slide from yesterday. Let me recall: we were trying to characterize square cutting sequences, obtained by traveling along an irrational line. The reference I am following — essentially I am doing an expanded version with more details — is the first two pages of this beautiful paper by Caroline Series (she then goes on to the Markov geometry of Markov numbers); the reference is on the web page, you can click on it. For more serious details about combinatorial properties, and even the solution to the hard questions about complexity that I gave yesterday, there is a survey by Arnoux, also linked there.

Let me recall where we were. First of all, we made some elementary remarks just by staring at this grid, and we found a necessary condition for a cutting sequence, which we call admissibility: a sequence can be admissible of type 0 or admissible of type 1. Admissible of type 0 means that the 1s are isolated and the 0-blocks have lengths k or k+1 for some k — lengths differing by at most 1; type 1 means that the 0s are isolated, so we have 1-blocks, again of lengths k or k+1 for some k. [Question.] Yes, maybe I should require k at least 1: if you take k equal to 0 you would only have 1s — that's right, when you decrease k to zero you swap to the other type. And let me recall one thing I went through fast in this proof: I can assume my angle is between 0 and pi/2, just because of the symmetries of the grid — I can always flip my trajectory. Then type 1 and type 0 correspond to the two sectors (0, pi/4) and (pi/4, pi/2): if the slope is less than 1 we are in type 1, and if the slope is greater than 1 I tend to hit two or more 0s in a row, so this is type 0.

Okay, that was all review, because we stopped in the middle. Then, if I have an admissible sequence, I want to perform a non-trivial operation on it: derivation. You should think of derivation, at the combinatorial level, as a form of renormalization. I am kind of zooming out: if I am blind and I don't see these small blocks
and I move back, I see whole blocks as single digits — I am really zooming out. So, in case 0, I replace every block 1 0^k with a single 1. Let me do an example: we are in case 0, with value k equal to 3, so I replace each block of a 1 followed by three 0s with a 1 — this block is a 1, this block is another 1, and when a block has an extra fourth 0, that extra 0 stays a 0. I did not say it yesterday, but the definition has another half: in case 1 you do the analogous thing for the other type, replacing each block 0 1^k — a 0 followed by k 1s — with a 0. Let me do another example: if omega is 0 1 1 0 1 1 0 1 1 1 0 1 1 0 1 1 1 ..., with value 2, then omega prime is 0 0 0 1 0 0 1 ... — each 0 1 1 becomes a 0, and the extra 1s in the longer blocks stay. And you can check the remark that when you derive, you flip type: if the derived sequence is again admissible, it is admissible of the other type. In general it may not be admissible, and in that case omega is not going to be a cutting sequence. There are different conventions for setting this up — this is related to what is going to be the substitution later — and you can choose either one, but you have to be consistent; I hope I will be consistent later.

So now the main theorems — I stated two yesterday. The idea is that admissibility is not enough to characterize cutting sequences: it is just a local condition. But I can impose admissibility on infinitely many scales: on the derived sequence, on the derived sequence of the derived sequence, and so on. A sequence is infinitely derivable if every time I derive I get back something admissible, which I can then derive again. Theorem 1: this is true for cutting sequences — I can derive a cutting sequence infinitely many times and always get admissible sequences; cutting sequences are infinitely derivable. Now you can ask: is this a characterization, an if and only if? It is almost one: it becomes an if and only if when you take the closure — omega is infinitely derivable if and only if omega belongs to the closure of the set of square cutting sequences. And what is the closure? The closure in the space {0,1}^Z — last week we had the distance and the topology — so you take sequences which can be approximated by limits of cutting sequences. I gave you an exercise to think about: find something which is in the closure but is not a cutting sequence. One can actually understand exactly what is in the closure, so you can really characterize cutting sequences through infinite derivability.

Not only that: given a sequence that passed my test — so it is a cutting sequence, but I don't know which line generated it — I can recover the direction from the combinatorial process of deriving. Like this: I take my cutting sequence; by Theorem 1 it is infinitely derivable, so I start deriving and record the values. a_0 is the value — the block length — of the sequence itself; a_1 is the value of the derived sequence, the block length after I derive once; and so on. As I do this the type flips — type 0, type 1, type 0, type 1 — but I just record the values. Theorem 2 tells me that the tangent of the slope is given by the continued fraction with those values as entries. [Question.] Yes — these are the k's; I call them a_i to match continued fraction notation, but these are the
k's, the values. Okay. [Question.] Why is this not enough by itself? Well, we will understand it better through the proof, but in some sense when I derive I am working blindly: you see that derivation reduces the length — the word is infinite, but if I look at a finite block, I am making it shorter. It is the opposite of last week: last week we were zooming in, this is zooming out. I am kind of stepping back and roughly thinking of each of these blocks as a single letter, so derivation looks at the sequence at different scales. Does it make any sense?

Okay, so now we are ready for the proof, and for the proof I want to state the key lemma; we will first deduce everything from the key lemma and then prove it — this is really the key point. So let omega be of type 0 with value k, and consider the following matrix, which I will call A_k: the matrix with rows (1, 0) and (k, 1) — I hope I have the right matrix. I will only write the statement for type 0; if it is type 1, you have to take the transpose of this matrix. Remember that Lambda was my grid; let Lambda_k be A_k applied to the grid. Let me do an example: if k equals 3, this is my grid, and I apply A_3 — I can always make mistakes, and you are very good at spotting them, but you can check that (1, 0) goes to (1, 3). So Lambda_3 will be a skewed grid: the vertical lines stay vertical, but the horizontal lines now acquire slope 3. Do you see the picture? I don't have an animation here, but Lambda_k is a skewed grid — did I do it right? I hope so. And actually you can build it like this: take your square grid and apply the matrix with rows (1, 0) and (1, 1); you get a grid whose fundamental domain is a parallelogram, and then you iterate three times. Basically this matrix is a shear — it shears your square up by one — and you do it three times to get the skewed grid. Sorry, this example was an aside.

So, the key lemma: if A_k is as above, then omega prime, the derived sequence, is the cutting sequence of the same line L with respect to the skewed grid Lambda_k. What do I mean? I have my line L — it has slope with integer part three, say — and instead of recording intersections with the square grid, I record intersections with this skewed grid: I still record a 1 for the vertical sides of the parallelograms and a 0 for the slanted sides. So I forget about the square grid; I only have the skewed grid, and as I travel along the line I record 0s and 1s according to how I hit the sides of the parallelograms. Does it make sense — do you understand what the statement is? The statement is that the derived sequence has a geometric meaning: it is again a cutting sequence, not with respect to the unit grid but with respect to a skewed grid. This is what we are going to prove a little later, because I will start by using the lemma and showing you that once we have it, we are done — once we have this, we have our renormalization machine. [Question.] What is k? k is an integer — the value, a positive integer, the same as the length of the blocks.

First I want a corollary — this is what we will prove, but the corollary is how we will use it. So I have my line L and my grid Lambda_k, which is A_k of Lambda and is skewed — let me draw it more slanted. What happens if I apply A_k inverse? I want to straighten my picture: if I apply A_k inverse to Lambda_k, I go back to Lambda. And what happens if I apply A_k inverse to my line? It moves to some new line, which will be actually
something like this: this is L prime, which is A_k inverse of L. So I have this skewed picture, I straighten it, and this time I change my line. Maybe I will write it as a remark first — let's go slowly, because this really has to sink in: when you apply a linear (or affine) transformation to your whole picture, cutting sequences don't change. The cutting sequence of L with respect to Lambda_k is the same as the cutting sequence of L prime with respect to Lambda. Do you believe this? I am not doing anything — I am just linearly deforming my picture — so the cutting sequences are the same. It is very important that these linear operations don't change cutting sequences. Something non-trivial happened before: the key lemma relates derivation to the same line with respect to a new grid. Now I am doing something trivial: I am just deforming to make the grid square again. So the corollary — the corollary of the lemma plus the remark — is that omega prime, the derived sequence, is again a cutting sequence; let me stress it: it is again a square cutting sequence, because it is the cutting sequence of L prime with respect to the unit grid, with respect to Lambda. Is it clear what's happening? I have a sequence, and I derive it. If I show that the derived sequence is the cutting sequence of the same line with respect to the skewed grid, with respect to the parallelogram grid, then I can straighten the picture and find a new line whose cutting sequence with respect to the unit square grid is the derived sequence, okay? So do you see why we are done? Can anybody prove Theorem 1 now — why cutting sequences are infinitely derivable?
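Before the proof, let me record the derivation rule in executable form — a minimal sketch of my own (the function name and the boundary convention for finite windows are not from the lecture); it reproduces the example given earlier:

```python
import re

def derive(w):
    """One derivation step on a finite window of a sequence.
    Type 0 ('1's isolated): replace each block '1' + '0'*k by '1'.
    Type 1 ('0's isolated): replace each block '0' + '1'*k by '0'.
    The value k is the minimal block length; blocks cut off at the
    two ends of the finite window are ignored when finding k."""
    iso, rep = ("0", "1") if "11" in w else ("1", "0")
    runs = [len(r) for r in re.findall(rep + "+", w)]
    k = min(runs[1:-1] or runs)
    return w.replace(iso + rep * k, iso)

print(derive("01101101110110111"))  # the type 1, value 2 example: -> 0001001
print(derive("1000100010000"))      # a type 0, value 3 word:      -> 1110
```

Note that `str.replace` substitutes non-overlapping occurrences left to right, which is exactly the greedy block-by-block reading of the rule.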
So do you remember what we showed yesterday: if I have a cutting sequence, by elementary inspection I know that it is admissible. Now omega prime will be admissible, because it is the cutting sequence of a line — another line — so it is again admissible: its 0s and 1s come in blocks whose lengths are governed by the integer part of the new slope. So the proof of Theorem 1 is just this: omega is admissible because it is a cutting sequence — cutting sequence implies admissible. Then omega prime, by the corollary, is also a square cutting sequence, so it is also admissible, and I can derive it. So I can consider the second derived sequence, and, repeating with the key lemma again, it is the cutting sequence of some line with respect to some Lambda_2, so again admissible, and so on. Does it make sense? Being a cutting sequence guarantees admissibility; so if each derivative is a cutting sequence, each derivative is admissible and I can derive again.

And now for Theorem 2. For the proof of Theorem 2, I just need to compute theta prime, the direction of L prime. So what is the direction of L prime? I hope I don't get it wrong: I have to apply the matrix to (cos theta, sin theta) to see where the direction goes. Ah, sorry — I have to compute the inverse, because I am applying A_k inverse, so there is a minus sign: I get (cos theta, minus k cos theta plus sin theta). Taking the ratio, this gives tan theta minus k — or one over tan theta minus k if I take the reciprocal. Maybe I'll write it here; the key lemma we want to keep on the board. Okay, so the tangent of theta prime is related to the tangent of theta. Ah, sorry, this is badly written: this is the linear map, but I'm already taking the ratio.
So maybe I shouldn't write it that way — well, that's okay; I hope you understood what I meant: I first do the linear computation and then I take the ratio. So the tangent of theta prime, which is sine prime over cosine prime, is tan theta minus k. (Let me double check whether I got tangent or cotangent: I apply the matrix to (cosine, sine), I get (cosine prime, sine prime), and I take sine prime over cosine prime — tangent; yes, I would be happier with tangent, and I think that is what it is.) So I can solve: tan theta equals k plus tan theta prime. Now tan theta prime is something less than one, because you flip type; and this k was actually the integer part of tan theta — it is the a_0 in my continued fraction expression. If, at this point, you flip along the line x equals y, then instead of L prime you look at L prime flipped, and tan of theta prime flipped, which is one over tan theta prime, is now something greater than one; so tan theta equals k plus one over tan of theta prime flipped — one step of the continued fraction. Then you can repeat by finding the integer part of the new slope, and so on — maybe I should leave the details as an exercise; you should recover it. So there is this issue that each time you derive, the type flips.
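Theorems 1 and 2 can be checked together numerically — a sketch of my own, not from the lecture: generate a long window of the cutting sequence of a line with known slope, derive repeatedly while recording the values, and compare with the continued fraction entries of tan theta. The helper names are mine; finite windows need care at the ends, handled here by trimming the first and last blocks before each step.

```python
import math
import re

def cutting_sequence(s, n):
    """Window of the cutting sequence of a line of slope s > 1 (type 0):
    a '1' per vertical crossing, then floor((i+1)s) - floor(i*s) zeros."""
    return "".join("1" + "0" * (math.floor((i + 1) * s) - math.floor(i * s))
                   for i in range(n))

def trim(w):
    """Drop the first and last letter-runs: on a finite window they may be
    truncated blocks, or boundary junk left by a previous derivation."""
    runs = re.findall(r"0+|1+", w)
    return "".join(runs[1:-1])

def derive_once(w):
    """One derivation step; returns (value k, derived window)."""
    w = trim(w)
    iso, rep = ("0", "1") if "11" in w else ("1", "0")   # detect the type
    k = min(len(r) for r in re.findall(rep + "+", w)[1:-1])
    return k, w.replace(iso + rep * k, iso)

s = 1 + (1 + 5 ** 0.5) / 2        # slope tan(theta) = 2.618..., CF [2; 1, 1, ...]
w = cutting_sequence(s, 20000)
values = []
for _ in range(5):                # record a_0, ..., a_4 by deriving
    k, w = derive_once(w)
    values.append(k)
print(values)                     # -> [2, 1, 1, 1, 1], the CF entries of the slope
```

The slope 1 + golden ratio is a convenient test case because its continued fraction entries are all 1 after the first, so any bookkeeping error in the type-flipping shows up immediately.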
So when you derive a cutting sequence of type 1, you get a cutting sequence of type 0, and this corresponds to the slope of the line flipping between greater than one and less than one. If you want to read off the value as an integer part, it is convenient for the slope to be greater than one, so it is convenient to flip your line from slope less than one to slope greater than one. If I have a slope less than one, to find the value I need to take the integer part of one over the slope, because the line is close to horizontal — I take one over and see how many times I cross. Or I flip the line, get a slope greater than one, and immediately see the integer part appearing. Okay, I think we did it right — maybe I confused you, but I think it is correct — and if you repeat this by induction, you finish the proof and get Theorem 2.

Great, so we are left with the key lemma. Actually, if you read the Caroline Series paper, I find it slightly annoying: she says essentially everything I said today and yesterday, just much faster, and then she tells you that by inspection you can look at the cutting sequence with the skewed grid and convince yourself that it is the derived sequence — which I find is a little bit of a lie, because I have tried telling undergraduate students to fill in the details, and they always get confused. So let me show you how I see it clearly. I think it is really nice and convenient to break the matrix with rows (1, 0) and (k, 1) into smaller steps, writing it as the k-th power of the single shear with rows (1, 0) and (1, 1), and to analyze just the effect of a single shear, which is much easier. So what is happening? I have my square and I have my line, which now has slope greater than one.
This picture is already not good — I need some space. I have my square and I have my line; it's not big enough, I want two squares. And I need green for my skewed grid: the green grid is the shear with rows (1, 0) and (1, 1) applied to Lambda. So I do just one step — then we will repeat — and see what the effect of a single step is when I have to recode my sequence. My original cutting sequence was coded with 0s and 1s. Now, the 1s stay the same, because I did not change the vertical lines; but instead of recording the horizontals with a 0, I need to record the diagonals. Let me mark a diagonal crossing with a new symbol — a green 0, or, to avoid confusion, let me call it a 2 — which is going to play the role of the new 0: I record when I hit a diagonal, and I ignore the horizontals. Now you really have to do the elementary inspection. What do you notice? My trajectory has slope greater than one. If it crosses two consecutive horizontals — between a 0 and a 0 — there is a 2 in between; but between a 1 and a 0 there is none. If I go from a 0 to a 0, I am bound to hit the diagonal; if I go from a 1 to a 0, I do not hit it — whatever the position of the initial point, as long as my slope is greater than one. So why did I write this? My original cutting sequence was like this: zero, zero, zero, one.
So now I put the 2s in between: there is no 2 here, but between two 0s I put a 2, here and here and here; the 1s I have to keep. And then I can drop the 0s, because I don't care about the old 0s anymore. Do you notice what happened to the lengths of the blocks? Oh, sorry — by the way, this 2 is actually my new 0, so let me rename it. [From the audience: mod two!] Mod two, yes, thank you. In any case, what I wanted to say is that the length of each block decreased by one. Calling the 2s 0s, I get something like one, zero, zero, one: since I only hit a diagonal between two consecutive 0s, each block of 0s got shorter by one. And then you repeat k times. Again, I leave you to convince yourselves of the details, but I think this elementary step is much easier than doing it all at once.

And I do have an animation — I showed it to you, right? I was told not to show the animation and to do it on the board, but I still made it, so let me show it. Actually, my animation is for the other case, for slope less than one, just because horizontally I have more space. So this is my cutting sequence, with 0s and 1s, with respect to the square grid, and I add the diagonal, which is green — a green 1 this time; that is why you might be confused. And now you notice that when I cross between two blue 1s, I have a green 1, and between a 0 and a 1, or a 1 and a 0, I do not cross the diagonal. So in my augmented sequence I add a green 1 between any two blue 1s. And geometrically, you can actually cut your square and transform it into a parallelogram by taking this triangle and moving it to the other side.
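The board inspection above — slope greater than one, a diagonal crossing exactly between two consecutive 0s and never between a 1 and a 0 — can be run mechanically. A small sketch of my own (not from the lecture), iterating the single shear k times to recover the full derivation:

```python
def shear_step(w):
    """Recode a (slope > 1) cutting sequence after one shear by
    [[1,0],[1,1]]: keep the 1s (vertical crossings are unchanged),
    record a new 0 for the diagonal crossing that occurs between two
    consecutive old 0s, and drop the old 0s. The final symbol of the
    finite window is only used as a right neighbour (boundary effect)."""
    out = []
    for a, b in zip(w, w[1:]):
        if a == "1":
            out.append("1")          # verticals stay
        elif b == "0":
            out.append("0")          # a 0 followed by a 0: we hit the diagonal
        # a 0 followed by a 1: the trajectory misses the diagonal
    return "".join(out)

w = "1000100010000"                  # value k = 3: 0-blocks of length 3 or 4
print(shear_step(w))                 # -> 1001001000 (every 0-block shorter by 1)
for _ in range(3):                   # k = 3 shears = one full derivation
    w = shear_step(w)
print(w)                             # -> 1110, the fully derived word
```

Three applications of the single shear shrink each 0-block by three, which is exactly the replacement of every block 1 0^3 by a 1.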
And now this parallelogram is what I want to use to code: I want the 0s and the green 1s, so I drop the blue 1s. And if I want, I can renormalize at each stage: I can map the parallelogram back to the square by applying the inverse shear, changing my slope. Under this operation the slope changes by the Farey map — if you remember, accelerating the Farey map by taking the right number of iterates gives the Gauss map, so the Farey map is a slow version of the renormalization. Okay, so now I put it back into a square and I can change green to blue. And you see that this new sequence has shorter blocks: here the blocks had lengths two and three, here they have lengths one and two. So the blocks decrease by one, and you can repeat k times; your blocks shorten each time, okay?

So — when did I start? Okay, I have 10 minutes more, right? I finished what I really wanted to teach slowly about Sturmian sequences. Ah, no — I forgot: I wanted to say something about substitutions. So maybe I will just say one thing: I will tell you what a substitution is, so that I can give you, in the exercises, an equivalent rephrasing of this theorem. We have 10 minutes; I don't think I can do everything, but we'll see. Substitutions. There is a more concrete way to actually produce these cutting sequences, because derivation is like a test: how do you actually build such sequences? If you want to build them, it is more useful to do something which is like an anti-derivation. A substitution sigma is a map from letters — say 0 and 1 in my case — to words in 0 and 1. Okay, let me give you an example.
Let me define the substitution which I am calling sigma_0. I have to tell you what it does on 0 and what it does on 1: sigma_0 fixes 0, mapping it to itself, and 1 becomes 01. A substitution acts on bi-infinite sequences by substituting each letter. For example, let me compute sigma_0 on a finite block, say 1 1 0 1 1 0 — my sequence could be bi-infinite, but let us just look at a finite piece. As I go through the sequence, each 1 I replace with 01, and each 0 I keep: so I get 01, 01, 0, 01, 01, 0, and so on — I substitute each letter. And then I can iterate the substitution and apply it again. Actually, just for curiosity, let me write the k-th power: sigma_0^k is sigma_0 composed with itself k times. What does it do on 0? Nothing. What does it do on 1? The 1 produces 01; then the 0 stays and the 1 produces 01 again; and you can convince yourself that sigma_0^k of 1 is 0^k 1. You might notice that these blocks look familiar: the substitution looks a little like an anti-derivation. Derivation was taking blocks of this form and shrinking them; the substitution takes letters and expands them. And there is a companion substitution, sigma_1, which does the opposite: 1 is fixed, and 0 goes to 10 — I hope I didn't get them wrong; I think it is the swapped one, 10, but I will check later for the exercise.
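The two substitutions in code — a sketch of my own notation (and, as said, the convention for sigma_1 may be the swapped one):

```python
SIGMA0 = {"0": "0", "1": "01"}   # sigma_0: 0 -> 0, 1 -> 01
SIGMA1 = {"1": "1", "0": "10"}   # sigma_1: 1 -> 1, 0 -> 10 (convention to check)

def substitute(rule, w):
    """Apply a substitution letter by letter."""
    return "".join(rule[c] for c in w)

print(substitute(SIGMA0, "110110"))   # the example above: -> 0101001010

w = "1"
for _ in range(3):                    # sigma_0^3 sends 1 to 0^3 1
    w = substitute(SIGMA0, w)
print(w)                              # -> 0001
```

The second print illustrates the k-th power computation: iterating sigma_0 on the letter 1 grows the block of 0s one letter per step, the anti-derivation picture.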
Okay, so sigma_0 and sigma_1 are called the Sturmian substitutions — not surprisingly, as you will see. And one can state the following Theorem 3, which you can deduce from Theorem 2 — this is what I will ask you to do. Omega is a square cutting sequence — or, if you want, is in the closure of the square cutting sequences — in direction theta, with tan theta having continued fraction entries a_0, a_1, a_2, ..., if and only if, in some sense, I can get omega by applying the substitutions. Let me write this in two ways; the first expression may not be so clear at first. You can write it as an intersection over n of the following sets: omega belongs, for every n, to the image of sigma_0^{a_0} sigma_1^{a_1} sigma_0^{a_2} ... sigma^{a_n}, applied to the whole space {0,1}^Z. So I alternate the two types, sigma_0 and sigma_1, the powers are the continued fraction entries, and I stop at some a_n — the last substitution is sigma_0 or sigma_1 according to the parity of n. The condition means that for every n there exists a sequence to which I can apply the substitutions in this order and get my sequence. [Question.] Yes, we can write it like this, stopping at sigma_0 to the a_{2n} — that's what you wanted, right? Yes.

Another way to say this, which maybe looks more constructive, is as a limit. You can start from the periodic sequence of 1s, apply to it the composition sigma_0^{a_0} sigma_1^{a_1} ... backward from the innermost substitution up to some level, and let the level go to infinity. Basically — what am I saying? — I have an exercise.
I really want you to try this yourself. Start from a block of 1s and apply these substitutions from index 2n down to 0. Then repeat the same thing starting from 2n+2 and going backward — the same composition, except that you add sigma_1^{a_{2n+1}} and sigma_0^{a_{2n+2}} before the periodic block of 1s. What you will notice is that the sequences you get have longer and longer common blocks: the substitutions are made so that, if I start from very far away and apply them, the resulting words share a common central block. So this limit is a limit in the topology on {0,1}^Z that we had: sequences converge when they share longer and longer central blocks. This is a constructive way to build your sequences: if I want to build a cutting sequence with a certain slope, I compute my continued fraction entries, and if I want a finite block, I just take a sufficiently large n and run this process on an initial word; my word will get quite long and contain the block I want. Okay, I think I have to stop here, but I will give you some exercises where you can try to apply the substitutions yourself and see how they work, and you will have a way to build Sturmian sequences. I like this theorem because in some sense it is more constructive — it really allows you to build the sequences — while the previous theorem is more of a geometric characterization. And you can actually quite easily deduce Theorem 3 from Theorems 1 and 2; this will also be a challenge — not too deep, but a good exercise to try.
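The exercise can be tried directly in code — a sketch of my own, with all-ones continued fraction entries and a finite one-sided seed (so only initial blocks, rather than central blocks of bi-infinite words, are compared; the sigma_1 convention is the guessed one from the lecture):

```python
SIGMA0 = {"0": "0", "1": "01"}
SIGMA1 = {"1": "1", "0": "10"}     # sigma_1 convention as guessed in the lecture

def substitute(rule, w):
    return "".join(rule[c] for c in w)

def approximation(entries):
    """Apply sigma_0^{a_0} sigma_1^{a_1} sigma_0^{a_2} ... (innermost first)
    to a finite block of the periodic sequence of 1s."""
    w = "1" * 8                                  # finite seed block
    for i in reversed(range(len(entries))):      # innermost substitution first
        rule = SIGMA0 if i % 2 == 0 else SIGMA1
        for _ in range(entries[i]):
            w = substitute(rule, w)
    return w

def common_prefix(u, v):
    """Length of the longest common initial block of u and v."""
    n = 0
    while n < min(len(u), len(v)) and u[n] == v[n]:
        n += 1
    return n

# deeper compositions agree on longer and longer blocks
p = [common_prefix(approximation([1] * m), approximation([1] * (m + 2)))
     for m in (2, 4, 6)]
print(p)   # the agreement lengths grow with the depth of the composition
```

The growing agreement lengths are the finite-seed shadow of the convergence in the topology on {0,1}^Z described above.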
Okay, and tomorrow I will start my lecture — just for fun — by showing you some slides and telling you that you can do the same not only for the square but also, for example, for other regular polygons like the octagon; I will show you some slides from a talk of mine, so you see that this is not too far from quite recent research which builds upon these ideas. And then we will move to translation surfaces and interval exchange maps, and to what you do when you cannot renormalize so nicely geometrically.