So, last lecture we started our study of linear maps, in particular two-dimensional maps, and for the moment we focused on the case in which the eigenvalues are real, which is in itself a fairly special case, even though it's important. Today we will look at the case in which the eigenvalues are complex, so we start with a proposition straight away: if A : R^2 -> R^2 is an invertible linear map with eigenvalues alpha ± i beta, with beta different from zero — as you know, complex eigenvalues always come in complex conjugate pairs — then it is linearly conjugate to the matrix [[alpha, beta], [-beta, alpha]]. So this matrix is in some sense the diagonal form, or the canonical form, of matrices with complex eigenvalues, right? I will actually skip the proof of this proposition, it's a basic result of linear algebra, and I'd rather concentrate on the analysis. Linearly conjugate means the same thing as before: there exists a linear map P such that, if we call this matrix B, A composed with P equals P composed with B. In particular this defines the linear conjugacy classes: by the lemma we talked about the other day, if two linear maps are linearly conjugate then they have the same eigenvalues, okay? So if you take two maps with different eigenvalues, they are linearly conjugate to different canonical forms, and so these two maps cannot be linearly conjugate to each other. Using this linear conjugacy we can therefore reduce the study of all such linear maps, up to linear conjugacy, to the study of this family of maps. So let's try to understand the dynamics of linear maps of this form. We have a map from R^2 to R^2.
The easiest way to understand this is to identify the space R^2 with the complex plane: a point (x, y) is identified with the complex number x + i y. So we think of R^2 as the complex plane, where this is the real axis and this is the imaginary axis. Why do we do this? Because then the map takes a very simple form, and it's the following. Calling the matrix B, to be consistent with what we have, the action of B corresponds exactly to multiplication by alpha - i beta. What do I mean by this? Take a vector v with coordinates v1, v2 — the same notation I've been using before. Then B(v1, v2) = [[alpha, beta], [-beta, alpha]] (v1, v2), and this is just equal to (alpha v1 + beta v2, -beta v1 + alpha v2). This is the image of the vector (v1, v2) under the matrix B. Now write the same vector as a complex number, v = v1 + i v2 — just a slightly different way of writing the vector. Then, multiplying as complex numbers, (alpha - i beta)(v1 + i v2) = alpha v1 - i beta v1 + i alpha v2 + beta v2 = (alpha v1 + beta v2) + i (alpha v2 - beta v1), which you can see is exactly the same as what we have above, right?
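The identity just derived is easy to check numerically. Here is a small sketch (the specific values of alpha, beta and v are my own illustrative choices, not from the lecture) verifying that applying the matrix B to (v1, v2) gives the same result as multiplying v1 + i v2 by alpha - i beta:

```python
import numpy as np

# Check that B = [[alpha, beta], [-beta, alpha]] acting on (v1, v2)
# agrees with complex multiplication by alpha - i*beta on v1 + i*v2.
alpha, beta = 0.8, 0.5          # illustrative values, beta != 0
B = np.array([[alpha, beta],
              [-beta, alpha]])

v = np.array([1.3, -0.7])       # a sample vector (v1, v2)
Bv = B @ v                      # image under the linear map

w = (alpha - 1j * beta) * (v[0] + 1j * v[1])   # complex multiplication
assert np.isclose(Bv[0], w.real)
assert np.isclose(Bv[1], w.imag)
```

The same check passes for any choice of alpha, beta, v, since the two computations are algebraically identical.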
The real part, which is the horizontal coordinate, is alpha v1 + beta v2, and the imaginary part is -beta v1 + alpha v2 — it's exactly that. So this is remarkable in some sense, it's like a little magic. What we're saying is that this is, in disguise, a completely real linear map from R^2 to R^2, and it acts on vectors in a way that is not so easy to visualize: you take a vector v, and the new horizontal coordinate of its image is given by some combination of v1 and v2, the vertical coordinate by another combination — it's not immediately easy to see what this map does. But by identifying R^2 with the complex plane, this action becomes exactly multiplication by the complex number alpha - i beta: you write your point as a complex number, multiply by alpha - i beta, and you get the image under the linear map. Why is this useful? Because multiplication of complex numbers is something we understand fairly well. So what are the iterates? B^n(v) = (alpha - i beta)^n (v1 + i v2). All you're doing when you iterate this map is multiplying again and again by this complex number. Does this help us understand the dynamics? Remember what we're trying to do: take a vector, iterate it under the map, and see what it does. What is the omega limit? Does it go to infinity? Does it go to zero? So it boils down to the powers of this complex number, which is one of the eigenvalues: the forward orbit of the vector is essentially determined by the powers of this eigenvalue.
So what are the powers of this eigenvalue? Anyone have an idea how to understand what's happening here? Exactly — it's much better to multiply complex numbers in polar coordinates. Once you're in polar coordinates it's very easy to multiply, and we see exactly what happens. So write alpha - i beta = r e^{i theta} in polar coordinates, where r = sqrt(alpha^2 + beta^2) is the modulus and theta is the argument, determined by cos(theta) = alpha / r and sin(theta) = -beta / r. This is the formula that translates the standard complex notation into polar coordinates. In this notation, how do you raise to the n-th power? You raise the modulus: you get r^n, and then you get e^{i n theta}, times the initial vector in polar coordinates. So write the initial condition as r_0 e^{i theta_0} — this defines r_0 and theta_0 for the starting point — and then we can read off exactly what the image is: B^n(v) = r^n e^{i n theta} · r_0 e^{i theta_0} = r_0 r^n e^{i (theta_0 + n theta)}. These are the iterates. And what do they look like? What is this point doing? Let's look at the modulus first. Is the point converging to 0? Is it converging to infinity? Is it doing something strange? It depends on whether r is less than 1 or bigger than 1. Notice that here too the value 1 is the crucial dividing point. And what is r? r is exactly the modulus of the eigenvalue, so it depends on the eigenvalue.
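The polar formula for the iterates can be checked against direct matrix iteration. A numerical sketch (again with arbitrary illustrative parameters): the n-th matrix power applied to v agrees with r_0 r^n e^{i (theta_0 + n theta)}.

```python
import numpy as np

# Iterating B is the same as multiplying the modulus by r = |alpha - i*beta|
# and adding theta = arg(alpha - i*beta) to the argument at each step.
alpha, beta = 0.6, 0.4            # illustrative values
B = np.array([[alpha, beta],
              [-beta, alpha]])
lam = alpha - 1j * beta
r, theta = abs(lam), np.angle(lam)

v = np.array([2.0, 1.0])
z0 = v[0] + 1j * v[1]
r0, theta0 = abs(z0), np.angle(z0)

n = 7
vn = np.linalg.matrix_power(B, n) @ v                 # matrix iteration
zn = r0 * r**n * np.exp(1j * (theta0 + n * theta))    # polar formula
assert np.allclose(vn, [zn.real, zn.imag])
```

Since r < 1 here, increasing n in this sketch shows the modulus r_0 r^n shrinking geometrically while the argument advances by theta each step.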
So here are the eigenvalues, and this is the unit circle. If alpha - i beta is inside the unit circle, that means the modulus r is less than 1, yeah? And so what happens to the point? Its distance from the origin shrinks to 0 as you iterate in forward time. And how does it shrink to 0 — how does the actual point move? As you remarked, at every iteration you rotate by the angle theta. So it shrinks while rotating: the orbit of v is contained in the parameterized curve {r_0 r^t e^{i (theta_0 + t theta)} : t in R}. Just like in the real case, you can think of a parameterized curve sitting in your space. What does this curve look like? So this is the eigenvalue picture, and this is the dynamical space. You start at some point r_0 e^{i theta_0}: when t = 0 — and the parameter t at integer values is just the iterate number n — the curve passes exactly through r_0 e^{i theta_0}. Then as t increases continuously, the point slowly rotates by the angle t theta — a small angle when t is small — while slowly shrinking by the factor r^t. So what you get is a slowly shrinking spiral. The actual orbit is a discrete set of points, because we take only the integer values of t, but it lies on this curve. So this is v_0 = r_0 e^{i theta_0}, this is v_1, this is v_2, and so on: at each step the rotation is by the same angle theta, and the point shrinks in by a fixed factor, v_3 and so on, but it always lies on this curve. Of course, how fast it shrinks depends on the value of r. If r is very close to 0, it shrinks very fast, so the picture might look, for example, like this.
It's still spiraling, but spiraling in very fast. If r is very close to 1, it shrinks very slowly. So this all makes sense — this is, again, the case where the eigenvalues are inside the unit circle. And remember, the two eigenvalues are complex conjugates, so if one of them is inside the unit circle, the other one is too. What happens if the eigenvalues are outside the unit circle? Exactly: then r is bigger than 1, and you get exactly the same picture, but in forward time the spiral goes out. Of course, in backward time it goes in — there's an exact symmetry. The spiral exists for all t in R, positive and negative values of time, and for negative values of time it spirals inwards. So the situation is completely symmetric between forward and backward time. What happens if the eigenvalues are on the unit circle? Then r = 1, but you still get a rotation. If r = 1, what is the picture? You have your point v_0, and the curve r_0 e^{i (theta_0 + t theta)} has constant modulus r_0, so it's a circle, okay? It's not hyperbolic, exactly, and we will see later what the significance of that is — it's very significant, there's a very big difference, and you can see one immediately. This is a version of the non-hyperbolic case with real eigenvalues: in those cases you had that every point was a fixed point. Here you no longer have that every point is a fixed point, but every point lies on an invariant circle.
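The non-hyperbolic case r = 1 can be checked directly too. In this sketch (the angle and starting vector are parameters of my own choosing), the matrix is a pure rotation and every orbit stays on its circle:

```python
import numpy as np

# When alpha^2 + beta^2 = 1 the modulus r equals 1: the map rotates by
# theta and preserves the distance to the origin, so orbits lie on circles.
theta = 0.3
alpha, beta = np.cos(theta), np.sin(theta)
B = np.array([[alpha, beta],
              [-beta, alpha]])

v = np.array([1.5, -0.4])
orbit = [v]
for _ in range(50):
    orbit.append(B @ orbit[-1])           # iterate the map

radii = [np.linalg.norm(p) for p in orbit]
assert np.allclose(radii, radii[0])       # distance to origin is invariant
```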
So if you start on such a circle at a certain distance from the origin, you stay at that distance for all time. So you have all these invariant circles, and next week we will study the dynamics on these invariant circles, which is very rich and very interesting. But that is the non-hyperbolic case, and for the moment we're not going to worry too much about it. Okay, so this is really how you understand the dynamics — it's very simple. Before we go on to talk about topological conjugacy, let me remind you that these are the pictures you get for the normal form, for matrices of this special shape. But we said that in general, a matrix with complex eigenvalues is linearly conjugate to this form. So what does the picture look like in the general case, for a map that is linearly conjugate to the normal form but not itself of that form? In general, if A = [[a, b], [c, d]] has eigenvalues alpha ± i beta with beta different from zero — because otherwise the eigenvalues would just be real — then what is the picture? We know that A is linearly conjugate to [[alpha, beta], [-beta, alpha]]. Certain things have to be preserved: linear conjugacy is clearly a special form of topological conjugacy. So if the eigenvalues are inside the unit circle, where in the normal form everything converges to zero, then here too everything must somehow converge to zero. If you take some vector v_0, it must converge to zero. The linear conjugacy says quite a bit more: there's a linear change of coordinates, if you want, that maps this picture to that one. So the picture here will be very similar, and basically the same thing happens as in the case of real eigenvalues.
The only thing that can happen is that the picture gets squashed in certain directions, right? Because that's what a linear conjugacy does. It's a linear change of coordinates, so there are not that many strange things it can do: all it can do is map a circle to an ellipse and squash the picture in a certain way. So, for example, it may look like this. It's still converging to the origin, but a key difference — and in fact one of the reasons it's so much more useful to start in the normal form — is that it does not converge monotonically to the origin. We will use in a second the fact that in the normal form you converge monotonically to the origin, in the sense that at every step of the orbit you are strictly closer to the origin than before. Notice that this is not the case here: you start at a certain distance from the origin, then you come closer, but then the next image might map, for example, to here, further away than where you were. Then you come closer again, then a bit further, but not as far as before. Every time, on average, you're getting closer, but not monotonically: closer, a little further, a little closer, a little further, but on the whole converging. And this makes a difference to how we study it. The good thing is that whenever we have this kind of picture, we can find a linear conjugacy that maps it to the normal-form case, and that's what we're going to do. So the next step is topological conjugacy. Suppose we have two maps with different eigenvalues — when are two such maps topologically conjugate?
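The failure of monotone convergence outside the normal form is easy to exhibit numerically. In the following sketch (the matrix is an assumed example, not one from the lecture), the eigenvalues lie inside the unit circle, so every orbit tends to 0, yet a single step can move a point farther from the origin:

```python
import numpy as np

# A matrix with complex eigenvalues of modulus < 1 whose orbits do NOT
# shrink monotonically: one application of A can increase the norm.
A = np.array([[0.5, 2.0],
              [-0.1, 0.5]])
assert np.all(np.abs(np.linalg.eigvals(A)) < 1)   # orbits converge to 0

v = np.array([0.0, 1.0])
assert np.linalg.norm(A @ v) > np.linalg.norm(v)  # but this step moves out

# The orbit still tends to 0 on the whole:
w = np.linalg.matrix_power(A, 40) @ v
assert np.linalg.norm(w) < 1e-3
```

Here the eigenvalues are 0.5 ± i·sqrt(0.2), with modulus sqrt(0.45) < 1, yet |A v| is about twice |v| on the first step.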
For example, suppose we have A = [[a, b], [c, d]] here and A' = [[a', b'], [c', d']] there, and suppose they both have complex conjugate eigenvalues. Suppose one of them converges quite slowly to zero, like this, with eigenvalues inside the unit circle — say |alpha ± i beta| < 1 — and the other has |alpha' ± i beta'| > 1. Can they be topologically conjugate? No — why not? Different omega limits: in one case everything converges to zero in forward time, in the other everything goes to infinity. So let's suppose both have modulus less than one, but the pictures look different: this one could be like this, and the other might have a different shape and go very, very fast to zero. Might these be topologically conjugate? They have different eigenvalues, so they cannot be linearly conjugate — but as we've seen before, topological conjugacy is a much weaker condition. Is there any immediate obstruction to their being topologically conjugate? Sorry? The rotation — they might rotate by different angles, in principle, in principle. And what if one of them actually has real eigenvalues? So suppose the picture for this one is like one of the cases we studied before, with real eigenvalues, and it looks like this. Do you think a map with real eigenvalues could be topologically conjugate to a map with complex eigenvalues? They look very different. But they both have a globally attracting fixed point — a fixed point that attracts everything. Okay, so what is the theorem going to be?
Theorem: two hyperbolic invertible linear maps with distinct real or complex eigenvalues — if they're complex, they're automatically distinct; I'm just emphasizing that we are not restricting ourselves to complex eigenvalues, we allow real eigenvalues in general as long as they are distinct — are topologically conjugate if and only if their fixed points are of the same type, by type I mean attracting, repelling, or saddle, and have the same index, by which I mean the same number of eigenvalues with positive real part. The most important part of this theorem is the first part, which says in particular that these two systems are topologically conjugate. So forget about the distinction: on a topological level, complex eigenvalues versus real eigenvalues makes no difference. If the maps are hyperbolic — we're assuming they're hyperbolic linear maps — and the fixed points are attracting, they're topologically conjugate, including this case and this case. What this means — remember that a topological conjugacy has to map orbits to orbits — is that essentially we have to find a homeomorphism that maps this spiral to one of these curves. That is why it seems intuitively so strange that this would work: for this to be a homeomorphism, you need to be able to map this spiral that goes down and around onto this curve, and it is not at all obvious that you can. So that's the most important part. The second part is a requirement basically related to what you were saying about orientation: you can have cases in which both fixed points are attracting but the maps differ in orientation, and the number of eigenvalues with positive real part — equivalently, the number with negative real part — keeps track of that.
And remember, if you have one eigenvalue with positive real part and one with negative real part, then the map is orientation reversing; otherwise it is orientation preserving. So the index condition takes care of the fact that the two maps have to have the same orientation — it's an additional, in some sense technical, requirement. The point we want to emphasize is the first part. Of course, the saddle type can only occur if the eigenvalues are real, and we have already proved that part: if both fixed points are saddles with the same index, we've already discussed the topological conjugacy in the case of real eigenvalues. So the part we need to concentrate on is the case in which the fixed points are attracting or repelling — and the two cases are completely symmetric, as we said, because one is attracting in forward time and the other in backward time. So we will just prove the theorem for two invertible hyperbolic linear maps with attracting fixed points, and everything else is taken care of. And just for simplicity, let us choose this particular configuration: one map with complex eigenvalues and one with real eigenvalues. We could do it with two maps both of which have complex eigenvalues — it's exactly the same argument — but for generality, and to emphasize that this mixed case is also included, we will carry out the construction in this case. Okay, so let's do that now, yeah? As before, we need to construct the conjugacy. So what's the first thing we can do? Notice that here, in fact, I didn't even draw the general case — remember that this is already the diagonal case. In the general case you would have something a little bit more complicated, with the two eigenspaces, and the picture would be more something like this.
The orbits would always come in tangent to a certain direction, depending on the specific eigenvalues. So this is the more general picture in the case of real eigenvalues with a hyperbolic attracting fixed point, and this is the general case with complex eigenvalues. The first step, just as in the real case, is that by linear conjugacy each of these maps is conjugate to its own canonical form: this one to a matrix of the form [[alpha, beta], [-beta, alpha]], and this one to a diagonal form. So the first thing we do is put them in their normal forms, and study those. Proof: suppose A = [[a, b], [c, d]] has eigenvalues alpha ± i beta with beta different from 0 and |alpha ± i beta| < 1, just for definiteness, and A' = [[a', b'], [c', d']] has distinct real eigenvalues lambda_1 ≠ lambda_2, lambda_1, lambda_2 in R, with |lambda_1| < 1 and |lambda_2| < 1, okay? This is just the specific case in which both fixed points are attracting; the other cases are all done in exactly the same way. Then A is linearly conjugate to [[alpha, beta], [-beta, alpha]], and A' is linearly conjugate to [[lambda_1, 0], [0, lambda_2]]. So it's enough to show that these two canonical maps are topologically conjugate, because linear conjugacy is in particular topological conjugacy, and then by the properties of the equivalence relation that would imply A and A' are topologically conjugate as well. So, for simplicity, let's just call the canonical forms A and A' too, and show that these are topologically conjugate. How are we going to do it? We're going to use the technique we know: fundamental domains. So what is the picture?
And this is where we use the fact that these are normal forms. The picture here is the orbit spiraling inwards like this, and the picture there is that one. Are we going to be able to find a homeomorphism h mapping these two pictures to each other? The trick here is not to get scared — we can do this. We're going to find a fundamental domain for each map, then map the fundamental domains to each other by a homeomorphism, and use exactly the same technique as in the one-dimensional case. So how do we find a fundamental domain? Let's do this one first; maybe its fundamental domain is a little easier to visualize. What do you think it's going to be? An annulus — good one. Where shall we put the annulus? Yes — wherever we want. For example, let's take the unit circle, but any circle will do; that's one boundary of the annulus. How do we define the other boundary? We map the circle, very good: we take the unit circle and then the image of the circle. Now, how do we know that the image of the circle is strictly contained inside it? In this picture we know because every point maps strictly closer to the origin. In this case the image of the circle will not be a circle, because the contraction is stronger in the horizontal direction — it will be an ellipse, something like this: the image of the unit circle under the linear map. Each of these points maps to the corresponding point on the ellipse — this point to this point, this point to this point — and the invariant curves define exactly where each point on the outer circle maps on the inner boundary. Our claim is that the region in between is a fundamental domain. So let's write this a little more formally: let S be the unit circle, and consider D, the annulus bounded by S and its image.
And this map is what we called A', so its image of S is A'(S): this here is S, and this is A'(S). Before stating the lemma, let's construct both fundamental domains at the same time. On the other side we can also take the unit circle, and what will its image be? In this case it will actually be a circle, because you're rotating and shrinking uniformly in every direction, so you just get a smaller circle. So here this is also the unit circle S, and this here is A(S), and the region between them is the annulus in this case. And again, notice that the curves tell you where the points map: the interior boundary of the annulus is the image of S, and in particular this point maps to here. Here I drew just one spiral, but really there are many such spirals through every point — they're a bit more difficult to draw, which is why I didn't. But this point maps to this point: every point on the outer boundary of the annulus rotates and shrinks and maps to some point on the inner boundary, so the spirals give a bijection between the outer boundary and the inner one. Similarly we define D', the annulus bounded by S and A'(S). So the lemma is that D and D' are fundamental domains for A and A'. On what? Not exactly on the whole space R^2 — we'll come to that. Sorry, excuse me, let me repeat: D is the annulus whose boundaries are S and A(S), and D' the annulus whose boundaries are S and A'(S). The blue region is the one bounded by the unit circle. We can call the second unit circle S' or just S, it doesn't really matter — if you want, we can call it S'. Is that what was confusing you?
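The claim that the unit circle and its image bound an annulus can be checked numerically for the diagonal normal form. In this sketch (lambda_1 and lambda_2 are illustrative values), the image of the unit circle is an ellipse lying strictly inside the circle:

```python
import numpy as np

# For A' = diag(lambda1, lambda2) with |lambda_i| < 1, the image of the
# unit circle S is an ellipse contained strictly inside S, so S and
# A'(S) bound an annulus D'.
l1, l2 = 0.5, 0.8                               # illustrative eigenvalues
t = np.linspace(0.0, 2.0 * np.pi, 400)
S = np.stack([np.cos(t), np.sin(t)])            # points on the unit circle
image = np.diag([l1, l2]) @ S                   # the ellipse A'(S)

radii = np.linalg.norm(image, axis=0)
assert radii.max() <= max(abs(l1), abs(l2)) < 1.0   # strictly inside S
assert radii.min() >= min(abs(l1), abs(l2)) > 0.0   # and away from the origin
```

The same check with the matrix [[alpha, beta], [-beta, alpha]] would give radii all equal to sqrt(alpha^2 + beta^2), matching the remark that on the complex side the image is again a circle.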
Here I'm mapping by A', so the boundaries are S and A'(S), okay? Now, these are fundamental domains, but they cannot be fundamental domains for everything. Remember the origin is a fixed point — the only point fixed in both forward and backward time — so it has no chance of ever entering the fundamental domain. This is just a small remark: these are fundamental domains at most for everything except the origin. So how do we prove the lemma? Well, it's not difficult, because it is really just the one-dimensional argument again: every point lies on one of these invariant curves, so it is sufficient to show that the annulus defines a fundamental domain on each curve. For any initial condition you choose, its orbit in forward and backward time lies on one invariant curve, and this invariant curve is really just a one-dimensional object, so the argument is exactly the same as in the one-dimensional case. If you start outside the annulus, then in forward time you must at some point meet the annulus. Why must you meet it? Because the dynamics is order preserving along the curve — we assume here the orientation-preserving case; in the orientation-reversing case you have to switch things around and keep that in mind a little. But in the case where all the eigenvalues are positive, the orbit just moves monotonically along this curve, and since the dynamics is order preserving along the curve, once you reach the annulus you cannot jump all the way across to the other side of it. Can you see that? Are you convinced? So let's try to write this down. Proof.
Each x in R^2 minus {0} lies on a curve gamma_x which is invariant under A, that is, A(gamma_x) = gamma_x. Do you agree with this? Each of the curves I drew is invariant, because we said that every point on one of these curves has its image on the same curve; so if you take the whole curve and look at its image under the map, you get the curve itself. And the same for each of the spirals: if you start on a spiral you stay on the spiral, which means A(gamma_x) = gamma_x. So these curves are invariant under the dynamics. Moreover, the dynamics on gamma_x is monotonically decreasing to 0 in forward time. How do we know this? Exactly — because we've used the normal forms. This is the crucial point where we take advantage of having reduced, by linear conjugacy, to the normal forms; it's the remark I made before. The crucial feature of the normal forms is exactly that you converge monotonically to 0. If you did not have the normal form — let me make a little picture here of what the general picture looks like: it looks like this — then if you take the unit circle, its image might not even bound an annulus. And also an invariant curve might intersect the unit circle in many points: look, this curve, which is one of the invariant curves, intersects the unit circle here, here, here, here — in many points. After some time it eventually stays inside the unit circle for all time, but before that it might come in and out, in and out.
Whereas when you pass by linear conjugacy to the normal form, then, as I said before, the iterates are r_0 r^n e^{i (theta_0 + n theta)}, and r_0 r^t is decreasing monotonically in t. So the orbit lies on this parameterized curve, and its distance from the origin goes to zero monotonically. So the in-and-out behaviour cannot happen: the curve can only cross the unit circle once — and it must cross it once. So gamma_x intersects the unit circle S in a unique point. Every curve does: no matter what initial condition you have, outside the unit circle or inside, it lies on an invariant parameterized curve which intersects the unit circle in a unique point. This is how we've used the normal form. And it's the same on the other side: in the general setting with real eigenvalues, the eigenspaces might be tilted like this, and — okay, I didn't draw it very well — you might have a situation where an invariant curve crosses the unit circle, comes out the other side, and then converges back in. So in the general case the curve might intersect the unit circle several times, three times in this picture instead of once; but when you straighten it out by linear conjugacy, it must intersect only once. So, what I want to emphasize here is not so much the dynamics but the geometry: gamma_x, as a parameterized curve in t, is monotonically decreasing to zero in forward time. And moreover, the orbit of x on gamma_x is also monotonically decreasing in forward time.
And monotonically increasing in backward time. It is sufficient to show that for all x in R^2 minus 0 there exists a unique tau(x) such that A^(tau(x))(x) belongs to gamma_x intersected with D. I have not really said anything here; I am just formalizing the fact that all we need to do is look along one of these curves. As I said, this is exactly the one-dimensional argument, and here is how it goes. Suppose first that we take x outside the unit circle, and let tau(x) >= 1 be the smallest integer such that |A^(tau(x)-1)(x)| > 1 >= |A^(tau(x))(x)|. You agree that such an integer exists, right? You just look at the point x and wait, and there will be some time when it reaches the unit circle: at time tau(x)-1 it is outside, and at time tau(x) it is either on the unit circle or inside it. This must happen because the point converges to zero in forward time. Then, again by the monotonicity of orbits, we have |A^(tau(x))(x)| > |A(gamma_x intersect S^1)| >= |A^(tau(x)+1)(x)|; these are absolute values here. All I have done is apply A to each of the three points. The orbit of the point x lies on this curve gamma_x, so I take the first time such that A^(tau(x)-1)(x) is here and A^(tau(x))(x) is here; then this is the point of intersection of the curve with S^1. So you have three points on this curve whose distances from the origin come in a specific order. And when I apply A to all three of these points, this order is preserved, right? So what happens when this order is preserved?
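The time tau(x) in this argument can be found by simply iterating, as in this sketch (the map is taken in normal form, B(v) = lam*v, with a hypothetical multiplier of modulus less than 1, so the orbit is guaranteed to reach the unit disc):

```python
def tau(lam, x, max_iter=10_000):
    """Smallest integer t >= 1 with |B^(t-1) x| > 1 >= |B^t x|,
    for B(v) = lam * v with |lam| < 1 and a starting point |x| > 1."""
    t, v = 0, x
    while abs(v) > 1.0:
        v = lam * v
        t += 1
        if t > max_iter:
            raise RuntimeError("orbit did not reach the unit disc")
    return t

lam = 0.6 - 0.5j        # hypothetical eigenvalue data, |lam| < 1
x = 5.0 + 0.0j
t = tau(lam, x)
# By construction: outside the circle at time t-1, inside (or on it) at time t.
assert abs(lam ** (t - 1) * x) > 1.0 >= abs(lam ** t * x)
```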
Well, this point here maps to A^(tau(x))(x), because this is A^(tau(x)-1)(x) and this is A^(tau(x))(x). So this maps to here, this point maps to here by definition of this annulus, and therefore this point must map outside the annulus, which is what we are saying here: |A^(tau(x)+1)(x)| is less than or equal to this. And this is exactly the inner boundary of the annulus. So this shows the uniqueness. What this shows should be intuitive: each point must fall inside the annulus, and it falls in at a unique time, because once you are inside the annulus, the next iterate takes you out of the annulus. There is no other alternative: there is a unique time such that this holds. Any questions? This is exactly the one-dimensional case. If you have done the exercise on fundamental domains for the one-dimensional case, it is exactly the same: this is how we proved that the interval we chose in the one-dimensional case is a fundamental domain, by exactly the same argument. On the line you have points moving along, and they can only land in the fundamental domain once, by exactly the same argument. So here, this curve plays the role of the real line in the one-dimensional case. Restricting to this curve, we have all the properties of a one-dimensional map; it is not linear in this case, but it has exactly the same properties, and this is exactly the same argument as before. So if you are finding this difficult, my suggestion is to do the one-dimensional case, and you will understand exactly what is going on. And there are a couple of comments. First of all, this holds in both cases, because we have not used anywhere the particular properties of this map.
When you just look at the annulus, if you forget about what happens inside here and outside here, if you forget what the picture actually looks like, then it is exactly the same: all the arguments are exactly the same. You just use the fact that every point lies on an invariant curve, that each of these invariant curves intersects the unit circle in a unique point, and that this is its image. The only difference between the two pictures is that these curves look a little straighter, whereas these have a rotation component; this maps to this, this maps to this, and this maps to this. But the argument about monotone decrease is exactly the same, so there is no difference in the proof that this is a fundamental domain. The second comment is that we have to be a little careful about the boundaries. Should the boundaries be included in the fundamental domain or not? Exactly one of the boundaries should be included. I did not mention that before because I wanted to wait. You see, if you include both boundaries, then this point lies in the fundamental domain and its image also lies in the fundamental domain; this is the only slightly problematic point. But if you include just one of the boundaries, then the image of that point lies outside the fundamental domain and there is no problem. So this annulus is just like the one-dimensional case, remember? We took the interval half open and half closed for exactly the same reason. So this is a comment: when I informally defined the annuli D and D', I did not say whether they included the boundaries or not. You need to remember to include exactly one of them; you can choose whether you want the outer boundary or the inner boundary, it is the same. Okay, so how did we continue in the one-dimensional case? Now we have the fundamental domains.
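In the normal form this half-open annulus and the "exactly one visit" property can be made concrete. A small sketch (the multiplier lam is hypothetical; D is taken with the outer boundary included and the inner one excluded, matching the half-open interval of the one-dimensional case):

```python
lam = 0.6 - 0.5j                  # hypothetical normal form: B(v) = lam * v
rho = abs(lam)                    # B maps the circle of radius r to radius rho*r

def in_D(v):
    """Half-open fundamental annulus D = {v : rho < |v| <= 1}: the outer
    boundary S^1 is included, the inner boundary B(S^1) is excluded."""
    return rho < abs(v) <= 1.0

# Every nonzero orbit visits D exactly once: follow one point for a while
# and count how many of its iterates land in D.
v = 3.0 + 1.0j
visits = sum(in_D(lam ** n * v) for n in range(60))
assert visits == 1
```

Including both boundaries would make the count 2 for points starting on S^1, which is exactly the problem mentioned above.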
Remember, what we need to construct this conjugacy is a bijection between the fundamental domains, or rather a homeomorphism between the fundamental domains. It is a little more complex here than in the one-dimensional case. Why is that? Because there are many choices, so we now construct a homeomorphism. What would be the natural homeomorphism to define here? An arbitrary bijection would give a conjugacy, and that is not a problem in itself, but remember that we then want to extend it to a homeomorphism of the whole space. So we start with a homeomorphism h-tilde between the fundamental domains, and we cannot just use anything. Suppose we send the unit circle to the unit circle in any way we want; for example, we could use the identity. So let h-tilde restricted to S^1 be the identity. What is the natural way to extend this h-tilde to the other points of the annulus? We want to preserve the dynamics, so what is the obvious way? S we can map to S. What shall we map this point to? Yes, well, in this case we do not need to be so precise, but we need to at least use this curve here. Okay, wait a second, let me just look at my notes and see how careful we need to be here. Yes, we need to map these curves to each other. If you take any point here, each annulus in both cases is filled by these curves; in this case they look a bit more like this. These are the curves that are invariant under the dynamics, not the orbits themselves.
So the obvious way to define h-tilde is this: define h-tilde on S, from S to S, as an arbitrary homeomorphism, for example the identity; then, to find the image of a point inside here, you look at the curve through it, look at where that curve intersects S, look at the corresponding point of S over here, and map this curve to this curve. That should be sufficient. So let me write it like this. Lemma: let D-bar and D'-bar be the closures of D and D', which now include both of the boundaries. Then there exists a homeomorphism h-tilde from D-bar to D'-bar such that for all x in the unit circle, h-tilde(x) = (A')^(-1) composed with h-tilde composed with A, applied to x; here I am using slightly different notation. You take a point on the outer boundary, which is S^1, and you first apply A, which gives you a point on the inner boundary, and then you apply h-tilde; and this gives you the same thing as first applying h-tilde and then applying A'. So we are saying that the homeomorphism must conjugate the dynamics on the boundary: you take S^1 and its image, and the homeomorphism conjugates the dynamics on the two boundary circles. Yes, h-tilde, yes, this statement here: I am saying that there exists a homeomorphism h-tilde from D-bar to D'-bar which conjugates the dynamics on the boundaries. Let me draw a better picture, because these pictures have become a little messy. The property I need is that the boundaries should match up. So this is one annulus: this is S and this is A(S). And here I have the other annulus.
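One concrete way to build such an h-tilde for two normal forms is a radial interpolation: the identity on S^1, and radii and angles interpolated across the annulus so that the boundary condition of the lemma holds. A sketch with hypothetical multipliers lam and lamp (the choice of interpolation is mine, not from the lecture):

```python
import cmath
import math

lam  = 0.6 - 0.5j    # hypothetical map A : z -> lam  * z, |lam|  < 1
lamp = 0.4 - 0.3j    # hypothetical map A': z -> lamp * z, |lamp| < 1

def h_tilde(z):
    """Homeomorphism between the closed fundamental annuli: the identity on
    S^1 (s = 0), interpolating radius and angle across the annulus up to the
    inner boundary (s = 1)."""
    s = math.log(abs(z)) / math.log(abs(lam))
    angle = cmath.phase(z) + s * (cmath.phase(lamp) - cmath.phase(lam))
    return abs(lamp) ** s * cmath.exp(1j * angle)

# The boundary conjugacy condition of the lemma, h_tilde(A(x)) = A'(h_tilde(x)),
# checked numerically at several points of the unit circle.
for k in range(12):
    x = cmath.exp(2j * math.pi * k / 12)
    assert abs(h_tilde(lam * x) - lamp * h_tilde(x)) < 1e-9
```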
So this is the annulus D, and here I have the other one: this is A'(S), and this is D'. So I know that to each point of S corresponds a point of the inner boundary: if I take a point x here, this is A(x). Is it clear how these match up? The inner boundary is the image of the outer boundary under the linear map, and the linear map has these invariant curves, so it is natural to think of the correspondence as coming from these invariant curves, like this. The outer boundary maps to the inner boundary. And here it is the same thing: you also have these images, although they have a different shape; that does not matter. So I am trying to define h-tilde, and I am trying to define it on the closures of these two annuli. Because of the way I want to extend it as a conjugacy to the whole space, I cannot take just any homeomorphism between the two annuli: I want to make sure that it conjugates the dynamics on the boundaries, which is the only place where it can conjugate anything. Why? Because every point in the interior of the annulus maps outside the annulus after one iterate, so it would not make sense at this stage to talk about a conjugacy condition for those points. But the points on the outer boundary map, after one iterate, to the inner boundary; these are exactly the points of D-bar whose image after one iterate is still in D-bar. So I want this conjugacy condition to hold there: I want a homeomorphism from D-bar to D'-bar that satisfies this property. If I take a point x here, I look at A(x), and then I take h-tilde. So suppose this, for example, is h-tilde(x); then this point here is A'(h-tilde(x)). And I want to make sure that this is the same as h-tilde(A(x)), so that this point here is the image of this point here.
That is just the standard conjugacy condition. Is that what I wrote? A prime... so if I invert A', what I get is, maybe that is not quite what I wrote. What I want is this: A' composed with h-tilde equals h-tilde composed with A. And yes, this is the same as saying h-tilde = (A')^(-1) composed with h-tilde composed with A, which is what I had here. So I will not be able to finish the construction today, unfortunately, but once we have this h-tilde, we proceed exactly as in the other cases: we extend it in the obvious way. We let H from R^2 to R^2 be given by H(0) = 0 and, for x different from 0, H(x) = (A')^(-tau(x)) composed with h-tilde composed with A^(tau(x)), applied to x. So we take any point x and use the same argument: we iterate until we reach the fundamental domain, we pass it across by h-tilde, and then we iterate backwards. Proposition: H is a homeomorphism. I will not prove that today, but that is all we need to prove, because, if you remember, this formula automatically gives a conjugacy; we have already shown that in an abstract setting. As long as you have two fundamental domains and a bijection between them, you can extend that bijection to the whole space by exactly this formula, and it gives a conjugacy. What we need to worry about is the regularity of the conjugacy, the continuity of the conjugacy, and that is precisely why we had to choose h-tilde in this specific way. Okay, so I forgot to prove this lemma. It is a simple lemma, because all you need to do is map these curves to each other. But next time we will prove both of these results. We will go through this closely and slowly, so you have a bit more time to assimilate and digest this construction, because this is in some sense the most sophisticated version of these fundamental domain constructions.
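Putting the pieces together, here is a sketch of the extension formula H = (A')^(-tau) o h-tilde o A^tau for two normal forms, with a numerical check of the conjugacy equation H o A = A' o H. Everything here is illustrative: the multipliers are hypothetical, and the particular h-tilde is a radial interpolation chosen to satisfy the boundary condition of the lemma.

```python
import cmath
import math

lam  = 0.6 - 0.5j    # hypothetical map A : z -> lam  * z
lamp = 0.4 - 0.3j    # hypothetical map A': z -> lamp * z
rho  = abs(lam)

def h_tilde(z):
    # homeomorphism between the closed fundamental annuli: identity on S^1,
    # interpolating so that it conjugates the dynamics on the boundary circles
    s = math.log(abs(z)) / math.log(rho)
    angle = cmath.phase(z) + s * (cmath.phase(lamp) - cmath.phase(lam))
    return abs(lamp) ** s * cmath.exp(1j * angle)

def H(z):
    """Extension H(z) = A'^(-tau(z)) ( h_tilde( A^tau(z) (z) ) ), with tau(z)
    the unique integer time at which the orbit of z lies in the annulus."""
    if z == 0:
        return 0
    t = math.ceil(math.log(abs(z)) / -math.log(rho))   # tau(z), may be negative
    return lamp ** (-t) * h_tilde(lam ** t * z)

# H conjugates A to A': H(A(z)) = A'(H(z)), whether z is outside or inside S^1.
for z in (2.0 + 1.0j, 0.05 - 0.2j):
    assert abs(H(lam * z) - lamp * H(z)) < 1e-9
```

Note that the conjugacy identity itself follows from the formula for any bijection between the fundamental domains; the particular boundary-compatible choice of h-tilde is what will make H continuous, which is the point deferred to next time.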
But the principle is exactly the same. So if you make sure that you have really understood it in the case of one-dimensional linear maps, this just adds a little more geometric complexity; the argument is exactly the same, so it should not present many difficulties. If you are struggling, it is probably because you have not really looked at the one-dimensional case closely. So we will stop here for today, and next time we will complete this construction.