Hey, good morning, everyone. Today we're going to start higher dimensions. Higher dimensions can get very complicated; so, actually, can one dimension, and we will come back to one dimension, because a lot of interesting things can happen there. But for the moment we will look at linear maps in higher dimensions, which is the simplest setting for dynamical systems.

Let me remind you what a linear map is: A is linear if A(αv + βw) = αAv + βAw for all scalars α and β and for all vectors v and w in Rⁿ. This is the standard definition of linearity, which you all know. In fact, in this lecture and the next couple of lectures, really everything is just elementary applications of linear algebra. I hope it will help you to appreciate linear algebra a little bit, because I think looking at these concepts from a dynamical point of view brings to life some things which in linear algebra can seem very dry and boring. You tell me what you think at the end. We are going to be interested in the dynamics of such a linear map: we iterate it and see what happens.

Let me recall some basic concepts from linear algebra. Remember that λ is an eigenvalue of A if it is a solution of the equation det(A − λI) = 0; this is the definition. The spectrum of A is the set of all its eigenvalues. A is invertible if det(A) ≠ 0; if det(A) = 0, the image of A is some proper subspace of Rⁿ. This is just standard linear algebra, and notice that it coincides with the one-dimensional case: for a one-dimensional linear map x ↦ ax, the scalar a is the eigenvalue of the matrix, and A is invertible exactly when a ≠ 0.

We will also generalize the notion of hyperbolic: A is hyperbolic if |λ| ≠ 1 for every eigenvalue λ. Is this consistent with the one-dimensional case? There the definition was a ≠ ±1, so it is exactly the same. What is the difference here? Remember that in general λ can be complex, so when I say |λ| ≠ 1, I am excluding the whole unit circle, not just ±1. So A is hyperbolic if no eigenvalue lies on the unit circle, and we shall see that this is the natural generalization; in one dimension the unit circle is just ±1, because the eigenvalue cannot be complex. Finally, a simplifying assumption will be that A has distinct eigenvalues, meaning all eigenvalues have multiplicity 1.
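Since eigenvalues can be complex, a numerical check of hyperbolicity has to compare moduli against 1, not just test for ±1. Here is a minimal sketch of both definitions for the 2×2 case; the function names and tolerance are my own choices, not from the lecture:

```python
import numpy as np

def is_hyperbolic(A, tol=1e-12):
    # Hyperbolic: no eigenvalue on the unit circle, i.e. |lambda| != 1 for all lambda.
    lams = np.linalg.eigvals(A)
    return all(abs(abs(lam) - 1.0) > tol for lam in lams)

def has_distinct_eigenvalues(A, tol=1e-12):
    # For a 2x2 matrix: both eigenvalues have multiplicity 1.
    lams = np.linalg.eigvals(A)
    return abs(lams[0] - lams[1]) > tol

# A rotation has eigenvalues e^{+/- i theta}, which lie on the unit circle:
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(is_hyperbolic(R))                     # False: both moduli equal 1
print(is_hyperbolic(np.diag([0.5, 2.0])))   # True
print(has_distinct_eigenvalues(R))          # True: e^{i theta} != e^{-i theta}
```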
So as you know from linear algebra, eigenvalues can have higher multiplicity: an n×n invertible matrix can have just one eigenvalue with multiplicity n. If this is not completely clear to you, just review your linear algebra a little bit; I won't go into it here. In a lot of the results we will assume distinct eigenvalues simply because it is technically much easier; several of the results are also true in general. Anyway, this is quite a reasonable assumption to make on our systems.

Also to simplify what we do, we will just look at the two-dimensional case. (Yes, λ is in C: the eigenvalues are in general complex numbers. They may be real, that is, have zero imaginary part, but the solutions of the characteristic equation may be complex.) A lot of the concepts and results apply in general, but I think in two dimensions you can really understand what we're trying to do. For n = 2, a linear map is just given by a 2×2 matrix, say with entries a, b, c, d, and in that case the eigenvalues have a very explicit form: λ = [(a + d) ± √((a + d)² − 4(ad − bc))] / 2. You can see that you always have two eigenvalues up to multiplicity: if what is inside the square root is 0, you have only one value for the eigenvalue, with multiplicity 2; if it is non-zero, you have two distinct eigenvalues. So the property of having distinct eigenvalues is a kind of non-resonance condition on the entries, namely that the discriminant (a + d)² − 4(ad − bc) is different from 0. The eigenvalues are real or complex simply depending on whether what is inside the square root is positive or negative.

OK, so now we are ready to start studying the dynamics of linear maps. We will start with the simplest setting, diagonal matrices. Suppose A = [[λ1, 0], [0, λ2]]. This reduces essentially to the one-dimensional case. Let's look at the action of A iterated n times on some vector v0 = (v01, v02); I write v0 because I think of this vector as an initial condition, and these are its two components. When the matrix is diagonal, composing just multiplies the diagonal entries, so Aⁿ = [[λ1ⁿ, 0], [0, λ2ⁿ]], and Aⁿ(v0) = (λ1ⁿ v01, λ2ⁿ v02). So the dynamics is very easy to understand, because each component behaves independently of the other, and it behaves exactly like a one-dimensional linear map.
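As a sanity check on this computation, here is a quick numerical comparison of Aⁿ applied as a matrix power against the componentwise formula; the sample values are mine:

```python
import numpy as np

lam1, lam2 = 0.5, 3.0
A = np.diag([lam1, lam2])
v0 = np.array([2.0, -1.5])
n = 6

direct = np.linalg.matrix_power(A, n) @ v0
componentwise = np.array([lam1**n * v0[0], lam2**n * v0[1]])
print(np.allclose(direct, componentwise))   # True: components evolve independently
```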
So what are the possibilities for the iterates of a component? It depends on λ1, and we know what the possibilities are from the one-dimensional case: if |λ1| < 1, the component converges to 0; if |λ1| > 1, it goes to infinity in forward time; if λ1 = 1, it just stays where it is; if λ1 is negative, it switches sign at each step. So for any value of λ1 we understand exactly what happens to the first component, and the same for any value of λ2. The iterates of the vector behave component-wise: on each component, exactly like a one-dimensional linear map.

So what are the various possibilities, depending on these conditions? I'm not going to go through them all; I will leave them as an exercise. It's very simple: you just systematically look at the various cases for λ1, the various cases for λ2, the various combinations, and you see what happens. But let's look at some cases. For example, suppose 0 < λ1, λ2 < 1. Then what happens to Aⁿ(v0)? It converges to 0, because Aⁿ(v0) = (λ1ⁿ v01, λ2ⁿ v02) and both components converge to 0 in forward time.

But how are the iterates converging to 0? Along which direction? Can you tell? Clearly, if λ1 = λ2, the orbit goes straight down the line joining v0 to the origin, because you multiply both coordinates by the same factor, so their ratio stays the same. Now suppose λ1 < λ2: does the orbit bend towards the horizontal or the vertical? Start with a vector exactly on the diagonal, v01 = v02. Since λ1 is smaller, λ1ⁿ is smaller than λ2ⁿ, so the horizontal coordinate becomes smaller than the vertical coordinate: the orbit moves above the diagonal. When you study this at home, make sure you understand this calculation. And the same is true if v01 ≠ v02, because for large n the initial condition doesn't matter anymore. Even if v01 is much bigger than v02, even if you start very close to the horizontal axis, when n is 1 million the fact that λ1 < λ2 means the horizontal coordinate eventually becomes smaller than the vertical one. Of course, initially the orbit may move towards the horizontal, but eventually the vertical coordinate dominates.
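This ratio argument is easy to see numerically: the quotient of the coordinates is (λ1/λ2)ⁿ times a constant, so it goes to 0 no matter how large v01/v02 is initially. A minimal sketch, with sample values of my choosing:

```python
import numpy as np

lam1, lam2 = 0.3, 0.8            # 0 < lam1 < lam2 < 1
v = np.array([5.0, 0.01])        # starts almost on the horizontal axis

for n in range(1, 31):
    v = np.array([lam1, lam2]) * v       # one application of the diagonal map
    if n % 5 == 0:
        # the ratio is (lam1/lam2)^n * (v01/v02), which tends to 0:
        print(n, v, v[0] / v[1])
```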
So it means the orbit still comes in tangent to the vertical. If you look at the curves on which these orbits lie, they all come in tangent to the vertical when λ1 < λ2 < 1. Even if you start very close to the horizontal axis you cannot see it at first, but if you zoom in at the origin, everything is coming in tangent to the vertical. No matter where you start, you come in tangent to the vertical. The only exception is the horizontal axis itself: there the vertical component is 0, and you just come in straight along the horizontal. In this case we call the origin an attracting fixed point.

OK, so in the exercises, make sure you understand why we draw these pictures this way. Look at the different possibilities: λ1 and λ2 with absolute value between 0 and 1, but maybe one of them negative, or both negative. You need to familiarize yourself with all the different cases; they are all essentially the same, but you need to make sure you do them.

The opposite case is essentially the same. Suppose λ1 > λ2 > 1. Then everything is going to infinity: each coordinate of v0 goes to infinity in forward time and to 0 in backward time. And if you look at the ratios as before, you can decide whether in backward time the orbit comes in tangent to the horizontal or to the vertical; you just need to write it out and do a little computation. This is called a repelling fixed point, just like in the one-dimensional case.

There is a third case in two dimensions, of course, which we don't have in one dimension: one eigenvalue bigger than one and the other less than one. So suppose λ1 > 1 > λ2 > 0, and take v0 = (v01, v02). What happens in this case? Remember the formula: the two coordinates are just one-dimensional linear maps. Since λ1 > 1, the first component λ1ⁿ v01 goes to infinity in forward time; since λ2 < 1, the second component λ2ⁿ v02 goes to 0. So where is the point going? The image v1 of v0 moves right and down; then v2, v3, v4, and so on: the vertical component goes to 0 while the horizontal component goes to infinity. In fact, the orbit lies on a curve that looks like a hyperbola. Will it ever meet the horizontal axis? What do you think? It becomes very, very close.
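Here is the saddle orbit computed for a couple of sample eigenvalues (my choice): the horizontal coordinate blows up while the vertical one decays without ever reaching 0.

```python
import numpy as np

lam1, lam2 = 2.0, 0.5            # lam1 > 1 > lam2 > 0
v = np.array([1.0, 1.0])
for n in range(1, 9):
    v = np.array([lam1, lam2]) * v
    # (2^n, 0.5^n): off to infinity horizontally, asymptotic to the axis
    print(n, v)
```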
Yes, it converges to the axis, of course, because that's what's happening here: if λ2 < 1, the vertical coordinate converges to 0 but is never equal to 0. So it gets very, very close: the orbit is asymptotic to the horizontal axis, and in the same way, in backward time, it is asymptotic to the vertical axis. That's what it looks like. And if you play around with different cases, you see that all orbits lie on such curves. A different initial condition lies on another curve of the same family, and you get similar curves in the other quadrants. Every initial condition lies on one of these curves.

What is the meaning of these curves? If in the formula Aⁿ(v0) = (λ1ⁿ v01, λ2ⁿ v02) you replace the integer n by a continuous parameter t, what you get is a parametrized curve in R², namely t ↦ (λ1ᵗ v01, λ2ᵗ v02): a one-parameter continuous curve. It's a bit like a flow; in fact, when you have linear differential equations, you get flows whose solution curves look exactly like this. Our map is, in some sense, the discretization of that, just as we spoke right at the beginning: the restriction to integer values of t. So what I'm saying when I draw these curves is that there exists a family of curves that fills up the whole plane, and if you take an initial condition on one of these curves, all its iterates in forward and backward time lie on that same curve. That is obvious from the formula, because the curve is defined precisely as t ↦ (λ1ᵗ v01, λ2ᵗ v02) with t in R.
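One way to see that these parametrized curves really are invariant families is to eliminate t: along any orbit the quantity ln|x|·ln|λ2| − ln|y|·ln|λ1| is constant, so each curve is a level set of it. A small check, with sample values of mine; I use absolute values so the same quantity still works when the eigenvalues are negative, which comes up next:

```python
import numpy as np

lam1, lam2 = 2.0, 0.5            # saddle eigenvalues (sample values)
x, y = 3.0, 7.0

def curve_invariant(x, y):
    # Eliminating t from (lam1**t * x0, lam2**t * y0):
    # this combination is constant along each solution curve.
    return np.log(abs(x)) * np.log(abs(lam2)) - np.log(abs(y)) * np.log(abs(lam1))

for n in range(6):
    print(curve_invariant(x, y))   # prints the same number every time
    x, y = lam1 * x, lam2 * y
```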
A couple of questions came up. Complex eigenvalues we will look at shortly, but that will not be a diagonal case; for the moment λ1 and λ2 are real. And what if they are negative? You tell me. The picture of the curves is exactly the same, except that the point switches between two curves on either side: after one iterate, a point on one curve gets mapped to the mirror curve, and the orbit oscillates between the two. You're right that in that case the orbit does not lie on a single continuous curve; it switches between two curves. But those curves still exist, and the orbit stays on that pair of curves forever, switching sides at every iterate. It's just the sign: you multiply by negative numbers, so the components change sign each time, and I don't see why you should be so afraid of the negatives.

Let's make this concrete. Suppose λ1 < −1 < λ2 < 0. If you take a vector in the first quadrant, with positive coordinates, its image is in the third quadrant, because you multiply both coordinates by negative numbers. When you apply the map once more, it switches back to the first quadrant. But it is still doing the same thing: |λ1| > 1 means the horizontal component oscillates back and forth while going further and further away, and |λ2| < 1 means the vertical component oscillates while coming closer and closer to 0. So the point jumps from the first quadrant to the third and back, each time further from the vertical axis and closer to the horizontal one. But the picture of the curves remains the same, because that is still what you end up with. OK, maybe my insisting on these curves is not so important; if it's a bit confusing, don't worry about it. The important thing is that you can see all these various possibilities and what they do.
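A quick numerical illustration of the quadrant switching, with sample negative eigenvalues of my choosing:

```python
import numpy as np

lam1, lam2 = -2.0, -0.5          # lam1 < -1 < lam2 < 0
v = np.array([1.0, 1.0])
for n in range(1, 7):
    v = np.array([lam1, lam2]) * v
    # signs flip every step (first <-> third quadrant),
    # while |x| grows and |y| shrinks, exactly as in the positive case
    print(n, np.sign(v).astype(int), np.abs(v))
```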
This is a different situation from the previous two: whenever |λ1| < 1 and |λ2| > 1 (or the other way around), we get this kind of picture, and we call the origin a saddle point, as opposed to an attracting fixed point or a repelling fixed point. And what is the fixed point here? Sorry? The origin, very good. The origin is always a fixed point. For some parameters you have other fixed points; in the three cases so far there are clearly no others, because everything goes to 0 or everything moves away to infinity. But I have not considered all possible choices of λ1 and λ2. Which choices give other fixed points besides the origin? Exactly: eigenvalue 1.

So suppose, as an example, λ1 = 1 and λ2 < 1. OK, I'm doing part of your homework exercise for you here, but let's think through it. What happens to v0 = (v01, v02)? The first coordinate remains fixed, because λ1 = 1: in the horizontal direction the map is the identity. The second coordinate contracts. So the image of v0 lies directly below it: v11 = v01 and v12 = λ2·v02. In particular, the point (v01, 0) on the horizontal axis, whose vertical component is 0, is a fixed point. And any point on the vertical line through it just moves down towards that fixed point, because v02 ↦ λ2·v02 ↦ λ2²·v02 ↦ …, which converges to 0. This happens for every point on the horizontal axis: every point of the axis is a fixed point, and along each vertical line everything moves towards the fixed point on the axis. So here the invariant lines are the vertical lines, because the orbit of any initial condition lies on one of them. Similarly, you can think about what happens if λ1 = −1, and so on and so forth. But I don't want to do it all for you; at home you need to really systematically go through all the cases. We've done basically all of them now anyway, but make sure you understand all the different pictures.

When I give you these pictures, these broad categories, I'm already giving you an indication of where we're heading: the problem of equivalence, or conjugacy, between systems. I'm suggesting that since under a given condition the picture more or less looks the same, we would like to be able to say that any two linear maps satisfying that condition are linearly equivalent or topologically equivalent, conjugate in some way. So this is what we're going to study now, a bit more systematically: in what sense these different cases can be considered equivalent or not.

Actually, before that: so far we have only studied the diagonal case. You know from linear algebra that you can diagonalize in certain cases. Let's see whether the diagonalization procedure is easier to understand from a dynamical point of view. What does it mean to diagonalize a matrix A? That's right: there exists P invertible such that B = P⁻¹AP is diagonal. Does this look familiar in terms of what we've been talking about? Especially if we take P to the other side and write it as P∘B = A∘P? It's a conjugacy! And more than topological, more than differentiable: P is linear, so it is a linear conjugacy, a very strong kind of conjugacy.
So what linear algebra calls diagonalization is nothing but saying, in our language, that the map A is linearly conjugate to a diagonal map. When can we diagonalize such a matrix? Not always. If it has two distinct eigenvectors, that's right. In particular, if it has distinct eigenvalues, then it has distinct eigenspaces; that is, in fact, one of the reasons why I assumed distinct eigenvalues, to make these situations simpler to address.

So let's try to understand the diagonalization procedure dynamically; in particular, we will prove the following. Proposition: suppose A is invertible with distinct real eigenvalues λ1, λ2. Then A is linearly conjugate to B = [[λ1, 0], [0, λ2]]. Linear conjugacy is a strong form of conjugacy, so what will the picture look like? As we go through the discussion that leads to the proof, keep in mind that a linear conjugacy preserves a lot of structure: it is in particular a differentiable conjugacy and a topological conjugacy, so it preserves omega limits and so on. The picture of the dynamics must look very similar, though clearly not exactly the same. So as we go through this, let's try to understand the difference between the diagonal case and the general case.

Well, if the eigenvalues are distinct, λ1 ≠ λ2, then, as we said, the eigenspaces E1 and E2 are distinct. What is basically the defining property of the eigenspaces? That Av1 = λ1·v1 for all v1 in E1, and Av2 = λ2·v2 for all v2 in E2. So let's draw these eigenspaces, two lines through the origin, and take a point v1 in E1. Kevin, where is its image Av1 = λ1·v1? To make it definite, let's suppose both eigenvalues are less than 1 in absolute value. Where is λ1·v1? Does it lie on the same line? That's the crucial observation I was looking for: independently of what λ1 and λ2 are, it lies on the same line, because it is a scalar multiple of v1 and so points in the same direction. This is a very simple but crucial observation: eigenspaces are invariant under the dynamics. In linear algebra you just have the equation Av = λv; when you look at it dynamically, it takes on a completely new meaning. If you start inside an eigenspace, you stay inside the eigenspace, which is what makes everything so simple from now on. So in this case, the vector v1 just converges to 0 under forward iteration, and the same happens to a vector v2 in E2. And what happens to another, arbitrary vector v? It also converges to 0. But why?
Why is it converging to 0? Exactly: because we can write it in these coordinates. This is the key point. By linearity: if we take an arbitrary vector v0 in R², we can write v0 = v01 + v02 with v01 in E1 and v02 in E2, in a unique way. This is standard linear algebra: if you have two linearly independent directions spanning the space, every vector can be written in a unique way as a linear combination of vectors along them. Geometrically, you draw lines through v0 parallel to the eigenspaces to get the two components v01 and v02. And how do you know that the image of the vector is related to the images of the components? These are no longer orthogonal components; nevertheless, it works simply by linearity. Let's check: A(v0) = A(v01 + v02) = A(v01) + A(v02). That is the simple but crucial step. And since the components lie inside the eigenspaces, this is just λ1·v01 + λ2·v02. So you take the image λ1·v01 on E1, the image λ2·v02 on E2, and the vector sum of the two, by the parallelogram law, gives you the new point v1 = A(v0).
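This decomposition is exactly what makes the dynamics computable: writing v0 in the eigenbasis, each coefficient just gets multiplied by its own eigenvalue at every step. A small numerical check; the matrix is a sample of mine with eigenvalues 2 and 0.5:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.5, 1.5]])          # distinct eigenvalues: 2.0 and 0.5
lams, W = np.linalg.eig(A)          # columns of W span the eigenspaces E1, E2
v0 = np.array([2.0, -1.0])

c = np.linalg.solve(W, v0)          # unique decomposition v0 = c1*w1 + c2*w2

n = 7
direct = np.linalg.matrix_power(A, n) @ v0
via_eigenbasis = W @ (lams**n * c)  # each coefficient scales by its own eigenvalue
print(np.allclose(direct, via_eigenbasis))   # True
```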
So now, how are we going to construct the linear conjugacy between the map A, with its two eigenspaces, and the diagonal map B? What does the picture for A look like? Under the conditions we had before, λ1 < λ2 < 1, we saw that in the diagonal case everything comes in tangent to the vertical. Intuitively, by a heuristic version of the same calculation, something very similar happens here: as you iterate, the coefficient along E1 converges to 0 much faster than the coefficient along E2, because λ1ⁿ goes to 0 faster than λ2ⁿ. So when n is very large, the coordinate in the direction of E1 is much smaller than the coordinate along E2, and the orbits come in along curves tangent to E2, the most weakly contracting eigenspace. This is the picture you can draw if you do numerical experiments with a specific case. And how do you go from this picture to the diagonal one? Yes: intuitively, you just straighten out the eigenspaces. There's no real difference between the two pictures. In the diagonal case, what makes it diagonal is just that the eigenspaces coincide with the horizontal and vertical axes, and they are invariant exactly as here. So if you take a linear map sending the axes to the eigenspaces of A, the whole picture goes with it, by linearity: there is exactly one linear map doing that, and that map is the one that conjugates everything else.

So let me write down the conjugating map, and then leave it as an exercise for you to check that it actually conjugates; it's very easy. We need to map the axes to the eigenspaces. Formally: let w1 and w2 be unit eigenvectors spanning E1 and E2. To simplify the pictures I have been drawing only the eigenspaces, because that's how the dynamics is described; but you are given your copy of R² with horizontal and vertical coordinates, and the eigenspaces have equations in those coordinates. So write the eigenvectors in the standard coordinates, w1 = (w11, w21) and w2 = (w12, w22), and define P to be the matrix with columns w1 and w2, that is, P = [[w11, w12], [w21, w22]]. Note the direction: P goes from the diagonal picture to the picture of A. Exercise: check that P∘B = A∘P; it is just a calculation. To get you started, check that P maps the axes to the eigenspaces: P(1, 0) = w1, which spans E1, and P(0, 1) = w2, which spans E2. It doesn't matter which way round you describe it, because P is invertible: the inverse maps the eigenspaces to the axes. Then you need to check, and this is the not completely trivial but not difficult part, that the conjugacy equation holds for an arbitrary point. There is an immediate corollary of this, which, ah yes, let's just take a very short break now and then we'll continue. Two minutes.
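Here is a numerical version of that exercise; numpy's eig conveniently returns unit eigenvectors as the columns of a matrix, which is exactly the P above (the sample matrix is mine):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.5, 1.5]])        # sample matrix with eigenvalues 2.0 and 0.5
lams, P = np.linalg.eig(A)        # columns of P are unit eigenvectors w1, w2
B = np.diag(lams)                 # the diagonal model with the same eigenvalues

print(np.allclose(P @ B, A @ P))                          # conjugacy: P o B = A o P
print(np.allclose(P @ np.array([1.0, 0.0]), P[:, 0]))     # P maps (1,0) to w1 in E1
```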
OK, so as you can see, linear conjugacy essentially captures the fact that two systems have the same eigenvalues, and the only difference is the eigenvectors. Remember that the eigenvalues do not describe the linear map completely: two linear maps with the same eigenvalues can have different eigenvectors, and therefore the pictures of the dynamics can look different. The linear conjugacy shows that it's just a question of straightening them out, in some sense.

Let's do one more example, the saddle; it's similar. In the saddle picture, one eigenvalue bigger than one and one less than one, the axes are the eigenspaces in the diagonal case. What can the non-diagonal case look like? If the eigenspaces are very close to each other, the picture can look quite different, but it is essentially the same: you still get a similar family of curves, just with the hyperbolas squashed between the two eigenspaces. These really are the same pictures: the conjugacy we just constructed straightens out the eigenspaces, and then what you get is exactly the same picture. So they can look very different, but up to linear conjugacy they are the same.

So what can we say about the linear conjugacy classes? An immediate corollary of the proposition: suppose A and B are invertible with the same distinct eigenvalues λ1 ≠ λ2. Then A and B are linearly conjugate. You agree? Why is that? What's the proof? Anya? Note that neither of them needs to be diagonal; I'm just saying they have the same eigenvalues. What you want to say is that both of them are linearly conjugate to a third matrix, the diagonal matrix with eigenvalues λ1 and λ2, and that linear conjugacy is clearly an equivalence relation. Since both are conjugate to the same diagonal form, they must be linearly conjugate to each other, because the composition of invertible linear maps is an invertible linear map: A is conjugate to C by some P, B is conjugate to C by some Q, and a composition of P and Q⁻¹ is an invertible linear matrix that conjugates A and B. Exercise: make sure you can all write this down; it is very simple.
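To make the composition argument concrete: if A = P_A·D·P_A⁻¹ and B = P_B·D·P_B⁻¹ for the same diagonal D, then R = P_A·P_B⁻¹ conjugates A to B. A sketch, with eigenvector matrices invented for the example:

```python
import numpy as np

D = np.diag([2.0, 0.5])                       # shared eigenvalues
PA = np.array([[1.0, 1.0], [0.0, 1.0]])       # eigenvectors of A (my choice)
PB = np.array([[1.0, 0.0], [2.0, 1.0]])       # eigenvectors of B (my choice)
A = PA @ D @ np.linalg.inv(PA)
B = PB @ D @ np.linalg.inv(PB)

R = PA @ np.linalg.inv(PB)                    # composing the two conjugacies
print(np.allclose(np.linalg.inv(R) @ A @ R, B))   # True: R^{-1} A R = B
```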
There is another exercise, a kind of converse: if A and B are linearly conjugate, then they have the same eigenvalues. Again, this is basically just a simple exercise in linear algebra; there is no dynamics in it. The only situation where things could go wrong is when the two eigenvalues are not distinct: if λ1 = λ2, it is not clear that two such maps would be linearly conjugate, because with a repeated eigenvalue you may or may not have two distinct eigenspaces. So λ1 = λ2 is a slightly problematic situation, and we leave it aside.

So this leads us to an understanding up to linear conjugacy. In terms of the concepts we were talking about before, of choosing an equivalence relation, we have chosen linear conjugacy, and we have shown (I should say: for real eigenvalues, since we haven't yet spoken about complex eigenvalues, whereas the converse exercise is a general lemma from linear algebra) that all the cases in which the two eigenvalues are distinct and real can be reduced to the diagonal case. So more or less we can understand all of them.

But what if two systems have different real eigenvalues? Question: suppose A has eigenvalues λ1(A), λ2(A) and B has eigenvalues λ1(B), λ2(B), and both pairs satisfy the same condition, say the attracting one, but with different values. Then they cannot be linearly conjugate, because if they were, they would have the same eigenvalues. However, the pictures look very similar: in one case the eigenspaces sit one way and everything comes in tangent to one of them; in the other case the eigenspaces might be placed completely differently, but you still have everything coming in. So are these differentiably conjugate? Are they topologically conjugate? They are topologically conjugate, and that's what we're going to prove to finish today: a simple proof, which gives us a classification of these linear maps up to topological conjugacy.

So when are two such maps topologically conjugate? I like to state this as a theorem, because it really wraps things up. Theorem: let A and B be hyperbolic invertible linear maps from R² to R² with distinct real eigenvalues. Suppose that A and B have the same type of fixed point (both attracting, both repelling, or both saddles); that A and B are either both orientation preserving or both orientation reversing; and, if A and B are saddles, that the signs of the corresponding eigenvalues are the same. Then A and B are topologically conjugate. Ah, I forgot to give you the definition: orientation preserving means that the determinant is positive, orientation reversing that it is negative. In the two-dimensional case the determinant is the product of the two eigenvalues, so if both eigenvalues have the same sign the map is orientation preserving, and if one is negative and one is positive it is orientation reversing. If you think about it, these conditions look a little complicated, but they are just the conditions you need to make sure there is no obstruction to a topological conjugacy, in the same way as in the one-dimensional case; we will discuss them as we go through the proof.
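So the data that determines the topological class is: the type of the fixed point, the orientation, and, for saddles, the signs of the expanding and contracting eigenvalues. A small sketch packaging these invariants; the function and its output format are my own, not from the lecture:

```python
import numpy as np

def topological_class(lam1, lam2):
    # Invariants from the theorem, for a hyperbolic map with
    # distinct real eigenvalues (|lam| != 1, lam1 != lam2).
    if abs(lam1) < 1 and abs(lam2) < 1:
        kind = "attracting"
    elif abs(lam1) > 1 and abs(lam2) > 1:
        kind = "repelling"
    else:
        kind = "saddle"
    orientation = "preserving" if lam1 * lam2 > 0 else "reversing"
    if kind == "saddle":
        expanding, contracting = sorted((lam1, lam2), key=abs, reverse=True)
        return (kind, orientation, np.sign(expanding), np.sign(contracting))
    return (kind, orientation)

print(topological_class(3.0, 0.5))     # ('saddle', 'preserving', 1.0, 1.0)
print(topological_class(-0.5, 0.3))    # ('attracting', 'reversing')
```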
The proof starts with a simple observation: it is sufficient to consider the case where A and B are both diagonal. Why is that? In the theorem I'm not assuming they are diagonal, only that they are hyperbolic, invertible linear maps with distinct eigenvalues; I'm claiming that if I can prove the result for diagonal matrices, then I've proved it for any two such matrices. You have some idea? Linear conjugacy, exactly. Does that make sense to everyone? From these assumptions and what we proved before, A and B are linearly conjugate to diagonal matrices: A = P⁻¹·Ã·P with Ã diagonal, and B = Q⁻¹·B̃·Q with B̃ diagonal. And because a linear conjugacy is in particular a topological conjugacy, as long as I show that Ã and B̃ are topologically conjugate, it follows that A and B are also topologically conjugate. So it is sufficient to consider A and B diagonal.

Rather than explaining the conditions abstractly, let's just look at the proof, and as we go through it we'll see why we need them, and why you would not always have a topological conjugacy without them. As Runako said earlier, you simply conjugate along the eigenspaces, and that gives you a topological conjugacy of the whole plane. So suppose A = [[λ1(A), 0], [0, λ2(A)]] and B = [[λ1(B), 0], [0, λ2(B)]] are two diagonal linear maps, and let's construct a conjugacy. The horizontal and vertical axes are the eigenspaces of both maps, and they are invariant under the dynamics: if you start inside one of them, you stay inside it in forward and backward time. So you can think of the dynamics restricted to each axis, and restricted there, each map is just a one-dimensional linear map. We have studied the conditions under which two one-dimensional linear maps are topologically conjugate; remember, the four conjugacy classes in one dimension are (−∞, −1), (−1, 0), (0, 1), and (1, ∞). So if λ1(A) and λ1(B) belong to the same one-dimensional conjugacy class, there exists a conjugacy H1 from R to R such that H1(λ1(A)·x) = λ1(B)·H1(x); this is exactly a conjugacy between the two one-dimensional linear maps on the horizontal axis. Similarly, we can construct H2: R → R with H2(λ2(A)·x) = λ2(B)·H2(x) for the vertical axis. These are conjugacies on the horizontal and the vertical, and then we just define H(x1, x2) = (H1(x1), H2(x2)). Then we just need to check that H is a topological conjugacy between A and B, but that is very easy, just one line.
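The lecture leaves H1 and H2 abstract; for concreteness, here is the standard explicit choice in one dimension (my presentation, not written out in the lecture): when λ and μ lie in the same one-dimensional class, h(x) = sign(x)·|x|^α with α = ln|μ| / ln|λ| is a homeomorphism of R satisfying h(λx) = μ·h(x).

```python
import numpy as np

def conjugacy_1d(lam, mu):
    # Valid when lam and mu lie in the same class:
    # (-inf,-1), (-1,0), (0,1) or (1,inf); then alpha > 0.
    alpha = np.log(abs(mu)) / np.log(abs(lam))
    return lambda x: np.sign(x) * abs(x)**alpha

h = conjugacy_1d(0.3, 0.8)           # both eigenvalues in (0, 1)
x = 1.7
print(np.isclose(h(0.3 * x), 0.8 * h(x)))   # True: h conjugates x->0.3x to x->0.8x
```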
Let me not erase these conditions; we'll come back to them. So, for an arbitrary point (x1, x2): (H∘A)(x1, x2) = H(λ1(A)·x1, λ2(A)·x2), because the axes are the eigenspaces. By the definition of H, this is (H1(λ1(A)·x1), H2(λ2(A)·x2)), and by the conjugacy equations for H1 and H2 this equals (λ1(B)·H1(x1), λ2(B)·H2(x2)), which is exactly B(H1(x1), H2(x2)) = (B∘H)(x1, x2). So H∘A = B∘H.

Now let's go back to why we imposed all those conditions. What did I need to assume in the construction of H1 and H2? Some relation between the eigenvalues of A and the eigenvalues of B: that they belong to the same one-dimensional conjugacy class, pair by pair. And the conditions in the theorem, if you study them one by one, say exactly that. What we used in the proof is that λ1(A) and λ1(B), and likewise λ2(A) and λ2(B), each pair belongs to the same one-dimensional conjugacy class from before. If you digest this, it is exactly the same thing as the conditions I wrote. For example, the maps must have the same kind of fixed point: if the fixed point is attracting, all the eigenvalues must lie in (−1, 0) or (0, 1). The orientation conditions and the sign conditions for saddles work the same way; it's better that you go through them at home and check that they give exactly the conditions that guarantee the statement. As long as they hold, we can construct a topological conjugacy. If they do not hold, there are obvious reasons why the maps are not topologically conjugate. For example, if one is an attractor and one is a saddle, there cannot be a topological conjugacy, because a conjugacy must preserve omega limits: in one case everything goes to 0, in the other some things go to 0 but some go to infinity, so they belong to different topological conjugacy classes.

So this is it for linear maps with real eigenvalues. We have shown that there are many linear conjugacy classes, because two systems with different eigenvalues belong to different linear conjugacy classes, but not many topological conjugacy classes: there is just a small finite number, depending on the various possibilities for the eigenvalues. So we've given a fairly complete topological classification. As an exercise, you could write down all the possible topological conjugacy classes of hyperbolic invertible linear maps with distinct eigenvalues; it's just a small number.
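Putting the pieces together, a last sketch that builds the product map H = (H1, H2) for two diagonal saddles whose corresponding eigenvalues share one-dimensional classes, and checks H∘A = B∘H at a sample point; conjugacy_1d is restated so the sketch is self-contained, and all values are mine:

```python
import numpy as np

def conjugacy_1d(lam, mu):
    alpha = np.log(abs(mu)) / np.log(abs(lam))
    return lambda x: np.sign(x) * abs(x)**alpha

la1, la2 = 3.0, 0.5        # eigenvalues of A: a saddle
lb1, lb2 = 2.0, 0.25       # eigenvalues of B: same 1D classes, (1,inf) and (0,1)

h1 = conjugacy_1d(la1, lb1)            # conjugacy on the horizontal eigenspace
h2 = conjugacy_1d(la2, lb2)            # conjugacy on the vertical eigenspace
H = lambda x1, x2: np.array([h1(x1), h2(x2)])   # the product homeomorphism

x1, x2 = 1.3, -0.7
print(np.allclose(H(la1 * x1, la2 * x2),                  # H o A
                  np.array([lb1, lb2]) * H(x1, x2)))      # B o H  -> True
```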
So in the next lecture, we will do a similar analysis for linear maps with complex eigenvalues. Thank you very much.