Okay, so the next homework is going to be due Wednesday, with an asterisk for those who are in class today. And we're ready to talk a little bit about Chapter 5. In this chapter what we're going to do is formalize some of the observations you have probably made while working the problems in Chapter 4, that is, establish the nature of the equilibria in a dynamical system. And as you will see, there's going to be a difference depending on whether the dynamical system is continuous or discrete. So let's start with continuous systems, although the order doesn't matter, and talk about the eigenvalue method for stability. First of all, what is stability? I'm going to illustrate with a two-dimensional continuous dynamical system, so the continuous case first: dx/dt = f(x). Notice we're still talking about an autonomous system, so there's no explicit time dependence in the right-hand side. And remember that we say x* is an equilibrium, or steady state, when the right-hand side vanishes at that point: f(x*) = 0. So here's the definition: an equilibrium x* is called asymptotically stable, and oftentimes we'll just refer to it as stable unless we need to be more specific, if every solution of the system that starts near x* (and "near" is admittedly imprecise here) approaches x*. On the picture, it basically says the following. If you are far away from this equilibrium, a certain distance away, then your solution may do weird things.
It may not go toward the equilibrium, but as long as there's some neighborhood, some vicinity of the point (let me use a different color here), such that once you start in that vicinity you actually approach the equilibrium point, we say the equilibrium is asymptotically stable. Next, we say x* is a stable, but not necessarily asymptotically stable, equilibrium if the solutions x(t) stay close to x* for all initial conditions x0 starting in a vicinity of x*. So what's the picture here? Certainly if solutions go toward the point from all directions, it's asymptotically stable. What would be something that's stable but not asymptotically stable? In the plane there are many ways this can happen; for instance there may be periodic orbits around the point, periodic solutions that don't actually go toward the point but don't go away from it either. Again, the definition here is deliberately informal; there's a much more precise definition with epsilons and deltas. And probably the easiest one to formulate is this: x* is an unstable equilibrium if it is not stable. In 3D, or in several dimensions, the possible behaviors of a dynamical system near an equilibrium can get quite complicated. But in 2D, the situation basically says the following.
If there are points arbitrarily close to the equilibrium whose trajectories actually diverge, go away from it, that qualifies the equilibrium as unstable. I'll just call this unstable, though you can be a bit more specific; for linear systems there's also the idea of semi-stable, but let's not complicate things. And to be clear, we're talking about general nonlinear systems here, not just linear ones. In other words, you could have certain directions along which trajectories go toward the point, and other directions along which they don't; that's one possible scenario for an unstable equilibrium. Of course, a more obvious scenario is trajectories heading straight away from it, not necessarily along straight lines as I plot here, or spiraling out. And I should have said: asymptotically stable could mean going straight in, or it could be spiraling in, so the trajectory doesn't approach the point along a direct path but still converges. A different pattern, but still asymptotically stable. So these two pictures are asymptotically stable, this one is stable but not asymptotically stable, and this one is unstable. The point is that all these behaviors can happen very easily in a nonlinear system, and you could have several equilibria in the same system, one of one nature and others of different natures. We'll see examples of this kind of thing. So let's fix an equilibrium x* of our system and see how we can detect whether it's a stable or unstable equilibrium by methods other than just plotting the phase portrait, or the solution curves near that equilibrium.
To establish the stability of this equilibrium, first we linearize the dynamical system around x*. Here's what we mean by that. We've done this when we talked about Newton's method, but this is a different context now. We expand f in its Taylor series, or if you want, we look at the linearization of the right-hand side around x*: f(x) ≈ f(x*) + Df(x*)(x − x*). That's how it looks even if f is a vector; here f = (f1, f2, ..., fn). So what is this object Df(x*)? If f were a scalar, this would just be the derivative. But since f is a vector, it's an n-by-n matrix: remember x − x* is n-by-1, and you want to multiply it by something and still get n-by-1, so you need n-by-n. It's the Jacobian matrix of f at x*. Explicitly, at any point the Jacobian matrix has as its (i, j) entry the partial derivative of the i-th component of the right-hand side with respect to the j-th variable. And note it's not an equality, it's an approximate equality. It's as if you take the linearization of each component fi separately: each row of the Jacobian times (x − x*) gives the linear part of that component. That's the simple reason the matrix is assembled this way, each component on each row, the partial with respect to each variable, and you can see it's n-by-n. But now, we said x* is an equilibrium, so f(x*) is actually zero, and the rough conclusion is that f(x) can be approximated by Df(x*)(x − x*) for x near x*. Again, one can quantify how good this approximation is in terms of the distance between x and x*, but we'll just see this on the pictures, I guess.
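As an aside, this linearization step is easy to carry out symbolically. Here is a minimal sketch in Python with SymPy; the two-component right-hand side below is a made-up illustration, not a model from the lecture:

```python
# A sketch of linearizing a 2-D autonomous system around an equilibrium.
# The right-hand side f is an illustrative example, not the lecture's model.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([x*(1 - x) - x*y, y*(2 - y) - x*y])
J = f.jacobian([x, y])          # the n-by-n Jacobian matrix Df

# Evaluate Df at the equilibrium (0, 0); f vanishes there, so nearby
# f(x) is approximated by Df(x*) (x - x*).
A = J.subs({x: 0, y: 0})
print(A)                        # Matrix([[1, 0], [0, 2]])
```

The matrix A printed at the end is exactly the A that appears in the linearized system dy/dt = Ay below.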
And we won't be doing exact error estimates for this approximation, though one can. As a consequence, we can associate to the original dynamical system an approximation, a new one, which simply replaces the right-hand side with its linearization. Now, it's not obvious that the two direction fields and phase portraits will look similar, and in general they don't, except when x is very close to x*. When you zoom in near the equilibrium, the two dynamical systems will look similar. Now, what's the advantage of the linearized system? Well, its right-hand side is just a matrix times (x − x*). In fact, let's make a change of variables which is nothing but a shift, a translation: I'm going to call y = x − x*. What does that mean? If I'm following a solution x(t) of the original system, then since x* is constant, dy/dt is simply dx/dt. So dx/dt gets replaced by dy/dt, x − x* gets replaced by y, and the dynamical system becomes dy/dt = Ay. If you've never seen linear systems, although you've probably seen a little in an ODE course, the thing to say about this is: if it's a system with linear dependence on the variables y1, y2, ..., yn, then it can be solved. You can actually find the solution explicitly, and you can say everything about the behavior of the solutions of the system by just looking at the matrix A. That was far from true for the original system, where the right-hand side was nonlinear and there was no matrix. But the moment you linearize around an equilibrium, you get this matrix A, and then you can read a lot of information off that matrix.
Information which then translates into the behavior of the original system, but, as I said, only in a neighborhood of that equilibrium. So what is so good about systems of linear differential equations like this? Let me take n = 2 so you can see it in its full beauty, with the 1's and 2's: dy1/dt = a11 y1 + a12 y2 and dy2/dt = a21 y1 + a22 y2. That's what the system means. As I said, it can be solved explicitly. It may not be very clear how at this moment, so let's imagine A is diagonal; as usual, we start with the simplest case. Say A = diag(λ1, λ2), with zeros off the diagonal. Then the system reads dy1/dt = λ1 y1 and dy2/dt = λ2 y2, and it's not really a system, because the two equations are decoupled. So you can write y1 = c1 e^{λ1 t} and y2 = c2 e^{λ2 t}: you solve it explicitly. And so y(t) = (y1, y2) can be written as the matrix diag(e^{λ1 t}, e^{λ2 t}) times the vector (c1, c2). In fact, remember the constants: they are arbitrary, but they match the initial conditions, because at time 0 the exponentials equal 1 and the constants are the initial values. So I can write c1 = y1(0), c2 = y2(0), and for this system the solution is y(t) = diag(e^{λ1 t}, e^{λ2 t}) y(0). Again, this is for A equal to that very simple diagonal matrix. But we really want to compute the solutions when the matrix is not necessarily diagonal, and that's where the notation e^{tA} comes in. It's called the exponential of a matrix, and note there's a t here: I'm multiplying t times A and taking the exponential of that. t is a scalar, t is the time.
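As a quick numerical sanity check of the decoupled solution above, here is a sketch comparing a numerical integration of dy/dt = Ay against the closed form y_i(t) = y_i(0) e^{λ_i t}; the eigenvalues and initial condition are arbitrary choices:

```python
# Check that for diagonal A the components decouple:
# y_i(t) = y_i(0) * exp(lambda_i * t).  Values below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

lam1, lam2 = -1.0, 2.0
A = np.diag([lam1, lam2])
y0 = np.array([3.0, 0.5])

# Integrate dy/dt = A y numerically from t = 0 to t = 1.
sol = solve_ivp(lambda t, y: A @ y, (0.0, 1.0), y0, rtol=1e-10, atol=1e-12)
y_numeric = sol.y[:, -1]

# Closed-form solution at t = 1.
y_exact = y0 * np.exp(np.array([lam1, lam2]) * 1.0)
print(np.allclose(y_numeric, y_exact, rtol=1e-5))   # True
```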
So tA is, in this case, a 2-by-2 matrix. And what does the exponential of a matrix mean? Well, you know how to exponentiate a number. The exponential of a matrix, in the case where the matrix is diagonal, is simply the matrix with the exponentials of the diagonal entries on the diagonal. But how about when A is not diagonal? It turns out that you have a similar expression for the solution of the linear system, where e^{tA}, or let's call it e^B, with B = tA, is the exponential of the matrix B and is defined as follows. The definition of this new operation mimics what happens with real numbers: it uses the series. There are several ways to think of the exponential of a number, and one of them is the power series, so we define e^B as the sum over n of B^n / n!. So we're raising the matrix to powers, dividing by the scalar n!, and adding everything together. What you get in the end is a series, and it turns out this series is convergent, the partial sums have a limit, and that limit is what we define as e^B. Now, this formula is not what we'll actually use to compute the exponential of a matrix, but we need to know what the notation means when we see it: it's a matrix obtained in this fashion from the matrix tA. So you can see e^{tA} = I + tA + (t²/2!)A² + and so forth. Now, let me convince you that e^{tA} y0 does indeed satisfy the linear system that we're trying to solve.
You can think of just taking the derivative of this expression with respect to t and checking that you get what you need: A times the thing itself. The series is nicely laid out, so you can take derivatives term by term, which strictly speaking needs some sort of argument, but assuming that's possible, and it always is here, let's do it. Hold on, I'm multiplying by y0 there; let's first just differentiate e^{tA} itself. The identity term gives zero. The tA term gives A. What does (t²/2!)A² give when differentiated? The derivative of t²/2! is t, so it gives tA², which is A times tA. Continuing, you see that the derivative is just A times the same series that defines the exponential: d/dt e^{tA} = A e^{tA}. So if y(t) = e^{tA} y0, then since y0 is constant in time, it acts as a constant when you differentiate, and dy/dt = (d/dt e^{tA}) y0 = A e^{tA} y0 = A y. So you see where we started and where we ended up: it says that this thing solves the system. Now, the only other thing that needs to be said is: how do I know that I don't have other solutions of the system that are not of that form? Anybody have an idea? How do I know that all the solutions of the system look like that? Yes, it looks like an integrating factor, and you're right: you can make that work by multiplying by e^{−tA}. Start from d/dt y(t) − A y(t) = 0.
You multiply by e^{−tA}, and this expression becomes exactly d/dt [e^{−tA} y(t)] = 0. I should say you have to multiply on the left, because y(t) is a vector and the matrix is applied to it from the left; multiplying on the right isn't a possible operation. And if the derivative is zero, it means e^{−tA} y(t) is constant in time. Well, what constant can it be? Whatever it is at time zero, so it's y0. Here I should say that the exponential of the zero matrix is the identity; in that series, only the identity term survives. So then, multiplying back by e^{tA}, you get y(t) = e^{tA} y0. So I hope I've convinced you that this expression really is the general solution of the dynamical system y′ = Ay, and this vector y0 is just the initial condition. So now the only thing left is: given A, what is e^{tA}? Well, we saw, or rather we didn't do the computation, but it's actually a good exercise: if A is diagonal, let me use the letter D here, D = diag(λ1, λ2), then in the series definition, taking powers of D just raises the diagonal entries to that power, and summing up, out pop the exponentials e^{tλ1} and e^{tλ2} on the diagonal. It's just decoupled. But in general, and here's a scary phrase, Jordan canonical forms are needed. If n is three or higher, this can get quite complicated in terms of describing what the Jordan canonical form of a matrix is. But for n = 2 there are very few different cases, so it's worth listing them. Before I list them, let me say what this is. It's a theorem from linear algebra, and it says the following for any matrix; again, let's do it for n = 2.
For any two-by-two matrix A, there exists an invertible matrix U and a Jordan form matrix J such that A = U J U⁻¹; U is invertible, so there's a U⁻¹. And J can be of one of the following types. J can be diagonal with two distinct diagonal entries, J = diag(λ1, λ2) with λ1 ≠ λ2. Or it can be diagonal with the same entry repeated; in that case it's really simple, just λ times the identity, which is why we distinguish the two cases. Or, and this is important, it could be of the form with λ on the diagonal and a 1 on top, so it's no longer diagonal, so maybe I shouldn't call it D anymore; let's stay with J: J = [[λ, 1], [0, λ]]. Actually, there's one more if we allow the form [[α, β], [−β, α]]. We'll see what all these forms correspond to. So there's one such matrix J that gets associated to the original matrix A, and the relation between this J and that A is this decomposition, called the Jordan canonical form decomposition. And it might look like, okay, why is that decomposition important? I'll tell you in a second. First, in all of this, the λ's correspond to the eigenvalues of the matrix A. So assuming you know what the eigenvalues of a matrix are: for a two-by-two matrix you can have two real, distinct eigenvalues, and that would be the first case. The next two cases are when you have a repeated eigenvalue. The last is when you have complex conjugate eigenvalues; there the eigenvalues are λ = α ± iβ. So these are the three cases. Okay, so let me just say this.
So e^{tJ} is simple to compute, and that's the reason for this decomposition: it lets you compute the exponential of the matrix. Think about e^{tA} = I + tA + (t²/2!)A² + and so forth, and replace A with U J U⁻¹. The linear term is t U J U⁻¹; I put the t with the J. What's the next one? Well, what's A²? A² = (U J U⁻¹)(U J U⁻¹); the inner U⁻¹ and U cancel, so it's U J² U⁻¹. And in general this is true for any power: Aⁿ = U Jⁿ U⁻¹. So this starts to make sense why the decomposition is useful, because powers of the matrix can be computed in this relatively simple fashion: you only have to raise the matrix in the middle to the power n, while the outside factors stay U and U⁻¹, and you don't have to do anything to them. So the quadratic term is U (t²/2!) J² U⁻¹, and so forth; you can also replace the identity with U I U⁻¹. Factoring out U on the left and U⁻¹ on the right, what's left in the middle is just the exponential of tJ: e^{tA} = U e^{tJ} U⁻¹. So exponentiating the matrix, e^{tA}, is simply taking the exponential of the special-form matrix J and then multiplying on the left and right by the invertible matrix U. The key here is that e^{tJ} is easy to compute for all the types of J listed above. For instance, we saw what happens with two distinct eigenvalues: it's just exponentiating the diagonal, e^{tJ} = diag(e^{λ1 t}, e^{λ2 t}). I guess the most interesting one is the repeated eigenvalue with the 1 above the diagonal, because the other repeated type is just diagonal again.
Anybody know what e^{tJ} looks like for this matrix? It's maybe not as easy as the one before. Take powers of it, plug them into the series, and convince yourself that e^{tJ} = e^{λt} [[1, t], [0, 1]], so there's this extra factor of t that appears. That's the most complicated case, or the most unusual one, I should say. Also, for complex conjugate eigenvalues, with J = [[α, β], [−β, α]], we get e^{tJ} = e^{αt} [[cos βt, sin βt], [−sin βt, cos βt]]. So this is the picture for two-by-two systems, but it generalizes to n-by-n systems, and that's why this decomposition is very important. Now, what are we interested in? Remember where we started: we'd like to understand the dynamics, the way the solutions behave, for the linearized system. So to wrap it up: we start with our nonlinear system, it has an equilibrium, and we linearize, so now the system looks like dy/dt = Ay, where A is not just any matrix but the Jacobian matrix at the equilibrium. And how is the x coordinate system different from the y coordinate system? This is the y phase portrait, the linearized phase portrait, versus the x phase portrait; it's a translate, because remember y = x − x*. So a solution that approaches x* in the x-picture would, in the y-picture, approach zero instead. In other words, I'll just take an example.
Say the picture for the linearized system looks like this: along one direction solutions go in, along another they go out, and of course this is an unstable equilibrium for this system. Then, when we say that we approximate the nonlinear dynamical system with that one, what we mean is that there is some sort of transformation between the two pictures. There are corresponding directions, though they are no longer straight lines; they could be two curves, so it's a kind of deformed picture of the linear one. Whoops, I got it wrong; this one should go in. Again, I'm just plotting a few trajectories, but if you plot a lot of them in the linear picture and the corresponding ones in the nonlinear picture, it's almost as if you look at the linear picture drawn on a hill: it's deformed, not viewed straight on. In that sense, we say that the linearized phase portrait approximates the original phase portrait near that point, and you'll see this in the specific examples. Any questions on this? So what's the point? Basically, by reading the behavior of the solutions of the linear system, we can infer the behavior of the nonlinear system. And is the linear system easier to deal with? Well, we could solve it explicitly; we've just solved it explicitly. So the bottom line is this: the nature of the solutions of dy/dt = Ay near zero, and I should say that for this linear system there's typically only one equilibrium, namely zero, since we always shift so that we're looking at the zero equilibrium, is completely determined by the eigenvalues of A.
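The closed forms of e^{tJ} quoted above for the Jordan block and for the rotation-type block can be verified directly against SciPy's matrix exponential; a sketch, with arbitrary values of t, λ, α, β:

```python
# Verify the closed forms of e^{tJ} for the two non-diagonal Jordan types.
import numpy as np
from scipy.linalg import expm

t, lam = 0.7, -0.4

# Jordan block J1 = [[lam, 1], [0, lam]]:  e^{tJ1} = e^{lam t} [[1, t], [0, 1]].
J1 = np.array([[lam, 1.0], [0.0, lam]])
closed1 = np.exp(lam * t) * np.array([[1.0, t], [0.0, 1.0]])
print(np.allclose(expm(t * J1), closed1))    # True

# Rotation type J2 = [[a, b], [-b, a]]:
# e^{tJ2} = e^{a t} [[cos bt, sin bt], [-sin bt, cos bt]].
alpha, beta = -0.3, 2.0
J2 = np.array([[alpha, beta], [-beta, alpha]])
c, s = np.cos(beta * t), np.sin(beta * t)
closed2 = np.exp(alpha * t) * np.array([[c, s], [-s, c]])
print(np.allclose(expm(t * J2), closed2))    # True
```

This also settles the sine placement: the sin goes with the upper-right entry when the β sits in the upper-right of J.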
Again, I only showed you the 2-by-2 case, but it's true in any dimension that the eigenvalues determine whether the solution approaches zero, that is, the equilibrium, or not. Think about the cases. What does it take for the first one, diag(e^{λ1 t}, e^{λ2 t}), to approach zero as t goes to infinity? We're always thinking of t increasing. You need λ1 and λ2 both negative. If one of them is positive, it means that along at least one direction the solution goes away from zero. Take the second one, e^{λt}[[1, t], [0, 1]], and remember it's not just this matrix, it multiplies an initial condition. For that to go to zero, it's enough that λ is negative. Why is it enough? The pure exponential entries go to zero exponentially, and the t e^{λt} entry still goes to zero, just not as fast: even though the t factor looks like it's growing, if λ is negative the exponential wins and the product goes to zero. And in the complex case, what needs to happen so that all the entries of e^{αt}[[cos βt, sin βt], [−sin βt, cos βt]] go to zero? α has to be less than zero, and α is exactly the real part of the eigenvalues. So let's summarize. If the real part of λ is negative for all eigenvalues of A, then y* = 0 is asymptotically stable, which translates, and again, remember A is the Jacobian matrix at the equilibrium x*, into x* being asymptotically stable. If the real part is negative and you have complex eigenvalues, λ1,2 = α ± iβ with β ≠ 0, of course, and α < 0, then what that translates into is spiraling in, because you have some rotation given by the cosine and sine. Okay, so that's nice, because now you can go to the nonlinear dynamical system, the one that you are actually interested in.
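The summary above can be packed into a tiny helper; this is a sketch (the function name and tolerance are my own choices), implementing exactly the criterion just stated:

```python
# Classify the equilibrium y* = 0 of dy/dt = A y by eigenvalue real parts.
import numpy as np

def classify(A, tol=1e-12):
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    # Zero real parts: fine for the linear system (stable, goes in circles),
    # but the linearization is inconclusive for the nonlinear system.
    return "borderline"

print(classify(np.array([[-1.0, 0.0], [0.0, -2.0]])))  # asymptotically stable
print(classify(np.array([[2.0, 0.0], [0.0, -1.0]])))   # unstable
print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # borderline
```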
Let's see. The code I showed you, Example 5.1, is a variation of 4.1, which we haven't talked about, but it's basically two species of trees that compete for survival. Each has a certain intrinsic growth rate and a maximum stem population, so once again it's logistic growth for each, I think with something factored out, and then there's a competition term between them. You can use pplane or whatever, but the point is this: say you find some number of equilibria. Then you take one equilibrium at a time for the nonlinear system, linearize, look at the eigenvalues of that matrix, and draw the conclusion. Of course you can do it all at once, but you probably know by now that for different problems it may not be as easy; anyway, here the code just takes all the equilibria in turn. For the (0, 0) equilibrium, you see what happens: it computes Df, and Df was essentially coded in by hand, and it's done at a symbolic level, which is not that unusual. If you can write the system by hand, you can write it in the computer, and the derivative can be taken symbolically; I haven't heard of a case where you can write a function and the computer cannot take its derivative. As long as you can express the function, the computer knows the functions involved and can differentiate. It's the other way around with integration, that's what's difficult. So the code simply computes the Jacobian matrix, then evaluates it at each equilibrium one at a time, and the last thing, eig, computes the λ's. And it does that numerically, so at this point it's a numeric computation, and in principle it has error.
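As an aside, the workflow just described, find the equilibria, linearize at each, inspect the eigenvalues, can be sketched end to end in SymPy. The competition model and its coefficients below are illustrative stand-ins, not the lecture's actual Example 5.1 code:

```python
# Find all equilibria of a two-species competition model, linearize at
# each, and list the eigenvalues.  Coefficients here are made up.
import sympy as sp

x, y = sp.symbols('x y', real=True)
half = sp.Rational(1, 2)
# Logistic growth for each species plus a competition term.
f = sp.Matrix([x*(1 - x - half*y), y*(1 - y - half*x)])

equilibria = sp.solve(f, [x, y], dict=True)   # four equilibria for this model
J = f.jacobian([x, y])                        # symbolic Jacobian Df
for eq in equilibria:
    A = J.subs(eq)                            # evaluate Df at the equilibrium
    print(eq, A.eigenvals())                  # eigenvalue -> multiplicity
```

For this model the (0, 0) equilibrium gives the eigenvalue 1 with multiplicity 2, both positive, so just as in the lecture's example the origin comes out unstable.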
I mean, that eigenvalue may not be exactly 0.07, it's some sort of approximation, but it's a level of trust: if it comes out 0.07, it probably means it's positive. Okay, so you see those eigenvalues are both real and positive, so that means (0, 0) is not just not asymptotically stable, it's not even stable, it's unstable. The case we've covered so far is: if both have negative real parts, then it's an asymptotically stable equilibrium. Okay, now let's look at this one. Why would this one be unstable? Again, you look at the linearization and you see there is one direction, corresponding to the eigenvector for the positive eigenvalue, along which the solution moves away: it has a factor e^{λ2 t} with λ2 positive, which goes to infinity. So you can classify the equilibria by this simple criterion. And I should emphasize the key point: the real part of λ should be negative for all eigenvalues to conclude asymptotic stability. It gets tricky when some eigenvalues actually have zero real part; for nonlinear systems that requires a difficult analysis. For linear systems, no issue: it just means solutions go around in circles, so it's stable but not asymptotically stable. But what's clear is that if the real part of λ is positive for at least one eigenvalue of A, then x* is unstable, unstable in the sense that solutions move away from that point. Now, again, we're talking about a neighborhood of the point, so you cannot extrapolate this to the whole phase portrait. And just to convince you of that, go to pplane, because, again, pplane is good for two-dimensional systems, and look at this van der Pol equation. We haven't talked about it, but it's actually mentioned in your Chapter 5; it's the RLC circuit.
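As a sketch of the computation pplane reports for the van der Pol example below, here is the Jacobian at the origin and its eigenvalues. The parameter value μ = 2 is an assumption on my part, chosen because it makes the eigenvalue 1 repeated, matching the output described in the lecture:

```python
# Jacobian of the van der Pol system at the origin, and its eigenvalues.
# The system is written in first-order form: x' = y, y' = mu*(1 - x^2)*y - x.
# mu = 2 is an assumed parameter value (it yields a repeated eigenvalue 1).
import sympy as sp

x, y, mu = sp.symbols('x y mu')
f = sp.Matrix([y, mu*(1 - x**2)*y - x])

A = f.jacobian([x, y]).subs({x: 0, y: 0, mu: 2})
print(A)              # Matrix([[0, 1], [-1, 2]])
print(A.eigenvals())  # {1: 2}: eigenvalue 1 repeated, positive real part
```

Positive real parts mean the origin is unstable for the linearization, which is exactly the conclusion drawn in the lecture before comparing against the nonlinear phase portrait.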
So it comes from an electrical circuit. But, again, let me just show you. If you look at the direction field, it looks like this, and pplane is plotting the solutions forward and backward in time, so you cannot really see when I'm clicking, but what happens is there's an equilibrium here at zero. It's easy to see by hand. Here's the system: you set the right-hand side equal to zero, you get x = 0, and then you have to have y = 0. So, obviously, that's an equilibrium, and you can find it through the menu, but pplane also gives you the Jacobian at that point automatically. How would you do it by hand? You would just take the partial derivatives of the first and second components with respect to x and y, get a two-by-two matrix, and then evaluate it at (0, 0). No, no, this is the original A; it can be put in one of those Jordan forms, or rather, it is always equivalent to one of those forms. But you don't have to do that: you just look at the eigenvalues and you know how the exponential of that matrix will behave. So what are the real parts? Positive. Oh, those are the eigenvectors; we're looking at the eigenvalues, and the real part is one. In fact, both are one, a repeated eigenvalue, so it's positive. What is the conclusion? It's unstable; in this case x* = 0 is the equilibrium we're analyzing, and the conclusion is that it's unstable. And what does the picture say? You can display the linearization, and you see it just goes away in all directions, and it keeps going away forever. Whereas in the nonlinear system, does it go away forever? No: you see it approaches this limit cycle. So this just shows you how
I mean, the picture you saw there matches this, but only in a neighborhood of the equilibrium, right? If you move away, because of the nonlinearity, things are totally different, okay? So some of the homework — actually, the whole homework — you can do with pplane; it's just a matter of plotting the phase portrait and the linearization. Now, remember, if you have several equilibria, you're going to have a different linearization at each equilibrium, and they might be totally different: some stable, some unstable. Yes, the eigenvectors give you the directions in the linearized system, but in the nonlinear system — let me find a good one to show; that's not a good one, and that's not good either — I think we should do the competing-species example here. Okay, so you see here this equilibrium is unstable, right? For this particular problem. The linearization — and you don't have to do anything, it's just a few clicks — looks exactly like one of those typical pictures: the trajectories are hyperbolas, right? It matches the nonlinear picture in a neighborhood, but look at this line. You should display the direction vectors to see which way the flow goes; let's say it goes this way. Along this line there's a positive eigenvalue — at least one with positive real part. In the linearization it's a straight line, whereas here in the nonlinear system it's not; it's a curve, right? So in the nonlinear system you don't have lines, you have curves, called the stable curve and the unstable curve. If you drew the tangent lines to these curves at the equilibrium, you would recover the eigendirections, okay? And I should say one other thing — it's kind of crammed in at the end here — about discrete dynamical systems. I'll just say the following. Suppose I have x_{n+1} = g(x_n), right?
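Here is a sketch of that competing-species analysis in code (my own choice of a classic coefficient set — the model shown in class may differ). The Jacobian at the coexistence equilibrium has one positive and one negative eigenvalue, a saddle, and the eigenvector columns are the tangent directions to the nonlinear stable and unstable curves:

```python
import numpy as np

# A classic competing-species model (coefficients are my assumption):
#   x' = x (3 - x - 2y),   y' = y (2 - x - y)
# The coexistence equilibrium is (1, 1): both parenthesized factors vanish there.

def jacobian(x, y):
    # Hand-computed partial derivatives of the right-hand side.
    return np.array([
        [3.0 - 2.0 * x - 2.0 * y, -2.0 * x],
        [-y,                       2.0 - x - 2.0 * y],
    ])

lam, vecs = np.linalg.eig(jacobian(1.0, 1.0))
print(lam)   # eigenvalues -1 +/- sqrt(2): one negative, one positive -> a saddle, so unstable
# Columns of `vecs` are the eigendirections: at the equilibrium they are tangent
# to the nonlinear system's stable curve and unstable curve, respectively.
print(vecs)
```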
And x star is an equilibrium, meaning g(x star) = x star. First of all, what does it mean for it to be asymptotically stable? It's the same idea: the iterates go toward that point, right? So the eigenvalue method is: linearize around x star. What does that mean? It means take Dg(x star) — the Jacobian — and call this matrix A, yeah? Then look at its eigenvalues. The linearized system is, of course, in some variable y, not x: y_{n+1} = A y_n. And in this case it's very easy to solve explicitly: x_n = A x_{n-1} = A (A x_{n-2}) = A^2 x_{n-2}, and you can continue like this to get x_n = A^n x_0. So this is not e to the tA; it's powers of A. Well, when does A^n go to zero? The answer is: when the eigenvalues are less than one in absolute value, because in the canonical formula you're raising the eigenvalues to the power n. Powers of a number — when do they go to zero? When that number is between negative one and one, if it's real; and if it's complex, when its modulus is less than one, right? So the criterion for stability in the discrete case is simply: if the modulus of lambda is less than one for all eigenvalues of the Jacobian of the right-hand side, then x star is asymptotically stable. That's true for the linear system; the question is whether it always translates to the nonlinear system the same way, and that's always more delicate — the answer is not always yes. And again, just to point back to that problem you've had in your homework: when you look at the eigenvalues of the Jacobian there, it's the modulus that should be less than one — not the real part that should be negative, right — for stability. So it's very important to know whether you're talking about a discrete system or a continuous system when you make the decision, okay? Now, there's also a kind of visual representation of these discrete systems; we'll talk about that some more.
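The discrete criterion — eigenvalue moduli, not real parts — can be sketched the same way as the continuous one (again my own function names, not the lecture's). The last line checks numerically that the powers A^n really do shrink to zero when every modulus is below one:

```python
import numpy as np

def classify_discrete(A, tol=1e-9):
    """Classify a fixed point of x_{n+1} = A x_n by the moduli of the eigenvalues."""
    mods = np.abs(np.linalg.eigvals(A))
    if np.all(mods < 1.0 - tol):
        return "asymptotically stable"
    if np.any(mods > 1.0 + tol):
        return "unstable"
    return "borderline (modulus 1: inconclusive for nonlinear systems)"

# A rotation scaled by 0.9: complex eigenvalues 0.9 e^{+/- 0.5 i}, modulus 0.9 < 1.
A = 0.9 * np.array([[np.cos(0.5), -np.sin(0.5)],
                    [np.sin(0.5),  np.cos(0.5)]])
print(classify_discrete(A))                            # asymptotically stable
# Sanity check: the powers A^n go to zero.
print(np.linalg.norm(np.linalg.matrix_power(A, 200)))  # tiny (on the order of 0.9**200)
```

Note the contrast with the continuous case: this matrix's eigenvalues have positive real part (0.9 cos 0.5 > 0), yet as a discrete map it is stable, because only the modulus matters.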
But I hope that gives you enough. And remember, among the handouts I gave you and the links on the website, one of the books actually has a whole slew of examples of discrete dynamical systems. This homework should basically be a breeze, because you have everything from the previous one — they're the same problems; I picked them to be the same. All you have to do is include the eigenvalue computation, okay? And it's not even done by hand, so it should be easy. So with the extra time you'll have from such a simple homework, go to those other examples and see this kind of analysis carried out for those systems. Okay, thank you.