Let me say for the record that these recorded lectures should be used only when needed, when you have to miss class, but not on a regular basis. Participation in class is key, and I think I'll start taking roll just to keep track of attendance. It's happened in the past that we'd find ourselves at the end of the semester, students would come take the exams, wouldn't do well, and wouldn't know why. They'd say, "I've been watching the lectures," but there's still a disconnect if you rely on the lectures too much. So, let's see: I obviously printed too many copies, but I have a few handouts to give out, and if you look at the course website you'll see most of them listed there. The first ones, including the one labeled C here, are on HIV infection modeling. This particular one, which I call the baby model, is a single sheet I put together for a model describing the dynamics of an HIV infection in an organism. It describes the variables, there are four of them, and there are four rates of change, and you can see that the rates of change depend, in a nonlinear fashion, on the variables themselves. So this is a dynamical system in four dimensions, with a bunch of parameters. I don't have time to discuss the significance of these parameters, but you can take the numbers and initial conditions given here, run the simulation, and see what happens. As for code: MATLAB has built-in solvers for systems of differential equations of any order, but if you want a GUI, there is one in the same place as pplane and dfield.
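Even without the GUI, the baby model can be simulated in a few lines. In this Python sketch, the T equation, dT/dt = lambda - mu*T - k*T*V, follows the handout as quoted in the lecture; the latent/infected/virus equations and every parameter value below are illustrative stand-ins, not the handout's actual numbers.

```python
# Forward-Euler sketch of a four-variable HIV "baby model" (T, L, I, V).
# Only the T equation is taken from the lecture; the L, I, V equations
# and ALL parameter values are illustrative guesses.

def hiv_rhs(state, lam=10.0, mu=0.02, k=2.4e-5, q=0.1,
            mu_L=0.02, a=0.05, delta=0.7, N=100.0, c=3.0):
    T, L, I, V = state
    dT = lam - mu*T - k*T*V               # uninfected target cells
    dL = q*k*T*V - (mu_L + a)*L           # latently infected cells (assumed form)
    dI = (1.0 - q)*k*T*V + a*L - delta*I  # productively infected cells
    dV = N*delta*I - c*V                  # free virus
    return (dT, dL, dI, dV)

def simulate(state, days, dt=0.01):
    """Crude fixed-step Euler; an ode45-style adaptive solver does this better."""
    for _ in range(int(days / dt)):
        rates = hiv_rhs(state)
        state = tuple(s + dt*r for s, r in zip(state, rates))
    return state

final = simulate((500.0, 0.0, 0.0, 1e-3), days=200)
```

With these made-up parameters the infection happens to die out and T settles near lambda/mu = 500; a time plot of all four components reproduces the kind of picture the GUI gives.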
If you go to this website at Rice University, there is ODESolve. I may have shown you this already, but if not: it's similar to pplane in the way it works, with the exception that you can now change the number of equations; remember, pplane only had two. So you can type in the equations and put in the parameters. Let me give you this handout too. Notice that there are only places for six parameters; if you need more, you can use them inside the expressions, and if you still run out, you can just hard-code the values in the equations. I'll leave this as an exercise, I won't do it completely here, but you can get creative with the names of the parameters: lambda minus mu times T minus k times T times V, and so forth. L, I, and V each have a significance that's indicated on the handout. Then you have initial conditions, and you have the solution interval, so t runs between this value and this value. That may need to be changed depending on the units; I believe this model is in days. Then you select the solver: typically you start with ode45, which is a Runge-Kutta solver. And you can do a time plot, so you can see, starting from those initial conditions, what happens with the four variables. Yes, question? Lambda, and so forth, that's right; mu takes a little bit of work, it's times ten to the negative three. Well, as I said, you can use these expressions.
In these expressions you can actually have a parameter that depends on time, so you can have variable coefficients in the equations; you could have a non-autonomous system. But again, you can use these spaces for the extra parameters, and if you run out of everything, you just type the values into the equations. You do have to fill in everything, otherwise it will tell you something is not defined. Let me not do this completely, but here is a hint: once you're done with the work, you save the system, and it saves with the extension .ods; you give it a name, say HIV. Now when you close this, you can load the system again, so you don't have to type it all in, or I believe you can even call it from ODESolve as HIV.ods. The catch is that it may have been saved somewhere else, so you have to see where it was saved. I don't know why it's not showing up here, but anyway, you can just load the system. So that's one thing. And there is a much longer, more in-depth review article, from 1999, that explains how that model comes into being, by one of the very famous mathematicians who has worked in infectious-disease modeling, Alan Perelson. Again, you'll see some of the graphs, which you can reproduce on your own. If you look through it, the first part is a lot of the biology, but soon you'll start seeing the equations, which are maybe not exactly the same as in the baby model, but are in the same ballpark.
So it's an interesting reference article about modeling HIV. Again, we won't be talking about this much, but I invite you to at least look at what the problems and limitations are with higher-dimensional dynamical systems. When you have a two-dimensional dynamical system, two variables, you have the plane that you can analyze: nullclines, stability, steady states. You can see the solutions; it's much simpler. When you go to four dimensions, the only thing you can display is the time evolution for specific initial conditions, though there are other tricks by which you can represent your data: a 2D or 3D plot of a selected subset of the variables. If you look in the gallery, the Lorenz system has three variables; it has to do with chaotic behavior. Here is the time evolution; again, it's a dynamical system with three state variables. But you can also do a 3D plot, x versus y versus z, which gives the famous strange attractor. I think you can even rotate it, though I forgot how. Notice that in that picture you don't see time; you just see the trace that the particle, that initial condition, leaves as it moves through space. But when you go four-dimensional, say two spring-mass systems attached in series, then you have a four-dimensional dynamical system: four state variables.
The time plot may or may not tell you what you want, so you could plot, for instance, for the first spring, x versus u, the position versus the velocity; that may or may not tell you something. You can plot at most two or three variables; you cannot plot four. One thing I want to point out here: I have two springs, or it could just as well be a double pendulum, similar thing. The first pendulum, or the first spring-mass system, is affected by the motion of the second one, and the second one is time-dependent. So if you focus only on the position and momentum of one component of the system, what kind of dynamical system is that: autonomous or non-autonomous? Well, you can think of the first pendulum as being forced by the second pendulum, and the second pendulum is time-dependent, so your system has a forcing that's time-dependent. Just look at the equations: if you try to isolate the first system, the first two equations, you see there is the y, the position of the second pendulum, or second spring-mass, and that is time-dependent. So if you isolate the first one, it is a non-autonomous system. Now, for a non-autonomous system, what happens with the solution curves if you plot them in a phase plane? What's different between this and an autonomous two-dimensional system? Let me go back to pplane; take anything like predator-prey. What do you notice about the trajectories, the solutions, of this autonomous system?
They never intersect. Two distinct trajectories never intersect, and if you start with a solution and follow that trajectory, it never crosses itself at a point; in this case it's a periodic solution, which you can see if you plot both variables versus time. Let's see, I don't know why I can't see the versus-time plot; let me try competing species, like the ones we talked about. Okay, you can see x versus t; I think we have to click on a previously computed solution to see it. So you see both x and y versus t for this particular trajectory. But the trajectory itself never self-intersects, and no two trajectories intersect. This is a signature of an autonomous system. Why? Because in an autonomous system it doesn't matter when you start: if you start at a specific location, you follow your solution in a unique fashion. There are exceptions, of course: if the right-hand sides are not nice functions, you may run into problems with uniqueness. But assuming the right-hand side of your dynamical system is, say, continuously differentiable, so its derivatives are continuous, then, given an initial condition, the solution is unique. That means that for this system I cannot have a solution that self-intersects, or two different solutions that intersect. Whereas when the system is time-dependent... let me put in a time-dependent part; I'm curious what happens if I add some sort of forcing, say sine of t.
See, pplane doesn't even accept time-dependent terms, I believe. Let me try t. Yep, pplane is for autonomous systems only; if you need a non-autonomous system, even in 2D, you have to move to the other tool. So again, this system is autonomous, and solutions will not intersect. By default it plots a time plot, but you can switch to the phase plot. Now, there is something funny going on there; what do you think is happening? It looks like the trajectory is not coming back and being periodic. If you let t run between 0 and 100, it feels like it's truly moving away. Can this happen in an autonomous system? In principle it could, but it's more likely the error in the computation that's moving it farther and farther away. Because the system is autonomous, the trajectory should not self-intersect; if it really were moving away, it should spiral outward, but I think what we see here is numerical error. So now let me show you a non-autonomous one: forced oscillation. In the forced oscillation example, I don't know if you can see it, but there is an extra term in the equation for the velocity, excuse me, for the acceleration, the derivative of the velocity. There is a force that is time-dependent and oscillating: cosine of some frequency, cosine of 2t. So it's a forced spring-mass system with some damping, and by default you get the time evolution here. But if you do a phase plot instead of a time plot, the trajectory does self-intersect. Maybe that's where you started, x0, and as the solution tries to follow the direction field, the direction field itself changes with time, so the trajectory is constantly adjusting to a new direction field.
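To see the time dependence concretely, here is a minimal Python sketch of a damped, forced spring-mass system, x'' + c x' + k x = cos(2t); the parameter values are illustrative. Starting from the same point (x0, v0) at two different times produces two different trajectories, which is exactly why phase curves of a non-autonomous system can cross:

```python
# Damped, periodically forced oscillator: x'' + c x' + k x = cos(2 t).
# Because the forcing depends on t, starting from the same point (x0, v0)
# at two different start times gives two different trajectories -- the
# signature of a non-autonomous system. k and c values are illustrative.
import math

def step(t, x, v, dt, k=1.0, c=0.2):
    # forward Euler on x' = v, v' = -k x - c v + cos(2 t)
    return (x + dt*v, v + dt*(-k*x - c*v + math.cos(2.0*t)))

def run(t0, x0=1.0, v0=0.0, dt=0.001, steps=5000):
    t, x, v = t0, x0, v0
    for _ in range(steps):
        x, v = step(t, x, v, dt)
        t += dt
    return x, v

a = run(t0=0.0)   # start at (1, 0) at time t = 0
b = run(t0=1.0)   # same point, started at time t = 1
print(a != b)     # True: the flow depends on the start time
```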
By the way, that's the reason you don't see any direction field in ODESolve: how could you display it? You'd need a really fancy visualization showing how the direction field changes with time. Another consequence: if you start at the same location but at a different time, the solution will do something different, because at that different time the force is different. All of these things are interesting for understanding, through these examples, the properties of continuous-time dynamical systems. So I hope you can use this tool. Any questions on this HIV material? Okay. The last handout is a model of cancer, not so much cancer growth as cancer treatment, using a competition model. I only made copies of the first two pages, but you can access the whole article; there is a myriad of examples of models. This one is in a book that I think appeared in 2007, chapter nine, so you can read the whole chapter. The models one starts with are pretty basic, like the whale problem if you want, but it very quickly moves to more realistic ones. It starts with what you've seen in logistic growth for each population; of course, here x1 is the concentration of normal cells and x2 is the concentration of cancer cells, so there is some sort of competition between normal and cancer cells. And the point, which we're going to revisit, though not necessarily on this example, is that treatment means you modify the rate of change, the growth rate, of the normal and cancer cells with certain terms. The article talks about different treatment strategies that would drive what to zero? x2, the cancer cells, to zero.
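As a toy version of that idea, here is a hypothetical two-species competition model with a treatment term -u*x2 added to the cancer equation; the equations and all coefficients are my own illustrative choices, not the article's.

```python
# Hypothetical competition model with treatment (NOT the article's equations):
#   x1' = x1*(1 - x1 - 0.5*x2)            normal cells: logistic + competition
#   x2' = x2*(1 - x2 - 0.7*x1) - u*x2     cancer cells, treated at rate u
# A strong enough u drives x2 -> 0 while x1 recovers toward its capacity.

def step(x1, x2, u, dt=0.01):
    d1 = x1*(1.0 - x1 - 0.5*x2)
    d2 = x2*(1.0 - x2 - 0.7*x1) - u*x2
    return x1 + dt*d1, x2 + dt*d2

x1, x2 = 0.5, 0.5
for _ in range(10000):          # 100 time units of treatment at rate u = 0.5
    x1, x2 = step(x1, x2, u=0.5)
```

With u = 0.5 the cancer population decays toward zero and the normal cells approach carrying capacity; with u = 0 the same code shows stable coexistence instead, since the competition here is weak.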
There are other examples: chapter two, I think, talks about epidemic dynamics; this one does the SIR epidemic. So that's another source. And there are two more, which I don't believe I made copies of: the two books I mentioned in the syllabus that we're going to be referring to. One is the mathematical models for life sciences book, and you can see there are two chapters, one on continuous-time dynamical systems and one on discrete-time dynamical systems. (Let me check the microphone; I think it was off, then on, then off again. Sorry.) All right. So in the SIR model there are three types of individuals in a population. There is the susceptible population: individuals that can be infected but are not. There is the infected population. And the rest are the recovered, immune population: not infected, but no longer susceptible, because they've had the disease. These three variables appear in the model, which I couldn't display. One assumption is that s plus i plus r equals a constant, the total population. Of course, this assumes the disease doesn't kill anyone, so the population stays the same over time; typically that's not true, but it's a simplifying assumption. Then the rates of change are as follows. There's a constant, and I may not be using the same letters for the constants, but you can see the terms are of the same type as the competition or predator-prey terms: a product counting the interactions between the susceptible and infected populations. So the susceptible population decreases due to that interaction.
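The three SIR rates of change can be sketched in a few lines; the equations below are the standard SIR form being described here, with illustrative rate constants b and g:

```python
# Euler sketch of the SIR model: s' = -b*s*i, i' = b*s*i - g*i, r' = g*i.
# The three right-hand sides sum to zero, so s + i + r stays constant,
# matching the fixed-total-population assumption. b, g are illustrative.

def sir_step(s, i, r, dt=0.01, b=0.5, g=0.1):
    ds = -b*s*i          # susceptibles lost to infection
    di = b*s*i - g*i     # infections gained, recoveries lost
    dr = g*i             # recovered (immune)
    return s + dt*ds, i + dt*di, r + dt*dr

s, i, r = 0.99, 0.01, 0.0
for _ in range(10000):           # 100 time units
    s, i, r = sir_step(s, i, r)

print(round(s + i + r, 9))       # still 1.0: total population is conserved
```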
The infected population, of course, grows when infected individuals interact with susceptible ones, and there is also a decay term saying that once you're infected, after a certain amount of time, at a certain rate, you become immune; you get cured, I guess. And the last equation, I should say that this is the model for problem number 10 in chapter 4, has a gamma here, something like that. One thing you should notice: a and gamma are parameters, and one could do some sort of sensitivity analysis with respect to these parameters. This is from the case-studies book by, I think, Caldwell and Ng; see also problem number 10 in our book, which discusses the same model.

Okay, so let me move to discrete systems and tell you what the difference is, by making a parallel with continuous dynamical systems. In a discrete dynamical system, we usually have a change over a fixed period of time that is described as a function of the current state. So x is a state variable, and there could be not just one but many state variables. And delta t is some fixed time period: one second, one day, one year, depending on the problem. The point is that whatever experiment we do, we only care about the change during each such period of time; it's like sampling. The model tells us that the change occurring during that period depends somehow on the current state. For all practical purposes we will assume delta t is one; if it's not one, you just absorb it into the right-hand side.
So really what we have is a law, we'll see why it's a dynamical system, that says the change in x is a function of x: delta x = f(x). Contrast this with a continuous-time dynamical system, where the law gives an infinitesimal change of the state variable as a function of the current state. Typically we think of delta x as a change, and again it could be a vector with several components: the new, or next, value of x minus the current, or old, value, as a function of the old value. Because the model gives the change delta x as a function of x, when you compute the new state you get x_new = x_old + f(x_old). This is conveniently written with a new function, call it g, of the old state: x_new = g(x_old), where g(x) is just x plus the right-hand side of your model, g(x) = x + f(x). Now, this is different from a continuous-time dynamical system, where you have to solve a differential equation. Here, what do you have to do? You iterate. You start with x0, the initial condition; then x1 = g(x0), x2 = g(x1), and so on. That's why it's convenient to have g: you just apply g to the current state to get the new state, rather than using f, which gives you the difference, to which you would then have to add the old x.
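As a toy scalar example, here is the update map g(x) = x + f(x) iterated from an initial condition; the particular f, a logistic-style change f(x) = r*x*(1 - x), is a hypothetical choice for illustration:

```python
# Iterating a discrete system: the model gives the change f(x), and the
# update map is g(x) = x + f(x). The f here is a hypothetical
# logistic-style change; any f works the same way.

def f(x, r=0.5):
    return r * x * (1.0 - x)

def g(x):
    return x + f(x)          # new state = old state + change

x = 0.1                      # initial condition x0
orbit = [x]
for _ in range(50):          # x1 = g(x0), x2 = g(x1), ...
    x = g(x)
    orbit.append(x)
```

With r = 0.5 the orbit climbs from 0.1 and settles at x = 1, where the change f vanishes, anticipating the steady-state discussion.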
In general you have x_{n+1} = g(x_n). Say there are two state variables, x1 and x2. What is this iteration doing? It takes the current state x_n and computes the new state. How is the new state computed? Going back to the original equation: the difference, the vector from x_n to x_{n+1}, equals exactly f evaluated at x_n. So if you were to display this graphically, you would see f evaluated at each state, with those vectors added tip to tail: the step out of x_{n+1} starts where the step into x_{n+1} ended, and so forth. In a way, the orbit is hopping along the direction field given by the right-hand side of the model. But let's talk about a particular case, and then we'll do an actual model. The particular case is when g is linear, that is, when f is linear. What does it mean for f to be linear? It means f is just a matrix multiplication; call the matrix B, so f(x) = Bx. Writing it out, the state is x1 through xk, and I want to change the dimension label from n to k, because n is going to stand for the iteration number; B is the k-by-k matrix with entries b11 through bkk. That's what it means for f to be linear. And that makes g what? g(x) = x + f(x) = x + Bx = (I + B)x: the identity matrix plus B, times x.
Bottom line: if your right-hand side is a matrix multiplication, that's a very special and very easy case to analyze, because the iterations go as follows. Write A = I + B, so x_{n+1} = A x_n. What is x1? x1 = A x0. x2 = A x1 = A^2 x0. In general, x_n = A^n x0, the n-th power of the matrix times x0. Now, this of course won't always be the case, but imagine A is a diagonal matrix. How do you raise a diagonal matrix to a power? Multiplying it by itself n times just raises the diagonal entries, which are the eigenvalues of the matrix, to the power n. And what can you say as n goes to infinity? There are only a few possibilities. In this special case, suppose lambda, which could be complex, but let's say it's real, has absolute value strictly less than one, so it's between negative one and one. What happens with the powers of a number whose absolute value is less than one? They go to zero as n goes to infinity. So if both eigenvalues are less than one in absolute value, both diagonal entries go to zero, meaning A^n goes to zero, which means x_n goes to zero as n goes to infinity. What does that mean in the picture? It says that if I start with x0 here, I hop on the first vector and reach x1, but eventually I go to zero, not necessarily in that exact fashion, but in some fashion. And that tells you something about the steady state. What is the steady state of this system?
Well, unfortunately you have to go back and forth between the original system, stated as a change in x, and the iteration system. The easiest is to ask: what is a steady state for this dynamical system? A steady state is a state that, once you're there, you stay there forever; there is no change. That means f is zero at that state: a steady state is an x* for which f(x*) = 0. So just as in a continuous dynamical system, you take the right-hand side, set it equal to zero, and find where it vanishes. Now tell me: in the linear case, where f was just multiplication by the matrix B, what happens? If B is invertible, then zero is the only steady state. If B is not invertible, zero is still a steady state; zero is always a steady state, but there might be others. In fact, if B is not invertible, solving Bx = 0 gives a whole line, or some subspace, of zeros. That's a possibility, but it's a degenerate case, and it means the zero steady state is not asymptotically stable: things don't go to it, and things starting nearby on that line just stay where they are. So the iteration tells you the nature of the steady state, whether it's stable, things go to it, or not. That's when the problem is linear; if the problem is nonlinear, you can have all kinds of crazy behavior, even in 1D. You deal with it case by case: you have a problem, you look for the steady states, then you analyze each one.
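For the diagonal linear case above, the convergence to the zero steady state can be checked directly; the matrix entries here are arbitrary values with absolute value less than one:

```python
# Linear discrete system x_{n+1} = A x_n with a diagonal A: powers of A
# just raise the diagonal entries (the eigenvalues) to the n-th power,
# so if both eigenvalues have absolute value < 1, every orbit -> 0.

A = [[0.9, 0.0],
     [0.0, -0.5]]

def apply(A, x):
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

x = [1.0, 1.0]
for _ in range(200):
    x = apply(A, x)

print(x)   # both components are numerically zero: 0 is a stable steady state
```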
Now let me describe the docking problem. You have a spacecraft that you can only maneuver at discrete moments of time; there are some delays, if you want. You'd like to dock it at, say, the International Space Station. This is a simplistic situation: you're already very close, you can move in only one direction, and you want to be able to stop at the dock, at x equal to, well, whatever the docking position is. You start with some initial velocity, and you control the craft by modifying its acceleration, with the goal of arriving at the dock with velocity zero. Here is the control, simplified a little: a_n is the acceleration at time t_n = n delta t, and v_n is the velocity at time t_n. The control law says that the acceleration is proportional to the velocity, with a negative constant: a_n = -k v_n with k positive. In other words, if the velocity is in the positive direction, you want to brake, so you apply a negative acceleration. The law that comes out of this modeling is that the change in velocity from the previous step to the current one involves, with some constants, the acceleration at the current time and the acceleration at the previous time; it works out to v_{n+1} - v_n = -ck v_{n-1} - wk v_n.
But this doesn't quite look like a dynamical system the way we want it. Remember what we want: the change in the state of the system as a function of the current state. The way it stands, it says v_{n+1} - v_n = -ck v_{n-1} - wk v_n, which is really a recursion relation, a second-order recursion relation, because you use this equation to find the new velocity from the two previous velocities; that's why we say it's second order. In other words, v_{n+1} = (1 - wk) v_n - ck v_{n-1}. But this requires the two previous values of the velocity at each time step; in particular, it requires the first two, v0 and v1, to get going: you need two values of the velocity to compute the new one, v2 for instance. Sometimes you can handle such recursion relations directly, and there are ways to solve them; note also that everything here, w, k, and c, is constant, so the relation is linear in the unknown v. But the key modification is to rewrite it as follows: take the state variables to be the current velocity and the velocity at the previous step, so x1 is the current velocity and x2 is the previous velocity. With this, you can rewrite the second-order recursion relation as a first-order one. It's exactly what you do with differential equations, when you rewrite second-order equations as first-order systems. Let's do this really quickly.
So what is the change in the state of the system? It's the new (x1, x2) minus the old, which is (v_{n+1} - v_n, v_n - v_{n-1}). The first component is -wk v_n - ck v_{n-1}, and the second is v_n - v_{n-1}. One more step: with x1 = v_n and x2 = v_{n-1}, you can read off the matrix. The change is exactly B x with

B = [ -kw  -kc ]
    [   1   -1 ]

and x = (x1, x2). So take a look: delta x is a linear function of x, linear in the sense that it's a matrix multiplication by x. What, then, is the system that gets iterated? x_{n+1} = x_n + B x_n = (I + B) x_n. Call this matrix A. What is A?

A = [ 1-kw  -kc ]
    [    1    0 ]

This is the matrix that needs to be raised to consecutive powers, and I think I have it here; it will only take a minute. If you look at the code, the system is over here; you see I write it in terms of changes in the current state, so that is the matrix B, and you can see the direction field: if I start somewhere, I hop on and follow the direction field. But to actually run the computation, there is an iteration step here which computes the new x by adding to the old x a term, the right-hand side of the original problem. There are two ways to do it: one is to hard-code it right there, like we did above; the other, which I do here, is to define a separate function that gets called.
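Here is the same iteration sketched in Python rather than MATLAB; the products k*w and k*c below are made-up values (the homework uses its own constants), chosen so that the eigenvalues of A lie inside the unit circle:

```python
# The docking control law as a first-order linear system x_{n+1} = A x_n,
# with state x_n = (v_n, v_{n-1}) and A = [[1 - k*w, -k*c], [1, 0]].
# The products kw, kc are illustrative, NOT the homework's values; with
# these choices both eigenvalues of A lie inside the unit circle, so v_n -> 0.

kw, kc = 0.5, 0.2                  # illustrative values of k*w and k*c
A = [[1.0 - kw, -kc],
     [1.0,       0.0]]

v_new, v_old = 1.0, 1.0            # the two required starting velocities v1, v0
for n in range(100):
    v_new, v_old = A[0][0]*v_new + A[0][1]*v_old, v_new

print(abs(v_new) < 1e-10)          # True: the craft docks with velocity ~0
```

Note how the second row of A just shifts the current velocity into the "previous" slot, which is exactly the trick that turns the second-order recursion into a first-order system.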
The function is in a separate file. Did I actually post this? Yes, I posted it. The function is of course very simple, but it needs to be saved in the same folder as the main code, so that when you run the main code it knows how to compute the right-hand side. Then the code just plots those consecutive iterates. And if you check, we'll talk about this in chapter five, on eigenvalues, the matrix A here indeed has eigenvalues that are less than one in absolute value. There was a question about why these values of the constants: I chose the first two values so that they match the problem in the homework, so you don't have to modify any of that. The last value is given to you in the homework and is different from this one, but other than that the code should be pretty much the same. All right, that's it. And sorry for the audio problems.