Can folks hear me now? Yes? Okay. If you can't hear me or you miss something, put it in the chat. So, we were talking about how to solve the linear time-invariant equation. We're basically trying to solve ẋ = Ax + Bu. Since y = Cx + Du is just an algebraic relationship, once we know x(t) we can also figure out y(t) — so really, the differential equation to solve is the one for x. To do this, we'll first change variables — you'll see why in a moment — and define the matrix exponential in terms of its Taylor series, taking advantage of the fact that Aⁿ is something we can easily express: Aⁿ = R Dⁿ R⁻¹, where D is the diagonal matrix of eigenvalues, so Dⁿ has entries λᵢⁿ. When we evaluate the exponential, all the terms gather into exponentials of the individual eigenvalues: e^(At) = R diag(e^(λ₁t), …, e^(λₙt)) R⁻¹. Now, if we take the time derivative of z = e^(−At) x, differentiating the first factor gives −A e^(−At) x, and differentiating the second brings in ẋ, so we can put back in Ax + Bu; the Ax terms cancel, and we're left with ż = e^(−At) B u. This integrates simply, and going back to our original variable, x(t) = e^(At) x(0) + ∫₀ᵗ e^(A(t−τ)) B u(τ) dτ — the initial-condition term plus a driving term that depends on the input. Okay, so that's the formal solution of the linear equation. So how does this work on something you already know how to solve more simply?
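This formal solution is easy to check numerically. Below is a minimal Python sketch (not from the lecture; it assumes NumPy and SciPy are available) that evaluates x(t) = e^(At) x(0) + ∫₀ᵗ e^(A(t−s)) B u(s) ds for a scalar first-order system with a constant input, where the closed form is known:

```python
import numpy as np
from scipy.linalg import expm

# Scalar (1x1) example: xdot = -lam*x + u with constant input u = 1.
# Evaluate x(t) = e^{At} x(0) + integral_0^t e^{A(t-s)} B u(s) ds numerically.
lam = 2.0
A = np.array([[-lam]])
B = np.array([[1.0]])
x0 = 0.5
t = 1.0

homog = expm(A * t)[0, 0] * x0                      # initial-condition term

s = np.linspace(0.0, t, 4001)
kernel = np.array([expm(A * (t - si))[0, 0] * B[0, 0] for si in s])  # u(s) = 1
ds = s[1] - s[0]
driven = ds * (kernel.sum() - 0.5 * (kernel[0] + kernel[-1]))        # trapezoid rule

x_t = homog + driven
# Closed form for comparison: e^{-lam*t} * x0 + (1 - e^{-lam*t}) / lam
print(x_t)
```

The quadrature result matches the textbook closed form to the accuracy of the trapezoid rule.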
Take our harmonic oscillator, our favorite example. As we saw last time, the A matrix is [[0, 1], [−1, 0]] when we convert to x₁ = y, x₂ = ẏ. We can take this A and diagonalize it — the result is here, and we had a good refresher yesterday on how to do this. Exponentiating, we get e^(it) and e^(−it) on the diagonal; put the eigenvector matrices back together and, lo and behold, you just get a rotation. In the (x₁, x₂) space — (θ, θ̇), if you want to think of a simple pendulum — the state just rotates, which captures the solution that we know. The nice thing about this form is that it's also very amenable to solving the system by computer. There are lots of routines that are very good at exponentiating matrices — which actually turns out to be a very nontrivial problem numerically, but one that numerical analysts have worked on for a long time. So you can go into MATLAB, or whatever your favorite programming language is, and get solutions in this form. Symbolic programs like Mathematica and Maple can also do the exponentiation symbolically; they use these routines internally. You might wonder: can we connect the frequency domain and the time domain? The answer is yes, and I don't want to spend too much time on it. The basic idea is to take these time-domain equations and take the Laplace transform, so the time derivative turns into multiplication by the complex variable s, and the same for y. From the x equation we can now isolate (sI − A) x = B u, and then, assuming that matrix is invertible, we can invert it and express the transfer function.
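Here is a short sketch (mine, not the lecture's; assumes NumPy and SciPy) verifying the diagonalization route for the oscillator: build e^(At) from the eigenvalues as R diag(e^(λt)) R⁻¹ and confirm it equals both the direct matrix exponential and a rotation matrix:

```python
import numpy as np
from scipy.linalg import expm

# Harmonic oscillator state matrix: x1 = y, x2 = ydot
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Diagonalize: A = R Lambda R^{-1}, with eigenvalues +i and -i
lam, R = np.linalg.eig(A)

t = 1.3
E = (R @ np.diag(np.exp(lam * t)) @ np.linalg.inv(R)).real  # e^{At} via eigenvalues

# It agrees with the direct matrix exponential, and it is a rotation
rotation = np.array([[np.cos(t),  np.sin(t)],
                     [-np.sin(t), np.cos(t)]])
print(np.allclose(E, expm(A * t)), np.allclose(E, rotation))
```

Both comparisons come out True: the eigenvalue construction, the numerical exponential, and the rotation are the same matrix.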
So x = (sI − A)⁻¹ B u, and then y = Cx + Du, which gives the expression G(s) = C (sI − A)⁻¹ B + D: given the matrices A, B, C, D, you can construct the transfer function from the input to the output. Notice that the poles of the transfer function — what we were talking about last time, the values of s in the complex plane where G is infinite — are given precisely by the eigenvalues of A. I'll just remind you that the eigenvalues of A, which we would normally write as Av = λv, can be rewritten as (λI − A)v = 0, so the determinant of (λI − A) has to equal zero. That gives the characteristic equation whose roots are precisely the eigenvalues. And so, up to λ versus s, it's the same condition: where that determinant vanishes, taking the inverse blows up. So these are two identical but rather different-looking ways of looking at the same dynamics — in the frequency domain or in the time domain — but they're compatible. If there are questions, or if I go too fast, slow me down. As I said, this generalizes quite nicely to multiple inputs and multiple outputs. With transfer functions, you get a matrix of transfer functions, and you can deal with that; in the time domain, we have the identical matrix A, but B and C become matrices for the inputs and outputs. It's a very straightforward formalism — essentially everything I derived for one input and one output applies to multiple inputs and outputs if you just reinterpret B and C as bigger matrices. Here I go through the calculation of the transfer function for the harmonic oscillator. You can go through it on your own, but it reproduces, starting from the A, B, C, D matrices that we had —
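The formula G(s) = C (sI − A)⁻¹ B + D is one line of code. Here is a sketch (not the lecture's; assumes NumPy) for the harmonic oscillator with force input and position output, checking that it reproduces the familiar 1/(s² + 1) and that the poles sit at the eigenvalues of A:

```python
import numpy as np

# Harmonic oscillator: force enters the velocity equation, we observe position
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

def G(s):
    """Transfer function G(s) = C (sI - A)^{-1} B + D."""
    n = A.shape[0]
    return (C @ np.linalg.inv(s * np.eye(n) - A) @ B + D)[0, 0]

s = 2.0
print(G(s), 1.0 / (s**2 + 1))          # same value

# Poles of G = eigenvalues of A = +/- i, where (sI - A) is singular
print(np.linalg.eigvals(A))
```

Evaluating at any s away from the poles gives 1/(s² + 1), and the eigenvalues ±i are exactly where the inverse blows up.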
here's A, here's B, here's C — back to the transfer function that we discussed. If you happen to observe both position and velocity, you would generalize C to the identity matrix; note that this C corresponds to observing the position and the velocity separately. Here's a question for you: if C were equal to the row vector (1, 1), what would that correspond to? [Student: a coupling between the position and velocity — because you multiply the C matrix by the state vector x, and y is the output.] Right — we would say that the variable you're observing is the sum of the position and the velocity. That sounds a little weird, but you could design some sensor that did that. It's not what we mean by measuring the position and the velocity separately; you would do that by having two outputs, so C would be a two-by-two matrix taking the two-component state vector to two outputs. Okay, so that's just some of the notation. Now, one thing to notice about the time domain is that there's a certain amount of abstraction, particularly in what we mean by the state vector. One way to see this is that we can do a coordinate transformation on these equations: define x′ = Tx for some invertible transformation matrix T — a change of coordinates. If we substitute this in, we find that x′ obeys the same differential equation with transformed matrices: A′ = T A T⁻¹, B′ = TB, and C′ = C T⁻¹. With those new matrices, we have exactly the same form of the equations.
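The invariance under this coordinate change is easy to demonstrate. A small sketch (mine, not from the lecture; assumes NumPy, and T below is an arbitrary invertible matrix I picked for illustration) showing that A and A′ = T A T⁻¹ have the same characteristic polynomial, hence the same eigenvalues and poles:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
T = np.array([[1.0, 2.0],
              [3.0, 5.0]])            # arbitrary invertible change of coordinates

A_prime = T @ A @ np.linalg.inv(T)   # A' = T A T^{-1}

# Similar matrices share a characteristic polynomial (here s^2 + 1),
# so they share eigenvalues and transfer-function poles.
p1 = np.real(np.poly(A))
p2 = np.real(np.poly(A_prime))
print(p1, p2)
```

Both polynomials come out [1, 0, 1], i.e. s² + 1, even though A′ itself looks nothing like A.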
But notice — it's maybe a little hard to see over here — that A and A′ share the same eigenvalues, so they share the same transfer function. If we were dealing in the world of transfer functions, there's only one transfer function between the input and the output; but if we work in the time domain, there's an infinite number of ways to represent the internal state, depending on how you choose coordinates. So an artificial-looking example could arise by doing this change of coordinates — or, if you had something like that, you could change coordinates to get rid of it. The main point is that the state vector isn't really a physical thing in the way that inputs and outputs are, because we can represent it with different values. So you have to be a little careful in understanding what it means. We'll come back to this later, but I just want to plant that idea. Okay. Now, in the time domain, one thing you might do is look at responses to particular inputs. You could use a general input, but there are some special inputs that are useful. One of them, abstractly, is the delta function: you give the system a sharp kick, so u is just a spike. In the frequency domain we have y = G u; in the time domain this becomes a convolution, y = g ∗ u. So here it's g convolved with the delta function — but you'll remember that convolving with a delta function just returns the original function, so y(t) = g(t). Where in the frequency domain we talk about G(s) as the transfer function, the corresponding time-domain object is the Green's function — the impulse response function.
And it's an object you're all familiar with, because this is the usual thing that gets calculated in undergraduate physics courses, and graduate and beyond. In terms of the state-space notation, what happens to the state? We can just stick δ(t) into the integral in the general formula that we found, with zero initial conditions. The integral evaluates at t′ = 0, so this just gives x(t) = e^(At) B — the state impulse response. We can also look at step functions; experimentally those are easier to impose — you take whatever your input is and just quickly step it up. So, for example, here I'm plotting both impulse and step responses for a first-order system and for a second-order system. For a first-order system, if you give it a spike, the system is displaced and then relaxes back; if you give it a step, it relaxes up to the step. If it's second order, then it can oscillate in these responses if it's underdamped; if it's overdamped, it again just relaxes, but with two exponentials. I won't go into that, because this is familiar territory — I'm just trying to connect what you already know to this slightly different way of looking at it. Okay. So this gets me to one of the main topics that I wanted to broach today, which is the complementary notions of controllability and observability. In a way these are technical things, but they're actually much more than that; we'll start from the technical point of view. The technical question is: we've talked about systems and controlling systems, but it's useful to ask, is it possible to control the system at all? Not all systems are possible to control. So before you try to control a system, you might want to check that you can.
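As a concrete check on the first-order responses just described, here is a small sketch (not from the lecture; plain NumPy, forward-Euler integration) of the step response of ẋ = −λx + u relaxing up to its steady state:

```python
import numpy as np

# First-order system xdot = -lam*x + u, driven by a unit step, from rest.
lam, dt, T = 2.0, 1e-4, 3.0
x = 0.0
for _ in range(int(T / dt)):
    x += dt * (-lam * x + 1.0)       # forward Euler with u = 1

# Analytic step response: x(T) = (1 - e^{-lam*T}) / lam, relaxing to 1/lam
print(x, (1 - np.exp(-lam * T)) / lam)
```

The simulation tracks the analytic relaxation to the step level 1/λ; an impulse input instead would give the decaying e^(−λt) B response.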
Similarly, we now distinguish between the internal state x, which is a slightly abstract n-dimensional vector, and the output y, which might be just a single variable (it could be multiple, but let's think about the single-variable case). So one of the questions is: does observing y tell you what the state is? Even before doing any math, you can see this is important because the differential equation is ẋ = f(x) — if you know x, you know something about how it's going to change. But you don't know x; you just know y. We observe y as a function of time, but what we would really like to know is x as a function of time. So the questions we'll ask are: given an input u(t) — which, let's say, at this point we're free to specify any way we want — can you control all of the elements of x? Can you make x do what you want? And, complementarily, given the observations y over the past, do you know what x is now? This is where we'll start. So let's think about single input and single output: u and y are scalars, but remember x is an n-dimensional vector. There's a lot of jargon here, and a lot of subtleties about what exactly you mean by saying something is controllable; in the literature people make a lot of fine distinctions, and we won't be too picky today. Roughly, what I'll mean is that there's some u(t) that makes the system start from a certain state at time zero and end up in some other state at a finite time τ. So the question is: is there some u that will take you from here to there? In this point of view we don't care what path is taken — sometimes you might, but right now we're just asking: I'm here now, and I want to go there at some time τ in the future. I get to specify τ, you get to specify u(t). Is it possible?
It might be, or it might not. We can define the controllable set as the set of all states x(τ) that can be reached from a given x in some time τ, and a controllable system as one where, from any initial point x, you can get to any other point in some time τ. Sometimes these ideas are called reachability, and people distinguish between controllability and reachability and so forth — anyway, this is the rough idea. Okay, I think it's best approached by thinking about a bunch of examples of systems that are or are not controllable. So let's look at a two-dimensional system made of two first-order systems: A is the diagonal matrix diag(−λ₁, −λ₂), so it's two uncoupled first-order dynamics, and u is linked into the system by B = (1, 0)ᵀ. Writing the two equations out: ẋ₁ = −λ₁x₁ + u, and ẋ₂ = −λ₂x₂. So u is hooked up to system one and not hooked up to system two. Is it controllable or not? No, right? Because there's a part of the system, x₂, that is not coupled to u at all — there's no way that anything you do with u is going to change x₂. But the state of the system is specified by the combination of x₁ and x₂. So while you might be able to drive x₁ to whatever you want at some time, you can't do anything with x₂, and so it's not controllable. So we have this idea that the input somehow has to hook up to all of the state variables, and this can be done either by having u explicitly hooked up to each subsystem, or through internal couplings.
By the way, if we just focus on the first component — just ẋ = −λx + u — intuitively it seems controllable, and there's a trivial way to see it. In fact, you can do something even stronger, which generally you can't: make it follow any path you want. How do I know this? Look at the equation ẋ = −λx + u and just reverse your thinking. Imagine you want the system to go through some prescribed path x(t), and ask what u must be: you just put the path in and solve, u(t) = ẋ(t) + λx(t). When you can do this, it gives you a nice, elegant solution. In fact, in the statistical physics community this was rediscovered not too long ago — I think they called it inverse engineering — but it's a method that's been around. If we think back to our general ẋ = Ax + Bu, then depending on the structure of A and B, you might or might not be able to do this inversion, and you can apply the idea to nonlinear systems as well. The gist is that there are classes of systems where you can do this inverse, and then it's a very nice way to design control. But not every system can be put in that form. Okay, so that's an aside; let's get back to our system. So here I have one input, and it's going to two copies of the same system — the third example here is that λ₁ and λ₂ are equal — and I say it's not controllable. So why isn't it controllable? [Sorry, I'm talking about this one here.]
This is a special case of example three, where the two lambdas are the same. [Student: you have the same input no matter what.] Yes — the same input goes equally to both systems, which are identical. The problem is that you have one input, the same input, going to both systems. So, for example — we haven't really specified the initial conditions, but say they're the same — then x₁ and x₂ will be identical for all times. But the point about controllability is that I have to be able to make the system go to an arbitrary (x₁, x₂) at some time, and I can't do that in such a system. If the initial conditions were different, the states might not stay the same, but I still don't have independent control: one input variable doesn't give you independent control of both x₁ and x₂. [Student: but what if the lambdas are different?] Okay, you're getting ahead of me — that's exactly going to be my question to you. We've talked about λ₁ and λ₂ being equal; now suppose they're different, which is what I have here. Is it controllable? Remember what this is asking: notice that the coupling is identical, B = (1, 1)ᵀ, so the input is coupled to these two independent systems, and the only thing that's different about them is that they have different time constants — they relax at different rates. The question to you is: if, say, at time zero x₁ and x₂ are both zero, and at time one I want them both to equal one, can I design a u(t) that makes that happen? Maybe.
If you want to take five minutes and just play around with this, it's a good point to take a break and give it a try — I think in wrestling with these you come to understand what's going on. [Some back-and-forth about making the shared screen full-screen.] Okay, that's as good as I can do. So maybe take a minute and play around with this, and see if you get a sense of whether it's possible. [Question about the conditions:] They have to start at some initial condition and reach some final condition — I said (0, 0) at time zero and (1, 1) at time one. Controllability proper would actually require this for arbitrary initial and final states and arbitrary times; I'm just giving a specific example, but it can be whatever you want. There's a comment in the chat from Lael: "I think it would be possible if we have a different set of numbers multiplying the input, not one for both." Well, here the goal is to do it with the same number, one for each.
We'll add that to the question, because in some sense it seems intuitively easier if those numbers are different — but I want the numbers to be the same in this game. Now, this isn't a particularly easy calculus problem, so I'm not sure how long to let people play around; what I was hoping is that you'd get a feel for the issues involved. Is that enough time, or do you want more? [Comment in the chat:] "The path for λ₂x₁ − λ₁x₂ is independent of u, so it's not controllable — no, wait, I was wrong, sorry." Okay, a withdrawn vote for no. So what I was going to ask is: who votes yes, and who votes no? Let's vote for no first — one, and anybody in the chat for no? And for yes — three. And a number of undecided. Okay, let me stop sharing my screen for a second and share something else. So here's a pretty good answer: yes. I mentioned that there were some problems and solutions online and gave the links last time; this is from one of those problems. I won't go through all the details — I'll let you do that — but I want to focus on the solution here, which gives some intuition. One thing to realize is that you have a lot of freedom in the function u: there's an infinite number of ways to choose a u(t). It can help focus one's mind to take a simpler form and just ask if that works. Of course, if it fails, that doesn't mean the system is not controllable; but if it succeeds, it makes life easier.
In the problem, I suggest trying a step function with two parameters: u(t) starts off at some negative value −u₀, then at a switching time τ flips up to the opposite value +u₀, and continues up to time one (and, in this case, a little past it). I sketch it here — this is the input function. And of course we're going to choose the value of u₀ — a little less than four here — and the value of τ appropriately. So I have two parameters, and two conditions: at the final time I want the two states equal to the target. Can you solve that? It turns out you can. What happens here? We see that the states start out the same; then, with the negative u, they diverge — they're pulled down toward the negative-u steady states, relaxing at different rates. When you switch, they have different initial conditions for the second leg, and they relax up at different rates too. But the parameters are set so that they cross again exactly at the time you wanted, at the value you wanted. And I think you can see from that solution that, even with this simple parameterized form, it will always be possible to pull this off. Now notice that they're only equal at that one moment; at every other time they're something else. But controllability isn't asking anything about the in-between or the after. It's just asking: I was here at time zero, and I want to be here at time one — can I do it? And this is an instructive answer showing that you can. [Student: could you make them cross as many times as you want?] Well, that's a separate question.
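The two unknowns (u₀, τ) can be found numerically. Below is a sketch (not from the lecture's solution set; it assumes NumPy and SciPy, and the rates λ₁ = 1, λ₂ = 2 are my illustrative choice) that writes the exact piecewise-constant solution of each first-order system and solves the two end conditions x₁(1) = x₂(1) = 1 with a root finder:

```python
import numpy as np
from scipy.optimize import fsolve

# Two first-order systems driven by the SAME input, different rates.
# u(t) = -u0 for t < tau, +u0 for tau <= t <= 1; start at (0,0), target (1,1).
lam1, lam2 = 1.0, 2.0

def x_final(u0, tau, lam):
    """Exact x(1) for xdot = -lam*x + u with the two-step input, x(0) = 0."""
    x_tau = -(u0 / lam) * (1 - np.exp(-lam * tau))            # after first leg
    return (x_tau * np.exp(-lam * (1 - tau))
            + (u0 / lam) * (1 - np.exp(-lam * (1 - tau))))    # after second leg

def residual(p):
    u0, tau = p
    return [x_final(u0, tau, lam1) - 1.0,
            x_final(u0, tau, lam2) - 1.0]

u0, tau = fsolve(residual, [4.0, 0.5])   # two parameters, two conditions
print(u0, tau)
```

For these rates the solver lands near u₀ ≈ 3.8 with a switching time inside the interval, consistent with the "a little less than four" read off the plot in the lecture.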
Here I'm just saying: I want to go from one point in state space at one time to another point at another time — can I do that? Asking, for example, whether I can hold it there, or make it follow some path — those are different, more demanding questions. [Student: but you want x₁ and x₂ to be equal at the same time?] Well, if we had asked them to reach, say, (2, 1) instead, you could still do this. Once you play with the math for one case, you'll see that it works in general: controllability generalizes the statement that you can do this for arbitrary targets at an arbitrary time. But again, in this particular case, once you inject the parameterized form, you'll see that you can make it work for any combination. [Student: there are probably a number of solutions to that problem.] There are — that's right. This special form is just something I pulled out of the air, and you could imagine doing it with other forms. And that raises the next question: what's really going on here? Let me get back to the slides. [Switches screen sharing.] Okay, are we back? Yes, we are. Good. So we've gone through a fairly painful exercise to figure out whether this system is controllable.
So one of the next questions to ask is: is there an easier way to answer whether a system is controllable, without having to be clever and go through a lot of calculus? The answer — of course, I wouldn't be going through this setup if there weren't a test — is that there is a test. Let me give you the recipe and then try to convince you that it works. The recipe is as follows: we're going to construct a matrix called the controllability matrix. Right now, remember, we have a single input (and a single output, but here only the input matters), so B is an n-dimensional column vector and A is an n × n matrix. We form a matrix with this funny structure: the first column is B; the second column is AB — remember, B is a vector, so AB is another vector, and we make that the second column; then A²B, and so on, up to A^(n−1)B. This forms an n × n matrix, and if its determinant is not zero — if it's invertible — then the system defined by the matrices A and B is controllable. Okay, so that's the answer. What I'll do first is check that it makes sense on the examples we just did, and then try to see where it comes from — so just bear with me on why this works. We looked at A = diag(−λ₁, −λ₂) with B = (1, 0)ᵀ — this is the first example. Multiplying, AB = (−λ₁, 0)ᵀ, so we put them together and the controllability matrix has columns (1, 0)ᵀ and (−λ₁, 0)ᵀ. We can see this is not invertible, so it's not controllable by the test. Remember, this was one input going to only one of two independent systems.
Okay, now examples 2 and 3: A = diag(−λ₁, −λ₂), and B = (1, 1)ᵀ — identical coupling. Now AB = (−λ₁, −λ₂)ᵀ, and we form the matrix with columns (1, 1)ᵀ and (−λ₁, −λ₂)ᵀ. The determinant is λ₁ − λ₂ (up to sign), so the system is controllable unless the two lambdas are equal. Equal lambdas was example 2; unequal lambdas was example 3. So this simple test, which as you can see would be very easy to compute numerically, tells you whether the system is controllable without going through the painful thinking that we did — painful, I have to say, but useful, because it gives you a sense of what's at stake in the question. Okay, so that's the recipe; the next question is, why does it work? The first thing is that we can always take the initial condition to be zero, because we can just displace our coordinates so that we start at zero. Next, let me remind you of the Cayley–Hamilton theorem (sometimes the Hamilton–Cayley theorem): a matrix obeys its own nth-order characteristic equation. Remember, when we defined eigenvalues, we had the characteristic equation for the λ's; the matrix also obeys it. Intuitively, this is because in diagonal form the matrix is just all the λ's on its diagonal, so you end up with n copies of the characteristic equation — write A = R D R⁻¹ and it goes through. What's the consequence? It means that a power A^ℓ can be expressed as an order-(n−1) polynomial in A: the theorem is an nth-order matrix equation, so it says I can express Aⁿ in terms of lower powers, and hence any higher power in terms of the powers A⁰ through A^(n−1).
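The rank test on these three examples is a few lines of code. Here is a sketch (mine, not the lecture's; assumes NumPy, with λ₁ = 1, λ₂ = 2 as illustrative values) building the controllability matrix [B, AB, …, A^(n−1)B] and checking its rank:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack the columns [B, AB, A^2 B, ..., A^(n-1) B]."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

lam1, lam2 = 1.0, 2.0
A = np.diag([-lam1, -lam2])

# Example 1: input coupled only to x1 -> rank deficient, not controllable
W1 = controllability_matrix(A, np.array([[1.0], [0.0]]))

# Example 3: equal coupling, different rates -> full rank, controllable
W3 = controllability_matrix(A, np.array([[1.0], [1.0]]))

# Example 2: equal coupling AND equal rates -> rank deficient again
W2 = controllability_matrix(np.diag([-lam1, -lam1]), np.array([[1.0], [1.0]]))

print(np.linalg.matrix_rank(W1), np.linalg.matrix_rank(W3), np.linalg.matrix_rank(W2))
```

The ranks come out 1, 2, 1 — matching the verdicts reached the painful way: only the unequal-rate, fully coupled system passes.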
So what that means is that the matrix exponential e^(At), which looks like it involves powers of A up to infinity, really only involves A⁰ through A^(n−1): you can express the matrix exponential as a linear combination of those powers. Okay. So now let's go back to the impulse response function we talked about. With u a delta function, x(t) = e^(At) B, and e^(At) is just this linear combination of powers from zero to n − 1. So the x that results from a delta-function input is a linear combination — with coefficients that you would have to work out, and which vary with time — of the vectors B, AB, A²B, all the way up to A^(n−1)B. If these vectors span the whole state space, then you can get basically anywhere in that space: the delta-function input generates a response that reaches the whole space, and if you can do that, essentially you can get anywhere. Any questions? That's just a sketch of the proof — that's basically why the test works. There are a number of comments to make, because this takes some unpacking to appreciate. But any questions up to now? [Student: May I ask a question? This is probably my misunderstanding, but suppose I discretize time to make counting easier. On the interval zero to T, I dissect u(t) into n intervals, so I have n independent values of u. Then controllability means I can get arbitrary entries of the state — but over the trajectory there are n values of x₁ and n values of x₂, so 2n entries. Are we saying we can get arbitrary 2n entries out of n inputs?]
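The Cayley–Hamilton step is easy to verify directly. A sketch (not from the lecture; assumes NumPy and SciPy) using the oscillator, whose characteristic equation is λ² + 1 = 0: the matrix satisfies A² + I = 0, and as a consequence e^(At) really is a combination of just A⁰ and A¹:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Cayley-Hamilton: A obeys its own characteristic equation, A^2 + I = 0
print(np.allclose(A @ A + np.eye(2), 0.0))

# Consequence: e^{At} lives in the span of {A^0, A^1}.
# For this A the time-dependent coefficients are cos(t) and sin(t):
t = 0.7
print(np.allclose(expm(A * t), np.cos(t) * np.eye(2) + np.sin(t) * A))
```

Both checks print True: the infinite Taylor series for e^(At) collapses onto the n = 2 powers A⁰ and A¹, exactly as the argument requires.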
No, no — we're saying at some particular time. You're asking whether one can specify the whole path, and you're giving a good proof of why, in general, you can't. Oh, I see — so we are asking only about a particular instant. Yeah. You used n for the time steps, but let's call it some other letter, m. m can be very, very large, but at the end of the interval you have to meet n conditions for the state vector — the state vector has n components, two in this example, but n in general. And you have a large number, even an infinite number, of intermediate inputs: the u(t) is infinite-dimensional in some sense, in the way you're describing it, or m-dimensional, where m could even go to infinity. But you're only asking to meet n conditions, so it's massively underspecified in some sense — there are many, many solutions. I think you're commenting that there should be many solutions to this. Okay, I understood that.

Yeah, I think there's something that I don't get, because you said that you can fix the initial condition, say, and then the condition at some later time. Yeah. But then you could just think of the condition at that time as the initial condition of a different process, right? Sure. So you've got the initial state and the end state, but the u is different — by applying different u's, I'll get to different places. Sure. The thing is, say that I have three times, t0, t1, t2. I have my differential equation with u. So I can fix the state at t0 — that's my initial condition. With this control, I can fix the state at t1, selecting one particular u. Yes. But then I can use t1 as a new set of initial conditions for the differential equation. Yes.
So I could also select whatever point I want at t2. Yeah. So I can iterate the process and get any trajectory I want? Yes — but there may be a catch. There's a big catch to this that I haven't mentioned yet, so let's go through it; my notes here have some comments about exactly this kind of question. Okay, so we'll get to that.

The first comment is that we did a single input, but we could have multiple inputs. When we have multiple inputs, B is a matrix, but you can go through the same argument and form [B, AB, A²B, …]. Now this is not a square matrix, it's a rectangular matrix, but it turns out that if it has rank n, this will again work. In some sense multiple inputs make things easier, because you have all these inputs and that's all the more chances for the response to span the space. So adding more inputs can make a system easier to control.

We're only asking about possibility, though. Controllability doesn't imply that you can specify the whole path, and we'll get to why in a moment. As a corollary, for example, you might not be able to hold the system at the value that you want — it's not guaranteed; you might, but you might not. Another thing you might ask is whether we're asking too much: this is a pretty severe definition of controllability, and often, or sometimes at least, you can go with a more modest one, where the impulse response spans only a subset of the whole space. If where you want to go is in that subset, then that's okay — you don't necessarily need the system to be controllable in order to control a particular task. The definition asks to reach any target you can imagine, but at least it's a simple test.
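The rank test just described is easy to sketch numerically (a minimal sketch with hypothetical matrices): build [B, AB, …, A^{n−1}B] and check whether its rank is n, for both the single-input and multi-input cases.

```python
import numpy as np

def is_controllable(A, B):
    """Kalman rank test: rank of [B, AB, ..., A^(n-1) B] equals n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Two identical modes driven by one shared input (equal lambdas): uncontrollable.
A = np.diag([-2.0, -2.0])
B1 = np.array([[1.0], [1.0]])
print(is_controllable(A, B1))   # False

# A second input acting on only one mode makes the system controllable.
B2 = np.array([[1.0, 0.0], [1.0, 1.0]])
print(is_controllable(A, B2))   # True
```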
Again as a corollary of not specifying the path, you can cook up examples where the control trajectories are very non-local. You want to go from this point to that point, and you might think the trajectory will do something direct, but it actually takes you around a very circuitous route. So there can be surprises like that.

Okay, so to answer your question: one thing we have assumed is that this is a linear system, but there is one way in which all systems are not linear, which is that all systems in practice have some limit on the range of u that is applicable. u is what you do to the system, and there's always some physical limit on what you can do. Controllability, in the way I've defined it, says nothing about how big x is or how short the time τ is. So I can have, say, some cart or something, and I'm here now and I want to get to Jupiter in a microsecond. I can reasonably ask, can I get from here to Belgrade in four hours — but controllability asks, can I get from here to Jupiter in a microsecond? And clearly no cart is going to be good enough to do that: there's a maximum u that you can apply. So even systems that you'd want to call linear have an implicit nonlinearity, and when you get to nonlinear systems, the controllability of the linearized system doesn't necessarily tell you what the nonlinear extension will do. Clearly the bounds on u lead you to deal with some kind of reachable set — the set of states you can actually reach — and to ask whether you can control the system within that set. So that's the thing that can go wrong in the argument you were making: the u's that would be required to impose at least some trajectories might be impractical to put in. You would need a crazy u.
Well, crazy, but also — let's say the amplitude of u is bounded between two values; then you would need u's that go outside those values. Some trajectories might be fine, but some trajectories for sure will violate that.

Okay, and the last point is that this criterion was first formulated by Kalman, an engineer, in the early 1960s. For a long time people thought, well, this is the end of the story: I've given you a recipe, you go to your favorite programming language — many of them have routines that, given an A and a B, will construct the right matrix, look at the rank, and so on, and give you an answer. But it turns out that this is not so trivial when you start to think about large systems. When n is very big, even doing this computation and figuring out the determinant can be non-trivial. But even more interesting — for some reason, I'm not sure why — it took until about 10 or 12 years ago for somebody to ask the interesting inverse question: given an A, can I find a B that makes the system controllable? This is work by Barabási and company, and in their interpretation you can think of A as a kind of network: every non-zero entry in A couples the dynamics of component i with component j. So you can think of the different elements of the state vector as nodes and construct a graph that records whether there are interactions from one node to another. For any network you can construct a connection matrix, call it A, and think of the dynamics on it. And then the question is: how many of those nodes, or which nodes, do I have to control in order to make the system controllable? You can reduce it to a kind of binary problem, where the elements of A are either zero or non-zero.
But even then, if you have n of them, there are 2^n combinations, which grows exponentially, so when n is large enough you won't be able to answer the question by brute force. They used some tricks from graph theory to come up with a polynomial-time answer. That's very interesting, because it says that — sticking with the biophysics context — if I have some protein interaction network, I can start to answer which places I need to control in order to control that system. People have used this to computationally do the equivalent of knockout experiments. Biologists go through a painful set of experiments where they want to know which gene controls what; they make an organism where they knock out a particular gene and then see whether some function changes. But what you're really doing is, in a way, asking about controllability: if you make this change in the system, can you control an outcome as desired? So the ability to construct the network — which would be like the A here — and then ask how to make the system controllable has real value. And they were able to predict the results of some knockouts, and they did some experiments on C. elegans, where the whole neural wiring network is known, using the linearized version of the dynamics. Of course the real dynamics are nonlinear, but in this case they could show that the linearized approximation is good enough for controllability, and so they could say which neuron will control which outcome. That's quite powerful. So even this technical question has some very interesting consequences, not just for the little systems we'll be focusing on, but for bigger systems. Okay.
All right, so that's it on controllability, but there's a flip side: observability. Observability asks the kind of question we were asking before about y, and I'll go through this more quickly. The question is: in order to control the system, we'll argue that we need to know x, but what we actually have is y. So given y, can we infer x? The important point is that we're given y over time, not only at one instant. If we were just given y at one time, it's obvious that the answer in general is no: if you have an n-dimensional state vector and you observe y at one time, you have one value, and one value can't tell you what n values are. But if you observe y over a stretch of time in the past, then you've got as many points as you need, so it's at least possible. The recipe is basically very similar. You take y = Cx and differentiate it: ẏ = Cẋ = CAx, then ÿ = CA²x, and so on up to the (n − 1)th derivative. And this is again enough, because A^{n−1} is the highest independent power. So we can put this in a matrix form relating n observations of y to the n state components of x, and if this matrix is invertible, then from the n y-values we can recover the n x-values. Okay. So it's a very similar thing, and it leads to a very similar kind of condition, except that now — let's again say one output, so C is a row vector — we have the row vector C here, CA is another row vector, CA² is another row vector, and so on. We have n row vectors, we stack them into a matrix, and we ask whether it's invertible.
So intuitively, what we're doing is saying: we want to know this big y(t) in terms of derivatives, but derivatives we can always approximate by finite differences. If you think about noise in the observation, you might worry even about a first finite difference, and certainly about higher-order differences, so this is not a good way to do it in practice — but conceptually, we can go from the derivatives to n values in the past. So although y is a single component, if we have n observations in the past, then — if this matrix is invertible — they map onto the n components of the state vector at this particular time. Okay. Adding u(t) doesn't affect observability; it just alters x(t), so observability depends only on C and A. I think I'm running a little late, so here's an example of a system that's unobservable. It's again two independent systems: x1 and x2 have different differential equations, they're uncoupled, and we're only observing one component. When you observe y, it's only telling you something about x1, never anything about x2, so it's going to fail the test — when you compute the matrix, you'll see it's not invertible. If we go to our friend the harmonic oscillator and observe the position, we can apply the same recipe. Here's our matrix A, and here's our matrix C — C is the row vector (1, 0). So C is (1, 0) and CA is (0, 1); stacked together this is the identity matrix, which is invertible, so the system is observable. And you know that's true, because you can always estimate the velocity as y at this time minus y at the previous time, divided by Δt. So having a sequence of y's allows us to construct the state vector. Okay.
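Here's a minimal numerical sketch of the same observability test, using the two examples just discussed (the harmonic oscillator observed through position, and two uncoupled modes observed through only one of them): stack [C; CA; …; CA^{n−1}] and check the rank.

```python
import numpy as np

def is_observable(A, C):
    """Rank test on the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    rows = [C]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)
    return np.linalg.matrix_rank(np.vstack(rows)) == n

# Harmonic oscillator, observing position: O = [[1,0],[0,1]] is the identity.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])
print(is_observable(A, C))   # True

# Two uncoupled modes, observing only the first: x2 never shows up in y.
A2 = np.diag([-1.0, -2.0])
print(is_observable(A2, C))  # False
```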
Again, with noise it's not such a great algorithm, but conceptually it works. Okay, I think I'll skip these, because they're all variations. Here's one more case that's not observable: ẍ = 0, so this is just a freely moving particle, and we let y be either x or ẋ. If we let it be x, then the system is observable; but if we let it be ẋ, then it's not, because you don't know where the particle started. So again, these are intuitive things. If you have different inputs going into the same system which then get added, that's also not observable, because again you can't differentiate between the two.

Okay, so we've gone through the story on controllability somewhat slowly and the story on observability somewhat quickly, and you will have noticed that they seem like very similar stories. There's a kind of duality I wanted to point out: the controllability matrix is [B, AB, A²B, …] and the observability matrix is [C; CA; CA²; …]. Notice that they become the same if we let A go to A-transpose and replace B by C-transpose. So what I want to suggest is that there's a duality between inputs and outputs on a system: take the system, reverse the direction of time — because A-transpose in some sense is the same dynamics going backwards in time — and swap inputs for outputs, and you have formally the same kind of dynamical system. u(t) affects the state x(t) from now into the future; y(t) influences our estimate, our reconstruction, of x(t) from the past up to now. Right. The whole story was summarized by Shannon — the same Shannon as in information theory — in 1959. So we can know the past but not control it.
And we can control the future, but not know it. Okay. And there's a real duality here; in some cases, as we go along, we'll see even more formally that formulas that apply to control also pop up for observing — they're essentially the same problem.

Question. Yeah, is this duality a statement about reversed time? Yes — in order to reconstruct a system that has the same form, you end up reversing time when you swap inputs and outputs. The input is telling you what's going to happen in the future, and the output is telling you, from the past, what you know now. So if you want to swap them, you also have to reverse the direction of time — formally; we're not really reversing the direction of time. Okay, thank you.

Okay, so the last thing I want to talk about today is to introduce the notion of an observer, which we'll be using subsequently. In order to do this, we're going to ask two different kinds of questions: how do we control based on knowing the state, and how do we do it based on knowing the observations? Really, what we want is to do it based on observations, but let me decompose the problem. Imagine that your C is an n-by-n identity matrix, so that you just know all the components of the state vector. That's not usually the case, but there's nothing stopping it from being the case, so we'll start with that. And there's a theorem which I'm not going to prove — I've sketched the proof here, and I'll give an example. Here's the payoff: if (A, B) is controllable, then you can pick a feedback u = −Kx — K will be a row vector if there's one input, or a matrix more generally —
and place the poles, the eigenvalues of the dynamics, anywhere you want. It means that you can take your dynamical system and turn it into an arbitrary different dynamical system, which usually — or hopefully — will in some way be better. This only applies if the system is controllable; it's the benefit you get, once you know the system is controllable, for what to do next. So let me illustrate this on our favorite example, the harmonic oscillator. We have x1 and x2, and here's our matrix A.

Sorry — I don't understand the theorem. What do you mean? Okay, so the claim is this. You start with an A — the dynamical matrix — and the dynamics are characterized by a set of eigenvalues. We're talking about linear systems, so the only things that can happen are exponential decay, exponential growth, and oscillations with growth or decay; the set of eigenvalues specifies the dynamics, together of course with initial conditions and so forth. Now the claim is that if (A, B) is controllable, then you can define a K and construct a feedback u = −Kx that puts the poles wherever you want. This results in a new dynamical system whose eigenvalues are whatever you want. So you can tune the relaxation times and the decay? Yeah — you can turn an oscillator into something that relaxes, you can turn something that relaxes into an oscillator, you can turn underdamped into overdamped, for example. Stable into unstable. Anything you want, you can do. Okay.

All right, so let's see this with the harmonic oscillator. Here's our harmonic oscillator and here's our B, and we checked that this is a controllable system just a couple of pages back. So now I want to let u be minus a row vector (k1, k2) times x. Okay, so this is the form of the feedback.
Remember, this is the feedback where we assume we know the entire state vector — it's called full-state control. Not too realistic, but it's a starting point. So let's look at what BK is, because Bu is going to give us −BKx. Here's our B, here's our K, and together they form a matrix. If we stick this in, we have ẋ = Ax − BKx, so there's an A − BK matrix, which I'll call A′. So the new dynamics are governed by the matrix A′ = A − BK, and it looks like this. If you look at its characteristic equation, you get this, and it's clear from the form of the characteristic equation that by choosing k1 and k2 I can make the roots — the eigenvalues, λ or s, whichever you like — anything I want. So this is a simple example where we see that full-state control is possible, and on the previous page there's at least a sketch of a proof that it's possible for general A and B.

Okay, so that's nice. Then one question is: where would you want to put the poles? Of course it depends on what you want to do, but there are at least some heuristic guidelines. For example, think about this harmonic oscillator and suppose we're interested in vibration damping: the system gets hit and you want it to go back to its equilibrium as fast as possible. You may remember from intermediate mechanics that the way to do this with a second-order system is to be at critical damping: if there's very little damping it will oscillate a lot, and if it's very much overdamped there's a fast mode, but then there's a slow mode that gets slower and slower to relax.
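To make the oscillator calculation concrete, here's a small numerical sketch (assuming B = (0, 1)ᵀ, the usual state-space form for a force input, which is an assumption here): solve for k1, k2 to put both poles at −2 and check the eigenvalues of A − BK.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])   # assumed input matrix for the oscillator

# A - BK = [[0, 1], [-1 - k1, -k2]] has characteristic polynomial
#   s^2 + k2 s + (1 + k1),
# so matching s^2 + 4 s + 4 (a double pole at s = -2) gives:
k1, k2 = 3.0, 4.0
K = np.array([[k1, k2]])

Aprime = A - B @ K
print(np.linalg.eigvals(Aprime))   # both eigenvalues at -2
```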
So critical damping gives the minimum time to relax, and that seems like a reasonable place to shoot for. Say we want to put our poles at −2, −2; we can just solve for the values of k1 and k2 that make that happen. One thing to note — say we wanted to put the poles at −a and −a, because I took 2 rather arbitrarily — is that the bigger a is, the faster the system relaxes, and naively you'd want your vibration-damping system to relax as quickly as possible. The problem is the one we talked about last time: if you look at the values of k that you need in order to do this, they grow with a. And if there are any other dynamics that we haven't modeled — delays, or things that act like delays — then we showed that putting in very high gains will lead to instabilities. So you could in principle put your poles at arbitrary positions, but in reality, since the dynamics you're actually controlling are never going to be exactly what you assume here — they're never going to be this simple — there will always be some kind of delay coming from some part of the system, and there'll be a practical limit to what you can do. The other limit, of course, is that the bigger the gains, the bigger the values of u required for a given perturbation: u is −Kx, so if K grows with a, so does u, and you have a finite range of u. All these considerations mean that you shouldn't just arbitrarily demand some very short relaxation time. In general the heuristic is to keep the gains as small as possible while still being fast enough for your purposes — don't get greedy and try to make it faster than it needs to be. Now, this is a two-dimensional system, so we have two gains; once you get to an n-dimensional system, you have n gains. That's a lot of choice.
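The growth of the gains with the target pole is easy to see in the same sketch (again assuming B = (0, 1)ᵀ): matching s² + k2·s + (1 + k1) against (s + a)² gives k1 = a² − 1 and k2 = 2a, so the gains grow like a² and a.

```python
import numpy as np

# Place both oscillator poles at -a and watch the required gains grow.
for a in [1.0, 2.0, 10.0, 100.0]:
    k1, k2 = a**2 - 1.0, 2.0 * a   # from s^2 + k2 s + (1 + k1) = (s + a)^2
    Aprime = np.array([[0.0, 1.0], [-1.0 - k1, -k2]])
    ev = np.linalg.eigvals(Aprime)
    assert np.allclose(ev.real, -a, atol=1e-3 * a)   # poles really sit at -a
    print(f"a = {a:6.1f}   k1 = {k1:10.1f}   k2 = {k2:6.1f}")
```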
And so again we want some systematic ways of thinking about it. We'll do that tomorrow, but for the moment the rule of thumb is: only try to do something to the poles that are annoying — that are bad for you in some sense — and leave the others alone; the fewer you change, the less effort it requires. You try to identify the problem poles, which are typically the most slowly damped ones. If you have a set of poles here, these ones are what are called the dominant poles, because at long times they're what's left after the faster ones have decayed. So these are the ones you might want to move, so that other poles become the dominant ones — and do as little as you can get away with, that's the general heuristic. We'll formalize it tomorrow.

But this is a lead-up to the question I really wanted to talk about. We've assumed that we have an output that equals the state vector — that we can observe all n components of the state. As I've argued, that's usually not the case. We've also argued that if the system is observable, then it is possible to accumulate enough information to reconstruct the state. But you can see that it might take so long to get that information that the system is not doing what you want while you're accumulating it; you have to be able to get the state from past observations quickly enough for it to be useful. Okay. So the interesting thing is that there's a technique for dealing with this situation that people in control theory developed, which turns out to be optimal in lots of cases — it's reasonable in almost all cases, and optimal in many of them.
And it's to introduce the notion of an observer. An observer is going to be a kind of shadow dynamical system that lives on a computer — a computer model of the dynamics you're trying to control. The rough idea is that you've got the real dynamics in the real world, and you've got a copy of them on your computer, and you're going to try to set up some interaction that synchronizes them: take the observations, feed them into your model, and synchronize the two systems. Once the systems are synchronized, you have the state vector, because you know internally, in your computer model, what its state vector is — you've got the whole model running. So if you can synchronize their behavior, you're good. The intuitive way to synchronize them is to use feedback, with an error that is the difference between the predicted output from your model and the observed output that you're measuring.

Okay, so let's see how this can work, and let me first do it naively. A naive observer doesn't have that feedback; let's just see what can go wrong. Here is the physical system — our linear system — and here is the dynamical model. We know the input, so we can feed the same input u into both the real system and the model. Now let's look at the difference in the state vectors: let e = x − x̂, where the hats denote the shadow dynamical system — x̂ is supposed to be a kind of estimator of x, although we don't have any stochastic part yet. If we subtract the two equations, notice that the input term just cancels out — same input to both — and we're left with ė = Ae. Now, if A is stable — if the dynamics are stable — the error will eventually go to zero. That sounds good. But we can immediately see that there's going to be a problem if it's an unstable system. Okay.
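Here's a minimal simulation sketch of the naive observer's error, ė = Ae, with made-up stable and unstable A matrices and simple Euler steps (the time step and matrices are just illustrative):

```python
import numpy as np

def error_after(A, e0, T=5.0, dt=1e-3):
    """Euler-integrate the naive-observer error e_dot = A e for time T."""
    e = e0.copy()
    for _ in range(int(T / dt)):
        e = e + dt * (A @ e)
    return np.linalg.norm(e)

e0 = np.array([1.0, 1.0])

A_stable = np.array([[-1.0, 0.0], [0.0, -2.0]])
print(error_after(A_stable, e0))    # decays toward zero, at the plant's own rate

A_unstable = np.array([[0.5, 0.0], [0.0, -1.0]])
print(error_after(A_unstable, e0))  # grows exponentially: naive observer fails
```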
So remember, you might be stabilizing an unstable system with your control. We've just seen that if (A, B) is controllable, we can make the eigenvalues whatever we want, and one of the big uses for this is that if you have an unstable system, you can move the unstable eigenvalues over into the stable part of the complex plane. So we can stabilize a dynamical system: A can be unstable while the controlled system is stable. But in the error equation that part cancels out, and the difference just obeys the original dynamics, which are unstable — so the error diverges exponentially. Clearly this strategy for an observer won't work if the dynamics are unstable. And it turns out it doesn't work very well even if they're stable, because A sets the time scale: the smallest eigenvalue of A sets the longest relaxation time, which is the time it takes to synchronize. The observer — this system is called the observer in control-theory jargon — takes an identical amount of time to relax as the physical dynamics. But intuitively, if you really want to control the system well, you should know the state on a time scale shorter than that of the dynamics.

So what do we do? We add feedback based on the observations. The observer has the original dynamics, and then we add in something proportional to the difference between the observations and the predictions. The predictions are: if I have an x̂, then Cx̂ is the y that I predict, so y − Cx̂ is the prediction error. And now if I formulate the error dynamics, I've got an A − LC: the observer error obeys a modified dynamics, A′ = A − LC, where, for a single observable, L is a column vector of gains.
So we again have n observer gains to tune, to make the eigenvalues of A′ whatever we like, and in principle we can make this convergence of the observer as fast as we like. In practice — again, I'm leaving out part of the story — when you add noise, if you try to make it too fast you get into problems, essentially like the finite-difference issues we were talking about: we need some time to average over the noise. So if you make the observer's time scale too short, it'll be bad from a noise point of view. But we don't have any noise yet, so formally it looks like we can do whatever we want.

Okay, so again, back to the harmonic oscillator: how does this work? Here's our dynamics matrix, here's our output matrix, and now L is a column vector. We form LC, we form A′ = A − LC, and we get a modified dynamics for the observer errors. Again we can choose l1 and l2 to put the poles wherever we want; if we put them at −2, −2, then L will be (4, 3). Again, one finds that we need bigger observer gains to get faster observer relaxation, and noise will eventually limit what you're allowed to do, or what's reasonable to do. But what happens is what I've sketched here: the physical system is over here, and for the observer system you don't know the initial state, so you have to initialize it with an arbitrary initial condition. Whatever you pick, though, it will synchronize, and eventually the two systems go in lock step — they become a synchronized system, and then the state vector is the same for both. And then you can use that state vector as an input into control. It's interesting to look at the structure of the full dynamics. Here I've added a reference input, and here we have the x and x̂; this is the observer feedback.
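Here's a small sketch of that observer for the oscillator (A and C as in the example; L = (4, 3)ᵀ places both observer poles at −2), simulating the error dynamics ė = (A − LC)e with Euler steps. Note the oscillator itself is undamped, so a naive observer's error would never decay — the feedback through L is what makes it converge.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # undamped oscillator
C = np.array([[1.0, 0.0]])                # we observe position only
L = np.array([[4.0], [3.0]])              # puts both observer poles at -2

# Error dynamics e_dot = (A - LC) e, integrated with Euler steps.
e = np.array([[1.0], [1.0]])
dt = 1e-3
for _ in range(int(5.0 / dt)):
    e = e + dt * ((A - L @ C) @ e)

print(np.linalg.norm(e))               # small: the observer has locked on
print(np.linalg.eigvals(A - L @ C))    # both poles at -2
```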
In the error equation, all the inputs and so forth drop out. And if we write this as a coupled dynamical system of the true dynamics x and the observer error e, then notice that the matrix is block triangular, so the eigenvalues are just the eigenvalues of the two diagonal blocks. This leads to a separation principle: we can have the observer produce a reconstructed state x̂, and then, because the original system dynamics were not changed, we can use it as if we had the full state. Remember, I said wouldn't it be great if we could observe the full state, because then — if the system is controllable — we can put the eigenvalues wherever we want. What this shows is that if we reconstruct things with an observer, we can still do that, using the observer's x̂ as a substitute for the state. This is called the separation principle. It's true for linear systems — in general, when things are controllable and observable, you can do this — and it may or may not be true for nonlinear systems; it's more subtle there.

I'm going to squeeze in a couple of quick things here. You can use this either in hardware or in software. In hardware, you really have your physical system, you have the observer on your computer, and you use the observations of the physical system as inputs to the observer. The observer synchronizes, and if you want to control, then once it's synchronized you can use the x̂'s as a basis for control: the y's give you the x̂'s, and the x̂'s give you the control u's. So you decompose your control problem into these two separate steps. You can also do everything on the computer if you're just simulating a system, and that's what I've sketched here.
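The separation principle can be checked directly in this sketch (gains K and L from the oscillator examples, both placing poles at −2): the closed loop in (x, e) coordinates is block triangular, so its four eigenvalues are just the union of the controller poles and the observer poles.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])   # assumed input matrix, as before
C = np.array([[1.0, 0.0]])
K = np.array([[3.0, 4.0]])     # controller poles at -2, -2
L = np.array([[4.0], [3.0]])   # observer poles at -2, -2

# With u = -K x_hat and e = x - x_hat:
#   x_dot = (A - BK) x + BK e
#   e_dot = (A - LC) e
# so the closed loop in (x, e) coordinates is block triangular:
Z = np.zeros((2, 2))
M = np.block([[A - B @ K, B @ K],
              [Z,         A - L @ C]])

print(sorted(np.linalg.eigvals(M).real))
# union of eig(A - BK) and eig(A - LC): four poles near -2
```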
And again, the upshot is that you want to keep your observer dynamics fast: you want to observe the system faster than the natural system dynamics, and then you can control it in a reasonable way. And so the last thing, and I think we are running long and I've talked about a lot of things, is just an application of this. Suppose you have a disturbance. So there's the input that you want, but sometimes the system is also subject to other inputs that you don't want: environmental perturbations, somebody kicks the table, or whatever. Those will enter the system through another input. So in some sense, all systems really are multiple-input systems, because there are the inputs that you want, but then there are also the inputs that you don't want; those are formally another kind of input. So in the notes I go through how, using these techniques, you can construct an estimator not only for the state but also for the disturbances, if we think of the disturbances as being generated by another kind of dynamical system. You could have white-noise inputs, but often you've got something that has some defined characteristics. If you're thinking about vibrations from the earth, it might be sinusoidal: sines of different frequencies and so forth. So if we think of the disturbances as being generated by their own dynamical system, we can try to estimate their state, and then, if we know their state, we can try to have a control that gets rid of them. And so I go through that, and the interesting result that comes out, which is getting a little tangential to my main purpose, so that's why I'm not going through it so much, is something called the internal model principle: if you want to correct exactly, asymptotically, for some kind of disturbance, your controller needs to know about the dynamics that generate that disturbance.
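The simplest version of this is a constant disturbance, whose generating system is just d_dot = 0. This sketch (my own construction following the idea in the text) augments the harmonic-oscillator state with d and verifies that the augmented system is still observable from y = x1, so a single observer can estimate the disturbance along with the state:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Constant disturbance entering alongside the input:
#   x_dot = A x + B (u + d),   d_dot = 0
# Augmented state (x1, x2, d):
A_aug = np.block([[A, B],
                  [np.zeros((1, 2)), np.zeros((1, 1))]])
C_aug = np.hstack([C, np.zeros((1, 1))])

# Observability matrix [C; CA; CA^2] for the augmented system.
O = np.vstack([C_aug @ np.linalg.matrix_power(A_aug, k) for k in range(3)])
print(np.linalg.matrix_rank(O))  # 3: full rank, so d is observable from y
```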
So if you have, for example, sinusoidal disturbances, your controller has to know that your disturbances are generated by a harmonic oscillator equation. And if you do that, then you can design a control that, after some transient, will perfectly compensate for, let's say, being vibrated at some frequency. So that's nice, but it requires you to know something about the kinds of disturbances that you might encounter. If you do, you can take advantage of it to design a controller that will exactly remove that kind of disturbance. So again, if you're designing LIGO or something and you want to isolate it from all the vibrations of the earth, then these kinds of techniques are useful. And I think that's, yeah, that's the internal model principle; it's just what I was saying. Okay, so I'm out of steam and I suspect you guys are too, but that's what I wanted to talk about today. Are there questions about the observers and so on? The thing that we'll eventually carry forward is this technique of using an observer to help estimate a state. But what we will eventually need to do is add noise to this story, which we haven't done yet. Any questions? Questions from the... Sorry, I have a question. Not totally related to today's lecture, but a more general question about relating control theory to biological systems and stochastic models. I think in your lecture there are these very nice models where there is a convolution, and you go to the Laplace domain and everything is quite easy to manage. And I wonder whether in biophysics there are models where control with convolutions appears, and whether they make sense; I am not aware of any. I think people have made some limited use of it. There's a very nice paper by Boris Reiman and others from like 20 years ago that tried to argue, for signaling chains and such, that you could use some of the same concepts.
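For the sinusoidal case, the internal model is a harmonic-oscillator generator for the disturbance. Here is a sketch along the same lines as before (the disturbance frequency w = 2 is an arbitrary choice for illustration): the generator's state is observable from the plant output, which is what lets an observer, and hence a controller containing this model, cancel the vibration asymptotically:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

w = 2.0                               # disturbance frequency, my choice
S = np.array([[0.0, w], [-w, 0.0]])   # sinusoid generator: s_ddot = -w^2 s

# Disturbance d = s1 enters through the input channel:
#   x_dot = A x + B d,   s_dot = S s,   d = s1
A_aug = np.block([[A, B @ np.array([[1.0, 0.0]])],
                  [np.zeros((2, 2)), S]])
C_aug = np.hstack([C, np.zeros((1, 2))])

O = np.vstack([C_aug @ np.linalg.matrix_power(A_aug, k) for k in range(4)])
print(np.linalg.matrix_rank(O))  # 4: the generator's state is observable from y
```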
But yeah, more generally, these concepts have to be generalized to more complicated dynamics. So we're trying to get there, at least some of the way. And one model that people have used is the hair-bundle dynamics. The what? The hair-bundle dynamics, where you model the hair-bundle position variable, which you measure experimentally. There's another variable for the motors that do adaptation; that's a hidden variable in the model. And then there's a third variable, the calcium concentration in the cell, which is supposed to be a feedback in the model. I didn't know those dynamics. But I've never seen this type of approach with convolutions, though it would make sense: the variables do not respond instantaneously. Right, although it might not be a convolution if there are nonlinear connections and such. I should say that one obvious generalization of the observer to nonlinear cases is this idea of creating a copy of your dynamical system and then coupling it to the observations. In principle that can work for nonlinear dynamics as well as linear dynamics. I just said: take a copy of the system and have them go in parallel. So you can take a copy of a nonlinear system and do the same thing. And in fact, if they're close enough, you can linearize the difference of the dynamics, which is what we've been looking at, and that difference would obey a linearized equation if they're close enough. The problem that's a little subtle is that you don't know the initial conditions. So this might work if they were close enough, but if the copy, the observer, started out with an initial condition that is too different from the actual state of the system, there's no guarantee, being so far apart, that they would necessarily synchronize based on a linear strategy.
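Here is a rough numerical illustration of that copy-the-system idea for a pendulum, x1_dot = x2, x2_dot = -sin(x1). The output-injection gains, the initial conditions, and the simple Euler integration are all my own choices for the sketch; with moderately different initial conditions, the copy synchronizes:

```python
import numpy as np

# Nonlinear pendulum: x1_dot = x2, x2_dot = -sin(x1).
def f(x):
    return np.array([x[1], -np.sin(x[0])])

L = np.array([4.0, 3.0])   # output-injection gains, an illustrative choice
dt, n = 0.001, 30000       # simple Euler integration, t = 0 .. 30

x = np.array([1.0, 0.0])   # true state
xh = np.array([-0.5, 0.5]) # observer copy starts with a wrong guess

for _ in range(n):
    y = x[0]                                   # measure the true position
    x = x + dt * f(x)                          # true system
    xh = xh + dt * (f(xh) + L * (y - xh[0]))   # copy + output injection

err = np.linalg.norm(x - xh)
print(err)  # small: the copy has synchronized with the true system
```

The error obeys a linear equation with a time-varying coefficient cos(x1) bounded between -1 and 1, so for these gains every frozen error matrix is stable, consistent with the synchronization seen here; as the text says, there is no such guarantee for arbitrarily distant initial conditions in general.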
Knowing the structure of the nonlinear equations, you might try for a fancier nonlinear control that would always stabilize them, but okay, that's extra work to do. But you can see that, at least in nice nonlinear situations, this same strategy would work. The other comment, which will come back later on, is that what we're really doing here is making use of prior knowledge. Implicit in all of this is that I know the dynamics of the system I'm trying to control, and I use that knowledge to come up with this shadow dynamical system, the observer; but in order to do that I had to know the original dynamics, including all the parameter values. So when we start talking about Bayesian inference and priors, this is the way to incorporate prior knowledge of the dynamics into inferences. So again, I'm jumping the gun a little bit, but maybe it plants some ideas. Sorry to go back to the controllability argument, but if you have a couple of coupled first-order differential equations, isn't the system already determined? You have to set up the coupled differential equations with the extra input u. So here, u is a given input, and the initial conditions are also given. I would think that x1 and x2 would end up depending on each other: they would both be driven by this input, and then we could express one of them as a function of the other. Given u? Given u, right. Yes, so the question is, can I choose u to make them independent, and the answer here is yes. It's only involving... I mean, controllability is asking about the state of the system at one particular time, with all the freedom of choosing u at all the intermediate times.
So there are many, many values of u; it's very over-determined, in the sense that I've got a lot of freedom in u to tune these two values. So it's also very specific when you say... Yeah, well, that's why the controllability test depends on the specific A and B. Right, okay. So, given the lambdas, given the B of (1, 1), is it controllable? You might change the B, this (1, 1), to something else and make it non-controllable. We know how that works now: we would just redo this calculation with B1 and B2 here and see when this is invertible or not. There's probably some combination of these that makes it not controllable, I suppose. I have a question. Yeah? Okay, when we solve an equation of motion, we need to specify either the initial or the final boundary condition. Yes. Can these be viewed as a special subset of the control and observability issue, where you choose u and y at the initial time and final time? Well, okay, so there's the picture that I had with the observer. If we look at this picture here, the idea with this observer, for example, is to create dynamics where the difference goes to zero at some controllable, prescribed rate. And so it doesn't matter what initial condition you give your observer; the difference will always be decaying away. Yeah, but you can give, like, just a delta function in the u function, right? Okay, sorry, this is observability, but... is your question about controlling it or observing it? Sorry, both. The controlling is, for the initial condition, u is a delta function; and for observability I had in mind, like, a final boundary: I specify where I want to end up.
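That calculation, redoing it with a general B and seeing when the controllability matrix is invertible, takes a couple of lines. In this Python sketch (illustrative numbers, not from the lecture), a diagonal A with distinct eigenvalues is controllable from B = (1, 1) but not from a B that never touches the second mode:

```python
import numpy as np

A = np.diag([-1.0, -2.0])   # diagonal A with distinct eigenvalues (the lambdas)

def controllable(A, B):
    # Controllability matrix [B, AB]; full rank <=> controllable (n = 2 here).
    W = np.hstack([B, A @ B])
    return np.linalg.matrix_rank(W) == A.shape[0]

B1 = np.array([[1.0], [1.0]])  # the input drives both modes
B2 = np.array([[1.0], [0.0]])  # the input never touches the second mode

print(controllable(A, B1), controllable(A, B2))  # True False
```

One combination that spoils it even for B = (1, 1): make the two lambdas equal, so the columns B and AB become proportional and the matrix drops rank.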
Okay, but observability has nothing to do with the control, so the inputs that you apply are irrelevant. Because, I understand, yeah, the same input is being applied; the input is known, so it's being applied to both the physical system and the computer model. And here you're setting it up so that the initial condition for the observer doesn't matter. So I'm not sure I'm getting where you're going with that. Well, when we solve the equation of motion, I can specify the initial position and the final position, right, and I look for the path. For a controllable system, yeah, there'll be some u that takes you from a known initial condition to a desired final condition. Right, and when we solve the equation of motion it's the same thing: instead of the initial position and initial velocity, I can choose the initial position and final position arbitrarily, right? If it's controllable, yeah. No, no, without control; I mean the conventional mechanics exercise. But you can't... if you don't have a control, then the final position won't be what you specify, in general. Well, then there will be no solution, but for the usual dynamical trajectories I can specify initial and final position instead of initial position and initial velocity, right? Well, okay, yeah, sorry, yes: if it's, like, a two-dimensional state vector, you could specify an initial position and velocity, or you could say I want an initial position and a final position, and there'll be some velocity that makes that work. Yeah, that's true. Right, so I was just wondering whether that can be viewed as a very simple control setup; in other words, whether the initial condition can be viewed as controlling and observing the system.
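That position-to-position framing can be checked with the matrix exponential from earlier in the lecture: for the free harmonic oscillator, x(T) = expm(AT) x(0), so specifying the initial and final positions fixes the initial velocity. The numbers here (T, p0, pT) are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = 1.0
Phi = expm(A * T)        # propagator; for this A it is a rotation by angle T

p0, pT = 1.0, 0.5        # requested initial and final positions
# Position row of x(T) = Phi x(0):  pT = Phi[0,0] p0 + Phi[0,1] v0
v0 = (pT - Phi[0, 0] * p0) / Phi[0, 1]

xT = Phi @ np.array([p0, v0])
print(xT[0])  # 0.5: with that v0 the trajectory ends at the requested position
```

This works as long as Phi[0, 1] = sin(T) is nonzero; at T equal to a multiple of pi the two positions are not independent and no (or every) velocity works.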
Well, again, observing is a separate story; it doesn't say anything about observing, right? In the sense that I end up at a particular position? Yeah, but that doesn't mean that you know what the state is. Observing is just asking: can I figure out what the full n-dimensional state of the system is? In the extreme case you have no observation, so it's obviously not observable: C is just a zero matrix or zero vector. You can still do everything you were saying, but the system may be doing what you want, and you have no idea whether it's doing it or not. These are separate questions: one is, what is the state of the system, and the other is, do you know what the state of the system is? I see, so your short answer is that the initial condition, either the initial or final position, is not a complete observation. It has nothing to do with observation; again, just think of the case where you're blindfolded and have no information about the system. The system can be doing what you say, but you have no idea if that's true or not. Right, that's what I meant. Okay, so these are really separate issues. But, and we'll come back to this tomorrow, on Thursday, this duality means that there's a certain kind of similarity between our discussions of can we control something and can we observe it; mathematically they have some common features, but they're independent in terms of what an actual system is like. Okay. Thank you, John. All right, now we have some optimal control, and, yeah, we'll talk about how to implement some of these controls in a more systematic way. John's lectures from yesterday are available online; you can find them on the website. And today we will try to be a bit quicker. Okay, you know, it's also in terms of