So we welcome King to our school; we look forward very much to your course. Okay, well, thank you, thank you, Edgar, and it's great to be here. It's interesting to try to do this in a hybrid way, so I hope the people online can also get something out of it, and I guess they can send questions; I can't see anybody online. Yeah, they can: there's a chat, and they can also ask questions if they're interested. Okay, yes. So we have five days, about an hour and three quarters a day, and I spent some time thinking about how to do things. As Edgar mentioned, I got far enough into control theory that I ended up writing a book on it, and I taught a whole semester course. So I know how to give a one-semester course on some of this material, and I know how to give a one-hour seminar, and I'm trying to feel my way toward the right way to present things so that we get far enough to be interesting, but without making it just a seminar where you tell people things they're not really expected to understand how to derive. I hope I can actually teach some things. Also, I know there's a range of levels, so most of the examples, particularly at the beginning and especially today, are going to be very simple, but the ideas behind them are, I think, interesting even for people who are farther along. The overall plan: today I'll give a little introduction to control theory and what we'll be talking about in general, and then discuss a classic approach to control, for linear systems in the frequency domain; that's for today. Tomorrow, we'll switch from the frequency domain to the time domain.
That's a much more general approach, but it has a certain abstraction relative to the frequency approach that takes some getting used to. Then I want to give a brief introduction to a topic called optimal control, and on Thursday to talk about what happens when you have limited capacity; I know there's already been a lot of discussion on that, so hopefully that will start to join themes that have been discussed here. My overall goal for Friday is, by the last half of it, to present some recent work we've done that makes use of a lot of the ideas I'll be talking about. Hopefully we'll get there in some fashion or other. Okay, so, just to begin: this is the book cover, and the book is the overall reference I'm using. If you go to my website, there's a link to a page about the book, and from there a link to the Cambridge University Press page for the title, where there are some materials you can download that might be helpful. There are two things. First, all the problems in the book, for better or for worse, have solutions on the web, so you can download the problems and the solutions, and some of those might be worth going through now or later. There's also a mathematical-methods appendix that was too long to be in the book (or the book was too long to have the appendix), so that can also be downloaded, and it might help if there's something specific I talk about where the math background is not so clear. Okay. To get going, I wanted to start with a quote, which I've always found very nice, by Arthur C. Clarke, a science fiction writer probably most famous for 2001: A Space Odyssey, which was made into a movie in 1968 by Stanley Kubrick. He's also known for inventing the concept of the communications satellite.
That was around 1945, somewhat before there were actual communications satellites; he worked out how this might work and why it was an important idea. But the quote that I like came later: "Any sufficiently advanced technology is indistinguishable from magic." What he means is that if you took any device, this tablet, your phone, a laptop, and showed it to somebody 200 or even 100 years ago, it would look like magic: there would be no way to understand how it works, particularly before radio. Just the idea that things are wireless: I'm always impressed when I walk past somebody on the street who is talking into a phone mike, and to all appearances they're just walking down the street talking to nobody. Three hundred years ago, a person like that would have been burned. These things really do change. But the quote also has another meaning, which is that we can take complex technological objects, like a car or a computer, and use them without realizing that there are a hundred different control loops running underneath, that there's all this technology, and it's invisible until something fails catastrophically and, you know, a plane falls from the sky. Okay, and we haven't really done anything yet. So one of the things I hope I can start to get across is that although control theory can be a very mathematical topic, and the mathematics can get very difficult, sometimes it is not so difficult, and in some sense the math is not really the point of the exercise. One of the things that's very interesting about control theory, from the point of view of a physicist at least, is that it's a different way of thinking about things: it's dynamics with a goal or purpose. That sounds a little trivial, but it changes a lot of things, and I hope some of that will come through this week.
By purpose I don't only mean that there's an intelligent design, where somebody designing some object puts in, say, a thermostat to regulate temperature, all done by a supposedly intelligent designer. More subtly, purpose can also come in through evolution: both technological evolution, in that devices over different generations tend to converge on standard solutions, and biological evolution. I didn't mention it when I was talking about control in technology, but our body is also full of control loops. There are large-scale physiological feedback loops that keep our temperature constant and regulate blood pressure and so forth, and there are cellular-level feedback loops that deal with gene expression, the regulation of gene transcription, the control of the cell cycle, and so on. Again, those all happen normally without our being aware of them, and the way they work is subject to biological evolution. So all of this leads to a picture that's somewhat different from what we're used to thinking about in physics, and I hope that comes through. Where does this get applied? Technology, of course. If you're an experimental physicist, and I know there's at least one, maybe two here, then understanding control theory can help you do better experiments. But the value of learning about control theory is not just for experimentalists; I also hope to convince you that there's conceptual value in this idea of dynamics with a purpose. And there are other applications to physics: you can actually change the dynamical properties of physical systems and get materials and responses that are not natural in some sense.
A classic example of this is the Paul trap in atomic physics, where you can trap charged particles with oscillating fields and do experiments that would otherwise not be possible. And as you've already seen, there are things such as Landauer's principle and Maxwell demons, where the ideas of feedback and control enter at a very fundamental level within physics. Beyond that, in the whole quantum-engineering movement of building quantum information systems and quantum computers, everything depends on the control of quantum systems; probably the most extensive recent conceptual application of control has been this sort of sophisticated control of quantum systems. Finally, there's the idea of controlling complex systems and complex networks, which is also an active topic now. We don't have time to talk about all of these things, but I can touch on a few of them. And, as I mentioned, in biology there are control loops all over the place, on all timescales. That's the general motivation. To be a little more specific, one goal that you might have in imposing control is regulation. For experimentalists: often you want to keep a bunch of variables constant and then vary one parameter deliberately. For example, many experiments have effects that depend on temperature, and whatever else you're studying, you want the temperature to stay constant. It's roughly constant in the room, but people go in and out, windows are opened, and so forth, so you might need to control temperature more precisely; that's something feedback control can do, and regulation is maybe the simplest kind of application. You can also try to track time-varying signals: you want a system to go through some path, so you force it to follow not a constant setpoint but a time-varying one.
Mostly that's what we'll talk about this week. There are more subtle things too. We'll be talking a little about dynamical systems, and I gather there are other lectures on dynamical systems starting today as well, so you'll learn about things like attractors. When you add feedback, you can change the attractors of the system: you can modify their properties, modify their type, and so create the dynamics that you want. Another very nice application of control is to create collective phenomena that wouldn't otherwise exist. You can have different agents and start to add in interactions, sensing, and so forth; this is like active matter, birds flocking and so on, but you can also make artificial systems where the interactions are programmed, using control to create collective effects that wouldn't otherwise exist. This can lead to synchronization, swarming, dancing, all sorts of things; again, stuff we won't really have time to talk about. Okay, so the first basic notion is that of a feedback loop, and these can occur in many different kinds of systems. Is it all visible, by the way? Okay, it's not; I'll try to put things higher. So, different classes of feedback systems. Some of the oldest technological ones are what I would call gadgets. The one you probably know best is the toilet: you pull a lever and flush it, water goes down from the tank, that opens up a valve and the tank refills; there's a ball, and when the water is high enough, the ball rises and shuts off the valve. And sometimes it doesn't quite. That's a descendant of something called the steam-engine governor, which is depicted here; this is my one historical example. It was developed by, among others, James Watt in Scotland in the 18th century, as a device to regulate the speed at which a steam engine turns.
The idea is that as it rotates, these balls feel centrifugal force and move outward. As they go out, they're hooked up to a lever, and the lever closes a flap that cuts off the steam. So if the engine goes too fast, the flap starts to shut, and that regulates the steam engine. Actually, sometimes it doesn't: sometimes the engine ends up oscillating unstably around the desired speed. The first person to understand this was Maxwell, who wrote the first theory paper in control theory, in 1868. That was just about the time he was thinking about the Maxwell demon thought experiment, which concerns the conceptual role of feedback in a system, while here he was thinking about the practical role of feedback. So it's interesting that he pioneered both aspects of control theory at roughly the same time, though I think the two were completely disconnected in his mind. It's a long story why he was interested in the governor. Anyway, these are what I would call gadgets: devices, mechanical ones, and there are also biological ones, that simply function as feedback systems, and it's up to the viewer from the outside to identify which is the feedback part. Actually, I didn't reproduce it here, but in the book I have the complete diagram; this is just one part of the steam engine, and there's the whole rest of the boiler and so on. It's interesting to see the full diagram of the steam engine, because you see this complicated set of cables and wheels and things like that.
And then off in the corner there's this part, which is the governor; if you didn't know about it, it might take you a while to figure out how this engine worked and what the function of this piece was. Similarly, in biological systems, when we look at a molecular network or the like, it's a challenge to understand whether there is some kind of control system, how it works, and how it is, quote unquote, designed, or evolutionarily designed. There are also large-scale natural kinds of feedback, for example in the climate. One of the classic ones involves sea ice, which, when present, is white and tends to reflect light, keeping the water cool. But as the sea ice starts to melt, the water is effectively darker, absorbs more light, and tends to warm things up more. So there's a kind of bistable situation with positive feedback, which is what we're experiencing now: things start to melt, that makes the surface more absorbing, which makes things warmer, which makes them melt more, and you can be driven between two different stable states with an intermediate instability. We'll see a little of that. In the more deliberate technological realm, there is both analog and digital control. One of the big advances about 100 years ago was to break things up into modules: you have the system you're interested in, with a sensor on it; the sensor feeds a signal into an electrical circuit with an amplifier that does the regulation; the circuit takes a decision and outputs another voltage, which goes through an amplifier and affects the system somehow.
In the digital version of this, the analog signals are digitized, turned into numbers, and a computer with a program makes some explicit decision; then the digital answer is converted back into an analog signal, which is sent out to an amplifier and affects the system somehow. So, just to set some concepts: when do we have a feedback loop, and when do we just have interacting variables? If I have two different systems, say x and y, then in some sense a feedback loop means x affects y and y affects x, so you set up a loop. But when would you think of this as a feedback loop, versus just a two-variable system? You could take this x and y and put them into a vector. So x here might obey dynamics x-dot = f(x, y), and y might obey y-dot = g(x, y). You could think of this as describing a feedback loop, or you could just think of it as a coupled set of equations, with a vector instead of two scalars. When would you talk about a feedback loop and when not? It's somewhat a semantic question, but usually you want some notion of autonomy: x and y are somehow physically distinct systems. Sometimes they really are two separate things; sometimes they're parts of one thing but with separate roles, as we saw here. And there's a notion of causality: when x acts, it exerts something on y, and y exerts something on x. So there's this notion of feedback, which is at least somewhat intuitive. When you have two systems with this autonomy and separateness, you can then distinguish open-loop from closed-loop interactions. An open-loop interaction just means that one system affects the other, but not vice versa: a controller just does something, and the system changes its behavior as a result. And then there are closed-loop systems.
In a closed-loop system, the controller affects the system, but then the system affects the controller. That's the notion of feedback: the controller is being fed a measurement from the system. This would be like a thermostat that detects that the room is too cold and then tells the heater to come on. The sensor is the thing that tells it the temperature of the room; the action being taken is to turn the heater on; and the decision, which is the third part of the feedback, would be some rule like: when it gets colder than such-and-such a temperature, turn on the heater, and when it gets hotter than some temperature, turn it off. That would be a very simple rule. I don't think we'll talk too much about this, but one can make a distinction between autonomous and non-autonomous dynamical systems. A dynamical system is just some state x changing in time; if x-dot = f(x) with no explicit time dependence, we say it's autonomous, and non-autonomous if there is some explicit variation in time. The distinction is useful when you want to contrast feedback imposed by, say, a computer program, which measures something and then changes it, so that the change, as far as the system is concerned, is some outside time-dependent effect, with the case where you view the whole hardware as one thing and might think of it as an autonomous system: it's connected to reservoirs that feed in energy, for example, or particles at some chemical potential, but otherwise its internal dynamics, the controller and everything, is happening internally. Another distinction that is sometimes useful is between a complex controller and a simple controller.
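The simple on/off thermostat rule just described can be sketched in a few lines of code. This is my own illustration, not from the lecture; the setpoints T_low and T_high are hypothetical, and the deadband between them is there so the heater doesn't chatter on and off at a single threshold.

```python
# Sketch of a bang-bang (on/off) thermostat rule with a deadband.
# T_low and T_high are illustrative setpoints, not from the lecture.
def thermostat(temp, heater_on, T_low=19.0, T_high=21.0):
    """Return the new heater state given the measured temperature."""
    if temp < T_low:
        return True       # too cold: turn the heater on
    if temp > T_high:
        return False      # too hot: turn it off
    return heater_on      # in the deadband: keep the current state
```

The third branch is the point of the deadband: between the two thresholds, the decision depends on what the heater was already doing, which is exactly the "memory" that prevents rapid switching.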
A complex controller would be something equivalent to a Turing machine; a computer programmed to run the control can basically do anything, so that would be a complex controller. Something simpler, like a thermostat with a bimetallic strip, can normally do only one thing, but it's also a controller. That's the philosophy. Before I switch to some actual material, this is a good place to stop; are there any questions before we really get going? I have a question from online. Sure. Is there any notion of control or feedback at the level of the fundamental laws of physics or biology? That is, from a reductionist viewpoint of the fundamental laws, not something we impose in an experiment. I mean, there are feedbacks that happen in natural systems; as I mentioned, the Earth's climate can be thought of as having various feedbacks, and that's not something we impose. And in biological systems they exist, not because we're here. Whether you think of that as fundamental or not, I'm not sure. So do they arise because we are dividing a closed system into two pieces, one being the piece we're interested in observing and the other some environmental bath? Is that right? Yes; we make this classification, but I'd say it's a useful classification. There are places where you can see this explicitly. One, and this is a more advanced comment, is that if one thinks about bipartite Markov dynamics obeying master equations, you can imagine having, say, four states, and depending on the rates you can think of the system as making a measurement or as having a feedback loop.
In those cases, whether you think of it as just a four-state system or as two coupled two-state systems depends on things like the separation of timescales. So there are some simple cases where you can see the emergence of this kind of separation into two different systems, which you might think of as the working system and the control system, and there it can arise from things like disparities of time constants. Otherwise, as I said, there are plenty of examples, in biology and geophysics, say, where it makes sense to think about feedback systems, and it's we who impose this terminology. I see. In elementary physics, can I think of radiation back-reaction as a kind of feedback-control situation, where the electron radiates and the radiation then affects the motion of the electron itself? Yeah, I suppose. I mean, you can make these interpretations; the question is whether it's useful to think of it that way or not. Okay, thank you. Okay, so, to be a little more concrete and get into the material. Yes, a question. So the question is when and how we decide whether something is a feedback loop. I think one thing to say is that this is a point of view being imposed from the outside, so one question is just whether it's a useful thing to do. As I said, some of the elements you would want are these: the two parts you decompose the system into, the physical system being controlled and the controller, should have some kind of autonomy, and there should be causal interactions between them. Then this might be a useful way to describe things; but otherwise, it really is we who make this classification. And many times it is a very useful thing to do.
In technology, this is often done very deliberately and very obviously: you have a system, you put a sensor on it, you hook it up with a wire to something called a controller, and another wire comes back through an amplifier. All of those elements are explicit; you can point to this box and this box and this box connected by that wire, and in those cases it's clear. In other cases in nature, you look at it and say, okay, it makes sense to call this a feedback system, the climate feedback for example. Other cases might be more complicated; as I said, in biological systems in particular, sometimes the challenge is understanding whether something really is being actively controlled or not, and that's not obvious. Okay. Is that good? No complaints. Okay, so let's start in a simple place, partly to fix some concepts and notation. Think about a pendulum as a prototypical dynamical system. You can think of a balance of torques, leading to a second-order nonlinear equation. One thing that's often useful in control theory, and in dynamical systems generally, is to work in dimensionless variables that are appropriately scaled. Here, time is a physical variable. We can divide through by the mass and by the length squared, and the combination g over l we define as the square of an angular frequency, omega-zero squared; but we still have time as a dimensional variable. We can get rid of that by defining a dimensionless time variable t-bar, which is omega-zero times t; this is a frequency times a time, so t-bar is dimensionless. And when we change variables, the omega-zero squared cancels out, and we're left with a dimensionless equation.
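The scaling steps just described can be written out explicitly. This is my reconstruction of the derivation, not a verbatim slide, with l the pendulum length:

```latex
% Nondimensionalizing the pendulum: torque balance, then rescale time.
\[
  m \ell^2 \,\ddot\theta = -\, m g \ell \,\sin\theta
  \quad\Longrightarrow\quad
  \ddot\theta = -\,\omega_0^2 \sin\theta ,
  \qquad \omega_0^2 \equiv \frac{g}{\ell} .
\]
% With the dimensionless time \bar{t} = \omega_0 t, so that
% d/dt = \omega_0 \, d/d\bar{t}, the \omega_0^2 cancels and no
% parameters remain:
\[
  \frac{d^2 \theta}{d \bar{t}^{\,2}} + \sin\theta = 0 .
\]
```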
I will usually be pretty sloppy and go back to the same notation I started with, so the dots here are derivatives with respect to the original time variable, but now scaled. Initially we had mass, length, time, gravity, and eventually we get down, in this case, to an equation with no free parameters. That's one of the advantages. The other is that dimensionless numbers have an absolute meaning, whereas numbers with units don't: something that is a meter long is one meter, or ten-to-the-minus-three kilometers, or ten-to-the-nine nanometers; a number with units has no absolute sense, but a dimensionless number does. Okay. It's also useful to put everything in a standard format. Everything I'll be talking about, I'll assume can be reduced to a system of the form x-dot = f(x), where x is some vector and f is a vector function. In this case, we can do this very simply by defining a two-component vector: x1 = theta, x2 = theta-dot. Then we can put the equation into first-order form, like here; we call this x, we call this f(x), and we have the standard form. I hope this is something you've all seen before in some fashion. Now, the point of view of control theory makes what seems like just a slight modification of this, but it ends up being important. When you open many physics texts on dynamical systems, they talk about equations of this form, but in control theory we add inputs and outputs: we explicitly think about what you can learn about the system, which might not be everything (it could be, but often isn't), and about how you affect it.
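As a concrete illustration of the first-order form (my own sketch, not from the lecture): the scaled pendulum equation theta-double-dot + sin(theta) = 0, with x1 = theta and x2 = theta-dot, integrated by a basic Runge-Kutta step. In scaled time, small-amplitude motion should be nearly harmonic with period 2 pi.

```python
# The pendulum in standard first-order form x_dot = f(x),
# with x1 = theta, x2 = theta_dot, integrated by a plain RK4 step.
import math

def f(x):
    x1, x2 = x
    return (x2, -math.sin(x1))

def rk4_step(x, dt):
    k1 = f(x)
    k2 = f((x[0] + 0.5*dt*k1[0], x[1] + 0.5*dt*k1[1]))
    k3 = f((x[0] + 0.5*dt*k2[0], x[1] + 0.5*dt*k2[1]))
    k4 = f((x[0] + dt*k3[0], x[1] + dt*k3[1]))
    return (x[0] + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            x[1] + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

# Small-angle check: starting at theta = 0.1, after one scaled period
# (about 2*pi) the pendulum should be back near where it started.
x, dt = (0.1, 0.0), 0.01
for _ in range(int(2*math.pi/dt)):
    x = rk4_step(x, dt)
```

The scaled energy x2^2/2 + (1 - cos x1) should also stay essentially constant along the trajectory, which is a standard sanity check on the integrator.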
So we add inputs, which I'll call u; I think that actually traces back to the Russian word for control. Any Russian speakers here? I forget what it is in Russian, but it starts with a u. And outputs, y. What we end up with is a set of equations: x-dot = f(x, u), now a function of both the state x and some explicit input u; and outputs y = h(x, u), which in general can be some nonlinear function of x, and possibly of u as well. I have a question: I don't see the difference between the x and the y. Does the output have to be part of the system state? Not necessarily. Often it is, but it doesn't have to be, and it need not even be simply related to the state. We'll see some examples of how this works. And the feedback would be built on the y's: given the y's, how do you construct the u's? That would be a feedback system, for example. Okay. Often, both for simplicity and for other reasons we'll get into, we'll be thinking about linear dynamics. The simplicity is that we know how to treat linear systems very well mathematically, so that's a good starting point. For example, in this pendulum, we can imagine looking at its behavior in the vicinity of the down equilibrium, where theta and theta-dot, that is, x1 and x2, are both zero, so the equilibrium state is zero. If you linearize the equations about that point, and we'll see how in just a moment, you can write this as a matrix equation for x1 and x2; I'll talk about that in a moment. The result you get depends on where you are. Some terminology: I'll call x the state vector, and the vector space that x lives in is the state space. In physics, we would often talk about phase space; there you would have position and momentum variables.
Here, the state vector could be the set of q's and p's, generalized coordinates and generalized momenta, but it could be other things too. So we'll just talk about a state space and a state vector. In particular, for the pendulum, if we rewrite the equation and add a u(t) to the right-hand side, we're saying we can apply a torque to it. For the output of the system, the y's, we could say that maybe we measure the angle of the pendulum, even though the complete state actually requires both the angle and the velocity. There are devices that will directly measure velocity, but typically people don't use them; they might measure just the position. So in this specific example, with this state vector, y is given by the row vector (1 0) acting on x, which picks out x1 and not x2. That's a very simple example of h. Okay. Linear systems, as I said, are what we'll often be dealing with. They have the form x-dot = Ax + Bu, so the f(x) is essentially being Taylor expanded; maybe I should have put that first, but we'll get to it in a moment. We assume that x = 0 is an equilibrium, and the first term of the expansion gives the matrix A. A question: I have a nonlinear system, and my question is, what does it mean to apply the linear theory? Can you hold that for about ten minutes? We'll get to it. For people who like the abstraction first, maybe I should have inverted the presentation, but we'll do some examples first and then come back to the linearization. Okay, so again, just to set notation: x is an n-dimensional state vector.
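Here is a small bookkeeping sketch of the x-dot = Ax + Bu, y = Cx + Du structure in code. This is my own illustration, not from the lecture; the numerical values are the linearized pendulum with one torque input and one angle output, and the stacking trick at the end is a common compact way to store all four matrices together.

```python
# A linear system  x_dot = A x + B u ,  y = C x + D u
# with n = 2 states, m = 1 input, p = 1 output.
n, m, p = 2, 1, 1
A = [[0.0, 1.0], [-1.0, 0.0]]   # n x n : dynamics (linearized pendulum)
B = [[0.0], [1.0]]              # n x m : how the torque input enters
C = [[1.0, 0.0]]                # p x n : what we measure (the angle)
D = [[0.0]]                     # p x m : direct input-output coupling (none)

def pack(A, B, C, D):
    """Stack [[A, B], [C, D]] into one (n+p) x (n+m) array."""
    return [ra + rb for ra, rb in zip(A, B)] + \
           [rc + rd for rc, rd in zip(C, D)]

M = pack(A, B, C, D)   # a 3 x 3 array in this example
```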
So in x-dot = Ax, A is an n-by-n matrix. B connects the input; in general there could be several inputs, say m of them, in which case you need a matrix B to connect the effect of the inputs u to the state x, so B is an n-by-m matrix. Similarly, you might measure more than one thing, so y would have, say, p components, and it would be connected by some matrix C that goes from the n state components to the p outputs; that's a p-by-n matrix. And if the output is affected directly by the inputs, which sometimes does happen, then you need a matrix D that goes from the number of inputs to the number of outputs, so D is a p-by-m matrix. You can actually pack these all into one big matrix that is (n+p) by (n+m); that's sometimes an efficient way to store a linear system in a computer program, but it's also a nice visualization of the inputs and outputs. To give an example of what this language looks like for something simple that you've already seen: take a low-pass filter, an RC electrical circuit with a time-dependent voltage in and a voltage out. Then the dynamical equation, as you can show (and I hope you've seen), is that the time derivative of V-out equals minus one over RC times V-out, plus a driving term in V-in, the time-dependent input. Again we can scale it, and when we do, it takes the simple form x-dot = -x + u: we've used RC as the unit of time, tau = RC, and rescaled just the way we did before. Sorry, a question: can you explain where the circuit equation comes from?
Where does that equation come from? Well, you can think of the circuit as a voltage divider, except that we're dealing with AC signals, so we have impedances: there's a voltage divider between R and C, where the impedance of the resistor is just R and the impedance of the capacitor is 1/(iωC). So there's a frequency-dependent voltage divider. This is the simplest example of what's called a low-pass filter; we'll see in a moment that what it basically does is let the low frequencies through and filter out, or reduce the amplitude of, the high frequencies. But just as an equation of motion, it's ẋ = −x + u. So in this language of A, B, C, D: the A is just −1 (everything is a scalar here), B is +1, C is +1, and there's no D. This is mapping a physical equation that you could easily derive onto the language of linear systems, with an explicit A matrix for the dynamics, a B matrix for the inputs, a C matrix for the outputs, and sometimes a D matrix if the input couples directly through to the output, which it usually doesn't. Here's another example you've already seen: the damped harmonic oscillator, which in scaled units is q̈ + 2ζq̇ + q = u, where ζ is a dimensionless damping coefficient. The system is underdamped for ζ between zero and one, critically damped at ζ = 1, and overdamped when ζ is greater than one. Again u is the driving input. We rewrite this second-order equation as a pair of first-order equations, the same as before except there's now the damping term; that gives our A matrix. The B matrix is (0, 1), because u affects the equation for d/dt of q̇: q̈ is d/dt of q̇, which is d/dt of x2.
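To make this concrete, here is a minimal sketch (my own, in Python; the value of ζ is an arbitrary assumption, and forward Euler is the crudest possible integrator) of the oscillator's state-space form, including the packed (n+p)-by-(n+m) storage mentioned earlier:

```python
import numpy as np

# Scaled damped oscillator:  x1 = q, x2 = q-dot;  x_dot = A x + B u,  y = C x.
zeta = 0.25                                   # underdamped (assumed value)
A = np.array([[0.0, 1.0],
              [-1.0, -2.0 * zeta]])           # dynamics matrix (n x n, n = 2)
B = np.array([[0.0],
              [1.0]])                         # input map (n x m, m = 1)
C = np.array([[1.0, 0.0]])                    # output map (p x n, p = 1): position
D = np.array([[0.0]])                         # no feedthrough

packed = np.block([[A, B], [C, D]])           # the (n+p) x (n+m) storage
assert packed.shape == (3, 3)

# Crude forward-Euler integration of the unforced system (u = 0).
dt = 1e-3
x = np.array([1.0, 0.0])                      # start displaced, at rest
for _ in range(int(20.0 / dt)):
    x = x + dt * (A @ x)                      # plus dt * (B @ u) if driven
y = (C @ x)[0]                                # the measured position
print(abs(y))                                 # near zero: oscillation has damped away
```

The C row picks out the position, exactly the (1, 0) readout described above.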
So u enters through the (0, 1) in B. If we observe the position rather than the velocity, as I mentioned before, C will be the row vector (1, 0), and D is again zero. So this is again mapping something you've already seen onto this language of engineering systems theory. Okay, so why study these linear systems? One reason is that they're easy. It's the streetlight principle: when you've lost your keys, why do you search for them under the streetlight? Because that's where you can see. These are the questions we can easily solve. However, even in cases with nonlinear dynamics, we'll see many situations where we're comparing two dynamical systems that are each nonlinear but almost the same, so the deviations between them are small and can be approximated by linear dynamics. So even something that starts out nonlinear can often be treated as linear. The other reason is that a linear control system can be robust against many kinds of perturbations, and if the system is not too strongly nonlinear, you can sometimes treat the nonlinear terms themselves as perturbations: some of the nonlinear effects just get lumped into various kinds of disturbances that hit the system. So linear systems are certainly simpler, but they're also more realistic than you might initially think, or at least they can be. Okay. So how do you get from nonlinear systems to linear ones? Basically, you just Taylor expand. We start with a system ẋ = f(x, u), and y is some function h(x, u), in general nonlinear; this is the readout, which doesn't have to be linear.
Then if there's a fixed point, this is particularly easy to understand. So if there's an x₀ and u₀ that are fixed points, where f(x₀, u₀) = 0 and h(x₀, u₀) is some reference value y₀, then you can understand everything in terms of small deviations around (x₀, u₀, y₀). And if f and h are differentiable, we can carry out a Taylor expansion: A is just ∂f/∂x evaluated at the fixed point (x₀, u₀), B is ∂f/∂u, and so on. So, again, what are x₀ and u₀? They're defined by being fixed points, so they're solutions that are independent of time. This is just the simplest case; you can do fancier linearizations. And, again, there's the control input: you imagine there's some u₀ that will hold the system at x₀. If I go back to the thermostat example, x could be the temperature of the room, x₀ is the stationary point, and u₀ is the heater setting, let's do this in winter, the amount of heat necessary to hold the room at the temperature x₀ so that everything is stationary. But of course in general the temperature varies and the heater turns on and off, and so we linearize about that operating point. And it's clear that, for example, a different value of u₀ will lead to a different value of x₀, and the relationship is not necessarily linear: you have some knob on the heater, you turn it to some value, and you measure the temperature the room comes to, and maybe that's linear in the knob setting but maybe it's not. But whatever it is, you can look at small deviations about it and describe those by linear dynamics.
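Here's one way to see the Taylor-expansion recipe in action: a finite-difference linearization of the torque-driven pendulum about its hanging fixed point (a sketch of my own, in scaled units; the step size eps is an arbitrary choice):

```python
import numpy as np

def f(x, u):
    """Pendulum with torque input, scaled units: x = (angle, angular velocity)."""
    return np.array([x[1], -np.sin(x[0]) + u])

# Fixed point: hanging straight down with zero torque, f(x0, u0) = 0.
x0, u0, eps = np.zeros(2), 0.0, 1e-6

# Finite-difference Taylor expansion: A = df/dx, B = df/du at (x0, u0).
A = np.column_stack([(f(x0 + eps * e, u0) - f(x0, u0)) / eps
                     for e in np.eye(2)])
Bcol = (f(x0, u0 + eps) - f(x0, u0)) / eps
print(np.round(A, 3))     # ~ [[0, 1], [-1, 0]] : the linearized pendulum
print(np.round(Bcol, 3))  # ~ [0, 1]
```

The resulting A and B are exactly the small-angle pendulum matrices, since sin(x1) ≈ x1 near the fixed point.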
Okay, so once we have this linear-systems point of view, it's very natural, particularly when the systems have time-independent components, so that A, B, C, and D are constant. In principle they could be time dependent and you'd still have a linear system, but in the simplest case they're not. Then it makes sense to think about things in frequency space, and as physicists we would naturally reach for the Fourier transform; engineers tend to use the Laplace transform, with a complex variable. You define F(s) to be the integral from zero to infinity over time of the time-dependent signal f(t) times e^(−st), and you can think of this as an operator L acting on f. If s is complex, then there's not too much difference between the Laplace transform and the Fourier transform. Just in case anybody encounters engineering discussions, I'll do this in terms of the Laplace transform, but one can easily convert to a Fourier transform. So in the Laplace domain, what is the relationship between y, the output, and u, the input? We can take the output y(t) and take its Laplace transform to get y(s). I'll be a little loose here: if I say y(t) and y(s), of course the y's are different functions in the two different domains, but we can think of a more general notation where y is some object that has a time representation and a complex-s-plane representation, and I'll use the same name for both. So we take the output y and take its Laplace transform, and take the input u and take its Laplace transform.
And then, since in the simplest case y is a scalar signal and u is one input, y(s) and u(s) are scalar functions and we can take the ratio. That ratio G(s) is called the transfer function, or in physics language the dynamical response function. It's a complex number, because s is a complex variable, so G(s) has a magnitude and a phase. If we substitute s = iω, then, up to the limits of integration (zero to infinity instead of minus infinity to infinity), we have a Fourier transform, and we can think of the frequency response and phase response of the system: the magnitude of G(iω) is the magnitude response, and we can also talk about a phase response. Pictorially this is all very simple: we have an input sine wave at some frequency, and we measure the output, the red curve here. Because it's a linear system, the output has the same frequency as the input, but the amplitude is different, and the ratio of the amplitude of the output to the amplitude of the input is the magnitude response evaluated at that particular ω. And the phase shift between the two, shown here, is the phase response at that frequency. We can apply this, for example, to our second-order system. If what's being measured is the position, then it's ÿ + 2ζẏ + y = u(t). Take the Laplace transform, remembering that every time derivative brings down a power of s (with a Fourier transform it would be an iω, so in that sense the Laplace version is a little cleaner: a power of s for every derivative). So the second derivative becomes s², the first derivative becomes s, and inverting, we get G(s) = 1/(1 + 2ζs + s²).
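As a quick numerical check of the magnitude and phase response (my own sketch; ζ = 0.5 is just an assumed value), one can evaluate G(iω) directly:

```python
import cmath

def G(s, zeta=0.5):
    """Second-order transfer function from the lecture, in scaled units."""
    return 1.0 / (1.0 + 2.0 * zeta * s + s * s)

# Frequency response: substitute s = i * omega.
omega = 1.0                  # the (scaled) resonance frequency
g = G(1j * omega)
print(abs(g))                # magnitude response |G(i omega)|, here 1.0
print(cmath.phase(g))        # phase response, here -pi/2 at resonance
```

At the scaled resonance ω = 1 the denominator is purely imaginary, so the output lags the input by exactly a quarter cycle, which is the −π/2 crossing we'll see on the phase plot shortly.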
Okay, so we've taken the input-output behavior, the dynamics plus the structure of the input plus the structure of the output, and it all gets summarized by this one function G(s). A question from the back: the second term wasn't legible; it's 2ζs, where ζ is just the damping parameter. Another question: between the Fourier and the Laplace transform, which one is preferred for control? Well, as I said, if you think of Fourier and Laplace transforms with real arguments, then you can answer this: there are some advantages, for instance the Laplace transform is sensitive to initial values, so if you have an initial-value problem that can be useful. But if you allow the arguments to be complex in both cases, the convergence properties differ but there isn't that much difference otherwise. So I would say it's largely cultural that physicists often use Fourier transforms and engineers often use Laplace transforms. And yes, when you describe things in the complex s plane versus the complex ω plane, the pictures are rotated by 90 degrees relative to each other, so there's a literal difference in the plots, but it doesn't matter too much. Okay, that question was handled very efficiently. Right, so again, this is all most useful for linear time-invariant systems. Once you have time-varying dynamics or nonlinear terms, this approach is less useful. So it's a starter approach, and historically it was the starter approach, because it was the first one, at least in the West, that was developed. Another question: what about applying the linear theory to a nonlinear system?
So, how useful is it to apply linear theory to a nonlinear system? Again, I think the answer will emerge as we go through the week. It's a little more useful than you might think, and sometimes it doesn't work, is the short answer. There are situations that are sufficiently nonlinear, and we'll talk about an example, I think mostly on Wednesday: I'm introducing the pendulum because I then want to put it on a cart and drive the cart back and forth, which is a more intrinsically nonlinear system where linearization doesn't work. So this pendulum, if you control it with a torque, is actually not so different from a linear system; but it turns out that if you control it by putting it on a cart and pushing the cart back and forth, it is essentially nonlinear. You might think that sounds a little weird, that depending on how you hook up the motor the same system can be either basically linear or essentially nonlinear, but that's what happens. In one case, if you ignore the nonlinearities, you're severely constrained in what you can do, and some things become impossible; whereas if you drive it with a torque, there isn't that subtlety, and basically everything becomes possible with more or less linear control. So it's a subtle thing. [A brief exchange about scheduling followed: whether to take a short break at the one-hour mark, given that the session runs about an hour and 25 minutes.]
I was hoping we'd take the break a little further along, and not exactly a break but a point where people could try a little calculation on their own. Okay, so there are a number of graphical tools for linear systems that are useful to know about. One of them is the Bode plot, which shows the magnitude and phase response: the tradition is to plot the log of the magnitude versus the log of the frequency. For a second-order system, for example, if it's underdamped you get something like this, with a resonant peak; if it's critically damped or overdamped there is no peak. And then you can look at the phase response: again for a second-order system, this goes (on a linear axis here) from zero to −π, a phase lag of π, passing through −π/2 at resonance, again plotted on a log frequency scale. The log scales are there because you want to look at frequencies over many orders of magnitude and responses over many orders of magnitude. Another tool is to look at the transfer function in the complex plane and characterize it by its complex poles. The denominator of G has zeros at certain points s in the complex s plane, and because they're in the denominator they make G blow up, so they're called poles, and we plot them. For example, for an underdamped system we get a pair of complex-conjugate poles, which might look like this. Stable systems have poles with negative real parts, so the poles of stable linear systems all live in the left half of the plane. And if the poles are complex, then because the coefficients of the transfer function are real, you know from the fundamental theorem of algebra that the roots come in complex-conjugate pairs.
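The resonant-peak statement is easy to check numerically; this little sketch (my own, with assumed ζ values) scans log-spaced frequencies just as a Bode plot would:

```python
import numpy as np

def mag(omega, zeta):
    """|G(i w)| for the second-order system G(s) = 1/(1 + 2 zeta s + s^2)."""
    s = 1j * omega
    return np.abs(1.0 / (1.0 + 2.0 * zeta * s + s * s))

w = np.logspace(-2, 2, 2000)        # log-spaced frequencies, Bode-plot style
peak_under = mag(w, 0.1).max()      # underdamped: resonant peak ~ 1/(2 zeta)
peak_crit = mag(w, 1.0).max()       # critically damped: no peak, just roll-off
print(peak_under)                   # ~5: a pronounced resonance
print(peak_crit)                    # <1: magnitude decreases monotonically
```

For ζ = 1 the magnitude is 1/(1 + ω²), monotonically falling, which is why the critically damped Bode plot has no bump.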
If it's overdamped, you'll have two real roots with zero imaginary part, so two points here on the real axis. And you can follow, as a function of ζ for example, the evolution of these poles: they move like this, meet at critical damping, where you have degenerate roots, and then split to form a quickly damped one and a slowly damped one. Again, this is characteristic of overdamped dynamics, and if you've taken an intermediate-level mechanics course you've seen these three cases of damping. A third tool is to plot the transfer function itself in the complex plane. As a function of frequency, for the second-order system: at low frequency the phase is zero, the output is in phase, so we start with a real response here. At intermediate frequencies the imaginary part becomes important, and the curve traces out something like this; at infinite frequency the magnitude goes to zero, so both real and imaginary parts go to the origin. This is called a Nyquist plot (the first one was the pole-zero plot, and this one is the Nyquist plot). Nyquist and Bode were both theorists at Bell Labs, part of AT&T, the American telephone and telegraph company, where a lot of this theory was developed in the 1920s and 30s. You can apply all of this to nth-order systems: then G has a denominator that's an nth-order polynomial, and at very high frequencies the leading behavior is s to the minus n. So the amplitude falls off as ω to the minus n, and each power of 1/s corresponds to a phase lag of π/2.
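The motion of the poles with ζ can be traced explicitly; in this sketch (my own) the three damping regimes fall out of the quadratic formula:

```python
import cmath

def poles(zeta):
    """Roots of s^2 + 2 zeta s + 1 = 0, the poles of the second-order system."""
    d = cmath.sqrt(zeta * zeta - 1.0)
    return (-zeta + d, -zeta - d)

p_under = poles(0.5)   # complex-conjugate pair, real part -zeta, |s| = 1
p_crit = poles(1.0)    # degenerate double root at s = -1 (critical damping)
p_over = poles(2.0)    # two real roots: one fast (~ -3.73), one slow (~ -0.27)
print(p_under)
print(p_over)
```

As ζ grows toward one, the conjugate pair slides along the unit circle to meet at s = −1, then splits along the real axis into the fast and slow overdamped roots, just as described.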
Raising that to the nth power gives a total phase lag of n times π/2: a first-order system has a phase lag of π/2, the second-order one we just saw has a phase lag of π, that is 180 degrees, a third-order one 3π/2, and so forth. In general, the transfer function can also have polynomials in the numerator, from functions of the input as well. Then one speaks of a kth-order numerator, an nth-order denominator, and a relative degree of n minus k, and everything I just said applies to the relative degree, not the full order. If the transfer function is the ratio of two polynomials in this way, it's called a rational transfer function. But not all transfer functions fall into that class; there are irrational ones. The simplest example is a time delay: if you take the Laplace transform of a function delayed by some τ, it's easy to show that you get e^(−sτ) times the Laplace transform of the undelayed function. So the transfer function of a pure delay is just e^(−sτ), which is not a ratio of polynomials. I wanted to get to an example using this. Something that's a little like a delay, but different, and maybe we can work through the details in a moment, is a one-dimensional rod with a heater: here's a one-dimensional rod going off to infinity, we put a heater at one end, and we put a temperature probe some distance x down the rod, so we can monitor the temperature there in response to changing the power into the rod. You can work out, maybe we will in a moment, that the transfer function for this is e^(−√s)/√s in suitable scaled units, and again that's not a ratio of two polynomials.
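Both irrational examples are easy to evaluate numerically. In this sketch (my own; scaled units, with the delay τ and the probe distance both set to one), the delay has unit magnitude at every frequency while its phase lag grows without bound, and the diffusive rod damps high frequencies very strongly:

```python
import cmath

def G_delay(s, tau=1.0):
    """Pure time delay: the irrational transfer function e^(-s tau)."""
    return cmath.exp(-s * tau)

def G_rod(s):
    """Heated semi-infinite rod, scaled units: e^(-sqrt(s)) / sqrt(s)."""
    r = cmath.sqrt(s)
    return cmath.exp(-r) / r

w = 3.0
print(abs(G_delay(1j * w)))          # 1.0: a delay changes phase, not magnitude
print(cmath.phase(G_delay(1j * w)))  # -w * tau (wrapped): lag grows with frequency
print(abs(G_rod(1j * w)))            # well below 1: diffusion smooths fast wiggles
```

No finite ratio of polynomials can have constant unit magnitude with unbounded phase lag, which is one way to see that these systems are genuinely outside the rational class.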
So spatially extended systems, and systems with delays and things like that, still have transfer functions, but they're not rational, and this actually has some physical consequences, as we'll see in a moment. Now, one of the advantages of working in frequency space is that you can take a bunch of simple linear systems and easily build up more complex ones. The basis of this is the convolution theorem, which you might remember for Fourier transforms; it also works for Laplace transforms. The convolution of two functions g and h is the integral f(t) = ∫ g(t′) h(t − t′) dt′, which defines a third function f(t). And the convolution theorem in this context says that the Laplace transform of the convolution is just the product of the Laplace transforms. This is very useful for building up complicated systems, because we've talked about a system having an input and an output, but now imagine two systems where the output of the first is the input to the second, so we can put them together. An example would be sensor dynamics. We talked about our second-order system, but imagine that instead of measuring its state directly (I've changed names here, so let's call its output ν), we measure it with a sensor that itself has some dynamics, which is what happens with a real sensor: nothing can sense anything instantaneously. So we actually have two transfer functions, and it looks like this: we have ν in terms of u, our original second-order system with the transfer function we've already talked about, and then y comes out of the sensor, the first-order system we've also talked about, with its own transfer function.
And because of the convolution theorem, if one feeds into the other, we can go from u to the intermediate variable ν and then to the final variable y, or directly from u to y, just by multiplying the two transfer functions. In the time domain this would be a convolution, but in the frequency domain it's just the product, so the transfer function of the combined system is the product of a first-order one and a second-order one. Very simple. A question: does the convolution contain the causality of the signal transfer? Yes; we won't have time to do it justice, but there's a whole subtle story about causality. For those who know about Kramers-Kronig relations, this is part of that story, and it turns out that in control theory the most useful form of the Kramers-Kronig relations, which relate the real and imaginary parts of response functions, is the polar form: you take the magnitude and the phase, because those are what we plot. The corresponding relation is called the Bode gain-phase relation (Bode, B-O-D-E), and it relates the magnitude and the phase just as Kramers-Kronig relates the real and imaginary parts. The subtle point, and again I'm sorry to compress this, is that whereas the Kramers-Kronig relations are equalities, the Bode relationship is actually an inequality between the magnitude and the phase lag, because you can always take a system and add more delay, but there's a minimum delay that you can't go below. You can always add extra delay: a signal could go through a loop of wire 10,000 kilometers long, and even at the speed of light it just takes a while for things to happen.
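Coming back to the convolution theorem for a moment, here is a numerical sanity check (my own toy example; the two first-order impulse responses and the grid spacing are arbitrary choices). Cascading G1(s) = 1/(s+1) and G2(s) = 1/(s+2) gives the product 1/((s+1)(s+2)), whose inverse transform by partial fractions is e^(−t) − e^(−2t); the direct convolution of the impulse responses should match it:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 5.0, dt)
h1 = np.exp(-t)            # impulse response of G1(s) = 1/(s + 1)
h2 = np.exp(-2.0 * t)      # impulse response of G2(s) = 1/(s + 2)

# Discrete approximation of the convolution integral on the grid.
h12 = np.convolve(h1, h2)[: len(t)] * dt

# Inverse transform of the product, via partial fractions of 1/((s+1)(s+2)).
h_exact = np.exp(-t) - np.exp(-2.0 * t)
print(np.max(np.abs(h12 - h_exact)))   # small: convolution = product of transforms
```

The agreement (up to the quadrature error of the grid) is the convolution theorem in action: multiplying transfer functions in s is the same as chaining the systems in time.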
And that delays the system without necessarily changing anything else: you can always add extra phase delay without affecting the magnitude. But if you don't have that, there's a minimum phase delay implied by the way the magnitude varies with frequency. That's the more useful version of the relation for control, but it's one of the things I unfortunately had to jettison from this short course. A question about the model in the time domain: what must the model class be for this product rule in the Laplace domain to hold? It's linear models with time-invariant coefficients. So, for instance, the ζ here, the damping, must not depend on time. Another question: is the ν in the second transfer function the same ν as here? Yes. Before, I said the internal variable is this x, and we have a y, and if y equals x I just replaced one by the other; but that implicitly assumed a sensor that was infinitely fast, so that whatever x did, y would respond instantly. Now I'm saying, more realistically, that a sensor takes some time, so it's characterized by some time constant; actually I haven't been very careful, usually there'd be an explicit τ here, different from one, once you fix the scaling. Okay, so the frequency-domain discussion is intuitive, but again it's restricted to this class of linear time-invariant systems. Now let's try to do something that uses all of these concepts. Imagine we're going to construct a feedback loop with two systems, one of which is going to be the controller. Forget about the sensor issue for the moment. We have our physical system G(s), which might be a linearized pendulum.
And now we're going to add another system that's going to control it. We do it by taking this output y and forming the difference with a reference signal r that we want to follow, which could be a constant or the Laplace transform of some time-varying signal. That gives us the error in frequency space, and we construct this feedback loop here. What we should try to find is the transfer function connecting r to y when we have this feedback. So the cast of characters: G is the system, K is the controller, y is the output of G, now under control, and u is the internal input: it's the output of the controller but the input to the physical system. r is the reference, and e is the difference r minus y, which is the negative of the error signal; the sign is a convention, and I'll mostly stick with it. Okay, so the problem, and maybe this is something we can take a few minutes to try, is to work out the transfer function from here to here. Maybe you want to try it for five minutes. When you're doing it, remember what the relationships mean: this one here means that u is K times e; y is G times u; and the input here, the error, is r minus y. So put all of that together and just solve for y in terms of r. Could I repeat the relations between the variables? They're just what you see in the diagram: here's K, a dynamical system that serves as the controller; it takes an input e and produces an output u. So u equals K times e.
In other words, in the time domain u is the convolution of k and e, and in the Laplace domain it's the product of the two transforms. That produces the quote-unquote output u, but u is then itself an input, because one person's output is another person's input. So you put them all together, and what you want is to solve for y equals something times r. And yes, in control theory the error is usually defined as r minus y, which is the negative of what we would naturally call the error. The signals flow in this direction: for each of these boxes, G and K, the input is on the left and the output is on the right; the input of the controller is this, and the output of the controller is that. A question about the goal of the exercise: yes, write y in terms of r, in terms of the transfer functions G and K. And yes, this is single-input single-output; in general you could have more than one input and more than one output. So the system is G, and then you have a controller that modifies it: the controller will be something that modifies the dynamics, and we'll talk about that in five minutes, by comparing the output with a reference. But for now this is just algebra. Another question, going back to the plots: yes, they're equivalent, they just show things differently. The Bode plot is just the frequency-response plot: it explicitly plots the magnitude and the phase against frequency. The pole-zero plot plots the positions of the complex poles of G(s).
In the pole-zero plot, the imaginary part of a pole gives the frequency and the real part gives the damping. As for conventions: in physics, some theoretical texts, particularly in particle theory and many-body theory, plot the complex Fourier transform, so it's the complex ω plane, and then the poles sit in the upper or the lower half plane depending on the sign convention. I think usually the stable ones are in the lower half and the unstable ones in the upper half, but it might be the other way around; it just depends on whether you take e^(+iωt) or e^(−iωt) in the forward versus the inverse transform, so it's just a convention. Okay, so how are we doing on the block diagram? Maybe there's a volunteer to present it. [A volunteer works through it at the board.] So you start from the relations and solve for the final expression for y: y equals G times u, u equals K times e, and then we substitute e(s) = r(s) − y(s), and the algebra goes through.
Yes, and among friends, once we've written all of that, you can drop the s's; everything is implicitly a function of s. So y = GK(r − y), which gives y = [GK/(1 + GK)] r. Does that look reasonable to everyone? The y comes up in two places, and that's why you get a nontrivial denominator. We can give this a new name and call it T(s), the closed-loop transfer function; it has an official name in control theory, the complementary sensitivity function, but that's probably not so useful right now. It tells you how to go from the reference to the output when you have control. The thing to contrast is that by putting in the feedback loop, we now have this dynamics T(s) connecting r and y, whereas if we didn't have the loop and just applied r directly to the system, we'd have y = G r. So you can think of adding a feedback loop as changing the dynamics from this G(s) to some other dynamics, which hopefully is nicer in some way. One thing to notice: G is your system and K is your controller, so G is set by the system you have, but K you have some freedom to choose. And if K is some constant, or has an overall constant, notice that if K gets very large, so that GK is large compared to one, then T goes to one. And if T were one, that would be nice, because then y would just be r, and r is in some sense what you want y to be: that's the reference you would like your system to follow.
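The large-gain limit is easy to see numerically; in this sketch (my own; the sample value of G at a test frequency is made up), T = GK/(1 + GK) marches toward one as the gain grows:

```python
def T(G, K):
    """Closed-loop gain GK / (1 + GK), evaluated at a single complex frequency."""
    return G * K / (1.0 + G * K)

g = 0.3 - 0.4j                             # G(s) at some test frequency (made up)
errs = [abs(T(g, K) - 1.0) for K in (1.0, 10.0, 1000.0)]
print(errs)                                # shrinks toward zero as K grows
```

Since T − 1 = −1/(1 + GK), the distance from one falls off roughly as 1/(|G| K), which is exactly the "crank up the gain" intuition in the lecture.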
So in some sense, it looks like we've solved our problem by letting K become very large. Okay? So let's see how that goes. If K is just a constant, that's a strategy called proportional control, and it's called proportional control because the output U of the feedback controller is just proportional to the error E. Now, remember that if we defined the error the more natural way, as Y minus R instead of R minus Y, there would be a negative sign here. So this is negative feedback; but because negative feedback is what stabilizes a system, it's the standard case, and that's why the error is often defined backwards, so that you don't have negative signs everywhere. But what you should really think is that the feedback acts in the opposite direction from the error. Yes? Question: what does this Y minus R correspond to in real space? So it's the difference of the output from the reference. A difference is the same in either domain: if you take the inverse Laplace transform, you just get back y(t) minus r(t). It's products that get trickier, because a product in frequency space becomes a convolution in time. So let's look at this for a first-order system, the RC filter circuit that I talked about, for example. What does T of s look like? Well, it's GK over one plus GK; if we divide through by GK, it's one over one plus (GK) inverse. G inverse is just one plus s, and K is just the proportional gain Kp. So T is one over one plus (one plus s) over Kp. And if I rearrange terms, I've got a constant here and an s term here. So this is also a first-order linear system: a new system that still has first-order dynamics.
Now, at low frequencies, which is the long-time limit s = 0, instead of going to zero we just have Kp over Kp plus one. And indeed, if you were to set Kp to infinity, this would just be one, and then again we've got a transfer function T equal to one in the long-time limit. I plotted that here: this is y infinity as a function of Kp, and as Kp gets large it approaches one, so the output follows the input. And in the dynamics, the s term comes in as s over Kp, which corresponds to speeding up the dynamics by a factor of Kp. Remember we've scaled everything by some time constant tau, so the new dynamics has a time constant that is roughly tau divided by Kp. So it's faster. So we have a system that responds faster and goes more and more towards the value that we command. So again, setting Kp to some very large number seems like it would solve our problem: our first-order system behaves like a much faster first-order system that follows the command signal much more closely. Sorry? Question: how do we interpret this graph; what does it mean for the output to be one or zero? So let's go back here: Y is T of s times R. And we're looking at what T is. If T is one, then Y is just following R: Y(s) equals R(s), and in the time domain y(t) equals r(t). So that's a perfect controller, because whatever you command at the input appears at the output. So think of this room and its temperature; that's the Y. The R is what you want the temperature of the room to be. Now, the temperature of the room is a little more complicated than a first-order system.
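As a sanity check on these numbers (a minimal sketch; the Euler step and the gain values are my own illustrative choices), simulating the scaled first-order system with proportional feedback should settle at Kp/(Kp + 1):

```python
def simulate_P(Kp, r=1.0, dt=1e-3, t_end=5.0):
    """Euler-integrate the scaled first-order system ydot = -y + u
    with proportional feedback u = Kp * (r - y)."""
    y = 0.0
    for _ in range(int(t_end / dt)):
        u = Kp * (r - y)          # proportional control on the error
        y += dt * (-y + u)
    return y

y_inf = simulate_P(Kp=9.0)        # predicted steady state: 9/10 = 0.9
```

Raising Kp pushes the steady state toward one and shrinks the closed-loop time constant toward tau/Kp, which is exactly the "large gain looks perfect" behavior described above.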
But to the extent that it acts like a first-order system, we would say, well, I want the temperature of the room to do this over the day, so you would put that into your r(t), and then y(t) would instantly follow it. Of course, we know that doesn't quite happen, but we'll get to that in a moment. So, taking Kp to infinity or very large sounds like a perfect solution, but from the way I'm setting things up, you might suspect that it doesn't always work out so simply; we'll come to that in a moment. But if you set Kp at a finite value, then T is always less than one. You can think about this intuitively in the temperature example I just gave. If the control is really proportional to the error, the difference between the set point and the actual temperature, then perfect control would mean the error is zero. But if the error is zero, the heater is off, and if the heater is off, the room cools down. The heater has to be on to keep the room at temperature, but with pure proportional control and zero error, it can't be on. So the system settles, or can settle, at an equilibrium just below the set point, where the error multiplied by Kp is enough to keep the heater at the right level. That's called proportional droop. And the larger Kp is, the less of it there is, because a small error multiplied by a large number gives a moderate heater output. So at any finite value of gain, the actual temperature will be a little bit low, and that's a defect of proportional control. One strategy that can get around it is called integral control. The idea is that instead of making the feedback proportional to the error, you make it proportional to the integral of the error.
You can think of this as sort of saying: maybe if I looked at the history of the system, I could determine empirically how much heater output is needed to keep it at the right temperature, and use that accumulated history as the signal. Mathematically, take our first-order system, and I'll do it in the time domain here: y-dot equals minus y, and then we add something proportional to the integral of the error, where the error again is r, let's say a constant, minus y(t). The thing to notice is that now the stationary solution, where the derivative is zero, has y exactly equal to r. To analyze this in frequency space using our transfer functions: the integral corresponds to one over s, the inverse of a derivative, and it acts on the error signal, so the controller has transfer function Ki over s. If we put in our first-order system and follow the same recipe, then in T, instead of a constant in the denominator, we have s times (one plus s) over Ki, and now this is a second-order system. When s equals zero, the DC response indeed goes to one, so the stationary solution is exactly what we want, but at the cost of turning a first-order system into a second-order system. And you'll notice that as you increase the gain, which I call Ki for integral, it's like decreasing the damping, and the response will start to oscillate: instead of just relaxing, it will wander around like that. So the feedback has turned a first-order system into a second-order system in closed loop, and here that's not such a good thing. On the other hand, it has the advantage of going to the solution you really want, without an offset.
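The trade-off, exact tracking at the price of second-order, possibly oscillatory dynamics, shows up directly if you simulate the integral-control equations above (again a sketch; Ki = 4 is just an illustrative choice that makes the closed loop underdamped):

```python
import numpy as np

def simulate_I(Ki, r=1.0, dt=1e-3, t_end=30.0):
    """ydot = -y + Ki*z, zdot = r - y: the closed loop is second order."""
    y, z = 0.0, 0.0
    ys = []
    for _ in range(int(t_end / dt)):
        z += dt * (r - y)          # z accumulates the integral of the error
        y += dt * (-y + Ki * z)
        ys.append(y)
    return np.array(ys)

ys = simulate_I(Ki=4.0)            # overshoots (low damping) but ends at r exactly
```

The overshoot is the "decreased damping" from raising Ki; the exact final value is the no-droop property of integral control.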
So this property of integral control, driving the solution to a desired value, pops up famously in chemotaxis, where Naama Barkai and Stan Leibler, around 1997, analyzed what makes the adaptation of the chemotaxis response to chemical gradients robust, and it turns out to involve integral control. But to get back to where we were: let's try to combine the advantages of proportional and integral control by having a transfer function that does both, so we'll call this PI control. Now the controller transfer function is K(s) equals a constant Kp plus Ki over s. If you compute the closed-loop transfer function, the Bode plot of the controller now looks like this: proportional control alone would be flat, integral control is one over s, and this has both. Now you have two parameters to play with. You can still have T equal to one at long times, but now Kp appears in the damping term, so you can use it to get rid of the oscillatory response. The problem I mentioned, that turning the first-order system into a second-order system makes it oscillate, can go away. And you can keep going and add a third term, proportional to the derivative of the error. That gives you PID control, which is the standard industrial controller: if you're an experimentalist and you have a variable to control, this is probably what you'll be using. The derivative term anticipates: you see the error changing, so you can infer that there's been a big perturbation and get the jump on it, starting to correct before the error itself gets large. The problem is that if you also have noise, sometimes that jump in the error is just a noisy jump.
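A textbook discrete-time PID loop on the same first-order plant might look like this (a sketch with hypothetical gains; a real implementation would low-pass filter the derivative term, for exactly the noise reason just mentioned):

```python
def simulate_PID(Kp, Ki, Kd, r=1.0, dt=1e-3, t_end=20.0):
    y, integ = 0.0, 0.0
    e_prev = r - y                     # so the first derivative estimate is zero
    for _ in range(int(t_end / dt)):
        e = r - y
        integ += dt * e                # integral term: removes proportional droop
        deriv = (e - e_prev) / dt      # derivative term: anticipates, amplifies noise
        u = Kp * e + Ki * integ + Kd * deriv
        y += dt * (-y + u)             # plant: ydot = -y + u
        e_prev = e
    return y

y_pid = simulate_PID(Kp=2.0, Ki=1.0, Kd=0.1)   # settles at the reference, no droop
```

Setting Ki = Kd = 0 recovers pure proportional control, complete with its droop: the output then settles at Kp/(Kp + 1) instead of the reference.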
And so that's a problem we'll have to deal with later; but if you don't have too much noise, it's a good strategy. So PID control is simple, but now we have three parameters: proportional gain, integral gain, and derivative gain. And one of the things you have to do is choose the values. PID control is simple enough that you can more or less do this intuitively, and there are heuristic tuning rules. But when you get to more complicated kinds of controllers, you have more parameters to tune and you need something more systematic. One method is called optimal control, and I hope on Wednesday we'll get to that. I'll leave you with just a sketch, and I guess we're going to post notes as well. So, I said that just having a proportional controller with infinite gain seemed to be the solution to our problems. In fact, what happens in a physical system is that if you make the proportional gain too high, it almost always starts to oscillate spontaneously. To understand where this comes from, let's go back to our control loop, but add a sensor. The sensor here has no dynamical properties except for adding a delay. The system and the controller are some physical distance apart; there's at least the speed-of-light delay, and other things add delays as well. If you analyze this feedback loop, the closed-loop transfer function turns out to be KG over one plus KGH: there's an extra H in the loop part here. And if you ask what would make this unstable: well, if KGH ever equals minus one, then the denominator is one plus minus one, which is zero, and T blows up. This linear response blowing up corresponds to an unstable system.
And if you go through and analyze this system, and I have the analysis here, the instability happens at a finite value of the gain. The maximum gain, when you work it out, ends up being set by the ratio of the system time constant, how fast the system responds, to the delay. That ratio limits the gain you can apply: at higher gain, the system goes unstable. If the delay were zero, then yes, you'd be allowed infinite gain, but the speed of light is finite, so you never have zero delay. More realistically, you can look at the one-dimensional rod that I talked about, with its funky transfer function, e to the minus square root of s over square root of s, and use that as the system. This is an extended system. It doesn't literally have delays, but temperature fluctuations take time to diffuse from one point to another, and the effective lag grows with frequency. This also leads to an instability. So you don't have to have a rigid mathematical delay: any extended physical system behaves like that, and true physical systems at high frequencies are always extended. For your temperature rod, at very low frequencies you could describe the temperatures with something like capacitors, a thermal circuit analogous to an electrical circuit, but at high frequencies it doesn't work that way. Same thing for electrical circuits: once the frequency reaches the scale set by the dimensions divided by the speed of light, the propagation time becomes important, and you have to think about delays from one component to another. So there are always things that act like delays in physical systems, and so there's always a limit to how much gain you can apply in this kind of feedback loop.
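Both effects can be seen numerically (a minimal sketch; the delay theta = 1, the gains, and the scaled units are my own illustrative choices). For the first-order system with a pure delay theta in the loop, the critical proportional gain works out to about 2.26 when tau = theta = 1; and for the rod, evaluating G(s) = e^(-sqrt(s))/sqrt(s) at s = i*omega gives a phase lag of sqrt(omega/2) + pi/4, which grows without bound, acting like a frequency-dependent delay:

```python
import numpy as np

def simulate_delayed_P(Kp, theta=1.0, r=1.0, dt=1e-3, t_end=30.0):
    """Euler simulation of ydot(t) = -y(t) + Kp*(r - y(t - theta)):
    a first-order plant whose controller only sees delayed output."""
    n, nd = int(t_end / dt), int(theta / dt)
    y = np.zeros(n + 1)
    for i in range(n):
        y_del = y[i - nd] if i >= nd else 0.0   # controller acts on old data
        y[i + 1] = y[i] + dt * (-y[i] + Kp * (r - y_del))
    return y

y_lo = simulate_delayed_P(Kp=1.0)   # below the critical gain: rings down, settles
y_hi = simulate_delayed_P(Kp=5.0)   # above it: the oscillation grows without bound

def G_rod(omega):
    # transfer function of the 1-D diffusing rod: exp(-sqrt(s))/sqrt(s), s = i*omega
    s = 1j * np.asarray(omega, dtype=complex)
    return np.exp(-np.sqrt(s)) / np.sqrt(s)

phase = np.angle(G_rod([0.1, 1.0, 10.0]))   # ever larger phase lag with frequency
```

The growing phase lag is what makes the extended rod destabilize a high-gain loop in much the same way a rigid delay would.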
So that motivates being a little more sophisticated, and going from proportional control to PID control is a first step in sophistication, but it's by no means the last; we'll keep going from there. I guess this is pretty much it for today. Tomorrow we'll still be with linear systems, but we'll go back to the time domain and talk about it in a little more detail, and that will be our window to generalizing things in a nicer way. Okay, any last questions before we break? Question: in the lab, isn't the delay mostly in the controller, in the electronics, so that the control arrives too late at the system? No, I mean, that can certainly be a source of delay too. But what I was trying to say with this example of the rod, and I think there's enough in the notes that you can work it out; it's one of the problems worked out in the book, is that the physical system itself has delays. If it's spatially extended, so that the input and the output are at different spatial locations, then the signal takes some time to propagate. It's not exactly a rigid delay like the one I was talking about earlier, but it's effectively like that: the higher the frequency, the more phase shift there is in getting from one point to the other. So even within the physical system you're trying to control there are delays, and of course any other delays in your controller just make it worse. Question: how do you handle that in the system model? Yeah, so all of these...
All of these non-rational transfer functions, the ones that are not ratios of polynomials, are described by partial differential equations. It could be a delay differential equation, but you can convert a delay differential equation into a partial differential equation. And a partial differential equation can be linear and have a transfer function between two points. So in frequency space it's not a problem: the transfer function here, for example, is e to the minus square root of s over square root of s, and all you need to do is find the Laplace transform connecting input and output, and then you can use the same formulas. Again, everything is linear and time-invariant; once you leave that setting, it's not so easy. Question: a slightly technical question, because I know you do research on Maxwell demons and these kinds of things. You have feedback: you have to measure the position of the particle in the trap and then use that information to move, for example, the center of the trap. So I assume you also have a delay in measuring the position and transferring that information to the trap. How does that affect the experiment: only through the gain you can achieve, or in other ways? I mean, that's one effect: if your goal were to control the particle, the delay would limit you there. This is the stuff we'll be talking about on Thursday and Friday, but it affects things by limiting you: if there's a delay between the information you acquire and the action you take, then you're acting on old information, and so there's more uncertainty. And so we need to introduce stochastic effects into the equations to describe it; that will happen starting Thursday. So, having a long delay...
It's like saying we have more noise, more uncertainty. I mean, the noise is always there, but you're less sure about what the measurement means. Yeah. So in the language that I started with, the system is in some sense running open loop: it's not being controlled during the delay. So it will behave as the open-loop system does, and if there's some uncertainty described by some distribution, then that distribution will broaden. Okay. For the sake of time, we have to stop here, take a break, and then we continue with the next lecture. Let's break. Thank you.