Okay, so far — for about six to seven weeks of lectures — we have been looking at modeling of dynamical systems. We know how to develop dynamical models starting from physics: linearizing them and arriving at a control-relevant model. Alternatively, we know how to start entirely from data and arrive at a control-relevant model. So we are now at a point where we have a model that can be used for controller synthesis, and we want to embark upon design of the controller. As I said, given a good model, controller design is a relatively easy task.

To begin with, I am going to give one lecture on the concept of stability of discrete dynamical systems, because this is at the heart of design. These concepts are relatively easy to understand for linear dynamical systems, so we will restrict ourselves mostly to those. I will touch a little upon Lyapunov stability, but nonlinear systems are not going to be my focus; the focus is stability analysis of discrete-time linear dynamical systems.

At the end of modeling — whether we modeled using first principles or using data — we have this model. Right now the stochastic component of the model is missing: I have stated the model with respect to known inputs only, so do not bother about unknown inputs for the moment. Here x are the states and y are the measured outputs. If this model comes from discretization of a dynamic ODE model of a physical system, then x has physical meaning. If it is constructed as a realization of a time-series model, then x may or may not have physical meaning — you may not be able to associate any physics with x. Nevertheless, this generic form captures the dynamics in the neighborhood of a state. Remember that this is a perturbation model, not a global model; it is control-relevant and valid only over a small range. What that small region is, is very difficult to quantify. When you do perturbation studies — when you inject perturbations — it is easier: you know the size of the perturbations, and you can say that at least over the range they cover the model is good enough. But when you linearize through a Taylor series, it is difficult to come up with that range.

We know how to get transfer functions for this model; the transfer function could be in terms of q-transforms or z-transforms — both forms look very similar, and we already know about this. For now I am going to look at single-input single-output systems. Some of the results I am going to talk about are valid for multiple-input multiple-output systems, but some are restricted to SISO. So this is where we are: we know how to reach this point starting from physics or starting from data.

There are two things I should bother about: the zeros of the transfer function — the roots of the numerator polynomial — and the roots of the denominator polynomial. In fact, the roots of the denominator polynomial are the main thing for controller design. That does not mean the roots of the numerator are unimportant — they are equally important — but the prime concern, many times, is the roots of the denominator. In control terminology these roots are called zeros and poles, and in this lecture we are going to worry more about the poles than about the zeros.

Now, stability is a very, very deep concept. As I have been saying about the stochastic processes introduced in the last few lectures, which matured over two centuries, the same is true of stability analysis. Stability of dynamical systems has been studied since Newton, and what we now study in the classroom probably took two hundred years to develop, so it is not an easy concept to digest.

I am going to introduce two kinds of stability: unforced stability and forced stability. What is unforced stability? Let me go back and review some concepts. Is this a forced or an unforced system? It is a forced system — the forcing function is u. Under certain situations, particularly when you are designing a controller, the forced system gets transformed into an unforced system. What would the equivalent unforced system be here? If u(k) = 0 for all time, then you get x(k+1) = Φ(x(k)): there is no forcing function in that case. So we want to look at two stability concepts. Unforced stability talks about a system in which there is no forcing function — or in which, by some manipulation such as putting in a controller, the forcing function no longer exists, so it becomes an unforced system. The other concept is input-output stability, which is forced stability. These two concepts are related but not equivalent; there are certain differences, and as control engineers we have to be aware of them.

So why do I need to look at unforced stability? You might ask this question. You can control a system only if there is a forcing function; if you cannot inject any input, there is no way of controlling the system — it exists on its own. The solar system is such a dynamical system: we as human beings cannot, at least so far, inject any input from outside to perturb it. It works on its own; there is nothing you can do about it. You can observe it and write a model, but you cannot manipulate anything. In a typical feedback control system, however, we put in a feedback control law. Let us look at a very simple one: a proportional controller, which all of you know from a first course in control. The input u(k) is proportional to set point minus measurement — times some constant in the scalar case, or, generalizing to the vector case, times some matrix.
So what I have done here is write the control law u(k) = G(r(k) − y(k)), where G is a matrix, r(k) is my set point, and y(k) is the measurement coming from the plant. Actually, let me put the internal form here: y(k) = C x(k) is some function of x(k), so I substitute that. Now I rewrite the system: I substitute this formula for u into my original dynamic equation and get what I will call the closed-loop equation, because u has disappeared — u is a function of r − y, and y is a function of x.

Now consider the scenario where r = 0: the set point is zero, which is perfectly fine — I want to control the system at the origin, at the steady state where we have linearized. If x = 0 then Cx = 0. This is the typical regulation problem in any system. In a process system you have a reactor, or a level in a tank, that you want to control at zero — and zero means zero perturbation: you have defined some operating point, and the level should stay at the desired point. So the moment I introduce this control law and set r = 0, I get an unforced system. You might remember this, a little painfully, from the last problem in the mid-sem examination — a little manipulation gives exactly the same problem. And we have already talked about stability of these kinds of systems: when the eigenvalues of Φ − ΓGC are inside the unit circle, they are asymptotically stable. So this is an unforced system, and we know about its stability.

There is one more example of why we want to look at unforced stability. One case is the closed-loop system, where I get a dynamic system which is unforced, and I am worried not only about its stability but also about its performance. If you go back here, the eigenvalues of this matrix govern the dynamics, and what I have introduced as a control engineer is the matrix G — G is my choice. So I can move the eigenvalues of this matrix around by appropriately choosing G. That is, in fact, the controller design problem: I choose G in such a way that the poles of this matrix are at desired locations.

The other situation is a system — let us take a single-input system — where the input is always a sinusoidal function: say a circuit permanently subjected to an AC current. I cannot manipulate it; the input is always this AC current, and I want to study the dynamics of this particular system. In this case my dynamical system becomes like this: this part here is fixed. Again this is effectively an unforced system, because this component is not under my control; the input simply enters the system. So, more generally, I am worried about unforced dynamical systems in which the right-hand side is a function of x(k) and, in general, of time, k being the time index. In the previous example, k did not appear explicitly in the equation;
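The substitution just described — u(k) = G(r(k) − y(k)) with r = 0 turning the forced system into the unforced closed loop x(k+1) = (Φ − ΓGC)x(k) — can be checked numerically. A sketch with invented matrices (not from the lecture), showing an open-loop-unstable system stabilized by a proportional gain:

```python
import numpy as np

# Hypothetical open-loop discrete system x[k+1] = Phi x[k] + Gamma u[k], y[k] = C x[k]
Phi = np.array([[1.2, 0.1],
                [0.0, 0.8]])   # open loop: eigenvalue 1.2 lies outside the unit circle
Gamma = np.array([[1.0],
                  [0.5]])
C = np.array([[1.0, 0.0]])

# Proportional feedback u[k] = G (r[k] - y[k]); with r = 0 the closed loop becomes
# the unforced system x[k+1] = (Phi - Gamma G C) x[k].
G = np.array([[0.5]])
Phi_cl = Phi - Gamma @ G @ C

rho_ol = max(abs(np.linalg.eigvals(Phi)))      # spectral radius, open loop
rho_cl = max(abs(np.linalg.eigvals(Phi_cl)))   # spectral radius, closed loop
print("open-loop spectral radius :", rho_ol)   # > 1: unstable
print("closed-loop spectral radius:", rho_cl)  # < 1: asymptotically stable
```

Choosing G to place the closed-loop eigenvalues is exactly the design problem the lecture describes: here the single gain 0.5 pulls both eigenvalues of Φ − ΓGC inside the unit circle.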
in this particular example, k appears explicitly in the equation, so it is a time-varying unforced system. The general problem we would worry about is therefore the time-varying case — but, at least for this course, I am not going to look at time-varying unforced systems; that is more complex. I am going to introduce the concepts using unforced systems in which time does not appear explicitly. The other cases of course need to be studied for real problems, but since this is a first course on advanced control we can excuse ourselves and stick to the simple ideas.

So this is the general problem: I have a dynamical system x(k+1) = f(x(k)), evolving according to this nonlinear difference equation, where f is some nonlinear function and x0 is an initial condition away from the equilibrium point. For a perturbation model — which could be linear or nonlinear — we could assume the equilibrium is at zero, but let me allow a non-zero equilibrium point. What is a steady state of this system? When do you say the system is at steady state? When x(k+1) is the same as x(k); I have written this as x̄ = f(x̄). So x̄ is my steady-state operating condition.

Now this is where you are going to blink and not be comfortable. Before we look at the definition of stability, let us take a famous example which is often used to teach it — I will also take a chemical engineering example later. The simplest example is a pendulum. A pendulum can hang and swing in the usual way, but there is one more configuration: the inverted pendulum.

The inverted pendulum can be looked upon as an idealization of a rocket at launch: before take-off the rocket is balanced on a platform; it should not fall while taking off, it should remain vertical and then go into space. If you have seen those clips, you will notice that before take-off the rocket is clamped using supports — you do not allow it to move — but once it launches you must remove the supports. Balancing that rocket on the platform is the same thing as balancing a stick on your hand; it is no different. There is an equilibrium point where the rocket stands exactly vertical, provided the vector from the center of mass passes through the base; if that does not happen, the rocket might fall, and might hit some other country rather than going into space. So it is a difficult problem — but it is a valid configuration, we need it, and we want to control it.

The other configuration is the ordinary hanging pendulum. If I displace it, it comes back after some time. With a perfect pendulum — no dissipation of energy — it will keep oscillating. With a real pendulum, where energy dissipates, you displace it, leave it, and after some time the oscillations die down and it comes back to the steady state: vertically down.

So if I look at x̄ = f(x̄) for the pendulum system, physically there are two possible solutions. Theoretically there are infinitely many, because if this is a steady state, then rotating once more gives a steady state again, and rotating twice as well — so mathematically there are many steady states, but physically there are two: one up and one down. Now, how do I mathematically quantify the idea that if I displace the pendulum a little, it comes back to its original steady state?

What happens if there is no control? If this pen is on my hand and I am allowed to move my hand, I am using a degree of freedom — a manipulation — to control the system. But if I am not allowed to, there is one perfect steady state: if the system is exactly at that steady state it will remain there, but a slight perturbation makes it topple. I want to quantify this mathematically.

An example from chemical engineering — there are examples from other domains too — is a reactor with multiple steady states. Typically a reactor has multiple operating conditions: two of them stable — a low-temperature one where you get hardly any product, and a high-temperature one where you get a good amount of product but the temperature is too high to operate at — and a middle one where you get a reasonable amount of product but which is an unstable steady state. For such a reactor, if you plot heat generated and heat removed against temperature, the heat generated QG is an S-shaped curve, while heat removal — through a cooling jacket — is proportional to the difference between the temperature inside the reactor and the cooling-water temperature. So if you plot T versus QG together with the removal line, you get three steady states: one at the lower temperature, another at the higher temperature — both stable — and the middle one, which is an unstable steady state. And suppose you want to control the system at this middle point. Controlling the reactor there is the same thing as balancing the stick — the inverted pendulum — no difference. A slight movement will take the system either to the lower or to the upper steady state, and if it is uncontrolled you do not know where it will go. So to make the system stay at this point you have to apply a controller, because this point is an unstable point.

Now, how do we quantify this concept? We are going to use a definition which, if you look at it carefully, is a modified definition of continuity of a function. What I am saying is this — think of it as a game. Suppose I am looking at the pendulum and I specify some region in the neighborhood of the steady state: a region to which the motion of the pendulum must be restricted. Given this region, you have to tell me a set of initial conditions for which the system dynamics will remain bounded inside it. Suppose you come up with such a set; then I say I am not happy, I shrink the region — can you give me another set of initial conditions for which the dynamics remain inside the smaller region? You come up with one more set of initial conditions for the pendulum.
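The S-shaped heat-generation curve crossed by a straight heat-removal line can be sketched numerically. The sigmoid and line parameters below are invented purely to reproduce the qualitative picture — three intersections — and are not taken from any real reactor:

```python
import numpy as np

# Van-Heerden-style picture (illustrative parameters only): S-shaped heat
# generation vs. straight-line heat removal. Steady states are the
# intersections Q_gen(T) = Q_rem(T).
def q_gen(T):
    return 100.0 / (1.0 + np.exp(-(T - 350.0) / 10.0))   # sigmoid in T

def q_rem(T):
    return 2.0 * (T - 325.0)                             # proportional to (T - Tc)

T = np.linspace(300.0, 420.0, 2400)
resid = q_gen(T) - q_rem(T)

# A steady state lies wherever the residual changes sign on the grid.
cross = np.where(resid[:-1] * resid[1:] < 0)[0]
steady_states = 0.5 * (T[cross] + T[cross + 1])
print("steady states near T =", steady_states)   # three intersections
```

The outer two intersections are the stable low- and high-temperature operating points; the middle one (near T = 350 with these made-up numbers) is the unstable steady state you would need a controller to hold.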
I keep shrinking the region. If, for whatever region I specify, you are able to come up with a set of initial conditions for which the dynamics remain within it, then we say the system is locally stable. What I am saying is: for every R > 0 — where ‖x(k) − x̄‖ measures the evolution of the system away from the steady state x̄, and R is the bound I give you — you must be able to find a set of initial conditions ‖x0 − x̄‖ < r such that, for any initial condition in that set, the motion remains bounded within the region of size R.

Let us go back to the picture and see whether we can understand it there. Suppose these are the coordinates x1 and x2, and initially I tell you that this is the region in which all the motion of the system must remain contained — I have given you some R. Can you find a set of initial conditions, say this smaller set, such that if the motion starts from any point inside it, the motion remains bounded in the given region? If I happen to change R — if I say my R is this much instead, or if I shrink it so that my R is very small — you should still be able to come up with a set of initial conditions such that motion starting within that set remains bounded within the corresponding region. That is what I am asking: is this possible for every R?

Now why is "every R" so important? If it happens for just one R, is that good? It is not. For example, suppose I define my R so that it spans this entire region. For this reactor system, if the reactor is here it will go either to this steady state or to that one; if my R spans both of them, then you will say the system remains bounded and hence is stable — because for a very large region there certainly are sets of initial conditions for which the system remains inside it. But if I start shrinking the region, there is a problem: you can no longer find a set of initial conditions for which the system stays inside that boundary. So this is what the definition actually quantifies: given any region R, can I find a set of initial conditions for which the system dynamics remain bounded in that region? If that can be done, we call the system stable.
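In symbols, the game just described is the usual Lyapunov-style definition, written here for an equilibrium x̄ of x(k+1) = f(x(k)):

```latex
\[
\bar{x} \text{ is stable} \;\Longleftrightarrow\;
\forall R > 0 \;\; \exists\, r(R) > 0 \ \text{such that}\
\|x_0 - \bar{x}\| < r \;\Longrightarrow\; \|x_k - \bar{x}\| < R \quad \forall\, k \ge 0 .
\]
```

Read R as the region the "adversary" specifies and r as the ball of admissible initial conditions you must produce in response; asymptotic stability, discussed below, additionally requires ‖x(k) − x̄‖ → 0 as k → ∞.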
This is also sometimes called Lyapunov stability, and this is its picturization: given any bounded region, can I find a set of initial conditions for which the motion remains bounded within that region? In fact, when you start looking at stability you are tempted to think the other way around: given an initial condition, can I find a region in which the motion remains? That is not the correct way of thinking — it is, in fact, a mistake that was made historically when people first studied stability. There might well be one region in which the motion remains; that is not what matters. The question is: if I perturb the system a little, will it go outside? If I define a region which also covers this other steady state, then you will say the system is stable — but actually the system is not stable.
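The pendulum intuition can be checked numerically. Here is a minimal sketch, with invented parameters and a crude explicit-Euler integration: a small push away from the hanging equilibrium decays back, while a small push away from the inverted equilibrium leaves its neighborhood.

```python
import math

# Damped pendulum, theta'' = -(g/L) sin(theta) - c * theta'  (parameters invented)
g_over_L, c, dt = 9.81, 0.5, 1e-3

def simulate(theta0, steps=20000):
    theta, omega = theta0, 0.0
    for _ in range(steps):   # explicit Euler with a small step
        alpha = -g_over_L * math.sin(theta) - c * omega
        theta, omega = theta + dt * omega, omega + dt * alpha
    return theta

final_down = simulate(0.3)            # perturbed from the hanging equilibrium (theta = 0)
final_up = simulate(math.pi - 0.01)   # perturbed from the inverted equilibrium (theta = pi)

print("perturbed from theta = 0 , ends near", final_down)  # oscillations die out, back near 0
print("perturbed from theta = pi, ends near", final_up)    # topples: far from pi
```

Both equilibria satisfy x̄ = f(x̄), yet only the hanging one passes the "every region R" test: near θ = π, any nonzero perturbation eventually leaves any small region you draw around it.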
So the definition does not work the other way around. The only way it works is for every small region: I am not interested in a region which covers both this steady state and that one — I am sitting here, and I give a small region around this point. Now can you find an initial condition for which the system stays inside? For the unstable point you cannot, except at the steady state itself: the moment you move even a little, the pendulum falls and leaves the region. That is very, very important. Think about it and look at more examples — this is not something that goes down easily.

We have another concept: asymptotic stability. Asymptotic stability requires not only that the dynamics remain bounded within a region, but that the trajectory goes to the steady state as k goes to infinity. Suppose a trajectory starting from x0 not only remains bounded but finally converges to the steady state — that is what happens for a pendulum: wherever you perturb it, it oscillates and comes back to the steady state. So a regular, non-inverted pendulum with damping is an asymptotically stable system. But a regular pendulum with no damping is not asymptotically stable: it keeps oscillating, and the oscillations neither grow nor die down. Such systems are marginally stable. If the oscillations eventually die down as k goes to infinity, the system is asymptotically stable.

Let me revise that statement: strictly, we cannot say a *system* is asymptotically stable; we should speak of the equilibrium point. We misuse this terminology for linear systems because for linear systems the stability concepts are global: the system is either globally asymptotically stable, globally marginally stable, or globally unstable. For real nonlinear systems you have to talk about local stability in the neighborhood of a point x̄ — that is what makes sense.

One more thing: because we are introduced to stability in control courses through linear dynamical systems, we start thinking that "unstable" means something going to infinity. That is not what it means. The pendulum dynamics do not go to infinity just because there is an unstable point. Trajectories going to infinity is a feature of the ideal world of linear systems; in reality, instability of a point only means that no trajectory tends to stay there — the system has a tendency to move away from that point. So, in the context of nonlinear, real-world systems, stability is a local concept, and we talk about an equilibrium point being stable or unstable. That makes more sense.

On asymptotic stability of an equilibrium, we know something already for linear systems: a linear dynamic system is asymptotically stable when the spectral radius is strictly less than one. If it is equal to one — one eigenvalue equal to one, say, or a pair of eigenvalues on the unit circle (the imaginary axis, in continuous time) — it is marginally stable. So we know this spectral-radius result, and we can apply it in both contexts you are now aware of: the open-loop system, with u = 0, and the closed-loop system, where u is a function of y, which is in turn a function of x.

Of course, my controller design problem is to choose G such that the closed-loop spectral radius is less than one — in fact I might decide I want it to be, say, 0.1, so that by specifying the poles I also specify the rate at which the error decays and the system returns to the steady state.

Now there is a problem: this result, which is very powerful for linear systems, cannot be carried over to nonlinear systems as it is. You can use it for nonlinear systems through local linearization — we have done local linearization, starting from the Taylor series of the nonlinear model. The golden question is: does analysis of the local linear model also hold for the global nonlinear model? It is a relevant question, because ultimately you base your design decisions on the local linear model, but the true system is nonlinear. The results listed here are for an ideal linear system — an ideal linear world which we applied mathematicians have created; it does not exist anywhere except in our imagination and in textbooks. There is no perfect linear system anywhere in the world; it is always an approximation. We also know that the linear system is marginally stable if ρ equals 1 and unstable if ρ is greater than 1. The trouble is that analysis based on local linearization can be used in only two cases: when the linearized system is locally asymptotically stable, or when it is locally unstable. If the locally linearized system is asymptotically stable, then from the behavior of the linearization
you can make a comment about the nonlinear system's local behavior. If the linearized system is locally unstable, then the nonlinear system is also locally unstable. But if the local linearization is marginally stable, I cannot say anything about the nonlinear system — this is the trouble; I cannot translate the result. So marginal stability cannot be established using local linearization; asymptotic stability can, and instability can, but not marginal stability.

And marginally stable systems are not rare — we use them all the time. You must have seen old pendulum clocks: the clock pendulum is a marginally stable system whose oscillation neither decays nor grows. Oscillators, which we use constantly, are another example. The stability of such marginally stable systems, which are actually nonlinear, cannot be established through linearization — asymptotic stability and instability can, and that is why you are still able to use linear control theory to design controllers for real-world nonlinear problems. But there is a limitation: marginal stability of the linearized model says nothing about the stability characteristics of the nonlinear system. I will give an example where linearization says the system is marginally stable but the system is actually asymptotically stable. So: if the linearized system indicates marginal stability, you can draw no conclusions; if it indicates asymptotic stability, the nonlinear system is also locally asymptotically stable; if it indicates instability, the nonlinear system is locally unstable.

Now I will give you some visualization of a two-dimensional system. Phase-plane portraits are very often used to visualize the dynamical behavior of two-state systems.
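Before moving to the phase plane, here is the promised kind of example, written as a simple scalar map (my own illustration, not from the lecture notes). The linearization of x(k+1) = x(k) − h·x(k)³ at x̄ = 0 is x(k+1) = x(k) — eigenvalue exactly 1, i.e. marginally stable — yet the nonlinear map is asymptotically stable at 0. Flipping the sign of the cubic term gives the *same* linearization but an unstable equilibrium:

```python
h = 0.1  # illustrative step/gain parameter

# x[k+1] = x[k] - h x[k]^3 : linearization at 0 has eigenvalue 1 (marginal),
# but the cubic term slowly pulls the state toward the equilibrium.
x = 0.5
for _ in range(100_000):
    x = x - h * x**3
print("with -x^3 term:", x)   # has crept close to 0

# x[k+1] = x[k] + h x[k]^3 : identical linearization, yet the state runs away.
y = 0.5
for _ in range(25):
    y = y + h * y**3
print("with +x^3 term:", y)   # has left the neighborhood of 0
```

The linearization cannot distinguish the two maps, which is exactly why a "marginally stable" verdict from a linearized model permits no conclusion about the nonlinear system.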
so I am right now talking about continuous time systems right now I am talking about continuous time systems so if you have a system a non-linear system or a linear system for the time being let us take a linear continuous time system two differential equations okay so I am actually worried about for the time being what I am going to show you is dynamic behavior of this is my A matrix okay for the time being do not worry where the system has come from you have the system analyzing the system okay what you know is that the dynamic behavior of this system is governed by eigenvalues of A matrix right is governed by eigenvalues of A matrix so A will have two eigenvalues lambda one and lambda two right two will have A will have two eigenvalues lambda one and lambda two of course if you want you can discretize this and convert into a discrete form but then there is a mapping between continuous and discrete and results of dynamics will not change okay so this is just a visualization of what is happening this is important in the view point of you know I am just putting this figures to enforce the concept that these are local behaviors okay so one thing that we look at is a system which has two eigenvalues which are you know negative real part I am talking of continuous time system right now okay for a continuous time system where both the eigenvalues have negative real part what happens mathematically we say that the solution decays to 00 asymptotically stable pendulum okay the motion comes back to the steady state okay perturbation state is steady state so you know any trajectory any trajectory that starts in the state space this is these are different trajectories in x1 x2 plane I am just plotting the time trajectories in x1 x2 plane you start from some point this trajectory will eventually come back to the origin you start from some other point it will come back to the origin so locally because the eigenvalues have negative real part all the trajectories will come back 
So this is called a stable node. The other possibility is an unstable node. What is an unstable node? All the trajectories diverge from this point; nothing stays here. That is what I want you to visualize. Take the pendulum standing at its inverted equilibrium: if it is perfectly at that point it will remain there, but any small perturbation will take the system away. In reality the pendulum is not going to go to infinity, so do not associate instability with something going to infinity; that is not the point. It only means that nothing stays at this point; the system tries to run away from this particular point. For a second-order system there are further variations: you have something called a saddle point. If one eigenvalue is less than zero and one eigenvalue is greater than zero, then the system is somewhere in between. Technically it is an unstable system, but there are some trajectories such that, if you happen to latch onto them, you will not leave them, and so you will not go to infinity. See, for the node there is only one point where, if you are there, you will stay there; any perturbation from there and you go away. For a saddle point, if you happen to start on the stable direction, on the eigenvector V2, then you will remain on V2; but anywhere else the trajectory will tend to go away. A slightly funny situation. This is mostly a mathematical possibility, not a practical distinction: any real system which has one eigenvalue outside the unit circle, or one eigenvalue in the right half plane, is unstable for all practical problems, but this is a finer point.
Such a case is classified as a saddle point. The other case is a center, where both eigenvalues are purely imaginary. What is the equivalent thing in terms of discrete time? For the saddle, one eigenvalue inside the unit circle and one eigenvalue outside; for the center, both eigenvalues are on the unit circle. Then you can again differentiate among the oscillatory cases. A stable focus is where you have complex eigenvalues but the real part is negative: a pendulum with damping will have oscillatory behavior, but the oscillations die out, they do not grow. An unstable focus is one from which the system locally diverges, but in an oscillatory manner. These are local behaviors; that is what I want to emphasize through these graphs, nothing else. We are looking locally at some point, and in the neighborhood of that point we are trying to assess the behavior. All of these cases can be analyzed through local linearization of a nonlinear system, except the center: if you take a nonlinear system, two nonlinear differential equations, come up with a linearization, and the linearization tells you that it is a center, you cannot believe that result; you cannot extrapolate it to the nonlinear system. So whatever you conclude from the linearized analysis holds in all cases except the center. Now, one more concept which is very crucial in control theory. Till now I talked about unforced stability. Unforced dynamics can arise because you have lost degrees of freedom; and how did you lose degrees of freedom? Because you have put in a controller. The second notion is input-output stability: the stability concept that applies when I still have some degrees of freedom and I want to know about the behavior of the output with respect to the inputs.
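Before moving on to input-output stability, the local classification above (node, saddle, focus, center) can be summarized in a short sketch. The function name and the example matrices are my own illustrative choices, and degenerate situations such as repeated or zero eigenvalues are lumped coarsely.

```python
import numpy as np

def classify_equilibrium(A):
    """Coarsely classify the origin of dx/dt = A*x for a 2x2 real matrix A.
    Illustrative helper; degenerate/repeated-eigenvalue cases are not split out."""
    l1, l2 = np.linalg.eigvals(A)
    re = np.real([l1, l2])
    if np.iscomplex(l1):                      # complex-conjugate pair
        if np.allclose(re, 0.0):
            return "center"
        return "stable focus" if re[0] < 0 else "unstable focus"
    if re[0] < 0 and re[1] < 0:
        return "stable node"
    if re[0] > 0 and re[1] > 0:
        return "unstable node"
    return "saddle point"

print(classify_equilibrium(np.array([[-1.0, 0.0], [0.0, -2.0]])))   # stable node
print(classify_equilibrium(np.array([[0.0, 1.0], [-1.0, 0.0]])))    # center
print(classify_equilibrium(np.array([[1.0, 0.0], [0.0, -1.0]])))    # saddle point
print(classify_equilibrium(np.array([[-0.5, 1.0], [-1.0, -0.5]])))  # stable focus
```

Remember the caveat from the lecture: if linearization of a nonlinear system reports "center", that verdict cannot be trusted for the nonlinear system itself.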
Mind you, there is one difference. Earlier we were worried about the evolution of the states; I was talking about the behavior of x, and whether x has a physical meaning or not is a different story. In input-output stability I am going to be worried about y and u: u is my input, y is my output, and I am only worried about the observed behavior. Let us say I have a boiler. There are many states inside the boiler; if you write the differential equations and come up with a model, you will get many, many states for temperature, level, and what not. If I am just measuring level and temperature, and I am giving two inputs, one is fuel flow and the other is cold water flow, then I am only worried about the behavior of level and temperature with respect to cold water flow and fuel flow: input-output. I am not saying the internal behavior does not matter; I am simply not talking about it when I talk about BIBO stability. Well, you might say, why am I introducing all these mathematically fine notions? We cannot help it: in control theory you are dealing with dynamical systems and you have to understand these finer points. So, BIBO stability: a linear time-invariant system is BIBO stable if a bounded input produces a bounded output. Now, what is 'bounded'? Good question, because when you call a function bounded, you are dealing with functions: the input will be a function, say a cosine, or u(t) is a step function. Is a step function a bounded function? It depends on how you measure it. What about a norm? I could talk about the norm of a function: if the norm is bounded, then the function is bounded. So bounded-input bounded-output stability actually gets tied up with the norm of a function.
So the question comes: what is a bounded function? Magnitude is not a bad idea, but you have to put it in the right context. Suppose I give you a function f(t) = a cos(omega t), with t belonging to [0, infinity). What is the norm of this function, and how do you say whether it is bounded or unbounded? Well, you could define it in different ways: you could define a one-norm, a two-norm, or an infinity-norm, and this function might turn out to be bounded according to one norm but not according to some other norm; that is possible. For example, I can define the one-norm of f(t) as the integral from 0 to infinity of |f(t)| dt; the two-norm as the square root of the integral from 0 to infinity of |f(t)|^2 dt; and the infinity-norm as the supremum over t of |f(t)|. So these are three different norms. Now tell me, is the step function bounded according to the one-norm? No, it is not bounded according to the one-norm. Is it bounded according to the infinity-norm? Yes. So bounded-input bounded-output stability actually also depends on the way you define the norms. Typically you would use the infinity-norm; if you decide to use the infinity-norm, then according to the infinity-norm the step function is a bounded function and a cos(omega t) is a bounded function. You can have a very simple picture: you give a band, and if the input function remains within that band for all time, it is bounded according to the infinity-norm, though not necessarily according to the one-norm or two-norm. When you have to choose, mathematically you have to give a reason for the choice; what I wanted to point out is that the conclusion crucially depends upon how you define what is bounded.
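A quick numerical illustration of this point (the step size dt and the horizons T are arbitrary choices of mine): approximating the one-norm and infinity-norm of a unit step over growing finite horizons shows the one-norm growing without bound while the infinity-norm stays at 1.

```python
import numpy as np

# Approximate the 1-norm and infinity-norm of a unit step over [0, T].
dt = 0.01
for T in (10.0, 100.0, 1000.0):
    t = np.arange(0.0, T, dt)
    f = np.ones_like(t)                  # unit step: f(t) = 1 for t >= 0
    one_norm = np.sum(np.abs(f)) * dt    # approximates integral of |f| over [0, T]
    inf_norm = np.max(np.abs(f))         # sup of |f| over [0, T]
    print(T, one_norm, inf_norm)

# The 1-norm grows in proportion to T (unbounded as T -> infinity),
# while the infinity-norm stays at 1: the step is bounded only in the sup norm.
```

The same experiment with f(t) = a cos(omega t) gives the analogous result: unbounded one-norm, infinity-norm equal to |a|.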
So it is not a universal notion. The exponential function e^(-at), for a greater than zero, is a bounded function according to the one-norm, according to the two-norm, and according to the infinity-norm as t goes to infinity; but that does not mean the step function is bounded according to all these norms. So it is convenient to use the infinity-norm when you are doing this BIBO stability analysis. Now, this last thing which I have underlined is very, very important: the bounded-output property should hold for every initial condition. So one way to show that a system is unstable is to find one initial condition, or one bounded input, for which this does not hold. Now there are finer things here. If we are talking about a state-space model, then obviously if the spectral radius of Phi is less than one, the transfer function relating y and u is BIBO stable. Why is that? Because of the relationship between the denominator of the transfer function and the eigenvalues of Phi: the poles of the transfer function are the eigenvalues of Phi, since the denominator is nothing but the characteristic polynomial of Phi. So if all the eigenvalues are inside the unit circle, all the poles are inside the unit circle. What I want to say here is that if a system is asymptotically stable in the sense of Lyapunov, that is, the unforced system is asymptotically stable, then the transfer function is BIBO stable. Let me go back: when I am talking about unforced stability, I am talking only about x(k+1) = Phi x(k); when I am talking about BIBO stability, I am worried about the stability of this transfer function. Are there systems for which the transfer function is stable but the state dynamics is not asymptotically stable? That is possible: stable or marginally stable, marginally stable, not unstable. It is possible that there are pole-zero cancellations, so that you have a system whose state-space dynamics is marginally stable but whose transfer function is BIBO stable.
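A minimal numerical sketch of this pole-zero cancellation situation (the matrices are made up for illustration): Phi has an eigenvalue exactly on the unit circle, but that mode is disconnected from both the input and the output, so the transfer function keeps only the pole at 0.5 and the response to a bounded input stays bounded.

```python
import numpy as np

# Made-up system: the mode at z = 1 (on the unit circle) is neither excited
# by the input nor seen in the output, so it cancels from the transfer function.
Phi   = np.array([[0.5, 0.0],
                  [0.0, 1.0]])
Gamma = np.array([1.0, 0.0])
C     = np.array([1.0, 0.0])

rho = np.max(np.abs(np.linalg.eigvals(Phi)))
print(rho)    # spectral radius = 1.0 -> state dynamics only marginally stable

# Simulate the output for a bounded (unit step) input from zero initial state.
x = np.zeros(2)
ys = []
for k in range(200):
    ys.append(float(C @ x))
    x = Phi @ x + Gamma * 1.0

y_max = max(abs(y) for y in ys)
print(y_max)  # stays bounded, approaching 1/(1 - 0.5) = 2
```

Here the transfer function is effectively 1/(z - 0.5), which is BIBO stable even though the spectral radius of Phi equals one.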
So all those kinds of things are possible. Now, if the spectral radius of Phi is less than one, that means all the roots of this characteristic equation are inside the unit circle, and then we can say: if this unforced system is asymptotically stable, asymptotically stable in the sense of Lyapunov, that is, if Phi has all its eigenvalues inside the unit circle, then this transfer function is BIBO stable, the system is bounded-input bounded-output stable. The converse is not true. Suppose I do not say anything about the state-space model; I just give you the transfer function, and from the transfer function you find that it is bounded-input bounded-output stable. Can you say that all the eigenvalues of Phi will be inside the unit circle? That is the tricky thing: you cannot. You can say that if the state dynamics is asymptotically stable then the transfer function is BIBO stable, but if the transfer function is BIBO stable, the original system can still be marginally stable or something else. So the implication works one way; the other way it does not. And do not confuse this with observability; we will talk about that later, right now there is no question of observability. What I want to say is that asymptotic stability is a stronger concept than BIBO stability: asymptotic stability implies BIBO stability, but BIBO stability does not imply asymptotic stability. Here is an example, an example of a harmonic oscillator. The unforced system is stable, which means its spectral radius is equal to one. Is it stable in the sense of Lyapunov? Yes: take this system, do not worry about the u, put u equal to zero, and for this system the eigenvalues are on the unit circle, so this system is stable in the sense of Lyapunov.
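Here is a numerical sketch of such an oscillator (theta and the matrices are my own illustrative choices): unforced, its eigenvalues sit on the unit circle, so it is Lyapunov stable, yet a bounded sinusoidal input at the oscillator's own frequency makes the output grow without bound.

```python
import numpy as np

theta = 0.3   # illustrative frequency (radians per sample)
# Discrete harmonic oscillator: eigenvalues are exp(+/- j*theta), magnitude 1.
Phi = np.array([[ np.cos(theta), np.sin(theta)],
                [-np.sin(theta), np.cos(theta)]])
Gamma = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])

print(np.abs(np.linalg.eigvals(Phi)))   # both magnitudes are 1.0

# Drive it with a bounded input |u| <= 1 at its own frequency.
x = np.zeros(2)
ys = []
for k in range(400):
    ys.append(float(C @ x))
    x = Phi @ x + Gamma * np.cos(theta * k)

first_half  = max(abs(y) for y in ys[:200])
second_half = max(abs(y) for y in ys[200:])
print(first_half, second_half)   # the peak keeps growing: BIBO unstable
```

With u = 0 the state just rotates and stays bounded (Lyapunov stable); with the resonant bounded input the peak output grows roughly linearly with time.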
Now, this particular system has poles on the unit circle, and that actually makes it BIBO unstable: if you introduce a bounded input into this particular system, the output actually grows with time. So the conclusions that you reach about stability by putting u equal to zero are different from the conclusions that you reach by applying the forcing function u. For this particular system, if you introduce a cyclic u, the output will grow; but if you set u equal to zero, the spectral radius is equal to one and you will say it is Lyapunov stable. So there are certain differences between these two notions; you will have to go back and think about it, I cannot explain everything here, but we will look at more examples. So here I am showing a situation where the original system is marginally stable but it is BIBO unstable. All that I want to point out here is that these two are different notions: only when the state-space model is asymptotically stable can you guarantee BIBO stability. You can say the same thing about instability also, but there is trouble when it comes to marginally stable systems, even in the linear case. So why are we worried about BIBO stability? Because when the set point is not equal to zero and I have closed-loop control, and this is my closed-loop equation, I derived it some time back, I am worried about the behavior between r and y: r, the set point, is now the input to the closed loop, and y is the output. So the question is whether the closed-loop system is BIBO stable, that is, y with respect to r: if I give bounded perturbations in the set point, will the output remain bounded?
When can you guarantee that? If you make sure that Phi minus Gamma Gc is asymptotically stable, which means all its eigenvalues are strictly inside the unit circle, then you can say that this closed-loop system will be bounded-input bounded-output stable, which means that if I give bounded perturbations in the set point, the output will be bounded. So that is why we worry about this in design. Now, stability tests; I am going to do one simple stability test. In today's context, the simplest method is direct calculation of eigenvalues: you have the Phi matrix, or you have the denominator polynomial, and you find out the eigenvalues. Simple tricks like the Hurwitz criterion or the Routh-Hurwitz criterion do not make too much sense in today's context, because you have very powerful computers; you can download some program and call 'roots' and you will get the answer, or your calculator itself might have matrix eigenvalue calculations. Nevertheless, there are some nice methods for analyzing the roots of a characteristic equation. You are aware of the Routh-Hurwitz criterion, right? Of course, another method is the root-locus method, which I am not going to get into, and you can also do analysis using Nyquist plots, which I am not going to get into either. I am going to talk about the Lyapunov method very briefly, because I need it in my subsequent development. And I am going to take one method which is based on properties of the characteristic polynomial. I need it, if for nothing else, for putting up a problem in my exam, and it is a nice tool by which you can find out whether the poles of a particular system are inside the unit circle or outside the unit circle just by doing some simple hand calculations; you do not require a computer. So I will talk about that particular method, which is a very elegant method for finding this out.
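The direct-calculation test applied to the closed loop can be sketched as follows (all the numbers are made up for illustration): form Phi minus Gamma Gc and check that its spectral radius is strictly less than one.

```python
import numpy as np

# All numbers below are made up for illustration.
Phi   = np.array([[1.2, 0.1],
                  [0.0, 0.8]])     # open-loop Phi: unstable (eigenvalue 1.2)
Gamma = np.array([[1.0],
                  [0.5]])
Gc    = np.array([[0.6, 0.2]])     # candidate state-feedback controller gain

rho_ol = np.max(np.abs(np.linalg.eigvals(Phi)))
print(rho_ol)                      # 1.2 -> open loop is unstable

Phi_cl = Phi - Gamma @ Gc          # closed-loop state matrix
rho_cl = np.max(np.abs(np.linalg.eigvals(Phi_cl)))
print(rho_cl, rho_cl < 1.0)        # strictly inside the unit circle -> BIBO stable loop
```

Because all closed-loop eigenvalues are strictly inside the unit circle, bounded set-point perturbations give a bounded output, which is exactly the guarantee discussed above.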
This is called the Jury stability criterion: in the case of discrete-time systems you have the Jury stability criterion. Lyapunov stability is something which we will discuss later; Lyapunov was a Russian mathematician and physicist who lived in the 19th century and made seminal contributions to the theory of stability of differential equations. So let us look at the Jury stability criterion and postpone the Lyapunov stability business to the next class; it is fairly involved. This is a simple polynomial test: given a polynomial, I want to find out whether the roots are inside the unit circle or outside the unit circle. So this is my polynomial, and I have the convention of numbering where the coefficient of z^n is called a0 and the last coefficient, the constant term, is called an. You construct this table, and I will tell you a simple rule for how to construct it. You have written a0, a1, up to an here, and if you notice, the second row is an, an-1, down to a0, the same coefficients in reverse order. Now all that I want to do is multiply this second row by some factor and subtract, so as to eliminate an: that entry becomes zero, and I get a new row which has one element less. Then I do the same thing: just as we started with a0 up to an and then wrote the row in reverse order, I take the new row, write it below in reverse order, and eliminate its last element. I go on doing this until finally I am left with only one number. This is somewhat similar to what you have done in the Routh criterion: you go on eliminating. Once you do it for some simple system you will understand how it works; maybe at the beginning of the next class we will just do this. And then there is a test: you have to look at the first element of each row.
If a0 is greater than zero, and all the leading elements of the successive reduced rows, that is, all the first elements in the first column, are also positive, then all the roots are inside the unit circle. It is a very simple test which you can do by hand calculations, or with simple calculator calculations; you do not need some matrix eigenvalue calculation program. You can just find out where the roots are, and you can arrive at necessary and sufficient conditions for stability based on this analysis. Let me look at one simple problem. I have the polynomial z^2 + a1 z + a2, and if I use the Jury scheme: see, the first row is 1, a1, a2, and below it I have written a2, a1, 1. I multiply this entire second row by a2 and subtract; if I subtract, the last entry here becomes zero, and I get 1 - a2^2 and a1(1 - a2). I write this row in reverse order: this element has come here, this element has come here. Now you want to eliminate the last term: multiply by the appropriate factor and subtract, and you will get only one term. So what is the Jury stability criterion? This should be greater than zero, this should be greater than zero, and this should be greater than zero. If all of them are greater than zero, then the system is stable. Stable in which sense? We are talking about the roots of the characteristic polynomial, so if all of them are positive, asymptotic stability is guaranteed, and hence BIBO stability as well. So if you are doing a trial-and-error design, if you guess some controller value and you want to do hand calculations to see whether you are inside the unit circle or outside the unit circle, you can just apply this simple test on a piece of paper and find out whether your controller design is locally stable or locally unstable. So in this case, 1 - a2^2 should be greater than zero, and the second element should be greater than zero as well.
This gives you these three conditions, and then you can actually plot them in the a1-a2 plane: for any value of (a1, a2) in that region you will have a stable transfer function. That is the idea. As for this business of Lyapunov stability, we will look at it later. It is a very, very deep concept which actually forms the entire foundation of the stability analysis of differential equations and difference equations; the entire modern theory of stability is founded on one thesis, the PhD thesis written by Lyapunov as his doctoral work. It is a seminal work: very few times has it happened that a doctoral thesis would entirely found a new branch of analysis. We will have a brief look at this, because we need this theory as we go along.
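The Jury table reduction described above can be sketched in code; this is my own minimal implementation of the scheme (it assumes a0 > 0 and treats the singular case where a leading element becomes exactly zero as a failure, which is a coarse but safe convention for this sketch).

```python
def jury_stable(coeffs):
    """Return True if all roots of a0*z^n + a1*z^(n-1) + ... + an lie strictly
    inside the unit circle, using the table reduction described in the lecture.
    Assumes a0 > 0; a non-positive leading element at any stage means unstable
    (or at best not strictly stable)."""
    a = list(coeffs)
    if a[0] <= 0:
        raise ValueError("this sketch assumes a0 > 0")
    while len(a) > 1:
        k = a[-1] / a[0]
        # Subtract k times the reversed row; the last element becomes zero
        # and is dropped, leaving a row with one element less.
        a = [a[i] - k * a[len(a) - 1 - i] for i in range(len(a) - 1)]
        if a[0] <= 0:      # a leading (first-column) element is not positive
            return False
    return True

# Second-order example from the lecture: z^2 + a1*z + a2
s1 = jury_stable([1.0, 0.1, 0.2])    # roots strictly inside the unit circle
s2 = jury_stable([1.0, 0.5, -1.5])   # roots at 1 and -1.5: not strictly inside
print(s1, s2)   # True False
```

For the second-order case this reproduces the hand calculation above: the leading elements are 1, then 1 - a2^2, then the final single number, and all must be positive.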