In the last lecture I talked about state estimation and the connection of the Kalman filter, particularly the stationary Kalman filter, with time series models. Similar ideas are being pursued through different approaches in different domains, but what you are trying to achieve is qualitatively quite similar: modeling of unmeasured disturbances. Whether you go through the Kalman filter approach or through the time series modeling approach, they actually merge, and you can show that there are similarities; one can be viewed as a parametrization of the other. Now we move on to control. We talked about estimating states; now we talk about controlling those states. Which states do I want to control? I want to control all the states. So if I have a state estimator, I have a sensor that reconstructs the states: I have a state estimate constructed using the model and the true sensor measurements. That state estimate is going to be used to do closed-loop control using state feedback; that is the idea. So I want to move towards model predictive control. Model predictive control is a tool which has become very popular in industry. It started with the chemical industry, but by now it is everywhere, in all domains: robotics, control of motors, control of fuel cells, all kinds of applications. It started in chemical engineering because it is a very computation-intensive multivariable control tool which can handle constraints and multivariable interactions simultaneously, and now there are commercial packages available to implement these tools; we will be seeing one of them, or at least an implementation, towards the later part of this course. But the background for all this model predictive control is classical linear quadratic optimal control theory. So, to set the background: in principle I could start directly
teaching model predictive control. After you have identified the models, you can directly start developing this predictive controller; that is what is done in industry. They develop models mostly from data, very rarely from first principles, but there are cases where you develop from first principles also. So you develop models either from data or from first principles, then use them to develop a predictive controller. Whichever way you go, you have a state space model; that state space model is then used to develop an observer, and then you use the observer to construct a state feedback controller. That is the philosophy, and that state feedback controller's practical implementation today is model predictive control. So I want to reach model predictive control eventually, but we are not just practitioners; we want to understand the theory behind the whole thing. Other than just how it works, we also want to know why it works, what the philosophy was and how it evolved. Linear quadratic Gaussian optimal control is where it all started, and a practical version of this linear quadratic Gaussian optimal control is model predictive control, so that it will not be a complete surprise for you. I will start with the motivation, then I will move on to reachability, and you will see that the development is completely parallel. In fact, when I talk about model predictive control, there are parallels to model predictive control in state estimation; I have not talked about them, but they exist, so one could in principle do two parallel developments. What changes is this: in state estimation we are worried about the past of the system and the current state, how the past and the current state are related; in predictive control we are worried about the future of the system. Same model equations; one
will view the past through an optimization-based formulation; the other will view the future through an optimal control, optimization-based formulation. We saw that the Kalman filter is an optimization-based formulation: though the final form looks like there is no optimization involved, actually you are minimizing the covariance of the estimation error. It is an elegant recursive solution, but basically it is an optimization problem. We will do the same thing here for control. So before I move to the optimal control problem: we talked about a pole placement observer; I will talk about a pole placement controller, a very similar idea, and then I will move on to linear quadratic optimal control, an optimization-based formulation. Just like you have a recursive solution there, you will see that there is a recursive solution here, so there will be a complete parallel. The parallel, at least for this course, will stop when we start talking about model predictive control; though there is a parallel to that in state estimation, I have not talked about it. Before I go ahead, let me tell you where we stand today. When you go to a plant, a large scale plant, say a chemical plant or a power plant, and you look at the control structure in a large scale production facility, it consists of multiple layers. The first course in control that you study is here: regulatory control. You talk about sensors; you learn about control valves or stepper motors or some kind of actuators; and then you have sensors: speed sensors, pH sensors, temperature sensors, level sensors, frequency sensors, current sensors, voltage sensors. So you have all these PVs, process variables, all these measurements coming to your PID controllers, and then you have MVs, manipulated variables, which are sent
to the plant. Now this is something you study in your first control course, and something about this I also talked about in this course: interactions, some decoupling and all that. All of that sits at this layer; you are not going beyond it. In a modern plant you have one more layer, which is called the advanced control layer, and it consists of two sub-layers. One is a multivariable controller; this could be a linear or a non-linear controller, typically a multivariable controller, and this is what I am going to talk about: it is just moving one step up in the control hierarchy. There is one more layer, which deals with online optimization: what is the best operating condition under the prevailing situation? I will give you an example from chemical engineering, but I think all of you can appreciate it. One of the major chemical industries is petroleum refining. You get crude oil, and the first thing you do is send it to a huge crude distillation column. In distillation you separate different hydrocarbons based on volatility, so the light hydrocarbons come out at the top of the column. It is a huge column in which vapor is flowing up and liquid is flowing down, and because of the counter-current interaction between the vapor and the liquid you get separation of different products at different stages. At the top you draw the light components: at the top you will get LPG; a little further down you will get petrol; a little further down, kerosene and aviation turbine fuel; further down, diesel; then as you come down the column you will get some lubricating oils, and at the bottom you will get tar. Then what you do is this tar you again crack
and convert into lighter products, because you do not want a lot of tar; you convert it into gasoline and so on. So this is the petroleum, or crude, processing plant. Now, how do you operate this plant? What are the best temperature and pressure operating conditions for the given crude column? That depends upon which type of crude you are processing: Gulf crude, Bombay High crude, or some Venezuelan crude; it depends upon the crude quality. There is no one way of operating the plant. The other thing that happens is that there is a lot of heat exchange equipment associated with this column, and the heat exchangers start fouling after some time; the efficiency of heat transfer reduces, so you cannot keep operating at the design set points; you have to change a few things. Then, downstream, you are probably carrying out some reaction to break the tar, or the heavy components, into lighter components; there is some catalyst, and the catalyst starts degrading over time, so you have to change the operating conditions, the temperature and pressure inside the column. How do you do it? What is done is that they have a steady state mathematical model coming from physics, which is used online to find the best, the most optimal, operating point. What is optimal can differ from situation to situation; for example, that comes from a top layer called the scheduling and planning layer. The scheduling and planning layer would tell you that this month, the month of November, is Diwali: you need more kerosene, because you have to supply kerosene to different households and they use more fuel. In December you need more aviation turbine fuel, because there is a lot of tourist traffic and a lot of demand for aviation turbine fuel, and so
on. So there might be different operating goals, and then in the month of February you say: well, I do not care now about maximizing kerosene or maximizing aviation turbine fuel; I want to operate the plant in such a way that I have maximum profit. These are different goals, and the set points that need to be given to the controllers below, to the multivariable controller and the regulatory controllers, are different when you have different operating goals. Those are actually determined by this online optimization and the scheduling and planning layer. The scheduling and planning layer will set goals for a month or fifteen days; the online optimization will decide what today's set points should be, for one shift or two shifts or three shifts. So this online optimization is run once or twice a day, and then you download the set points to the multivariable controller, and the multivariable controller then talks to the lower layer, which is the PID controllers. It is like a complete management hierarchy: there is some big boss, there are multiple managers, and so on; it is a completely hierarchical system, actually in terms of hardware. And these PID controllers are the workers, the grassroots workers: each one manages only one loop at a time. Some flow set point is given; it has to maintain that flow; it does not bother about the larger picture. These top layers are control layers, and today's control companies, like Honeywell or ABB or Rosemount, supply a solution which is not just PID controllers; they supply a solution covering everything from this point to this point. You might start wondering whom this belongs to: is this scheduling and planning a management job? It is not. These are very complex mathematical programming problems, and they have to be solved by us, not just by people who do
management. The management people can of course decide the bigger strategies, but when it comes to actually finding an optimum you have to solve an optimization problem; that is all mathematics, and that is why systems engineers are required. You need to model the plant in such a way that the problem does not become too complex and you can do online optimization, and the multivariable controllers are equally complex. For example, if you want to decide the optimal operating point for a power plant, you need a model for the power plant: a model for the steam generator, a model for the furnace, and so on. And then suppose the fuel going to the furnace has changed, or the quality of the coal coming to a coal-fired steam plant has changed: you have to have a different operating scheme; you cannot use the same settings for whatever comes in. So this has to be solved, and first of all these models have to be developed; you need chemical, mechanical, electrical engineers to develop these models, and they are very complex models. And believe me, this is not some cutting edge technology I am talking about; it was cutting edge technology in the late 80s. Now this is done in many, many plants. In India it was started at Chennai Petrochemicals way back, I think in 1984 or 1985; they implemented this kind of control scheme, not the scheduling and planning, but online optimization and multivariable control. These are very costly controllers: a controller of this kind would easily cost something like a hundred thousand dollars. It is not like a PID controller that you can get for fifty or a hundred dollars; these controllers are very complex. The most complex part, as you can imagine, is modeling: if you have a good model you can do good control. So
modeling and state estimation are crucial; the control is only as good as the model and the state estimates, because these are model based controllers: they use models online. There are many issues: how good is your model, how fast can you do the calculations, how do you realize this through hardware? There are many challenges, and even though this particular technology is 30 years old, I do not think all of these challenges are solved; it is still an evolving field. But this model based control is one case where, unlike the rest of control theory, which was developed first in university labs or defense labs and then percolated to industry, it went the other way around: it was first implemented by Shell, and it was also implemented in France by an industrial group, and then, probably because of some competition between them, they decided to publish it, and that resulted in the development of this whole area. Now it is a huge research area; there are workshops and research symposia on model predictive control and so on. Now, a question: is the input to the plant given by this controller? No; see, the actual input to the plant is typically given by the PID controller, and the PID controller gets its set points from the multivariable controller. So look at it this way: the PID controllers are part of the plant as far as this higher level controller is concerned, so the higher level controller will have a model that includes the PID loops. As far as the higher level controller is concerned, it only looks at the set point as an input to the plant, and the measurement is still this PV. It is a cascade; if you know what cascade control is, it is like a cascade form. Each layer only gives set points: the scheduling layer will give set points, targets, to the optimization layer, which will give set
points for the multivariable controller, and the multivariable controller will give set points to the individual PID controllers. This is also done many times because of safety: suppose something happens and this upper layer fails; even then the PID controllers are active, and the PID controllers are many, whereas this upper layer is typically a single controller. If the computer implementing it stops, because of a virus or something, then you have a problem. So having this controller talk directly to the plant is many times not done, though such controllers are also available now; you go through an intermediate layer. And yes, the model knows about the interactions, so they are handled automatically, very much so. This predictive controller not only does the job of multivariable control, it also does the job of what is called constraint handling. Constraint handling means you have all kinds of constraints when you operate a plant. Suppose you are letting some flue gas into the atmosphere: it should not have more than so much CO2, or so much CO. If you are letting something into the water, its pH should be within limits. There are many such constraints: you have a reactor whose temperature should not cross some safety limit, a pressure that should not cross some safety limit. All these constraints are handled today in a plant through what are called programmable logic controllers. So you have PID controllers which are individually working, and there are these programmable logic controllers which take some kind of action based on wisdom that has been gained over a period of time: if this happens and if that happens, then do this. The PLC will override the PID controller and give priority to safety: suppose some temperature is rising and this particular steam valve is open, shut the steam valve. So you will have all kinds
of rules, and these rules come from a lot of experience, and then you code them. It is like a computer program which keeps checking different conditions, if-then-else, and there are recipes for what to do if something happens. That is how it is currently handled. This model predictive controller can simultaneously handle multivariable interactions and constraints; that is why it is a monolithic thing which can do many things. Anyway, we will come to that later. So why do I want to do advanced control? First of all, there are complex multivariable interactions; I do not know whether I gave you the example earlier of the car with three drivers. Just imagine: there are thousands of PID loops in a plant, in a chemical plant, a power plant, a metallurgical plant, and the PID controllers do not talk to each other. It is like a thousand drivers, and that is why it can create chaos. Then there are operating constraints, and the PID controller equation does not know about those constraints. So what you do in a PID controller is, again, put if-then-else in your software: if this happens, then do not use the PID controller output, do something else. All these things are given in the PID block, and you can put those things in. So safety limits are handled in an ad hoc manner using PLCs and PIDs, and it is quite messy. There are three kinds of constraints, I would say: safety constraints, actuation constraints, and so on. See, in a PID controller, when you say the output is gain times error plus something, you do not say that the output cannot be more than some value. If the error is 100 and the gain is 500, the output can be enormous, but my valve cannot open more than 100 percent. What about that? All these things cannot be put into one equation, so you write the PID controller equation and then you write an if statement: if the PID controller output
goes beyond this limit, then do not use the PID controller output; set it equal to 100. So all these ad hoc things have to be done. On top of it, the process has non-linearities and process conditions keep changing, so multi-loop controllers are very difficult to use, and handling logic in an ad hoc manner through PLCs is very difficult; it is done today, but it is a very difficult system to maintain. Non-linearity can be handled through what is called gain scheduling, but many times predictive controllers give you a much better way. So now I am going to start developing this stochastic optimal control; some things I will go through very quickly. We are familiar with this model, and I am going to take the same model: x(k+1) = Φ x(k) + Γ u(k) + w(k), with y(k) = C x(k) + v(k). I do not care which way you developed this model: whether you came from physics, linearized and then did something, or you started from data and got this model completely by time series modeling, like we are doing with the toolbox; it does not matter. I am going to make the simplifying assumption that w(k) and v(k) are uncorrelated; if they are correlated we can also handle that situation. In fact, when you develop time series models you get a situation where w(k) and v(k) are correlated, but that is not difficult, so it is not really a limitation. What we begin with is a model that relates the unknown inputs to the state, and we assume these unknown inputs are Gaussian white noise processes. This is a simplifying assumption: we will start by developing controllers under the most ideal conditions, and one by one we will relax them and go to the practical situation; that is how to do it. So, given this model, how do I design an optimal state feedback controller? The controller design is going to be done like this: we assume that all the states are measurable and design the controller; then we design a stable state estimator; and then, using the estimated states, we implement the state feedback control law. And
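The stochastic state-space model just described can be sketched in a few lines of code. The Φ, Γ, C matrices and noise covariances below are purely illustrative (they are not the lecture's CSTR or quadruple-tank numbers); the point is only the structure x(k+1) = Φx(k) + Γu(k) + w(k), y(k) = Cx(k) + v(k) with uncorrelated Gaussian white noises.

```python
import numpy as np

# Linear stochastic state-space model:
#   x(k+1) = Phi x(k) + Gamma u(k) + w(k),   w ~ N(0, Q)
#   y(k)   = C x(k) + v(k),                  v ~ N(0, R)
# All numbers below are illustrative, not from the lecture's examples.
rng = np.random.default_rng(0)

Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])        # state transition matrix (stable)
Gamma = np.array([[0.0],
                  [1.0]])           # input matrix, one input
C = np.array([[1.0, 0.0]])          # we measure only the first state
Q = 0.01 * np.eye(2)                # process-noise covariance
R = np.array([[0.04]])              # measurement-noise covariance (uncorrelated with w)

def simulate(x0, u_seq):
    """Propagate the model for len(u_seq) steps; return states and measurements."""
    x = np.asarray(x0, dtype=float)
    xs, ys = [], []
    for u in u_seq:
        w = rng.multivariate_normal(np.zeros(2), Q)   # process noise w(k)
        v = rng.multivariate_normal(np.zeros(1), R)   # measurement noise v(k)
        x = Phi @ x + Gamma.flatten() * u + w
        xs.append(x.copy())
        ys.append((C @ x + v).item())
    return np.array(xs), np.array(ys)

# Open-loop run from x0 = [1, 0] with zero input: the states decay toward
# the noise floor because Phi has eigenvalues 0.9 and 0.8 inside the unit circle.
xs, ys = simulate([1.0, 0.0], [0.0] * 50)
```

This is exactly the "ideal conditions" setup the lecture starts from: later the zero input sequence is replaced by a state feedback law computed from estimated states.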
then there is something called the separation principle; we will talk about it. It ensures that if you separately design the observer and the controller to be stable, then the joint observer-controller system is stable under nominal conditions, under perfect model conditions; so at least there is some guarantee of stability. Why am I working with state space models? Because in state space models it does not matter whether you are working with single input single output or multiple input multiple output, five outputs and three inputs, a non-square system or a square system: everything can be done under one single framework. The second reason is that all of linear algebra can be used. Not that linear algebra cannot be used when you go to the transfer function description, but the algebra of transfer function matrices becomes very complex beyond a certain point; up to three by three or four by four it is okay. But in model predictive control, and I will show you at least some of what has been done in industry, people have implemented model predictive controllers with something like 600 outputs and 280 inputs: one predictive controller simultaneously making moves in 280 inputs while monitoring 600 outputs. It is quite mind boggling. Designing that kind of controller using the transfer function paradigm is, I would not say impossible, but difficult; the algebra becomes unwieldy. So let us begin with this notion of controllability, or reachability. The first question I am going to ask is: given this model, can I move the system from anywhere to anywhere? This is very similar to what we asked about observability: given these measurements, can I estimate the initial state? Now my problem is a little different. Earlier, in
observer design, I wanted to find out x0. Now I am saying x0 is given to you: let us say you used some observer and somehow you were able to reconstruct x0. So x0 is given, and I have this model; this is how the system evolves. I want to find an input sequence u0, u1, u2, ..., input vectors in time, such that I can take the system to any final state, say xf. I am at some initial point x0 and I want to reach any arbitrary final state xf; can I do that? If this question can be answered, then we can decide how to do it. Is it fundamentally possible for this particular system? I want to do state feedback control; I want to be able to move the state from anywhere to anywhere in the state space. Can I do that? That is the first question. This is something like a fundamental property of the linear dynamic system, and I have highlighted it here: any arbitrary initial state to any arbitrary final state. So let us start. Say you are given x0 and I start making input moves u0, u1, and so on; right now there is no controller, maybe I am doing it as an operator. What will x1 be? x1 = Φ x0 + Γ u0, where u0 is the move you have implemented. What will x2 be? x2 = Φ x1 + Γ u1, and if you just substitute for x1 you can show that x2 = Φ² x0 + Φ Γ u0 + Γ u1. Likewise I can go on: x3 = Φ³ x0 + Φ² Γ u0 + Φ Γ u1 + Γ u2, and so on. (These particular notes I have yet to upload; I will do it over the weekend.) So I can go on writing this recursively for the inputs u0, u1, u2, and so on. Let us say I have made n moves, where n is the dimension of the system, x ∈ ℝⁿ. Why this n is so critical will become clear soon. Is everyone clear with
this? Okay. So now the question I am going to ask is: can xn be equal to xf, that is, can we reach xf in n steps? I want to go from x0 to xf. Take the last equation and replace xn by xf. What is known here? x0 is given to me, and Φ and Γ are known. What I have to find is the input sequence that will take the system from x0 to xf, so the unknowns are u0, u1, up to u(n−1). The observability question was different: in observability we did not know x0; we knew the inputs and wanted to estimate x0. Now flip it: you know x0, you do not know the inputs. So I just rearrange this last equation, taking the known components to the right hand side: xf is where I want to reach, so xf is known; x0, the initial state, is given; and Φⁿ is known because Φ is known. So the quantity xf − Φⁿ x0 on the right hand side is known. The matrix [Γ  ΦΓ  ...  Φⁿ⁻¹Γ] is known, because Φ and Γ are known. What is not known is the stacked-up vector: I have stacked up u(n−1), u(n−2), ..., u0, so this is a huge vector; if there are m inputs and n states, this vector will be mn × 1. So this is a matrix equation: a known matrix times a huge unknown vector equals a known vector. When can I find a solution to this problem for every possible right hand side? This particular matrix should be full row rank; the row dimension of this matrix is n, since there are n states. So the necessary condition for reachability is that the rank of this matrix should
be equal to n, where n is the state dimension. So, the fundamental point: without designing any controller, just by looking at Φ and Γ, I can say whether I can take the system from anywhere to anywhere. Yes, controllable and reachable are slightly different notions: reachability is when you can take the system from anywhere to anywhere; controllability is often used for when you can take the system from anywhere to the origin. So when xf is the origin it is often called controllable, and it is called reachable when you can go from anywhere to anywhere; there are some slight differences. So what if the system is not reachable? Then you should go and add more manipulated variables; you should redesign your system if you want it to be completely reachable. And if you cannot do that, you have to live with what you have: you have to see which part is controllable and which part is not. So if you want to change the reachability of a system, you may have to change the structure of the system, add some more manipulated variables, and make it reachable. But this is a fundamental characteristic: if a system is reachable, you are guaranteed that you can take it from anywhere to anywhere. Just like observability: when are you guaranteed to uniquely estimate the states from the measurements? When the system is observable. Observability depends only upon the Φ and C matrices; here reachability depends upon the Φ and Γ matrices. So let us take the CSTR example. This is my Φ and this is my Γ; I have two states and two inputs, two flows: one inlet
flow and one cooling water flow; concentration and temperature are my two states. Can I take the system from anywhere to anywhere by manipulating these two inputs? That is the question. If you construct the controllability matrix here, it turns out that its rank is two, which equals the system state dimension, so you can take it from anywhere to anywhere. I have done the same thing for the quadruple tank system; these are my Φ and Γ matrices. Using these two inputs, can I move the system from any state to any state, from any combination of four levels to any other combination of four levels? Once you do the linearization and start thinking in terms of the linear model, that question can be answered analytically, without actually having to design anything. That is the beauty of this analysis: you just look at those matrices, construct another matrix, and from its rank you can say yes, I can go from anywhere to anywhere. So if you construct the controllability matrix for this quadruple tank system, it will be 4 × 8, because there are two inputs: Γ is 4 × 2, ΦΓ is 4 × 2, Φ²Γ is 4 × 2, and Φ³Γ is 4 × 2. So you use [Γ  ΦΓ  Φ²Γ  Φ³Γ]; why stop at Φ³? Because n = 4; just go back and look at the condition: Γ, ΦΓ, up to Φⁿ⁻¹Γ, where n is the state dimension. If this matrix has rank four, then I can take the system from anywhere to anywhere. Okay, now I am going to go through the pole placement controller design very quickly, because this is something we have done for the observer, so I am not going to spend too much time on it. It is the same idea as what we did for observer design; do you remember, or do I have to
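The rank test just described is easy to sketch numerically. The 4-state, 2-input Φ and Γ below are made-up illustrative numbers, not the lecture's actual quadruple-tank model; the code only demonstrates building W_c = [Γ  ΦΓ  Φ²Γ  Φ³Γ] and checking its rank against the state dimension.

```python
import numpy as np

def controllability_matrix(Phi, Gamma):
    """Build W_c = [Gamma, Phi*Gamma, ..., Phi^(n-1)*Gamma] for an n-state system."""
    n = Phi.shape[0]
    blocks = [Gamma]
    for _ in range(n - 1):
        blocks.append(Phi @ blocks[-1])   # next block is Phi times the previous one
    return np.hstack(blocks)

# Illustrative 4-state, 2-input system (numbers invented for the example):
# a stable, weakly coupled Phi and a Gamma that injects the two inputs
# into different states, quadruple-tank style.
Phi = np.array([[0.95, 0.0,  0.04, 0.0 ],
                [0.0,  0.93, 0.0,  0.05],
                [0.0,  0.0,  0.90, 0.0 ],
                [0.0,  0.0,  0.0,  0.88]])
Gamma = np.array([[0.1, 0.0],
                  [0.0, 0.1],
                  [0.0, 0.2],
                  [0.3, 0.0]])

Wc = controllability_matrix(Phi, Gamma)   # 4 x 8, since n = 4 and m = 2
rank = np.linalg.matrix_rank(Wc)
# rank == 4 == n means the linearized system is reachable: some input
# sequence u(0), ..., u(3) moves it between any two states in 4 steps.
```

Note that only the rank matters, not the particular entries: if you zero out a column of Γ so that one input no longer reaches part of the state, the rank drops below n and the system is no longer reachable.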
conduct a quiz so that you remember? We transformed the system to the observable canonical form, placed the poles in the transformed domain, and then did an inverse transformation to find the observer in the original state space. The same thing is used here, except that instead of moving to the observable canonical form you move to the controllable canonical form. You have the original model, for the SISO case; you do the transformation and move to the controllable canonical form; then, for this controllable canonical form, you do a pole placement design; and then you come back to the original problem. So if you design a controller in the transformed domain, the closed loop equation becomes this, and just by looking at the first row of the controllable canonical form you can tell what the characteristic equation is. Match it with the desired equation: this coefficient equals this coefficient, that coefficient equals that coefficient. You do that, you get the controller gain matrix in the transformed domain, and then you come back to the original domain by multiplying by the transformation matrix T. How do you get the transformation matrix? In the earlier case we got it as the product of two observability matrices, the inverse of one times the other; here it will be the two controllability matrices. Everything is like a mirror image: what happened there happens here, no difference, so if you understand one, you understand the other. So you have this coordinate transformation T, where W̃c is the controllability matrix of the controllable canonical form and Wc is that of the original form; these can be computed very easily. I am not going to spend too
much time on this; I think less than five minutes is what I ended up spending. If you have any difficulty, just ask me, because this is what we have already done, except that now everything looks like the transpose of what we had earlier: in the controllable canonical form the coefficients appear as a row, while in the observable canonical form they appear as a column. It is exactly the same idea; you are not doing anything different. Here I have given you the formula for how to get this T inverse. And note: you can place the poles of the closed loop at arbitrary locations only when the system is reachable. If the system is not reachable, if the rank condition does not hold, you cannot place the poles arbitrarily. So now let us see whether the closed loop is stable, because the primary criterion in any design is that the closed loop should be stable. You will say: well, I designed by pole placement, I chose the closed loop poles myself. But there is a catch: when you did this design, your controller was based on the true states.
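The canonical-form recipe can be sketched as follows for the SISO case. This is a minimal illustration with made-up matrices, not the lecture's CSTR model, and not MATLAB's place routine:

```python
import numpy as np

def place_siso(Phi, Gamma, desired_poles):
    """Pole placement via the controllable canonical form, following the
    lecture's recipe: match coefficients in the canonical coordinates,
    then transform the gain back with T = W_tilde @ inv(W)."""
    n = Phi.shape[0]
    a = np.poly(Phi)            # open-loop char. poly  [1, a1, ..., an]
    ad = np.poly(desired_poles) # desired char. poly    [1, ad1, ..., adn]

    # Controllable canonical form: coefficients sit in the first row.
    Phi_c = np.zeros((n, n))
    Phi_c[0, :] = -a[1:]
    Phi_c[1:, :-1] = np.eye(n - 1)
    Gamma_c = np.zeros((n, 1))
    Gamma_c[0, 0] = 1.0

    def ctrb(A, B):
        return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

    # z = T x with W_tilde = T W  =>  T = W_tilde @ inv(W)
    T = ctrb(Phi_c, Gamma_c) @ np.linalg.inv(ctrb(Phi, Gamma))
    G_tilde = (ad[1:] - a[1:]).reshape(1, n)  # coefficient matching
    return G_tilde @ T                        # gain in original coordinates

# Quick check on an arbitrary reachable 2-state example.
Phi = np.array([[1.1, 0.2], [0.0, 0.9]])
Gamma = np.array([[0.0], [1.0]])
G = place_siso(Phi, Gamma, [0.5, 0.4])
print(np.sort(np.linalg.eigvals(Phi - Gamma @ G).real))  # ~ [0.4, 0.5]
```

The closed-loop matrix Phi minus Gamma G ends up with exactly the requested poles, which is the whole point of the construction.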
What is z here? z is Tx; we have this transformation, and the control law is uk equal to minus G times xk, where xk is the true state. So when I did the design, I did it using true state feedback. In reality I am never going to get all the state measurements, so I have to rely on an observer, agreed? So when I want to implement this controller: you design it, and if everything is perfect, if all the states are perfectly measurable, then by design it is stable, because you have placed the closed loop poles. This is the control law; you plug it in, and you get Phi c minus Gamma c times Gc. This was the midsem problem, and most of you did it correctly, except that you did it the hard way: you found the determinant and then equated coefficients. This is the more elegant way: do a transformation, look at the first row or first column, and read things off; it is algebraically more elegant. What you did in the midsem is a crude way: find the determinant, find the characteristic equation, equate the coefficients. That is feasible for a 2 x 2 system; beyond that, for an n x n system, the algebra can become very messy if you have to repeat that for a really big problem. So this approach is very nice for large dimensional systems. But here stability is ensured only if the states are perfectly measurable, and they are not. So what you are going to do is take a state estimator, use the estimated states, and implement the controller. Now the question is: is this joint thing stable? Does the joint observer controller pair give a stable closed loop? That is what I mean by nominal closed loop stability. My
plant dynamics is given by this. Right now I have taken a simplistic viewpoint: no state disturbances, no measurement noise, an ideal situation, just to understand. I am developing a state observer; let us say this is my Luenberger observer. And see, my controller is not going to be minus G times xk, because xk is not known to me; only yk is known to me. This yk has been used here, of course with a delay of one, to reconstruct the state, and this reconstructed state is what I am going to use to compute uk. So my question now is: this is one dynamical system, this is another dynamical system, and they are connected through this equation; is the joint plant plus estimator plus controller stable? What I have done is design them separately: I designed a pole placement Luenberger observer, and I designed a pole placement controller. Now, will a stable pole placement controller and a stable pole placement observer together give me stable closed loop behavior? That is the question I want to ask. What is the observer dynamics? This we have already studied, so no worry there. I am just combining now; look at this equation: for uk I have substituted minus G times x hat at k given k minus 1, so the equation becomes Phi
times xk minus Gamma G times x hat at k given k minus 1. Now I am going to use the relationship that x hat at k given k minus 1 is xk minus the estimation error; is everyone with me on this? I have just substituted the estimate as the true state minus the error. Once I do this, I get Phi minus Gamma G times xk plus Gamma G times the error. What did we do when we designed the pole placement controller? We made sure that the poles of Phi minus Gamma G are inside the unit circle. Many of you in the midterm forgot that you were dealing with a discrete time system: the example given was discrete time, and you used the stability conditions for continuous time systems; a standard trap, and some of you fell for it. So here: how do we design the observer? Phi minus LC has poles inside the unit circle. How do we design the controller? Phi minus Gamma G has poles inside the unit circle. Now, is this equation plus this equation jointly stable? What I am going to do is combine the two equations into one big matrix equation. Is everyone with me? All I have done is stack them up: the true plant dynamics evolves according to this, driven by xk and the error, while the error dynamics is driven only by the error itself. This equation has xk and epsilon at k given k minus 1; this equation has only epsilon at k given k minus 1; I have clubbed them into one big equation. What can you say about the eigenvalues of this matrix? They are nothing but the eigenvalues of the observer and of the controller, because of the special block triangular form; this zero here helps you a lot. So what you can show is that the closed loop characteristic equation of this system is the determinant of lambda I minus (Phi minus Gamma G) times the determinant of lambda I minus (Phi minus LC). The roots of the first part are inside the unit circle, and the roots of the second part are inside the unit circle, so together
everything is inside the unit circle. Very nice: you can separately design the observer and the controller to be stable, at least for the nominal case when the model is perfect, and you are guaranteed that implementing the state feedback control law using estimated states gives you stable closed loop behavior. This is a very simplified form of what is called the separation principle: design the observer to be stable, design the controller to be stable, marry them together, and you get stable closed loop behavior. That is the message of this simple calculation. Since we have separately made sure that the spectral radius of this matrix is less than one and the spectral radius of that matrix is less than one, the joint system is also stable. That is a very nice result: you do not have to do anything extra, at least for the nominal case. So let us look at the CSTR example. This is my state space model; I have knocked off the disturbances from the raw model. What I did was transform it into the controllable canonical form. You can do this in various ways; in fact, instead of computing the transformation, the simplest way is to find the transfer function: from the numerator you can write the Gamma column, and from the denominator you can write the first row, and the rest is trivial. So the transformation matrix need not be found explicitly; you can do it without the T matrix. The reason to show the T matrix is to show that this is only a reorientation of the state space. So you just convert the model into controllable canonical form.
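Before moving on, the block-triangular eigenvalue argument behind the separation principle can be checked numerically. The matrices and gains below are illustrative (not the CSTR model): G places the controller poles at 0.5 and 0.4, L places the observer poles at 0.3 and 0.2, and the stacked matrix should have exactly those four eigenvalues:

```python
import numpy as np

Phi = np.array([[1.2, 0.1], [0.0, 0.8]])
Gamma = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

G = np.array([[5.6, 1.1]])    # state-feedback gain: poles of Phi - Gamma G at 0.5, 0.4
L = np.array([[1.5], [3.0]])  # observer gain: poles of Phi - L C at 0.3, 0.2

# Stacked dynamics of [x_k; eps_k]: block triangular, zero in the corner.
A_cl = np.block([[Phi - Gamma @ G, Gamma @ G],
                 [np.zeros((2, 2)), Phi - L @ C]])
joint = np.sort(np.linalg.eigvals(A_cl).real)
print(joint)  # ~ [0.2, 0.3, 0.4, 0.5] -> spectral radius 0.5 < 1
```

The joint eigenvalues are just the union of the separately placed controller and observer poles, which is exactly what the zero block in the corner guarantees.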
Then you can design the gain: this is G tilde, the design done without multiplying by T. The T is required, of course, because you design G tilde in the canonical coordinates and then you have to recover G, which lives in the original state space; so you need to find this T matrix, and G tilde multiplied by it gives you G. Can this be done without software? Well, I can think of the possibility of doing an observer implementation through LRC circuits or op-amp kind of circuits; it is possible in the sense that a linear differential equation can be realized through op-amp circuits, so I could have an equivalent op-amp circuit which acts like an observer. But I doubt how you would reconstruct the entire state vector; getting the entire state vector out of an op-amp circuit will be difficult. To the best of my knowledge, observers have been implemented through microcontrollers and microprocessors. Whether you can do it purely in analog hardware, I do not know; it might be possible, and I will check and tell you, because there is a lot of continuous time observer theory, which probably means that even before microprocessors were in use, observers were implemented through op-amp circuits. What I can tell you is that a differential equation, just as it can be solved numerically inside a computer, can be solved using op-amp circuits. Our generation has probably seen the maximum transition: when I was doing my MTech at IIT Madras, we had an analog simulator there, so you could actually build an analog circuit out of LRC and op-amp blocks which was equivalent to a differential equation. And you can have a comparator: if you want to form the error y minus y hat, you bring in the measurement and compare. Ultimately, what is it? A gain
multiplied by the error being fed back; it is a differential equation, and a differential equation can be realized through op-amp circuits. So it should be possible in principle. Whether it was done commercially, I do not know; it must have been done in defense applications, I am sure, where you do not have time for computations, where the system is so fast that you have to realize the controller through op-amp circuits. So the answer to your question is: it should be possible through op-amp circuits. Now, just as an example, I am implementing one pole placement controller and observer for this CSTR system. I start from some non-zero initial condition, non-zero in the sense of a perturbation, and I want to go to the origin, meaning zero perturbation. This is how the state evolves, and this is the manipulated input; I have designed a SISO controller, single input manipulation, and this is how the observer controller pair behaves: the initial estimate is wrong, and then together they converge and work quite well; just a demonstration. Multivariable pole placement, however, becomes messy. The same difficulty we had in observer design appears here; still, it is possible to do multivariable pole placement for an observer or for a controller. In MATLAB you have a subroutine called place: you give Phi and Gamma and ask it to place the closed loop poles at such and such locations, and it returns the G matrix; or you give Phi and C and ask for the closed loop observer poles, and it returns the observer gain matrix. So a Luenberger observer or a pole placement controller is possible for a multiple input multiple output system; for the observer I have uploaded that 1966 paper by Luenberger so that you can have a look. But there is one problem, and this is why I am now going to move to an optimization formulation rather than pursuing this pole placement idea further.
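For reference, SciPy provides a counterpart of MATLAB's place. Here is a small sketch with illustrative matrices (not the quadruple-tank or CSTR model) showing multivariable placement in one call:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative 4-state, 2-input system (placeholder values).
Phi = np.array([[0.95, 0.10, 0.00, 0.00],
                [0.00, 0.90, 0.05, 0.00],
                [0.00, 0.00, 0.85, 0.10],
                [0.00, 0.00, 0.00, 0.80]])
Gamma = np.array([[0.0, 0.1],
                  [0.5, 0.0],
                  [0.0, 0.5],
                  [0.1, 0.0]])

desired = np.array([0.5, 0.45, 0.4, 0.35])
res = place_poles(Phi, Gamma, desired)   # MIMO pole placement
G = res.gain_matrix                      # 2 x 4 state-feedback gain
achieved = np.sort(np.linalg.eigvals(Phi - Gamma @ G).real)
print(achieved)  # ~ [0.35, 0.4, 0.45, 0.5]
```

For the observer, the same routine is applied to the transposed problem, since placing the poles of Phi minus LC is the dual of placing the poles of Phi transpose minus C transpose times L transpose.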
Why did I do the same even in the earlier case? One practical problem, of course, is that in placing the poles you specify only n poles, while the gain matrix G has m x n elements, because you have m inputs. So you have more parameters than equations, and you have to fix some of them arbitrarily. That is a nuisance, but it can be fixed; it is not the real problem. Do not look at those equations; look at a blank sheet. When you are designing a controller, what are the two primary considerations? What is the first, most fundamental thing? Stability: the closed loop should be stable. What is the second? Performance: you do not just want it stable, you want a controller that performs very well. I will give you an analogy which not every one of you will like, but it is a good one. Designing a stable controller is like saying your CGPA is above 4: anybody between 4 and 10 graduates from IIT Bombay; that is stability, the primary requirement, otherwise you cannot get out of IIT Bombay. But you also want good performance; you want a high CPI. How do you measure that? So why did we do optimal observer design, and why do we now go to linear quadratic optimal control? It solves two problems in one shot: it solves the stability problem, and it solves the performance problem, because we specified the performance, designed an optimal observer, and it also gave us poles inside the unit circle. Stability came for free, the way a proper graduate is automatically above 4 points, so you did not have to worry about it separately. It solved stability and performance in one shot.
Now if you ask me: how do you choose the poles for a multiple input multiple output system? Any set of poles inside the unit circle gives you a stable controller, and there are infinitely many such controllers. Which one of them gives the best performance? What is the best performance, and how do you quantify it? You have to articulate that, and that is where the need for optimal controller design comes up. It is not that the pole placement approach cannot be used; it can, and the practical difficulty of fewer equations than unknowns can be overcome; those are minor difficulties with known fixes. The problem is: when there are infinitely many possibilities, which is the best controller? Say I am coming from a company: I will say, I do not want somebody with just 4 points, I want somebody with 9 points. But do not take the analogy too hard; it does not mean that somebody with 4 points is bad; that is a wrong notion; everybody is a good student. I do not know whether you have seen the old Karate Kid, the first one; the karate teacher there, Mr. Miyagi, says there is never a bad student, only a bad teacher. So if you are not understanding, I am failing, not you. So I am going to design an optimal controller. To begin with, I will do it in the most ideal situation, and then one by one I will relax the conditions and move to the most complex controller. Initially I take this model: a linear state space model, no disturbances, no measurement error. The only thing that has happened is that somehow the current state is not at the origin; the origin is the desired steady state, where I want to be, and somehow the system has been disturbed. Think of a
pendulum: I want it to hang exactly in the middle, no perturbations, but somehow it has moved away. How do I best, how do I quickly, bring it back to the origin? I do not want the pendulum to swing back and forth for a long time; I want it to come to the origin and stop there, as fast as possible. We do not ask why x0 is not equal to zero; somehow it has been disturbed. I want to take the system from some x0, not equal to zero, to the origin. This is the first problem I am going to solve. So what I am going to do is find an optimal input sequence by minimizing a certain objective function. You can see there are three terms here. What is the first term? Let me first talk about norms. If you have a vector x, the simplest way of defining the two norm is through x transpose x, which is x1 squared plus x2 squared and so on up to xn squared; that is the two norm squared. But is this the only way of defining a norm? No. You can also define a weighted norm, let me call it the W-weighted two norm of x, as w1 x1 squared plus w2 x2 squared plus up to wn xn squared, where each wi is greater than zero. Why would I do this? If the vector x has dissimilar elements, I can use these weights for scaling. By the way, one thing I forgot to write to you for the assignment: if your measurements are of dissimilar magnitudes, and your inputs are of dissimilar magnitudes, then when you do system identification it helps to work with scaled variables. One easy way of scaling is: take each output, find its standard deviation, take each
input, find its standard deviation, and divide each output by its own standard deviation and each input by its own standard deviation. You get scaled inputs and scaled outputs, all of them roughly between plus or minus 2 or plus or minus 3, and then it is much easier to do system identification: the optimization problem becomes well posed and the solutions come out much better. If one variable is of order 10 to the power 5 and another is 0.1 or 0.2, you will have trouble getting good models. This is a numerical difficulty, not stated in many places; in Ljung's book, somewhere in the last chapter on practical tips, you will find it. Why scaling makes things better, I probably do not know the full theory, but it does make things better. So that is scaling, and the weighted norm can be written as the W-weighted two norm of x squared, equal to x transpose W x, where W is a diagonal matrix with w1, w2, up to wn on the diagonal. Basically this is the square of the distance of x from the origin, the length of the vector under some scaling. Now let me switch back to the objective. The first term is the squared distance of x from the origin under such a scaling: you want to find inputs such that the sum of squared distances from the origin is as small as possible. Now, why has another term been added? If you are at some point away from the origin and you suddenly try to jump to the origin, that may demand input moves which are very large. Suppose you have a stepper motor: you cannot change it by a large amount in one second or one millisecond or whatever your sampling interval is; there is some physical limit.
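The scaling recipe above, each signal divided by its own standard deviation, is one line of numpy. The data here is synthetic, just to show the dissimilar magnitudes:

```python
import numpy as np

# Synthetic identification data with badly mismatched magnitudes:
# one output of order 1e5, one of order 0.1, and similarly for inputs.
rng = np.random.default_rng(0)
y = np.column_stack([1e5 * rng.standard_normal(200),
                     0.1 * rng.standard_normal(200)])
u = np.column_stack([50.0 * rng.standard_normal(200),
                     0.02 * rng.standard_normal(200)])

# Divide each column (each signal) by its own standard deviation.
y_scaled = y / y.std(axis=0)
u_scaled = u / u.std(axis=0)
print(y_scaled.std(axis=0), u_scaled.std(axis=0))  # both ~ [1. 1.]
```

After this, all signals have unit standard deviation, so most samples fall within roughly plus or minus 2 to 3, and the identification problem is numerically well conditioned.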
How do you incorporate that physical limit? How do you tell the optimization program: do it, but do it slowly? To do that, you penalize the sum of squares of the inputs: find input moves such that the distance from the origin is minimized, but at the same time do not use large input moves. There is one more term, for the last state: I want to reach the final state xN, which would typically be zero, and the final state is penalized separately. This term and the running terms are not really different, except that sometimes a larger weighting matrix is used for the final term. So this is the objective function: find the input that minimizes the sum of squared distances from the origin, does not make large moves, and reaches the final state as quickly as possible; the problem is posed as an optimization problem. Actually, you would like to give extra weightage to the last term, because finally you should end up very close to the origin: if I put a large weight there, the optimizer will try to find the u that makes the last term as close as possible to zero. In the derivation it will also become clear why this is done; there are some artifacts of the algebra. Now, Wx, Wu and WN here are positive definite matrices; these are tuning matrices which I choose to balance between the speed of recovery, which is this quantity, and the penalty on large input moves, because I do not want sudden changes in the manipulated variables.
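Collecting the three pieces just described, the objective can be written compactly in standard LQ notation (the weights Wx, Wu, WN are the ones discussed above):

```latex
\min_{u_0,\ldots,u_{N-1}} \; J
  = x_N^{T} W_N\, x_N
  + \sum_{k=0}^{N-1} \left( x_k^{T} W_x\, x_k + u_k^{T} W_u\, u_k \right),
\qquad \text{subject to } \; x_{k+1} = \Phi x_k + \Gamma u_k .
```

Each term is a weighted squared distance of the form x transpose W x: the sum penalizes distance from the origin along the way, the Wu term penalizes large input moves, and the separate WN term penalizes the final state.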
typically positive matrices with diagonal matrices with positive entries okay how do I solve this problem okay I do it in my next lectures this is done using a celebrated approach called Bellman's dynamic programming okay this is one of the landmark development in the area of now there is a trouble here in Bellman's dynamic programming you start from that time last time okay you start from time n okay and you work backwards okay it is a funny situation in observer you are working forward in time here you solve the problem backward in time okay nice thing about this Bellman's dynamic programming is that you will get elegant close form solution okay but you first solve problem at instant capital N make it optimal then come back to n-1 make that optimal then come back to n-2 make it optimal like this you go to 0 okay so this Bellman's dynamic programming this is the basic idea that solve the problem at k optimally and then move backward in time and then again you know what we are going to get is something called well I will just show you where we are going to reach with lot of algebra we get into a controller which is something like this time varying controller in Kalman gain you got time varying gain right here you will get time varying gain again exactly you know mirror image optimization problem there optimization problem here okay and you get what are called as Riccati equations again okay but Riccati equations working backward in time those Riccati equations are moving forward in time these Riccati equations move backward in time so what to do about it we will look at it afterwards so this Riccati equations solution of Riccati equation gives you the controller.