So last lecture we looked at linear quadratic optimal control, which forms the basis of model predictive control. In this lecture I want to complete the remaining part of linear quadratic optimal control and then move on to the main control scheme I want to talk about, model predictive control; hopefully we will get there by the next lecture, at the end of this week. I introduced linear quadratic optimal control mainly because it forms the background for model predictive control, and we will see how, in terms of concepts, there is a smooth transition from one idea to the other. Many books do not introduce it this way; they present model predictive control unconnected with LQG, because the way it developed in industry was not exactly the way I am presenting it. From a historical viewpoint, the connections between LQG and predictive control were established somewhat later in the literature, but the connections always existed; they were simply not so apparent in the initial works.

So let us start by looking at this LQ (linear quadratic) controller. I have added one more term here: linear quadratic Gaussian regulator. Initially I developed the controller only for a clean system, the most ideal case, with no state disturbances and no measurement noise, and we looked at a very simplified problem: the system is perturbed from the origin and you want to bring it back to the origin. In reality there are state disturbances and measurement noise, and the moment we relax those two assumptions we have to make some modifications.
Now, the moment you have this kind of model, in which you neglect neither the state disturbance nor the measurement noise, and you have a characterization of these two signals (both are zero-mean white noise with known covariances), how do you implement the controller? You implement it not using true state feedback but using output feedback: you construct a state estimator. Here I have constructed a predictor, just to keep my development simple; it is possible to do the same thing using a Kalman filter. I am choosing to develop the entire notes using the Kalman predictor only to keep the mathematics, the algebra, simple. With the Kalman filter I would have to write one more equation (prediction plus correction), whereas here I have a single equation, so the subsequent algebra becomes simpler. Once you understand the concept, extending it from the Kalman predictor to the Kalman filter is a minor modification.

Recall that for linear quadratic optimal control we obtained Riccati equations. I was interested not in the dynamic Riccati equations but in the steady-state Riccati equations, which gave me the gain matrix G∞; that is my controller gain. And L∞ here is the observer gain. How do you get the observer gain? You design the observer through the Kalman approach, the Kalman predictor, which has its own Riccati equations. So when you design this feedback controller you actually have to solve two sets of Riccati equations: one for the observer, to obtain L∞, and one for the controller, to obtain G∞.

In your assignment I am expecting you to implement this control law. I am not expecting you to implement an observer with a time-varying gain; forget the time-varying gain and directly get the steady-state solution of the Riccati equations, both for the controller and for the observer. Just find L∞ and G∞ and implement the control law as shown. How do you solve the algebraic Riccati equations? That is a complex business; leave it to MATLAB. For discrete time there is a routine called dare (discrete algebraic Riccati equation) that will give you the controller gain matrix. For the Kalman filter there is a routine called dkalman: just give it the system matrices and it will find the gains for you. It will give you both the predictor gain and the filter gain.
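If you want to check these two Riccati solves outside MATLAB, here is a minimal Python sketch using SciPy's `solve_discrete_are`. The system matrices, weights, and noise covariances below are placeholder values I have made up for illustration, not the lecture's example:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical 2-state, 1-input, 1-output system (illustrative values only)
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])
Gamma = np.array([[0.0],
                  [0.5]])
C = np.array([[1.0, 0.0]])

Wx = np.eye(2)            # state weighting in the LQ objective (assumed)
Wu = np.array([[1.0]])    # input weighting (assumed)
Q = 0.01 * np.eye(2)      # state-noise covariance (assumed)
R = np.array([[0.1]])     # measurement-noise covariance (assumed)

# Controller Riccati equation -> steady-state controller gain G_inf
S = solve_discrete_are(Phi, Gamma, Wx, Wu)
G_inf = np.linalg.solve(Wu + Gamma.T @ S @ Gamma, Gamma.T @ S @ Phi)

# Observer Riccati equation (solved by duality) -> Kalman predictor gain L_inf
P = solve_discrete_are(Phi.T, C.T, Q, R)
L_inf = Phi @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
```

Both gains then go into the control law u(k) = -G∞ x̂(k), with x̂ propagated by the predictor; MATLAB's dare should return the same S for the controller problem.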
You have to choose either the filter gain or the predictor gain and implement the control law with it; just pick the steady-state gain and implement it. That is the closed-loop control part. For the state estimation part, when you are running the Kalman filter, write out the time-varying Riccati equations and watch how P(k) changes as a function of time and how L(k) changes as a function of time; after some time L(k) will converge to L∞. You can see that happen there. These two are different exercises. I will upload detailed instructions today on how to submit this assignment. As I said, I want to include one more thing. The three easy components of the assignment are system identification, observer programming, and the LQ (linear quadratic optimal) controller; the fourth is to implement model predictive control. That one will carry higher weight than the other three: if the assignment is out of 25 marks, the first three are 5 marks each and MPC is about 10 marks. Getting predictive control implemented is the main thing.

Is the idea clear so far? Right now we are again looking at a very limited problem, that of moving the system from somewhere to the origin. What about stability? You can construct the closed-loop equation the same way we constructed it last time for the Luenberger observer: one equation is the true evolution of the plant, the other is the estimation error. There is one technical problem in directly applying a stability condition to this equation, because we have now included w(k) and v(k), and the trouble with w(k) and v(k) is that we have assumed they are Gaussian. A Gaussian signal is an idealization: for a random variable x, the Gaussian distribution is defined for x going from minus infinity to plus infinity. The distribution makes sure that large values have very, very low probability, but nevertheless the domain is unbounded, so technically this system has unbounded inputs. In reality disturbances are never unbounded; the state noise and the measurement noise are always bounded. But the model we use for them says they are unbounded, and that creates a technical difficulty in talking about bounded-input bounded-output stability, because Gaussian inputs, as we have defined them, are not bounded inputs. If instead you view this equation as a deterministic difference equation in which w and v are some bounded deterministic inputs, then you can talk about bounded-input bounded-output stability by looking at the eigenvalues of this matrix: this is the augmented state vector, and this is the matrix whose characteristic equation determines whether the closed loop is stable. We made the same argument earlier for the state feedback control law.

If you want a Gaussian-like distribution on a bounded domain, you have to define something called a truncated Gaussian distribution. Take the scalar case: a Gaussian random variable x is defined for x from minus infinity to plus infinity; you cannot define a Gaussian distribution for a bounded x. If you want x to vary between, say, plus and minus 3, the distribution is no longer Gaussian; it is almost Gaussian, with truncation. The moment you say the input is a Gaussian random variable, it is not an input defined on a bounded domain, and that is the technical difficulty. Strictly speaking, a model of a real-world problem with an input that can take unbounded values is not realistic, but it is mathematically convenient: if I were to work with truncated distributions the algebra would become quite messy. So technically the input is defined on an unbounded domain, even though in reality no unbounded input exists.

So basically we have seen, through a Lyapunov-type argument, that if the eigenvalues of Phi - Gamma G∞ are inside the unit circle and the eigenvalues of Phi - L C are inside the unit circle, that is, if the controller and the observer are independently stable, then you can just combine them and the combined system will be closed-loop stable. This, roughly, is the idea behind the separation principle: you can design the observer and the controller separately, and in the nominal case they will jointly give you a stable system.

Now let us move on to a more realistic formulation. (Yes, I can use a Kalman filter here also; I am using the Kalman predictor just to keep the algebra simple, since the whole development becomes easier. It is not that I cannot do it with the Kalman filter. And why am I writing k given k here? Sorry, it should be k given k-1, not k given k; thanks for catching that.) The first simplifying assumption we made is that the disturbances in the state are white noise. That is a very simplistic assumption: the disturbances influencing the state dynamics could be colored or drifting, and assuming white noise is a simplification that helps us do some mathematics. In reality the noise need not be white; it could be colored, and we will see how to deal with that. The second thing is that in all these derivations I assumed that the Phi, Gamma, and C matrices used for the plant and used in the model are identical.
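The separation argument can be seen numerically. The following is a small Python sketch (all numbers made up for illustration, including the gains G and L) showing that the augmented closed-loop matrix is block triangular, so its eigenvalues are exactly the controller eigenvalues of Phi - Gamma G together with the observer eigenvalues of Phi - L C:

```python
import numpy as np

# Illustrative numbers only (assumed, not the lecture's example)
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])
Gamma = np.array([[0.0],
                  [0.5]])
C = np.array([[1.0, 0.0]])
G = np.array([[0.2, 0.6]])     # some stabilizing controller gain (assumed)
L = np.array([[0.5],
              [0.1]])          # some stabilizing observer gain (assumed)

# Augmented dynamics of the state x(k) and estimation error e(k):
#   x(k+1) = (Phi - Gamma G) x(k) + Gamma G e(k)
#   e(k+1) = (Phi - L C) e(k)
A_cl = np.block([[Phi - Gamma @ G, Gamma @ G],
                 [np.zeros((2, 2)), Phi - L @ C]])

eig_cl = np.sort_complex(np.linalg.eigvals(A_cl))
eig_sep = np.sort_complex(np.concatenate([
    np.linalg.eigvals(Phi - Gamma @ G),
    np.linalg.eigvals(Phi - L @ C),
]))
# Block-triangular structure: eig_cl equals the union of the two separate sets
```

If both separate sets lie inside the unit circle, the combined observer-plus-controller loop is stable, which is exactly the separation principle.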
This is also just a simplifying assumption, made to get some insight into the ideal behavior; in reality they differ. But before that: right now this particular controller only brings the system from a non-zero initial state to the zero state. When you say regulator, you are regulating at the origin; the law u(k) = -G∞ x̂(k) only ensures that x(k) goes to zero as k goes to infinity. So I have designed a controller that moves the system to the origin from a state which is not at the origin. Why did I do all this? Because I wanted to derive the Riccati equations; that is why I made these simplifications. Now I know how to compute G∞: I form the algebraic Riccati equations and compute it. And since I do not have true state measurements, instead of implementing the controller with the true states I construct an observer and implement the control law using the estimated state. What does this control law achieve? At time zero the state is not at the origin, and the law moves it to the origin. This is a very, very limited problem. Now I want to turn it into the realistic problem of moving the system from anywhere to anywhere. How do I modify the control law to do that? That is the question.

We have already said that under ideal conditions the observer and the controller together give stable closed-loop behavior, so it is fine to implement the control law using estimated states instead of true states. The observer and the controller are, in a sense, married together: you cannot implement a state feedback control law without an observer, or a soft sensor, that gives you estimates of the unmeasured states; using those estimates you implement the control law.

The other issue is that there can be mismatch between the model and the plant. If there is a mismatch, this control law may not take you to the desired location. If you use the control law blindly when there is model-plant mismatch or a disturbance, you will get an offset: the system will not move to the origin, it will settle at some other point. You all know this from PID control: with only a proportional controller you get an offset, so you need some way of introducing integral action. There are several ways of introducing integral action into this system; I am going to talk about two, of which I will emphasize one.

So what if I want to move the system to some specified non-zero set point, not to the origin? Take the quadruple-tank setup: when you put up the system and develop the state-space model, the origin means the initial steady state. What if I want to change the level set points? I do not want to keep the system at the initial steady state all the time; I want to move to other set points. That is the first issue. The second is that the model and the plant need not be identical: you know the real system is nonlinear and you are identifying a linear state-space model, so there is an approximation. The third is that there could be drifting disturbances which I have not accounted for in the model. All three of these create problems, so I am going to modify my control law using two additional terms: xs, a steady-state target, and us, the corresponding steady-state input. If you look at the modified law, I am keeping the form of the control law the same, state feedback, except that I apply corrections to the input and to the state. How to compute these corrections, we will come to. The realistic LQ control law I am going to implement is u(k) = us - G∞ (x(k) - xs). The question now is: how do I find us and xs so that I reach an arbitrary desired set point, reject the disturbances, and have the controller work even when there is model-plant mismatch? That is the target now. I am moving from an unrealistic but mathematically convenient formulation to a realistic formulation, and the results of the previous part carry over: I compute G∞ using the algebraic Riccati equations, which we have already sorted out. Given this gain, what remains is to sort out the non-zero final state, the disturbances, and the model-plant mismatch. These xs are called target states.

Now look at this equation; this is my model. For the moment do not worry about the observer and the controller. Call the final steady state xs. If xs is the zero vector, what is ys? ys = C xs, which is also a zero vector. The dimensions differ: xs is n x 1 and ys is r x 1, since there are n states and r outputs, but both are zero vectors. So in that case the output set point is zero: when you say you want to reach the target state zero, the final output is zero, and the set point r is also the zero vector. Now, what if I want to reach a set point r which is non-zero? I am going to find the value of the steady state xs that corresponds to this r not equal to zero. Suppose in our four-tank problem r is given as plus four centimeters and minus three centimeters. These are deviation variables, differences from the initial state: I want to raise the level of tank one by four and reduce the level of tank two by three. So what is the steady state that will give me this r? I can find it, because I have the model. At steady state, xs = Phi xs + Gamma us, and r = C xs, since I want to reach y = r. I still do not know xs and us; I have to find them from these two equations. From the first, (I - Phi) xs = Gamma us, so xs = (I - Phi)^(-1) Gamma us. Substituting into the second, r = C (I - Phi)^(-1) Gamma us. Is everyone with me on this? So what should us be to give me this r? Suppose the matrix C (I - Phi)^(-1) Gamma is square, which happens when the number of inputs equals the number of outputs; in the quadruple-tank setup you have two measurements, level one and level two, and two inputs, the valve positions. If you look carefully, this matrix is actually the steady-state gain matrix; call it Ku. Then us = Ku^(-1) r. Substitute this us back into xs = (I - Phi)^(-1) Gamma us and you get xs. See what I have done: I expressed xs in terms of us from the steady-state equation, substituted it into the output equation, solved that equation for us, and then recovered xs. Is the sequence clear? Try to derive it yourself; it is very simple. I am given a set point, and using the model equations I can find the corresponding steady state.

If the matrix is not square, you can use an appropriate pseudo-inverse of the gain matrix Ku; it is not always square, but, as with our earlier assumptions, for now assume it is. So if you want to control the system at a specified set point, what is the condition on the number of inputs and outputs? If you have two outputs, you must have at least two inputs; you can have more. The number of inputs should always be equal to or more than the number of outputs. If it is not, you cannot solve this problem and you cannot reach the desired set point; that is fundamental. Just look at the algebraic equation: if the number of inputs is less than the number of outputs, can you uniquely define inputs that take the system to the desired outputs? The input space is smaller than the output space; can you then take the system from anywhere to anywhere? It is just a linear algebra problem. Put it in abstract form: when does A x = b have a unique solution? The problem Ku us = r is no different; these two problems are the same.
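The target calculation just derived, us = Ku^(-1) r followed by xs = (I - Phi)^(-1) Gamma us, can be sketched in a few lines of Python. The 2x2 matrices below are made-up numbers in the spirit of the quadruple-tank example, not the identified model:

```python
import numpy as np

# Assumed 2-state, 2-input, 2-output model (illustrative values only)
Phi = np.array([[0.9, 0.0],
                [0.1, 0.8]])
Gamma = np.array([[0.5, 0.0],
                  [0.0, 0.4]])
C = np.eye(2)                 # both tank levels measured

r = np.array([4.0, -3.0])     # set point in deviation variables (cm)

# Steady-state gain matrix Ku = C (I - Phi)^(-1) Gamma
Ku = C @ np.linalg.solve(np.eye(2) - Phi, Gamma)

us = np.linalg.solve(Ku, r)                            # square case: plain inverse
xs = np.linalg.solve(np.eye(2) - Phi, Gamma @ us)      # corresponding target state
```

For the non-square case you would replace the solve for us by a pseudo-inverse, e.g. `np.linalg.lstsq`.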
Map them onto each other: if the dimension of r is larger than the dimension of us, you cannot manipulate the inputs to take the system from any set point to any set point. So the dimension of r should be less than or equal to the dimension of us. If that is the case, I can always take the situation where they are equal; I may not want to manipulate more inputs than the number of controlled outputs. (Note that states can be controllable, that is, controllable to the origin, without being reachable; reachability is a stronger condition. If you have an example, show me and we can look at it.) An under-determined system is always a problem. We are talking about the well-conditioned problem, where the number of inputs is equal to or more than the number of outputs, and then we are able to take the system from anywhere to anywhere. If you have a situation where the dimension of us is larger than the dimension of r, you can use a pseudo-inverse, work with that, and get one solution.

Okay, let us go back. How to get xs and us knowing r is clear now, at least for the ideal case where the number of inputs equals the number of set points, that is, the number of outputs. What I am doing is this. I have the system: (1) x(k+1) = Phi x(k) + Gamma u(k), and (2) y(k) = C x(k). I also have the steady-state equations: (3) xs = Phi xs + Gamma us, and (4) r = C xs, and I know how to find xs and us given r. All I am going to do is subtract equation (3) from equation (1) and equation (4) from equation (2). If I subtract, I get x(k+1) - xs = Phi (x(k) - xs) + Gamma (u(k) - us), and y(k) - r = C (x(k) - xs). Is everyone with me on this? (You want to consider everything in one shot; that is not possible. We develop things in pieces, and the entire thing becomes clear only at the end; do not jump, just be patient. We will consider noise, all kinds of noise. I have to explain a concept by first removing certain complications; the final expression looks very complex, so to explain it I strip some things away. Will zero-mean noise change xs? No. Non-zero-mean noise will change xs, and how to deal with that, I am coming to.) Defining the perturbation variables dx(k) = x(k) - xs, du(k) = u(k) - us, and dy(k) = y(k) - r, I can write this as dx(k+1) = Phi dx(k) + Gamma du(k) and dy(k) = C dx(k). This is a perturbation model, developed around the steady state that takes me to the set point r. The trick we play is to design an LQ controller for this perturbed system. What is the origin of the perturbed system? It is xs: reaching the origin of the perturbed system means reaching xs, and reaching xs means reaching the set point r. This is the trick I use to apply the same old control law in the new context. I want to reach a non-zero set point; I found the steady state corresponding to it, subtracted, and created this perturbation system; for the perturbation system I design the control law du(k) = -G dx(k), which is nothing but u(k) = us - G (x(k) - xs). You see what I am doing: I am just rewriting it back in the original form. So the reason I am modifying the control law is to build in the steady state that takes me to the desired location. Now I am going to talk about two approaches.
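To see that the modified law really parks the system at xs, here is a small closed-loop simulation sketch in Python. The plant matrices, the gain G, and the target pair (xs, us) are assumed illustrative values, chosen so that xs = Phi xs + Gamma us holds:

```python
import numpy as np

# Assumed plant, gain, and target pair (illustrative values only)
Phi = np.array([[0.9, 0.0],
                [0.1, 0.8]])
Gamma = np.array([[0.5, 0.0],
                  [0.0, 0.4]])
G = np.array([[0.4, 0.0],
              [0.1, 0.5]])        # stand-in for the LQ gain G_inf
xs = np.array([4.0, -3.0])        # steady-state target
us = np.array([0.8, -2.5])        # corresponding steady-state input

x = np.zeros(2)                   # start at the old steady state (the origin)
for k in range(200):
    u = us - G @ (x - xs)         # modified law: u(k) = us - G (x(k) - xs)
    x = Phi @ x + Gamma @ u       # plant update
```

Since dx(k+1) = (Phi - Gamma G) dx(k) and Phi - Gamma G is stable here, x converges to xs, i.e., the output converges to the set point.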
approaches of dealing with modipline mismatch noise covered noise and all kinds of disturbances that can occur the first approach is I am going to talk about the implementation right now you will get more and more understanding about this as you actually work with it okay so my observer is based on this okay when the model is perfect what we know is that ek is a zero mean signal it is like a uncorrelated white noise sequence it is a perfect white noise signal moment there is a mismatch between the plant and the model moment it happens that the true plant evolves according to some ?-g?-c? which are different from ?-c okay this ek here this ek here is no longer a white noise sequence okay is no longer a white noise sequence so this can happen under two situations one is that this ?-g?-c? are different from ?-c it can happen when your model the true plant evolves let us say according to this there is another term here okay this is a drifting disturbance this is a drifting disturbance which you have not accounted for in your model when you develop the Kalman filter you never knew about this okay so you are not accounted for this guy here but in the plant it is there okay when these two things happen what will happen is that the innovation sequence is no longer a white noise okay this innovation sequence becomes a colored noise I am skipping the proof if you want I can give you some of the reading material on this that why it becomes a colored noise but right now trust me that it becomes a colored noise so what is the meaning of it becoming a colored noise if there was no plant model mismatch okay there was no plant model mismatch then ek would be something you know this is my ek would be something like 0 mean so this is 0 okay so ek will be something like ek will be something like this but moment there is a model plant mismatch this ek will be a drifting signal okay ek can be a drifting signal which is there is a time correlation between between k and k-1 k and k-5 you 
take any difference there is a time correlation it is no longer a 0 mean signal okay this is a 0 mean signal means actually if you take a moving average it will come out close to 0 okay here it will not come out close to 0 here you will see that there is a trend okay now one can of course do modeling of this online using all the time series method that we have studied I am not going to get into that right now I am going to have a simplified you know fix to deal with this particular problem that it becomes colored okay what I am going to do is I am going to find out what is the drifting mean of this signal okay what is the mean of this signal which means I am right now interested in finding out if this is changing like this I am just I am just interested in knocking off this high frequency noise I want to find out this trend I want to find out this dominant low frequency trend I want to find out this dominant low frequency trend okay and then I am going to take this dominant low frequency trend as an indicator of unmeasured disturbances okay if unmeasured disturbances where not there ek will be like this if it is there it will be some drift what is that drift how do I get that drift from the data of ek which is coming okay so what I am going to do here for that is I am going to filter this signal okay this is the first order difference equation what does the filter do what is the filter it filters certain frequencies and it gives you a signal of lower frequencies okay filter is when I use the word filter okay particularly all those chemical engineers will start thinking in terms of something in you know electronics or electrical engineering domain and we do not know what is this it is not like that filter is anything that filters high frequency signal in chemical engineering you know if I have a flow and if I want to remove high frequency oscillations from the flow what I will do is I will put a tank in between okay and the flow comes into the tank and outflow of 
the tank is given to the reactor I have put a filter in between to filter out the high frequency noise okay the level in the tank will fluctuate but the flow out will not fluctuate depending upon how broad or how small the tank is you will be filtering different frequencies in the flow signal okay so a filter essentially when it comes to computer programming a filter is nothing but a differential equation or a difference equation a difference equation is a filter okay a difference equation will filter input given to the difference equation the output of the difference equation will be a filtered output okay so what will decide the filtering ability the time constant okay the time constant what is in the case of discrete time where does the time constant this is coming to picture sampling time and I can values of I can values of 5 okay so I am going to specify this 5 I am going to specify this 5 matrix okay I should it should be stable filter of course so I have to choose this 5 matrix okay who is I am going to choose this 5 matrix to be a diagonal matrix okay alpha 1 alpha 2 alpha 3 alpha r so I am going to filter this using a signal which is you know this is a simple first order filter with 1-5e this will make sure that the gain of the filter is unity how do you find out gain of the system the steady state gain what is the input to this this difference equation what is the input e is the input e is the input okay e filtered is the e filtered is the output okay e filtered is the output e is the input okay and then this particular difference equation will give me a filtered output maybe let us do let us see whether in MATLAB I can show you this okay so I have created a signal which is colored noise which is some kind of this thing and I will just say plot this e k e okay I think let us do a little bit let us let this signal dominate and reduce the okay so this is a drifting signal okay now I want to filter this signal okay so I am going to write a difference 
equation and filter it. I cannot just write the loop directly; first let us create a dummy vector, ef = zeros(200,1), and now for i = 1 to 200, ef(i+1) = 0.8*ef(i) + (1 − 0.8)*e(i). Let us take the filtering constant to be 0.8; the (1 − 0.8) factor is there to create a unity-gain filter: the new filtered value is 0.8 times the last filtered value plus 0.2 times the current innovation. So ef is the filtered signal. Now let us say hold on and plot ef on the same graph. You see there is a time shift: this is my filtered signal, this is my original signal. There is a one-sample time shift which I should really account for; I have not done that here, but I can. What about smoothing? This is a smooth signal compared to the original: when I moved from here to here, the high-frequency component has been knocked off. It is time shifted, but have I recovered the changing mean? Suppose I superimpose this on that: the high frequency is knocked off, and the low-frequency drift is captured. You can make it even less oscillatory by changing the constant: instead of 0.8 I can use 0.9, or 0.95. If I make this 0.95, I have to make the other coefficient equal to 1 − 0.95 so that the steady-state gain of the filter remains unity. I do not want to change the gain of the signal; I just want to knock off the high-frequency component.
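The loop just typed in MATLAB can be sketched in Python as well; this is only a rough equivalent, and the coloured test signal below is synthetic, since the workspace variable e from the demo is not shown:

```python
import numpy as np

def unity_gain_filter(e, alpha):
    """First-order unity-gain filter: ef[i+1] = alpha*ef[i] + (1 - alpha)*e[i]."""
    ef = np.zeros(len(e))
    for i in range(1, len(e)):
        ef[i] = alpha * ef[i - 1] + (1 - alpha) * e[i - 1]
    return ef

rng = np.random.default_rng(0)
n = 200
drift = np.sin(np.linspace(0, 4 * np.pi, n))       # slow trend standing in for a drifting mean
e = drift + 0.5 * rng.standard_normal(n)           # "coloured" innovation: drift plus white noise

ef = unity_gain_filter(e, alpha=0.8)

# the filtered signal is much smoother than the raw innovation sequence
print(np.std(np.diff(ef)) < np.std(np.diff(e)))    # prints True
```

Raising alpha toward 0.95 smooths more but lags the drift; lowering it toward 0.6 tracks the mean faster at the cost of more residual oscillation, which is exactly the trade-off being tuned on screen.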
And let me plot ef again, in red this time. Well, when I increased the constant there is a trouble: it is not able to capture the full height. But what about the smoothness? The smoothness is there. I could also move in the other direction: with 0.6, the black one, there is less smoothing but it is trying to capture the mean pretty well, as compared to 0.9. So there is a way of recovering the drifting mean and knocking off the high-frequency part just by doing simple unity-gain filtering; that is what I want to emphasise here. Is it clear from this picture what I am trying to do? So ef is the signal containing the drifting mean. How do you choose this α? The golden question: it is a tuning parameter. Typically you choose it between, say, 0.6 and 0.9 and tune it by watching how the closed loop behaves; a typical value is 0.9, I would say, but it depends on the frequency content and you need some experience tuning it. So I can use this filtered signal. What does it contain? Suppose there is some optimal way of choosing α and knocking off the high-frequency part: the high-frequency part is like white noise, the low-frequency part is like a drift. This drifting signal contains everything that is not explained by the model: model-plant mismatch, drifting disturbances, coloured, that is non-white, disturbances. Everything is contained in e(k). If the model were perfect and the true plant noise were truly white, then e(k) would be white noise; but in reality e(k) is not white. If it is not white,
we try to find its mean. This mean signal contains the information about model-plant mismatch, unmeasured disturbances, everything that is not explained by my model. Fine. Note that at this point I am not making any assumption about what kind of disturbances exist; they can be of any type, coloured, non-stationary. Now what I do is modify my controller like this, in terms of x_s and u_s, and then solve this equation: I am going to find the steady-state x_s for the specified set point r(k). I am writing r(k), not a constant r. Earlier in the development I assumed a constant set point; what if there is a set-point trajectory? So the set point is changing as a function of time. And what is this e_f here? It is the filtered innovation we created above, and it is used in this equation. What information is this component bringing in? Unmeasured disturbances, unmodelled dynamics; everything is captured as an effect through e_f. This e_f is changing as a function of time, which means I cannot talk of one steady-state target; I have to talk of a time-varying steady-state target. That is why I am doing this. So now I am going to solve this equation just like I solved it before, in the same way, except that the signals are time varying: r(k) is not constant, and I am also looking at the correction in the state and the correction in the output, which come in through the filtered innovations.
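A minimal sketch of this time-varying target computation, assuming the target equations take the form x_s = Φx_s + Γu_s + L·e_f (state correction) and r = Cx_s + e_f (output correction), consistent with the description above; the numerical matrices are purely illustrative, not from the lecture:

```python
import numpy as np

def steady_state_target(Phi, Gamma, C, L, ef, r):
    """Solve    x_s = Phi x_s + Gamma u_s + L ef
                r   = C x_s + ef
       for the time-varying targets (x_s, u_s). lstsq acts as a
       pseudo-inverse, so the same code covers non-square systems."""
    n, m = Gamma.shape
    p = C.shape[0]
    A = np.block([[np.eye(n) - Phi, -Gamma],
                  [C, np.zeros((p, m))]])
    b = np.concatenate([L @ ef, r - ef])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z[:n], z[n:]

# illustrative two-state, one-input, one-output system
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])
Gamma = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.2], [0.1]])
ef = np.array([0.05])                  # current filtered innovation
r = np.array([1.0])                    # current set point r(k)

xs, us = steady_state_target(Phi, Gamma, C, L, ef, r)
# the computed target satisfies both steady-state equations
print(np.allclose(Phi @ xs + Gamma @ us + L @ ef, xs),
      np.allclose(C @ xs + ef, r))     # prints True True
```

Because e_f(k) and r(k) change at every instant, this solve is repeated at every sampling time, giving the moving targets used in the modified control law.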
Yeah, so here we are. Now I am going to solve for the steady state here, of course in the way I solved it earlier: solving this equation gives me u_s, and then substituting for x_s gives me the steady-state state. So these are the time-varying x_s and u_s I get when I solve this. For square systems you can write this with K_u inverse; when the system is not square you replace K_u inverse by the pseudo-inverse and the same thing works. So these are my moving targets, and with these moving targets I am going to implement this control law. This modified control law will take care of model-plant mismatch and drifting disturbances; it will take care of everything. Even though I have done the development in bits and pieces, initially assuming there are no disturbances, the most ideal case, I used that situation only to find G. Once I know G, I fix up for unmeasured disturbances and model-plant mismatch through this correction. So this is a fix which helps you deal with model-plant mismatch and unmeasured disturbances. When you implement this control law in your project, this is what you are actually going to do: you will write the observer, take the innovations, filter them, and from the filtered innovations find the new target state at every time instant; the control law is then this one, and it will take care of everything. I will show you an example of how this works, and when you actually implement it you will see how it works. Now, is this the only way of doing it? There are other ways of doing this. One is
what is called the state augmentation approach. I am not going to get too much into the details; it is a similar approach. What you do here is add extra states β and η: you augment the system with artificial extra states. If you see here, these artificial states are pure integrators; you add pure integrators into your system model equations, somewhat arbitrarily. Then you develop a controller for this augmented system and an observer for this augmented system. The white noise here and the white noise here driving these extra states are treated as tuning parameters: one has to tune their covariances. I particularly find the previous approach which I discussed simpler; just stick to it when you study this for the first time. When you become an advanced user, if you want to switch to the other approach you can do that. The innovation-bias approach is an easier way to get rid of coloured noise and plant-model mismatch and to move to any set point; everything can be achieved using that formulation, and the same thing can alternatively be achieved through what I am discussing here, which you can just read through. See here: there is something like a state correction and something like an output correction; these are the arbitrarily added augmented state vectors. How do you choose Γ_β and C_β? I have given some guidelines here; there are different ways of doing this. There is something called the output-bias formulation and something called the input-bias formulation: you can choose the drifting bias to be in the inputs, or in the outputs, or partly in the inputs and partly in the outputs, and all kinds of
combinations. You can also choose to put it in the disturbance. You augment the state-space model, then work with the augmented model and develop your LQ controller for the augmented system; I have given the methods of augmentation. The artificial introduction of integrating states will remove the offset. Sorry, it will remove the offset, not the model-plant mismatch: you are arbitrarily adding some integrators, making fixes in your model, all just to make sure that there is no offset. This is a fix; you should remember that it is a fix. Yes, that is a good question. There is a fundamental limitation: to maintain observability of the additional states, you cannot add more artificial states than the number of measured outputs. So if there are 20 states and 5 measured outputs, you can add at most 5 artificial variables. Out of those, all 5 can be in the input, all 5 in the output, 3 in the input and 2 in the output, whatever; but you cannot add more than 5. That is a fundamental limitation. Actually, the innovation-bias approach I talked about is also adding artificial variables; it is not very obvious here, you have to work it out a little bit, but it adds exactly as many as the number of outputs, because the number of innovations equals the number of outputs, so that condition is automatically satisfied there. Here you have to make sure you do not add more, otherwise you lose observability, you cannot design an observer, and you get into trouble. So I will just show you an example here. You do pretty much the same things when you do augmentation: you have to find
a target state, you modify the control law in the same way, and then you implement the control law; there is no difference, everything is the same. And I have mentioned here that in case K_u is not invertible you use the pseudo-inverse. I am just skimming through this because it is another approach; you can stick to one of them, and for completeness I have put the other approach in my notes. So if you are not convinced by the innovation-bias approach, you can use the other one if you like it that way. So what I am going to do here is a comparison: I am going to develop a control scheme for a CSTR, the reactor problem we have been looking at for quite some time. I am going to measure only temperature, and estimate concentration and temperature using a state observer. Then I am going to develop a multi-loop control scheme with two PI controllers and a multivariable control scheme, LQG, and compare three controllers: a multi-loop PI controller, a multi-loop PI controller with decoupling, and an LQG controller. My control problem is this: I want to give a step change in the concentration set point while keeping the temperature set point the same, then ramp up the concentration set point, and then introduce an unmeasured disturbance. My controller should be able to reject the unmeasured disturbance and move the system to the new set point. All three controllers are given the same task, and you will see how the closed-loop performance compares. I have done this using the state augmentation approach; I could have done it using the input-bias approach or the innovation-bias approach, and the results would not be too different. I have just shown here how to design the controller: I get the observer gain here, and I will also get the controller gain; the controller gain is found for the original system and the observer gain is
for the augmented system, and then you do the controller implementation through the augmented state-space model. So just see here: first I am showing you the observer. Initially I have designed the observer by doing the state augmentation, assuming that there is a drifting disturbance in the unmeasured input. Then I have augmented the state-space model. Only temperature is measured; concentration is not measured, and the new augmented state introduced here is also not measured. Now see here: no model-plant mismatch, which means the plant is linear, the model is linear, everything is perfect. I gave a step change in the inlet concentration as a disturbance, and my observer is able to track it; I am not measuring the inlet concentration, yet my observer is able to track it. So I have a state estimator which, mind you, estimates the reactor concentration, which is not measured since only temperature is measured, and which also estimates the inlet concentration and the change in the inlet concentration through this augmentation. So it is as if, through the model, I have a disturbance observer: the true disturbance is not measured, but through the model I am constructing an estimate, and this disturbance estimate is quite okay. I could even do something like feedforward control: I now have a disturbance measurement, indirect, but a disturbance measurement, so my LQ controller effectively gets converted into a feedforward controller. I am also showing here what happens if the plant is nonlinear and the observer is linear: there is a slight mismatch, but nevertheless it does capture the step
change. So the observer is worse now, but it is okay; giving you something is better than having no information at all, and this mismatch will be taken care of by my LQ controller. I chose some W_x matrix; I have to choose a weighting matrix for the states, and I also choose a weighting matrix for the inputs. Then G∞ is found using the DLQR subroutine of MATLAB: you just give the Φ and Γ matrices to DLQR along with the weighting matrices, it solves the Riccati equation for you and gives you G∞. I have shown you two different ways of getting G∞: in one case the weight on the concentration state in W_x is 1, in the other case I have increased it to 100. What will be the difference between the two controllers? If the input weighting is relatively larger, the movement of u will be smaller, so the gains will be smaller: that will be a sluggish controller, while the other will be a more active controller. That is reflected in the gain values here: you can see this is minus 47 and this is minus 1; the values in the gain matrix are reduced. What is this telling us? The first state variable is concentration, the second is temperature. By saying that changes in the concentration should be weighted by 100, I am giving more importance to perturbations in the concentration than in the temperature, so my controller will bother more about controlling the concentration than the temperature. These values are what you use to fiddle with the performance: you tell the controller what is more important and what is less important through them. So first I am showing you two PID controllers which are not coordinated. It is a disaster. You can see it somehow takes the system here, and it somehow rejects the
disturbance, but the system is all over the place; it is not great control. Mind you, I am not doing anything hanky-panky here: I have chosen the PI controller settings using standard textbook design methods, pole placement or whatever; I have not deliberately found PI settings that give bad performance. This is what the design approach gives. Now, once I move to the PID controller with decoupling, does it help? It does. You can see that the performance, at least visually, is less oscillatory; it is settling much faster here, whereas without decoupling it has not settled and takes much longer. So the closed loop is better with decoupling than without. Everything is fine as long as I use the linear plant simulation; the moment I move to the nonlinear plant simulation, the benefit of decoupling seems to go away. Decoupling seems to work when the plant is linear, the model is linear, everything is perfect; I keep the controller the same and just change the plant simulation from linear to nonlinear, and the decoupling advantage disappears. Now look at the performance of LQG; just visually you can see the difference. Compare this performance and this performance; mind you, the time axis is the same, 25 in both, the same simulation time. Here it took about 5 or 10 minutes to reach the set point; here I just gave a set-point change and, as if there were no delay in the system, it went to the new set point. There was a small blip here, and the other output is not affected. Actually there are no separate loops here at all, nothing like two PID controllers fighting with each other; in the earlier case the two controller loops were fighting with
each other, and here there is nothing like that. Well, you will say this is happening so nicely because there is no model-plant mismatch: the plant is linear, the model is linear, everything is perfect. So I just changed to the nonlinear plant simulation: not much difference. The disturbance response has changed slightly, but not much; you get pretty much the same controller behaviour as in the linear case. This is a much more robust controller and its performance is much better. It is a multivariable controller: it simultaneously changes both inputs using the reconstructed states, and it does much, much better than two PID controllers acting independently, even for a system with just two loops. Imagine a system with many such loops acting together; that is why you need multiple operators continuously watching the plant, and it becomes a nightmare to run a big plant, because there are so many loops and some of them could be fighting while others are helping. So what are the difficulties with LQG? LQG seems to be a good solution, very nice; why did people think of moving away from it? What more was required? First, I cannot impose constraints. For example, suppose I want to say that the temperature should never cross 396, and it is actually crossing 396 here, or that it should never go below 390. Can I say this to an LQG controller? I cannot, because the LQG control law is just a gain times the estimated state: if the estimated state is large, the input comes out large. Can I say that the input cannot be higher than something or lower than something? All these constraints exist in real life, and I cannot specify them to the LQ controller, because it just applies one fixed gain. We have a fix to deal with
offset removal, but we do not have a way of dealing with constraints. There is also some difficulty in systematically dealing with model-plant mismatch; I will not go into that right now. Then there is this difficulty: if you have a large plant, and I am going to talk about a plant controlled by model predictive control with as many as 600 outputs and 280 inputs handled simultaneously, a huge system, then for such a huge system solving algebraic Riccati equations to get a gain matrix is a nightmare; you cannot do it in a reliable manner. Even when some of you start using this for six-state systems you will have difficulties. So algebraic Riccati equations can be solved for simple systems, small problems, no difficulty; but when you move to large plants it is difficult to use AREs. So they give you a good theory, everything is fine, good understanding, but you need something more than this. That is where model predictive control comes in. It is a multivariable controller that uses a dynamic model online, and it is one of the most widely used multivariable control schemes in the industry today. Something different about it is that this particular control scheme emerged in the industry and then moved into academics; most other control schemes were first developed in laboratories and universities and then moved into industrial practice. This one was developed by industrial practitioners, and then people in academics started doing the mathematics to show why it works. It is a very mature technology by now, and you can actually control very complex, large-dimensional systems using this approach. We will talk about it in the next lecture; so with this I will close today's lecture.
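For reference, the G∞ computation used in the CSTR example can be sketched in plain Python by iterating the control Riccati equation, which is what MATLAB's dlqr resolves via the algebraic Riccati equation. The matrices below are illustrative stand-ins, not the reactor model; the weight comparison simply mirrors the sluggish-versus-active discussion, where a heavier input weight shrinks the gains.

```python
import numpy as np

def dlqr_gain(Phi, Gamma, Wx, Wu, iters=500):
    """Discrete-time LQR gain G_infinity via the Riccati recursion
       S <- Wx + Phi' S Phi - Phi' S Gamma (Wu + Gamma' S Gamma)^-1 Gamma' S Phi
       iterated to steady state; the control law is u = -G x."""
    S = np.copy(Wx)
    for _ in range(iters):
        M = Wu + Gamma.T @ S @ Gamma
        S = Wx + Phi.T @ S @ Phi - Phi.T @ S @ Gamma @ np.linalg.solve(M, Gamma.T @ S @ Phi)
    return np.linalg.solve(Wu + Gamma.T @ S @ Gamma, Gamma.T @ S @ Phi)

# illustrative stable two-state system with one input
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])
Gamma = np.array([[0.0], [0.5]])
Wx = np.diag([100.0, 1.0])             # weight the first ("concentration-like") state 100x more

G_active = dlqr_gain(Phi, Gamma, Wx, np.eye(1))          # small input weight
G_sluggish = dlqr_gain(Phi, Gamma, Wx, 100 * np.eye(1))  # large input weight

# heavier input weighting -> smaller gain entries -> more sluggish controller
print(np.abs(G_sluggish).max() < np.abs(G_active).max())  # prints True
```

The same recursion run "backwards in time" is the dynamic Riccati equation from the previous lecture; iterating it to convergence gives the steady-state gain that dlqr returns directly.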