We have been discussing what is called the LQG problem, and we arrived at an expression for the process dynamics together with the estimator dynamics, where $v(t)$ is the measurement noise. That equation can be rewritten as the state equation augmented with the estimation-error equation. If you look at that augmented equation (equation 12 of the previous discussion), you see that the system matrix is block triangular. Therefore we can place the closed-loop eigenvalues and the observer (Kalman filter) eigenvalues separately; one does not depend on the other. The eigenvalues of $A - BK$ can be assigned independently of those of $A - LC$, that is, the controller gain $K$ and the Kalman filter gain $L$ can be designed independently, which means the separation principle holds. Now let us look at the loop gain, that is, the robustness of this controller. In the earlier case of the LQR design, where we assumed all the states are available to us, we saw that the LQR controller gives good stability margins: the loop gain can be scaled by any factor from $1/2$ up to infinity, and the phase margin is at least $60$ degrees. Whether these properties still hold in the LQG problem is what we will study now; this is called loop transfer recovery. So let us start with loop transfer recovery (LTR) and find the loop gain of this system. We know that in the LQG problem the estimator dynamic equation is $\dot{\hat{x}}(t) = (A - BK - LC)\,\hat{x}(t) + L\,y(t)$.
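The separation property can be checked numerically: the augmented matrix in $(x, e)$ coordinates is block upper triangular, so its eigenvalue set is the union of the controller eigenvalues and the observer eigenvalues. A minimal sketch with a hypothetical two-state plant and arbitrarily chosen gains (all the numbers below are assumptions for illustration, not a designed system):

```python
import numpy as np

# Hypothetical plant and gains, chosen only for illustration.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[4.0, 2.0]])    # state-feedback gain
L = np.array([[5.0], [6.0]])  # Kalman filter gain

# Augmented dynamics in (x, e) coordinates, e = x - xhat:
#   xdot = (A - B K) x + (B K) e
#   edot = (A - L C) e
Aug = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - L @ C]])

eig_aug = np.sort_complex(np.linalg.eigvals(Aug))
eig_sep = np.sort_complex(np.concatenate([np.linalg.eigvals(A - B @ K),
                                          np.linalg.eigvals(A - L @ C)]))
separated = np.allclose(eig_aug, eig_sep)
print(separated)
```

Because the lower-left block is zero, the eigenvalues of `Aug` are exactly those of $A-BK$ together with those of $A-LC$, which is the separation principle in matrix form.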
Here $L$ is the Kalman filter gain, $K$ is the controller gain, and the control is $u(t) = -K\hat{x}(t)$. Taking the Laplace transform of this, $u(s) = -K\hat{x}(s)$. Taking the Laplace transform of the estimator equation gives $s\hat{x}(s) = (A - BK - LC)\,\hat{x}(s) + L\,y(s)$; bringing the $\hat{x}(s)$ terms to one side and solving, $\hat{x}(s) = (sI - A + BK + LC)^{-1} L\, y(s)$. Let us call this equation (1). Substituting this into $u(s) = -K\hat{x}(s)$, we have $u(s) = -K(sI - A + BK + LC)^{-1} L\, y(s) = -K(s)\,y(s)$, where $K(s) = K(sI - A + BK + LC)^{-1} L$. Now let us denote $(sI - A + BK + LC)^{-1}$ by $\Phi_r(s)$; this $\Phi_r(s)$ is called the regulator resolvent matrix, because, as you can see, both the controller gain and the observer gain matrices are involved in it.
So let us call the expression $K(s) = K\,\Phi_r(s)\,L$ equation (2). Now let us draw this in block-diagram form. The reference input $r$ is assumed to be zero. In the transform domain the plant is $\Phi(s)\,B$ followed by $C$, where $(sI - A)^{-1}$ is the resolvent matrix of the system, which we denote $\Phi(s)$; so the state is $x(s) = \Phi(s)B\,u(s)$ and the output is $y(s) = C\,x(s)$. In the feedback path we have the Kalman filter, whose output is $\hat{x}(s)$; the filter structure is built in there, taking both the input information $u$ and the output information $y$, and it is followed by the controller gain with a minus sign, producing $u(s)$. So in the transform domain the whole feedback block from $y(s)$ to $-u(s)$ is $K(sI - A + BK + LC)^{-1}L = K\,\Phi_r(s)\,L$, which is exactly our $K(s)$. Now let us find the loop transfer function of this Kalman-filter-based controller. Denote the loop gain by $L_r(s)$: break the loop at the plant input and compute the transfer function around the loop, just as in the basic block diagram with a forward path $G(s)$ and a feedback path $H(s)$ with a minus sign.
In that basic block diagram, to study stability, sensitivity, or complementary sensitivity, the loop transfer function is $G(s)H(s)$. Similarly, in this case the loop gain is $L_r(s) = K\,\Phi_r(s)\,L\,C\,\Phi(s)\,B$. This is the loop gain of the Kalman-filter-based controller we have designed. Let us call this equation (3). If you look carefully, in the LQR case the loop transfer function is $K\,\Phi(s)\,B$, because the filter is not present in LQR; there we assumed all states are measurable. So the loop gain for the LQR problem is $K\,(sI - A)^{-1}B = K\,\Phi(s)\,B$. Clearly, the loop gain of the Kalman-filter-based (LQG) design and that of the LQR design are different. And we have seen in the LQR design that the loop gain of the designed controller $K$ can be scaled by any factor from $1/2$ up to infinity, while the phase margin is at least $60$ degrees for the designed gain $K$ of the closed-loop system.
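The difference between the two loop gains of equation (3) and the LQR case can be seen by evaluating both at a single frequency. A sketch with the same kind of hypothetical plant and gains as above (all matrices are illustrative assumptions, not a designed LQG system):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[4.0, 2.0]])    # controller gain
L = np.array([[5.0], [6.0]])  # Kalman filter gain
I2 = np.eye(2)

s = 1j * 1.0                                       # evaluate at omega = 1 rad/s
Phi = np.linalg.inv(s * I2 - A)                    # plant resolvent Phi(s)
Phi_r = np.linalg.inv(s * I2 - A + B @ K + L @ C)  # regulator resolvent Phi_r(s)

L_lqg = (K @ Phi_r @ L @ C @ Phi @ B).item()       # equation (3)
L_lqr = (K @ Phi @ B).item()                       # LQR loop gain K Phi(s) B
print(abs(L_lqg - L_lqr) > 1e-6)                   # the two loop gains differ
```

The extra factor $\Phi_r(s)LC$ makes the frequency response of the LQG loop visibly different from the LQR loop at the same frequency.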
In other words, in block-diagram form the LQR case looks like this: the plant $\Phi(s)B$, with $\Phi(s) = (sI - A)^{-1}$, whose output is $x(s)$, and all states are measurable and fed back through $K$. The loop gain for this one is $K\,\Phi(s)\,B$, where $\Phi(s)$ is the resolvent matrix of the original system. The two loop gains are different. So we may expect that the LQG controller loses the excellent properties of LQR; what excellent properties did we get in LQR? The gain margin and phase margin. We may lose those. Remark: the LQG design does not have the same properties as the LQR design method, due to the introduction of the Kalman filter, because the loop gain is different; the additional part $\Phi_r(s)\,L\,C$ appears, and this part will certainly play a role as the frequency changes.
That is the first remark. Second, the LQG design has lower stability margins than the LQR design, and its sensitivity properties are not as good as those of the LQR design: the gain margin we obtained in the LQR problem cannot be maintained, due to the introduction of the Kalman filter, and disturbance rejection or noise rejection will also not be as good as in the LQR case. Third, it is possible to recover the stability margins, the good properties of LQR that are lost when we design the LQG controller, by introducing noise into the process suitably. That is, it might be possible to recover some of the desirable properties of LQR, like the gain and phase margins, by selecting the process noise covariance matrix. Let us see how we can regain the stability margins by selecting the process noise covariance matrix. There is some mathematical manipulation to be done, and we will see that if the process noise is selected properly, it is possible to regain the stability properties of LQR. So the conclusion so far is that if you design the LQG controller directly, the stability properties of LQR are not retained.
We can, however, recover the stability margins of LQR by selecting the process noise covariance matrix. So let us define $\Phi(s) = (sI - A)^{-1}$, the resolvent matrix of the system, and next $\Phi_c(s) = (sI - A + BK)^{-1}$. With these symbols, from the loop gain expression (3), $L_r(s) = K(sI - A + BK + LC)^{-1}\,L\,C\,\Phi(s)\,B$. Clubbing $sI - A + BK$ together, I can write this as $L_r(s) = K\left(\Phi_c^{-1}(s) + LC\right)^{-1}L\,C\,\Phi(s)\,B$; let us call this equation (4). Now in equation (4) we will use the matrix inversion lemma. What is this lemma? For matrices $\bar{A}$, $\bar{B}$, $\bar{C}$, $\bar{D}$ of compatible dimensions, $(\bar{A} + \bar{B}\bar{C}\bar{D})^{-1} = \bar{A}^{-1} - \bar{A}^{-1}\bar{B}\left(\bar{D}\bar{A}^{-1}\bar{B} + \bar{C}^{-1}\right)^{-1}\bar{D}\bar{A}^{-1}$.
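The matrix inversion lemma can be spot-checked numerically; the matrices below are arbitrary well-conditioned choices for illustration only, not from the control problem:

```python
import numpy as np

# Arbitrary invertible test matrices (assumptions for illustration only).
Abar = np.array([[5.0, 1.0, 0.0],
                 [0.0, 4.0, 1.0],
                 [1.0, 0.0, 6.0]])
Bbar = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [1.0, 1.0]])
Cbar = np.array([[2.0, 0.0],
                 [0.0, 3.0]])
Dbar = np.array([[1.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0]])

# (Abar + Bbar Cbar Dbar)^{-1}
lhs = np.linalg.inv(Abar + Bbar @ Cbar @ Dbar)
# Abar^{-1} - Abar^{-1} Bbar (Dbar Abar^{-1} Bbar + Cbar^{-1})^{-1} Dbar Abar^{-1}
Ai = np.linalg.inv(Abar)
rhs = Ai - Ai @ Bbar @ np.linalg.inv(Dbar @ Ai @ Bbar + np.linalg.inv(Cbar)) @ Dbar @ Ai
print(np.allclose(lhs, rhs))
```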
To use the matrix inversion lemma in this expression, we set $\bar{A} = \Phi_c^{-1}(s)$, $\bar{B} = L$, $\bar{C} = I$, and $\bar{D} = C$. Using the lemma in equation (4), and noting $\bar{A}^{-1} = \Phi_c(s)$, we get $L_r(s) = K\left[\Phi_c(s) - \Phi_c(s)L\left(C\,\Phi_c(s)\,L + I\right)^{-1}C\,\Phi_c(s)\right]L\,C\,\Phi(s)\,B$. Now we do some manipulation. Push the trailing $L$ inside the bracket: $L_r(s) = K\left[\Phi_c(s)L - \Phi_c(s)L\left(I + C\,\Phi_c(s)\,L\right)^{-1}C\,\Phi_c(s)\,L\right]C\,\Phi(s)\,B$. Taking $\Phi_c(s)L$ common on the left and factoring out $\left(I + C\,\Phi_c(s)\,L\right)^{-1}$ as well, $L_r(s) = K\,\Phi_c(s)L\left(I + C\,\Phi_c(s)L\right)^{-1}\left[\left(I + C\,\Phi_c(s)L\right) - C\,\Phi_c(s)L\right]C\,\Phi(s)\,B$.
You see, the bracket reduces to the identity, so ultimately the simplification gives $L_r(s) = K\,\Phi_c(s)\,L\left(I + C\,\Phi_c(s)\,L\right)^{-1}C\,\Phi(s)\,B$. Let us call this equation (5). Now recall our basic system dynamic equation from the previous lecture: $\dot{x}(t) = A x(t) + B u(t) + \Gamma w(t)$, where $w(t)$ is the process noise, of dimension $r \times 1$; the other dimensions were already specified in the statement of the LQG problem. If you recall the LQG problem statement, $w$ is the process noise and its covariance matrix is $W$; since it enters through the channel $\Gamma$, the process noise covariance matrix is $\Gamma W \Gamma^{T}$. In the loop transfer recovery (LTR) approach, it is assumed that this process noise covariance matrix is replaced by $\Gamma W \Gamma^{T} = \Gamma W_0 \Gamma^{T} + q^{2} B B^{T}$.
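The reduction from equation (3) to equation (5) can be verified numerically at a sample frequency; the plant and gains below are the same kind of illustrative assumptions as before:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[4.0, 2.0]])
L = np.array([[5.0], [6.0]])
I2 = np.eye(2)

s = 2j
Phi = np.linalg.inv(s * I2 - A)            # Phi(s) = (sI - A)^{-1}
Phi_c = np.linalg.inv(s * I2 - A + B @ K)  # Phi_c(s) = (sI - A + BK)^{-1}

# Equation (3): K (sI - A + BK + LC)^{-1} L C Phi(s) B
lhs = K @ np.linalg.inv(s * I2 - A + B @ K + L @ C) @ L @ C @ Phi @ B
# Equation (5): K Phi_c L (I + C Phi_c L)^{-1} C Phi(s) B
rhs = K @ Phi_c @ L @ np.linalg.inv(np.eye(1) + C @ Phi_c @ L) @ C @ Phi @ B
print(np.allclose(lhs, rhs))
```

Both expressions are the same rational matrix function of $s$; the matrix inversion lemma only changes the form, not the value.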
Here $q$ is any positive scalar quantity and $W_0$ is the initial guess of the process noise covariance matrix. Let us call this equation (6). Now we use equation (6) in the algebraic Riccati equation (ARE) of the Kalman filter that we discussed last class; recall that the filter ARE is the dual of the controller ARE, with $A^{T}$ and $A$ interchanged and $B R^{-1} B^{T}$ replaced by $C^{T} V^{-1} C$, where $V$ is the measurement noise covariance matrix and $P_E$ is the error covariance matrix (please refer to our last lecture notes). So the ARE for the Kalman filter becomes $A P_E + P_E A^{T} - P_E C^{T} V^{-1} C P_E + \Gamma W_0 \Gamma^{T} + q^{2} B B^{T} = 0$, where the last two terms are exactly our modified $\Gamma W \Gamma^{T}$. Note that in the last lecture we denoted the process noise covariance by $W$, so please keep the same notation here: the term $\Gamma W \Gamma^{T}$ is now replaced by two parts, the initial guess $\Gamma W_0 \Gamma^{T}$, which is a positive semidefinite matrix, and $q^{2} B B^{T}$; and the measurement noise covariance $V > 0$ is a positive definite matrix.
This equation we can rewrite by dividing by $q^{2}$: $A\frac{P_E}{q^{2}} + \frac{P_E}{q^{2}}A^{T} - q^{2}\,\frac{P_E}{q^{2}}\,C^{T} V^{-1} C\,\frac{P_E}{q^{2}} + \frac{\Gamma W_0 \Gamma^{T}}{q^{2}} + B B^{T} = 0$, where in the quadratic term we have multiplied and divided by $q^{2}$, which changes nothing. Let us call this equation (7). Now we make an assumption: the plant is minimum phase, meaning all zeros are stable; $B$ and $C$ have full rank; and the number of inputs equals the number of outputs ($m = p$, a square plant). Then one can write $\lim_{q \to \infty} \frac{P_E}{q^{2}} = 0$. Why? As we increase $q$, the error covariance matrix $P_E$ increases, but it increases more slowly than $q^{2}$, so when $q$ is very large the ratio tends to zero. Now use this in equation (7): the first, second, and fourth terms all vanish as $q \to \infty$, but the quadratic term cannot be set to zero, because although $\frac{P_E}{q^{2}} \to 0$ it is multiplied by $q^{2}$, and $P_E$ itself is also growing. So what is left is $q^{2}\,\frac{P_E}{q^{2}}\,C^{T} V^{-1} C\,\frac{P_E}{q^{2}} \to B B^{T}$.
So I cannot set that term to zero; instead we write $\frac{1}{q^{2}}\,P_E C^{T} V^{-1} C P_E \to B B^{T}$ as $q$ increases, which, inserting $V V^{-1} = I$ in the middle, can be written as $\frac{1}{q^{2}}\left(P_E C^{T} V^{-1}\right) V \left(V^{-1} C P_E\right) \to B B^{T}$. But if you see this, $P_E C^{T} V^{-1}$ is nothing but the Kalman gain $L$, and the right factor is $L^{T}$, so this is $\frac{1}{q^{2}}\,L V L^{T} \to B B^{T}$, where $V$ is the measurement noise covariance matrix. Since $V$ is a symmetric positive definite matrix, I can take its square root $V^{1/2}$ and, keeping one factor of $q$ with each side, write $\left(\frac{L V^{1/2}}{q}\right)\left(\frac{L V^{1/2}}{q}\right)^{T} \to B B^{T}$. Let us call this expression (8). If you look at expression (8), a solution of equation (8) is obtained by selecting $L \to q B V^{-1/2}$ as $q \to \infty$: then $V^{-1/2}$ cancels $V^{1/2}$, $q$ cancels $q$, and the left side is exactly $B B^{T}$. Let us call this equation (9).
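This limiting behavior of the filter gain can be checked by solving the filter ARE with the modified covariance of equation (6) for increasing $q$. A sketch for a hypothetical minimum-phase SISO plant; the plant, $V = 1$, and $\Gamma W_0 \Gamma^{T} = 0$ are assumptions for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical minimum-phase plant G(s) = 1/((s+1)(s+2)); V = 1, Gamma W0 Gamma^T = 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
V = np.array([[1.0]])

def filter_gain(q):
    # Filter ARE: A P + P A^T - P C^T V^{-1} C P + q^2 B B^T = 0 (dual of controller ARE)
    P = solve_continuous_are(A.T, C.T, q**2 * (B @ B.T), V)
    return P, P @ C.T @ np.linalg.inv(V)   # Kalman gain L = P C^T V^{-1}

q_lo, q_hi = 1e2, 1e4
P_lo, L_lo = filter_gain(q_lo)
P_hi, L_hi = filter_gain(q_hi)

print(np.linalg.norm(P_hi) / q_hi**2 < np.linalg.norm(P_lo) / q_lo**2)  # P_E/q^2 shrinks
print(np.allclose(L_hi / q_hi, B, atol=0.1))  # L -> q B V^{-1/2}, equation (9)
```

The convergence of $L/q$ toward $B V^{-1/2}$ is slow (roughly like $q^{-1/2}$ for this plant), which is why a loose tolerance is used in the check.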
So, substituting (9) into (5), the loop gain expression of the LQG design, that is, using $L = q B V^{-1/2}$ where $V^{-1/2}$ involves the square root of the measurement noise covariance, we get $L_r(s) = K\,\Phi_c(s)\,q B V^{-1/2}\left(I + C\,\Phi_c(s)\,q B V^{-1/2}\right)^{-1} C\,\Phi(s)\,B$ as $q \to \infty$. Since for large $q$ the term $C\,\Phi_c(s)\,q B V^{-1/2}$ is large compared with the identity inside the inverse, we can neglect the $I$: $L_r(s) \to K\,\Phi_c(s)\,q B V^{-1/2}\left(C\,\Phi_c(s)\,q B V^{-1/2}\right)^{-1} C\,\Phi(s)\,B$. Now look at this inverse: $q$ cancels with $q$ and $V^{-1/2}$ cancels with $V^{1/2}$, so finally the expression becomes $L_r(s) \to K\,\Phi_c(s)\,B\left(C\,\Phi_c(s)\,B\right)^{-1} C\,\Phi(s)\,B$ as $q \to \infty$. Let us call this equation (10). Now we use another identity in expression (10): $\Phi_c(s) = (sI - A + BK)^{-1} = \left(\Phi^{-1}(s) + BK\right)^{-1}$. Taking $\Phi^{-1}(s)$ out on the right, $\Phi^{-1}(s) + BK = \left(I + BK\,\Phi(s)\right)\Phi^{-1}(s)$, and inverting this product in reverse order gives $\Phi_c(s) = \Phi(s)\left(I + BK\,\Phi(s)\right)^{-1}$.
So this is nothing but $\Phi(s)$ times an inverse, because $(MN)^{-1} = N^{-1}M^{-1}$, in reverse order. Let us call this identity equation (11): $(sI - A + BK)^{-1} = \Phi(s)\left(I + BK\,\Phi(s)\right)^{-1}$. Using (11) in (10), wherever there is a $\Phi_c$ we substitute, and we get $L_r(s) \to K\,\Phi(s)\left(I + BK\,\Phi(s)\right)^{-1}B\,\left[C\,\Phi(s)\left(I + BK\,\Phi(s)\right)^{-1}B\right]^{-1} C\,\Phi(s)\,B$ as $q \to \infty$. Let us call this equation (12). Once again we use an identity, the push-through rule, obtained the same way by grouping and inverting in reverse order: $\left(I + BK\,\Phi(s)\right)^{-1}B = B\left(I + K\,\Phi(s)\,B\right)^{-1}$. Using this identity in (12), we get $\lim_{q \to \infty} L_r(s) = K\,\Phi(s)\,B\left(I + K\,\Phi(s)\,B\right)^{-1}\left[C\,\Phi(s)\,B\left(I + K\,\Phi(s)\,B\right)^{-1}\right]^{-1} C\,\Phi(s)\,B$.
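Both identity (11) and the push-through rule can be spot-checked numerically (the plant and gain below are illustrative assumptions):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[4.0, 2.0]])
I2 = np.eye(2)

s = 2j
Phi = np.linalg.inv(s * I2 - A)            # Phi(s)

# Identity (11): (sI - A + BK)^{-1} = Phi (I + B K Phi)^{-1}
Phi_c = np.linalg.inv(s * I2 - A + B @ K)
ok11 = np.allclose(Phi_c, Phi @ np.linalg.inv(I2 + B @ K @ Phi))

# Push-through rule: (I + B K Phi)^{-1} B = B (I + K Phi B)^{-1}
lhs = np.linalg.inv(I2 + B @ K @ Phi) @ B
rhs = B @ np.linalg.inv(np.eye(1) + K @ Phi @ B)
ok_push = np.allclose(lhs, rhs)
print(ok11, ok_push)
```

Note that on the left the inverse is of an $n \times n$ matrix while on the right it is of an $m \times m$ matrix; the push-through rule is what lets the final loop-gain expression collapse.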
Here the bracketed inverse is $\left[C\,\Phi(s)\,B\left(I + K\,\Phi(s)\,B\right)^{-1}\right]^{-1} = \left(I + K\,\Phi(s)\,B\right)\left(C\,\Phi(s)\,B\right)^{-1}$, and this inverse exists because we have assumed that the number of inputs equals the number of outputs, so $C\,\Phi(s)\,B$ is square. Two or three lines of this derivation are left, and we will complete them in the next class; what will ultimately come out is that, with this choice of the loop gain through the process noise, the loop gain becomes the same as the linear quadratic regulator loop gain, $\lim_{q \to \infty} L_r(s) = K\,\Phi(s)\,B$. So one can regain, or recover, the loop gain of LQR from the LQG design by properly selecting the process noise covariance. The rest of the portion we will discuss in the next class.
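The full recovery can be previewed numerically: solve the filter ARE with the process noise covariance of equation (6) for growing $q$, and watch the LQG loop gain of equation (3) approach the LQR loop gain $K\,\Phi(s)\,B$. A sketch, again for the hypothetical minimum-phase plant used above (the plant, $K$, $V$, and the frequency are assumptions):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[4.0, 2.0]])   # places eig(A - BK) at {-2, -3}
V = np.array([[1.0]])
I2 = np.eye(2)

def lqg_loop_gain(q, s):
    # Kalman gain from the filter ARE with process noise covariance q^2 B B^T
    P = solve_continuous_are(A.T, C.T, q**2 * (B @ B.T), V)
    L = P @ C.T @ np.linalg.inv(V)
    Phi = np.linalg.inv(s * I2 - A)
    # Equation (3): K Phi_r(s) L C Phi(s) B
    return (K @ np.linalg.inv(s * I2 - A + B @ K + L @ C) @ L @ C @ Phi @ B).item()

s = 1j                                                # omega = 1 rad/s
target = (K @ np.linalg.inv(s * I2 - A) @ B).item()   # LQR loop gain K Phi(s) B

err_lo = abs(lqg_loop_gain(10.0, s) - target)
err_hi = abs(lqg_loop_gain(1e4, s) - target)
print(err_hi < err_lo)                      # recovery improves as q grows
print(err_hi / abs(target) < 0.2)           # close to K Phi(s) B at large q
```

For this plant the recovery error at a fixed frequency shrinks as $q$ increases, which is exactly the conclusion the derivation is heading toward.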