So, in the last class we discussed the linear quadratic regulator (LQR) problem and proved that it has good robustness properties: the gain margin lies between one-half and infinity, and the phase margin is at least 60 degrees. That means, whatever the present loop gain of the system is, we can increase it up to infinity and the system remains stable, provided the controller is designed by LQR. Similarly, the present gain can be reduced to half of its value and the system is still stable; it may even remain stable below half, but this range is guaranteed if you design the controller based on LQR. So, if you carry out the LQR design based on minimization of the quadratic performance index, the gain margin is guaranteed to be greater than one-half and up to infinity, and the phase margin is at least 60 degrees. Today we will discuss the linear quadratic Gaussian problem, in short the LQG problem. In the LQR design there is a restriction: the system is assumed to be deterministic. In real practice, however, the system operates in a stochastic environment, and we have to design the controller under that environment. So, the more general problem is the LQG problem, which deals with optimization of a quadratic performance measure for stochastic systems.
In LQR we optimized a quadratic performance measure for a deterministic system: the state equation is not corrupted with noise, and the measurement equation is also not corrupted with noise. The more general LQG problem deals with quadratic minimization in a stochastic environment. So, first we state the problem. LQG problem statement: consider the stochastic system

ẋ(t) = A x(t) + B u(t) + Γ w(t),

where the initial state x(0) is random, Gaussian with zero mean. Let the number of states be n, the number of inputs be m, and let the noise w(t), which corrupts the state equation through the channel Γ, have dimension r × 1. Here w(t) is a zero-mean white Gaussian noise process. Once the dimensions of x(t), u(t) and w(t) are fixed, you immediately know the dimensions of A (n × n), B (n × m) and Γ (n × r). This is the state equation. The output equation is

y(t) = C x(t) + v(t),

where v(t) is again a zero-mean white Gaussian noise process whose characteristics, that is, its noise covariance matrix, are known to us beforehand. So w is the input noise and v is the measurement noise; their statistics, mean and covariance, are known to us. As mentioned, both w and v are zero-mean white Gaussian processes, and they are uncorrelated with each other.
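As a sketch of the stochastic model just stated, with hypothetical numbers for A, B, Γ, C and the noise covariances, the system can be simulated by a simple Euler-Maruyama discretization; note that a white-noise sample over a step of length dt has covariance W/dt:

```python
import numpy as np

# Hypothetical 2-state example of the stochastic model:
#   x_dot = A x + B u + Gamma w,   y = C x + v
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Gamma = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
W = np.array([[0.1]])   # input (process) noise covariance
V = np.array([[0.01]])  # measurement noise covariance

rng = np.random.default_rng(0)
dt, T = 1e-3, 1.0
# x(0): zero-mean Gaussian with a hypothetical covariance 0.1 I
x = rng.multivariate_normal(np.zeros(2), 0.1 * np.eye(2))

for _ in range(int(T / dt)):
    u = np.zeros(1)                                   # open loop for now
    w = rng.multivariate_normal(np.zeros(1), W / dt)  # white-noise sample
    x = x + dt * (A @ x + B @ u + Gamma @ w)          # Euler-Maruyama step
    v = rng.multivariate_normal(np.zeros(1), V)
    y = C @ x + v                                     # noisy measurement
```

This only illustrates the signal structure; the controller and estimator that act on y(t) come later in the lecture.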
So, a few assumptions are made before we obtain the controller solution in the stochastic environment. The input and output noise statistics are as follows. First,

E[w(t)] = E[v(t)] = 0,

that is, both the input noise and the measurement noise are zero mean. They are also white:

E[w(t) wᵀ(τ)] = W δ(t − τ),
E[v(t) vᵀ(τ)] = V δ(t − τ),

where δ(·) is the delta operator: the expression is nonzero only when τ = t. When τ = t, the covariance matrix of the input noise is W; when τ ≠ t the correlation is zero, which means the noise at time t₁ is in no way related to the noise at time t₂ — the samples at different instants are uncorrelated. So W is the input noise covariance matrix and V is the measurement noise covariance matrix. Note the choice of symbols: if you recollect, when discussing the linear quadratic regulator problem we used Q as the state weighting matrix and R as the weighting on the control input, so we should not use the same symbols here. To avoid confusion we use capital W for the input noise covariance matrix and V for the measurement noise covariance matrix. In addition, w and v are uncorrelated with each other:

E[w(t) vᵀ(τ)] = 0 for all t, τ.

We also assume the initial state has zero mean and known covariance, E[x(0) xᵀ(0)] = S. The input noise covariance matrix W is symmetric and positive semidefinite, and the measurement noise covariance matrix V is symmetric and positive definite.
Further, the initial state is uncorrelated with both noises:

E[x(0) wᵀ(t)] = 0 and E[x(0) vᵀ(t)] = 0 for all t.

These are the assumptions made before we solve the LQG problem: the input noise covariance matrix, the measurement noise covariance matrix, and the initial state covariance are all known to us. Next, we assume the pair (A, B) is controllable, so that, just as in LQR, we will be able to shift all the open-loop poles of the system to desired locations. We also assume the pair (A, C) is observable, and that not all the state variables are measurable — that is why we need an estimator to estimate the states of the system. Our quadratic objective function is

J = lim_{T→∞} (1/2T) E[ ∫ from −T to T of ( xᵀ(t) Q x(t) + uᵀ(t) R u(t) ) dt ],

that is, we are taking the average of the quadratic cost over the interval and then its expected value. We want to minimize this. So the problem boils down to this: find the optimal control u(t) that minimizes the above performance index, the average cost function. Let us call this equation number (2), and the basic state and measurement equations described before it equation number (1).
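The controllability and observability assumptions can be checked numerically with the standard rank tests; a minimal sketch with hypothetical matrices:

```python
import numpy as np

# Hypothetical system; check (A, B) controllable and (A, C) observable
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
# Controllability matrix [B, AB, ..., A^{n-1}B] and
# observability matrix [C; CA; ...; C A^{n-1}]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

assert np.linalg.matrix_rank(ctrb) == n  # (A, B) controllable
assert np.linalg.matrix_rank(obsv) == n  # (A, C) observable
```

Both ranks must equal n for the LQG design that follows to be well posed.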
So, the state equation and measurement equation together we have named equation number (1), and the performance index assumed in the stochastic environment is equation number (2): we take the expected value of the usual quadratic function, which means we are giving weightage to the state as well as to the input vector u. The Q and R here are the same as before, when we discussed the LQR problem: Q is a positive semidefinite matrix, R is a positive definite matrix, and both are symmetric. Next is the solution of the LQG problem via the Kalman filter. You see, we have considered a system corrupted with noise — the state equation is corrupted with noise, and the measurement equation is also corrupted with noise. So, when implementing the LQG control law, u is minus the controller gain times the state; but since the states are not accessible, we have to use the estimate x̂(t). First, then, we have to find out how to estimate the state of the system under a stochastic environment. I will not discuss the Kalman filter in detail here; I will just tell you briefly how to design it. The first step is the control law:

u(t) = −K x̂(t).                (3)

Here x̂(t) has dimension n × 1 and u(t) has dimension m × 1, so the dimension of K must be m × n. In the LQR problem we assumed all the state variables are accessible to us for implementing the control law. In the stochastic environment of the LQG problem, the states are not accessible or measurable, so we have to estimate them; that estimate is x̂(t). How we estimate the states we will discuss step by step now.
Here K is the controller gain, and it is designed exactly as in the LQR problem. We will also prove that the Kalman filter (estimator) design and the controller design can be done separately or independently, because the separation theorem is valid; it allows us to design the controller as well as the estimator separately. The controller gain is

K = R⁻¹ Bᵀ P.                (4)

As I mentioned, K is designed the same way as in the LQR problem; we have already discussed how this expression is obtained. Since the dimension of K is m × n, the dimensions of the other factors follow immediately. Here P, of dimension n × n, is the solution of the algebraic Riccati equation

Aᵀ P + P A − P B R⁻¹ Bᵀ P + Q = 0.                (5)

This is the infinite-horizon regulator problem, and we already know different techniques to solve it. Next, even once we design K, I cannot implement u(t) until and unless we estimate the states, because our system equations are corrupted with noise of known statistics — the noises are zero-mean white Gaussian processes, together with the other assumptions we made. So the next question is how to estimate x(t); x̂(t) is the estimated state obtained from the Kalman filter, in short KF. What does the Kalman filter do? Ultimately, the Kalman filter minimizes a performance index, and that performance index is the trace of the error covariance matrix.
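Equations (4) and (5) can be sketched numerically; scipy.linalg.solve_continuous_are solves the algebraic Riccati equation directly (the A, B, Q, R values below are hypothetical, chosen only for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical system and weights
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weighting (positive semidefinite)
R = np.array([[1.0]])  # control weighting (positive definite)

# Solve A'P + PA - P B R^{-1} B' P + Q = 0, then K = R^{-1} B' P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - BK must be stable
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)
```

The same solver is reused below for the filter design via duality.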
What is the performance index considered for the estimator?

J_e = E[ (x(t) − x̂(t))ᵀ (x(t) − x̂(t)) ].

Here x(t) − x̂(t) is nothing but the error between the actual state and the estimated state, and the quadratic form is a scalar: we are summing the squared errors and taking the expected value. Written component-wise over the n states,

J_e = Σ_{i=1}^{n} E[ (x_i(t) − x̂_i(t))² ],

which is the same expression. Equivalently, one can minimize the trace of the error covariance matrix:

minimize  trace( E[ (x(t) − x̂(t)) (x(t) − x̂(t))ᵀ ] ),

because the diagonal elements of this matrix are exactly the expected squared errors. So the estimation problem is: minimize either the expected squared error or, equivalently, the trace of the error covariance matrix. This is how we estimate the states. I am writing the estimator equation directly; the structure of the Kalman filter is the same as the observer we designed for the deterministic case, only the implementation differs:

dx̂(t)/dt = A x̂(t) + B u(t) + L ( y(t) − C x̂(t) ).                (6)

Here L is the Kalman gain and y(t) − C x̂(t) is the output error. The third term is the error-correction term: at each instant it corrects the estimate using this error.
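The equivalence between the expected squared error and the trace of the error covariance matrix can be illustrated by a quick Monte Carlo check: sample errors from a hypothetical Gaussian error distribution and compare the mean squared error with the trace.

```python
import numpy as np

# Hypothetical 2x2 error covariance matrix (symmetric positive definite)
rng = np.random.default_rng(1)
P_E = np.array([[2.0, 0.5], [0.5, 1.0]])  # trace = 3.0

# Sample errors e ~ N(0, P_E) and estimate J_e = E[e' e]
e = rng.multivariate_normal(np.zeros(2), P_E, size=200_000)
mean_sq_error = np.mean(np.sum(e**2, axis=1))

# E[e' e] = trace(E[e e']) = trace(P_E), up to Monte Carlo error
assert abs(mean_sq_error - np.trace(P_E)) < 0.05
```

This is only a numerical illustration of the identity E[eᵀe] = trace(E[eeᵀ]); the Kalman filter minimizes this quantity analytically.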
Now the dimensions: y(t) is p × 1, where p is the number of outputs (this dimension was not mentioned in the beginning). Immediately, then, C is p × n, and the Kalman filter gain L is n × p. The next question is how to obtain L. First, simplifying the estimator equation,

dx̂(t)/dt = (A − L C) x̂(t) + B u(t) + L y(t).                (7)

The last equation number we had given was (5), so the estimator equation is (6) and this simplified form is (7). The estimator design is the dual of the controller design, and L is computed as follows: take the expression for the LQR controller gain and make the replacements A → Aᵀ, B → Cᵀ, Q → W, R → V, and K → Lᵀ (equivalently, L = Kᵀ). The controller expression is K = R⁻¹ Bᵀ P. Taking its transpose, Kᵀ = P B R⁻¹, since P and R are symmetric. Denoting the estimator Riccati solution by P_E and replacing P by P_E, B by Cᵀ and R by V, we get

L = P_E Cᵀ V⁻¹,

of dimension n × p.
This is our Kalman gain expression: C is known and V, the measurement noise covariance matrix, is known, but P_E, the error covariance matrix of dimension n × n, we do not yet know. Look again at how the Kalman gain comes out of the controller gain K = R⁻¹ Bᵀ P: take the transpose, Kᵀ = P B R⁻¹ (P is symmetric), then replace P by the error covariance P_E, B by Cᵀ, and R by V, giving L = P_E Cᵀ V⁻¹. The remaining unknown is P_E, and it is obtained the same way we obtained P. When designing the LQR controller we solved the algebraic Riccati equation (5); in that expression replace A by Aᵀ, P by P_E, B by Cᵀ, R by V, and the state weighting Q by the input noise covariance, as we mentioned in the beginning. So P_E is computed by solving the continuous-time filter algebraic Riccati equation. The filter design and the controller design are duals of each other: wherever A appears in the controller equation it is replaced by Aᵀ, B is replaced by Cᵀ, Q is replaced by the input noise covariance matrix, and R is replaced by the measurement noise covariance matrix. Making these replacements term by term in the controller Riccati equation, the term Aᵀ P + P A becomes A P_E + P_E Aᵀ, and the quadratic term −P B R⁻¹ Bᵀ P becomes −P_E Cᵀ V⁻¹ C P_E.
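Assuming the duality just described, the Kalman gain can be computed with the same Riccati solver used for the controller, by swapping in Aᵀ, Cᵀ, Γ W Γᵀ and V; the solver then returns P_E satisfying the filter equation A P_E + P_E Aᵀ − P_E Cᵀ V⁻¹ C P_E + Γ W Γᵀ = 0 (system matrices below are hypothetical):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical system and noise covariances
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
Gamma = np.array([[0.0], [1.0]])
W = np.array([[0.1]])   # input-noise covariance
V = np.array([[0.01]])  # measurement-noise covariance

# Duality: solve the controller ARE with A -> A', B -> C',
# Q -> Gamma W Gamma', R -> V
P_E = solve_continuous_are(A.T, C.T, Gamma @ W @ Gamma.T, V)
L = P_E @ C.T @ np.linalg.inv(V)   # Kalman gain L = P_E C' V^{-1}

# Verify the filter algebraic Riccati equation
resid = (A @ P_E + P_E @ A.T
         - P_E @ C.T @ np.linalg.inv(V) @ C @ P_E
         + Gamma @ W @ Gamma.T)
assert np.allclose(resid, 0, atol=1e-8)
assert np.all(np.linalg.eigvals(A - L @ C).real < 0)  # estimator stable
```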
Finally, Q is replaced by Γ W Γᵀ. Where does this term come from? The noise entering the state equation is Γ w(t), and

E[ (Γ w(t)) (Γ w(τ))ᵀ ] = Γ E[w(t) wᵀ(τ)] Γᵀ = Γ W Γᵀ δ(t − τ),

so the process noise covariance seen by the state is Γ W Γᵀ. Putting everything together, the filter algebraic Riccati equation is

A P_E + P_E Aᵀ − P_E Cᵀ V⁻¹ C P_E + Γ W Γᵀ = 0.                (8)

Compare with the controller design equation Aᵀ P + P A − P B R⁻¹ Bᵀ P + Q = 0: A is replaced by Aᵀ, P by the error covariance P_E, B by Cᵀ, R by the measurement noise covariance V, and Q by the input noise covariance term, as I have just shown. Now let us look at the detailed block diagram of the LQG controller. The actual plant consists of B, an integrator giving the state x(t), and C; the output C x(t) is corrupted with the measurement noise v(t), giving the measured output y(t). The input noise w(t) enters the state equation through the channel Γ — that block is Γ. The reference input let us consider to be zero. The estimator built by the controller has the same structure as the actual plant: it has its own B, an integrator, and feedback, as we complete next.
In the estimator, the signal feeds back through the system matrix A to a summing junction, which produces the derivative of x̂(t); integrating gives x̂(t), and passing it through C gives ŷ(t). The measured output y(t) enters with a plus sign and ŷ(t) with a minus sign, so the difference y(t) − ŷ(t) is formed; this error passes through the Kalman gain L, and the correction L(y(t) − ŷ(t)) is added at the summing junction. From the estimated state, the controller acts: u(t) = −K x̂(t), which is the control law, and this u(t) drives both the plant's B and the estimator's B. So in the diagram, one block is the plant, one block is the controller, and the remaining part is the Kalman filter, which estimates the states; the controller along with the Kalman filter forms the compensator used when the states are not available. You can see the Kalman filter dynamics involved here: the derivative of x̂ is obtained from A x̂ plus B u plus the correction term, the error y − ŷ multiplied by L.
So, in the figure, the blue dotted box is the Kalman filter, the small dotted box is the controller, and the red dotted box is our plant. Now that the Kalman-filter-based controller is designed, let us examine its robustness. In the LQR design we observed that if you design the controller based on LQR, the gain margin is infinite — the present gain can be increased to infinity, and at the same time it can be reduced to half, with the system stable throughout that region — and the phase margin is at least 60 degrees. But when you use the estimator along with the controller, those properties are totally lost; the robustness of LQG compared to LQR is much less. We will see how. First, the LQG separation principle. Since we derived the separation property earlier, I will not go into full detail here. Recall equation number (7), and substitute u(t) = −K x̂(t); from (7) we can write

dx̂(t)/dt = (A − B K − L C) x̂(t) + L y(t),                (9)

because B u becomes −B K x̂. And the measurement equation is y(t) = C x(t) + v(t), since the output is corrupted with noise. Now we will find the transfer function from y to u.
Why find the transfer function from y(s) to u(s)? To show that the loop gain of LQG is different from the loop gain of LQR, and thereby that the nice properties of LQR are lost when we do an LQG design. Take the Laplace transform of both sides of (9):

(sI − A + BK + LC) x̂(s) = L y(s),

which gives

x̂(s) = (sI − A + BK + LC)⁻¹ L y(s).

Multiply both sides by −K; the left-hand side is nothing but u(s):

u(s) = −K (sI − A + BK + LC)⁻¹ L y(s).                (10)

Recall that when there was no observer, in the LQR design problem, we worked with (sI − A + BK)⁻¹; here we denote the new resolvent by Φ_r(s) = (sI − A + BK + LC)⁻¹. Now, from the system state equation (1) and from (9) — taking Γ to be the identity matrix for simplicity (in general you can keep Γ in the expression) — one can write the combined dynamics

ẋ(t) = A x(t) − B K x̂(t) + w(t),
dx̂(t)/dt = L C x(t) + (A − B K − L C) x̂(t) + L v(t),

which in stacked form has the system matrix [[A, −BK], [LC, A − BK − LC]] acting on [x; x̂], plus the noise inputs.
Let us call this combined equation (11). But from this form we cannot conclude whether the Kalman filter design and the controller design can be done independently — that is, we cannot yet say the separation principle applies. So, define the error

e(t) = x(t) − x̂(t),

and express x̂ in terms of the error, x̂ = x − e. If you manipulate equation (11) with this substitution, you finally get

ẋ(t) = (A − BK) x(t) + BK e(t) + w(t),
ė(t) = (A − LC) e(t) + w(t) − L v(t),

which in stacked form has the system matrix [[A − BK, BK], [0, A − LC]] acting on [x; e], with noise input [[I, 0], [I, −L]] acting on [w; v]. Let us call this equation (12). Now see the structure of the ẋ and ė equations: the system matrix is block upper triangular, so the eigenvalues of equation (12) are the eigenvalues of A − BK together with the eigenvalues of A − LC. The eigenvalues of A − BK are nothing but the controller eigenvalues, and the eigenvalues of A − LC are the Kalman filter eigenvalues. They are decoupled: when designing the controller we do not need any information about the Kalman filter gain, and when designing the filter we do not need any information about the controller. Both matrices, A − BK and A − LC, are stable. However, there is still the question of how to place the controller gain and the observer gain relative to each other.
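The separation property just derived can be verified numerically: the eigenvalues of the block upper triangular closed-loop matrix coincide with the union of the controller and filter eigenvalues (all system matrices and weights below are hypothetical illustration values):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical system, weights, and noise covariances
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Gamma = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
W, V = np.array([[0.1]]), np.array([[0.01]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                               # controller gain
P_E = solve_continuous_are(A.T, C.T, Gamma @ W @ Gamma.T, V)
L = P_E @ C.T @ np.linalg.inv(V)                              # Kalman gain

# Closed loop in (x, e) coordinates: block upper triangular system matrix
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - L @ C]])

eig_sep = np.hstack([np.linalg.eigvals(A - B @ K),
                     np.linalg.eigvals(A - L @ C)])
# Every closed-loop eigenvalue matches one from the separated designs
for ev in np.linalg.eigvals(Acl):
    assert np.min(np.abs(ev - eig_sep)) < 1e-6
```

The same check fails for a matrix that is not block triangular, which is exactly why the error coordinates e = x − x̂ are the right ones for exposing the separation.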
In practice, it is better to place the eigenvalues of the estimator, A − LC, about 5 to 10 times farther into the left half-plane than the eigenvalues of the controller; in that way we get better performance, a better closed-loop behavior of the system. Now let us investigate the loop gain of the LQG problem. If you look at the basic LQG structure — the system, then the Kalman filter, then the controller — we can put its loop gain in a simplified form. The estimator equation is

dx̂(t)/dt = (A − B K − L C) x̂(t) + L y(t),

with u(t) = −K x̂(t); here K is the controller gain and L is the Kalman filter gain. Taking the Laplace transform of both sides, u(s) = −K x̂(s), and the detailed expression for x̂(s) we already have from equation (10). How to find the loop gain of LQG and compare it with the loop gain of LQR we will discuss in the next class. There you will see that LQG yields a different loop gain, and that will show that we lose the excellent properties of LQR when the Kalman filter is used along with the controller.