So, in the last lecture I had started talking about the positive real lemma, but I did not give the complete proof. What I said was that the positive real lemma gives an if-and-only-if condition: one set of conditions is in terms of the matrices of a state space representation of the system, and the other is a frequency domain condition, namely that the given transfer function is positive real. What I showed in the last lecture was that when the set of matrix conditions is satisfied, passivity takes place, in the sense we had already discussed: a system being passive is equivalent to the existence of a storage function such that the supply minus the rate of change of the storage equals the dissipation, which is a nonnegative function. And I showed that when the matrix conditions are satisfied, we do get another matrix which is positive semidefinite, and that matrix stands for the dissipation function. So, today I will first give the complete proof of the positive real lemma, and then we will carry on with the rest of the material. Let me recall the statement of the positive real lemma. Let G(s) be a transfer function and assume there is a minimal state realization of it given by x dot = Ax + Bu, y = Cx + Du. Because this is a minimal state representation, (A, B) is controllable and (C, A) is observable. Further, we assume that this transfer function G(s) is stable. Then the statement says that G(s) is positive real if and only if there exist three matrices.
The first one, P, is a symmetric positive semidefinite matrix; in fact, most of the time one works with a symmetric positive definite P. The other two matrices are L and W, and the following three equations must be satisfied. The first is A^T P + P A = −L^T L. This says that if you write the Lyapunov equation using this positive definite matrix P and the state matrix A, the result is negative semidefinite, because L^T L is positive semidefinite and the minus sign makes it negative semidefinite. The second equation is P B = C^T − L^T W, where B is the input matrix from the state representation. The last equation is W^T W = D + D^T. Of course, there is some ambiguity about what we mean by positive real, and what I said in the earlier lecture is that, for the time being at least, we will call a transfer function positive real if its Nyquist plot lies in the first and fourth quadrants, that is, the real part of the Nyquist plot is always nonnegative. I also said in the last lecture that this may not be the definition you see in every book: often the definition of positive real already includes stability and so on. I will come to those intricacies a bit later. There is no general agreement as to what exactly the definition of positive real is, but I will try to explain the various notions that exist and how they are all related, each lying within an epsilon neighborhood of the others, so to speak.
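The three equations can be checked numerically. Here is a minimal sketch with an example of my own choosing, not from the lecture: take G(s) = 1/(s+1), with realization A = −1, B = 1, C = 1, D = 0; then P = 1, L = √2, W = 0 satisfy all three conditions.

```python
import numpy as np

# Hypothetical SISO example: G(s) = 1/(s+1),
# with minimal realization A = -1, B = 1, C = 1, D = 0.
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

# Candidate certificate matrices for the positive real lemma.
P = np.array([[1.0]])            # symmetric positive definite
L = np.array([[np.sqrt(2.0)]])
W = np.array([[0.0]])

# Equation 1: A'P + PA = -L'L
eq1 = np.allclose(A.T @ P + P @ A, -L.T @ L)
# Equation 2: PB = C' - L'W
eq2 = np.allclose(P @ B, C.T - L.T @ W)
# Equation 3: W'W = D + D'
eq3 = np.allclose(W.T @ W, D + D.T)
print(eq1, eq2, eq3)
```

Any other (P, L, W) claimed to certify positive realness of a realization can be tested the same way.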
Now, what I did show in the last lecture is the following: suppose we have such a G(s) with this minimal state representation, and suppose there are matrices P, L and W such that these equations are satisfied; then the transfer function corresponds to a system which is passive. The fact that passive is equivalent to positive real is something I have talked about but have not completely proved. So in some sense, what I showed in the last lecture, together with what I will show today, namely the equivalence of these two sets of conditions, should also establish that whenever either the matrix conditions or positive realness holds, you have a passive system. So, let me begin the proof. First I will prove this direction: I will assume that there exist P, L and W satisfying these equations, and I will show that G(s) is stable and positive real, meaning its Nyquist plot lies in the first and fourth quadrants. Look at the first equation: it is a Lyapunov equation. We are saying that if you write the Lyapunov equation with this A and a positive definite P, you end up with something negative semidefinite. It is well known that, together with the minimality of the realization, this is only possible if the matrix A is Hurwitz, and if A is Hurwitz then the resulting transfer function is stable. So the first equation already tells us that G(s) is stable. All that remains is to show that the transfer function is positive real, that is, its Nyquist plot lies in the first and fourth quadrants. So let us start. First let me write down what G(s) is: G(s) = C(sI − A)^{-1}B + D. Let me use the symbol Φ(s) for (sI − A)^{-1}.
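The link between the Lyapunov equation and A being Hurwitz can be illustrated numerically. This is a sketch with data of my own choosing: for a Hurwitz A and an L making (L, A) observable, the equation A^T P + P A = −L^T L has a positive definite solution P.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example of my choosing: A has eigenvalues -1 and -2, so it is Hurwitz.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
L = np.array([[1.0, 0.0]])       # (L, A) is observable

# Solve A'P + PA = -L'L for P.
# (solve_continuous_lyapunov(a, q) solves a x + x a' = q, so pass a = A'.)
P = solve_continuous_lyapunov(A.T, -L.T @ L)

print(np.linalg.eigvals(A).real)   # negative real parts: A is Hurwitz
print(np.linalg.eigvalsh(P))       # positive eigenvalues: P is positive definite
```

Conversely, if such a Lyapunov equation with P positive definite holds for a minimal realization, A must be Hurwitz, which is the direction used in the proof.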
Now, what I want to show is the following: G(jω) + G(−jω)^T ≥ 0 for all ω. If I show this, I would have shown it for the Nyquist plot, but in fact I am going to show something more general, namely G(s) + G(s*)^T ≥ 0, where s* denotes the complex conjugate of s. How do we show this? I will write down the expression for each term, using Φ(s) in place of (sI − A)^{-1}. The first term gives CΦ(s)B + D, and the second gives D^T + B^TΦ(s*)^T C^T. Now I will make use of the equations we already have in the positive real lemma. The third equation says D + D^T = W^T W, so for D + D^T I substitute W^T W. Next, rearranging the second equation, C^T = PB + L^T W, so I substitute that for C^T: the term B^TΦ(s*)^T C^T becomes B^TΦ(s*)^T P B plus one more term, B^TΦ(s*)^T L^T W. Similarly, just like what we did for C^T, taking transposes gives C = B^T P + W^T L, and substituting that in CΦ(s)B produces two more terms: B^T P Φ(s) B and W^T L Φ(s) B. So I get five terms in all: W^T W, B^TΦ(s*)^T P B, B^TΦ(s*)^T L^T W, B^T P Φ(s) B and W^T L Φ(s) B. Out of these five terms, let me concentrate just on the two containing P.
So, leaving the other three terms as they are, I concentrate on the two terms B^TΦ(s*)^T P B + B^T P Φ(s) B. Now I am going to do some simplification. The trick is to insert the identity: since Φ(s) is the inverse of (sI − A), I can write PB as P(sI − A)Φ(s)B, so the first term becomes B^TΦ(s*)^T P (sI − A) Φ(s) B, where the inserted factors cancel against Φ(s). For the second term I do the same kind of thing on the other side: B^T P Φ(s) B = B^TΦ(s*)^T (s*I − A^T) P Φ(s) B, using the fact that Φ(s*)^T is the inverse of (s*I − A^T), and that P^T = P since P is symmetric. Now both terms have B^TΦ(s*)^T on the left and Φ(s)B on the right, so what is inside can be put together, and you get B^TΦ(s*)^T [(s + s*)P − PA − A^T P] Φ(s) B. And now we again go back to the positive real lemma: the first equation says A^T P + P A = −L^T L, so we can substitute that in there.
Since s + s* is just a scalar, I can pull it out: the first portion gives (s + s*) B^TΦ(s*)^T P Φ(s) B. For the other portion, P A + A^T P equals −L^T L from that first equation, so I substitute that and end up with B^TΦ(s*)^T L^T L Φ(s) B. So the two terms we picked up have become these two: one is a symmetric expression multiplied by s + s*, and the other is a symmetric expression built from L^T L. Now let us go back to the five terms: I cancel the two P-terms and add the two terms we have just obtained, (s + s*) B^TΦ(s*)^T P Φ(s) B and B^TΦ(s*)^T L^T L Φ(s) B. Set aside the term with s + s* and look at the other four terms. You see W^T appearing in two of them and W in two of them, so they can be written as a sum of squares. As a sum of squares, what you get is [W^T + B^TΦ(s*)^T L^T][W + LΦ(s)B]. If you multiply this out you get W^T W, W^T LΦ(s)B, B^TΦ(s*)^T L^T W and B^TΦ(s*)^T L^T LΦ(s)B, which are exactly those four terms. So altogether we have these four terms written as a square, plus the one remaining term (s + s*) B^TΦ(s*)^T P Φ(s) B.
So, this is the full expression, and mind you, this whole expression equals what we started out with, G(s) + G(s*)^T. Now look at it. The first part is a square: writing N(s) = W + LΦ(s)B, it is N(s*)^T N(s), and since the matrices are real, evaluating at s* gives the complex conjugate, so this is N(s)*N(s), which is always positive semidefinite. For the second part, the assumption was that P is a positive definite matrix, so Φ(s)B sandwiching a positive definite matrix gives something nonnegative, and if s is such that Re s > 0, then s + s* = 2 Re s is positive. So the whole thing is nonnegative, and we can conclude that G(s) + G(s*)^T ≥ 0. So what we have just done is start from the matrix conditions in the positive real lemma and show that G(s) is positive real. In fact, what we have shown is slightly more than the definition of positive real I have been using: we have shown that G(s) + G(s*)^T ≥ 0 for all s such that Re s > 0. And in fact, this should ideally be taken as the definition of positive real.
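The conclusion can be spot-checked numerically. As a sketch with my own example, for the scalar positive real function G(s) = 1/(s+1), the quantity G(s) + G(s*)^T reduces to 2·Re G(s), and it stays nonnegative at random points with Re s > 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def G(s):
    # Scalar positive real example: G(s) = 1/(s+1)
    return 1.0 / (s + 1.0)

ok = True
for _ in range(1000):
    s = rng.uniform(0, 10) + 1j * rng.uniform(-10, 10)   # Re s > 0
    # For a real-rational scalar G, G(s*) = conj(G(s)), so the sum is 2 Re G(s).
    val = G(s) + G(np.conj(s))
    ok = ok and (val.real >= 0) and (abs(val.imag) < 1e-12)
print(ok)
```

Replacing G with any stable transfer function whose Nyquist plot leaves the right half of the complex plane would make the check fail, which is the content of the frequency domain condition.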
Rather than restrict s to the imaginary axis, which amounts to checking whether the Nyquist plot is in the first or fourth quadrant, the definition of positive real should really be this: under the map s ↦ G(s) + G(s*)^T from the complex plane to the complex plane, the whole of the open right half plane should map into the first and fourth quadrants; in that case you call G positive real. And if you use this definition of positive realness, then in the original statement of the positive real lemma you do not have to insist separately that G(s) is stable, because stability is forced if this condition is to hold. But these are intricacies, and there is no general agreement about the exact definition, so we will leave it at that. What we have now effectively shown is one direction of the argument: assuming the equations are satisfied, we have shown that positive realness holds. Now we want to show the other direction: assuming G(s) is stable and positive real, we want to show that the equations are satisfied. In order to do this I will have to invoke some other generic theorems that are known. One of them is the spectral factorization theorem, which I will talk about as we go along; it plays a central role not only in control theory but in other fields as well. I will also invoke a number of facts from realization theory, that is, how to realize a state space representation from a given transfer function. So let me start. We are assuming that G(s) is stable and that G(s) + G(s*)^T ≥ 0 for all s such that Re s > 0. This is our assumption.
Let us look at what this means. If you evaluate this matrix specifically on the imaginary axis, it tells us that G(jω) + G^T(−jω) ≥ 0 for all ω. And even though I have mostly been talking about scalar transfer functions, these things also hold for matrices. The positive real lemma as it stands is valid even for G(s) that is not single input single output; the only constraint is that G(s) must be square, with the number of inputs equal to the number of outputs. There is a definition of positive realness for matrix transfer functions which I have not yet given, but I will give it as soon as we finish the proof of the positive real lemma. Everything I have been showing holds for the matrix case, not just the scalar, single input single output situation: exactly the same kind of proofs go through. The exact definition of positive real for matrix transfer functions is such that, in the scalar case, it reduces to exactly the definition we had in terms of the Nyquist plot, or in terms of the right half plane mapping into the first and fourth quadrants. That definition I will give just after finishing the proof of this positive real lemma.
So let us finish the proof of the positive real lemma. Coming back here, this is what our assumption means, and at this point I invoke the spectral factorization theorem. What does it say? Suppose you have U(s), a p × p matrix which is positive real and Hurwitz, meaning stable. (What we mean by a positive real matrix is something I have not yet defined, but I will define it just after the complete proof; for now you can listen to this portion thinking of U(s) as a scalar, single input single output transfer function, and that will be good enough.) Then there exists an r × p matrix which is Hurwitz and proper rational, call it V(s), such that U(s) + U^T(−s) = V^T(−s) V(s). Let me explain what is going on here.
So we have a positive real, Hurwitz U. Instead of p × p you can just think of 1 × 1, so it is just a transfer function which is positive real and Hurwitz; then there exists a Hurwitz, proper rational V(s) such that U(s) + U^T(−s) = V^T(−s) V(s). The r × p business only matters in the multi input multi output case; in the single input single output case p is 1 and r is also 1. The r comes in, in the matrix case, essentially because when you add the two p × p matrices the sum has a certain rank, and that rank is r; therefore V is not a square matrix, it has r rows and p columns, and the product V^T(−s)V(s) is again a p × p square matrix. Now, earlier we said that G(s) is stable, and that on the imaginary axis the sum G(jω) + G^T(−jω) is positive semidefinite. What happens is that this sum has roots in the left half plane and roots in the right half plane, and V(s) is constructed by collecting all the roots of the sum which are in the left half plane; the factor V^T(−s) then comes automatically, because of a symmetry which exists in this sum. Spectral factorization is of course a very well known result, used widely in communication theory for example, but here we will invoke it for our purpose.
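A worked scalar instance makes the theorem concrete. This is my own example, not from the lecture: U(s) = 1/(s+1) gives U(s) + U(−s) = 2/((s+1)(1−s)); collecting the stable pole s = −1 yields V(s) = √2/(s+1), and V(−s)V(s) reproduces the sum, as the theorem asserts.

```python
import numpy as np

def U(s):
    # Positive real, Hurwitz example: U(s) = 1/(s+1)
    return 1.0 / (s + 1.0)

def V(s):
    # Spectral factor built from the left-half-plane pole: V(s) = sqrt(2)/(s+1)
    return np.sqrt(2.0) / (s + 1.0)

# Compare U(s) + U(-s) with V(-s)*V(s) at a few test points (avoiding
# the poles at +1 and -1).  In the scalar case V^T(-s) is just V(-s).
pts = [0.5, 2.0 + 1.0j, -0.3 + 4.0j, 10.0j]
match = all(np.isclose(U(s) + U(-s), V(-s) * V(s)) for s in pts)
print(match)
```

The √2 gain is exactly the constant needed so that the stable-pole factor squares back to the original sum; a general numerical factorization routine would find it by splitting the numerator roots of U(s) + U(−s) between the half planes.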
So, because our G(s) is stable and has this positivity property, we can invoke the spectral factorization theorem and write G(s) + G^T(−s) = V^T(−s) V(s). Now I am going to look at state representations of each of these matrices. For G(s) we have already seen the state representation: x dot = Ax + Bu, y = Cx + Du. From this we can get a state representation for G^T(−s): calling its state x₁, it is x₁ dot = −A^T x₁ + C^T u, y = −B^T x₁ + D^T u. Now, G(s) + G^T(−s) means the two transfer functions add up, which is a parallel connection: you feed u into both G(s) and G^T(−s) and add their outputs to get y. So the state representation of the full transfer function on the left hand side is [x; x₁] dot = [A, 0; 0, −A^T][x; x₁] + [B; C^T]u, with y = [C, −B^T][x; x₁] + (D + D^T)u. Since the representation of G(s) is minimal, the representation of G^T(−s) is minimal, and the parallel connection is also minimal, because the two subsystems share no poles: A is Hurwitz, so the poles of G(s) lie in the left half plane while those of G^T(−s) lie in the right half plane. So this is a minimal state representation of the left hand side.
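The parallel construction above can be verified numerically. This is a sketch with an example realization of my own choosing: build the block realization of G(s) + G^T(−s) and confirm its transfer function matches at a test point.

```python
import numpy as np

# Example realization (my choice) of G(s).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])

def tf(A, B, C, D, s):
    # Evaluate C (sI - A)^{-1} B + D at a complex point s.
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Parallel connection: x-dynamics from G(s), x1-dynamics from G^T(-s).
Ap = np.block([[A, np.zeros_like(A)], [np.zeros_like(A), -A.T]])
Bp = np.vstack([B, C.T])
Cp = np.hstack([C, -B.T])
Dp = D + D.T

s = 0.7 + 2.0j
lhs = tf(A, B, C, D, s) + tf(A, B, C, D, -s).T   # G(s) + G^T(-s)
rhs = tf(Ap, Bp, Cp, Dp, s)                      # block realization
match = np.allclose(lhs, rhs)
print(match)
```

Note the minus signs on −A^T and −B^T: they implement the substitution s → −s followed by the transpose.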
So, what we have is a minimal state representation for the left hand side. In the same way, let us construct a minimal state representation for the right hand side. First assume a state representation for V(s): z dot = fz + gw₁, w₂ = hz + jw₁, with input w₁ and output w₂. Then V^T(−s) has a state representation with −f^T, h^T, −g^T and j^T in place of f, g, h, j. But the right hand side is V^T(−s) times V(s): w₁ is the input to V(s), its output w₂ is the input to V^T(−s), and the output of that, call it w₃, is the overall output. It is a series connection, if you think about it. So for the representation of V^T(−s) I use w₂ as input; calling its state z₁, we have z₁ dot = −f^T z₁ + h^T w₂ and w₃ = −g^T z₁ + j^T w₂. To get a state representation for the whole series connection, I put both representations together, substituting w₂ = hz + jw₁: the state equations become z dot = fz + gw₁ and z₁ dot = h^T h z − f^T z₁ + h^T j w₁.
So, in block form the state matrix is [f, 0; h^T h, −f^T] and the input matrix is [g; h^T j]. For the output equation, w₃ = −g^T z₁ + j^T w₂, and substituting w₂ = hz + jw₁ gives w₃ = j^T h z − g^T z₁ + j^T j w₁. This, now, is a minimal state representation for the right hand side. Recall that we obtained a minimal state representation for the left hand side of this equation, and now we have one for the right hand side. Since the left hand side and the right hand side are equal, these two minimal state representations are a similarity transform away from each other. So now we will manipulate these matrices in some smart way so as to exhibit the relation between them. One other thing I wanted to mention: from the spectral factorization theorem, G(s) + G^T(−s) equals the product V^T(−s)V(s), and we can always take V(s) to be Hurwitz; if V(s) is Hurwitz, then the matrix f is a Hurwitz matrix. Please remember that f is Hurwitz. First we are going to transform the second state representation, using a matrix which converts the system matrix into a block diagonal matrix, that is, which gets rid of the h^T h block. The way we do this is the following: for the transformation we use a matrix T of the form [I, 0; K, I], where K is not just any old K, but a K that satisfies the Lyapunov equation Kf + f^T K = −h^T h. Now, let me just revisit.
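The cascade realization derived above can also be checked numerically. This is a sketch with example values of my choosing, writing (F, G, H, J) for the lecture's (f, g, h, j).

```python
import numpy as np

# Example realization (my choice) of V(s): F Hurwitz, eigenvalues -1, -2.
F = np.array([[-1.0, 1.0], [0.0, -2.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])
J = np.array([[1.0]])

def tf(A, B, C, D, s):
    # Evaluate C (sI - A)^{-1} B + D at a complex point s.
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Series connection V^T(-s) * V(s), as worked out in the lecture.
As = np.block([[F, np.zeros_like(F)], [H.T @ H, -F.T]])
Bs = np.vstack([G, H.T @ J])
Cs = np.hstack([J.T @ H, -G.T])
Ds = J.T @ J

s = 0.4 + 1.5j
prod = tf(F, G, H, J, -s).T @ tf(F, G, H, J, s)   # V^T(-s) V(s)
match = np.allclose(prod, tf(As, Bs, Cs, Ds, s))
print(match)
```

This is the standard series-interconnection formula: the h^T h block in the state matrix is exactly the path from z through the output of V(s) into the state of V^T(−s).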
This is the minimal state equation we got for the right hand side, and we want to make its state matrix block diagonal. Also remember that f is a Hurwitz matrix. If f is Hurwitz, then the Lyapunov equation Kf + f^T K = −h^T h always has a solution K: the right hand side −h^T h is negative semidefinite, since h^T h is positive semidefinite, and for a Hurwitz f the Lyapunov equation is solvable. This K is the one we use in the similarity transformation matrix. So let us compute what we get: we pre-multiply the state matrix [f, 0; h^T h, −f^T] by T = [I, 0; K, I] and post-multiply by its inverse, which is [I, 0; −K, I]. Multiplying the first two matrices gives [f, 0; Kf + h^T h, −f^T]; then post-multiplying by [I, 0; −K, I], the (2,1) block becomes Kf + h^T h + f^T K, which is zero precisely by the Lyapunov equation, and the result is [f, 0; 0, −f^T]. So we have managed to block diagonalize that matrix using the similarity transform T. Now, if you use this T, rather than carry out all the calculations, I will just write down the matrices that the transformation produces from the original ones.
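Here is a sketch of this diagonalizing step with example matrices of my own choosing, again writing (F, H) for the lecture's (f, h): pick K from the Lyapunov equation, form T = [I, 0; K, I], and check that the conjugated state matrix is block diagonal.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

F = np.array([[-1.0, 1.0], [0.0, -2.0]])     # Hurwitz example
H = np.array([[1.0, 0.0]])
n = F.shape[0]

# Solve K F + F' K = -H'H.
# (solve_continuous_lyapunov(a, q) solves a x + x a' = q, so pass a = F'.)
K = solve_continuous_lyapunov(F.T, -H.T @ H)

As = np.block([[F, np.zeros((n, n))], [H.T @ H, -F.T]])
T = np.block([[np.eye(n), np.zeros((n, n))], [K, np.eye(n)]])
At = T @ As @ np.linalg.inv(T)

off_block = At[n:, :n]                       # the H'H block should vanish
print(np.allclose(off_block, 0))
print(np.allclose(At[:n, :n], F), np.allclose(At[n:, n:], -F.T))
```

The (2,1) block of T·As·T⁻¹ works out to KF + H^T H + F^T K, which the Lyapunov equation kills, exactly as in the hand computation above.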
So, here are the original system matrices and the corresponding transformed matrices. The state matrix [f, 0; h^T h, −f^T] goes to [f, 0; 0, −f^T]. The input matrix [g; h^T j] goes to [g; Kg + h^T j]. The output matrix [j^T h, −g^T] goes to [j^T h + g^T K, −g^T]. And the feedthrough matrix j^T j remains j^T j. This is the new set of matrices for the right hand side. What one can now show is that there is a further similarity transform taking these four matrices to the four matrices of the left hand side representation: [A, 0; 0, −A^T], [B; C^T], [C, −B^T] and D + D^T. I am not going to go into the details of how to construct this similarity transform, but one can show that it is a block diagonal matrix, and so on. Once you do this transform, the feedthrough matrices, which a similarity transform leaves untouched, must agree, so j^T j = D + D^T. Pulling out the positive real lemma, its last equation says W^T W = D + D^T, so this is really saying that j plays the role of W.
Now, when you do the other transforms, equating the transformed input matrices gives the second equation, P B = C^T − L^T W, and the equation involving L^T L comes from the similarity transform that takes A to f. So, as a result, by this construction, by exhibiting the similarity transform between these two representations, we have shown that the equations of the positive real lemma are satisfied, and that was the converse part of the proof. It looks like I am out of time, so let me stop this lecture. What I will do in the next lecture is start by giving the definition of positive real for the matrix case. We proved the positive real lemma even though, initially, I had only given the notion of positive realness for scalar, single input single output systems; of course it extends to the matrix case. Because the proof dealt mostly with state space, it really did not matter whether we were looking at the single input single output or the multi input multi output case. But what exactly the definition of positive realness is for matrices, I will talk about in the next lecture.