In the last lecture, I spent the whole lecture on the proof of the positive real lemma, but throughout the proof I kept referring to the definition of positive realness and to the matrix conditions that go with it. So, in this lecture I will first spend some time giving a better idea of what this definition of positive realness is, especially for matrices. For the matrix case I will follow the definition given in the book Nonlinear Systems by Khalil. As I said earlier for the scalar case, there is no real agreement on exactly what the definition of positive realness should be. So, I will revisit what I said about positive realness, start with the scalar case, then give the definition for the matrix case, and point out the advantages and disadvantages of the equivalent definitions that exist. Let me start by revisiting the definition of positive realness. Given a transfer function G(s), one definition is in terms of the Nyquist plot: if the Nyquist plot lies in the first and the fourth quadrants, that is, in the closed right half of the complex plane, then the transfer function is called positive real. This is one particular definition that you could use. Some time back I gave examples of such transfer functions; this is the single-input, single-output case, so G(s) is one polynomial divided by another. Now, from the theory of the Nyquist criterion you know that if, say, the numerator p is a degree 3 polynomial and the denominator q is a degree 4 polynomial, then the relative degree is 1 (or minus 1, depending on the sign convention you prefer). Only if the relative degree is 0 or 1 can you expect the Nyquist plot to be confined to this right half. The reason is the asymptotic phase: if, for example, the denominator degree exceeds the numerator degree by 2, the phase of G(jω) approaches −180° at high frequency and the Nyquist plot must enter the left-half quadrants. So, one observation for the single-input, single-output case is that the Nyquist plot is confined to the right half of the complex plane only if the relative degree is 0 or 1. Even when the relative degree is 1, we are still not guaranteed what we want as far as the positive real lemma is concerned, and there are several reasons for that. So, let us take this definition: the Nyquist plot of G(s) lies in quadrants 1 and 4. This is the same as saying G(jω) + G(−jω) ≥ 0 for all ω. Mind you, here I am thinking of G(s) as a single-input, single-output transfer function, one polynomial divided by another, so this sum is a real scalar and the inequality makes sense. It says that when you evaluate the transfer function along the imaginary axis, the image of the imaginary axis lies in the closed right half of the complex plane. In other words, G is a map from the complex plane to the complex plane, any point s goes to G(s), and we are asking that the image of the imaginary axis under this map lie in the right half plane.
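As a quick illustration of this imaginary-axis condition, here is a minimal numerical sketch; the transfer function G(s) = 1/(s + 1) and the frequency grid are illustrative choices of my own, not something from the lecture.

```python
import numpy as np

# Hypothetical example: G(s) = 1/(s + 1), written as numerator and
# denominator polynomial coefficients in descending powers of s.
num = np.array([1.0])        # 1
den = np.array([1.0, 1.0])   # s + 1

# Sample the imaginary axis and check Re[G(jw)] >= 0, i.e.
# G(jw) + G(-jw) >= 0 for a scalar real-rational transfer function.
w = np.logspace(-3, 3, 2000)
s = 1j * w
G = np.polyval(num, s) / np.polyval(den, s)

print("min Re[G(jw)] over the grid:", G.real.min())
print("Nyquist plot stays in quadrants 1 and 4:", np.all(G.real >= 0))
```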
Now, there are several examples of such transfer functions. For example, take G(s) = (s + 1)/(s + 2); if you look at this map and ask where the imaginary axis goes, its image lies completely in the right half. Or take G(s) = −s/(1 − s), which was also an example I used earlier; again the imaginary axis maps into the right half, to a curve like the one drawn here. Now, if the definition of positive realness is taken to be just this imaginary-axis condition, then both of these transfer functions turn out to be positive real. But if you take state-space representations and try to use the positive real lemma, the lemma is applicable only to the first one, because its denominator has its root in the left half plane, whereas the second has its denominator root not in the left half plane but in the right half plane, so you cannot apply the positive real lemma there. Earlier I had also talked about the ideas of dissipativity and of a storage function. In both of these cases you can construct a storage function. The difference is that in the first case the denominator has all its roots in the left half plane, so it is a stable transfer function, and the storage function you get is positive. In the second case you can also construct a storage function, but it will not be positive; it will be negative. Now, in physical systems a storage function represents stored energy, and a negative stored energy does not make sense: what would it mean to say that the amount of energy stored in the system is negative? So the second case does not make physical sense, even though you can still find a storage function, whereas the first case does. In both cases, as soon as the imaginary-axis inequality is satisfied, you can find a storage function; but it is only when the denominator is Hurwitz, that is, when the transfer function is stable, that you can find a storage function which is positive. Now, suppose you strengthen the condition: instead of asking only about the imaginary axis, ask that Re G(s) ≥ 0 for all s with Re s ≥ 0 (wherever G is defined), that is, ask where the whole closed right half plane maps under G and require that it land in the right half. Since the imaginary axis is the boundary of this half plane, this stronger condition automatically implies the earlier one, but it asks for more: every point of the right half plane must map into the right half plane. With this condition, the first transfer function is still all right, but the second is not. If you take G(s) = −s/(1 − s) and see where the right half plane maps, you get the whole region outside the curve, which means some points of the right half plane get mapped to points in the left half plane, not the right.
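To make the comparison concrete, here is a small sketch that checks both example transfer functions numerically: the imaginary-axis condition and whether the denominator is Hurwitz. The helper function name and the frequency grid are my own illustrative choices.

```python
import numpy as np

def min_real_on_axis(num, den, w=np.logspace(-3, 3, 2000)):
    """Smallest value of Re[G(jw)] over a frequency grid (scalar G)."""
    s = 1j * w
    G = np.polyval(num, s) / np.polyval(den, s)
    return G.real.min()

# Both examples from the lecture, coefficients in descending powers of s.
examples = {
    "(s+1)/(s+2)": ([1.0, 1.0], [1.0, 2.0]),
    "-s/(1-s)":    ([-1.0, 0.0], [-1.0, 1.0]),
}

for name, (num, den) in examples.items():
    poles = np.roots(den)
    print(name,
          "| min Re[G(jw)] =", round(min_real_on_axis(num, den), 4),
          "| poles =", poles,
          "| Hurwitz denominator:", np.all(poles.real < 0))
```

Both examples pass the imaginary-axis check, but only the first has a Hurwitz denominator, which is exactly the distinction discussed above.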
Whereas if you take G(s) = (s + 1)/(s + 2), the imaginary axis maps to a closed curve and the right half plane maps to the inside of that curve, which means every point of the right half plane maps into the right half plane. So this half-plane condition captures both the imaginary-axis condition and the fact that the storage function is positive, and in that sense it should really be the definition of positive realness. But for historical reasons the imaginary-axis condition is what is usually given as the definition. There are places where that definition is used, but if you adopt it, then functions like −s/(1 − s) are also admitted. To disallow such functions, the additional condition that is imposed is that the imaginary-axis condition holds and, in addition, G(s) is stable. As it turns out, these two conditions together are equivalent to the half-plane condition. So the various books you go through may use some mixture of these definitions for positive realness. This is for the scalar, single-input single-output case, and depending on your taste you can adopt any one of them as what you believe positive realness to be; they all say roughly the same thing, but these subtleties have to be handled. Let me now give the definition of positive realness for matrices. Assume G(s) is a p × p transfer function matrix; in all these cases the transfer function matrix has to be square, because the number of inputs has to equal the number of outputs. Then the following conditions have to be satisfied for G(s) to be declared positive real. The definition I am using is the one given in the book Nonlinear Systems by Khalil; just as in the scalar case, there can be other opinions in the multi-input, multi-output case as well, but I will stick to this one. G(s) is positive real if, number one, the poles of all elements of G(s) lie in the closed left half plane, Re s ≤ 0; in particular, no entry of G(s), each of which is itself a transfer function, has a pole in the open right half plane. The second condition is that for every real ω such that jω is not a pole of any element of G(s), the matrix G(jω) + G^T(−jω) is positive semi-definite. This condition is precisely the analogue of the Nyquist-plot condition in the single-input, single-output case. If jω were a pole of some entry of the transfer function matrix, this expression would not be well defined, so you remove all such ω on the imaginary axis and require the condition for all the others; and remember these are matrices, so the claim is that the sum of these two matrices is a positive semi-definite matrix. There is one more condition. The third condition is that any purely imaginary pole of G(s) is a simple pole, that is, it has multiplicity no greater than 1, and its residue satisfies a condition I state next.
The residue is obtained as the limit, as s tends to jω, of (s − jω)G(s), and the requirement is that this limit be a positive semi-definite Hermitian matrix. So there are these three conditions: poles in the closed left half plane, G(jω) + G^T(−jω) positive semi-definite at every jω that is not a pole, and every purely imaginary pole simple with a positive semi-definite Hermitian residue matrix. That is the definition of positive realness for matrices. If you now specialize to G(s) being a 1 × 1 transfer function, the first condition is the stability requirement, no poles in the open right half plane, and the second condition is the Nyquist condition, the Nyquist plot lying in the first and fourth quadrants. The third condition is the one that really makes its appearance in the matrix case: in the scalar case it follows from the half-plane definition of positive realness and is usually not stated separately, but in the matrix case it is more involved and has to be imposed explicitly. So, given a p × p transfer function matrix G(s), if you want to talk about it being positive real, you check these criteria (a small numerical sketch of the second condition is given just after this discussion). In the earlier lecture on the positive real lemma, G(s) can simply be taken to be positive real in this sense; whether the realization is single-input, single-output or multi-input, multi-output, the realization is in terms of the matrices A, B, C, D, and the matrix conditions of the lemma remain unchanged. Now, since we have already done the positive real lemma, let me mention that there are variations of it. If you remember, the statement about the matrices involves a specific matrix P which is positive semi-definite. During the proof of the positive real lemma, in fact when I showed that the existence of solutions to those equations is equivalent to the system being passive, I used the fact that this matrix P defines the storage function. If P is a positive definite matrix, the storage function is positive definite; if P is only positive semi-definite, the storage function is only positive semi-definite. Between these two situations there is a slight problem, and it is very much like the problem you encounter in Lyapunov theory for systems with no inputs or outputs. In Lyapunov theory, when you take a function which is positive definite, you can draw the full set of conclusions the theory offers, whereas with something only positive semi-definite you cannot use the theory to its full power. As a Lyapunov function candidate you always take something positive definite and hope its derivative is negative definite; if the derivative is only negative semi-definite, the conclusions you can draw are weaker. In exactly the same way, positive definite storage functions are good to have, while positive semi-definite storage functions are not as good.
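As promised above, here is a minimal sketch of condition 2 from the matrix definition: sample the imaginary axis and check that the Hermitian part of G(jω) is positive semi-definite. The 2 × 2 diagonal G(s) used here is a hypothetical example of my own; for a real-rational G(s), G^T(−jω) equals the conjugate transpose of G(jω), which is what the code exploits.

```python
import numpy as np

# Hypothetical 2x2 example: G(s) = diag( 1/(s+1), (s+2)/(s+3) ).
def G(s):
    return np.array([[1.0 / (s + 1.0), 0.0],
                     [0.0, (s + 2.0) / (s + 3.0)]], dtype=complex)

w = np.logspace(-3, 3, 1000)
min_eig = np.inf
for wk in w:
    Gjw = G(1j * wk)
    herm = Gjw + Gjw.conj().T          # equals G(jw) + G^T(-jw)
    min_eig = min(min_eig, np.linalg.eigvalsh(herm).min())

print("smallest eigenvalue of G(jw) + G^T(-jw) over the grid:", min_eig)
print("condition 2 holds on the grid:", min_eig >= -1e-12)
```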
Now, the statement of the positive real lemma only guarantees that the storage function is positive semi-definite, not positive definite. To guarantee positive definiteness of the storage function, one brings in an additional notion. We showed earlier that a positive semi-definite P, together with the other matrix conditions, is equivalent to the transfer function being positive real. One can give an additional, definition-like condition which guarantees that the storage function is positive definite, and having a positive definite storage function corresponds to the transfer function being strictly positive real. We have already given a definition of G(s) being positive real, the Nyquist condition and so on. We now say that G(s) is strictly positive real when G(s − ε) is positive real for some small ε > 0. What is meant by this is the following. Earlier, when talking about positive realness, I said that G(s) is a map, any point s goes to the corresponding point G(s); the good situation is when the imaginary axis maps to a curve and the right half plane maps to its inside, and that is positive realness. In that situation, if instead of G(s) you take the map G(s − ε), the image is only perturbed to some neighbourhood of the original one and still stays in the right half, so such a G(s) is in fact strictly positive real. It can also happen that the image of the imaginary axis, and hence of the right half plane, touches the imaginary axis. How the image touches the imaginary axis is, in some sense, what decides between positive definiteness and positive semi-definiteness of the storage function. When you pass to G(s − ε), you are in effect shifting the imaginary axis by ε, and if there are places where the image touches the imaginary axis, the perturbation can push those parts into the left half plane, so that G(s − ε) does not remain positive real and G(s) is not strictly positive real. Such boundary cases are not strictly positive real; anything away from the boundary is. Now, just as there is the positive real lemma, there is also a famous lemma which, instead of the equivalence between a positive semi-definite P and G(s) being positive real, gives an equivalence involving strict positive realness. This lemma is attributed to three famous people, Kalman, Yakubovich and Popov, and it is exactly the same as the positive real lemma except that G(s) is strictly positive real.
So, instead of just positive real, you now have "strictly": G(s) is strictly positive real if and only if there exist a positive definite P, matrices L and W, and an ε > 0 such that A^T P + PA = −L^T L − εP, PB = C^T − L^T W, and W^T W = D + D^T. Here, just as in the positive real lemma, the assumption is that A, B, C, D come from a minimal state-space realization of G(s). As you see, the only difference between the positive real lemma and this lemma is that on one side G(s) is strictly positive real, and on the other side, instead of A^T P + PA = −L^T L, the Lyapunov-type equation has the additional term εP; this extra term is what captures the strict positive realness and guarantees a positive definite storage function.
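To see these equations in action, here is a minimal verification sketch for the hypothetical first-order example G(s) = 1/(s + 1), with the obvious minimal realization A = −1, B = 1, C = 1, D = 0. The particular values of P, L, W and the choice of ε are hand-computed for this example only.

```python
import numpy as np

# Minimal realization of the example G(s) = 1/(s + 1):
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])

eps = 0.5                      # any 0 < eps < 2 works for this example

# Hand-computed solution of the KYP equations for this system:
#   W^T W = D + D^T = 0            =>  W = 0,
#   P B   = C^T - L^T W            =>  P = 1,
#   A^T P + P A = -L^T L - eps P   =>  -2 = -L^2 - eps  =>  L = sqrt(2 - eps).
P = np.array([[1.0]])
W = np.array([[0.0]])
L = np.array([[np.sqrt(2.0 - eps)]])

# All three residuals below should be zero up to machine precision.
print("P positive definite:", np.all(np.linalg.eigvalsh(P) > 0))
print("residual of A'P + PA + L'L + eps*P:",
      A.T @ P + P @ A + L.T @ L + eps * P)
print("residual of PB - C' + L'W:", P @ B - C.T + L.T @ W)
print("residual of W'W - (D + D'):", W.T @ W - (D + D.T))
```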
Let me now step back and recall what we have been discussing and why we started looking at positive real transfer functions in the first place. The starting point was Aizerman's conjecture. From that conjecture one takes the following guess: if you have a nonlinearity lying in a certain sector and you connect it in feedback with a linear system, and if that linear system is stable in closed loop with every constant gain from that sector, then the linear system with the nonlinearity in the loop also gives a stable system. We then saw that counterexamples were given, so Aizerman's conjecture is not correct. After that, we came to passive systems and the many results associated with them. The important property of passive systems is that if you interconnect two passive systems, that is, if you take a feedback connection of two passive systems, the resulting system is also passive. This is very useful: starting from one passive system and another passive system, the interconnection of the two is again passive. If you think of passivity in terms of energy, a passive system being one in which the total energy supplied is either dissipated or goes into increasing the stored energy, then this seems very natural. What we will now do is formally show that the interconnection of two passive systems is passive; as it turns out, this concept of passivity goes a long way toward answering the question raised by Aizerman, and in fact provides an answer similar in spirit to what Aizerman guessed. So let me state it as a lemma, or we might as well call it a theorem: the feedback interconnection of two passive systems is passive. What do I mean by this interconnection? Let the first system be G1, with input u1 and output y1 (when talking about inputs and outputs one should draw arrows so that it is clear which is which), and let the second system be G2, with input u2 and output y2. What does it mean to say G1 is passive? From all our discussion of passivity, and thinking of the multi-input, multi-output case rather than the scalar one, it means u1^T y1 ≥ V̇1, where V1 is the storage function of the first system. For passive systems the inner product of input and output is greater than or equal to the rate of change of the storage function: V1 is roughly the energy stored in G1, u1^T y1 is the power supplied, and the power supplied is at least the rate of change of the stored energy. Similarly, G2 being passive means u2^T y2 ≥ V̇2, where V2 is the storage function of the second system. Now let us interconnect them in the following way: an external input e1 comes into the first system with the output of the second subtracted, so u1 = e1 − y2, and likewise an external input e2 enters the second system together with the output of the first, so u2 = e2 + y1. For this interconnected system I take the pair (e1, e2) as the inputs and continue to take (y1, y2) as the outputs. Then input times output is e1^T y1 + e2^T y2. Substituting e1 = u1 + y2 and e2 = u2 − y1, this becomes (u1 + y2)^T y1 + (u2 − y1)^T y2. The cross terms y2^T y1 and −y1^T y2 cancel, so what is left is u1^T y1 + u2^T y2, which by the two passivity inequalities is greater than or equal to V̇1 + V̇2. What does this mean? It means that if you take e1 and e2 as the inputs of the interconnected system and y1 and y2 as its outputs, then the inner product of inputs and outputs is greater than or equal to the rate of change of a storage function, namely the sum of the storage functions of the two subsystems. In a physical system, if the first system has elements storing energy and so does the second, the total stored energy is indeed the sum of the two storage functions, so this is exactly what you would expect. This is an extremely powerful result: if you have two passive systems and you interconnect them, the interconnected system continues to be passive. A small simulation sketch below illustrates this inequality numerically.
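Here is a minimal simulation sketch of that inequality, assuming two hypothetical passive first-order systems of the form ẋ = −x + u, y = x with storage V = x²/2; the external inputs and initial states are arbitrary choices made just for illustration.

```python
import numpy as np

# Two hypothetical passive systems  x_i' = -x_i + u_i,  y_i = x_i,
# each with storage V_i = x_i**2 / 2, in the feedback interconnection
#   u1 = e1 - y2,   u2 = e2 + y1.
# The check: supplied power e1*y1 + e2*y2 >= d/dt (V1 + V2) along the run.
dt, T = 1e-3, 10.0
x1, x2 = 0.5, -0.3                     # arbitrary initial states
violations, steps = 0, int(T / dt)

for k in range(steps):
    tk = k * dt
    e1, e2 = np.sin(tk), 0.5 * np.cos(2.0 * tk)   # arbitrary external inputs
    y1, y2 = x1, x2
    u1, u2 = e1 - y2, e2 + y1
    dx1, dx2 = -x1 + u1, -x2 + u2
    dV = x1 * dx1 + x2 * dx2           # exact value of V1dot + V2dot
    if e1 * y1 + e2 * y2 < dV - 1e-12: # passivity of the interconnection
        violations += 1
    x1 += dt * dx1                     # forward Euler step
    x2 += dt * dx2

print("passivity inequality violated at", violations, "of", steps, "steps")
```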
Now, how this theorem becomes really powerful is the following. Suppose you think of G1 as a linear system which is passive, and G2 as some system which is, let us say, nonlinear, but which you can show by some means to be passive. Then the interconnection of the two is also passive, and if you have managed to find a storage function for the nonlinear system, you have in some sense found a storage function for the complete nonlinear interconnection. Think of Lyapunov theory, where there are no inputs: earlier we discussed how a general nonlinear equation can be split into a linear part and a nonlinear part. If you can show that the linear part and the nonlinear part are each passive, and you can find a storage function for the nonlinear part (for the linear part we already have the positive real lemma and the Kalman-Yakubovich-Popov lemma to find a storage function), then the sum of the two storage functions acts as a storage function for the overall system, and with zero input it acts as a Lyapunov function. So this is in fact a way to construct a Lyapunov function for such a system. How this connects with Aizerman's idea is what I will now explain, and for that let me first consider nonlinearities which are memoryless. A memoryless nonlinearity is a map in which the output at any instant is completely determined by the input at that same instant; it does not depend on what happened in the system earlier or later in time, it is an instantaneous map. Such maps are what we will call memoryless nonlinearities. One way to characterize a memoryless nonlinearity is by its graph: you plot the input on one axis and the output on the other, and for every input there is a particular output, so connecting all those points gives a curve. Of course, if this curve were a straight line through the origin, the system would not really be nonlinear but linear; otherwise it is a genuine nonlinearity. The graph is like a lookup table: given any input, you read off from the graph exactly what the output is. Now suppose the nonlinearity, call it f, is such that for any input u we have u·f(u) ≥ 0. Then for this nonlinear system, with u going in and f(u) coming out, the product of input and output is always nonnegative, and from what we have been discussing earlier we could call this passive. Since the map is memoryless, in the sense that it depends only on the instantaneous input, one has to ask what the storage function should be. Recall that the definition of passivity is that u^T y, input multiplying output, is greater than or equal to the rate of change of the storage function.
Suppose you take the storage function to be identically zero. Then the passivity inequality reduces to u^T y ≥ 0, which is exactly what the sector condition gives, so this nonlinear system is passive with the zero storage function. Here is the beauty of the whole thing. Take a linear system G(s) and take a nonlinearity whose graph lies in the first and third quadrants. Such nonlinearities are said to lie in the sector [0, ∞): if you call the input ψ, then ψ·f(ψ) ≥ 0 and the ratio f(ψ)/ψ lies between 0 and ∞, which is the kind of definition I gave earlier. For any such nonlinearity, ψ·f(ψ) ≥ 0 holds, and from what I said a moment ago one can view this as passivity. So if you have a nonlinearity in this class and you have the standard feedback structure, with G(s) in the forward path and the nonlinearity in the feedback path, then if G(s) is passive and the nonlinearity lies in this sector, both blocks are passive, and from what we discussed earlier the interconnection of these two systems is also passive. What does this give us? If the input of G(s) is u and its output is y, passivity of G(s) says u^T y ≥ V̇, where V is the storage function of G(s). The nonlinearity receives this same y, and what comes out of it and is fed back is the negative of u; call it −u. This pair obeys the sector condition of the nonlinearity, so −u^T y ≥ 0. Adding the two inequalities, I get 0 ≥ V̇. So, with no external input, this is an autonomous system; provided G(s) is passive with a positive definite storage function V, I have a positive definite function whose derivative along trajectories is nonpositive, and the closed loop is therefore stable, in fact asymptotically stable under a strictness condition such as G(s) being strictly positive real. Recall Aizerman's conjecture: it considered a nonlinearity whose graph lies in a sector like this, and the conjecture was that if you have a G(s) such that putting any constant gain between 0 and ∞ in the feedback loop gives a stable system, then putting any such nonlinearity in the loop also gives an asymptotically stable closed-loop system; and that was proven to be false. What we have obtained is a very similar result: take any nonlinearity in the [0, ∞) sector, but instead of requiring that every gain between 0 and ∞ give an asymptotically stable feedback system, require that G(s) be passive; if you do that, the resulting closed-loop system is asymptotically stable.
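Here is a minimal closed-loop sketch of this argument, assuming the hypothetical passive block G(s) = 1/(s + 1) with storage V = x²/2 and the sector-[0, ∞) nonlinearity f(y) = y³; both choices are mine, made only to illustrate that the storage function decreases along trajectories.

```python
import numpy as np

# Hypothetical closed loop: G(s) = 1/(s + 1), realized as x' = -x + u, y = x,
# in feedback with the memoryless nonlinearity f(y) = y**3 and no external
# input, so u = -f(y).  The claim is that V = x**2/2 decreases along the run.
def f(y):
    return y**3                        # y * f(y) = y**4 >= 0: sector [0, inf)

dt, T = 1e-3, 10.0
x = 2.0                                # arbitrary initial state
V_prev = 0.5 * x**2
monotone = True

for _ in range(int(T / dt)):
    y = x
    u = -f(y)                          # feedback interconnection, zero input
    x += dt * (-x + u)                 # forward Euler step of x' = -x + u
    V = 0.5 * x**2
    monotone = monotone and (V <= V_prev + 1e-12)
    V_prev = V

print("final state:", x)
print("storage decreased monotonically along the run:", monotone)
```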
If you recall, a few lectures back I showed a sort of counterexample to Aizerman's conjecture, and in that counterexample the G(s) I took was (s + 1)/s². If you look at this particular transfer function and its Nyquist plot, it is clear that it is not a passive transfer function: Re G(jω) = −1/ω², which is negative for every ω ≠ 0. As a result, one cannot expect the interconnected system to be passive, and therefore cannot conclude asymptotic stability from this argument. So the result that the interconnection of passive systems is passive is, in a sense, a positive reply in the spirit of Aizerman's conjecture: if you have a nonlinearity in the [0, ∞) sector, you can interpret it as a passive nonlinearity, interconnect it with a passive linear system, and the resulting system is passive, and because it is passive you obtain asymptotic stability. I am out of time for this lecture, so we will stop here today.