For complete feedback linearization we have this new result, which we will now try to prove and understand, because it is based simply on the Frobenius theorem together with our earlier conditions. Notice first of all that it is a necessary and sufficient condition: the matrix [g, ad_f g, ..., ad_f^{n-1} g] must be full rank, and the distribution span{g, ad_f g, ..., ad_f^{n-2} g} must be involutive. How do we prove this? For the necessity and sufficiency, go back to what we required for feedback linearization earlier. If you have an output y = lambda(x), then when you take its successive derivatives you want the control to not appear until the very last one. Because we are talking about relative degree n, we go all the way to n − 2: L_g L_f^k lambda(x) = 0 for k = 0, 1, ..., n − 2, which means the control does not appear in the first n − 1 derivatives, only in the last one. We also required L_g L_f^{n-1} lambda(x) ≠ 0, which is just saying that when you take the nth derivative the control does appear. This is how we did it: you had an output function y = lambda(x) (in that case we wrote h(x), but it does not matter, it is just notation), and if you keep taking derivatives of lambda, the control should not appear in the first n − 1 derivatives but should appear in the nth derivative. Now we are going to show that this implies complete integrability. How?
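As a concrete sanity check, here is a minimal sympy sketch of the relative-degree conditions on a hypothetical two-state system; the drift f = (x2, −x1³), the input vector g = (0, 1), and the candidate output lambda = x1 are made up for illustration, not taken from the lecture:

```python
import sympy as sp

# Hypothetical 2-state system for illustration (not the lecture's example):
# x1' = x2, x2' = -x1**3 + u, with candidate output lambda(x) = x1.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1**3])
g = sp.Matrix([0, 1])
lam = x1

def lie(h, v):
    """Lie derivative L_v h = (dh/dx) v for a scalar h."""
    return (sp.Matrix([h]).jacobian(x) * v)[0]

# Relative degree n = 2: the control must not appear before the nth derivative.
print(lie(lam, g))          # L_g lambda      -> 0
print(lie(lie(lam, f), g))  # L_g L_f lambda  -> 1, nonzero
```

Here L_g lambda vanishes but L_g L_f lambda does not, so the relative degree equals the state dimension and the toy system is completely linearizable.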
Remember that by Lemma 0.1 this set of conditions is identical to the conditions L_{ad_f^k g} lambda(x) = 0 for k = 0, 1, ..., n − 2: all the L_g L_f^k terms become ad_f^k terms. I am not going to go back and show you the lemma, but the conditions become L_{ad_f^0 g} lambda = L_{ad_f^1 g} lambda = ... = L_{ad_f^{n-2} g} lambda = 0, and similarly, using the same lemma, you can show that L_{ad_f^{n-1} g} lambda ≠ 0; that is pretty straightforward to show and we already did it before. Now, what we have already shown in Lemma 0.2 is that g, ad_f g, ..., ad_f^{n-1} g are linearly independent vectors. How? I have to swipe back, I apologize. If you remember, we proved this using a kind of matrix multiplication: we had two matrices, and we proved the linear independence of the dh functions using their product. We showed that the product has rank r, therefore each factor individually must have rank r, which means each set of vectors is linearly independent. In that lemma our statement was about one set being linearly independent, but we in fact proved it for both, because if the product has rank r then each factor has rank r. So we go back and use the same argument, with r replaced by n: we know that g, ad_f g, ..., ad_f^{n-1} g is a set of linearly independent vectors. So the first requirement is already proved; we are going backwards. We are only left to prove involutivity of the distribution, and there we are going to use the Frobenius theorem. How do we do that? The distribution D is the span of g, ad_f g, ..., ad_f^{n-2} g, so obviously it is nonsingular and (n − 1)-dimensional; we have already said that.
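For the same kind of hypothetical two-state system, the linear-independence claim of Lemma 0.2 can be checked directly in sympy (the system is again made up for illustration):

```python
import sympy as sp

# Same hypothetical 2-state system: check that {g, ad_f g} spans R^2,
# mirroring the rank-n independence claim of Lemma 0.2.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1**3])
g = sp.Matrix([0, 1])

adfg = g.jacobian(x) * f - f.jacobian(x) * g  # ad_f g = [f, g]
M = sp.Matrix.hstack(g, adfg)
print(M.rank())  # -> 2: the n = 2 vectors are linearly independent
```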
Now, the conditions L_{ad_f^k g} lambda = 0, k = 0, ..., n − 2, can be written in matrix form as dlambda · [g, ad_f g, ..., ad_f^{n-2} g] = 0, and this is precisely the complete-integrability condition on g, ad_f g, ..., ad_f^{n-2} g. So what have we shown? We have shown that the distribution is nonsingular, and we have shown that it is integrable. In this case k = n − 1, because there are n − 1 vectors in the distribution. So how many h functions are we going to get? Only one, since n − k = n − (n − 1) = 1. That one h function is this lambda, and this matches our earlier feedback linearization result, where lambda was the output function. So notice that this lambda becomes the output function; in partial linearization I said it could be the output function, but it could also be the extra coordinates we were talking about. In any case, here lambda is the output function. Because we have already proven that D is nonsingular and integrable, the Frobenius theorem tells us that D is involutive, and that is the second statement. So one direction of the proof is done. Now we prove the other direction similarly. From the two conditions we have that the distribution is nonsingular and involutive; this is our assumption. Therefore, by the Frobenius theorem, D is completely integrable, which means there exists an h such that dh · [g, ad_f g, ..., ad_f^{n-2} g] = 0. And what is this? Multiplying it out, it is nothing but L_g h = L_{ad_f g} h = ... = L_{ad_f^{n-2} g} h = 0. Again I go back to Lemma 0.1; you see that Lemmas 0.1 and 0.2 are getting used regularly.
So if I go back to Lemma 0.1, all these conditions L_{ad_f^0 g} h = L_{ad_f^1 g} h = ... = L_{ad_f^{n-2} g} h = 0 are basically L_g h = L_g L_f h = ... = L_g L_f^{n-2} h = 0. So I have one of the conditions required for feedback linearization. The other one is that L_g L_f^{n-1} h has to be nonzero. How do I prove that? I take dh(x), which is a 1 × n row vector, and I multiply it by the full matrix [g, ad_f g, ..., ad_f^{n-1} g]. What does the product give me? The first n − 1 entries all become zero, because of the conditions we just established, and then I get the last entry, L_{ad_f^{n-1} g} h. Now what do I know? I know that [g, ad_f g, ..., ad_f^{n-1} g] is full rank, and dh is of course not rank zero: it is a row vector, so it is at most rank 1, and it is at least rank 1, because if all the partials were zero then h would be a constant, not a coordinate at all. So I have the product of a rank-1 matrix and a rank-n matrix. Since the rank-n matrix is square and full rank, it is invertible, and multiplying by an invertible matrix preserves rank, so the product has to be rank 1; it cannot be rank 0. If the product is rank 1 and its first n − 1 entries are zero, the last entry must be nonzero. The tricks I am using are the same throughout: use Lemma 0.1 and Lemma 0.2, take products of matrices, and show that the product has some rank. That is how we are doing things. So we have just proved that L_g L_f^{n-1} h is nonzero, and we are done: these are exactly the conditions we require for feedback linearization, and in fact h again becomes the desired output.
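The rank step in this argument can be illustrated numerically: multiplying a nonzero row vector (rank 1) by an invertible matrix cannot drop the rank to 0. A small numpy sketch with random data (purely illustrative, not the lecture's matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
dh = rng.standard_normal((1, n))                   # a generic nonzero row: rank 1
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # small perturbation of I: invertible
assert np.linalg.matrix_rank(M) == n

prod = dh @ M
# Right-multiplication by an invertible matrix preserves rank, so the
# product row is still rank 1, i.e. it must have a nonzero entry.
print(np.linalg.matrix_rank(prod))  # -> 1
```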
And this is what we can use to come up with the output. We did the DC motor example earlier; let us go back to it. That was the DC motor: we verified Lemmas 0.1, 0.2 and so on, and we also found a feedback linearization. In order to do the feedback linearization we actually already had an output. In that case, at least with the model I used, what was the relative degree? It was 2, not 3, so the system was not completely linearizable. Because the relative degree was only 2, if you remember, I had chosen an extra state, the additional coordinate phi, to complete the diffeomorphism. Let us keep this in mind: those were the dynamics and that was the output. My feeling is that the dynamics we will look at now are somewhat different, but that is fine; let us go and see what happens. So what are the DC motor dynamics here? There is a proper circuit, with voltages, inductance L, resistance R and so on: the electrical dynamics on the stator side, the electrical dynamics on the rotor side, and then the mechanical dynamics of the rotor. So one equation is the mechanical part and the other two are the electrical parts. You have the inductances, the resistances, the back EMF, the rotor inertia, the angular speed, the friction, and the torque developed at the rotor shaft. I am going to skip to this part: the dynamics of the system in the form x-dot = f(x) + g(x)u.
Now, there was an error here: there was an additional 0, but this should be only 3-dimensional, not 4-dimensional, because it is a 3-dimensional state space. So I just remove it. But does this match the dynamics we had for the DC motor earlier? I took the dynamics from Khalil, so let me paste the other model here and compare. x1-dot is −x1 plus u (up to constants): similar. x2-dot contains an x2 term, a constant, and an x1 x3 term: similar. x3-dot contains x1 x2, but in one model there is also an x3 term: not similar. In the Khalil model the x3 dynamics is missing this term, and since x3 is omega, that term is the friction; it looks to me like the model I took from Khalil is missing the friction term. Very well, anyway, we have this model. Now the question we want to ask is: can we fully feedback linearize the system? How do we verify this? We are given two conditions. First, you have to verify that [g, ad_f g, ad_f² g] is full rank. Why these three? Because it is a three-dimensional system, and we want g, ad_f g, ..., ad_f^{n-1} g to be full rank with n = 3. Now let us do some computations. What is g? It is the vector (1/L_s, 0, 0). What is ad_f g? I have already written it, but I want you to verify how to compute it, otherwise you will get completely lost. You are given g = (1/L_s, 0, 0) and you are given f; I am not going to rewrite it.
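Here is a hedged sympy sketch of these computations for a Khalil-style field-controlled DC motor. The simplified drift below, with lumped positive constants a, b, c, theta standing in for the R_s/L_s, K/J, etc. combinations, friction omitted, and g normalized to (1, 0, 0), is my assumption for illustration, not the exact slide model:

```python
import sympy as sp

# Assumed Khalil-style field-controlled DC motor with lumped constants
# a, b, c, theta; friction omitted; g normalized to the first unit vector.
x1, x2, x3 = sp.symbols('x1 x2 x3')
a, b, c, theta = sp.symbols('a b c theta', positive=True)
x = sp.Matrix([x1, x2, x3])

f = sp.Matrix([-a*x1, -b*x2 + c - x1*x3, theta*x1*x2])
g = sp.Matrix([1, 0, 0])

def lie_bracket(v, w):
    """[v, w] = (dw/dx) v - (dv/dx) w."""
    return w.jacobian(x) * v - v.jacobian(x) * w

adfg = lie_bracket(f, g)      # only the first column of df/dx matters here
adf2g = lie_bracket(f, adfg)  # ad_f^2 g = [f, [f, g]]
M = sp.Matrix.hstack(g, adfg, adf2g)
print(sp.factor(M.det()))     # full rank wherever this determinant is nonzero
```

For this assumed model the determinant comes out proportional to x3 times an affine function of x2, so the rank condition fails only on a thin set of states, in the spirit of the lecture's "full rank if x2 or x3 is nonzero" discussion.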
So what is ad_f g? It is the same as the Lie bracket [f, g], excellent. And what is the formula? [f, g] = (∂g/∂x) f − (∂f/∂x) g: always remember that in each term the Jacobian is taken of the second factor, so the Jacobian of g multiplies f, and the Jacobian of f multiplies g. I will do this in a smart way. First of all, what is ∂g/∂x? It is 0, because g is constant, and constant things have zero derivative. So the first term is gone already; all we are left with is the second term, −(∂f/∂x) g. Now you have to play it smart: I would not compute all of ∂f/∂x. I am computing (∂f/∂x) g, some matrix multiplied by g, and I know that the last two elements of g are zero. So do I have to compute the entire matrix? No: the first column of ∂f/∂x is enough, because it is the only column multiplied by a nonzero element of g. So that is what I am doing, just computing the first column. And what is the first column of ∂f/∂x in terms of the actual partial derivatives? Absolutely: it is (∂f1/∂x1, ∂f2/∂x1, ∂f3/∂x1). So all I am doing is taking the partial of each component of f with respect to x1.
So that is what I did; look at this: −R_s/L_s². How did I get a square? Because I have already multiplied by g: the partial of f1 with respect to x1 is −R_s/L_s, and multiplying by the first element of g, which is 1/L_s, gives −R_s/L_s². This is correct. The same goes for the other entries: the second is the partial of f2 with respect to x1 multiplied by 1/L_s, and the third is the partial of f3 with respect to x1, again multiplied by 1/L_s. Excellent, done. Now what about ad_f² g? That is [f, [f, g]], and it is not nice anymore, because [f, g] now depends on the state. Apparently I did not do this computation before, so let us put in some effort and do it; this is what we need to verify linear independence. What have we obtained so far? We have g, which has something nonzero in the first element, and ad_f g, which also has something nonzero in the first element but some more complicated entries below. We compute ad_f² g = [f, ad_f g] = (∂(ad_f g)/∂x) f − (∂f/∂x)(ad_f g): take the Jacobian of ad_f g row by row, multiply it by f, and subtract the Jacobian of f times ad_f g. Working through it, the first row of the Jacobian of ad_f g is zero, so the first term contributes nothing there, and the remaining entries pick up products of the R, L, K, J, and friction terms with x1, x2, x3, like (K L_s / J) x1 x2 and so on; nothing nice comes out, and I am not even going to try to expand it fully. We will try to avoid such cases. Now what do we have to do? We have to show that [g, ad_f g, ad_f² g] is full rank, and the claim is that it is full rank if x2 or x3 is nonzero. We already have g, which gives me a 1 in the first entry; if either x2 or x3 is nonzero, ad_f g gives me another independent vector; and for ad_f² g I really do not see how the expression can be simplified further, but I assume that if x2 and x3 are nonzero then this is fine too, unless I am missing something. I really have to check, or you can check; I would not usually make up an example like this. In fact, what I was wondering was: in our earlier model we did not have the friction term, so how did that impact the linearizability? Because all that removing the friction does is get rid of a few of these f/J terms, in very few places to be honest. I will say this is to be completed offline. You
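Rather than simplifying symbolically, one can spot-check the rank condition numerically at sample states. A numpy sketch using closed-form brackets I derived offline for the assumed simplified model (the model, the constants a, b, c, theta, and the sample values are all assumptions for this sketch):

```python
import numpy as np

# Numerical spot-check of the rank condition at a sample state, for the
# assumed model f = (-a x1, -b x2 + c - x1 x3, theta x1 x2), g = (1, 0, 0).
a, b, c, theta = 1.0, 2.0, 3.0, 0.5

def bracket_matrix(x1, x2, x3):
    """[g, ad_f g, ad_f^2 g] evaluated at a state, from closed-form brackets."""
    g = np.array([1.0, 0.0, 0.0])
    adfg = np.array([a, x3, -theta * x2])
    adf2g = np.array([a**2, (a + b) * x3, theta * (b - a) * x2 - theta * c])
    return np.column_stack([g, adfg, adf2g])

M = bracket_matrix(0.3, 1.0, 2.0)
print(np.linalg.matrix_rank(M))  # -> 3 at this sample state
```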
can do it in your leisure time, or you can even check it numerically: just feed in values, plot, and if it does not hit zero you are fine, rather than actually grinding through the whole thing, which honestly does not make sense. Okay, let us go to the second condition. What was the second condition? We need involutivity, in this case of the distribution spanned by g and ad_f g: only two vectors, ad_f² g is not required, because we are looking at n − 2 here and n = 3. So we look at the distribution spanned by g and ad_f g, and we want it to be involutive. That should be relatively easy, because we only have two vectors. What we want to claim is that [g, ad_f g] belongs to the distribution; that is the involutivity condition. And this should be easier to find. Why? Because [g, ad_f g] = (∂(ad_f g)/∂x) g − (∂g/∂x)(ad_f g), and ∂g/∂x = 0 since g is constant, so that term vanishes. And since g only has a nonzero first element, I only have to compute the first column of ∂(ad_f g)/∂x. We have already computed ad_f g, and the first column of its Jacobian is 0. So [g, ad_f g] = 0, and obviously 0 belongs to Delta: any vector space contains 0 as an element. If you do not like to look at it like that, take 1 times g plus (−1) times g; you still have 0. So the condition is trivially true, which implies that Delta is involutive, which means that Delta is completely integrable. And what does that mean? It means there exists some function beta such that its gradient d-beta multiplied by the elements of the distribution, g and ad_f g, is equal to 0.
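The involutivity computation can be verified symbolically; again, the simplified drift below is my assumption, not the exact lecture model, but the structure of the argument (constant g, first column of the Jacobian of ad_f g equal to zero) is the same:

```python
import sympy as sp

# Symbolic involutivity check for the assumed simplified model:
# [g, ad_f g] should be the zero vector, hence trivially in span{g, ad_f g}.
x1, x2, x3 = sp.symbols('x1 x2 x3')
a, b, c, theta = sp.symbols('a b c theta')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([-a*x1, -b*x2 + c - x1*x3, theta*x1*x2])  # assumed drift
g = sp.Matrix([1, 0, 0])

def lie_bracket(v, w):
    """[v, w] = (dw/dx) v - (dv/dx) w."""
    return w.jacobian(x) * v - v.jacobian(x) * w

adfg = lie_bracket(f, g)
print(lie_bracket(g, adfg))  # -> zero vector: the distribution is involutive
```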
This is the condition, and it gives us, again, a partial differential equation, because this gradient d-beta is the row (∂beta/∂x1, ∂beta/∂x2, ∂beta/∂x3) multiplied by these vectors. (The notation keeps changing between lambda, h, and beta, but they all play the same role as the output function.) So you have d-beta multiplied by the elements of the distribution, g and ad_f g, equal to 0, by complete integrability. And this is what we use to actually compute the output.
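For the assumed simplified model, the PDE d-beta·g = 0, d-beta·(ad_f g) = 0 can be verified for one hypothetical candidate. The choice beta = theta·x2² + x3² is my own guess for this sketch, not the lecture's output, and I am only checking that it solves the PDE, not that it has the required relative degree:

```python
import sympy as sp

# For the assumed simplified model, ad_f g = (a, x3, -theta*x2).
# beta = theta*x2**2 + x3**2 is a hypothetical solution of the PDE
# d(beta).g = 0 and d(beta).(ad_f g) = 0.
x1, x2, x3, a, theta = sp.symbols('x1 x2 x3 a theta')
x = sp.Matrix([x1, x2, x3])
g = sp.Matrix([1, 0, 0])
adfg = sp.Matrix([a, x3, -theta*x2])

beta = theta*x2**2 + x3**2
dbeta = sp.Matrix([beta]).jacobian(x)   # the row (0, 2*theta*x2, 2*x3)
print(sp.simplify((dbeta * g)[0]))      # -> 0
print(sp.simplify((dbeta * adfg)[0]))   # -> 0
```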