The relative degree is defined so that it tells you in which derivative of the output the input appears. h is the output function. So you keep taking successive derivatives, and these are the terms attached to the control in the successive derivatives. This is what will happen: you start with h; the first derivative will contain L_g h multiplying the control; the second derivative will have L_g L_f h multiplying the control; and if you keep going, the (r − 1)-th derivative will have L_g L_f^(r-2) h multiplying the control. You can see the pattern: L_g h, L_g L_f h, L_g L_f^2 h, L_g L_f^3 h, ..., L_g L_f^(r-2) h. Up to the power r − 2, all of these are 0. Take one more derivative, and the term multiplying the control is L_g L_f^(r-1) h — that is not 0. So in the r-th derivative of y, the control appears. This entire business is to codify in which derivative the control appears. That is what I have written here: y is h, the first derivative of y is L_f h, and so on up to the (r − 1)-th derivative, which is L_f^(r-1) h. Because of this property, the dynamics in terms of these new variables — this is the state transformation — comes from the output that you chose. I choose y as h. Its first derivative is L_f h, because the control term is 0 by virtue of this assumption; the second derivative likewise has no control, and similarly all the way to the (r − 1)-th derivative — no control, because of this assumption. But when I take the r-th derivative, the control appears, because L_g L_f^(r-1) h is now not 0. Taking any subsequent derivatives is useless as far as feedback linearization is concerned. And there is a very relevant question from the class: is this the order of the system? No, r is not necessarily n. So in this new state transformation, how many new variables did I get? r.
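The derivative pattern above can be sketched numerically. This is my own illustration, not an example from the notes: I take a pendulum-like system x1' = x2, x2' = −sin(x1) + u, so f = (x2, −sin x1) and g = (0, 1), with output h = x1, and approximate Lie derivatives by central differences. The control coefficient L_g h is 0 while L_g L_f h is not, so the control first appears in the second derivative of y: relative degree r = 2.

```python
import math

# Toy system x' = f(x) + g(x) u with output y = h(x)  (an assumed example,
# not the one from the notes): a pendulum with the angle as output.
f = lambda x: [x[1], -math.sin(x[0])]   # drift vector field
g = lambda x: [0.0, 1.0]                # control vector field
h = lambda x: x[0]                      # output function

def lie(v, phi, eps=1e-5):
    """L_v(phi): directional derivative of the scalar function phi along
    the vector field v, approximated by a central difference."""
    def L(x):
        vx = v(x)
        xp = [xi + eps * vi for xi, vi in zip(x, vx)]
        xm = [xi - eps * vi for xi, vi in zip(x, vx)]
        return (phi(xp) - phi(xm)) / (2 * eps)
    return L

x0 = [0.3, 0.5]
Lf_h = lie(f, h)             # y' = L_f h  (control-free, since L_g h = 0)
print(lie(g, h)(x0))         # L_g h     -> ~0: no control in y'
print(lie(g, Lf_h)(x0))      # L_g L_f h -> ~1: control appears in y'', so r = 2
```

The same loop extends to any relative degree: keep wrapping with lie(f, ·) until lie(g, ·) of the result stops being zero.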
Counting y, its first derivative, and so on up to the (r − 1)-th derivative, that is r variables. And this is invariably going to be less than or equal to n — it definitely cannot be more than n, because if the control did not appear in any derivative up to the n-th, that would mean you effectively have no control at all. So the point is that you have only these r variables to design with. Now, what is the whole purpose here? I took this output y — just like in the pendulum case I took the output as the x1 state, or the q1 state, which is the angle. I took some output, I got these r state equations, and in the last equation I have the control, and the term multiplying it is non-zero. So I can use the control to cancel the drift term and introduce a linear term that makes this chain linear. But then the question is: what happens to the rest of the dynamics? Is that part linear or not? That is not yet clear. What we can say is that this chain becomes linear; at least that much you can say for sure. All right, let us see what happens further. Now, in order to assess the properties of Lie derivatives, we have to prove a couple of lemmas. They look slightly difficult, so please do not get scared. We will introduce one more piece of new notation, the ad bracket — the ad notation — which is basically shorthand for successive Lie brackets. So ad_f^k g is basically k brackets: ad_f^0 g is just g itself, ad_f^1 g is the Lie bracket [f, g], and ad_f^2 g is [f, [f, g]]. It is a successive Lie bracket notation. Again, why are these important? From a controllability perspective, you can also move along successive Lie brackets. So it is pretty cool, actually.
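The ad notation can be sketched the same way. This is again my own illustration, with an assumed pendulum-like pair f = (x2, −sin x1), g = (0, 1) and a finite-difference Lie bracket, not an example from the notes: ad_f^0 g returns g itself, and each further power wraps one more bracket with f.

```python
import math

f = lambda x: [x[1], -math.sin(x[0])]   # drift (assumed toy example)
g = lambda x: [0.0, 1.0]                # control vector field

def bracket(f, g, eps=1e-5):
    """Lie bracket [f, g](x) = Dg(x) f(x) - Df(x) g(x), by central differences."""
    def fg(x):
        fx, gx = f(x), g(x)
        def dirderiv(F, v):  # directional derivative of the vector field F along v
            xp = [xi + eps * vi for xi, vi in zip(x, v)]
            xm = [xi - eps * vi for xi, vi in zip(x, v)]
            return [(a - b) / (2 * eps) for a, b in zip(F(xp), F(xm))]
        return [a - b for a, b in zip(dirderiv(g, fx), dirderiv(f, gx))]
    return fg

def ad(f, g, k):
    """ad_f^k g: k successive Lie brackets with f; ad_f^0 g is just g."""
    for _ in range(k):
        g = bracket(f, g)
    return g

x0 = [0.3, 0.5]
print(ad(f, g, 0)(x0))   # g itself:     ~[0, 1]
print(ad(f, g, 1)(x0))   # [f, g]:       ~[-1, 0]
print(ad(f, g, 2)(x0))   # [f, [f, g]]:  ~[0, -cos(x1)]
```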
If you think about the geometric implication of this, it is saying that you can move along f and g, you can move along the Lie bracket [f, g], and you can move along successive Lie brackets of f and g. These are the directions in which you can move. We will look at that later, or we may not — so let us not worry about it now, but we will use this in the proof of when we can do feedback linearization. So we need this result, which says: if the first set of quantities is 0 — and notice these quantities are 0 up to k = r − 2 — then that is equivalent to the second set being 0. The first set we already have, by the relative degree assumption; so what we are saying is that if the first one holds, then so does the second. Again, we will use this in the proof, so bear with me; it is a bit technical, but look at the expressions. Here you had L_g h, L_g L_f h, ..., L_g L_f^k h being 0, and we are saying that is equivalent to L_g h, L_{ad_f g} h, ..., L_{ad_f^k g} h being 0. Remember that a Lie bracket is itself another vector field. So these Lie derivatives with respect to g and successive powers of L_f being 0 is the same as the Lie derivatives with respect to g, ad_f g, ad_f^2 g, ad_f^3 g, and so on being 0. How do we prove this? We prove one key result, and that proves all of it. What is that key result? The claim is that the Lie derivative with respect to the bracket [f, g] satisfies L_{[f,g]} h = L_f L_g h − L_g L_f h, and we are saying this holds always. So this is all a play of derivatives; it is playing a lot with derivatives.
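Before the proof, the claimed identity can at least be spot-checked numerically. This is my own sketch under finite-difference approximations, on an assumed pendulum-like example (f = (x2, −sin x1), g = (0, 1), h = x1) — a check, not a proof.

```python
import math

f = lambda x: [x[1], -math.sin(x[0])]
g = lambda x: [0.0, 1.0]
h = lambda x: x[0]

def lie(v, phi, eps=1e-5):
    """L_v(phi) by a central difference along the vector field v."""
    def L(x):
        vx = v(x)
        xp = [xi + eps * vi for xi, vi in zip(x, vx)]
        xm = [xi - eps * vi for xi, vi in zip(x, vx)]
        return (phi(xp) - phi(xm)) / (2 * eps)
    return L

def bracket(f, g, eps=1e-5):
    """[f, g](x) = Dg(x) f(x) - Df(x) g(x), by central differences."""
    def fg(x):
        fx, gx = f(x), g(x)
        def dirderiv(F, v):
            xp = [xi + eps * vi for xi, vi in zip(x, v)]
            xm = [xi - eps * vi for xi, vi in zip(x, v)]
            return [(a - b) / (2 * eps) for a, b in zip(F(xp), F(xm))]
        return [a - b for a, b in zip(dirderiv(g, fx), dirderiv(f, gx))]
    return fg

x0 = [0.3, 0.5]
lhs = lie(bracket(f, g), h)(x0)                      # L_{[f,g]} h
rhs = lie(f, lie(g, h))(x0) - lie(g, lie(f, h))(x0)  # L_f L_g h - L_g L_f h
print(lhs, rhs)   # the two agree (both ~ -1 for this example)
```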
Notationally it seems very complicated, and it may be difficult to follow the first time, but just look at it: it is only reordering derivatives. All I am saying is that taking the Lie derivative with respect to the bracket [f, g] is the same as applying L_f L_g − L_g L_f. You will see this commutator pattern throughout Lie derivative ideas; even in the linear systems context, the matrix expression AB − BA carries a lot of value. It is almost like that: AB − BA, here L_f L_g − L_g L_f. How do I prove they are equal? I evaluate both sides and show that they agree — nothing very complicated. What is the Lie derivative with respect to the bracket? I expand the bracket — we already know this formula — and then expand the Lie derivative, which is ∂h/∂x multiplied by the bracket; that is it, because the thing inside is exactly the Lie bracket of f and g as I just defined it. Now I evaluate both pieces on the other side. L_f L_g h is essentially L_f applied to L_g h, and L_g h is (∂h/∂x) g. Similarly, L_g L_f h is L_g applied to (∂h/∂x) f. After that it is again successive derivatives, like I said. Notice that L_g h is a scalar-valued function: ∂h/∂x is a row vector, g(x) is a column vector, and their product is a scalar — same for L_f h. So I am taking the Lie derivative of a scalar. What do I get? Take the partial with respect to x and multiply by f(x); that is it. The only question is how to take the partial with respect to x of a product like (∂h/∂x) g(x): just use the product rule — take the partial of the first factor, then the partial of the second factor. There are two pieces. You just have to make sure you are consistent with the dimensions, because now there are matrices involved: ∂²h/∂x² is a matrix, the Jacobian of g is a matrix, f and g are column vectors, and ∂h/∂x is a row vector. So that is it.
You just have to be consistent with the dimensions; otherwise it is just the product rule. All I did, because I had to take another derivative, is like going from the Jacobian to the Hessian: you take a first derivative with respect to the state, then a second derivative with respect to the state. It is almost like that. Similarly, I took the second derivative of the other piece using the product rule and multiplied by g(x). Now what? If you look at the resulting terms, the two Hessian terms cancel: (∂²h/∂x²) g(x) paired with f(x) in one expression, and (∂²h/∂x²) f(x) paired with g(x) in the other. There are matrices and vectors involved, but believe me, these two terms cancel. Once you have that cancellation, you can see what is left if I subtract the two: exactly the expression for L_{[f,g]} h. That is it. So in order to prove that the two sides are the same, I have done nothing but write out the two formulas — very painful-looking bookkeeping, but that is all it is — and cancel. But this is very cool, right? It looks like some new language we are writing; if somebody does not follow this area, it looks like Morse code. L_{[f,g]} h = L_f L_g h − L_g L_f h: looks simple, nice, good. But remember, we were trying to prove that equivalence: that L_g h, L_g L_f h, L_g L_f^2 h, and so on being 0 means L_{ad_f g} h, L_{ad_f^2 g} h, and so on are also 0 — that the two families are equivalent. And the nice thing is that, using this simple identity, we can prove it iteratively. The first thing we have is that L_g h is 0. We also have that L_g L_f h is 0, by the first two relative degree conditions. Now, what is L_{ad_f g} h? It is exactly what I just wrote here: L_f L_g h − L_g L_f h. So I expand it: L_f L_g h − L_g L_f h. What do I know?
I know that L_g h is 0, and I also know that L_g L_f h is 0, because that is what I assumed. So I have already proved that L_{ad_f g} h = L_f L_g h − L_g L_f h = 0. Done. So once I have this nice formula, the entire proof goes through very smoothly. Now the second level — we are not going to show too many levels, I think up to the second. What is L_{ad_f^2 g} h? It is L_{[f, ad_f g]} h. Again, painful-looking, but the math is not complicated. This is itself a Lie bracket, so by the same formula it equals L_f L_{ad_f g} h − L_{ad_f g} L_f h — because you can do this with any two vector fields, right: I have f, and I can now think of ad_f g as the new second vector field. I can keep doing this again and again. Now I expand. The first term is L_f applied to L_{ad_f g} h, and we have just proved that L_{ad_f g} h is 0, so that whole term goes away. Be careful here — one tends to make mistakes at this step, as I just did at the board: it is L_g L_f h that is 0 by assumption, not L_f h itself; of course L_f h is not 0. So all I am left with is the second term, L_{ad_f g} L_f h. What do I do with this?
I expand this again. L_{ad_f g} is, by the same formula, L_f L_g − L_g L_f, now applied to L_f h. So I get L_f (L_g L_f h) − L_g L_f^2 h. Now I am back to where I want things. L_g L_f^2 h is 0 by the relative degree assumption, and L_g L_f h is also 0, again by the same assumption, so the first term vanishes too. Now we are good: this whole thing is 0. So I hope you are able to follow this fun math. Just practice it a little bit. What I would recommend is that you write out this proof by hand on your own; write out a few more terms, go to ad_f^3 g. If you do ad_f^3 g, I think you will learn, because it is painful enough to write even this much. So what have we shown? Until now we have used that L_g h is 0, L_g L_f h is 0, and L_g L_f^2 h is 0 — three conditions. And what have we obtained? That L_{ad_f^0 g} h is 0 — I did not say this explicitly, but L_{ad_f^0 g} h is just L_g h, because that is how we defined the notation, so it is already 0 — that L_{ad_f g} h is 0, and that L_{ad_f^2 g} h is 0. So we used three equalities from the first family to prove three equalities in the second family. It should be evident to you that if I go further, I will be able to prove that L_{ad_f^k g} L_f^l h is 0 if and only if L_g L_f^(k+l) h is 0. It is just pattern matching on the indices. What we have shown so far is the case k = 2, with l = 0.
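The iteration can be watched over more levels on a toy system of relative degree 3. This is a numerical sketch of my own; the cubic triple integrator x1' = x2, x2' = x3, x3' = −x1³ + u with h = x1 is an assumed example, not the lecture's. Both families vanish for the first two indices and both become non-zero at index r − 1 = 2.

```python
# Relative-degree-3 toy system (an assumed example, not from the notes):
# x1' = x2, x2' = x3, x3' = -x1**3 + u, output h = x1.
f = lambda x: [x[1], x[2], -x[0] ** 3]
g = lambda x: [0.0, 0.0, 1.0]
h = lambda x: x[0]

def lie(v, phi, eps=1e-5):
    """L_v(phi) by a central difference along the vector field v."""
    def L(x):
        vx = v(x)
        xp = [xi + eps * vi for xi, vi in zip(x, vx)]
        xm = [xi - eps * vi for xi, vi in zip(x, vx)]
        return (phi(xp) - phi(xm)) / (2 * eps)
    return L

def bracket(f, g, eps=1e-5):
    """[f, g] = Dg f - Df g, by central differences."""
    def fg(x):
        fx, gx = f(x), g(x)
        def dirderiv(F, v):
            xp = [xi + eps * vi for xi, vi in zip(x, v)]
            xm = [xi - eps * vi for xi, vi in zip(x, v)]
            return [(a - b) / (2 * eps) for a, b in zip(F(xp), F(xm))]
        return [a - b for a, b in zip(dirderiv(g, fx), dirderiv(f, gx))]
    return fg

x0 = [0.2, -0.4, 0.7]
adg1 = bracket(f, g)        # ad_f g
adg2 = bracket(f, adg1)     # ad_f^2 g

# First family: L_g h and L_g L_f h are ~0; L_g L_f^2 h is not (r = 3).
print(lie(g, h)(x0), lie(g, lie(f, h))(x0), lie(g, lie(f, lie(f, h)))(x0))
# Second family: L_g h and L_{ad_f g} h are ~0; L_{ad_f^2 g} h is not.
print(lie(g, h)(x0), lie(adg1, h)(x0), lie(adg2, h)(x0))
```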
We used k = 2 and l = 0, and similarly for the lower levels; so up to this point we have used L_g L_f^2 h = 0. Does that make sense? Again, what I will strongly recommend is that you go back and write out these terms yourself in a notebook. If you write them out, it will be clear to you; if you just look at them, it will not. But I am not doing anything complicated: the identity L_{[f,g]} h = L_f L_g h − L_g L_f h is the only thing we needed to prove. Once we have it, we use it successively, again and again, and the entire proof goes through. For this lemma, that is enough. So basically, using this idea, we can prove that L_g h is 0, L_{ad_f g} h is 0, all the way to L_{ad_f^k g} h = 0. We can go on like this: take another level, ad_f^3 g, ad_f^4 g, and so on, all the way. That is essentially what I am saying. All right, fine. Then we have another lemma, which builds on this one. It concerns a system with relative degree r. By the way, in these notes we have been rather specific: it says relative degree r at some state x0. Usually it is better to have relative degree r on a set, because doing feedback linearization only at one point is not going to help you in terms of control — you are not going to operate at only one point. So you typically want at least partial feedback linearization on some set, so that as long as you operate in that set, you can apply this feedback and the system looks linear. So what are we saying? We are saying that if you have a relative degree r system, then these row vectors are linearly independent. What are these vectors?
dh is simply the partial of h with respect to x. This is the small-d notation, because h is a scalar-valued function; when I have the vector-valued vector fields f and g, I use the capital-D notation for the Jacobian. So these vectors should look familiar to you: dh, dL_f h, and so on. We have been taking exactly these partials — dh comes up in the first derivative of h, dL_f h comes up in the second derivative, and so on; they show up in all the successive derivatives of the output. That is essentially what these vectors are. What we are saying is that these, up to dL_f^(r-1) h — r again being the relative degree — are linearly independent whenever your system has relative degree r. How do we claim that? We claim that by doing a matrix multiplication. Let me go straight to the end. We want to use the fact that the rank of a product of matrices is at most the minimum of the ranks of the two factors; so if we can show the product has full rank, each factor must have full rank too. So what we do is construct a matrix out of these row vectors — stack them up — multiply it by another matrix whose columns are g, ad_f g, and so on, and look at the rank of the product. What we see is that each entry of the product, each inner product, is actually a Lie derivative. This is not difficult, just look at it: dL_f^j h is this expression — d simply means replace it by ∂/∂x — so I am just taking ∂/∂x of L_f^j h.
And that row vector multiplied by the vector field — in R^n the inner product is just vector or matrix multiplication — is exactly the Lie derivative; this is how we defined the Lie derivative. Why? Because L_f^j h is a scalar-valued function — any Lie derivative of a scalar-valued function gives you back a scalar-valued function — so I am taking the partial of a scalar function and multiplying it by a vector field. That is the Lie derivative with respect to that vector field. So the entry is L_{ad_f^i g} L_f^j h. The only difference is the indices: ad_f^i here and L_f^j there, just to account for the fact that you could have L_f^2, or ad_f^3, and so on. But other than that, this is exactly the notation for taking the Lie derivative with respect to a vector field. If I put i and j equal to 0, what do I get — just to see if you are following? If j = 0, what is L_f^0 h? It is just h: power 0 means no derivative, the zeroth derivative. So L_f^0 h is just h, and dh is just ∂h/∂x. Similarly, if i = 0, what is ad_f^0 g? It is just g; nothing happens. So that entry is the product of ∂h/∂x and g(x). And what is that? L_g h. That is what this entry is; you can do the same for the other entries. Nothing too complicated — it is just that, unfortunately, the notation is so involved that you have to wrap your mind around it.
That is why I am saying: please go and write it out. Please make sure you write this one out by hand on a piece of paper; if you do that, you will follow it. All right. So what do we know? By the relative degree argument, we know that L_{ad_f^k g} L_f^l h is non-zero for k + l = r − 1, and equal to 0 for k + l < r − 1. Here we have used the previous lemma, Lemma 0.1, because Lemma 0.1 says that if the first family vanishes, so does the second. Basically, just from L_g h, L_g L_f h, ..., L_g L_f^(r-2) h being 0, I am able to conclude that just as many of the L_{ad_f^k g} L_f^l h are also 0. Therefore, since L_g L_f^k h is 0 up to the power r − 2, I get 0 here as well, for k + l up to r − 2 — less than r − 1 means up to r − 2, since these are integers — and then at k + l = r − 1 I get something non-zero. That is exactly what I have written: up to r − 2 it is 0, and at r − 1 it is non-zero.
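The rank argument can be tabulated on a toy relative-degree-2 example. This is my own numerical sketch with an assumed pendulum-like system x1' = x2, x2' = −sin(x1) + u and h = x1, not the notes' example: the matrix with (j, i) entry dL_f^j h · ad_f^i g = L_{ad_f^i g} L_f^j h has zeros where i + j < r − 1 and non-zero entries on the anti-diagonal i + j = r − 1, so it is nonsingular, which forces dh and dL_f h to be linearly independent.

```python
import math

f = lambda x: [x[1], -math.sin(x[0])]   # assumed toy drift
g = lambda x: [0.0, 1.0]
h = lambda x: x[0]                      # output; relative degree r = 2

def lie(v, phi, eps=1e-5):
    """L_v(phi) by a central difference along v."""
    def L(x):
        vx = v(x)
        xp = [xi + eps * vi for xi, vi in zip(x, vx)]
        xm = [xi - eps * vi for xi, vi in zip(x, vx)]
        return (phi(xp) - phi(xm)) / (2 * eps)
    return L

def bracket(f, g, eps=1e-5):
    """[f, g] = Dg f - Df g, by central differences."""
    def fg(x):
        fx, gx = f(x), g(x)
        def dirderiv(F, v):
            xp = [xi + eps * vi for xi, vi in zip(x, v)]
            xm = [xi - eps * vi for xi, vi in zip(x, v)]
            return [(a - b) / (2 * eps) for a, b in zip(F(xp), F(xm))]
        return [a - b for a, b in zip(dirderiv(g, fx), dirderiv(f, gx))]
    return fg

x0 = [0.3, 0.5]
r = 2
phis = [h, lie(f, h)]            # L_f^0 h, L_f^1 h
fields = [g, bracket(f, g)]      # ad_f^0 g, ad_f^1 g
M = [[lie(fields[i], phis[j])(x0) for i in range(r)] for j in range(r)]
print(M)    # ~[[0, -1], [1, 0]]: zero above the anti-diagonal, non-zero on it
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(det)  # non-zero => full rank => dh, dL_f h linearly independent
```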