We just rewrote the expression here in terms of the ad_f^k g's: the equivalence between the L_{ad_f^k g} terms and the L_g L_f^k terms. Why did we rewrite it? Because we are going to use it in a matrix multiplication to show that some terms are zero and some are not. So what is this matrix we are multiplying? We took all the row vectors we had, dh, dL_f h, ..., dL_f^{r-1} h, and stacked them. That gives an r-by-n matrix: r rows, one per term, and n columns, because of course the partial of each L_f^j h with respect to x has n columns. And we multiply it by the matrix whose columns are g, ad_f g, ..., ad_f^{r-1} g; you start seeing all the ad notation here. That is an n-by-r matrix, because each column is a vector field, i.e., an n-by-1 vector, and there are r such terms. So the product, as you can see, is an r-by-r matrix: a square matrix, very deliberately constructed. One factor is r by n, the other is n by r, so the product being r by r is obvious. Now, we want to look at the rank of this product, so obviously we have to multiply things out and see what happens. The factors are chosen in a very smart way, so that the product turns out to have a nice structure. So what will be the first row? Row 1 is dh times each column; I am going to drop the x_0 argument so I have less to write. So row 1 is the inner products <dh, g>, <dh, ad_f g>, ..., <dh, ad_f^{r-1} g>. Everybody agrees, I have just done the product, and the dimensions are consistent: each row of the left factor is a 1-by-n vector, multiplied by an n-by-1 column, so each entry is a scalar; an inner product always gives a scalar. Now, what do we know about these entries from the identity we concluded, this complicated thing containing all that mess?
Look at the first term, <dh, g>; we just evaluated this. The identity we pasted says that whatever that mess is, it equals an expression in L_g L_f^k h terms, and from the previous lemma, together with the fact that we have a relative degree r system, the inner product <dL_f^j h, ad_f^i g> is nonzero when i + j = r - 1 and zero when i + j < r - 1. I will use i and j here where the identity used k and l; they are interchangeable: whatever is k there is i here, whatever is l there is j here. So for the first term, what are i and j? We just did this: j = 0, since the row uses h itself, and i = 0, since the column is g. Their sum is 0, which is less than r - 1, so the inner product is 0. What about the second term? Again j = 0, and i = 1, so their sum is 1, again less than r - 1, so the inner product is 0. Keep going and you will get zeros everywhere, until the last term. In the last term, what is j? Still 0, because the row has not changed at all; j remains fixed along a row. And i is r - 1, so the sum is r - 1: nonzero. I am just going to write that entry as L_{ad_f^{r-1} g} h, whatever it is. Now row 2: what happens? h gets replaced by L_f h; everything else is the same. So the row is <dL_f h, g>, <dL_f h, ad_f g>, all the way to <dL_f h, ad_f^{r-1} g>. First term: what is j? 1. Remember, in every row j remains fixed, so I do not have to compute j again; j is 1 for this whole row, and only i changes. Here i = 0, so 1 + 0 = 1 is less than r - 1 (for r > 2), and the inner product has to be 0 by the lemma.
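As a sanity check, here is a minimal sympy sketch of these inner products on a toy relative-degree-2 system of my own choosing (the pendulum-like f, g, h below are assumptions for illustration, not from the lecture): L_g h = 0 but L_g L_f h is nonzero, so r = 2, and <dL_f^j h, ad_f^i g> vanishes when i + j < r - 1 and is nonzero at i + j = r - 1.

```python
import sympy as sp

# Toy relative-degree-2 system (my own choice, not from the lecture):
# x' = f(x) + g(x) u with f = (x2, -sin(x1)), g = (0, 1), output h = x1.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([0, 1])
h = x1

def lie_derivative(h, v, x):
    """L_v h = dh/dx . v (a scalar)."""
    return (sp.Matrix([h]).jacobian(x) * v)[0]

def lie_bracket(f, g, x):
    """ad_f g = [f, g] = (dg/dx) f - (df/dx) g."""
    return g.jacobian(x) * f - f.jacobian(x) * g

# Relative degree check: L_g h = 0 but L_g L_f h != 0, so r = 2.
Lfh = lie_derivative(h, f, x)
print(lie_derivative(h, g, x))    # L_g h = 0
print(lie_derivative(Lfh, g, x))  # L_g L_f h = 1, nonzero

# The lemma's pattern for row 1 (j = 0):
adfg = lie_bracket(f, g, x)
print(lie_derivative(h, g, x))                  # i+j = 0 < r-1  ->  0
print(sp.simplify(lie_derivative(h, adfg, x)))  # i+j = 1 = r-1  ->  nonzero
```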
Again, the second term of row 2: j = 1 and i = 1 sum to 2; I will let you do the middle terms. In the last term, j is again 1 and i is r - 1, so the sum is r, which is at least r - 1; the lemma no longer forces it to be zero, so it is just some term. To make my life simple I am going to call it alpha_{2,1}. What about the term before the last one? That is <dL_f h, ad_f^{r-2} g>. Nonzero, because the sum is exactly r - 1: j is 1 and i is r - 2. So that is some nonzero alpha_{2,2}; I am just using some notation, and likewise I will call the single nonzero entry of row 1 alpha_{1,1}, instead of writing the whole thing. Now you can go on and on and see what is going to happen. For the third row, j will always be 2, so the sums are 2 + 0 = 2, 2 + 1 = 3, and so on, and the last three entries survive. Very nice, we are getting the pattern: row 3 will be zeros followed by alpha_{3,1}, alpha_{3,2}, alpha_{3,3}. (One caveat: entries with i + j strictly greater than r - 1 are not actually guaranteed nonzero by the lemma, they are merely unconstrained; the entries with i + j exactly r - 1, the anti-diagonal, are the ones guaranteed nonzero, and those are all that matter for the rank.) So what happened? We get what is being claimed here: a lower triangular matrix (triangular against the anti-diagonal; reverse the column order and it is literally lower triangular). And what is the cool thing about a triangular matrix with nonzero diagonal? Its rank. It is an r-by-r matrix, and the rank is r. Why is that easy? It is already in echelon form without having to reduce: each row's leading nonzero entry sits strictly to the left of the previous row's, so the rank is simply the number of nonzero rows, which is r. That is what we wanted to show. So here I will say: lower triangular form, hence rank r. The rest of the written explanation we have already discussed.
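The whole r-by-r product can be computed symbolically for the same toy system as before (again my own example, with n = r = 2, so the triangular pattern reduces to a nonzero anti-diagonal plus one unconstrained corner entry, which here even happens to be zero, illustrating the caveat above):

```python
import sympy as sp

# Toy relative-degree-2 system (my own example):
# f = (x2, -sin(x1)), g = (0, 1), h = x1, so r = 2 = n.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([0, 1])
h = x1

dh = sp.Matrix([h]).jacobian(x)               # row 1 of the left factor
Lfh = (dh * f)[0]
dLfh = sp.Matrix([Lfh]).jacobian(x)           # row 2

adfg = g.jacobian(x) * f - f.jacobian(x) * g  # ad_f g

left = sp.Matrix.vstack(dh, dLfh)             # r x n  (stacked dL_f^j h rows)
right = sp.Matrix.hstack(g, adfg)             # n x r  (columns g, ad_f g)
M = sp.simplify(left * right)                 # the r x r product
print(M)         # zero above the anti-diagonal, nonzero on it
print(M.rank())  # full rank r = 2, as the triangular argument predicts
```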
What have we just shown? That the product of these two matrices has rank r, which means each of the individual factors must have rank at least r. Now, notice what the individual factors were: this guy and that guy, with dimensions r by n and n by r. Each of them can have rank at most r anyway, because r is the smaller of the two dimensions (n is larger than r), and rank can never exceed the number of rows or columns. So their rank is at least r from the product argument, and at most r from the dimensions: each factor has maximal rank r. Forget about the second factor for now; if you look at the first one, the stacked r-by-n matrix, what does maximal rank mean? It means all r rows are linearly independent; rank r is only possible if the rows are linearly independent. And that is exactly what the lemma claimed: if you have a relative degree r system, then the covectors dh, dL_f h, ..., dL_f^{r-1} h are linearly independent. To prove this we of course used Lemma 0.1. So this is where we wanted to get, basically to set up partial feedback linearization. Why? Because these quantities will form new coordinates, and coordinates have to be linearly independent; think of the x, y, and z axes. They are in fact orthogonal there, but coordinates do not need to be orthogonal, only linearly independent; otherwise they are not coordinates, it makes no sense to have dependent coordinates. So we define our new coordinates. As I already told you, although the word says "feedback", feedback linearization actually involves a state transformation as well as a control transformation; there are two pieces to it.
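The rank inequality being used, rank(AB) <= min(rank A, rank B), is easy to check numerically; the matrices below are random stand-ins with the right shapes (r = 2, n = 4), not the actual dh / ad_f g factors:

```python
import numpy as np

# rank(A @ B) <= min(rank A, rank B): so if the r x r product has rank r,
# each factor (r x n and n x r, with n >= r) must itself have rank r.
rng = np.random.default_rng(0)
r, n = 2, 4
A = rng.standard_normal((r, n))  # stand-in for the stacked dL_f^j h rows
B = rng.standard_normal((n, r))  # stand-in for the columns g, ad_f g, ...
rank_prod = np.linalg.matrix_rank(A @ B)
print(rank_prod)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))
# A generic draw gives rank r everywhere; the inequality always holds.
```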
The state transformation is this guy: you take as coordinates the output itself, then its first derivative (notice L_g h = 0, so the first derivative does not yet contain the control), and so on, all the way to the (r-1)-th derivative; at the r-th derivative the control appears, as you are already aware. And then you augment these with some additional functions. By the way, this does not specify how to choose those functions; feedback linearization does not tell you how to pick the rest of the states. But remember, if you started with n states, any valid new coordinate system also has to have n coordinates. You cannot suddenly move from dimension 6 to dimension 2; the system is still evolving on a 6-dimensional space. So if feedback linearization only covered some of the dimensions, say you had a relative degree 2 system for your output in dimension 6, then you have to augment with 4 more functions, chosen so that the entire map is a diffeomorphism between the old coordinates and the new coordinates: a one-to-one, onto, smoothly invertible mapping. Basically, it means you should be able to move seamlessly from one set of coordinates to the other. If the system is stable in these coordinates, it cannot be unstable in those; everything has to be identical: controllable here if and only if controllable there, stabilizable here if and only if stabilizable there, observable here if and only if observable there. All system properties are preserved. It is basically the nonlinear equivalent of a similarity transformation in linear systems: what a similarity transformation accomplishes for linear systems, a diffeomorphism accomplishes for nonlinear systems. So that is how you choose phi_1 to phi_{n-r}, the rest of the coordinates. There is no general guideline for it.
You have to choose them so that you get independent coordinates. The first r are already independent. Why? Because their Jacobian, which is this stacked matrix of dh, dL_f h, ..., dL_f^{r-1} h, has linearly independent rows. (Wait a second; I actually do not need the last one here. Yes: this term should not be there, and the top exponent should be r - 1; taking d of each coordinate function is exactly what gives the Jacobian, so this should read d of each, up to dL_f^{r-1} h. I do not know why I wrote it the other way; that is not right.) So how do you verify that coordinates are independent? You take the Jacobian of the full map, and if the Jacobian is invertible, then the map is a local diffeomorphism. This is the Jacobian: you just take d of every coordinate function and stack them, and it has to be full rank; remember, it is an n-by-n matrix. To facilitate this, we have already proven that the first r rows are linearly independent, that the stacked block has rank r. We have already done our best, is what we are saying; the rest of the coordinates you still have to figure out yourself. That is on you, and on us too, of course, not just on you. So, basically, that is what is being said: the phi_i's are n - r smooth functions such that the entire map, capital Phi, is a diffeomorphism, equivalently the Jacobian has full rank. By the way, it is not an "and": being a local diffeomorphism is equivalent to the Jacobian being full rank. If you want to check that a nonlinear map from n variables to n variables is a one-to-one, onto, smoothly invertible map, you just take its Jacobian and verify its invertibility at a particular point, obviously; it is a nonlinear map, so you can only verify invertibility pointwise, which is why the diffeomorphism is local.
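Here is a sketch of this Jacobian test on an assumed 3-state, relative-degree-2 example of my own (f, g, h, and the completion phi_1 below are illustrative choices, not from the lecture): Phi = (h, L_f h, phi_1), and we check that its Jacobian is invertible.

```python
import sympy as sp

# Illustrative n = 3 system with relative degree r = 2 w.r.t. h = x1
# (my own choice): f = (x2, x3 - sin(x1), -x3), g = (0, 1, 0).
x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([x2, x3 - sp.sin(x1), -x3])
g = sp.Matrix([0, 1, 0])
h = x1

Lfh = (sp.Matrix([h]).jacobian(x) * f)[0]  # = x2; L_g h = 0, L_g L_f h = 1
phi1 = x3                                  # one extra coordinate (n - r = 1)
Phi = sp.Matrix([h, Lfh, phi1])            # candidate new coordinates

J = Phi.jacobian(x)       # stack d of each coordinate function
print(J)                  # here it comes out as the identity
print(J.det())            # nonzero determinant -> local diffeomorphism
```

Here phi_1 = x3 happens to make the Jacobian the identity; in general any phi_1 whose differential is independent of dh and dL_f h would do.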
It is almost like linearization, actually, if you think about it. Anyway, that is fine; we do not actually need that point right now, we will talk about it later. The evolution of the system in the new set of coordinates is this guy, which is basically going to look like this. It has some nice structure; we will look at the rest of that piece slightly later.