Welcome back to nonlinear control. Last time we had started discussing the Frobenius theorem, and today we will continue and get to the statement of the theorem itself. To talk about the Frobenius theorem we first introduced the notion of a distribution: you take the generating vector fields f_1, ..., f_k and take their span, so at every point p in the state space (or whatever set you are working with) you get a subspace. That assignment of a subspace to every point p is the distribution. Obviously we do not want the distribution to lose or change rank, so we restrict attention to non-singular distributions, whose dimension does not alter as you change the point p; we will always work with those. We also defined involutivity: if two vector fields f and g belong to the distribution, then the Lie bracket [f, g] also belongs to the distribution. We then stated and proved a lemma saying that if delta is a non-singular distribution, involutivity is exactly the condition that these Lie brackets land in delta. We also discussed that checking this directly would require computing many iterated Lie brackets, so we came up with a single simpler condition, Lemma 1.3: if you have k generating vector fields for delta, then involutivity is equivalent to verifying pairwise that [f_i, f_j] belongs to delta. That is it; you only have to check the pairwise brackets. We proved this too, and I am not going to go into the proof again. We also did an example on feedback linearization, going back to the previous material: we took an output that was given to us and computed the relative degree of the system.
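The pairwise check of Lemma 1.3 is easy to mechanize. Here is a minimal numeric sketch, not from the lecture: the vector fields, the finite-difference Jacobian, and the rank-based span test are my own illustrative choices. One distribution (constant fields) passes the pairwise bracket test; the classic contact distribution fails it.

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Numerical Jacobian of a vector field f at point x (central differences)."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

def lie_bracket(f, g, x):
    """[f, g](x) = Df(x) g(x) - Dg(x) f(x). (Sign conventions differ across
    texts; for the span-membership test below the sign is irrelevant.)"""
    return jacobian(f, x) @ g(x) - jacobian(g, x) @ f(x)

def pairwise_involutive(fields, points, tol=1e-6):
    """Lemma 1.3: check [f_i, f_j](p) in span{f_1(p), ..., f_k(p)} at each p,
    via a rank test on the stacked row vectors."""
    for p in points:
        F = np.array([f(p) for f in fields])            # k x n, rows span Delta(p)
        k = np.linalg.matrix_rank(F, tol=tol)
        for i in range(len(fields)):
            for j in range(i + 1, len(fields)):
                b = lie_bracket(fields[i], fields[j], p)
                if np.linalg.matrix_rank(np.vstack([F, b]), tol=tol) > k:
                    return False
    return True

# Involutive example: constant fields, all brackets vanish.
flat = [lambda x: np.array([1.0, 0.0, 0.0]),
        lambda x: np.array([0.0, 1.0, 0.0])]

# Non-involutive example (contact distribution):
# [f1, f2] = (0, 0, +/-2), which leaves span{f1, f2}.
contact = [lambda x: np.array([1.0, 0.0, x[1]]),
           lambda x: np.array([0.0, 1.0, -x[0]])]

pts = [np.array([0.3, -0.7, 1.1]), np.array([1.0, 2.0, -0.5])]
print(pairwise_involutive(flat, pts))     # True
print(pairwise_involutive(contact, pts))  # False
```

Note that the test only has to be run pairwise over the k generators, which is exactly the saving Lemma 1.3 gives over checking arbitrary fields in the distribution.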
After computing the relative degree we saw that it was 2 while the state space is 3-dimensional, so obviously I need one more coordinate, and we discussed how to choose it. The simple obvious choice does not work: if I choose the extra coordinate to be x1, its derivative contains the control, which we do not like, because in the normal form the control appears only in the linear part and not in the nonlinear part. So, to come up with a good choice, we use the definition itself: we want to choose a function phi such that L_g phi = 0, and that gives us just one partial differential equation. As you can see, this PDE does not completely specify the function, but we can guess one possible choice that satisfies the relationship, and we guessed it; I would say I guessed it, and if you can come up with a smarter way of doing this, sure. Maybe it is possible to do better, but as of now we are guessing the function based on the PDE we have. So it is not a completely out-of-the-blue guess: you have a PDE that is not easy to solve, and you try to cancel the appropriate terms, as we discussed last time. The remaining verification I left for you to look at, because it is part of the exercise that was given; it looks like I have already done part of it, so I leave it to you folks to verify the rest. Now we move on to the statement of the Frobenius theorem itself. Why do we care about the Frobenius theorem?
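The "guess a phi that kills L_g phi" step can at least be verified mechanically once you have a candidate. A small sketch, on a hypothetical control vector field g that is not the one from the lecture: with g = (0, 1, x1)^T, the PDE L_g phi = d(phi)/dx2 + x1 d(phi)/dx3 = 0 is satisfied by the guess phi = x3 - x1*x2, because the two terms cancel.

```python
import numpy as np

def grad(phi, x, h=1e-6):
    """Numerical gradient (row vector d(phi)/dx) via central differences."""
    n = len(x); g = np.zeros(n)
    for j in range(n):
        e = np.zeros(n); e[j] = h
        g[j] = (phi(x + e) - phi(x - e)) / (2 * h)
    return g

def lie_derivative(phi, gfield, x):
    """L_g phi (x) = (d(phi)/dx) g(x)."""
    return grad(phi, x) @ gfield(x)

# Hypothetical vector field, NOT the system from the lecture:
gfield = lambda x: np.array([0.0, 1.0, x[0]])
# Guessed phi, chosen so the terms of L_g phi cancel: -x1 + x1 = 0.
phi = lambda x: x[2] - x[0] * x[1]

for p in [np.array([0.5, -1.2, 2.0]), np.array([3.0, 0.1, -0.4])]:
    print(abs(lie_derivative(phi, gfield, p)) < 1e-8)  # True at every point
```

This is exactly the "cancel appropriate terms" style of guessing described above, just checked numerically rather than by hand.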
Because we are trying to answer the original question: when is it possible to completely feedback linearize a system? Until now we have been looking at relative degree less than n. The Frobenius theorem is useful even then, but it becomes especially useful for figuring out when the relative degree can be made equal to n, which means the entire system looks like a linear system after a suitable state transformation. That is the question we are trying to answer, and the Frobenius theorem is what lets you answer it; that is why we are moving towards it. So we need one definition and one theorem. Definition: the non-singular distribution delta generated by the k vector fields f_1, ..., f_k is said to be completely integrable on some open set in the state space if there exist n - k annihilators. If you remember your vector space course, annihilators are the linear functionals that vanish on the elements of the subspace, and the annihilator of a k-dimensional subspace of an n-dimensional space always has dimension n - k; that is how it works. In this case you have k vector fields and the distribution is non-singular, which means delta has dimension k at every point p. Therefore we ask for n - k functions h_1, ..., h_{n-k}; notice that these are real-valued, scalar functions. How are they defined? They have to satisfy L_{f_i} h_j = (del h_j / del x) f_i = 0, which is exactly an annihilation condition.
So here it is not h itself that is the annihilator: the annihilating object is dh, the partial derivative of h, because h is a scalar-valued function, and when I take its partial with respect to x I get a row vector of dimension 1 x n. That row vector is the annihilating covector. Further, we require that these dh_j (with dh the notation we have already defined) be linearly independent. If these conditions are satisfied, the distribution is said to be completely integrable. This is just a definition, so keep it in mind; we do not want to go into more detail than this. To summarize: for a distribution generated by k vector fields, I want the existence of n - k functions such that dh_j multiplied by f_i is 0 for all i and j, with i ranging from 1 to k and j from 1 to n - k, and the dh_j have to be linearly independent. As soon as I say the dh_j are linearly independent, you should be reminded of Lemma 0.2; if I scroll back, in Lemma 0.2 we were also asserting that certain row vectors are linearly independent. So it already looks like there is a connection between what we want here and what we were looking for there. Alright. Then, in order to actually verify this result, the Frobenius theorem, we need the inverse function theorem, and alongside it the implicit function theorem.
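Both conditions in the definition, annihilation (dh_j . f_i = 0) and linear independence of the dh_j, can be checked numerically. A toy sketch with my own example (not from the lecture): n = 3, k = 1, delta = span{f_1} with f_1 = (1, 0, x2)^T, and the n - k = 2 annihilating functions h_1 = x2 and h_2 = x3 - x1*x2.

```python
import numpy as np

def grad(h, x, eps=1e-6):
    """Row vector dh = dh/dx via central differences."""
    n = len(x); g = np.zeros(n)
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        g[j] = (h(x + e) - h(x - e)) / (2 * eps)
    return g

# n = 3, k = 1: Delta = span{f1}, so we need n - k = 2 annihilating functions.
f1 = lambda x: np.array([1.0, 0.0, x[1]])
h1 = lambda x: x[1]                 # dh1 = (0, 1, 0)
h2 = lambda x: x[2] - x[0] * x[1]   # dh2 = (-x2, -x1, 1)

p = np.array([0.7, -1.3, 0.4])
dH = np.array([grad(h1, p), grad(h2, p)])   # (n-k) x n matrix with rows dh_j
ann = dH @ f1(p)                            # both entries should vanish
print(np.allclose(ann, 0.0))                # True: dh_j . f1 = 0
print(np.linalg.matrix_rank(dH) == 2)       # True: dh1, dh2 linearly independent
```

For dh_2 . f_1 the cancellation is -x2 * 1 + 1 * x2 = 0, which is what complete integrability demands at every point, not just at p.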
These two theorems are such powerful results that all of you doing anything in systems and control should just read and understand them: the implicit function theorem and the inverse function theorem. The implicit function theorem is a byproduct of the inverse function theorem, and the statement is very straightforward. The inverse function theorem says that if you have a continuously differentiable function T from an open set in R^n to R^n, and the total derivative DT(p) is invertible at some point p, then the function is invertible not just at p but in a neighborhood of p. Remember the connection I made when I talked about diffeomorphisms: a diffeomorphism is essentially the equivalent of a similarity transformation for nonlinear systems. How did I verify that something is a diffeomorphism? I took its Jacobian and required the Jacobian to be invertible. This is where all that comes from: if the Jacobian, that is DT, is invertible at a particular point, then there exists a neighborhood around p in which the entire function is invertible. It is almost like what we do with linearization in controls: we take a nonlinear system, linearize it around an equilibrium point, look at the A and B matrices of the linearization, and if they have a nice property, for example A is a Hurwitz matrix, we say the system is locally exponentially or locally asymptotically stable. In the same spirit, the derivative of a map, its Jacobian, tells you something about the invertibility of the map itself, and this is why these results are so powerful and applicable in so many different places.
So that is what the theorem is saying: if you have a continuously differentiable function from one open set in R^n to another, and the Jacobian is invertible at some point p (you only have to check at that one point), then there exists a neighborhood around p in which the function itself is invertible; not just the Jacobian, which was already invertible, but the function on that entire set. And not only is it invertible: the inverse is also continuously differentiable. Similarly, if you start with a smooth map, the inverse will also be smooth, and this is exactly what we use to define a diffeomorphism. By the way, DT can be invertible only if the map goes from a space to another of the same dimension, which is why we always talk about maps from R^n to R^n; the same holds for a diffeomorphism, which is a state transformation, so the number of states on the left has to equal the number of states on the right. How do you check that a map is a diffeomorphism? Just compute its Jacobian; if the Jacobian is invertible, you are good to go, it is a diffeomorphism, and these are the admissible state transformations, analogous to similarity transformations. Excellent. Now we are ready to state the Frobenius theorem. Before I state the theorem itself, I will tell you that I am not going to do the proof extensively; the proof is written here, and if we end up having time I might do a separate session just for the proof. The proof is quite involved, but I encourage you to read it. I will just give you a short sketch; we do not expect you to reproduce the proof itself, so I am not going to cover it now, but later, if there is extra time, I will do a separate session on it.
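The "Jacobian invertible at p implies locally invertible near p" recipe can be illustrated on a small hypothetical map (my own example, not from the lecture): T(x1, x2) = (x1 + x2^3, x2) has Jacobian [[1, 3*x2^2], [0, 1]] with determinant 1 everywhere, so the inverse function theorem applies at every point, and here the inverse even exists globally and can be written down.

```python
import numpy as np

# Hypothetical smooth map T: R^2 -> R^2 (illustrative, not from the lecture):
T = lambda x: np.array([x[0] + x[1]**3, x[1]])
# Its Jacobian is [[1, 3*x2^2], [0, 1]], with det = 1 at every point,
# so by the inverse function theorem T is invertible near any p.
# In this simple case the inverse is explicit:
Tinv = lambda y: np.array([y[0] - y[1]**3, y[1]])

p = np.array([0.4, -1.1])
J = np.array([[1.0, 3 * p[1]**2],
              [0.0, 1.0]])
print(abs(np.linalg.det(J)) > 1e-12)  # True: DT(p) is invertible
print(np.allclose(Tinv(T(p)), p))     # True: T is invertible around p
```

This is exactly the diffeomorphism check described above: one Jacobian computation at a point certifies an invertible, continuously differentiable change of coordinates near that point.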
Actually, Vivek's notes have done a very good job here, because it is usually not easy to find a proof of the Frobenius theorem set in Euclidean space. If you remember, I told you the reference was Alessandro Astolfi's book; most notes out there, whenever they discuss the Frobenius theorem, work on manifolds rather than R^n, and then there is a lot more notation and careful bookkeeping. Here everything is proved in R^n, which is rather nice, so please go through the proof if you get a chance. Let us look at the statement. Although it is long, it has just two pieces; as always, you start with a non-singular distribution of dimension k. Piece one: involutivity and complete integrability are equivalent. Piece two: if you have involutivity, then there exists a transformation under which delta looks like a span of unit vectors. I do not know if you can appreciate this yet, so compare with where we started: delta is the span of k vector fields evaluated at each point. We are saying something rather powerful: if delta is involutive on the state space, then in suitable coordinates delta is in fact the span of the unit vector fields E_1, ..., E_k. And a unit vector field E_i is constant: it is not affected by the point p, it is the same everywhere.
So it is pretty powerful: there exists a smooth change of coordinates under which the vector fields f_1, ..., f_k that you started with look exactly like E_1, E_2, ..., E_k. I hope you can appreciate the power of the result. And remember what these vector fields are: they are right-hand sides of differential equations. If you think of the control system x_dot = sum of u_i f_i, which is how we have been looking at it, then at every point the distribution tells you the directions in which you can move using this control. If the span of f_1, ..., f_k, some complicated nonlinear object, becomes just the span of E_1, ..., E_k, well, that span is simply the k-dimensional subspace R^k sitting inside R^n; so you can immediately say that at every point of the state space you can move within a k-dimensional subspace. That is pretty powerful. Also, if you replace f_1 by E_1, f_2 by E_2, f_3 by E_3, the dynamics almost look like a chain of integrators, x1_dot = x2, x2_dot = x3; well, maybe not exactly, so let us not worry about that. The key point is that you are reducing this rather complicated nonlinear picture, where at every point p you get a subspace of a different shape, to a single very nice flat subspace.
So this is the power of the Frobenius theorem. One: if you have involutivity of delta, then at every point you get this very nice flat picture, the k-dimensional subspace R^k inside R^n. Two: involutivity and complete integrability are identical, provided delta is non-singular. And that is pretty cool, because involutivity is an algebraic computation: you are just doing Lie brackets, which are easy to verify, and by doing that you actually obtain complete integrability. When we say delta is completely integrable, you actually get the functions h_1, ..., h_{n-k} whose differentials act as annihilators; that is exactly what complete integrability means. These are what will become our new coordinates.

Now the proof; I will just sketch it. It goes in three steps. In step 1, starting from involutivity, the given fields f_1, ..., f_k are replaced by new vector fields that agree with the unit vectors E_1, ..., E_k in their first k components, but with some extra terms in the remaining components; not quite E_1, ..., E_k themselves. It does this in a rather nice way, though not in full completeness. The other thing step 1 does, and let me be precise, this is really its main content, is show that these new vector fields commute, meaning their pairwise Lie brackets are 0; that is how commuting vector fields are defined. It is just like the matrix case, where A and B commute when AB - BA = 0; in fact, for linear systems x_dot = Ax and x_dot = Bx, if I take f_1 = Ax and f_2 = Bx, the Lie bracket is exactly (AB - BA)x. It is easy to compute; try it as a thought experiment. So the f_i, f_j commute: when you started with f_1, ..., f_k this was not evident, but after the transformation it is. What do we do in the second step? The second step is the complicated one: showing that f_1, ..., f_k become equivalent to E_1, ..., E_k is done there. Look at the last line of step 2: f_1, ..., f_k are mapped to the constant distribution E_1, ..., E_k. What the second step does is actually construct the state transformation you need to go from the f_1, ..., f_k vector fields to the E_1, ..., E_k vector fields. It is rather involved because it uses flows and related tools that I have not really introduced; again, if time permits we will do a separate session on this, but not right now. All I am telling you is what each step accomplishes. The final step: once you know your f_1, ..., f_k are smoothly mapped to E_1, ..., E_k, you come up with the h functions for the E_1, ..., E_k system. We went from the f fields to the E fields; now, to prove complete integrability, I need h_1, ..., h_{n-k} that annihilate the distribution, and this is much easier in the new coordinates, because annihilating E_1, ..., E_k is much easier. All I do is pick this
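The claim that the Lie bracket of two linear vector fields is the matrix commutator applied to x can be checked directly. A small sketch with random matrices; note this uses the sign convention [f, g](x) = (df/dx) g(x) - (dg/dx) f(x), which is the one matching the "AB - BA" statement above (the opposite convention, also common in textbooks, gives (BA - AB)x).

```python
import numpy as np

# Linear vector fields f(x) = A x and g(x) = B x.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

f = lambda x: A @ x
g = lambda x: B @ x

# df/dx = A and dg/dx = B everywhere, so with the convention
# [f, g](x) = (df/dx) g(x) - (dg/dx) f(x):
bracket = A @ g(x) - B @ f(x)
print(np.allclose(bracket, (A @ B - B @ A) @ x))  # True: bracket = (AB - BA) x
```

In particular the two vector fields commute (bracket identically zero) exactly when the matrices commute, which is the "thought experiment" suggested above.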
annihilating functions as h_j(x) = x_{j+k}, for j from 1 to n - k. I hope this is not already getting too complicated; just look at the coordinates x_1, ..., x_k, x_{k+1}, ..., x_{k+(n-k)}. Let us go back to what we did in feedback linearization and forget the annihilator business for a moment. Whatever the relative degree r of the system was, we were choosing the remaining n - r coordinates on our own, so that the whole map becomes a diffeomorphism, and in the normal form, of course, L_g phi = 0. It is almost exactly like that: the condition here looks just like L_g phi, with the roles flipped, E_i playing the part of the vector field g and h_j the part of the function phi. Even if I go back to L_f h, it looks exactly like L_g phi; that is precisely how I was choosing the new coordinates so that the control does not appear. You make some L_g phi equal to 0; here the vector field is E_i and the function is h_j. So all we are doing is choosing h_j in a smart way so that its partial derivative multiplied by E_i is 0. Now, i ranges from 1 to k, and E_i has a 1 in the ith position and 0 everywhere else, with i at most k. If I choose h_j = x_{j+k}, then its partial with respect to x, a row vector, has a 1 in the (j+k)th position and 0 everywhere else. So dh_j has its 1 in the (j+k)th position, E_i has its 1 in the ith position, and i is less than or equal to k; therefore, since dh_j times E_i is a dot product, the two can never have a 1 in the same position. That is how I chose it: E_2, say, has its 1 in the second position, while dh_j has its 1 in the (j+k)th position, so the dot product has to be 0. That is it; I am just making the smart choice. Why do I do it in the E_1, ..., E_k coordinates? Because it is easier: they are just unit vectors, so it is much easier to construct the annihilator there. Once I have constructed this annihilator in the E_i coordinates, I can take it back to the original coordinates, the f_1, ..., f_k system, just by applying the transformation; that transformation comes from step 2, which is why that step is a bit complicated. And it is pretty easy to show that after this backward transformation the same property holds, (del h_j / del q) f_i(q) = 0, because I moved back and forth with the same coordinate change. So: I got a very nice coordinate change, constructed an annihilator in the new coordinates, then took that annihilator back to the original coordinates. That is it. The process is simple; the notation is complicated. And so what have I done? I started with involutivity and proved complete integrability.
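The positional argument above (the 1s of dh_j and E_i can never coincide) is easy to see in code. A tiny sketch with hypothetical dimensions n = 5, k = 2 of my own choosing: after the change of coordinates, delta = span{E_1, ..., E_k}, and the choice h_j(x) = x_{j+k} puts the 1 of dh_j in position j + k > k >= i.

```python
import numpy as np

n, k = 5, 2  # hypothetical dimensions; Delta = span{E_1, ..., E_k} in the new coordinates

def e(i):
    """Unit vector E_i: 1 in the ith position (1-indexed), 0 elsewhere."""
    v = np.zeros(n); v[i - 1] = 1.0
    return v

def dh(j):
    """h_j(x) = x_{j+k}, so dh_j is the unit row vector with 1 in position j + k."""
    return e(j + k)

# Every dh_j annihilates every E_i: the 1s sit in different positions.
ok = all(dh(j) @ e(i) == 0.0
         for j in range(1, n - k + 1)
         for i in range(1, k + 1))
print(ok)  # True

# And the dh_j are linearly independent (distinct unit row vectors):
dH = np.array([dh(j) for j in range(1, n - k + 1)])
print(np.linalg.matrix_rank(dH) == n - k)  # True
```

So in the E coordinates both requirements of complete integrability, annihilation and independence of the dh_j, hold by construction, which is the whole point of doing the work there before transporting the annihilators back.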