Here is a good plan: I know I had given this as homework, but let us just do it now, here in class; we can assign something else for the homework. This is basically a DC motor model, and what the homework asked was to verify the two lemmas, 0.1 and 0.2, for it. So let us see if we can do this on our own and whether we remember all the ingredients of these lemmas. I think that is visible; you can see there is some nonlinearity here and here. Let us not worry about what the parameters are, what theta is, what a and b are, and so on; that is not our concern right now. We just want to verify the lemmas, and we are also given an output with respect to which we want to work. First, what was Lemma 0.1? It says: if L_g h = L_g L_f h = ... = L_g L_f^(r-2) h = 0, then you can claim that L_(ad_f^k g) h = 0 for k = 0 to r - 2; I have just specialized it to relative degree r. So the first thing to do, obviously, is to compute these. Let us start. What is f, the drift vector field? Notice the second term in the first equation is the control input, and that other coefficient is just some constant k. So f is: first component -a x1, second component -b x2 + k - c x1 x3, third component theta x1 x2; everything without the control. And g is (1, 0, 0). Excellent.
And what is h(x)? It is x3. Of course, one simple way to find the relative degree is to just start taking derivatives of the output directly, but unfortunately I have asked you, or asked myself in this case, to verify the lemmas, so we will actually have to compute all of this. Notice that the first term in the hypothesis is L_g h: if I put k = 0 there, it is just L_g h. So let us compute L_g h first. I take the partial of h with respect to x, which is (0, 0, 1), and g is (1, 0, 0), so obviously L_g h = 0. Done, first one. Next, L_f h: again (0, 0, 1) multiplied into that mess (-a x1, -b x2 + k - c x1 x3, theta x1 x2), which picks out the third component, so L_f h = theta x1 x2. Now we do L_g(L_f h), or L_g L_f h without the bracket, that is fine; it is the same operation. To compute it, I take the partial of theta x1 x2 with respect to x, which is (theta x2, theta x1, 0), and multiply by g. Now, thank God, I finally got something nonzero: theta x2. So what is the relative degree? The terms L_g L_f^k h have to be 0 for k up to r - 2, and here only L_g h is 0 while L_g L_f h = theta x2 is not. So that one vanishing term has to be L_g L_f^(r-2) h, meaning r - 2 = 0, i.e., r = 2. Just the first term is 0, and that is all we need.
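Since the board is hard to read, here is a minimal sympy sketch of these Lie-derivative computations. The model x-dot = f(x) + g u with f = (-a x1, -b x2 + k - c x1 x3, theta x1 x2), g = (1, 0, 0), h = x3 is taken from the derivation above; the exact parameter names a, b, c, k, theta are an assumption about the board notation.

```python
# Verifying L_g h, L_f h, and L_g L_f h symbolically (sketch, assumed model).
import sympy as sp

x1, x2, x3, a, b, c, k, theta = sp.symbols('x1 x2 x3 a b c k theta')
x = sp.Matrix([x1, x2, x3])

f = sp.Matrix([-a*x1, -b*x2 + k - c*x1*x3, theta*x1*x2])  # drift vector field
g = sp.Matrix([1, 0, 0])                                   # input vector field
h = x3                                                     # output

def lie(V, phi):
    """Lie derivative of the scalar phi along the vector field V."""
    return (sp.Matrix([phi]).jacobian(x) * V)[0]

Lg_h   = lie(g, h)      # partial h = (0,0,1), dotted with g -> 0
Lf_h   = lie(f, h)      # picks out the third component: theta*x1*x2
LgLf_h = lie(g, Lf_h)   # theta*x2, nonzero -> relative degree r = 2

print(Lg_h, Lf_h, LgLf_h)
```

The same `lie` helper reproduces every computation in the lecture; only the model definition is an assumption.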
So then I also have to compute L_(ad_f^0 g) h, but this is actually equal to L_g h, which is already proven to be 0. So Lemma 0.1 is proved; too easy, because I did not have to compute any further derivatives at all. What was Lemma 0.2? The linear independence of dh, dL_f h, and so on, up to dL_f^(r-1) h. How far do I have to go in our case? Only to r - 1 = 1, so I just have to compute dh and dL_f h. What is dh? We already did this: (0, 0, 1). What is dL_f h? We computed that too: (theta x2, theta x1, 0). Is this pair linearly independent? Let us be precise: if you remember, we always talked about this linear independence at a particular point x0. In this case, the pair has rank 2 for all (x1, x2) not equal to (0, 0). Notice x3 can be anything, it is irrelevant, but x1 and x2 cannot both be 0; and of course you can assume theta is not 0, otherwise it is a stupid system. So as long as x1 and x2 are not simultaneously 0, this has rank 2, and we are done: we have coordinates. What are our coordinates now? y1 = h(x) = x3, and y2 = L_f h(x) = theta x1 x2; that is what we chose. And that is it, you only get two coordinates, because we get them up to r - 1, which is just 1 in our case, so we get h and L_f h. Now what is the third coordinate? I do not know what to choose a priori; I just want to make the whole thing a diffeomorphism. As of now the Jacobian has rows (0, 0, 1) and (theta x2, theta x1, 0). What should I choose as my third coordinate? Any guesses? Basically I need it to be linearly independent from the gradients of y1 and y2; in the Jacobian it should contribute a third dimension. Something like (-x1, x2, 0) will work, because it makes the rows orthogonal: the dot product with (theta x2, theta x1, 0) is -theta x1 x2 + theta x1 x2 = 0. I just used that idea. If this is the Jacobian, then I am good, because it will have rank 3: the determinant is theta (x1^2 + x2^2), so as long as we are not at the origin, it is okay, and that is the best I can do. So what is the y3 such that this is its gradient? Its partial with respect to x1 is -x1 and its partial with respect to x2 is x2, so y3 = x2^2 / 2 - x1^2 / 2. I sort of back-calculated it. There is no point adding anything in the third entry: you see the trick I am playing here, I am just adding a coordinate so that I get a diffeomorphism, and the first row already has full rank in that direction, so adding anything there is pointless. And how do you make two vectors linearly independent? The best way is to make them orthogonal, which is exactly what I did with the halves of the squares. Look at these very crazy-looking coordinates I got; unusual. I started with a very nice-looking output, and I ended up with something that almost looks like a Lyapunov candidate in the first two states. But whatever, this is a fair set of new coordinates. Now what will happen if I actually write
the dynamics? Careful with the signs: wait till the end, the minus is on the x1^2 term, so y3 is not a Lyapunov function after all; excellent, I like that. Funny system. I wonder, could the third coordinate be simpler? It could, but what a simpler choice will do is restrict the set of (x1, x2) on which the transformation works. Right now it works for a very good class: whenever at least one of x1, x2 is nonzero, which was exactly the assumption for the rank condition anyway. That is pretty good, because usually I want to drive my systems to 0, so it works almost all the way to the end; only exactly at the origin does it fail. If I tried something simpler, for example if I removed the -x1 entry and made it 0, what would that do? The determinant would become theta x2^2, so x2 alone would have to be nonzero. You can think of other fun choices, but let us try to compute the dynamics in the new coordinates and see what we get. y1-dot is x3-dot, and x3-dot is theta x1 x2, which is exactly y2. That was expected: y1-dot = y2 is the whole point of this exercise. What is y2-dot? It is the derivative of theta x1 x2, so theta x1-dot x2 + theta x1 x2-dot = theta (-a x1 + u) x2 + theta x1 (-b x2 + k - c x1 x3). The only notable thing is that my control appears here. Again, it is very difficult to actually rewrite this in terms of y1, y2, and y3, so I am not even going to attempt that, but the control appears here, and that is the good thing. Then if I do y3-dot, with y3 = x2^2 / 2 - x1^2 / 2, the derivative is -x1 x1-dot + x2 x2-dot = -x1 (-a x1 + u) + x2 (-b x2 + k - c x1 x3). So again, if you see, the control appears here also; the control now appears in two equations. This could have been avoided: if I had gotten rid of this term and that term, the control would not appear here, and you would have the control in only one equation, which may have made control design easier. So choosing the third coordinate really depends on a lot of factors. Of course you are trying to make a diffeomorphism, but there are infinitely many choices, and then it is really about what is good for you. Actually, I did something interesting on the previous page, where I used the notation z1, z2, z3: z1 was x3, and of course z2 is theta x1 x2; in these we have no choice. But z3 I took as x2 - k/b. Why did I do that? Because then z1-dot is z2, z2-dot is whatever it is, and z3-dot is the nice expression -b z3 - c x1 x3, since x2-dot = -b x2 + k - c x1 x3 = -b (x2 - k/b) - c x1 x3. And if I choose z3 = x2 - k/b, then the third row of the Jacobian, the partial of z3, becomes (0, 1, 0), because k/b is just a constant. This is also not a bad choice if you think about it: it gave me a good extra row, and it is actually linear, and a linear transformation is always nice; starting from one coordinate, you move to another coordinate in a linear way. This is also a nice choice because it gave me a nice full rank, you
know, these first two rows are already linearly independent, with no conditions needed. But then for the third one... well, actually, in this case also it is not that nice; I apologize. You will still need x2 to be nonzero, because if x2 is 0, the theta x2 in the determinant goes away. You could put something in the third entry instead, but then it becomes the same situation. So this choice also needs x2 nonzero, whereas the earlier choice, even though it is ugly looking, only requires x1^2 + x2^2 nonzero. All right, so this is how you work with feedback linearization. Of all those complicated-looking expressions, eventually you only use some part; in this case we only used r = 2, so those expressions are significantly fewer in number, and the computations are exactly like this: you take partials, multiply, and so on. Any questions? Next we will see full-state feedback linearization, where the system is completely linearizable. Physical meaning? It has physical meaning in the sense that it describes how your output and input are connected: whatever your r is, the system has a subsystem which is an r-dimensional linear system; your larger system has additional states, but sitting in the middle is this r-dimensional linear system. There is also the related notion of differential flatness, which some of you have seen; unfortunately we will not have time to talk about it. Basically it says that you can take a certain number of flat outputs such that all the system states and inputs can be represented in terms of these flat outputs and their derivatives. So it is a similar notion: it is a property of the input-output system, somehow making part of the behavior linear. In the robot dynamics case, for example, the system is fully feedback linearizable, so it actually behaves like a double integrator; it behaves like a linear system and is particularly easy to control, particularly easy to work with. Other than that, I do not know of any other physical relevance. Controls folks attach physical relevance to sensors and actuators: if there is a linear connection between the sensor and the actuator, we really love it, because we can do so much. Most importantly, we can get performance; we can talk about transients. For nonlinear systems there is not much literature on transients, but for linear systems it is so mature: how much overshoot, how much damping, how the oscillations will look; these are critical values for us. That is the physical relevance.
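The two coordinate choices above can be checked symbolically as well. This is a sketch under the same assumed model as before (x1-dot = -a x1 + u, x2-dot = -b x2 + k - c x1 x3, x3-dot = theta x1 x2); it computes both Jacobian determinants and shows where the control u enters the transformed dynamics.

```python
# Comparing the two third-coordinate choices (sketch, assumed model).
import sympy as sp

x1, x2, x3, a, b, c, k, theta, u = sp.symbols('x1 x2 x3 a b c k theta u')
x = sp.Matrix([x1, x2, x3])
xdot = sp.Matrix([-a*x1 + u, -b*x2 + k - c*x1*x3, theta*x1*x2])

# choice 1: y = (h, L_f h, orthogonal extra coordinate)
y = sp.Matrix([x3, theta*x1*x2, x2**2/2 - x1**2/2])
Jy = y.jacobian(x)
det_y = sp.factor(Jy.det())        # theta*(x1**2 + x2**2): fails only at origin

# choice 2: z = (h, L_f h, x2 - k/b), a linear third coordinate
z = sp.Matrix([x3, theta*x1*x2, x2 - k/b])
det_z = z.jacobian(x).det()        # theta*x2: needs x2 != 0

# where does the control enter?
ydot = sp.expand(Jy * xdot)        # u shows up in ydot[1] and ydot[2]
zdot = sp.expand(z.jacobian(x) * xdot)   # zdot[2] = -b*z3 - c*x1*x3, no u

print(det_y, det_z)
print(ydot[1].coeff(u), ydot[2].coeff(u), zdot[2].coeff(u))
```

The coefficients of u come out as theta*x2 and -x1 for the first choice, matching the orthogonality in the control mentioned below, and 0 for z3-dot.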
Fundamentally, there are robustness issues with feedback linearization. It is not Lyapunov-based; as you can see, you are canceling things, and we will keep trying to cancel things. Even in the dynamics we saw at the end, you got this rather complicated-looking structure: the control appears here and here, as u x2 (with a theta) in y2-dot and as -u x1 in y3-dot. You see a sort of orthogonality in how the control enters; if you remember the Lyapunov function business, if you take x1^2 + x2^2, the two control terms sort of cancel in some sense. Of course one of them carries a theta, but that does not matter. You will also see that we are not asking for x1 and x2 individually to be nonzero, only at least one of them. If one of them is nonzero, say x2 is nonzero and x1 is 0, then you cannot do anything with that state anyway, but the rest you can make linear by canceling all the nonlinearities. Because it is a relative degree 2 system, you can only get a two-dimensional linear system. Similarly, if x1 is nonzero, that part of the system can be made linear while the rest remains whatever it is; again you get a two-dimensional subsystem which is linear. That is the whole idea. Any further questions? All right, we will stop here.
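As a closing check, the cancellation described above is the standard input-output linearizing feedback u = (v - L_f^2 h) / (L_g L_f h), valid wherever L_g L_f h = theta x2 is nonzero. This is a sketch under the same assumed model, not something written on the board; it verifies that the feedback turns the r = 2 subsystem into a double integrator y1-double-dot = v.

```python
# Input-output linearizing feedback for the r = 2 subsystem (sketch).
import sympy as sp

x1, x2, x3, a, b, c, k, theta, v = sp.symbols('x1 x2 x3 a b c k theta v')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([-a*x1, -b*x2 + k - c*x1*x3, theta*x1*x2])
g = sp.Matrix([1, 0, 0])

def lie(V, phi):
    """Lie derivative of the scalar phi along the vector field V."""
    return (sp.Matrix([phi]).jacobian(x) * V)[0]

h = x3
Lf_h   = lie(f, h)
Lf2_h  = lie(f, Lf_h)
LgLf_h = lie(g, Lf_h)            # theta*x2, must be nonzero

u_fb = (v - Lf2_h) / LgLf_h      # cancels the nonlinearities
ydd = sp.simplify(Lf2_h + LgLf_h * u_fb)
print(ydd)                       # v: closed loop is y1'' = v
```

The division by theta x2 makes the singularity at x2 = 0 explicit, which is exactly the robustness caveat raised above: the cancellation only works away from that set.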