So, we specified this condition for stability of linear systems, connected to the state transition matrix, and let us see how to prove it. We are claiming the two are equivalent. As I said, in a lot of linear systems books you will not see the epsilon-delta definition at all; you will see this norm bound given as the definition of internal stability. So let us see why it is equivalent. The solution of a linear system is written in terms of the state transition matrix as x(t) = Phi(t, t0) x0, where x0 is the given initial condition.

Start by assuming the norm bound is true: ||Phi(t, t0)|| <= k(t0) for all t >= t0. We want to prove that the system is stable in the epsilon-delta sense we just defined. If the bound holds, then ||x(t)|| = ||Phi(t, t0) x0|| <= ||Phi(t, t0)|| ||x0||, just by the induced norm inequality; we are not doing anything fancy here. Since ||Phi(t, t0)|| <= k(t0), we get ||x(t)|| <= k(t0) ||x0||. Now, given an epsilon, choose delta = epsilon / k(t0). If ||x0|| < delta, then ||x(t)|| <= k(t0) ||x0|| < k(t0) (epsilon / k(t0)) = epsilon, and I am done. In fact the final inequality is strict, not less-than-or-equal, because the initial condition is strictly less than delta. Keep track of the strict inequalities: in the stability definition, everything is strictly less than (or strictly greater than).
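As a sanity check on this direction (not from the lecture), here is a minimal numerical sketch in Python, assuming a made-up diagonal system where Phi(t, t0) is known in closed form:

```python
import numpy as np

# Toy diagonal system x' = A x with A = diag(-1, -2), so
# Phi(t, t0) = diag(exp(-(t - t0)), exp(-2(t - t0))) and ||Phi(t, t0)|| <= 1.
def phi(t, t0):
    return np.diag([np.exp(-(t - t0)), np.exp(-2.0 * (t - t0))])

t0 = 0.0
x0 = np.array([0.3, -0.4])
t = 1.5
x_t = phi(t, t0) @ x0

# Induced-norm inequality: ||x(t)|| <= ||Phi(t, t0)|| * ||x0||.
assert np.linalg.norm(x_t) <= np.linalg.norm(phi(t, t0), 2) * np.linalg.norm(x0) + 1e-12

# Here ||Phi(t, t0)|| <= k(t0) = 1, so given eps choose delta = eps / k.
eps, k = 0.1, 1.0
delta = eps / k
x0_small = (delta / 2) * np.array([1.0, 0.0])   # ||x0_small|| = delta/2 < delta
for s in np.linspace(0.0, 10.0, 100):
    # every trajectory starting inside the delta ball stays inside the eps ball
    assert np.linalg.norm(phi(t0 + s, t0) @ x0_small) < eps
```

The matrix A and the numbers here are illustrative choices, but the delta = epsilon / k recipe is exactly the one from the proof.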
So, delta is strictly positive, epsilon is strictly positive, and if the initial condition satisfies ||x0|| strictly less than delta, then all trajectories satisfy ||x(t)|| strictly less than epsilon. Since ||x0|| < delta strictly, the resulting bound on ||x(t)|| is a strict inequality as well; keep track of these, they are sort of important. I am not going to discuss at length why, but we like to work with open balls, or open sets. The set where ||x(t)|| < epsilon is an open set, whereas the set where ||x(t)|| <= epsilon is a closed set, and we do not like working with closed sets. Even later, when we do geometric control and so on, we will avoid manifolds or any spaces with boundaries: as soon as you have a boundary, it impacts differentiability, and what happens on the boundary is an annoying thing we prefer not to consider. With an open set there is no actual boundary; trajectories can go all the way up to, but never reach, epsilon, and we are fine. Keep this in mind; it is good to be precise sometimes. The important thing to remember is that it is very easy to choose a delta given an epsilon, and I think we have done enough examples for you to get a feel for this. You write the solution, put an inequality on it, and ask what you need: I needed k(t0) ||x0|| to be less than epsilon, and from that I can directly read off that ||x0|| has to be smaller than epsilon / k(t0). That is exactly where the choice of delta came from.
If I take a quantity that is possibly larger than ||x(t)|| and make that quantity less than epsilon, then it is guaranteed that ||x(t)|| is also less than epsilon. I have just used these inequalities smartly to my advantage, and this is how you always find a delta given an epsilon. If you get a problem on stability, this is what you do: write the solution, upper bound the solution (or the solution itself) by epsilon, and solve for the allowed size of ||x0||. The solution will always contain ||x0||; without an initial condition there is no solution. The only special thing in the nice linear case is that the initial condition appears linearly. That is one of the outcomes of linearity, and it will not necessarily happen in the nonlinear case.

So one direction was easy. The other way around, if I assume stability holds and want to prove the norm bound, we have to make some interesting moves. If stability holds, then, being precise: given an epsilon > 0, there exists a delta > 0 (potentially depending on the initial time t0 and on epsilon) such that if the initial condition lies in a delta ball, the solution lies in an epsilon ball. This is exactly the definition, copied. Now I say something interesting: I fix a time t_a and choose a vector x_a such that ||Phi(t_a, t0) x_a|| = ||Phi(t_a, t0)|| ||x_a||. Before asking why this is possible, notice what I did not say: I did not say x(t_a) = x_a or anything like that.
So Phi(t_a, t0) x_a is not the solution at time t_a or anything like that; it is just the product of the state transition matrix with some vector x_a. What exactly is happening here? The first thing I did is fix a time, so that Phi(t_a, t0) becomes a constant matrix. Then comes the claim. By definition, the induced norm of a constant matrix A is the supremum of ||A x|| / ||x|| over all possible x, so for an arbitrary x we always have ||A x|| <= ||A|| ||x||. But I am claiming that there exists an x_a for which this holds with equality. Why do you think I can do that? I took a supremum, and remember what a supremum is: a least upper bound, which does not have to be in the set. We saw such an example with f(x) = 1 - e^(-x): the set of values it takes on [0, infinity) is [0, 1), everything from 0 up to but not including 1, and the supremum is exactly 1, which is not in the set. So why does that failure not happen in this case? How can I get an exact equality here?

One way to convince yourself (none of this is a proof, by the way) is to notice that in that example I constructed a weird set: I made it open on one side, deliberately, so that the supremum fails to be attained. In the present case you are taking the supremum over all of R^n, which is both open and closed and has all the nice properties you want. The second thing that should help convince you is that we have explicit formulae for ||A|| which are independent of x; a supremum is anyway expected to be independent of x, but here there are formulae that give the norm exactly. Again, this is not a proof. If you ask me for a proof, it has to be based on the idea that R^n has this nice Banach space type property: R^n is a Banach space, indeed a Hilbert space, with all the good properties we talked about. Because you are taking all of R^n and not making any funny sets, the supremum in the induced norm definition is attained. So given any constant matrix you will always be able to find an x_a achieving the equality; basically, the sup becomes a max. That is what we are saying.
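The attainment claim is easy to check numerically for the induced 2-norm, where the maximizer comes from the singular value decomposition. A small sketch, with a random matrix standing in for the constant matrix Phi(t_a, t0):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))    # stands in for the constant matrix Phi(t_a, t0)

# Induced 2-norm: sup of ||A x|| / ||x|| over x != 0. Over R^n the sup is
# attained; the maximizer is the right singular vector of the largest
# singular value.
U, s, Vt = np.linalg.svd(A)
x_a = Vt[0]                        # unit vector achieving the supremum

norm_A = np.linalg.norm(A, 2)      # spectral norm, equals s[0]
assert np.isclose(norm_A, s[0])
# the induced-norm inequality holds with equality for this x_a:
assert np.isclose(np.linalg.norm(A @ x_a), norm_A * np.linalg.norm(x_a))
```

This is the "sup becomes a max" statement in concrete form: the supremum over the unit sphere is attained at the top right singular vector.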
So, that is basically the idea, and that is what you rely on. Once you have such an x_a giving the equality (here I have just said it exists by the definition of the induced norm, but as we discussed it takes a little more than that), the good thing is that you can actually play with the system. Do not worry about where this is going; the loop will close and you will see how things work out. The clinching step is this: I construct an initial condition x0 = (delta(t0)/2) x_a / ||x_a||. What does this give me? If I take the norm, the ||x_a|| factors cancel out and ||x0|| = delta(t0)/2, so the initial condition is strictly inside the delta ball. I have constructed x0 in this funny way using the delta that I got from the stability definition, so I know that ||x0|| < delta, which means the trajectory corresponding to this x0 satisfies ||x(t)|| < epsilon for all t. Now I do not compute x(t) for arbitrary t; I compute x(t_a), which is Phi(t_a, t0) x0 = (delta / (2 ||x_a||)) Phi(t_a, t0) x_a. The factor delta / (2 ||x_a||) is a scalar, and the norm of this whole product is less than epsilon by my stability assumption.
So ||Phi(t_a, t0) x0|| < epsilon by the assumption of stability. The scalar goes out, and this product is the one I have already claimed equals ||Phi(t_a, t0)|| ||x_a||, because I chose x_a in that very special way. So (delta / (2 ||x_a||)) ||Phi(t_a, t0)|| ||x_a|| < epsilon. You can see I am already close to the end, because the quantity I want to bound, ||Phi(t_a, t0)||, now appears directly. The ||x_a|| factors cancel out, and that is the nice thing: x_a plays no role anymore, and I get the norm bound ||Phi(t_a, t0)|| < 2 epsilon / delta(t0), which is some k(t0). Now you might say that I took one particular t_a; remember I fixed t_a to begin with, so I have only proved the bound for that t_a. But take some other t_a', do the same arguments again, and you get the same inequality again. In fact nothing changes; the right hand side does not contain t_a or x_a or anything like that, since everything we introduced has dropped out of the right hand side. Therefore you can keep changing t_a to t_a', t_a'', whatever different choices you like, and the right hand side does not change, which means the bound holds for an arbitrary choice of t. So you have proved the other side of the argument as well. A little bit involved, but the only important thing here is the existence of an x_a such that the equality holds.
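The chain of inequalities in this step can be written out compactly (Phi is short for Phi(t_a, t0), delta for delta(t0)):

```latex
\[
\begin{aligned}
x_0 &= \frac{\delta}{2}\,\frac{x_a}{\|x_a\|}, \qquad
\|x_0\| = \frac{\delta}{2} < \delta
\quad\Longrightarrow\quad \|\Phi\,x_0\| < \epsilon ,\\[4pt]
\|\Phi\,x_0\| &= \frac{\delta}{2\|x_a\|}\,\|\Phi\,x_a\|
= \frac{\delta}{2\|x_a\|}\,\|\Phi\|\,\|x_a\|
= \frac{\delta}{2}\,\|\Phi\| ,\\[4pt]
\therefore\quad \|\Phi\| &< \frac{2\epsilon}{\delta} =: k(t_0).
\end{aligned}
\]
```

The middle equality is exactly where the special choice of x_a is used; for a generic vector it would only be an inequality in the wrong direction for this argument.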
All of this works out, again, because R^n is a very very nice vector space. If you do not have a nice vector space it can fail, but we do not work with the non-nice ones; we already said we are working in an inner product space where Cauchy convergence is the same as convergence, that is, a complete space. So we are already sitting in a very nice vector space, and having this kind of attainment property is actually not so unusual.

What about uniform stability? Nothing changes; you get the same kind of result. One direction is anyway too simple: under uniform stability this k will be independent of t0, because uniformity removes the dependence on the initial time, so the bound becomes just a constant k, the same for all t0. Once you have that, going from the bound to stability is very easy: since k is independent of t0, delta = epsilon / k is independent of t0, and you are done. On the other side, if you assume uniform stability, the delta you start with is independent of t0, so the constructed x0 has no t0 in it, and the resulting k = 2 epsilon / delta has no t0 either. The t0 dependence simply vanishes everywhere, so you again get a k independent of t0, and it works out on both sides. Very simple, which is why I am not giving a separate proof; all you have to do is remove the t0's from the proofs above.

Finally, for linear systems, asymptotic stability is actually equal to stability plus this sort of convergence: Phi(t, t0) goes to 0 as t goes to infinity.
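For a time-invariant system the uniformity is easy to see numerically: Phi(t, t0) depends only on the elapsed time t - t0, so one constant k works for every t0. A sketch, assuming a made-up Hurwitz matrix A and computing Phi by eigendecomposition:

```python
import numpy as np

# Time-invariant example: Phi(t, t0) = expm(A (t - t0)) depends only on t - t0,
# so any bound k on ||expm(A s)|| over s >= 0 works uniformly in t0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # eigenvalues -1 and -2: Hurwitz, diagonalizable
w, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

def phi(t, t0):
    """State transition matrix via eigendecomposition (valid since A is diagonalizable)."""
    return (V @ np.diag(np.exp(w * (t - t0))) @ Vinv).real

grid = np.linspace(0.0, 10.0, 200)
k = max(np.linalg.norm(phi(s, 0.0), 2) for s in grid)

# The same bound holds regardless of the initial time t0:
for t0 in (0.0, 3.7, 12.0):
    assert all(np.linalg.norm(phi(t0 + s, t0), 2) <= k + 1e-9 for s in grid)
```

For a genuinely time-varying system this shortcut is not available, and whether k can be chosen independent of t0 is exactly the content of uniform stability.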
That convergence is attractivity, and it is pretty evident: if you write the solution x(t) = Phi(t, t0) x0 and Phi(t, t0) goes to 0 as time increases, then whatever the initial condition, the solution converges to 0; the initial condition is just a scaling constant. So this is attractivity, in fact global attractivity, but as I already said, the local versus global distinction is irrelevant in this linear context. If there are no questions we will conclude here. This is basically what we have for stability, and I believe from next time we will start talking about the Lyapunov theorems. Then we get to the crux of how to analyze stability for a nonlinear system without actually solving the system, which, as you can see, is very hard: even conditions like ||Phi(t, t0)|| <= k(t0), or <= k, are virtually impossible to say anything about without actually solving the system. So this is something you will have to do with the Lyapunov theorems, without which, for nonlinear systems, you cannot claim anything, except with the linearization methods, which are restrictive because they do not give you a basin of attraction. We will start with those from the next session.
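A quick numerical illustration of this point (not from the lecture, using a made-up diagonal system): when Phi decays to zero, every initial condition, however large, is driven to zero, because x0 enters only as a linear scaling:

```python
import numpy as np

# Phi(t, t0) = diag(exp(-(t - t0)), exp(-2(t - t0))) -> 0 as t -> infinity.
def phi(t, t0):
    return np.diag([np.exp(-(t - t0)), np.exp(-2.0 * (t - t0))])

rng = np.random.default_rng(1)
for _ in range(5):
    x0 = 100.0 * rng.standard_normal(2)               # even large initial conditions
    assert np.linalg.norm(phi(50.0, 0.0) @ x0) < 1e-10  # trajectory has decayed
```

This is the global attractivity the lecture mentions: the basin of attraction is all of R^n, which is why the local versus global distinction does not arise for linear systems.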