We are looking at convergence analysis of these methods. So A x = b is a set of linear algebraic equations, and what we have done so far is form an iteration scheme by splitting the matrix as A = S - T. Rearranging (S - T) x = b gives, in vector form, the general iteration scheme

x_{k+1} = S^{-1} T x_k + S^{-1} b.

Different choices of S and T give the Jacobi method, the Gauss-Seidel method, the relaxation method, and so on. When you actually reach the solution x* of A x = b, it satisfies x* = S^{-1} T x* + S^{-1} b. If I subtract this from the iteration equation, I get the error equation

e_{k+1} = S^{-1} T e_k,

where e_k = x_k - x* is the error, the distance from the true solution, and k is the index of iteration. So I start my iterations from some arbitrary x_0; the iteration equation gives x_1 from x_0, x_2 from x_1, and so on, and we get a sequence of vectors x_1, x_2, x_3, x_4 starting from x_0. The question is: under what conditions will this sequence converge to the true solution, that is, under what conditions will this error go to zero? That is what we are looking at. We have abstracted this by a general linear difference equation: this looks exactly like z_{k+1} = B z_k, and in my last lecture I derived conditions under which this equation behaves as we want, that is, z_k goes to zero as k goes to infinity, as k increases.
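As a concrete illustration of the scheme above, here is a minimal numerical sketch. It is not from the lecture: the small 2x2 system and the splitting S = diag(A) (the Jacobi choice) are assumptions made purely for the example.

```python
import numpy as np

# Iteration scheme x_{k+1} = S^{-1} T x_k + S^{-1} b for A x = b, with A = S - T.
# Illustrative 2x2 system; S is taken as the diagonal of A (Jacobi splitting).
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])

S = np.diag(np.diag(A))   # S = diag(A)
T = S - A                 # so that A = S - T

x_true = np.linalg.solve(A, b)   # reference solution x*

x = np.zeros(2)                  # arbitrary initial guess x_0
errors = []
for k in range(50):
    x = np.linalg.solve(S, T @ x + b)          # x_{k+1} = S^{-1} (T x_k + b)
    errors.append(np.linalg.norm(x - x_true))  # ||e_k|| = ||x_k - x*||

print(errors[0], errors[-1])   # the error shrinks toward zero
```

With this diagonally dominant A the error shrinks by roughly the same factor at every step, which is exactly the behaviour the error equation e_{k+1} = S^{-1} T e_k predicts.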
So I derived general conditions for this linear difference equation; we just map the specific error equation we are looking at onto the general equation z_{k+1} = B z_k, and the conditions are as follows. First let me define the spectral radius of a matrix B. If lambda_i denotes the i-th eigenvalue of B, then the spectral radius is

rho(B) = max over i of |lambda_i|,

the maximum over all the eigenvalues of the magnitude |lambda_i|; the maximum magnitude eigenvalue is called the spectral radius of B. The conditions we derived were these.

(1) If rho(B) is strictly less than 1, that is, if all the eigenvalues of B are strictly inside the unit circle, then ||z_k|| goes to zero as k goes to infinity, as k increases.

(2) If rho(B) = 1, that is, if some eigenvalues lie on the unit circle and none lie outside it, then ||z_k|| will not go to zero; instead it will remain bounded as k goes to infinity (assuming B is diagonalizable; a defective eigenvalue on the unit circle can still cause growth).

(3) If rho(B) is greater than 1, that is, if one or more eigenvalues lie outside the unit circle, then ||z_k|| tends to infinity.

What is important here is that without actually having to solve the difference equation for a specific z_0, just by looking at the eigenvalues, or the spectral radius, of B, we can tell how the asymptotic solution is going to behave: whether it converges to zero, stays bounded, or explodes. The spectral radius is what critically decides the asymptotic behavior.

So what is the implication when it comes to solving linear algebraic equations iteratively? Our iteration scheme gives the error equation e_{k+1} = S^{-1} T e_k, and the question is: will the distance from the true solution shrink, will you approach x* as the iteration index k increases? That depends on the spectral radius of S^{-1} T. If you apply the conditions above, what we want is the first case: the spectral radius of S^{-1} T should be strictly less than 1 in order that the iterations converge to the true solution. What is important here is that if I choose the S and T matrices such that this condition on the spectral radius is satisfied, then convergence will occur for any initial guess. See, we are going to use iterative methods for solving A x = b when A is a very large matrix, and it is very difficult to give a good initial guess because I do not know the solution. What this result tells you is that if the eigenvalues of S^{-1} T are inside the unit circle, then you are guaranteed to converge to the solution from whatever arbitrary initial guess you give. So the condition we get from this analysis for convergence of iterative schemes is that the spectral radius of S^{-1} T should be strictly less than 1; this is the convergence criterion, and this is what is important.

Before I move on, a small digression: this kind of equation also appears in other contexts in numerical analysis, and I just want to point to that before returning to the convergence analysis. In this particular case we
are getting these equations in the context of iterative schemes, where k is the index of iteration: starting from some initial guess we generate a new guess, and so on, and we are looking at the convergence of the iterations. In other contexts we get very similar equations, except that k there is not an iteration index; it could be time. For example, consider the differential equation

dx/dt = A x, with initial condition x(0) = x_0,

where A is an n x n matrix and x is a vector in R^n. Suppose I want to solve this equation using, say, the explicit Euler method, that is, the forward difference approximation. Applying it to this equation, I get

(x(t + h) - x(t)) / h = A x(t),

which I can rearrange and write as

x(t + h) = (I + h A) x(t).

Let us start writing the solution from time t = 0 with initial condition x_0: at t = h we get x(h) = (I + h A) x_0, then x(2h) = (I + h A) x(h), then x(3h) = (I + h A) x(2h), and you can just go on writing the solution of this difference equation; in general, at the k-th instant in time,

x((k + 1) h) = (I + h A) x(k h).

Now this difference equation is qualitatively the same as the equation we have been looking at, x_{k+1} = (some matrix) x_k, except that where we previously considered k to be the index of iteration, here I reinterpret k as time. If I want to analyze the behavior of this numerical scheme, I can look at the spectral radius of the matrix (I + h A). So this is a very fundamental equation and it appears in many different contexts. For example, suppose the original system is asymptotically stable, which means all the eigenvalues of the matrix A are in the left half plane and the solutions of the system asymptotically go to zero. The question is: will the numerical solution behave in the same way, and how do I choose the integration interval h so that the approximate solution constructed by the Euler method behaves like the analytical solution? We will be revisiting this question later in the course; I am just preempting that a very similar equation will arise at that point. You can look at the spectral radius of this particular matrix and comment upon whether the solution is going to go to zero or is going to explode. So this analysis is very generic.

Let us come back to our original case. As I said, we have been looking at the iterative scheme x_{k+1} = S^{-1} T x_k + S^{-1} b for solving A x = b, where we have written the matrix A as S - T, and we have been analyzing the error equation e_{k+1} = S^{-1} T e_k. Now it is nice that we got a condition that lets us comment on how the error, the distance from the true solution, is going to behave as a function of the iteration index k without having to solve this equation, just by looking at the spectral radius, the eigenvalues, of S^{-1} T. But is this something easy to calculate? It is a nice analytical tool: we got the insight that just by looking at eigenvalues, without actually
solving the set of equations, we can comment upon the behavior. But the difficulty now is that I have to compute the eigenvalues of this matrix. Just remember that A could be a very large matrix, and for a large matrix, computing S^{-1} T and then its eigenvalues is an even more complex problem than the original one. If I have the computational resources to do that, I might as well solve A x = b by a standard technique like Gaussian elimination rather than wasting my time computing eigenvalues just to see whether the convergence conditions are met or not. So this analysis is nice, it gives us a lot of insight, but we need something simpler, and this is where we are going to use induced matrix norms. Using the relationship between the spectral radius and the matrix norm, we are going to derive simpler conditions: if those conditions are satisfied, then we are guaranteed that the iterative method will converge.

In order to come up with these simpler conditions, I need to introduce the concept of induced matrix norms. Well, we have been looking at vector spaces, and I had mentioned that the set of all real n x m matrices forms a vector space: if I add two real matrices I get another matrix in the same set, and a scalar multiple of any matrix gives me another matrix in the same set. So this is a vector space, and I can define a norm on it. There are various ways of defining a norm on this particular space; I am interested in a particular family of norms called induced matrix norms. We will denote the matrix norm by the symbol ||A||; it is a map from R^{n x m}, the space of n x m matrices, to R_+, the set of nonnegative real numbers, because a norm has to be a nonnegative value. The induced matrix norm is defined as

||A|| = max over x not equal to 0 of ||A x|| / ||x||,

the maximum of this particular ratio over all nonzero vectors x. In some sense you can look at this as the gain, or the amplification power, of the matrix. See, if I write A x = y, then y is the vector obtained when the operator A transforms x. When a matrix operates on x it can do various things: it can rotate the vector (if A is an invertible matrix, in general this will be a rotated vector), it can stretch it, contract it, reverse its direction, reflect it, or project it; the matrix A is like an operator which operates on x and gives you y. In control system terminology, x is like the input and y is the output. And then we could ask: what is ||A x|| divided by ||x||? We could look at this particular quantity, and I am
interested in finding the maximum value which bounds this ratio; that maximum value is called the norm. Of course, I want it in such a way that for some x the ratio is actually attained, which means I do not want a bound which is not reachable: for some value of x equality holds, and for the remaining values inequality holds, and I want to find out that number. So ||A|| is nothing but a tight bound on this ratio, and you can view it as the amplification factor, or amplification power, of the matrix.

Now why is it an induced norm, what is special about that? See, we have a domain and a range: I have defined a norm on the domain and a norm on the range, and the definition of the norm on the domain together with the definition of the norm on the range define, or induce, the matrix norm. For example, A x is an element of the range space, and there could be a different norm associated with the range space: I might be using the 1 norm in the range space and the 2 norm in the domain, and then the resulting induced norm, induced by the norms defined on the domain and the range space, could be called the 1-2 norm of A. Now these kinds of mixed definitions are rarely used in practice; we do not do this. I am just telling you that it is possible, not that it is actually used. What we normally do is use the same norm on both spaces. For example, if I use the 2 norm on both, then

||A||_2 = max over x not equal to 0 of ||A x||_2 / ||x||_2,

which would be called the two norm of the matrix A. Similarly I could define the one norm, the maximum over x not equal to 0 of ||A x||_1 / ||x||_1, and the infinity norm, with the infinity norm in both the numerator and the denominator. So in particular we are going to use three different norms in this course: the 1 norm, the 2 norm, and the infinity norm, and the relationship between these norms and the spectral radius will help us decide the asymptotic behavior of solutions.

So far so good; we have defined this norm. The question is: how do I compute it? If I can compute a norm very easily, then it makes sense to use the norm instead of the spectral radius as a measure of convergence of iterative schemes. What I am going to do first is find a way of computing the 2 norm. That is more for pedagogical reasons, to give you some insight into how a norm is computed; the problem with the 2 norm, as we will see, is that we will again end up with eigenvalues, so it is again not convenient. Afterwards we will move on to the 1 norm and the infinity norm, which will be used subsequently in the analysis.

So I want to compute the 2 norm of the matrix. What I am going to do is square the defining ratio on both sides, so the square of the two norm of A
is equal to the maximum over x not equal to 0 of ||A x||_2^2 / ||x||_2^2. Now, A x is a vector, and the square of the 2 norm of a vector is given by (A x)^T (A x), so the ratio I want to maximize is

(A x)^T (A x) / (x^T x) = x^T A^T A x / (x^T x).

Before we move to actually calculating the 2 norm, let me state a few properties of norms: when you call a function a norm, what are the conditions it should meet? We need to examine whether our ratio qualifies to be called a norm. The first axiom that a norm function should satisfy is that ||A|| > 0 if A is not the null matrix (the zero vector in the set of matrices is the null matrix), and ||A|| = 0 if A is the null matrix. The second condition the function should satisfy is that ||alpha A|| = |alpha| ||A||, where alpha is an arbitrary scalar from the scalar field under consideration. The third condition is the triangle inequality: if you take any two matrices A and B from this space of n x m matrices, then

||A + B|| <= ||A|| + ||B||.

Starting from the definition, it is very easy to show that all three axioms hold, so the given definition of the induced norm indeed satisfies all three properties and is indeed a norm. And then there is one more property, followed by all induced matrix norms, which is additional and not part of the axioms: if I have a product of two matrices A and B, then

||A B|| <= ||A|| ||B||.

This is particularly useful for square matrices; for arbitrary rectangular matrices the product is only defined when the dimensions are compatible.

Before I actually do the computations for the 2 norm, I want to give you a little reinterpretation of the definition. We said that the ratio ||A x|| / ||x|| is bounded by ||A||. Now ||x|| is a positive scalar, since x is not equal to 0, so I can always rewrite the bound as

||A (x / ||x||)|| <= ||A||.

What is x / ||x||? If I call it x_hat, it is a unit vector: ||x_hat|| = 1 for any nonzero x. So I can redefine the norm of the matrix A as

||A|| = max over ||x_hat|| = 1 of ||A x_hat||,

the maximum over all unit vectors x_hat (the restriction x not equal to 0 is implicit here; the origin is excluded). If you take the unit circle in two dimensions, in the (x_1, x_2) plane, all nonzero vectors map onto this circle, since any nonzero x can be normalized to the unit vector x / ||x||.
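This maximum-over-unit-vectors picture can be checked numerically. Here is a small sketch of my own (the matrix is an illustrative assumption): NumPy's np.linalg.norm computes the induced 1, 2, and infinity norms of a matrix directly, and sampling unit vectors on the circle shows ||A x_hat||_2 never exceeding the 2 norm.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # illustrative matrix

one_norm = np.linalg.norm(A, 1)       # induced 1 norm: max absolute column sum
two_norm = np.linalg.norm(A, 2)       # induced 2 norm: largest singular value
inf_norm = np.linalg.norm(A, np.inf)  # induced infinity norm: max absolute row sum

# Sample unit vectors x_hat on the unit circle and evaluate ||A x_hat||_2.
thetas = np.linspace(0.0, 2.0 * np.pi, 2000)
gains = [np.linalg.norm(A @ np.array([np.cos(t), np.sin(t)])) for t in thetas]

print(one_norm, two_norm, inf_norm)
print(max(gains))   # approaches ||A||_2 from below as the sampling gets finer
```

The sampled maximum is attained near the direction of the leading right singular vector of A, which is exactly the "point on the unit circle where the gain is largest" interpretation.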
And then, and this is very easy to interpret, we are looking for the point on this unit circle where the ratio is maximum: as you move along the circle, at what point does ||A x_hat|| in the range space attain its maximum? That is another way of interpreting the induced matrix norm.

Now let us move on to computing the 2 norm. For computing the 2 norm, what I had said was that ||A x||_2^2 / ||x||_2^2 can be written as

(A x)^T (A x) / (x^T x) = x^T (A^T A) x / (x^T x),

and the question is how to find the maximum of this ratio. What I am going to do now is use some algebraic tricks. First of all, the matrix A^T A is always a symmetric matrix: its transpose is itself. Will it always be positive definite, or only positive semi-definite? Recall how positive definiteness is defined: calling this matrix B, we need x^T B x >= 0 for all x for positive semi-definiteness. Look here: x^T (A^T A) x = (A x)^T (A x) is a vector transposed times itself, so this numerator is always greater than or equal to zero; it can be zero (even if x is nonzero, when x lies in the null space of A) or positive, but it can never be negative. That is why A^T A is always at least a positive semi-definite matrix; whether it is positive definite depends on the dimensions m and n and on the rank: it is positive definite when A has full column rank, so that the null space of A contains only the zero vector. So A^T A is symmetric and positive semi-definite.

Let us take the special case where A^T A is a positive definite matrix; it is a symmetric matrix, and let us take the case when it is positive definite. For the sake of convenience one might want to make the additional assumption that A^T A has linearly independent eigenvectors, but is this assumption really required? No: because A^T A is symmetric, its eigenvectors are automatically linearly independent, and in fact something more holds: they are orthogonal, and you can choose them to be orthonormal. So for a symmetric positive definite matrix the eigenvectors are linearly independent and orthogonal, which is a much stronger property. So I can diagonalize this matrix: I want to split A^T A as

A^T A = Psi Lambda Psi^{-1}.

What is this Psi matrix? It consists of the eigenvectors of A^T A as its columns: say v_1 is an eigenvector of A^T A, v_2 is an eigenvector of A^T A, and so on, n eigenvectors in all. And Lambda is the diagonal matrix with the eigenvalues lambda_1, ..., lambda_n on the diagonal. And in fact, as I said, these eigenvectors of A^T A are orthogonal.
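This orthogonality is easy to verify numerically. A small sketch, with an illustrative matrix of my own choosing: np.linalg.eigh, which is meant for symmetric matrices, returns the eigenvectors as the columns of an orthonormal matrix, so stacking them as Psi gives Psi^T Psi = I.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # illustrative 3x2 matrix with full column rank

M = A.T @ A                   # A^T A: symmetric, here positive definite

eigvals, Psi = np.linalg.eigh(M)   # eigh: orthonormal eigenvectors of a symmetric matrix

print(eigvals)                                          # all strictly positive
print(np.allclose(Psi.T @ Psi, np.eye(2)))              # columns orthonormal: Psi^T Psi = I
print(np.allclose(M, Psi @ np.diag(eigvals) @ Psi.T))   # M = Psi Lambda Psi^T
```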
And it follows, because the eigenvectors are orthogonal and we choose them to be orthonormal, that Psi^{-1} is nothing but Psi^T. With that property I can write

A^T A = Psi Lambda Psi^T.

If you want a quick derivation of this, it would go something like this. Let A^T A v_1 = lambda_1 v_1 be the first eigenvalue-eigenvector pair of A^T A, A^T A v_2 = lambda_2 v_2 the second, and so on. I can combine all these equations into one single matrix equation by keeping the vectors next to each other as columns:

A^T A [v_1 v_2 ... v_n] = [v_1 v_2 ... v_n] diag(lambda_1, ..., lambda_n),

that is, A^T A Psi = Psi Lambda. Since I have chosen the eigenvectors to be orthonormal, Psi is invertible with Psi^{-1} = Psi^T, and I can rearrange this equation to get A^T A = Psi Lambda Psi^{-1} = Psi Lambda Psi^T, where Lambda is my diagonal matrix of eigenvalues. So this derivation is very straightforward.

Having done this, I am going to use it to arrive at the 2 norm of the matrix. Coming back to the 2 norm definition, we have the ratio x^T (A^T A) x / (x^T x); what I am going to do here is replace A^T A by Psi Lambda Psi^T, so the square of the 2 norm is the maximum of

x^T Psi Lambda Psi^T x / (x^T x).

Now I am going to define a transformation z = Psi^T x; using it, the numerator becomes z^T Lambda z. Then I am going to play one more trick: I know that Psi Psi^T = I (equivalently Psi^T Psi = I), and using this identity I can write the denominator as

x^T x = x^T Psi Psi^T x = z^T z.

So with this transformation, the square of my 2 norm becomes the maximum of

z^T Lambda z / (z^T z).

Now Lambda is a diagonal matrix, so this ratio is nothing but

(lambda_1 z_1^2 + lambda_2 z_2^2 + ... + lambda_n z_n^2) / (z_1^2 + z_2^2 + ... + z_n^2).

So I have expressed the square of the norm of A as a ratio of two quadratics. What you can see very easily is that the numerator is always a positive number, because A^T A is a positive definite matrix and all its eigenvalues are positive, and the denominator is a positive number as well. Now let us order the eigenvalues: we can number them any way we want, so I am numbering them in such a way that lambda_1 is
the largest magnitude eigenvalue. What I can show then is that if I replace lambda_2 by lambda_1, lambda_3 by lambda_1, lambda_4 by lambda_1, and so on in the numerator, the ratio can only increase, so this inequality will always hold:

(lambda_1 z_1^2 + lambda_2 z_2^2 + ... + lambda_n z_n^2) / (z_1^2 + ... + z_n^2) <= lambda_1 (z_1^2 + ... + z_n^2) / (z_1^2 + ... + z_n^2) = lambda_1.

So lambda_1 is an upper bound on this ratio. What we have effectively shown, if I just collect everything, is that

||A||_2^2 = max of (A x)^T (A x) / (x^T x) = max of z^T Lambda z / (z^T z) <= lambda_1,

where z = Psi^T x and Psi consists of the orthonormal eigenvectors of A^T A, and lambda_1 is the largest magnitude eigenvalue of A^T A. And it turns out that for a particular value of z the bound is attained (take z_1 = 1 and all other components zero, that is, x equal to the eigenvector v_1), so we can actually show equality. Therefore what we have effectively proved is that

||A||_2 = square root of lambda_max(A^T A),

the square root of the maximum magnitude eigenvalue of A^T A; this quantity is, in other words, the largest singular value of A. The equality holds whether A^T A is positive definite or only positive semi-definite; I have just done everything for the simpler case.
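The conclusion ||A||_2 = sqrt(lambda_max(A^T A)) = sigma_max(A) can be confirmed numerically. A quick check of my own (the matrix is an illustrative assumption, and being 2x3 it also exercises the semi-definite case, since A^T A is then a rank-2 3x3 matrix):

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [1.0,  3.0, 1.0]])   # illustrative 2x3 matrix

lam_max = max(np.linalg.eigvalsh(A.T @ A))            # largest eigenvalue of A^T A
two_norm = np.linalg.norm(A, 2)                       # induced 2 norm
sigma_max = max(np.linalg.svd(A, compute_uv=False))   # largest singular value

print(np.sqrt(lam_max), two_norm, sigma_max)   # all three agree
```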
So this is what we have proved: the 2 norm of a matrix can be computed as the square root of the maximum eigenvalue of A^T A, which is the maximum singular value of A. But again there is a problem: we wanted something computationally simple, and again we are stuck with an eigenvalue computation. So this is not convenient, and we will move on in the next lecture to the 1 norm and the infinity norm, which are more convenient. Nevertheless, this derivation gives insight into how a norm can be computed, and that is why, more for pedagogical reasons, I have done it; the more useful tools are the 1 norm and the infinity norm, which we will discuss in the next lecture.
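As a preview of why those norms will be useful: for any induced norm, the spectral radius satisfies rho(B) <= ||B|| (if B v = lambda v, then |lambda| ||v|| = ||B v|| <= ||B|| ||v||), so ||S^{-1} T|| < 1 in any cheap-to-compute norm is already a sufficient condition for convergence. A sketch under my own illustrative numbers:

```python
import numpy as np

# For the Jacobi splitting of a diagonally dominant A, the infinity norm of
# the iteration matrix B = S^{-1} T is cheap (max absolute row sum) and
# bounds the spectral radius from above.
A = np.array([[5.0, 1.0, 2.0],
              [1.0, 4.0, 1.0],
              [2.0, 1.0, 6.0]])

S = np.diag(np.diag(A))
B = np.linalg.solve(S, S - A)         # iteration matrix B = S^{-1} T

rho = max(abs(np.linalg.eigvals(B)))  # spectral radius (expensive in general)
inf_norm = np.linalg.norm(B, np.inf)  # max absolute row sum (cheap)

print(rho, inf_norm)   # rho <= inf_norm < 1, so convergence is guaranteed
```

Here the norm check certifies convergence without ever computing an eigenvalue, which is exactly the simplification the next lecture develops.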