Now, let us look at what happens for diagonalizable matrices. First of all, we have the following fact. Suppose A is a diagonalizable matrix, A = SΛS⁻¹ with Λ diagonal, and suppose E is an n × n perturbation matrix. If λ̂ is an eigenvalue of A + E, then there is some eigenvalue λ_i of A for which |λ̂ − λ_i| ≤ ‖S‖_∞ ‖S⁻¹‖_∞ ‖E‖_∞, which, as we defined it earlier, is κ_∞(S) ‖E‖_∞, where κ_∞(S) is the condition number of S under the ∞-norm. So we see the condition number showing up when we try to analyze the stability of eigenvalue computations. But what matters here, in the case of diagonalizable matrices, is the condition number of S, the matrix that diagonalizes A, not the condition number of A itself. The proof is short. We know that A + E and S⁻¹(A + E)S have the same eigenvalues, and S⁻¹(A + E)S = Λ + S⁻¹ES. Now, Λ is a diagonal matrix, so by Gershgorin's theorem, if λ̂ is an eigenvalue of A + E, then there is some λ_i such that |λ̂ − λ_i| ≤ ‖S⁻¹ES‖_∞, and the result follows from the submultiplicativity property of the matrix norm. That is it. We can actually extend this result to a more general class of norms, namely norms satisfying the property that ‖D‖ equals the largest diagonal entry in magnitude whenever D is diagonal. Some examples of such norms are the L1 norm, the L2 norm and the L∞ norm. The extension is as follows.
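As a minimal numerical sketch of this bound (the Bauer–Fike theorem in the ∞-norm), we can check it with numpy; the matrix A and the perturbation E below are made up purely for illustration:

```python
# Sketch: verify |lambda_hat - lambda_i| <= kappa_inf(S) * ||E||_inf
# for an illustrative diagonalizable A and a small random perturbation E.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])          # distinct eigenvalues 4, 2, 1 => diagonalizable

eigvals, S = np.linalg.eig(A)            # A = S @ diag(eigvals) @ inv(S)
S_inv = np.linalg.inv(S)

E = 1e-3 * rng.standard_normal((3, 3))   # small perturbation

# infinity-norm condition number of the eigenvector matrix S
kappa_inf = np.linalg.norm(S, np.inf) * np.linalg.norm(S_inv, np.inf)
bound = kappa_inf * np.linalg.norm(E, np.inf)

# every eigenvalue of A + E must lie within `bound` of SOME eigenvalue of A
perturbed = np.linalg.eigvals(A + E)
worst = max(min(abs(mu - lam) for lam in eigvals) for mu in perturbed)
print(worst <= bound)   # True
```

Note that the bound holds for any choice of eigenvector matrix S; numpy happens to return eigenvectors scaled to unit length, which is one valid choice.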
So, suppose A is a diagonalizable n × n matrix, written as A = SΛS⁻¹, where Λ is a diagonal matrix containing the eigenvalues of A on the diagonal. Let E be an n × n perturbation matrix, and let ‖·‖ be a matrix norm such that ‖D‖ equals the largest diagonal entry in magnitude for all diagonal matrices D. If λ̂ is an eigenvalue of A + E, then there is some eigenvalue λ_i of A such that |λ̂ − λ_i| ≤ κ(S) ‖E‖, where κ is the condition number with respect to this particular matrix norm. Let us see how to show this. The starting point is the same as that of the previous result: S⁻¹(A + E)S = Λ + S⁻¹ES. Now, if λ̂ is an eigenvalue of Λ + S⁻¹ES, what can we say about the matrix λ̂I − Λ − S⁻¹ES? Eigenvalues satisfy det(λI − A) = 0, so det(λ̂I − Λ − S⁻¹ES) = 0; that is, this matrix is singular. Now, if λ̂ = λ_i for some i, then there is nothing to prove, since the inequality is trivially satisfied. So we can safely assume that λ̂ ≠ λ_i for every i, so that λ̂I − Λ is non-singular. Then consider the matrix (λ̂I − Λ)⁻¹(λ̂I − Λ − S⁻¹ES). Expanding this out gives I − (λ̂I − Λ)⁻¹S⁻¹ES, and this matrix is singular, being the product of an invertible matrix and a singular one.
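The singularity claim above is easy to see numerically: since A + E and Λ + S⁻¹ES are similar, an eigenvalue λ̂ of A + E makes λ̂I − Λ − S⁻¹ES singular, so its smallest singular value is (numerically) zero. A small sketch, with A and E made up for the demo:

```python
# Sketch: lambda_hat * I - Lambda - S^{-1} E S is singular when
# lambda_hat is an eigenvalue of A + E (similarity preserves eigenvalues).
import numpy as np

A = np.array([[4.0, 1.0],
              [0.0, 2.0]])
E = np.array([[0.01, 0.00],
              [0.02, -0.01]])

eigvals, S = np.linalg.eig(A)
S_inv = np.linalg.inv(S)
Lam = np.diag(eigvals)

lam_hat = np.linalg.eigvals(A + E)[0]    # pick one eigenvalue of A + E

M = lam_hat * np.eye(2) - Lam - S_inv @ E @ S
sigma_min = np.linalg.svd(M, compute_uv=False)[-1]   # smallest singular value
print(sigma_min < 1e-10)   # True: M is numerically singular
```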
Now, recall a result we showed a long time ago: A ∈ ℂ^{n×n} is invertible if there is a matrix norm such that ‖I − A‖ < 1. What this means, by the contrapositive, is that if A is singular, then ‖I − A‖ ≥ 1 for every matrix norm; I will just write that here: A singular implies ‖I − A‖ ≥ 1 for any norm. By the way, how did we show this result? Just to recall: if ‖I − A‖ < 1, then we consider the series ∑_{k=0}^∞ (I − A)^k. This converges to some matrix C, because the power series ∑ z^k has radius of convergence 1. Then we look at A ∑_{k=0}^n (I − A)^k, write it as (I − (I − A)) ∑_{k=0}^n (I − A)^k, and when you expand this out it becomes a telescoping sum, leaving only the first and last terms: I − (I − A)^{n+1}. Since ‖I − A‖ < 1, the spectral radius of I − A is less than 1, so (I − A)^{n+1} converges to the all-zero matrix as n → ∞, and the whole expression goes to the identity matrix. We conclude that A is invertible with C = A⁻¹. This was just an aside to recall how it goes; now let us come back to the proof we were writing out.
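The Neumann-series argument just recalled can also be checked numerically; here is a sketch, with a made-up A satisfying the hypothesis ‖I − A‖_∞ < 1:

```python
# Sketch of the Neumann series: if ||I - A|| < 1 in some matrix norm,
# then sum_{k>=0} (I - A)^k converges, and its limit C equals A^{-1}.
import numpy as np

A = np.array([[1.2, 0.1],
              [0.0, 0.9]])
I = np.eye(2)
B = I - A
print(np.linalg.norm(B, np.inf) < 1)      # True: hypothesis ||I - A||_inf < 1 holds

C = np.zeros_like(A)
term = I.copy()
for _ in range(200):                      # partial sums sum_{k=0}^{199} (I - A)^k
    C += term
    term = term @ B                       # next power (I - A)^k

print(np.allclose(C, np.linalg.inv(A)))   # True: the series converges to A^{-1}
```

The loop bound of 200 terms is arbitrary; since ‖I − A‖_∞ = 0.3 here, the remainder shrinks geometrically and the partial sum is already at machine precision long before that.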
So we will apply this result to the matrix I − (λ̂I − Λ)⁻¹S⁻¹ES, which we showed is singular. Thus ‖I − (I − (λ̂I − Λ)⁻¹S⁻¹ES)‖ = ‖(λ̂I − Λ)⁻¹S⁻¹ES‖ ≥ 1, and it does not matter which norm we pick. So we have 1 ≤ ‖(λ̂I − Λ)⁻¹S⁻¹ES‖. Actually, what I will do is simplify this a bit: using submultiplicativity, this is at most ‖(λ̂I − Λ)⁻¹‖ ‖S⁻¹ES‖. Now I will use the property that this norm returns the largest diagonal entry in magnitude whenever the matrix is diagonal, and (λ̂I − Λ)⁻¹ is a diagonal matrix, so ‖(λ̂I − Λ)⁻¹‖ = max_{1≤i≤n} |λ̂ − λ_i|⁻¹ = 1 / min_{1≤i≤n} |λ̂ − λ_i|; it is the largest-magnitude diagonal entry. Taking the minimum to the other side, we get min_{1≤i≤n} |λ̂ − λ_i| ≤ ‖S⁻¹ES‖, and again by submultiplicativity, ‖S⁻¹ES‖ ≤ ‖S⁻¹‖ ‖E‖ ‖S‖ = κ(S) ‖E‖, which is what we wanted to show. So what we have done is show the importance of the condition number with respect to finding eigenvalues of a matrix. But there is an important difference between what we saw just now and what we saw earlier when we were looking at the importance
of the condition number in solving linear systems of equations. When you are solving Ax = b, it is the condition number of A, κ(A), that mattered; here it is the condition number of S, κ(S), that matters, not κ(A) directly. Of course, S depends on A, since S is the matrix that diagonalizes A, but it is κ(S) that matters, not κ(A) directly. Therefore, if κ(S) is a small number, then small changes in A lead to small changes in the eigenvalues, but if κ(S) is large, then small changes in A could lead to large changes in the eigenvalues. In particular, if S is unitary, then the condition number of S with respect to the spectral norm is equal to 1, and in this case the eigenvalues of A are well conditioned, because κ(S) = 1. Also recall that a matrix A can be unitarily diagonalized if and only if it is a normal matrix. So we conclude, and I will just write it this way: first, normal matrices can be unitarily diagonalized; second, unitary matrices have condition number equal to 1 with respect to the spectral norm; which implies that normal matrices are perfectly conditioned with respect to eigenvalue computation. So we have the following corollary. Let A ∈ ℂ^{n×n} be a normal matrix with eigenvalues λ_1, …, λ_n, and let E be an n × n matrix. If λ̂ is an eigenvalue of A + E, then there is some eigenvalue λ_i of A for which |λ̂ − λ_i| ≤ ‖E‖_2, the spectral norm of E. Now, in the case where A and A + E are both Hermitian matrices, we can actually use Weyl's theorem to get an even better bound; that is the next theorem. Suppose A and E ∈ ℂ^{n×n} are Hermitian matrices. (A question came up here: are Hermitian matrices normal? Yes, they are. Are normal matrices Hermitian? They need not be, correct.) So, if A and E are both Hermitian, and λ_1 ≤ λ_2 ≤ … ≤ λ_n are the ordered
eigenvalues of A, and λ̂_1 ≤ λ̂_2 ≤ … ≤ λ̂_n are the ordered eigenvalues of A + E, then λ_1(E) ≤ λ̂_k − λ_k ≤ λ_n(E), and this is true for k = 1, 2, …, n. Consequently, |λ̂_k − λ_k| ≤ ρ(E), the spectral radius of E, which in this case, because E is Hermitian, equals the spectral norm ‖E‖_2. So this is a better bound compared to the bounds we have seen earlier, because it is really comparing the k-th eigenvalue of A + E with the k-th eigenvalue of A; it tells us which eigenvalue of A each λ̂_k will be close to. Now, there are a couple of different paths I can take from here, and I have yet to decide what I will cover in the remainder of this course, so I would like to stop here for today; I will figure out what I want to do next and continue in the next class.
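To close, here is a numerical sketch of this last (Weyl) bound; the Hermitian matrices A and E below are randomly generated for illustration, and numpy's eigvalsh is used since it returns eigenvalues in ascending order, matching the theorem's ordering convention:

```python
# Sketch of Weyl's bound for Hermitian A and E:
#   lambda_1(E) <= lambda_hat_k - lambda_k <= lambda_n(E)  for every k,
# hence |lambda_hat_k - lambda_k| <= rho(E) = ||E||_2.
import numpy as np

rng = np.random.default_rng(2)

B = rng.standard_normal((5, 5))
A = B + B.T                          # real symmetric, hence Hermitian
C = rng.standard_normal((5, 5))
E = 1e-2 * (C + C.T)                 # Hermitian perturbation

lam = np.linalg.eigvalsh(A)          # ascending order
lam_hat = np.linalg.eigvalsh(A + E)
lam_E = np.linalg.eigvalsh(E)

diffs = lam_hat - lam                # k-th perturbed minus k-th original
tol = 1e-12                          # slack for floating-point roundoff

print(np.all(diffs >= lam_E[0] - tol) and np.all(diffs <= lam_E[-1] + tol))  # True
print(np.max(np.abs(diffs)) <= np.linalg.norm(E, 2) + tol)                   # True
```

Unlike the earlier bounds, this pairs each λ̂_k with λ_k of the same index, which is exactly the extra information the Hermitian structure buys.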