So, last time we discussed induced norms and also introduced the spectral radius, and today we will continue this discussion about the spectral radius. Just to recall: the spectral radius rho(A) is defined to be the maximum magnitude of the eigenvalues of a matrix A. We also saw that the spectral radius is a lower bound for any matrix norm: if ||.|| is a matrix norm, then rho(A) <= ||A|| holds for any matrix A. So the spectral radius has this universality property: regardless of which matrix norm you choose, when you evaluate that norm on a matrix A, you always get at least the spectral radius of A. However, the spectral radius itself is not a norm. So, we were looking at some other properties of this spectral radius. At the end of the previous class, we were trying to show the following lemma: let A be in C^(n x n) and epsilon > 0. Then there exists a matrix norm ||.|| such that rho(A) <= ||A|| <= rho(A) + epsilon. In other words, we can find a norm under which the norm of this particular matrix is as close to its spectral radius as you wish. Notice that this is for one specific matrix: if I take a different matrix B, it is not generally true that ||B|| will also lie within epsilon of rho(B) under this same norm. Only for this one matrix A can you find a matrix norm such that ||A|| is between rho(A) and rho(A) + epsilon. For the proof, I said that we use the Schur triangularization theorem, which we will prove much later in the course. The first step is this: for any matrix A in C^(n x n) with eigenvalues lambda_1, ..., lambda_n, there exists a unitary U in C^(n x n) and an upper triangular Delta with Delta_ii = lambda_i such that A = U^H Delta U.
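As a quick numerical sanity check of the Schur triangularization statement, here is a small sketch in Python using NumPy and SciPy; the 3 x 3 matrix A below is hypothetical, chosen only for illustration. Note that SciPy returns the factorization as A = Z T Z^H, so in the lecture's notation U = Z^H and Delta = T.

```python
import numpy as np
from scipy.linalg import schur

# A hypothetical 3x3 matrix, chosen only for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

# SciPy's complex Schur form gives A = Z @ T @ Z^H with T upper triangular
# and Z unitary; in the lecture's notation U = Z^H and Delta = T.
T, Z = schur(A, output='complex')
U = Z.conj().T

print(np.allclose(U.conj().T @ T @ U, A))      # A = U^H Delta U
print(np.allclose(U.conj().T @ U, np.eye(3)))  # U is unitary
# The diagonal of Delta carries the eigenvalues, so the largest
# diagonal entry in magnitude equals the spectral radius rho(A).
print(np.isclose(np.max(np.abs(np.diag(T))),
                 np.max(np.abs(np.linalg.eigvals(A)))))
```

All three checks print True for any square A, since the decomposition exists for every complex square matrix.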
Such a decomposition is possible for any n x n matrix A: you can always find a unitary U and an upper triangular Delta satisfying these properties. Now, we set D_t to be the diagonal matrix with t, t^2, up to t^n on the diagonal and zeros everywhere else. Then for such a matrix let us compute D_t Delta D_t^{-1}, where Delta is the upper triangular matrix you get from the Schur decomposition of A. Since I went through it a little quickly in the previous class, let me take a 3 x 3 example and show how this works out. So D_t is the diagonal matrix with entries t, t^2, t^3. The Delta matrix has lambda_1, lambda_2, lambda_3 on the diagonal; it is upper triangular, so it has zeros below the diagonal, and say it has delta_12, delta_13 and delta_23 above the diagonal. When you invert a diagonal matrix, all the diagonal entries get inverted, so D_t^{-1} has t^{-1}, t^{-2}, t^{-3} on the diagonal and zeros everywhere else. Multiplying Delta by D_t^{-1} first, the first row becomes lambda_1 t^{-1}, delta_12 t^{-2}, delta_13 t^{-3}; the second row becomes 0, lambda_2 t^{-2}, delta_23 t^{-3}; and the third row is 0, 0, lambda_3 t^{-3}. Completing the multiplication with D_t on the left, the first row becomes lambda_1, delta_12 t^{-1}, delta_13 t^{-2}; the second row becomes 0, lambda_2, delta_23 t^{-1}; and the third row is 0, 0, lambda_3. So you can see that when you do this operation, D_t Delta D_t^{-1} retains lambda_1, lambda_2, lambda_3 on the diagonal.
On the first off-diagonal, that is the superdiagonal, the entries all get multiplied by t^{-1}; the second off-diagonal gets multiplied by t^{-2}, and so on. So in general, D_t Delta D_t^{-1} has lambda_1, lambda_2, up to lambda_n on the diagonal and zeros below it; the first row above the diagonal is t^{-1} delta_12, all the way up to t^{-(n-1)} delta_1n, and the last superdiagonal entry is t^{-1} delta_{n-1,n}. You can fill in the rest of the entries in the same pattern: all the first superdiagonal entries get multiplied by t^{-1}, the next diagonal by t^{-2}, and so on up to t^{-(n-1)}. So basically, if I choose t very large, all these off-diagonal terms can be made as small as we like. Specifically, we will choose t large enough that the sum of the absolute values of all the off-diagonal terms is at most epsilon. What this means is the following. Consider the L1 norm of D_t Delta D_t^{-1}. What is the L1 norm? It is the maximum column sum norm: take the sum of the magnitudes of the entries in every column and see which column gives the biggest number. The first column contributes |lambda_1|. The second column contributes |lambda_2| + |t^{-1} delta_12|; but the sum of the magnitudes of all the off-diagonal entries is at most epsilon, so this single off-diagonal term cannot exceed epsilon in magnitude, and the column sum is at most |lambda_2| + epsilon. Similarly, the sum for the third column is at most |lambda_3| + epsilon, because its off-diagonal part is the sum of the magnitudes of just two terms, whereas we have already made sure the sum of the magnitudes of all the off-diagonal terms is at most epsilon.
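The scaling pattern above is easy to check numerically. Here is a minimal sketch; the upper triangular Delta and the value of t are hypothetical, chosen only to make the pattern visible:

```python
import numpy as np

# Hypothetical 3x3 upper triangular Delta, as in the lecture's example.
Delta = np.array([[1.0, 5.0, 7.0],
                  [0.0, 2.0, 6.0],
                  [0.0, 0.0, 3.0]])

t = 10.0
Dt = np.diag([t, t**2, t**3])          # D_t = diag(t, t^2, t^3)
M = Dt @ Delta @ np.linalg.inv(Dt)     # M = D_t Delta D_t^{-1}

# The diagonal is unchanged; entry (i, j) becomes delta_ij * t^(i-j),
# so the k-th superdiagonal is scaled by t^{-k}.
print(np.round(M, 6))
```

With t = 10, the superdiagonal entries 5 and 6 become 0.5 and 0.6, and the corner entry 7 becomes 0.07; increasing t shrinks the off-diagonal part further while the diagonal stays fixed.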
So, the maximum column sum is at most max_i |lambda_i| + epsilon, and the maximum of all these |lambda_i| is rho(A) by definition; so this is at most rho(A) + epsilon. In other words, one of these columns contains the largest eigenvalue in magnitude, and that largest eigenvalue plus the sum of the magnitudes of all the off-diagonal terms is an upper bound on the maximum column sum norm of D_t Delta D_t^{-1}. That was the key step in the proof. Now the rest of it is connecting the dots. We define the norm ||B|| := ||D_t U B U^H D_t^{-1}||_1, the maximum column sum norm of D_t U B U^H D_t^{-1}. We have already seen that if I take any invertible matrix S, then B maps to ||S^{-1} B S|| is also a valid matrix norm. The L1 norm, this maximum column sum norm, is a matrix norm, and the matrix U^H D_t^{-1} is invertible. So this is in fact a valid norm, and this is what I will define. [A student asks: Sir, can you please repeat why it is a valid norm?] So, we have already seen two things. First, the L1 norm is a matrix norm. Second, given any matrix norm, ||S^{-1} B S|| is also a valid matrix norm; the only requirement is that S should be nonsingular. But that is true here: taking S = U^H D_t^{-1}, which is a product of invertible matrices and hence invertible, we get S^{-1} B S = D_t U B U^H D_t^{-1}. This is the theorem we stated and proved in the previous class. So this is a valid norm. Then, if I compute the norm of A under this norm: for large enough t, we have ||A|| = ||D_t U A U^H D_t^{-1}||_1. Now, A itself I can write as U^H Delta U, so this equals ||D_t U U^H Delta U U^H D_t^{-1}||_1. But U U^H and U U^H are both the identity matrix.
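The claim that B maps to ||S^{-1} B S||_1 satisfies the matrix norm axioms can be spot-checked numerically. A minimal sketch, with a hypothetical nonsingular S and random test matrices (an illustration of the theorem quoted above, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonsingular S (shifting the diagonal keeps it comfortably invertible).
S = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
S_inv = np.linalg.inv(S)

def norm_S(B):
    # ||B||_S := ||S^{-1} B S||_1, where ||.||_1 is the maximum column sum norm.
    return np.linalg.norm(S_inv @ B @ S, 1)

# Spot-check the matrix norm axioms on random matrices.
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
tol = 1e-9
print(norm_S(A + B) <= norm_S(A) + norm_S(B) + tol)   # triangle inequality
print(norm_S(A @ B) <= norm_S(A) * norm_S(B) + tol)   # submultiplicativity
print(np.isclose(norm_S(2.5 * A), 2.5 * norm_S(A)))   # absolute homogeneity
print(norm_S(A) > 0)                                  # positivity for A != 0
```

The triangle inequality and submultiplicativity follow because the similarity transform distributes over sums and products: S^{-1}(A + B)S = S^{-1}AS + S^{-1}BS and S^{-1}(AB)S = (S^{-1}AS)(S^{-1}BS).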
So I am left with ||D_t Delta D_t^{-1}||_1, which is less than or equal to rho(A) + epsilon, as we just showed. And of course, for any matrix norm we have already seen that rho(A) is a lower bound on the norm. So we are done: the norm of A is between rho(A) and rho(A) + epsilon, which are the two things we wanted to show. This completes the proof we started in the previous class. So, now we will continue. One remark: what this result shows is that rho(A) is essentially the greatest lower bound for the values of all matrix norms of A. In other words, I can find a norm such that ||A|| is as close to rho(A) as I wish. So there is no way anybody can find some other quantity, say zeta(A), that is bigger than rho(A) and is still a lower bound for every possible matrix norm of A: rho(A) is the biggest lower bound one can place on all matrix norms of A.
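Putting the whole construction together, we can verify the lemma numerically for one particular matrix. A sketch with a hypothetical 2 x 2 matrix A and a hypothetical epsilon; the loop simply doubles t until the off-diagonal contribution drops below epsilon:

```python
import numpy as np
from scipy.linalg import schur

# Hypothetical test matrix and tolerance, for illustration only.
A = np.array([[0.0, 4.0],
              [1.0, 0.0]])
eps = 1e-3
rho = np.max(np.abs(np.linalg.eigvals(A)))    # spectral radius of A (here 2)

T, Z = schur(A, output='complex')             # A = Z T Z^H, T upper triangular
U = Z.conj().T                                # lecture notation: A = U^H T U

def custom_norm(B, t):
    # ||B|| := ||D_t U B U^H D_t^{-1}||_1 (induced 1-norm = max column sum).
    n = B.shape[0]
    Dt = np.diag([t ** (k + 1) for k in range(n)])
    M = Dt @ U @ B @ U.conj().T @ np.linalg.inv(Dt)
    return np.linalg.norm(M, 1)

# Increase t until the off-diagonal contribution drops below eps.
t = 1.0
while custom_norm(A, t) > rho + eps:
    t *= 2.0

print(rho <= custom_norm(A, t) <= rho + eps)  # the lemma's sandwich holds
```

The loop terminates because the off-diagonal part of D_t T D_t^{-1} decays like 1/t, so the column sums converge to the eigenvalue magnitudes; the final check confirms rho(A) <= ||A|| <= rho(A) + epsilon for this particular A and epsilon.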