So, then one small point: in order to solve the pivoted system, recall that our goal was actually to solve Ax = b in a computationally efficient manner. So what we will do is, if we have PAΠ = LU, then since P P^T = P^T P = I (this is a property of permutation matrices, and a similar thing holds for Π), pre-multiplying by P^T and post-multiplying by Π^T gives P^T L U Π^T = A, and so we can write P^T L U Π^T x = b. So what we will do is set y = U Π^T x and z = Π^T x. Then we solve Ly = Pb (multiplying both sides by P), then we solve Uz = y, and finally we compute x = Πz. So basically, if rows of A are permuted, then the corresponding entries of b must be exchanged; that is what Pb does. If columns of A are permuted, then the corresponding elements of x have to be exchanged; that is what Πz does. So this is how you solve the pivoted system. Now, the last point that we have not discussed is: what happens if A is rank-deficient? The point is that the pivot elements are guaranteed to be nonzero only if A is full-rank; if A is rank-deficient, then the pivot element will go to zero at some point. So if rank(A) = r, with r less than n, then at the beginning of the (r+1)-th step, the submatrix of A_r consisting of rows r+1 through n and columns r+1 through n will become the all-zero matrix. And then we can simply set P_k = Π_k = M_k = I for k = r+1 all the way up to n. So once we see that this submatrix has become the all-zero matrix, we do not have to do any more Gaussian elimination steps; we can terminate Gaussian elimination after r steps.
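The three-step solve above can be sketched as follows. This is a minimal illustration with plain lists of lists, not a library implementation; the helper names (`forward_sub`, `back_sub`, `solve_pivoted`) are my own, and P and Π are assumed to be given explicitly as permutation matrices.

```python
# Sketch: solve A x = b given a full-pivoted factorization P A Pi = L U.
# Matrices are plain lists of lists; P and Pi are permutation matrices.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def forward_sub(L, b):
    # Solve L y = b for lower-triangular L (unit diagonal or not).
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    return y

def back_sub(U, y):
    # Solve U z = y for upper-triangular U.
    n = len(y)
    z = [0.0] * n
    for i in reversed(range(n)):
        z[i] = (y[i] - sum(U[i][j] * z[j] for j in range(i + 1, n))) / U[i][i]
    return z

def solve_pivoted(P, L, U, Pi, b):
    y = forward_sub(L, mat_vec(P, b))   # step 1: L y = P b
    z = back_sub(U, y)                  # step 2: U z = y
    return mat_vec(Pi, z)               # step 3: x = Pi z
```

For instance, with A = [[0, 1], [2, 3]] the first pivot is zero, so P swaps the rows (Π = I), giving PA = LU with L = I and U = [[2, 3], [0, 1]]; `solve_pivoted` then recovers the x that satisfies the original Ax = b.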
And we will get PAΠ equal to a product of LU form, but with block structure: the lower-triangular factor is [L11, 0; L21, I_(n-r)], where L11 is r x r, L21 is (n-r) x r, and I_(n-r) is the (n-r) x (n-r) identity; the upper-triangular factor is [U11, U12; 0, 0], where U11 is again r x r, U12 is r x (n-r), and the bottom blocks of sizes (n-r) x r and (n-r) x (n-r) are zero. So this is the kind of form you will get. In fact, this can be used to find the rank of a matrix, but in practice you will need a threshold on the entries to decide when you are going to call a submatrix the all-zero matrix. I think these things will become very clear to you if you try to do these Gaussian elimination steps using a rank-deficient matrix. So pick your favorite 3 x 3 or 4 x 4 matrix and make it rank-deficient: for example, write down three arbitrary columns and then make the fourth column the sum of those three columns. Now it is a rank-deficient matrix; try to execute the LU decomposition on it with pivoting, and you will see that after r steps this submatrix goes to zero, so you can stop the Gaussian elimination there. What we just discussed about pivoting is actually called the full pivoting algorithm. It gives you maximal numerical stability, because at each stage it pulls the biggest-magnitude entry in A22^(k-1) into the top-left position. But it is also the most numerically expensive, because A22^(k-1) is of size (n-k+1) x (n-k+1), so that many entries need to be compared in magnitude to find the biggest entry and its location, and then a row and column exchange must be done. So as far as numerical computation is concerned, this increases the computational complexity of the Gaussian elimination.
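The suggested exercise can be sketched in code. The function below is an illustrative, from-scratch Gaussian elimination with full pivoting that terminates early once the trailing submatrix is numerically all zero, returning the detected rank; `tol` is the threshold mentioned above, and the function name is my own.

```python
# Illustrative full-pivoting Gaussian elimination that stops once the
# trailing submatrix is (numerically) all zero, returning the rank.

def rank_by_full_pivoting(A, tol=1e-10):
    A = [row[:] for row in A]          # work on a copy
    n = len(A)
    r = 0
    for k in range(n):
        # Find the largest-magnitude entry of the trailing submatrix.
        p, q = max(((i, j) for i in range(k, n) for j in range(k, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) <= tol:
            break                      # all-zero submatrix: terminate early
        A[k], A[p] = A[p], A[k]        # row exchange (the P_k step)
        for row in A:                  # column exchange (the Pi_k step)
            row[k], row[q] = row[q], row[k]
        for i in range(k + 1, n):      # elimination (the M_k step)
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
        r += 1
    return r
```

Running it on a matrix whose last row is a combination of the others shows the early termination after r steps, exactly as described.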
However, people also sometimes do partial pivoting, where you only perform row exchanges. In that case, you will have the P matrices as discussed above, but you won't have the Π matrices; you won't do the column exchanges. This reduces the number of comparisons required. And of course, there are cases where pivoting is not required at all, for example, diagonally dominant matrices. A matrix is diagonally dominant if, in each row (or each column), the magnitude of the diagonal entry is bigger than the sum of the magnitudes of all the off-diagonal entries in that row (or column). For such matrices, pivoting, that is, row and column exchanges, is not required. Or in other words, even if you did the comparisons, they would just return P_k = I and Π_k = I each time, so you don't have to do them. So in some cases, pivoting is not required. How to efficiently compute the LU decomposition is kind of an area in itself, and there is a lot one can say, but for the purposes of this course, I think getting an idea of what it is and how it is done is sufficient. There are a few other useful modifications of the LU decomposition that one often considers, so I will highlight them a bit here: modifications to the LU decomposition. The first is what is called the LDM decomposition; it is also often called the LDM^T decomposition. This can be done if no zero pivots are encountered in the Gaussian elimination process. Because if you encounter a zero pivot, it doesn't mean you can stop there; you can stop only if the entire submatrix of A_r consisting of rows r+1 through n and columns r+1 through n is all zeros.
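The row-wise version of the diagonal dominance condition is simple to state in code. This is a small sketch with an illustrative function name; the column-wise condition is the analogous test on columns.

```python
# Strict row-wise diagonal dominance: |a_ii| exceeds the sum of the
# magnitudes of the off-diagonal entries in the same row, for every row.

def is_diagonally_dominant(A):
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))
```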
But if the pivot element is zero, then you have to do some row and column exchanges to bring a nonzero entry to the (r+1, r+1) position, and then you can proceed with the Gaussian elimination. If no zero pivots are encountered in the Gaussian elimination process, then there exist a unit lower triangular L, a unit lower triangular M (both are unit lower triangular, which is why we write the factor as M^T), and a diagonal D such that A = LDM^T. Essentially, since A = LU exists, what we do is let U = D M^T, where D_ii = U_ii. Basically, each row of M^T is equal to the corresponding row of U divided by the element U_ii; we are just extracting the diagonal entries of U, pulling them out, and writing U as D times M^T. So this gives you a decomposition of this form where L and M are both unit lower triangular. Then again, if you had to solve Ax = b, you have LDM^T x = b. So what we will do is let y = DM^T x and solve Ly = b; then we let z = M^T x and solve Dz = y, which is easy since D is a diagonal matrix; and the third step is to solve M^T x = z and obtain x. In particular, if the matrix is symmetric, then you can actually have a decomposition of the form LDL^T, the LDL^T decomposition for a symmetric matrix. The idea is that if A in R^(n x n) is symmetric and non-singular, then L = M, so LDM^T becomes LDL^T. This is not hard to see. Since A = LDM^T, pre-multiplying by M^(-1) and post-multiplying by M^(-T) gives M^(-1) A M^(-T) = M^(-1) L D. (These are unit lower triangular matrices, so their inverses always exist.)
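The extraction of D and M^T from U can be sketched directly. This is a minimal illustration, assuming U comes from an unpivoted LU factorization with nonzero diagonal; the function name is my own.

```python
# Sketch: split the U factor of A = L U into D and M^T so that A = L D M^T.
# D holds the diagonal of U; each row of M^T is the corresponding row of U
# divided by its diagonal entry, making M^T unit upper triangular.

def ldm_from_lu(U):
    n = len(U)
    D = [[U[i][i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    Mt = [[U[i][j] / U[i][i] for j in range(n)] for i in range(n)]
    return D, Mt
```

For example, U = [[2, 4], [0, 3]] splits into D = diag(2, 3) and M^T = [[1, 2], [0, 1]].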
Now, M is unit lower triangular, so M^(-1) is also unit lower triangular, and M^(-1) L times D, with D diagonal, is still lower triangular. On the other hand, A is symmetric, so M^(-1) A M^(-T) is symmetric. The only way a symmetric matrix can equal a lower triangular matrix is if it is diagonal. So M^(-1) A M^(-T) is diagonal; call it D'. Multiplying back by M on the left and M^T on the right, you see that A = M D' M^T, so A is of that form with D' diagonal and non-singular. Moreover, we have just shown that M^(-1) L D is diagonal, which implies M^(-1) L is also diagonal. But M^(-1) and L are unit lower triangular, which implies that M^(-1) L is unit lower triangular, and the only unit lower triangular matrix that is also diagonal is the identity matrix. That means M^(-1) L = I, or M = L. So basically, the decomposition has the form LDL^T. Okay, we stop here for today. In the next class I will continue with another closely related decomposition called the Cholesky decomposition, which is applicable to positive definite Hermitian matrices. That will lead our way into a detailed discussion of Hermitian symmetric matrices and positive definite matrices, which will occupy several weeks of further lectures.