Okay, so the next thing I want to talk about: we've discussed the Jordan canonical form, and that gives us a nice opportunity to discuss other canonical forms and factorizations. Specifically, we will look at triangular factorizations, where you reduce a matrix to a triangular form. This is useful because of the following motivation: suppose we want to solve Ax = b, where A is square and non-singular, and let's say it's upper triangular. Then this system of equations has the form

    a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = b_1
    ...
    a_{nn} x_n = b_n,

that is, the coefficient matrix has entries a_{11} through a_{1n} across the first row and zeros below the main diagonal. One way to solve this is through what is known as back substitution. We use the last equation, a_{nn} x_n = b_n, which directly gives us x_n. Then we substitute that into the second-to-last equation, a_{n-1,n-1} x_{n-1} + a_{n-1,n} x_n = b_{n-1}; you already know what x_n is, so you substitute for x_n here, take it to the other side, and solve for x_{n-1}, and so on. So if A is triangular we can do this. If A is not triangular but is still non-singular, you can almost do what I just showed you, provided we have a factorization of the form A = LU, where L is lower triangular and U is upper triangular. Okay, so then to solve Ax = b, we write A = LU, call Ux = y, and first solve Ly = b; L is lower triangular, so you can use exactly the opposite of what I discussed here, which is typically called forward substitution. Then, once you have found y, you solve Ux = y backward as I did, because U is upper triangular. So if I can find a factorization of A in the form LU, then I can solve Ax = b using these two forward and backward substitution steps.
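As a minimal sketch of this two-step solve, here are the two triangular solves in NumPy; the function names, matrices, and right-hand side are my own made-up illustration, not from the lecture:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for lower-triangular L, working from the top row down."""
    n = L.shape[0]
    y = np.zeros(n)
    for i in range(n):
        # subtract the already-known terms, then divide by the diagonal entry
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward_substitution(U, y):
    """Solve U x = y for upper-triangular U, working from the bottom row up."""
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Given a factorization A = L U, solving A x = b takes two triangular solves.
L = np.array([[2.0, 0.0],
              [1.0, 3.0]])
U = np.array([[1.0, 4.0],
              [0.0, 5.0]])
b = np.array([2.0, 16.0])

y = forward_substitution(L, b)   # solve L y = b
x = backward_substitution(U, y)  # solve U x = y, so (L U) x = b
```

Each triangular solve costs about n^2 operations, which is why having the factorization up front makes the solve cheap.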
So this is meaningful only if I can compute L and U without too much computational effort; otherwise I might as well try to invert A directly. Okay, so how do you do this LU factorization? The answer to that question lies in what is known as Gaussian elimination. We'll come to LU factorization in a bit, but first a small detail: Gaussian elimination is one way to solve a system of linear equations. I suspect most of you have seen this in your undergraduate studies already, but just to recap. Suppose, as an example, we are given Ax = b, where A is some non-singular matrix of size 3 x 3. Then the system of equations I'm trying to solve looks like

    [ A_{11} A_{12} A_{13} ] [x_1]   [b_1]
    [ A_{21} A_{22} A_{23} ] [x_2] = [b_2]
    [ A_{31} A_{32} A_{33} ] [x_3]   [b_3].

What I can do is use Gaussian elimination to reduce this to the form

    [ A_{11}  A_{12}   A_{13}  ] [x_1]   [b_1  ]
    [ 0       A_{22}'  A_{23}' ] [x_2] = [b_2' ]
    [ 0       0        A_{33}''] [x_3]   [b_3''],

where I have to do the same elimination operations on the right-hand side as well. This is of the form Ux = b', and backward substitution works. So what are these row operations I do to get this form? It's very simple. First I compute row 2' = row 2 - (A_{21}/A_{11}) x row 1; this makes the (2,1) entry 0, the other two entries become A_{22}' and A_{23}', and b_2 becomes some b_2'. Then I do row 3' = row 3 - (A_{31}/A_{11}) x row 1; this kills the (3,1) entry and gives a b_3' on the right-hand side, but the (3,2) entry may still be non-zero. So finally we compute row 3'' = row 3' - (A_{32}'/A_{22}') x row 2'. If you do these three steps, the matrix is reduced to the upper triangular form above, with b_2' and b_3'' on the right-hand side, and then you can use backward substitution to solve for x.
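The three row operations above can be sketched on a concrete augmented matrix; the numbers in A and b here are made up purely for illustration:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 5.0])

M = np.hstack([A, b.reshape(-1, 1)])  # augmented matrix [A | b]

# row 2' = row 2 - (A21/A11) * row 1   -> zeros the (2,1) entry
M[1] -= (M[1, 0] / M[0, 0]) * M[0]
# row 3' = row 3 - (A31/A11) * row 1   -> zeros the (3,1) entry
M[2] -= (M[2, 0] / M[0, 0]) * M[0]
# row 3'' = row 3' - (A32'/A22') * row 2'  -> zeros the (3,2) entry
M[2] -= (M[2, 1] / M[1, 1]) * M[1]

U, b_new = M[:, :3], M[:, 3]
# U is now upper triangular; back substitution would finish the solve
# (np.linalg.solve is used here as a stand-in for that step).
x = np.linalg.solve(U, b_new)
```

Note that each update uses the already-modified rows of M, which is exactly the primed quantities in the description above.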
So basically these row operations preserve the solution of the original system of equations, and each operation places a 0 in an appropriate place below the main diagonal. This is the reason why Gaussian elimination works. Okay, so once A is reduced to upper triangular form, we can obtain the solution by backward substitution. Here is the backward substitution algorithm, just for the sake of completeness; I've already explained what it is. For i = n, n-1, down to 1: set x_i = b_i; then for j = i+1 to n, set x_i = x_i - u_{ij} x_j; and finally set x_i = x_i / u_{ii}. Now, if you actually went through the computing effort involved in doing Gaussian elimination and backward substitution, one counts the number of computational operations in terms of flops, or floating point operations. The total number of floating point operations for Gaussian elimination is of the order of 2n^3/3, and the total number of flops for backward substitution is of the order of n^2. So Gaussian elimination is the most expensive step in solving Ax = b via this route. Okay, so with that background we can now discuss the LU decomposition. We want to find L, a lower triangular matrix, and U, an upper triangular matrix, such that A = LU. So the questions are: how do you perform this LU decomposition? What is its computational effort? And finally, what is the relationship between Gaussian elimination and LU decomposition? The first step in finding this LU decomposition is to observe that Gaussian elimination is equivalent to a sequence of what are known as Gauss transforms. We will see this and explicitly write out how it happens.
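The backward substitution loop above translates almost line for line into code; here is a minimal sketch (0-based indices, so i runs from n-1 down to 0, and the matrix U and vector b are made-up example values):

```python
import numpy as np

def back_substitution(U, b):
    """Backward substitution exactly as in the lecture's loop:
    for i = n down to 1: x_i = b_i; x_i -= u_ij * x_j for j > i; x_i /= u_ii."""
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):   # i = n-1 down to 0 (0-based)
        x[i] = b[i]
        for j in range(i + 1, n):    # j = i+1 to n-1
            x[i] -= U[i, j] * x[j]
        x[i] /= U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])
x = back_substitution(U, b)
```

The inner loop over j does about n - i multiplications, which summed over i gives the order-n^2 flop count mentioned above.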
That is, there exist matrices M_1, M_2, ..., M_{n-1} in R^{n x n} such that M_{n-1} M_{n-2} ... M_1 A = U, which is in upper triangular form. Specifically, M_k is a matrix that introduces a zero or zeros below the main diagonal in the k-th column, after the previous k-1 transforms have been applied. So the first transform M_1 introduces zeros below the main diagonal in the first column of A; after the second, M_2 M_1 A has zeros below the main diagonal in the first two columns of A; and so on. After n-1 transforms the result is upper triangular and Gaussian elimination is complete. Okay, so the thing that we need to understand next is: what is the structure of M_k? This is something I will discuss in the next class. We will stop here for today.
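The structure of M_k is left for the next lecture, but the claim that such a sequence exists can be checked numerically on a made-up 3 x 3 example; the helper name and the matrix entries below are my own illustration, not from the lecture:

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
n = A.shape[0]

def gauss_transform(B, k):
    """Build an M_k that zeros out the entries of B below the main
    diagonal in column k (0-based), assuming the pivot B[k, k] != 0."""
    M = np.eye(n)
    # negated multipliers below the pivot, as in the elimination row operations
    M[k + 1:, k] = -B[k + 1:, k] / B[k, k]
    return M

M1 = gauss_transform(A, 0)          # zeros column 1 below the diagonal
M2 = gauss_transform(M1 @ A, 1)     # zeros column 2 below the diagonal
U = M2 @ M1 @ A                     # upper triangular
```

Each M_k differs from the identity only below the diagonal of one column, which is what makes the product with A equivalent to the row operations from before.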