So, with the help of Python and this Google Colab notebook, I'm going to explain the process of QR decomposition, or QR factorization, of a matrix whose columns form a linearly independent set of column vectors. Remember that if the columns are linearly independent they form a basis, usually of some subspace when the matrix is not square, and we're going to change that basis so that each new column vector is orthonormal: all mutually orthogonal, and each of unit length one. To get there we go through what is called the Gram-Schmidt process. That gives us the matrix Q, and it also helps us develop the matrix R, which is upper triangular. So we have a multiplication of an orthonormal matrix times an upper triangular matrix, and that gives us back A: we've decomposed A, we've factorized A.

What I'm going to assume is just that you have knowledge of vector arithmetic: how to add vectors, how to project one vector onto another (we usually use the dot product for that), linear independence (for the column vectors of A), what the rank of a matrix is (we can use that to make sure the column vectors in our matrix are all linearly independent), orthogonality (what it means to be orthogonal), and also the normalization of a vector. If you have some idea of those, it'll be easy enough to follow along.

As for the packages we use in this notebook, I'm going to explain everything using SymPy, the symbolic Python package, which makes it very easy to do symbolic mathematics, or just normal numerical mathematics as well. I'm going to import the Matrix function (that's an uppercase M) and the init_printing function; calling init_printing allows SymPy to express LaTeX as the output of your computations. So let's run those and set up the printing. Here's our example matrix: three columns, three column
vectors there: (1, 1, 1), (0, 1, 1), and (0, 0, 1). That's a basis for R³, three-space, in this instance, but we want to change that basis to three column vectors that are mutually orthogonal and of unit length, as I mentioned. Of course, we first have to make sure that these three are linearly independent, and we'll do that, but first let's create the matrix. I'm going to use the Matrix function from SymPy, passing all nine values as a list object, and then call the reshape method with (3, 3) so that we have a three-by-three matrix. As you can see, the printout uses LaTeX, so it's nice and crisp, a lovely design there to the printout.

Let's make sure these are linearly independent column vectors. One way to do that is to row reduce, using elementary row operations (Gauss-Jordan elimination) to get to reduced row echelon form. If I call SymPy's rref (reduced row echelon form) method on the matrix, it performs those elementary row operations, and what we end up with is the identity matrix. That means only the zero vector is a solution to the homogeneous system involving this matrix A; in other words, those three column vectors are linearly independent. The other thing we could do is call the rank method, a.rank() with open and close parentheses, and that gives me three, so the columns span all of three-space and they are linearly independent.

Now, just to show you: if you want to do QR decomposition directly, you can use the QRdecomposition method on the matrix. There's my matrix that I created, and I'm just calling QRdecomposition on it, and that gives us back two matrices, which I've named Q and R. You can see Q there, and you can see R, and you'll see that all of Q's column vectors are orthonormal, so
orthogonal and of unit length, and then we have R, this upper triangular matrix. Just to verify that Q times R gets us back to A: I compute Q times R, and I get back A, no problem whatsoever.

So let's do this by hand. When we have a set of basis vectors, we just choose one of them to be the starting one, and all the others subsequent to that must be orthogonal to this first one. Since we have our first one there, we can just use it, and this is what we're going to do. So a sub i will denote the column vectors in my initial set of basis vectors; u sub i will be my new orthogonal set; and if I normalize one of those, in other words divide it by its norm, then I get u hat sub 1 or u hat sub 2, the orthonormal version. So they're going to be mutually orthogonal, but also of unit length.

Let's create a1, the computer variable I'm using here for a sub 1, the first column vector inside of A. I'm using square-bracket notation there, all the rows in column zero, and that gives me (1, 1, 1), the first column, and I'm just going to set that to be u1; I've got to take one of them to start off with, as long as the others are then mutually orthogonal (orthonormal) to u1. Of course, as I said, we have to divide by the norm of u1, or a1, I can call it either since they're the same thing, so I'm just calling the norm method. If we take the norm of u1 (or a1), we see it's the square root of 3, so we take a1 and divide it by the norm of a1, and I call that u1_normalized. So it's now normalized: from (1, 1, 1) we go to 1 over the square root of 3, or the square root of 3 over 3, for all three elements.

Now I just want to remind you how we do a projection. If I take the projection of vector a sub 2 onto a sub 1, it is the dot product of the two of them divided by the squared norm of the one
that you're taking the projection onto, times that vector. Let's have a look at a Google drawing. There's my a sub 1 (not the one we're dealing with in this problem, it's just a schematic), and there's a sub 2, the blue one at the top. If we take the projection, remember that's an orthogonal projection, it leaves me with the pink vector here along a1, which is the projection of a2 onto a1. But because it's an orthogonal projection, u2 is actually very nice, because it is orthogonal to a1, and we've just set a1 to be u1, so something orthogonal to it is given to us, such that this projection (the pink at the bottom) plus u2 has got to give me a sub 2. And that's what we see there: a sub 2 equals the projection of a2 onto u1, plus u2. If I solve for u2 on its own, then I have everything I need to calculate u2: it's a2, which I'm given in the problem, minus the projection, and that's very easy given that I have u1 (we decided u1 is a1) and I've been given a2. So taking the projection of a2 along u1 is easy, and that simple algebra leaves me with u2.

So that's what we're doing here. The projection of a2 along u1 (or a1, it doesn't matter) is a sub 2 dotted with u sub 1, divided by the norm of u sub 1 squared, times u1. Let's extract a2 there as the second column; we can see it rendered to the screen, it's just our second column, (0, 1, 1). So what I need to do is take this dot product, divide by the norm squared of u1, and multiply by u1, and you can see it right there. To do the dot product, you take one of your vectors, a1, use the dot method, and pass as the argument the other vector that you're interested in. If we look at the projection of a2 onto u1 again, I'm giving it a very simple name there, so that I can figure out what I meant if I see this
code down the line, or give it to someone else who can then figure out what I meant by this computer variable name. And it's (2/3, 2/3, 2/3). Now, as I said, very conveniently we have this orthogonal decomposition using that projection onto the other vector, and that's what I have here: u2 plus the projection of a2 onto u1, exactly what we had in the picture, has got to equal a sub 2. All I'm doing is putting in the definition, so it's u2 plus, in this middle bit here, a sub 2 dotted with u sub 1 divided by the norm of u sub 1 squared, times u1 (remember, that's the projection). And that makes it very easy: u2 is just a2 minus this projection that we calculated. So, lo and behold, there's u2, and all we want to do now is normalize it, divide it by its norm, u2 divided by the norm of u2, and there I see my second new basis vector. It is orthogonal to the first one, and it's orthonormal, seeing that its norm is one.

What I've shown you here is the part you really have to understand, or at least memorize. For u sub n, any of the ones I'm looking for (we've just looked at u sub 2, and next up we'll look at u sub 3): just as we had a sub 2 minus the projection giving us u2, we do exactly the same in general. It might be u sub one thousand; that's just going to be a sub one thousand minus a summation, where we start at i equals 1 and go all the way up to n minus 1, so not all the way up to n. If you think about all the ones from one to two to three to four: this u sub n had better be orthogonal to each of the ones that came before it. It's got to be mutually orthogonal, orthogonal to each and every one before, and that's
how we do this; that's why we have the summation there. We have to take a sub n, so if I'm looking at u sub 3, I take a sub 3 minus a sub 3 dotted with each of the previous ones, u sub 1 and u sub 2, only up to two because we only go to n minus 1. It's got to be mutually orthogonal, so we've got to have each one of those mutually orthogonal ones in there. So we look at the projection of a sub 3 onto u2, and then also the projection of a sub 3 onto u1: a sub 3 has to be projected onto the one before and the one before that, and you see the equation there, such that it's very easy for us to write the code. There's a sub 3, then there's the dot product for the projection of a sub 3 onto u2, we see that, and then a sub 3 onto u1, we see that one too, and now we just subtract both from a sub 3. You can see the algebra very clearly there, because just as in the two-dimensional case, where you can add the two vectors to get to the final one, that's what you've got to do now: it's orthogonal to two other vectors, so you'd better bring them both in, which makes it easy, through simple algebra, to calculate u3. All we have to remember is to divide it by its norm, and that gives us the normalized version.

So now we have the whole of U. Remember, we're calling it Q because it's QR decomposition; I'll call it u_hat here, and I'm just using the Matrix function and putting all three column vectors next to each other. Just to show you that they are all mutually orthogonal, I'm going to take each pair of them (there'll be three pairs) and do the dot product between each, and they had better all be zero, because the dot product of orthogonal vectors is zero, and we see zero, zero, zero. If you use NumPy, it's not going to do this symbolically; it's going to use numerical calculations, and that's going
to have a little bit of floating-point error, so you might not get exactly zero, but you'll get something times 10 to the power of negative 16 or 17, and that really is zero. So we can see all three of them are mutually orthogonal.

And this is how you calculate R: it's an upper triangular matrix, and you can see the pattern repeating, it's just the normal dot product of each q with each column of A, such that if you then have Q and R, you can get back to A.

Now, up to this point we've looked at a very simple case, where we had this matrix over the field of real values, so this is in essence not the full picture. You've also got to consider the field of complex numbers, and how do you do that? We used the dot product here because we were talking about Euclidean space, so that wasn't a problem, but as soon as we go to other spaces, as soon as we include complex numbers for instance, remember it's not the normal dot product we're talking about but the inner product. So you've got to define the inner product between two vectors for the space that you're working in, and as long as you take the inner product, rather than the dot product, everywhere, nothing changes.

So, in short, that is QR decomposition using the Gram-Schmidt process. It really is an easy process; you just have to do a couple of these steps and not make a silly mistake. It's easy to remember, and it's all about the projection of one vector onto another. You just have to remember to keep adding projections in, inasmuch as every new vector that you calculate has to be orthogonal to all the ones that came before.
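The whole by-hand procedure above can be collected into one small function. This is a minimal sketch assuming real-valued, linearly independent columns; the helper name gram_schmidt_qr is my own, not SymPy's (SymPy's built-in equivalent is A.QRdecomposition()).

```python
# A minimal sketch of the Gram-Schmidt QR factorization walked through above,
# using SymPy for exact symbolic arithmetic. gram_schmidt_qr is a hypothetical
# helper name; SymPy's built-in equivalent is A.QRdecomposition().
from sympy import Matrix, eye, simplify

def gram_schmidt_qr(A):
    us = []                                    # the orthogonal (not yet unit) vectors u_i
    for j in range(A.cols):
        a_j = A[:, j]                          # j-th original column a_j
        u_j = a_j
        for u_i in us:
            # subtract the projection of a_j onto each earlier u_i:
            # (a_j . u_i / ||u_i||^2) * u_i
            u_j = u_j - (a_j.dot(u_i) / u_i.dot(u_i)) * u_i
        us.append(u_j)
    Q = Matrix.hstack(*[u / u.norm() for u in us])   # normalize each column
    R = Q.T * A                                # upper triangular, entries q_i . a_j
    # (over the complex numbers you would use the conjugate inner product,
    #  i.e. Q.H * A instead of Q.T * A, and conjugate the dot products above)
    return Q, R

# the example matrix from the notebook: columns (1,1,1), (0,1,1), (0,0,1)
A = Matrix([1, 0, 0, 1, 1, 0, 1, 1, 1]).reshape(3, 3)
Q, R = gram_schmidt_qr(A)
```

Because the arithmetic is exact, Q times R reproduces A exactly and Q transpose times Q is exactly the identity; a floating-point version (NumPy) would instead leave residuals on the order of 10 to the negative 16, as mentioned above.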