So, given any matrix A, we can produce its singular value decomposition (SVD), A equals U sigma V transpose, where U and V are matrices corresponding to rotations and sigma is a diagonal matrix. Well, at least we say we can find it. Let's go ahead and actually find some. So, for example, let's try to find a singular value decomposition for this matrix.

By assumption, this matrix A will be U sigma V transpose. So if we form A transpose A, that's going to be V sigma transpose sigma V transpose, since U transpose U is the identity. We know what A transpose A is, so this means we want to find the eigenvalues and eigenvectors of that matrix. Finding the eigenvalues gives us the diagonal matrix lambda, which, by assumption, is the product sigma transpose times sigma. Now, remember that sigma is supposed to be a diagonal matrix, so let's write it as the diagonal matrix with entries sigma 1 1 and sigma 2 2. If we multiply sigma transpose by sigma, we get another diagonal matrix whose entries are the squares of those diagonal entries. Meanwhile, lambda is the diagonal matrix of eigenvalues. We have a choice here: we can list the eigenvalues in any order we want, and for reasons that will become clear later on, it's convenient to put them in descending order. So, while lambda could be any diagonal matrix whose entries are the eigenvalues, the one we'll actually use is the one with descending entries. By comparing entries directly, we can find both sigma 1 1 and sigma 2 2. At this point you may be wondering: each of these numbers has two square roots, so does it matter whether we take the positive or the negative one? And the answer is, I won't do your homework for you. That's something you should check on your own.

Now that we know the eigenvalues, we can find the eigenvectors and normalize them. And remember, the columns of V are the normalized eigenvectors of A transpose A. So we have V equals this matrix, and V transpose is its transpose. Now, we could find U in the same way, but we'll take a shortcut. To find U, notice that if A is U sigma V transpose, then right multiplying by V gives A V equals U sigma, since V transpose V is the identity. We know A, we know V, and we know sigma, so this lets us find U directly. Here, we'll treat U as a matrix of column vectors. If we multiply out both sides, then since the two matrices must be equal, the first column of A V should equal 6.3592 times u1, and that gives us our first column vector. Comparing the second columns gives the second column vector, and so we have the columns of U.

Now, the last step we took raises the following question. When we decompose A as U sigma V transpose, if we find U first, we can compute V transpose by comparing U transpose A with sigma V transpose: left multiplying by U transpose gives U transpose A equals sigma V transpose, and by assumption at that point we will have already found U and sigma, so we can read off V transpose by inspection. Or we could find V first and then compute U by comparing A V with U sigma: right multiplying by V gives A V equals U sigma, and again, by that point we will have found V and sigma. So the question is, which one should we find first? The answer is that it doesn't really matter, but we can choose whichever is easier. For example, suppose we wanted to find the singular value decomposition of a 2 by 3 matrix.
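If you want to check a computation like this numerically, here is a minimal sketch in Python with NumPy. The matrix A below is just a stand-in (the matrix from the example isn't reproduced in this transcript); the steps mirror the ones above: eigendecompose A transpose A, sort the eigenvalues in descending order, take positive square roots to build sigma, and recover U from A V equals U sigma.

```python
import numpy as np

# A stand-in 2x2 matrix with distinct, nonzero singular values.
A = np.array([[2.0, 3.0],
              [4.0, 5.0]])

# Eigen-decompose A^T A: its eigenvectors give the columns of V,
# and its eigenvalues are the squares of the singular values.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)

# Sort the eigenvalues (and matching eigenvectors) in descending order.
order = np.argsort(eigvals)[::-1]
eigvals = eigvals[order]
V = eigvecs[:, order]

# Sigma's diagonal entries are the positive square roots of the eigenvalues.
sigma = np.sqrt(eigvals)
Sigma = np.diag(sigma)

# Since A V = U Sigma, column i of U is (A v_i) / sigma_i.
U = (A @ V) / sigma          # broadcasting divides column i by sigma[i]

# Check: U Sigma V^T should reproduce A.
print(np.allclose(U @ Sigma @ V.T, A))   # True
```

The sign ambiguity mentioned above shows up here too: flipping the sign of a column of V flips the corresponding column of U, and the product U sigma V transpose is unchanged.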
Now, we might note that if A really is U sigma V transpose, then to make the dimensions work out we'll want U and V (and hence V transpose) to be square matrices. So this 2 by 3 matrix A has to be the product of a 2 by 2 matrix U, a 2 by 3 matrix sigma, and a 3 by 3 matrix V transpose. Since V and V transpose are the larger matrices, it might be easier to find U first. So we'll start by forming A A transpose and finding its eigenvalues and eigenvectors. The normalized eigenvectors form an orthonormal basis, and they become the columns of U. The diagonal entries of sigma are the square roots of the eigenvalues, and again we'll conveniently put them in descending order. Now, sigma is a 2 by 3 matrix, so the question is what to put in that last column. We won't go into the details here (do your own homework), but it turns out that we can fill that last column with zeros.

So again, by assumption, A is U sigma V transpose, and left multiplying by U transpose gives U transpose A equals sigma V transpose. Now let's set up that matrix equation. On the left-hand side, we just carry out the multiplication. On the right-hand side, we write sigma times V transpose, where V transpose consists of the columns of V written as rows. Each row of the product is then a singular value times the transpose of the corresponding column of V, so by matching rows we can find v1 transpose and v2 transpose. But what about v3?

Before we go out and find that third vector, let's see whether we actually need it. Remember, the idea here is to write A as the product U sigma V transpose. But notice that if we multiply together the three matrices we already have, U, sigma, and the part of V transpose we know, that third vector doesn't appear anywhere in the answer: the third column of sigma is all zeros, so v3 transpose is multiplied by zero throughout. We already get back our original matrix A, so we don't need that third vector.

At this point it's probably worth mentioning the value of productive laziness. Since the components of the third eigenvector don't enter into the computation, we don't need to compute them. So we could be lazy and find the eigenvalues and eigenvectors of the smaller of the two matrices, A transpose A or A A transpose. But if we put some effort into our laziness, we can find out something very interesting. And we'll take a look at that next.
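As before, here is a hedged numerical sketch of this rectangular case, again in Python with NumPy and with a made-up 2 by 3 matrix standing in for the one in the example. It builds U from the smaller matrix A A transpose, reads off the first two rows of V transpose from U transpose A equals sigma V transpose, and confirms that A is recovered without ever computing the third right singular vector.

```python
import numpy as np

# A hypothetical 2x3 matrix (not the one from the lecture).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# A A^T is only 2x2, so work with the smaller Gram matrix.
eigvals, U = np.linalg.eigh(A @ A.T)

# Put the eigenvalues (and the matching columns of U) in descending order.
order = np.argsort(eigvals)[::-1]
sigma = np.sqrt(eigvals[order])          # the singular values
U = U[:, order]

# From U^T A = Sigma V^T, row i of V^T is (row i of U^T A) / sigma_i.
Vt_reduced = (U.T @ A) / sigma[:, None]  # 2x3: only v1^T and v2^T

# Reconstruction never touches v3, since the third column of the
# 2x3 Sigma is zero; the 2x2 diagonal part is all that matters.
print(np.allclose(U @ np.diag(sigma) @ Vt_reduced, A))   # True
```

What this computes is the reduced (or thin) SVD: because the third column of sigma is zero, the third row of V transpose never contributes to the product, which is exactly the "productive laziness" point above.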