So, remember that to multiply two matrices, we find the sum of the component-wise products of the rows of the first matrix with the columns of the second. But the rows and columns can be treated as vectors, and the sum of the component-wise products is exactly the dot product. So we can reformulate matrix multiplication in terms of the dot products of row and column vectors. Suppose we take our matrix A to be a matrix of row vectors u1 through um, and our matrix B to be a matrix of column vectors v1 through vk, and we'll assume that the u's and v's are vectors of the same size, since otherwise we can't form the dot products. Then we can express the product AB as the matrix of dot products: the ij-th entry of AB is ui dot vj. So, for example, we can take a matrix product, interpret it in terms of the dot product, and then compute it: take the rows of the first matrix and the columns of the second matrix as our vectors, rewrite the two matrices as a matrix of row vectors and a matrix of column vectors, and express each entry of the product as the dot product of the corresponding row and column. Computing those dot products gives us the matrix product. We can also go backwards. Let's find a matrix product equal to the dot product of (1, 5, 4) with (2, 1, 6). If we think of this dot product as a matrix product, row times column, then we'll want a matrix with a single row 1, 5, 4 and a matrix with a single column 2, 1, 6. Multiplying row times column gives 1·2 + 5·1 + 4·6 = 31, which is exactly the dot product. So if we view the vectors u and v as column vectors, then it appears we can express the dot product u dot v in terms of the matrix product u transpose v. Not quite. The important thing to note here is that the dot product returns a scalar, but the matrix product returns a matrix.
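The reformulation above is easy to check in a few lines of NumPy. The matrices here are made up for illustration (the lecture's specific example matrices aren't reproduced), but the row-times-column dot products match the built-in matrix product, and the dot product of (1, 5, 4) with (2, 1, 6) reappears as the single entry of a one-by-one matrix product.

```python
import numpy as np

# Illustrative matrices (not the ones from the lecture).
A = np.array([[1, 2, 3],
              [4, 5, 6]])        # rows of A are the vectors u1, u2
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])         # columns of B are the vectors v1, v2

# The ij-th entry of AB is the dot product of row i of A with column j of B.
AB = np.array([[np.dot(A[i, :], B[:, j]) for j in range(B.shape[1])]
               for i in range(A.shape[0])])
assert np.array_equal(AB, A @ B)   # agrees with the built-in product

# Going backwards: the dot product (1, 5, 4) . (2, 1, 6) as row times column.
u = np.array([1, 5, 4])
v = np.array([2, 1, 6])
scalar = np.dot(u, v)                              # a scalar: 31
row_times_column = u.reshape(1, 3) @ v.reshape(3, 1)  # a 1-by-1 matrix: [[31]]
print(scalar, row_times_column)
```

Note how the last two lines make the lecture's distinction concrete: `np.dot` returns a scalar, while the matrix product returns a one-by-one array holding that scalar.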
But as long as you keep this distinction in mind, it is convenient to remember that for two column vectors u and v of the same size, u dot v is u transpose v, where we interpret the left-hand side as a scalar and the right-hand side as a one-by-one matrix whose single entry is that scalar. Now, computationally there's no advantage to the dot-product interpretation of matrix multiplication, and again, there's no simple interpretation of the transpose on its own. But in combination, the two lead to a very powerful result, and it's the following. Later on, it will be useful to find the transpose of a product, (AB) transpose. We'd like it to be somehow related to the transpose of A and the transpose of B, and the obvious question is: might it be A transpose times B transpose? So let's do a little bit of analysis first. Supposing the product AB exists, which products of the transposes exist, and which will have the same dimensions as the transpose of the product? Suppose A is an m-by-n matrix. If AB exists, then B must be an n-by-k matrix, and so AB will be an m-by-k matrix. Its transpose, (AB) transpose, remembering that rows become columns, will be a k-by-m matrix. Meanwhile, A transpose will be an n-by-m matrix and B transpose will be a k-by-n matrix. So the product B transpose times A transpose exists, and it will be a k-by-m matrix, the same dimensions as (AB) transpose. In other words, if the product AB exists, then (AB) transpose will have the same dimensions as B transpose A transpose. This suggests the possibility that (AB) transpose is B transpose A transpose. But let's prove it. So suppose A consists of the row vectors u1 through um, and B consists of the column vectors v1 through vk, and again we'll assume that these vectors are the same size, so we can form the dot products. Now consider the ij-th entry of (AB) transpose. Remember, the transpose turns rows into columns and columns into rows, so this will be the ji-th entry of the product AB.
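The dimension bookkeeping above can be sketched numerically; the sizes m, n, k and the matrix entries below are arbitrary choices for illustration.

```python
import numpy as np

# Made-up dimensions: A is m-by-n, B is n-by-k, so AB exists.
m, n, k = 2, 3, 4
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(m, n))
B = rng.integers(-5, 5, size=(n, k))

print((A @ B).shape)      # (m, k), so its transpose is k-by-m
print((A @ B).T.shape)    # (k, m)
print((B.T @ A.T).shape)  # (k, m): same dimensions, as the analysis predicts
# By contrast, A.T @ B.T would be (n, m) times (k, n) -- those
# dimensions don't even match unless m happens to equal k.
```

The shape check alone already rules out the "obvious" guess A transpose times B transpose for rectangular matrices.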
But the ji-th entry of AB is the dot product of the j-th row of A with the i-th column of B, so it equals uj dot vi. Meanwhile, consider the ij-th entry of B transpose A transpose. This is the dot product of the i-th row of B transpose with the j-th column of A transpose. But again, the transpose turns rows into columns and columns into rows. So the i-th row of B transpose is the i-th column of B, which is our vector vi, and similarly the j-th column of A transpose is the j-th row of A, which is our vector uj. And so the ij-th entry of B transpose A transpose is the dot product vi dot uj. But remember, the dot product is commutative, so uj dot vi is the same as vi dot uj. Therefore (AB) transpose and B transpose A transpose are equal entry by entry, and this proves that, whenever the product AB exists, (AB) transpose equals B transpose A transpose.
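The entry-by-entry argument can also be verified directly on a pair of random matrices (chosen here only for illustration): the ij-th entry of (AB) transpose, the dot product uj dot vi, and the ij-th entry of B transpose A transpose all coincide.

```python
import numpy as np

# Random integer matrices, for illustration only.
rng = np.random.default_rng(1)
A = rng.integers(-9, 9, size=(3, 4))   # rows of A are u1, u2, u3
B = rng.integers(-9, 9, size=(4, 2))   # columns of B are v1, v2

lhs = (A @ B).T
rhs = B.T @ A.T
for i in range(lhs.shape[0]):
    for j in range(lhs.shape[1]):
        uj = A[j, :]   # j-th row of A
        vi = B[:, i]   # i-th column of B
        # ij-th entry of (AB)^T  ==  uj . vi  ==  ij-th entry of B^T A^T
        assert lhs[i, j] == np.dot(uj, vi) == rhs[i, j]
```

This is just the proof replayed numerically: every entry comes down to one dot product, and commutativity of the dot product does the rest.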