If we interpret matrices as representing linear transformations, and we should, then AB, A + B, and A^(-1), when they exist, are also linear transformations. There's one more matrix operation that has no simple interpretation as a linear transformation, and that's the transpose of a matrix. Let A be a matrix. The transpose of A, written A^T, is the matrix M where the component M_ij is A_ji. In other words, we switch the rows and columns, or, visually, we flip the matrix across its diagonal.

So, for example, let's consider the transpose of a matrix. When we find the transpose, the rows become columns and the columns become rows. So, the first row of the matrix becomes the first column of the transpose, and the second row becomes the second column. And, again, one way we might picture this is that we flip the matrix along its diagonal. For another matrix, the procedure is the same: the first row becomes the first column, and the second row becomes the second column.

Sometimes the transpose of a matrix is the same as the original. And since the transpose of a matrix appears as a flip along the main diagonal, this introduces the idea of symmetry, and it suggests a definition: a matrix A is symmetric if A^T = A.

So, suppose A is a symmetric matrix, except we don't know some of the entries. Let's find those missing entries. Since A is symmetric, we know that A = A^T. So, we can find the transpose of our matrix, and since the two matrices are the same, each missing entry on the right-hand side must equal the corresponding entry on the left-hand side. For example, the entry in the first row, second column on the right must be the same as the entry in the first row, second column on the left, which is 3. Similarly, the entry in the second row, third column on the right must be the same as the entry in the second row, third column on the left. And likewise, the entry in the third row, first column on the right must be the entry in the third row, first column on the left. In effect, symmetry says that every entry is mirrored across the diagonal: A_ij = A_ji.

Now, again, the transpose doesn't have an obvious interpretation as a linear transformation, but it does turn out to be useful notationally. So, suppose we have a matrix, for example a 2-by-3 matrix with entries a11 through a23. We can interpret the rows as vectors: v1 is the entries in the first row, and v2 is the entries in the second row. And remember, the vector brackets and comma separators exist for readability only; they're not an intrinsic part of the vector. All that we require of a vector is that it be an ordered tuple of field entries. And so, if we interpret the rows as vectors, we can rewrite our matrix as the matrix whose first row is the vector v1 and whose second row is the vector v2.

But wait, there's more. If we interpret a vector as an ordered tuple, which it is, we can also interpret the columns as vectors. So, our first column, with entries a11 and a21, we can interpret as the vector u1 = (a11, a21). Similarly, the second column becomes the vector u2 = (a12, a22), and the third column becomes the vector u3 = (a13, a23). Now, there is a little bit of a problem here: if we write our matrix with u1, u2, u3 as its rows, we get a 3-by-2 matrix, the transpose of the one we want. We can fix that if we use the transpose of the vectors. Remember, in a transpose, rows become columns, so transposing turns each row vector into a column vector. And this allows us to rewrite our matrix as the matrix whose columns are u1^T, u2^T, and u3^T. And so, we can rewrite a matrix as a matrix of row vectors and as a matrix of column vectors.
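To make this concrete, here is a minimal sketch in Python with NumPy. The matrices and their entries are invented for illustration; they aren't the ones worked on the board.

```python
import numpy as np

# A hypothetical 2x3 matrix; the entries are invented for illustration.
A = np.array([[1, 2, 3],
              [4, 5, 6]])

# The transpose flips A across its diagonal: (A^T)[i, j] == A[j, i],
# so rows become columns and columns become rows.
print(A.T)
# [[1 4]
#  [2 5]
#  [3 6]]

# A matrix is symmetric when it equals its own transpose.
S = np.array([[1, 3, 7],
              [3, 5, 2],
              [7, 2, 9]])
print(np.array_equal(S, S.T))  # True: S is symmetric

# Recovering the "missing" entries of a symmetric matrix: if only the
# diagonal and the entries above it are known, A_ij = A_ji fills in the rest.
known = np.triu(S)                   # diagonal and above
filled = known + np.triu(S, k=1).T   # mirror the strict upper triangle below
print(np.array_equal(filled, S))     # True
```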
So, the row vectors are easy. We take our first vector v1 to be the first row and our second vector v2 to be the second row, and our matrix becomes the matrix whose rows are v1 and v2. As column vectors, we can take u1 to be the vector whose components are the same as the first column of our matrix, u2 to be the second column, and u3 to be the third column. And so, our matrix can be represented as the matrix whose columns are u1^T, u2^T, and u3^T.
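And here is the same row and column decomposition as a minimal NumPy sketch; again, the matrix entries are invented for illustration.

```python
import numpy as np

# A hypothetical 2x3 matrix with entries a11..a23, invented for illustration.
A = np.array([[11, 12, 13],
              [21, 22, 23]])

# Rows as vectors: v1 is the first row, v2 the second.
v1, v2 = A[0], A[1]
print(np.array_equal(np.vstack([v1, v2]), A))  # True: stacking the rows rebuilds A

# Columns as vectors: u1, u2, u3 are the columns of A.
u1, u2, u3 = A[:, 0], A[:, 1], A[:, 2]

# Stacking u1, u2, u3 as rows gives a 3x2 matrix -- that's A^T, not A...
print(np.vstack([u1, u2, u3]).shape)  # (3, 2)

# ...so each u_i must go in as a column (the "u_i transpose" of the text),
# which rebuilds A itself.
print(np.array_equal(np.column_stack([u1, u2, u3]), A))  # True
```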