A permutation matrix is an n by n identity matrix whose rows have been rearranged. In other words, it's a permutation of the rows of I. For example, let's find the permutation matrix produced by switching the second and third rows of the 3 by 3 identity matrix. We'll start with the 3 by 3 identity matrix and switch the second and third rows; we weren't told to do anything with the first row, so we'll leave it where it was:

    1 0 0
    0 0 1
    0 1 0

Now, why is the permutation matrix called the permutation matrix? The obvious answer is that it's a permutation of the rows of the identity matrix. But if the only thing you learn in linear algebra is that every matrix corresponds to a linear transformation, you'll probably fail the class, because there's a lot more you should be learning; at least you'll have learned the most important idea. A permutation matrix corresponds to a permutation of the components of a vector.

So let's take our permutation matrix and apply it to the vector (A, B, C). To do that, we rewrite the vector as a column vector and multiply, and we get (A, C, B). Notice that we produced the permutation matrix by switching the second and third rows of the identity matrix, and when we apply the permutation matrix to a vector, it switches the second and third components of the vector. In general, permutations of the rows of the identity matrix correspond to permutations of the vector components.

This means we can also go backwards. Given a permutation, to find the corresponding permutation matrix, permute the rows of the identity matrix in the same way. This is actually a little trickier than it sounds. Suppose we want to find the permutation matrix corresponding to the permutation A, B, C, D, E goes to D, B, A, E, C. It's important to keep in mind that the permutation is the rearrangement. So while we could read this as "A becomes D," that's not the permutation.
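The 3 by 3 example can be sketched in plain Python (this sketch is mine, not from the lecture; the helper names identity and apply_perm are made up for illustration):

```python
def identity(n):
    """n x n identity matrix as a list of rows."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def apply_perm(M, v):
    # Each row of a permutation matrix is all zeros with a single 1;
    # that 1 picks out which component of v lands in this position.
    return [v[row.index(1)] for row in M]

P = identity(3)
P[1], P[2] = P[2], P[1]          # switch the second and third rows

print(P)                          # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
print(apply_perm(P, ["A", "B", "C"]))   # ['A', 'C', 'B']
```

Swapping rows 2 and 3 of the identity swaps components 2 and 3 of the vector, just as the lecture says.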
The permutation is that A shifts from the first position to the third, so we move the first row of the identity matrix to the third row. And remember: if it's not written down, it didn't happen. We need to record that R1 goes to R3. Also note that this is not a switch of rows one and three; row three goes someplace else, which we'll have to determine.

Next, the second element, B, stays the second element. Strictly speaking, we don't need to write this down as a row move, but we should, just so we know we've taken care of it. Even when nothing happens, it should still be written down. So we'll record that R2 goes to R2. The third element, C, moves to the fifth position: R3 goes to R5. The fourth element, D, moves to the first place: R4 goes to R1. And the fifth element, E, moves to the fourth place: R5 goes to R4.

Applying these row moves, we get the permutation matrix

    0 0 0 1 0
    0 1 0 0 0
    1 0 0 0 0
    0 0 0 0 1
    0 0 1 0 0

And remember, if you don't catch your mistakes, someone else will. This permutation matrix is supposed to produce this permutation, so let's verify that it does by applying it to the vector. We multiply the matrix by the column vector (A, B, C, D, E), and we find (D, B, A, E, C), which is what we wanted.

Given a matrix, the first question you should ask is: does it have an inverse? Since permutation matrices are permutations of the rows of the identity matrix, they have inverses. So how do we find them? Here's an important observation. Every row of a permutation matrix consists of zeros and a single one, and no two rows have their one in the same column. So if u and v are the vectors corresponding to different rows, we know the dot product will be zero, because the ones won't align. Meanwhile, the dot product of any row with itself will be one.
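The row-move bookkeeping above can be checked in plain Python (again a sketch of mine, not the lecture's; the dictionary moves records each source row's destination, 0-indexed):

```python
def identity(n):
    """n x n identity matrix as a list of rows."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def apply_perm(M, v):
    # The single 1 in each row selects a component of v.
    return [v[row.index(1)] for row in M]

I5 = identity(5)
P = [None] * 5
# Destination of each row (0-indexed): R1->R3, R2->R2, R3->R5, R4->R1, R5->R4
moves = {0: 2, 1: 1, 2: 4, 3: 0, 4: 3}
for src, dst in moves.items():
    P[dst] = I5[src]

print(apply_perm(P, ["A", "B", "C", "D", "E"]))   # ['D', 'B', 'A', 'E', 'C']
```

Note the check confirms the point in the text: the matrix encodes where each position's content goes, not "A becomes D."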
Since the product of a matrix and its inverse should be the identity matrix, whose entries are all zeros and ones, this suggests that if we could multiply the rows of the permutation matrix by rows of the permutation matrix, we might get the identity. And remember, the transpose of a matrix turns row vectors into column vectors. This suggests that if P is a permutation matrix, its inverse is its transpose.

Well, let's prove this. Suppose P is a permutation matrix whose row vectors are p1 through pn, so the columns of P transpose are p1 transpose through pn transpose. Then the entry of the product P P transpose in row i, column j is pi pj transpose, the dot product of row i with row j. Now, since every row pi consists of all zeros with a one in one position, the product of any row with the transpose of any row will either be one, if the rows are the same, or zero, if the rows are different.

So the entry in the first row, first column is p1 p1 transpose, and that's got to be one because the indices are the same, while the remaining entries in the first row will be zero because the indices are different. That gives us the first row of our product. In the second row, all entries will be zero except p2 p2 transpose, the second-row, second-column entry. And in general, all entries in the ith row will be zero except for the entry in the ith column: the ith-row, ith-column entry will be one, and our product will be the identity matrix.

Now remember, the inverse has to work on both sides, so we also need to prove that P transpose P is equal to I. Proving that this is the case will be a homework problem. But once you've done that, we have the following result: if P is a permutation matrix, its inverse is its transpose.