In this video, I want to talk about the classical matrix group known as the unitary group. It's very similar to the orthogonal group and the special orthogonal group, which were special subgroups of the general linear group of real matrices. The unitary group is essentially the complex analog of the orthogonal group, and because of that, I'm not going to redo all the details we covered in our video on the orthogonal group. I'd suggest you watch that video if you want to see the details we're going to omit here.

But there are some important differences. Why do we have a unitary group rather than just a "complex orthogonal group"? Because it turns out the notion of orthogonality has to be modified when we talk about complex vectors and complex matrices. Particularly, the issue is with the transpose. If we interpret the usual dot product u · v in terms of matrix multiplication, it's the transpose of u times v, and it turns out that's the wrong way to define an inner product for complex vectors.

Let me give you one prime example. Take the complex vector (1, i). It seems perfectly fine, but if you take its dot product with itself, you get 1 times 1, which is 1, and i times i, which is −1, so 1 − 1 = 0. The dot product of the vector with itself is 0, but notice (1, i) is not the zero vector. This is a problem, because the way we defined the length of a vector in Euclidean vector spaces is ‖v‖ = √(v · v): take the dot product of the vector with itself and take the square root. If v = (1, i), this would say the length of v is √0 = 0, so the vector (1, i) has zero length. That runs against our intuition of what the geometry of this complex vector space should be: the vector should have some positive length. The culprit is a defect in the transpose operator for complex matrices, and we fix it with what we're going to talk about right here.

So instead of the dot product, complex vectors use the so-called Hermitian product. Before we define it, let me show you the band-aid we can put on this boo-boo. If A is an m-by-n complex matrix, we define its conjugate transpose, usually denoted A*: take the ordinary transpose we had before, but also take the complex conjugate of every entry. That's what the bar on top means: conjugate all the numbers. Whenever you work with a complex vector space, a complex matrix, or a complex vector, you always use the conjugate transpose and never the plain transpose. The problem before was exactly that we weren't taking conjugates, which is why we got zero when we shouldn't have. We never noticed this with real vector spaces because the conjugate of a real number is itself, so the conjugation is invisible for real matrices. We need it for complex ones.
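If you want to see this numerically, here's a minimal sketch, assuming NumPy (my tooling choice, not something from the lecture); v is the troublesome vector above, and A is the 2-by-3 example we're about to work by hand.

```python
import numpy as np

v = np.array([1, 1j])

# The naive transpose-style dot product reports "length zero" for a nonzero vector.
print(v @ v)              # (1)(1) + (i)(i) = 1 - 1 = 0

# The fix: conjugate the first vector before multiplying.
print(v.conj() @ v)       # (1)(1) + (-i)(i) = 2, so the length is sqrt(2)

# Conjugate transpose of the 2-by-3 example matrix A: transpose, then conjugate.
A = np.array([[1 + 1j, -1j, 0],
              [2, 3 - 2j, 1j]])
print(A.conj().T)
```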
As a quick example, take the 2-by-3 matrix A with rows (1 + i, −i, 0) and (2, 3 − 2i, i). Its conjugate transpose A* turns the rows into columns, with conjugates. The first row becomes the first column: 1 + i becomes 1 − i, −i becomes i, and 0 stays 0, since conjugating a real number does nothing. The second row becomes the second column: the conjugate of 2 is just 2, because it's real; the conjugate of 3 − 2i is 3 + 2i; and the conjugate of i is −i.

Same thing for B, a 3-by-3 complex matrix. The first row becomes the first column of B*, with conjugates: you get 1, −i, 1 − i. The second row becomes the second column: i, −5, 2 + i. Notice the sign of the real part doesn't change, even when the real part is the only part there, as with the −5. And the third row becomes the third column: the conjugate of 1 − i is 1 + i, the conjugate of 2 + i is 2 − i, and the conjugate of 3 is 3, since it's real. So: we take the conjugate transpose whenever we have complex vectors or matrices.

Using the conjugate transpose instead of the regular transpose, we define the Hermitian product, which is the inner product for complex vectors. When you see u · v for complex vectors, it means u*v: take the conjugate transpose of the first vector, so it becomes a row vector with every entry conjugated, and multiply it against the second vector, unaffected. You get the sum of the products of the first entries, the second entries, and so on up to the nth entries, but always with the conjugate on the first vector's entries.

Now I have to caution you here, because many textbooks alternatively define the Hermitian product the other way: u · v equals the transpose of u times the conjugate of v. So you take the plain transpose of the first vector and conjugate the second one instead. From a theoretical point of view there's really no difference. It does change the exact numbers: the Hermitian product the way we've defined it, compared to how others define it, always comes out as the conjugate. That's the only difference between the two definitions. So theoretically, no difference; computationally, you just take conjugates of everything.

The reason we deviate from that convention, and honestly I'd say it's the more popular one, is that it requires using the transpose with the conjugate kept separate, which seems kind of weird. If we're going to be true to the principle here, I'm making a blood oath with you right now: you know, slice your hand, we shake on it, and we never use plain transposition when we talk about complex matrices, always the conjugate transpose. In order to keep this unbreakable vow, we must have the star right there in the definition. It makes a slight difference, but in the end I would argue it leads to a simpler theory of orthogonality for complex vectors. I'll let you be the judge, of course.
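To see the two conventions side by side, here's another small sketch, again assuming NumPy; note that NumPy's vdot conjugates its first argument, which happens to match the convention we just swore the blood oath over.

```python
import numpy as np

u = np.array([1, 1j])
v = np.array([1j, 2])

# Our convention: conjugate the first vector. np.vdot does exactly this.
print(np.vdot(u, v))      # (1)(i) + (-i)(2) = -i
print(u.conj() @ v)       # the same product, written out

# The alternative textbook convention conjugates the second vector instead;
# the two conventions always differ by an overall complex conjugate.
print(u @ v.conj())       # (1)(-i) + (i)(2) = i
```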
Now, some examples, since you might be unfamiliar with the Hermitian product. Take two vectors in C³: u = (1 + i, i, 3 − i) and v = (1 + i, 2, 4i). The Hermitian product u · v means: take the product of the first entries, making sure to conjugate the entry from the first vector; then the product of the middle entries, again conjugating the first vector's entry; then the product of the last entries, the conjugate of 3 − i times 4i. Conjugating just means switching the sign of the imaginary part. So u · v = (1 − i)(1 + i) + (−i)(2) + (3 + i)(4i). Multiply these out by the usual rules of complex arithmetic: FOIL out the first product and it simplifies to 2, the middle term is the imaginary −2i, and FOILing out the last gives −4 + 12i. Combining all the like terms, and I'll let you verify the details since I'm going through this a little quickly, you end up with −2 + 10i. Verify it on your own; pause the video if you have to.

If you switch the order of the Hermitian product and take v · u this time, the final result does change, because you're now conjugating the other vector: whichever vector is in the hot seat is the one that gets conjugated, and that affects the calculation. Again, pause the video if you want to double-check it on your own, but you end up with −2 − 10i. That's the difference: the two results are conjugates of each other. In general, v · u is just the conjugate of u · v.

And this fixes the problem we raised earlier about lengths. The length of u should equal √(u · u), where again you conjugate the first factor, so you get (1 − i)(1 + i) + (−i)(i) + (3 + i)(3 − i). Here's a nice little trick: when you multiply a complex number by its conjugate, you get the sum of the squares of the real and imaginary parts. So the first term equals 1² + 1² = 2, the second equals 1² = 1, and the third equals 3² + 1² = 10. You end up with √(2 + 1 + 10) = √13.

That's the nice thing about conjugates, and it's exactly why we need them for complex vectors. If you take a − bi and multiply it by a + bi, a complex number times its conjugate, FOILing it out gives a² + abi − abi − b²i². The abi and −abi obviously cancel each other out, and the thing to remember about i² is that it's actually −1, so the last term picks up a double negative. The whole thing simplifies to a² + b², which is always a real number. Multiplying a complex number by its conjugate always gives a real number, and that recovers the notion of length we want.

Okay, some properties we should mention about the Hermitian product, because you might not be familiar with these. As we mentioned already, if you switch the order of the factors, you take the conjugate. The Hermitian product is distributive, in both the first and second factors: (u + v) · w, where that's vector addition, distributes to u · w + v · w, and likewise u · (v + w) = u · v + u · w. You have to be careful with scalars, though. A scalar in the second factor pulls straight out, no big deal: u · (zv) = z(u · v). But because of the conjugate, a complex scalar in the first factor comes out as its conjugate: (zu) · v = z̄(u · v). You have to pay attention to that. The most important property is the positive definite condition: u · u is always a non-negative real number, and it equals zero if and only if u is the zero vector. We need this positive definite condition, because it's what allows us to develop a geometry from this inner product, a so-called inner product space. We need that, and we can't have it without conjugates.
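Here's a quick check of the worked example and of the scalar rule, again a sketch assuming NumPy; u and v are the vectors from above.

```python
import numpy as np

u = np.array([1 + 1j, 1j, 3 - 1j])
v = np.array([1 + 1j, 2, 4j])

print(np.vdot(u, v))                  # (-2+10j), matching the hand computation
print(np.vdot(v, u))                  # (-2-10j): swapping the order conjugates the answer
print(np.sqrt(np.vdot(u, u).real))    # length of u: sqrt(13), about 3.6056

# The scalar rule: a complex scalar comes out of the first slot conjugated.
z = 2 - 3j
print(np.allclose(np.vdot(z * u, v), np.conj(z) * np.vdot(u, v)))   # True
```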
All right, let's get to the heart of the matter. Now that we've reviewed the idea of conjugate transposes and the Hermitian product, we're ready to define the so-called unitary group. It's usually written U(n) for short, or U(n, C) to emphasize the complex numbers. In the classical sense, the unitary group is only defined over the complex numbers, which is why we often don't even mention the field. In a more modern treatment, we can replace the complex numbers with some other number system, a field or a ring or something like that, but if we do that, we have to be careful about the inner product: we need some kind of sesquilinear form to play the role of the Hermitian product. Given an appropriate inner product space, there's an analog of the unitary group, but for our purposes we'll stick to the complex unitary group.

The unitary group is defined analogously to the orthogonal group for real matrices. We say a matrix Q is unitary if its conjugate transpose equals its inverse. One consequence of that statement, as we're going to see, is that Q Q* = Q* Q = I, since the conjugate transpose is the inverse. You can also argue that a matrix is unitary if and only if its column vectors form an orthonormal set in the sense of complex vectors, where orthogonal in this case means u*v = 0: orthogonality with respect to the Hermitian product. You can prove very similar things all the way down the line. Basically, everything that's true of orthogonal matrices over the real numbers is true of unitary matrices over the complex numbers, with very few exceptions. The theory is nearly identical.
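The two characterizations of unitarity are easy to test side by side. A minimal sketch, assuming NumPy; the diagonal matrix of phases is my own throwaway example, not one from the lecture.

```python
import numpy as np

def is_unitary(Q, tol=1e-12):
    """A matrix is unitary when its conjugate transpose is its inverse: Q* Q = I."""
    return np.allclose(Q.conj().T @ Q, np.eye(Q.shape[0]), atol=tol)

# Throwaway example: a diagonal matrix of unit-modulus phases is unitary.
theta = 0.7
Q = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])
print(is_unitary(Q))                    # True

# Equivalent criterion: the columns are orthonormal in the Hermitian sense.
print(np.vdot(Q[:, 0], Q[:, 1]))        # 0: the columns are orthogonal
print(np.vdot(Q[:, 0], Q[:, 0]).real)   # 1.0: each column has unit length
```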
Let me give you an example of a unitary matrix, a 2-by-2 one:

U = (1/2) [[1 + i, 1 + i], [1 − i, −1 + i]].

I claim U U* = I, so that U* really is the inverse of U, if we multiply them together. So here's U copied down again, and here's its conjugate transpose: the first row becomes the first column, with conjugates, so the 1 + i's become 1 − i's; the second row becomes the second column, with conjugates, so 1 − i becomes 1 + i and −1 + i becomes −1 − i. To make the calculation a little easier, notice every entry is divisible by 1/2. Factor the 1/2 out of each matrix and you're left with 1/4 times the product

[[1 + i, 1 + i], [1 − i, −1 + i]] · [[1 − i, 1 + i], [1 − i, −1 − i]].

The real kicker to pay attention to in this setting: if you take 1 + i and multiply it by 1 − i, its complex conjugate, you get 2. So for the first row, first column you get (1 + i)(1 − i) + (1 + i)(1 − i): a 2 for the first product, a 2 for the second, so 2 + 2 = 4, divided by 4, gives 1. For the first row, second column you get (1 + i)(1 + i), which is (1 + i)², plus (1 + i)(−1 − i), which is −(1 + i)², and those cancel to give 0. The second row, first column gives (1 − i)² − (1 − i)², which is 0 again. And the second row, second column gives (1 − i)(1 + i), which is 2, and then a double negative, so plus (1 − i)(1 + i) again, another 2; 2 + 2 = 4, divided by 4, is 1. So going through the details, the product is in fact the identity matrix, and this is in fact a unitary matrix.

Analogously, unitary matrices are exactly the complex matrices that preserve the Hermitian product. You can prove, like we saw with orthogonal matrices, that (Ux) · (Uy) = x · y for all complex vectors x and y if and only if U is a unitary matrix. As such, unitary matrices preserve distances, angles, lengths, and all the geometric properties you want. Multiplying by a unitary matrix gives a rigid motion of the complex vector space Cⁿ, and in fact U(n) is the group of rigid motions of Cⁿ, as long as we restrict to linear maps, because we don't want things like translations, which are affine maps but not linear ones. So the unitary group acts as a symmetry group for Cⁿ. This notion of symmetry is something we're going to talk more about in the next lecture, because that's why people were interested in the classical matrix groups in the first place: the general linear group, the special linear group, the orthogonal group, the special orthogonal group, the unitary group.

We can also talk about the special unitary group, called SU(n). I kind of want to tack one more U onto it and call it SUU, because Southern Utah University is where we're at right now. It's the intersection of the special linear group over the complex numbers with the unitary group, SU(n) = SL(n, C) ∩ U(n), so you get the special unitary group. It's a normal subgroup of the unitary group, and it gives the rotations of the complex vector space. And of course there's the other classical matrix group, the symplectic group, which I'm omitting from this lecture; feel free to look it up online, Wikipedia has a pretty good page about it if you want to learn more. These are all examples of symmetry groups of various Euclidean spaces, but they're different ones. Why do we care about the unitary group and the special unitary group? Why separate the two?
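And here's a check of the 2-by-2 example and of the preservation claim, again a NumPy sketch; the random test vectors are my own addition. The determinant check at the end shows this particular U lies in U(2) but not in SU(2).

```python
import numpy as np

U = 0.5 * np.array([[1 + 1j, 1 + 1j],
                    [1 - 1j, -1 + 1j]])

# Unitary: the conjugate transpose is the inverse, U* U = I.
print(np.allclose(U.conj().T @ U, np.eye(2)))                # True

# Unitary matrices preserve the Hermitian product (hence lengths and angles).
rng = np.random.default_rng(1)
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
print(np.allclose(np.vdot(U @ x, U @ y), np.vdot(x, y)))     # True

# det(U) = -1, so U is unitary but not special unitary (SU(2) needs det 1).
print(np.linalg.det(U))
```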
Why do we care? I mean, unitary is to complex as orthogonal is to real; that makes sense. But why both the general linear group and the orthogonal group? Why do we need both? Why "special"? In the next video, we'll talk some more about these ideas of symmetry and see that these classical matrix groups are important groups of symmetries of Euclidean geometry, which has consequences for their applications to the physical sciences. So thanks for watching. If you liked what you saw or learned something, feel free to hit the like button, and as always, please subscribe if you'd like to see more videos like this in the future. Post your questions in the comments if you have any; I'd be happy to answer them. See you next time, everyone. Bye.