Welcome back to our lecture series, Linear Algebra Done Openly. As usual, I'll be your professor today, Dr. Andrew Misseldine. We are starting Chapter 4 of our series, entitled "Orthogonality." Now, that might sound like a made-up word, and basically every word in linear algebra sounds like that sometimes, but no, this is a topic we're going to talk about a lot in Chapter 4. This chapter is going to be very geometric in its nature: we want to develop some of the geometric framework of lengths and angles through the lens of linear algebra, which leads toward ideas in higher geometry, trigonometry, and things like that. We've dealt with geometric issues in this course already: the notions of affine geometry were introduced as early as Chapter 2, and there were some geometric threads woven into Chapter 1, our introductory chapter. But the notions of affine geometry are very broad, and we want to focus more on what's called Euclidean geometry. As such, this geometric development will not be entirely possible for all of the finite fields or other fields we've talked about; there are limitations. While some geometric analogs can be made for those fields, they are somewhat restrictive and limited, and so for simplicity's sake, in Chapter 4 we are going to resolve to talk only about the fields R and C, the real numbers and the complex numbers. These are the only two fields we want to worry about for this chapter, because that's the setting in which the full-blown orthogonality conditions can be made. Again, I don't want to say that things like dot products and norms have no place over other fields. In the finite field Z2, for example, you have the Hamming metric and the Hamming norm, which are a very good way of measuring distance on binary vectors; but they won't have all of the same meaning, such as the positive-definite condition. There's a lot of technical material there that we don't want to get into in this series; those analogs go beyond the scope of our course.

So we are going to restrict our attention to just the real numbers and the complex numbers as we talk about the subject of inner products. In this series there are really only two inner products we care to introduce: the so-called dot product and the Hermitian product, and "inner product" is the term we'll use to describe either of those. There are lots of different inner products out there, but at this introductory level, the two we care about are the dot product and the Hermitian product. The dot product is the canonical inner product for the real numbers, and we define it right now. The dot product is a function from Rn × Rn to R: we take a vector in Rn, combine it with another vector in Rn, and produce a scalar, that is, a number in R. The dot product is sometimes called the scalar product because we multiply together vectors, yes, but we produce a scalar value. Be aware that prior to Chapter 4, we hadn't really developed a notion of multiplication of vectors.
Yes, we did have a matrix-times-vector product, but in that situation we were really just doing a special case of matrix multiplication, not a notion of vector multiplication. Chapter 4, and also Chapter 5, will introduce several notions of vector products, the first of which is this inner product, sometimes called the scalar product: it takes two vectors and produces a scalar. The formula for the dot product is the following. We write u · v, and we get the name "dot product" because the symbol we use is literally a dot; there are lots of notations for multiplication, but for the dot product we always use a dot, and we will see other types of products of vectors for which we'll use different symbols. The dot product, by definition, is u^T v. If we think of vectors as one-column matrices, then the dot product is a row vector times a column vector, which we can see right here. Matrix multiplication tells us exactly what to do with this: as long as the number of columns of the first factor matches the number of rows of the second, which in this situation means the two vectors have the same number of entries, namely n, we take u1 times v1, plus u2 times v2, plus u3 times v3, all the way down to un times vn. That's what we mean by the dot product. It's a calculation we're probably quite used to, because we see it all the time in matrix multiplication; I'll talk more about that in just a second. So if we take the dot product of u and v, what this means is we multiply together the corresponding components of the vectors and add them up. We multiply the first entries, so we get 1 times −5; then we add the product of the second coordinates, 2 times 0; then we add the product of the third coordinates, 3 times 3.
When we do these, we get −5 plus 0 plus 9, and so the dot product turns out to be 4. What is the relevance of the dot product? What does it mean geometrically? We'll get into that; don't worry about it at the moment. Let's just focus on computing these things. So, I mentioned the connection to matrix multiplication. Honestly, as we've been multiplying matrices together, we really have been doing dot products all along; we just didn't mention it. For example, let A be an m × n matrix, and consider some vector x in Rn. Now look at the row vectors of A, not the column vectors: we have the first row r1, the second row r2, all the way through the m-th row rm. When you multiply a matrix by a vector, Ax, we can view it as dot products: the first entry is the dot product r1 · x, the second entry is the dot product r2 · x, and this goes all the way down to rm · x. So, as an example, take A to be the 2 × 3 matrix with rows (1, 2, 3) and (4, 5, 6), and take the vector x = (0, 3, −5). We often talk about this "finger multiplication": you take the first row times the column, and this is really just the dot product of those two vectors, so you get 1 times 0, plus 2 times 3, plus 3 times −5, and you simplify. Then you take the second row times the column, and you get 4 times 0, plus 5 times 3, plus 6 times −5. So every time we do matrix-vector multiplication, we are essentially doing dot products. From that perspective, the dot product works in every vector space over any field: we can compute dot products as long as we have some type of coordinate system. That's perfectly fine.
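Both computations above can be checked numerically. Here is a minimal sketch using NumPy (not part of the lecture), with the vectors and matrix from the examples:

```python
import numpy as np

# The dot-product example: multiply corresponding entries and add.
u = np.array([1, 2, 3])
v = np.array([-5, 0, 3])
print(np.dot(u, v))  # 1*(-5) + 2*0 + 3*3 = 4

# The matrix-vector example: each entry of A @ x is the dot
# product of a row of A with x.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
x = np.array([0, 3, -5])
Ax = A @ x
row_dots = np.array([np.dot(row, x) for row in A])
print(Ax)                      # first entry -9, second entry -15
print((Ax == row_dots).all())  # the two views agree
```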
Now, while dot products work in every single vector space, the notion of an inner product is restricted to just the reals and the complex numbers, and the reason I say that is that the dot product will only be an inner product for real vector spaces. We can compute dot products on other vector spaces, but they don't form an inner product; I'll explain the distinction in just a moment. Before we do that, I want to introduce the Hermitian product, which is the inner product for complex spaces. Here's the problem: we could compute a dot product of vectors in C2, C3, or in general Cn. Let me give you a quick example. Say the vector u is (1, i), and v is (1, i) again; why not make it the same vector? If we take u^T v, which is the dot product we defined previously, this looks like 1 times 1, which is 1, plus i times i, which is i squared, or −1, and 1 minus 1 gives you 0. This should concern us, because essentially what we took is the product of a vector with itself: this really is u^T u right here, and the dot product of a nonzero vector with itself came out to be zero. The thing is, we don't want that to happen for inner products: we don't want the inner product of a vector with itself to be zero unless the vector is zero. Again, that will make more sense by the end of this video. So while the real numbers have no problem here, for complex numbers we see that multiplying the transpose of a vector by the vector itself can actually give zero. Part of the problem has to do with the transpose. You'll remember that we vowed never, never, never to use the transpose of a complex matrix, and a column vector, thought of as a one-column matrix, is exactly such a matrix. This was like the forbidden sin; we've broken the unbreakable vow.
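The failure above can be reproduced numerically. A minimal sketch with NumPy (not from the lecture): the plain transpose pairing of u = (1, i) with itself vanishes, while the conjugate-transpose pairing does not.

```python
import numpy as np

u = np.array([1, 1j])  # the vector (1, i) from the example

# Naive "dot product" u^T u -- no conjugation.  For complex
# vectors this can vanish even though u is not the zero vector:
naive = np.dot(u, u)       # 1*1 + i*i = 1 - 1 = 0

# Conjugate-transpose pairing u* u (np.vdot conjugates its
# first argument); this is the Hermitian product:
hermitian = np.vdot(u, u)  # conj(1)*1 + conj(i)*i = 1 + 1 = 2

print(naive)      # 0
print(hermitian)  # 2
```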
Don't do that. This is really the problem: transposes shouldn't be used with complex matrices, and it really has to do with the notion of inner products. For the finite fields and the other fields besides the real and complex numbers, again, there are issues; there are some ways of fixing them, but for the finite fields we've talked about, there's really no way of resolving this problem that the dot product of a vector with itself can be zero, and therefore the dot product is not a good candidate for an inner product there. So let's talk about our candidate for an inner product on complex spaces. Since we should never, ever use transposes, to define an alternative to the dot product we take the so-called Hermitian product. For the Hermitian product we're going to use the exact same notation, u · v — yikes; I'll say more about that in a second. But instead of defining it to be u^T v, we define it to be u* v, where the star is the conjugate transpose: we transpose and we take conjugates. What that looks like is that we turn the first vector into a row vector, but we conjugate each of the scalars in that row vector, and then we multiply the vectors together using the usual finger multiplication. So the Hermitian product looks like the conjugate of u1 times v1, plus the conjugate of u2 times v2, all the way down to the conjugate of un times vn. Let me show you what that looks like with the vectors u = (1 + i, i, 3 − i) and v = (1 + i, 2, 4i). For u · v, we take the conjugate of the first entry of u.
So it's going to be the conjugate of 1 + i, times 1 + i; then we add i-bar times 2; and then we add the conjugate of 3 − i, times 4i. Let's go through the details. The complex conjugate changes the sign of the imaginary part, so we get (1 − i)(1 + i); we get −i times 2, which is −2i; and we get (3 + i) times 4i. If you FOIL out (1 − i)(1 + i), you get 1 − i + i + 1, because −i times i gives +1, so that's 2. Then we have the −2i, and distributing the 4i gives 12i minus 4. Collecting real parts, we have 2 − 4, which is −2; gathering the imaginary parts, we get −2i + 12i, which adds up to +10i. So the Hermitian product is −2 + 10i. You have to remember to take the conjugates here.

Alternatively — I do want to mention this — if you do it the other way around and take v · u, now you take the conjugate of v, because you conjugate the first factor. So you get the conjugate of 1 + i, times 1 + i; the first entries of u and v are the same number, so when you switch things around you don't notice a difference there. For the next one, you take 2-bar times i, and for the last one you take the conjugate of 4i, times 3 − i. What happens when we take the conjugate of a real number? You switch the sign of the imaginary part, but there is no imaginary part, so it just stays the same: the conjugate of a real number is that same real number. So you get the same (1 − i)(1 + i) as before, which is 2; then you get 2 times i, a positive 2i this time; and then you distribute −4i across 3 − i. Let's be careful here: you get −4i times 3, which is −12i, and then a double negative, −4i times −i, which is 4i squared, or −4. So you still get 2 − 4, which is −2, but this time you get +2i − 12i, which gives you −10i. So if you switch the order in the Hermitian product, you actually get the conjugate of the number: the two results are not equal to each other, but they are conjugates, −2 + 10i versus −2 − 10i. So when working with complex vectors, you need to make sure you take the Hermitian product, with the conjugates. But, as I said, the conjugate of a real number is just the same real number, so the dot product is really a special case of the Hermitian product: a real vector is both a real vector and a complex vector, and taking its conjugate makes no difference. So the dot product we talked about before for real vectors is a special case of the Hermitian product in front of us. So why the dot product? Why the Hermitian product? What's the big deal about inner products? Why do we need the conjugate? Why can't the product of a vector with itself just be zero sometimes? What's the issue?
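The worked example above can be verified with NumPy's np.vdot, which conjugates its first argument, matching the convention here. A sketch (the vectors are the ones from the example):

```python
import numpy as np

u = np.array([1 + 1j, 1j, 3 - 1j])
v = np.array([1 + 1j, 2, 4j])

uv = np.vdot(u, v)  # sum of conj(u_k) * v_k
vu = np.vdot(v, u)  # sum of conj(v_k) * u_k

print(uv)  # -2 + 10i, as computed in the lecture
print(vu)  # -2 - 10i

# Swapping the arguments conjugates the result:
print(vu == np.conj(uv))  # True
```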
Well, let's talk about some properties of the so-called inner product. The first thing I want to mention is that the following four properties hold for both the dot product and the Hermitian product. So when I talk about inner products, there are two ways to define them. One: an inner product is any bilinear form that satisfies these conditions right here. Don't worry about that necessarily, but basically this theorem could be taken as the definition of an inner product. The way we're going to interpret it is as follows: the inner product is the dot product when you're on the vector space Rn, and it's the Hermitian product when you're on Cn. That's what we define to be the inner product, and the reason we're allowed to do that is that these two operations, dot product and Hermitian product, satisfy these conditions. The first thing we want from an inner product is that it distributes: if you have a sum of vectors, then u · (v + w) = u · v + u · w; the dot product, the Hermitian product, or in general the inner product distributes over vector addition. That's what we want. We also want some type of compatibility between the scalar multiplication we already had and this new inner multiplication we've just defined. For example, if you have a scalar multiple as the second factor inside an inner product, u · (cv), that is the same thing as c times u · v. Let's be careful about what's going on here: cv is a scalar times a vector, and then we take the inner product of two vectors; on the other side, u · v is a scalar and we multiply it by the scalar c, so that is just a product in the field.
We want those two things to be the same thing. Okay, you'll notice I didn't say anything about the first factor, and that's because we actually have to distinguish the two cases. For the real numbers, if you have (cu) · v, that is in fact equal to c(u · v); there's no distinction. But with complex numbers you have to be a little more careful: (cu) · v is actually equal to c-bar times (u · v); you take the conjugate when pulling the scalar out of the first factor, and c-bar here is just the usual conjugate of the complex number c. So factoring a scalar out of the second factor of an inner product gives you back the original scalar, while factoring a scalar out of the first factor gives you its conjugate. That's really no difference for the real numbers, because the conjugate of a real number is the same number again; no big deal. So this is the principle we want to take away: pulling a scalar out of the second factor leaves it unchanged; pulling it out of the first factor takes conjugation, just like in the Hermitian product.

All right, the other thing we want is that the inner product commutes, but this is also somewhat of a fib. It is actually true for real vector spaces: u · v equals v · u. For complex vector spaces we get something a little different: when you twist things around, v · u is actually equal to the conjugate of u · v, because the inner product is a scalar, and when you swap the factors you have to take conjugates. Of course, for a real number, since it equals its own conjugate, you get v · u = u · v; everything we're saying about the real numbers is just a special case of the complex numbers. Now, properties one, two, and three are true for the dot product over any vector space: the complex numbers, the finite fields, any of them. So why did we need the conjugates for the complex numbers, and why can't we just use the dot product everywhere, given that the dot-product formula makes sense for any coordinate vectors? Here is how to think about these first three properties: the first is some type of distributive law for the inner product; the second is a compatibility condition, some type of associative property, often called homogeneity; and the third is some type of commutative property, or you might call it the symmetric property. For complex numbers, since it's not purely commutative, you might say it's "skew-commutative," because it's almost commutative but this conjugation comes into play. All right, so again, those are properties that hold for the dot product in general. Why do we need the conjugates for the complex numbers, and why can't we do this for finite fields as well?
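Before answering, the first three properties are easy to spot-check numerically. A minimal sketch with NumPy (not from the lecture; np.vdot conjugates its first argument, matching the convention above, and the vectors and scalar are arbitrary test values):

```python
import numpy as np

u = np.array([1 + 1j, 2 - 1j, 0.5j])
v = np.array([2j, 1 + 1j, 3.0])
w = np.array([1.0, -1j, 2 + 2j])
c = 2 - 3j

# 1. Distributes over addition: u . (v + w) = u . v + u . w
print(np.isclose(np.vdot(u, v + w), np.vdot(u, v) + np.vdot(u, w)))

# 2. Scalars pull out of the second factor unchanged,
#    but out of the first factor with a conjugate:
print(np.isclose(np.vdot(u, c * v), c * np.vdot(u, v)))
print(np.isclose(np.vdot(c * u, v), np.conj(c) * np.vdot(u, v)))

# 3. Skew (conjugate) symmetry: v . u = conj(u . v)
print(np.isclose(np.vdot(v, u), np.conj(np.vdot(u, v))))
```

Each check prints True; over the reals the conjugates in properties 2 and 3 disappear, recovering the familiar rules for the dot product.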
Why so exclusive, so sacredly segregated to these two fields? The answer is this last condition, number four, commonly referred to as the positive-definite condition. It's really two properties for the price of one. The first is, naturally, the positive condition: we want u · u ≥ 0. This is a scalar, and we want that scalar to be nonnegative; this is one of the problems we'll have with the finite fields, and for the complex numbers we need to guarantee it too. That's the positive condition. The definite condition is the second part: u · u = 0 if and only if u was zero in the first place. We only want the inner product of a vector with itself to be zero when the vector is the zero vector. We already saw a counterexample with complex vectors: the dot product can produce zero even when the vector is not zero. The Hermitian product, aha, won't do that, and that's why for complex spaces we assign the Hermitian product as the inner product; the dot product for real spaces is perfectly fine. For the finite fields we also have this problem: we cannot construct a function that mimics properties one, two, and three but also satisfies the positive-definite condition. So the positive-definite condition is the motivation for restricting our attention to only the real numbers and the complex numbers, and in the complex case it is why we have the conjugate, the conjugate transpose, in the definition. Before closing this video, I do want to say one thing about why it is called the "inner" product. What does that even mean?
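The positive-definite condition can also be checked numerically, and the finite-field failure can be seen by hand. A sketch (the Z2 example below is my own illustration, not from the lecture):

```python
import numpy as np

# The Hermitian product of a complex vector with itself is a
# nonnegative real number, zero only for the zero vector:
u = np.array([1 + 1j, 1j, 3 - 1j])
uu = np.vdot(u, u)      # |1+i|^2 + |i|^2 + |3-i|^2 = 2 + 1 + 10 = 13
print(uu.imag == 0)     # True: no imaginary part survives
print(uu.real > 0)      # True: strictly positive for nonzero u

# Over the finite field Z2, the dot-product formula can vanish
# on a nonzero vector, so it cannot be positive definite:
x = np.array([1, 1])            # nonzero vector in (Z2)^2
print(np.dot(x, x) % 2)         # 1*1 + 1*1 = 2 = 0 (mod 2)
```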
There are a lot of reasons why people call it the inner product. It stands in contrast to the outer product, which we'll define in a future lecture: the inner product goes into the field, producing a scalar, while the outer product produces a matrix, a linear transformation. There are various analogies you could try to draw; there's some type of pairing going on here. But the way I like to think of it is the following. When you define the inner product, you get u^T v; when you define the outer product — again, something we'll talk about in a future lecture — it is defined to be u v^T. So in my opinion, it's the location of the transpose: is the transpose inside or outside the pair? Is that the reason people originally started saying "inner" and "outer" product? Probably not, but it's a nice mnemonic device: if the T is on the inside, it's an inner product, and if the T is on the outside, it's an outer product. We'll see the outer product later in this chapter.
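The mnemonic — transpose on the inside versus the outside — matches the shapes of the two products. A small sketch with NumPy (np.outer computes u v^T for real vectors):

```python
import numpy as np

u = np.array([1, 2, 3])
v = np.array([4, 5, 6])

inner = np.dot(u, v)    # u^T v: transpose "inside", result is a scalar
outer = np.outer(u, v)  # u v^T: transpose "outside", result is a 3x3 matrix

print(inner)        # 4 + 10 + 18 = 32
print(outer.shape)  # (3, 3)
```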