Welcome back to our lecture series, Math 4220, Abstract Algebra I, for students at Southern Utah University. As usual, I'll be your professor today, Dr. Andrew Misseldine. In lecture 24 of our series, we're going to continue our discussion of linear codes, that is, algebraic coding theory, and in this lecture we really are going to start using a substantial amount of linear algebra to help us out with our coding process. Since the linear algebra is going to be very heavy, if a viewer of this video is a little rusty on their linear algebra, I might actually recommend you take a look at some of the links posted throughout these videos, or you could check out my other lecture series entitled Linear Algebra Done Openly, which is an open-source textbook developed by myself and my students that covers and reviews a lot of these linear algebra topics, if necessary. So why are we using linear algebra here? Well, the basic idea is that when we have a binary sequence, a sequence of ones and zeros, let's call that sequence x for a moment, we can identify that x, that sequence of numbers, as a vector. We might write it as a column vector: we have the first entry, the second entry, the third entry, all the way down to the nth entry. And so we have these column vectors, which can then be naturally associated with vectors inside the vector space Z2^n. In standard linear algebra, you often see R^n as your vector space, so you have these column vectors of real numbers, and maybe at some point, even if only momentarily, you might talk about column vectors in C^n, say if you want to allow complex eigenvalues and things like that.
Well, honestly, when I teach linear algebra, I actually allow my students to, and I encourage them to, use fields other than the real numbers, and even the complex numbers. We teach them very early about finite arithmetic, and therefore we can start doing linear algebra over these other fields, working with vector spaces over different fields. And really, the process doesn't change that much. One of the reasons I had the inspiration to start doing that was actually this idea of linear codes: we're learning about this in an abstract algebra course, but honestly, there's nothing that would stop a usual linear algebra student from understanding the basics of these linear codes, other than the fact that they might not be familiar with finite fields like Z2, and Z2 honestly is not that complicated in its arithmetic. But let me give you some motivation for why we want to use this linear algebra. We can represent our sequences as vectors, which is great, but since we can recognize them as vectors, we can then start doing linear algebra on these things. I mentioned this before in our series, but representation theory is essentially the branch of abstract algebra where you use linear algebra and things related to it to help you better understand concepts like groups, rings, and other objects in abstract algebra. So representation theory is basically group theory equipped with linear algebra. And so throughout this lecture and the next one, we're going to see how linear algebra can be used to help us understand this coding theory problem. So let me present some examples here. It turns out that, for example, the dot product that you learn in standard linear algebra is sometimes called the inner product; it depends on the textbook or course you used. I'm going to prefer the term dot product in this conversation, because really we will be taking the dot product of vectors.
Although the dot product is an inner product on, say, R^n, I would not say that the dot product is an inner product on the vector space Z2^n or any of these other vector spaces we're going to talk about here, mostly because an inner product needs to satisfy the positive-definite condition, which is what lets you use the inner product to develop lengths and distances and angles. We won't be able to do that for these finite vector spaces, but the operation of the dot product will still be possible. So let me motivate why we're using the dot product here. With the ASCII code, which we introduced previously in this series, remember, we can think of our messages as vectors in Z2^7. These are column vectors with seven binary entries: each entry is either a zero or a one, just a bit. And then there's an eighth bit, which we actually place in the first position, I should mention. The ASCII encoding process is that we take our original message and insert an extra bit, a so-called check bit, into the first position, calculated by the following formula: you take x1 plus x2 plus x3, up to x7, and then you compute this mod 2. So you take the sum of all of the information bits inside of our vector, and then you stick that in the front; we'll call it x0. That way we could check: if the message got sent correctly, the receiver could add these together and see whether the sum matched up with the check bit. If there was any disagreement, that means there was an error in transmission, so we were able to detect the error. Not necessarily correct it, but we could detect the error here. Now, this sum right here is not something just out of the blue. We can actually realize this sum as a dot product of vectors, if we take the all-ones vector, that is, the vector in Z2^n where every entry is one.
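As a quick illustration, that check-bit formula can be sketched in a few lines of Python. This is my own sketch, not code from the lecture, and the function names are mine; it just computes the dot product of the message with the all-ones weight vector and reduces mod 2.

```python
def dot_mod(weights, x, modulus):
    """Dot product of two equal-length vectors, reduced mod `modulus`."""
    return sum(w * xi for w, xi in zip(weights, x)) % modulus

def ascii_check_bit(x):
    """Check bit for a 7-bit message: dot with (1,1,...,1) and reduce mod 2."""
    ones = [1] * len(x)          # the all-ones weight vector
    return dot_mod(ones, x, 2)

message = [1, 0, 1, 1, 0, 0, 1]  # seven information bits
x0 = ascii_check_bit(message)    # 1+0+1+1+0+0+1 = 4, and 4 mod 2 = 0
codeword = [x0] + message        # the check bit goes in the first position
```

The receiver would run the same computation on the received information bits and compare against the transmitted check bit; any single bit flip makes them disagree.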
And so if you take the dot product of the all-ones vector with our original message, well, the dot product is just matrix multiplication if we think of vectors as one-column matrices, right? You take the transpose of the first vector, making it into a row vector, and then you multiply that out. You're going to get 1 times x1, plus 1 times x2, all the way down to 1 times x7. And that's where this formula comes from. So our check bit is essentially just the dot product between the all-ones vector, which we could think of as a vector of weights, and the message vector. That's how we compute this check bit. Now, the reason I bring this up is I want to introduce some other examples. In Judson's textbook, he sometimes has these additional exercise sections, which are kind of like supplementary sections co-curricular to what we've been learning about, and the exercises themselves are often very informative, teaching us new concepts. One that's very pertinent to chapter 8 would actually be section 3.6. This is a supplemental section to chapter 3, about group theory, entitled "Detecting Errors." It gives us some elementary ideas about error-detecting codes and things like that. There are two particular examples introduced in that section I want to mention very briefly right now. The first is the idea of UPC symbols, where UPC stands for Universal Product Code. Basically, when you go to the grocery store and you see a barcode on an object, that's a UPC. A UPC is, in fact, a 12-digit code. So there are 12 numbers in the code, and they're digits, so it's going to be like d1, d2, all the way down to d12, although there are typically hyphens between some of these. Let me give you an example. Let's say that the first digit is zero, then next we're going to have a five, and then we're going to have four zeros.
One, two, three, four, dash. Then we're going to have three, zero, zero, four, two, dash, six. And so this gives us an example of a UPC symbol of some kind: 0-50000-30042-6. Now, I'm not really going to worry about the encoding process here, but I want to show you how a dot product can be used to check whether this is a valid UPC or not. To check this, there's a weight vector associated with it. If we call this our message vector x, the message one would receive, then the weight vector is simple enough: it's 3, 1, 3, 1, alternating 3, 1, 3, 1, 3, 1, all the way down to the end, finishing 3, 1. And so if you have a potential code word in front of you and you want to check whether it's an actual UPC or not, you take this weight vector, dot it with the code word x, and see what happens. We want to compute this mod 10. In which case the product would look like: 3 times 0, plus 1 times 5, plus, and there are a lot of zeros here, 3 times 0, plus 1 times 0, plus 3 times 0, plus 1 times 0, plus 3 times 3, plus 1 times 0, plus 3 times 0, plus, let's see where we're at, 1 times 4, plus 3 times 2, and then finally 1 times 6. We want to reduce this mod 10. Like I said, there are a lot of zeros, so let's just get rid of all of those zeros in play here. What's left: 1 times 5 is 5, 3 times 3 is 9, 1 times 4 is 4, 3 times 2 is 6, and then 1 times 6 is 6. So we want to add these together, and we can do this by hand: 4 and 5 gives us a 9, 6 and 6 gives me a 12.
And so we end up with 9 plus 9, which is 18, and 18 plus 12 is equal to 30. And remember, we're trying to reduce this mod 10, so we end up with a zero. If this dot product turns out to be zero mod 10, then this is a valid code word; there's no detectable error in it whatsoever. On the other hand, what if things didn't turn out to be zero? For example, what if by mistake the message transmitted a three instead of the two? Well, that would change that term to 3 times 3, in which case you don't have a six anymore, you actually have a nine, which adds three to the total. So you would have gotten 33, which would then reduce down to 3, which is not zero. That would tell you you had an invalid code, so you detected an error. And so the detection of the error comes down to this dot product process. The UPC is just one example; the ASCII code words also have this dot product built into them. ISBN codes, that is, International Standard Book Numbers, are a 10-digit code where you have a sequence of numbers, your weights look like 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, and you calculate the dot product mod 11. It works very similarly. We can use dot products to check codes with check bits and things very similar to that. So this is just the tip of the iceberg of how linear algebra can be useful as a tool to help us with these codes we're working on: the encoding process, the decoding process, all that stuff is going to come into play. Another example I want to point out here is that when you calculate the Hamming weight of a vector, essentially you're just taking the vector dotted with itself, and that computes the weight. That's a little bit of a lie, because we're not working mod 2 in this instance, right? When we compute the Hamming weight, we actually treat the ones and zeros as the real numbers zero and one.
And you add those up as real numbers. But with that slight modification to the arithmetic, we use dot products all the time in this conversation. We're going to see later in this lecture, and also in the next lecture, that in addition to the simple tool of dot products, we can use matrices, their null spaces, their column spaces, and other important tools from linear algebra to create very nice and simple error-detecting and error-correcting codes.
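To tie these examples together, here is a sketch in Python of the three weighted-dot-product checks we've discussed: the UPC check mod 10, the ISBN-10 check mod 11, and the Hamming weight as a dot product over the ordinary integers. The helper names are mine, not from the lecture, and this handles only numeric ISBN digits (a real ISBN-10 can also end in the symbol X, standing for ten).

```python
def dot_mod(weights, x, modulus):
    """Dot product of two equal-length vectors, reduced mod `modulus`."""
    return sum(w * xi for w, xi in zip(weights, x)) % modulus

def is_valid_upc(digits):
    # Weights alternate 3, 1, 3, 1, ...; a valid 12-digit UPC dots to 0 mod 10.
    weights = [3, 1] * 6
    return dot_mod(weights, digits, 10) == 0

def is_valid_isbn10(digits):
    # Weights are 10, 9, ..., 1; a valid ISBN-10 dots to 0 mod 11.
    weights = list(range(10, 0, -1))
    return dot_mod(weights, digits, 11) == 0

def hamming_weight(x):
    # The vector dotted with itself over the integers (not mod 2):
    # each 1 contributes 1*1, each 0 contributes 0, so this counts the ones.
    return sum(xi * xi for xi in x)

upc = [0, 5, 0, 0, 0, 0, 3, 0, 0, 4, 2, 6]  # the lecture's example
is_valid_upc(upc)        # True: the dot product is 30, and 30 mod 10 = 0
bad = upc[:]
bad[10] = 3              # the 2 mistransmitted as a 3
is_valid_upc(bad)        # False: the total becomes 33, and 33 mod 10 = 3
```

The ISBN example works the same way; for instance, the digits 0-306-40615-2 dot with the weights to 132, which is 0 mod 11, so that code word checks out.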