Welcome back to our lecture series, Math 4220, Abstract Algebra I, for students at Southern Utah University. As usual, I'll be your professor today, Dr. Andrew Mistledine. In this lecture, I want to continue what we started in the last lecture about algebraic coding theory and how we're going to use algebra, particularly linear algebra on the vector space Z_2^n, to help us develop error-detecting and error-correcting codes. In order to do that, we're going to introduce what's known as the Hamming distance, sometimes called the Hamming metric. This is something very common in linear algebra on vector spaces: we want a notion of distance, of norm, of length, so we equip the space with some type of distance function or norm function. For the real numbers, we employ the dot product, an inner product of some kind, to develop this. That's a little harder on a vector space over a finite field, but the Hamming distance will do exactly what we need to create a metric for our vector space here. So imagine we have two sequences, two vectors I should say, in our vector space Z_2^n. Let's call them x and y, and as these are vectors, I'll typically write them in boldface. The Hamming distance between x and y, which we'll denote d(x, y), is the number of bits in which x and y differ. As a quick example, let's say that x is the message 0011 and y is the word 1001. When we look at the Hamming distance between x and y, notice that they disagree on the first bit, they agree on the second bit, they disagree on the third bit, and they agree on the fourth bit. Since we're counting how they differ, the Hamming distance is two, because they disagreed on the first bit and on the third bit. So that's how we calculate this thing.
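The bit-by-bit comparison just described is easy to sketch in code. Here is a minimal Python version; the function name is my own choice, not something from the lecture, and the words are represented as strings of 0s and 1s for readability.

```python
def hamming_distance(x: str, y: str) -> int:
    """Number of bit positions in which the words x and y differ."""
    assert len(x) == len(y), "words must have the same length"
    return sum(a != b for a, b in zip(x, y))

# The lecture's example: x = 0011 and y = 1001 disagree on the
# first and third bits, so the distance is 2.
print(hamming_distance("0011", "1001"))  # 2
```

Running it on the example above prints 2, matching the count by hand.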
On the other hand, the weight, or Hamming norm, of a vector is what we'll denote w(x). Sometimes you'll see |x| or ||x|| used instead when it's clear what we're talking about, but we'll use the notation w(x) in this lecture series. The weight of a vector is the number of ones inside the vector. So if we go back to our original vector x, it had two ones, so its weight is two. Likewise, the weight of y was also two, because it had two ones in the sequence. We just count the number of ones. So that gives us the Hamming distance and also the Hamming norm. Now, given a code (remember, a code is a set of code words, and code words are elements of Z_2^n, these binary sequences), the minimum distance of the code is the smallest distance between two vectors in that code, that is, the smallest distance between distinct code words. Be aware that if you take the distance between a vector and itself, you always get zero, so we really are talking about distinct words here. The minimum distance is the closest that two code words can be in a code. Here are some more examples. Take the code words x = 10101, y = 11010, and z = 00011. We can see the weights of these very easily: the weight of x is three, y is likewise three, and z is two. What's the distance between them? Well, if you look at x and y, their second bit disagrees, their third bit disagrees, their fourth bit disagrees, and their fifth bit disagrees, so the distance is actually four. Relatively speaking, they're far apart in the code; worst case scenario, they would be five units apart. If we look at x and z, they differ on the first bit, they agree on the second, they disagree on the third.
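The weight and the minimum distance of a code can be sketched the same way. This is my own illustration, not the lecture's notation: `weight` counts the ones, and `minimum_distance` takes the smallest Hamming distance over all distinct pairs of code words.

```python
from itertools import combinations

def weight(x: str) -> int:
    """Hamming weight: the number of 1s in the word."""
    return x.count("1")

def hamming_distance(x: str, y: str) -> int:
    """Number of bit positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code) -> int:
    """Smallest Hamming distance between distinct code words."""
    return min(hamming_distance(u, v) for u, v in combinations(code, 2))

# The lecture's example code: x = 10101, y = 11010, z = 00011.
code = ["10101", "11010", "00011"]
print([weight(w) for w in code])  # [3, 3, 2]
print(minimum_distance(code))     # 3
```

The pairwise distances are d(x, y) = 4, d(x, z) = 3, and d(y, z) = 3, so the minimum distance of this code is 3.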
They disagree on the fourth, but they do agree on the fifth bit, so the distance is three. And lastly, comparing y and z: they disagree on the first and second bits, they agree on the third, they agree on the fourth, and they disagree on the fifth, so the distance between them is three bits. So that's how this Hamming distance works. Now, there are some important properties we should know about the Hamming distance. The first one, which I already mentioned a little bit, is that the Hamming distance satisfies the positive-definite condition: if I take the distance between any two code words, this distance will always be non-negative, positive or zero, and it is equal to zero only when the two words are the same. That makes sense; we argued already that the distance between a word and itself is zero. Conversely, the only way the distance can be zero is if the words have no differences at all, which means they are the same word. And by the way we've constructed it, the Hamming distance is always a natural number, zero or positive. So the positive-definite condition is satisfied here. We also have that this function, the Hamming distance, is symmetric: the distance from x to y is the same as the distance from y to x. It doesn't matter which one you compare first; the number of differences will be the same. And the one that's hardest to see is the triangle inequality: if you take the distance between x and y, this will be less than or equal to the distance from x to z plus the distance from z to y. Again, that one's a little harder to convince yourself of, but the reason it's called the triangle inequality is the following idea. Think of x and y as points in space. If we add some third point, call it z, the distance from x to y is going to be shorter than if we take a detour through z.
A detour is going to be longer than going the straight path, and that gives you the triangle picture, hence the triangle inequality. I'm not going to prove the triangle inequality for the Hamming metric; I'll leave it as an exercise to the viewer to try to figure that out. Now, one thing I do want to mention is that whenever you have a function from a set crossed with itself to the real numbers that satisfies these three conditions, that is what in topology you call a metric, and a set equipped with a metric function is called a metric space. So this shows us that the vector space Z_2^n equipped with the Hamming metric is in fact a metric space, and we can talk about distance between objects and do some basic topology on these things. This will be very useful in our conversation going forward. Now, it's not just a topological thing, this metric space; we also have some algebra going on here, and these things do work together. If you have two words x, y in Z_2^n, then it turns out that the weight of the sum, w(x + y), is identical to the distance d(x, y) between the two words. In particular, the weight of a single word is just its distance from the zero word. The argument for this is actually fairly simple. Note that w(x + y) just counts the number of ones in x + y. But how do you get a one in x + y? If x and y both had a zero in a specific bit, then the sum is zero there. And if they both had a one in the same bit, say the second position, then when you add those together you get 1 + 1 = 0 in Z_2. So whenever x + y has a zero in a certain bit, it's because x and y agreed on that bit. On the other hand, if x and y differed, it's because x had a one and y had a zero, or x had a zero and y had a one, and in either case the sum is one. So the ones in x + y correspond exactly to the differences between x and y.
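For small n, all three metric axioms, including the triangle inequality left as an exercise, can be checked exhaustively by brute force. This is only a sanity check over Z_2^4, not a proof, and the helper names here are my own.

```python
from itertools import product

def d(x, y):
    """Hamming distance between two tuples of bits."""
    return sum(a != b for a, b in zip(x, y))

n = 4
words = list(product((0, 1), repeat=n))  # all 16 vectors in Z_2^4

for x, y, z in product(words, repeat=3):
    assert d(x, y) >= 0
    assert (d(x, y) == 0) == (x == y)    # positive definite
    assert d(x, y) == d(y, x)            # symmetric
    assert d(x, y) <= d(x, z) + d(z, y)  # triangle inequality
print("all three metric axioms hold on Z_2^4")
```

The loop runs over all 16^3 triples, so if any axiom failed for some choice of x, y, z, an assertion would fire.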
And therefore the number of ones inside x + y, its weight, also counts the differences between x and y, thus calculating the distance. The last observation is also very easy: w(x) = w(x + 0), which is the distance d(x, 0). So this gives us the Hamming metric for our vector space and makes it into a metric space. And then, like I said, this weight function satisfies the conditions of what is called a norm on a vector space, so Z_2^n becomes a normed vector space, and a normed vector space is always a metric space, as we saw in this argument right here.
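The identity w(x + y) = d(x, y) can also be verified directly, since addition in Z_2^n is componentwise addition mod 2 (that is, XOR). A small sketch over Z_2^5, with my own helper names:

```python
from itertools import product

def add(x, y):
    """Componentwise addition in Z_2, i.e. XOR of the bits."""
    return tuple((a + b) % 2 for a, b in zip(x, y))

def weight(x):
    """Hamming weight: the number of 1s in the word."""
    return sum(x)

def d(x, y):
    """Hamming distance between two tuples of bits."""
    return sum(a != b for a, b in zip(x, y))

words = list(product((0, 1), repeat=5))
zero = (0, 0, 0, 0, 0)

# Check w(x + y) = d(x, y) for every pair, and w(x) = d(x, 0).
for x, y in product(words, repeat=2):
    assert weight(add(x, y)) == d(x, y)
assert all(weight(x) == d(x, zero) for x in words)
print("w(x + y) = d(x, y) holds on Z_2^5")
```

For instance, with the earlier code words x = 10101 and y = 11010, the sum is 01111, whose weight 4 matches the distance d(x, y) = 4.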