Let's talk about how the Hamming metric might be relevant to this idea of error-correcting and error-detecting codes. Suppose we have two code words for a moment; I'll take the code words X = 1101 and Y = 1100. These are code words in some code, and imagine that we transmit X, but during the transmission there's an error and the last bit is changed. Since it's a one, it switches to a zero. So the transmission of the code word X turns out to be 1100, which is actually the code word Y. When the receiver gets Y, they'll say, oh, Y is a code word, therefore Y must be what was transmitted. The receiver wouldn't realize the wrong code word arrived: we sent X, but they received Y instead. The issue comes down to the fact that the Hamming distance between X and Y is only one. They differ by a single bit, so if that bit changes, the receiver has no way to know: did the sender mean Y, or did they mean X and an error occurred? The problem is that the two words are too close together, and that's a situation we want to avoid. On the other hand, let's take the code words X = 1100 and Y = 1010 this time. Notice that their Hamming distance is two: they disagree on the second bit and on the third bit, so the distance between them is two. Now suppose the same thing happens: we transmit the code word X, and there's a single error in transmission. Which bit was flipped? The first, the second, the third, the fourth? It doesn't matter, because X and Y differ by two bits. If X suffers a single error in transmission, there's no way you're going to confuse the transmitted word T(X) with the code word Y. You won't think they're the same thing.
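The two pairs of code words above can be checked with a quick Python sketch (the helper name `hamming_distance` is mine, not from the lecture):

```python
def hamming_distance(x, y):
    """Number of bit positions where two equal-length words disagree."""
    return sum(a != b for a, b in zip(x, y))

# The first pair: one flipped bit turns X into Y, so an error goes unnoticed.
print(hamming_distance("1101", "1100"))  # 1

# The second pair: a single error can never turn X into Y.
print(hamming_distance("1100", "1010"))  # 2
```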
As such, if T(X) doesn't match up with any of the code words, we recognize: oh, there was an error in transmission. We detected an error. In fact, if the distance between the code word X and the code word Y is k + 1, then you would need k + 1 errors to turn X into Y. So if there were k errors or fewer, we would never confuse the transmitted word T(X) with Y, and because there's no confusion, we can detect the error in such a situation. Let's summarize the observation we just made. Imagine we have a code whose minimum distance is k + 1. Remember, the minimum distance is the shortest distance between two distinct code words in the code. If the minimum distance is k + 1, then the code can detect up to k errors, because of this principle: if the closest two code words can be is k + 1, then any word that suffers at most k errors in transmission can never have become another code word, and therefore it won't match up with any code word when we try to decode it. So we've detected errors, up to k of them. Now let's talk about error correction; this also comes down to distance. Given that the Hamming metric really does give us a distance function, a real McCoy distance function from topology, it makes sense to ask: what is the closest code word to a received binary message? When we detect an error, we see that the received message isn't in our code; it's not a code word. Which code word should it have been? The idea is to ask: what's the closest code word to the received transmission? That was probably the word that was supposed to be sent. So we identify the corrected word as the closest code word to the transmission we received. One caveat: this would be undefined if the received message sat exactly midway between two code words.
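The detection rule above, received word not in the code means an error happened, can be sketched like this. The four-word code here is a made-up toy example with minimum distance 2 (so k = 1), not a code from the lecture:

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of bit positions where two equal-length words disagree."""
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    """Shortest Hamming distance between two distinct code words."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

def detected(code, received):
    """An error is detected exactly when the received word is not a code word."""
    return received not in code

# Hypothetical toy code with minimum distance 2, i.e. k + 1 with k = 1.
code = ["0000", "0011", "1100", "1111"]

print(minimum_distance(code))   # 2
print(detected(code, "0001"))   # True: one error in "0000" is noticed
print(detected(code, "0011"))   # False: two errors in "0000" land on another code word
```

Note the last line: with minimum distance k + 1 = 2, a burst of two errors can silently turn one code word into another, which is exactly why the guarantee stops at k errors.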
So if you're exactly halfway between two code words, we can't decide which one it was. Now suppose the distance between two code words X and Y is 2L + 1, and suppose T(X) arrives with fewer than L + 1 errors. Picture X over here and Y over there: every error is like taking one step away from X toward some other word. Step, step, step; with enough errors, you'd eventually walk all the way to Y. But there's a halfway mark between them. If you've taken fewer than L + 1 steps, you're still on X's side of that mark, and we can say: oh, X is the word we came from. Of course, with too many errors you might end up closer to Y and wrongly conclude Y was the original, and we don't want that. The point is that we can correct up to L errors, provided every pair of code words is at distance at least 2L + 1. So, summarizing what we're seeing here: if we have a code whose minimum distance is 2L + 1, we can detect up to 2L errors, but we can actually correct up to L errors. Let me show you what this might look like. Consider the following table, which represents a code. It looks a bit like a Cayley table, where the entries are distances; after all, the distance function takes two operands, one on the left and one on the right. Because distance is a symmetric function, this will be a symmetric table, the same thing we saw with Abelian groups. It's not exactly a Cayley table, though, because the distance function is not actually a binary operation. A binary operation should go from X × X back to X itself; the codomain should be those same elements. But for a distance function, the codomain is always going to be the real numbers.
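The step-counting argument above is just nearest-code-word decoding, which we might sketch as follows. The code here is a hypothetical example with minimum distance 3 (that's 2L + 1 with L = 1), and the tie-handling covers the "midpoint" caveat:

```python
def hamming_distance(x, y):
    """Number of bit positions where two equal-length words disagree."""
    return sum(a != b for a, b in zip(x, y))

def correct(code, received):
    """Decode to the unique closest code word; return None on a midpoint tie."""
    by_distance = sorted(code, key=lambda w: hamming_distance(w, received))
    best, runner_up = by_distance[0], by_distance[1]
    if hamming_distance(best, received) == hamming_distance(runner_up, received):
        return None  # exactly midway between two code words: correction undefined
    return best

# Hypothetical code with minimum distance 3 = 2*1 + 1, so L = 1.
code = ["00000", "01011", "10101", "11110"]

print(correct(code, "00001"))  # "00000": one step from X, safely on X's side
print(correct(code, "11000"))  # None: two steps from both "00000" and "11110"
```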
So it's not exactly a binary operation, but the analog of a Cayley table does make sense here. When you see an entry, say the one for the words 0101 and 1001, you interpret that number as the distance between the two: they differ on the first bit and on the second bit, but agree on the third and fourth bits, so the distance between them is two. This table represents the code in play. The code words are four bits long; don't worry about the encoding process for now, just look at this four-bit code itself. As you scan through the table, ignore the zeros, which appear only along the diagonal, since the distance between a word and itself is zero. Ignoring those, what's the smallest distance? I definitely see a two, and as you scan through you also see some fours, but that's it: only twos and fours. So the minimum distance of this code, the smallest distance between any two distinct words, is two. What does this tell us? Write two as k + 1: that forces k = 1, so our code can detect up to one error. On the other hand, for correction we want to fit the minimum distance against 2L + 1. Since two is even, the largest L with 2L + 1 at most two is L = 0 (it's with odd minimum distances that 2L + 1 fits exactly). So this code cannot correct any errors, but it can reliably detect up to one error, at which point we can request a retransmission.
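Since the table itself isn't reproduced here, we can rebuild a plausible stand-in. A natural guess for an eight-word, four-bit code whose nonzero distances are all twos and fours is the even-weight (parity-check) code; treat that choice as an assumption, not the lecture's actual table:

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of bit positions where two equal-length words disagree."""
    return sum(a != b for a, b in zip(x, y))

# Hypothetical stand-in code: the eight 4-bit words with an even number of ones.
code = ["0000", "0011", "0101", "0110", "1001", "1010", "1100", "1111"]

# Print the symmetric, Cayley-style table of pairwise distances.
print("      " + " ".join(code))
for x in code:
    print(x, [hamming_distance(x, y) for y in code])

# Off the diagonal we see only twos and fours, so the minimum distance is 2.
nonzero = sorted({hamming_distance(x, y) for x, y in combinations(code, 2)})
print(nonzero)  # [2, 4]
```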
And then as a second example, consider the following table, with four code words this time. One thing I do want to mention first: if you count the code words in the previous table, one, two, three, four, five, six, seven, eight, you get eight, which is two cubed; that's where the eight comes from. Those words had four bits, but since there were eight of them, that code could encode three bits of information, so it's an example of a (4, 3) block code. In this second situation the words have five bits, but when we count them there are four, which is two squared, so this is an example of a (5, 2) block code. Let's look at its minimum distance. You'll see very quickly that the minimum distance is three. Since three is k + 1 with k = 2, this code can detect up to two errors. But three is also 2L + 1 with L = 1, so this code is actually an error-correcting code: it can correct up to one error and detect up to two.
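The counting and the detect-two/correct-one conclusion can be sketched with a made-up (5, 2) code standing in for the second table (again an assumption, since the table isn't reproduced here):

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of bit positions where two equal-length words disagree."""
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    """Shortest Hamming distance between two distinct code words."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

# Hypothetical (5, 2) block code: 2**2 = 4 words, each 5 bits, minimum distance 3.
code = ["00000", "01011", "10101", "11110"]

d = minimum_distance(code)
print(len(code))     # 4 = 2**2 words, so 2 information bits per 5-bit block
print(d)             # 3
print(d - 1)         # k = 2: detects up to two errors
print((d - 1) // 2)  # L = 1: corrects up to one error

# Correcting a single error by picking the closest code word:
received = "01001"  # "01011" with one flipped bit
print(min(code, key=lambda w: hamming_distance(w, received)))  # "01011"
```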