It's sad, but this is our last video about coding theory in our Math 4220 series. It briefly covers Section 8.4 from Tom Judson's abstract algebra textbook, where we want to talk about the decoding process. We've talked about detection a little already; mostly I want to talk about the correction process.

We know how to detect errors: if a word x is received, you just multiply it by H. If Hx equals zero, that implies x is inside the null space of H, which is the code, and therefore — that was supposed to be a lot of checkmarks, ding ding ding ding — x is a codeword and we take it as the correct message. Well, what if Hx is not zero? That means x is not in the null space, so it's not in the code, and we've detected an error. That's all you have to do to detect errors: multiply by the matrix H. If there are errors, but not too many of them, this product will not be zero.

So this product is critical to the decoding process. In the literature we refer to Hx as the syndrome of x — and I don't want you to think of Syndrome from The Incredibles, your nemesis. No, the idea here is that if there was an error, our codeword is sick. It has a sickness — not necessarily the coronavirus — it's got some symptom, and we can use the symptom to try to determine where the error was. That's why we look at the syndrome Hx. Much like it's useful for detecting errors, the syndrome can actually be used to correct the error as well. How does that work? Well, it depends on our matrix H. So let's see what's on the screen that I haven't talked about yet.
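Before moving on, here's a quick sketch of the detection step just described — multiply the received word by H over GF(2) and check for zero. The matrix H below is a made-up parity-check matrix purely for illustration (it is not the one on the lecture slides); any binary matrix works the same way as long as the arithmetic is done mod 2.

```python
# Error detection by syndrome: compute Hx over GF(2) and check for zero.
# H below is a hypothetical parity-check matrix, purely for illustration.

def syndrome(H, x):
    """Multiply H by the word x, with all arithmetic mod 2."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

H = [
    [1, 1, 0, 1, 0],
    [1, 0, 1, 0, 1],
]

word = [1, 0, 1, 1, 0]        # satisfies both parity checks
print(syndrome(H, word))      # [0, 0] -> word is in the null space: a codeword

word[2] ^= 1                  # flip one bit to simulate channel noise
print(syndrome(H, word))      # [0, 1], nonzero -> error detected
```

Only the all-zeros syndrome certifies membership in the code; any nonzero syndrome flags an error, though it doesn't yet say where.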
So we have our linear code C: the null space of some matrix H, which was hopefully chosen in a good way. If we receive a word x with no errors, then Hx equals zero — it's in the null space, we're good. What if there is an error? That's what we want to talk about next.

All right, so imagine the message we received is x, but the original message was supposed to be μ — mu, you know, the Greek letter for m. Some noise got into the transmission (noise, noise, noise makes me think of the Grinch), so what was received is the original message plus some error: x = μ + ε. Basically, when we transmitted the message μ, it turned into x, which is μ plus something else. That's how you want to interpret this thing.

Well, what happens when we look at the syndrome of this thing? The receiver doesn't know what the original message was — it doesn't know μ, it only has x. But this is what's really cool. Take the syndrome Hx: x is the original message (unknown) plus the error (unknown), and because this is matrix multiplication we can distribute over the sum. We get Hx = Hμ + Hε. Now μ is an actual codeword — that was the message we were trying to send — and therefore μ belongs to the null space, so multiplying H by μ gives zero, and zero plus the other thing is just the other thing: Hx = Hε. This is the neat observation: the message we received has the same syndrome as the error. The true message drops right out.

Now if there was a single error — just one error — that means we changed one of the bits. So our error would look something like this: say we changed the fifth position of a five-bit word, so ε = (0, 0, 0, 0, 1). Notice this is just e5. In general, a single error in position i gives ε = e_i, and Hε = H e_i, which, remember, is the i-th column of H. So the syndrome of x, if there was a single error, will just be the i-th column of H, and as long as we don't have any repeated columns, we can see exactly where that single error occurred. We'll see an example of how to do exactly that in a moment.

Before we go further, though, I want to talk a little about what happens if there are multiple errors, because a single error is pretty nice — you just look for the column that matches. For multiple errors, the detection process remains the same: use the syndrome, and if the syndrome is nonzero, there is an error. Then we have to decide: can we correct it? For error correction, we have to solve the linear system associated to the augmented matrix [H | Hx]. The reason is that we're trying to figure out which linear combination of the columns of H produces Hx — because if there are multiple errors, ε would look like e_i + e_j + e_k, however many there are, and we want to figure out which columns of H sum to the syndrome: Hx equals the sum of H e_i over some index set. Solving this linear system expresses Hx as a linear combination of the columns of H, and since the columns are linearly dependent — we do need some dependence relations in our matrix H — there will be multiple solutions.

Now, we assume of course that the number of errors is few, so from the solution set of this linear system we choose the vector of minimum weight, because that is then the most probable error. It's not perfect, but it's the most probable, because we're working under the assumption that the number of errors is few. On the other hand, the solution set is none other than a coset of C, a coset of the null space of H. When I teach linear algebra, I tell people that when you solve a linear system you find an affine set: a particular solution plus the null space. That's a coset, right? Every solution to a linear system looks like x + c, where x is a particular solution and c runs over the null space — the code, which is the word I'm looking for. So we look for the coset that contains x, and then we choose the word of minimum weight; that is what corrects the message. This is what we call coset decoding.

All right, I want to show you how this would actually work in practice. So suppose we have the following matrix H, a 3 × 6 matrix.
You can see this is a canonical parity-check matrix: the right-hand side is the identity, and the left block is our matrix A. Some things we can say about the efficacy of this matrix. Scanning, scanning, scanning — no, there are no columns of zeros. Do we have any repeated columns? The answer is no, there are no repeated columns. So this tells us the code can correct at least one error — and that's actually the best it's going to be able to do, as we'll see in just a second. Because if you take the first column, the fourth column, and the sixth column and add them together — H e1 + H e4 + H e6 — convince yourselves that's equal to the zero vector. That gives us the minimum dependence relation, so this matrix has minimum distance d = 3. That means we can detect up to two errors, but we can only correct up to one, and I want to show you exactly how that works.

So imagine the following four messages are received: x = 001111, y = 111110, z = 010111, and w = 111111. How do we detect whether these have errors — maybe they don't? Let's calculate each of the syndromes, taking the matrix H and multiplying it by each of these words.

If we take Hx, we actually end up with the zero vector. Since the syndrome is zero, x is in the null space, which is the code, and therefore there is no error: the message sent was exactly 001111. Then we can decode — remember, you erase the check bits — and the original message was 001. That's what the computer wanted to communicate.
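To check the Hx = 0 computation concretely: the matrix below is a reconstruction of the H on the slide, pieced together from what this example states (the identity in columns 4–6, column 3 equal to (1,1,1), and columns 1 and 2 forced by H e1 + H e4 + H e6 = 0 together with the quoted syndromes) — treat it as an assumption, not a transcription.

```python
# Reconstruction (assumed) of the lecture's 3x6 canonical parity-check
# matrix H = [A | I]; it reproduces every syndrome quoted in the example.
H = [
    [1, 0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 1],
]

def syndrome(x):
    """Hx over GF(2)."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

x = [0, 0, 1, 1, 1, 1]
print(syndrome(x))                    # [0, 0, 0] -> x is a codeword: no error
print(x[:3])                          # [0, 0, 1] -> information bits, check bits erased

# The minimum dependence relation: e1 + e4 + e6 has syndrome zero,
# which is the weight-3 witness that the minimum distance is 3.
print(syndrome([1, 0, 0, 1, 0, 1]))  # [0, 0, 0]
```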
There was no error in x whatsoever. What about Hy? If you compute Hy, the syndrome turns out to be (1, 1, 1). Looking at the matrix, where is (1, 1, 1)? That's the third column. So this tells us the error is in the third position, and looking at our word, we need to go correct it: the message that was sent was 110110. Erasing the check bits, we stick with the information bits 110. That was the message we wanted to send. Great.

What about Hz? If we multiply H by z, we end up with the syndrome (1, 0, 0). Looking at the matrix above, (1, 0, 0) is the fourth column. So there was an error, and it actually occurred in one of the check bits — the error wasn't in the information bits at all. The fourth bit should have been a zero, so the word sent was actually 010011, and erasing the check bits we're left with the information bits 010. So, okay, there was an error, but it was in the check bits: the original message got through, and we were still able to detect it and go from there.

Now things get a little more funky when we look at our last example, Hw. The syndrome this time is (1, 1, 0). Okay, that's not zero, so we've detected an error — error, error! — just as we did for Hy and Hz. But the difference here is, when you look through H for a column equal to (1, 1, 0)... no, no, no. There is no column that looks that way. So where did this error come from? There have to be at least two errors.

Now, how could this have happened? Let's see. We could take column 3 and add to it column 6 — that's a possibility, those add up to (1, 1, 0). But is that the only possibility? If we hem and haw about this for a little bit — yikes — if you take H e1 + H e2, you'll notice that also gives you (1, 1, 0). Dang it, that would also be a possibility. And as if we were done: you could also take H e4 + H e5, and that would give you (1, 1, 0) as well. So this is the problem: our decoding process has detected an error, but we can't conclusively decide which one it is. Was the error in positions 1 and 2, in positions 4 and 5, or in positions 3 and 6? That decision would be critical to how we fix the word. So this is what we do: we've detected an error, but we cannot correct it, therefore we request a retransmission of the original message. And this illustrates the point: this code can detect up to two errors, but it can only correct one. There is a gap where error correction isn't always possible, but error detection can still make up the difference.

Isn't it pretty awesome how we're able to detect and correct errors? And this last example — where we were able to detect the error but couldn't correct it — really shows you the difference between the two for these codewords.
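Putting the whole example together, the lookup-and-flip decoding of y and z, plus the ambiguity check for w, can be sketched as follows. As before, the matrix H is a reconstruction from the syndromes quoted in this example, not a transcription of the slide, so treat it as an assumption.

```python
# Syndrome decoding for the three received words above, using an assumed
# reconstruction of the lecture's parity-check matrix H.
from itertools import combinations

H = [
    [1, 0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 1],
]
n = 6
cols = [[row[j] for row in H] for j in range(n)]   # columns of H

def syndrome(x):
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

def decode(x):
    """Return the corrected word, or None when correction is ambiguous."""
    s = syndrome(x)
    if s == [0, 0, 0]:
        return x                       # no error detected
    if s in cols:                      # single error: flip the matching position
        i = cols.index(s)              # unique, since H has no repeated columns
        return x[:i] + [1 - x[i]] + x[i + 1:]
    return None                        # >= 2 errors: request retransmission

y = [1, 1, 1, 1, 1, 0]
z = [0, 1, 0, 1, 1, 1]
w = [1, 1, 1, 1, 1, 1]
print(decode(y))                       # [1, 1, 0, 1, 1, 0]: error in position 3
print(decode(z))                       # [0, 1, 0, 0, 1, 1]: error in a check bit
print(decode(w))                       # None: can't correct, retransmit

# The ambiguity for w: three different weight-2 errors share its syndrome.
s_w = syndrome(w)
pairs = [(i + 1, j + 1) for i, j in combinations(range(n), 2)
         if [(a + b) % 2 for a, b in zip(cols[i], cols[j])] == s_w]
print(pairs)                           # [(1, 2), (3, 6), (4, 5)]
```

The minimum-weight rule from coset decoding is what justifies the single-error flip: among all errors with the observed syndrome, a weight-1 error is the most probable; for w, the minimum weight in the coset is 2 and is achieved three ways, so no unique correction exists.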
So that concludes our discussion of algebraic coding theory for our lecture series. I hope you enjoyed it. If you have any questions, by all means feel free to post them in the comments below. If you feel like you learned something, give it a like, and subscribe to find more videos like this in the future. Next time we're going back to some more theoretical abstract algebra concepts, focusing on the idea of isomorphism. See you then!