So we can use matrices to create linear codes, where the code is the null space of a matrix. We have a lot of freedom in how we choose that matrix, so we want to pick one that lets us encode messages effectively, and we also want efficiency when it comes to error detection and error correction. Over the next couple of lectures we'll describe a very simple scheme for encoding and decoding messages that gives us single-error correction, with very little effort, based on how we construct these matrices.

We're going to start off very naively, with a matrix A, where A is an (n − m) × m matrix. That shape seems kind of weird, but follow me on this one. I'll explain a little later how we choose A; for the moment, let A be any matrix at all. From A we build a new matrix H by augmenting A with an appropriately sized identity matrix. The identity has to have the same number of rows as A, and A has n − m rows, so H = [A | I] still has n − m rows, but it now has n columns.
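As a quick sketch of that construction (in Python with numpy; the particular A below is arbitrary, chosen only to have the right shape, with n = 6 and m = 3):

```python
import numpy as np

# An arbitrary binary matrix A with n - m = 3 rows and m = 3 columns (n = 6).
A = np.array([[0, 1, 1],
              [1, 1, 0],
              [1, 0, 1]])

# Augment A with the identity matching A's row count: H = [A | I].
H = np.hstack([A, np.eye(3, dtype=int)])
print(H.shape)   # (3, 6): H keeps the n - m rows but now has n columns
```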
We added some more columns there. We call this matrix H the canonical parity-check matrix, and it will be used to create our linear code. If we choose A in an appropriate way, we can build parity-check bits into our code words; again, how to choose A is a topic for a future lecture.

Associated to the parity-check matrix is what we'll call the standard generator matrix G. G is an n × m matrix: on the bottom we put the same matrix A, and on top of it we put an appropriately sized identity matrix. The number of columns of the identity has to match A, so the identity has m columns, just as A does. How many rows do we get? A has n − m rows and the identity has m rows, so G has (n − m) + m = n rows, which is where the n comes from. So G is n × m, while H, remember, is (n − m) × n. So what's so important about these matrices?
Well, they satisfy a very important relationship. Notice that if we multiply H and G together, H is the block matrix [A | I] and G is likewise I stacked on top of A. Because the block sizes are compatible, these are what are often called block matrices, or partitioned matrices, and we can multiply them out block by block rather than entry by entry. Thinking row-times-column, we get HG = A·I + I·A. Multiplying A by the identity on the right gives A, and the identity on the left of A gives A again, so we end up with A + A. But since we're working mod 2, every matrix is its own additive inverse, so A + A is the zero matrix. In particular, the product of H and G is zero. This is a perfectly ordinary thing in matrix multiplication: a product can be zero without either factor being the zero matrix, and I'm certainly not claiming anything special about these matrices in that direction.

Alright, let's investigate this a little further. Let x be an unencoded message, a vector in Z_2^m. Remember, Z_2^m is the message space: these are the messages we want to send, before they've been encoded. If we multiply our message by G, call the result y = Gx. We're going to see in a moment that y is the encoding of the message x; basically, multiplication by the matrix G is the encoding function.
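The identity HG = 0 is easy to check numerically. A minimal sketch (numpy, with an arbitrary binary A, here with n = 6 and m = 3), multiplying the two block matrices and reducing mod 2:

```python
import numpy as np

# Arbitrary (n - m) x m binary matrix; here n = 6 and m = 3.
A = np.array([[0, 1, 1],
              [1, 1, 0],
              [1, 0, 1]])

H = np.hstack([A, np.eye(3, dtype=int)])   # [A | I], shape (n - m) x n
G = np.vstack([np.eye(3, dtype=int), A])   # I stacked on A, shape n x m

# Block multiplication gives HG = A*I + I*A = A + A = 0 (mod 2).
print((H @ G) % 2)   # 3 x 3 zero matrix
```

Nothing here depends on how A was chosen: any binary A of compatible shape works, since the two copies of A cancel mod 2.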
We'll see why that is. If we multiply H by y, well, y was just Gx, so Hy = H(Gx) = (HG)x. As we saw a moment ago, HG is the zero matrix, and the zero matrix times any vector is the zero vector. So y belongs to the null space of H, which, remember, is our code C. Notice what happened: we took a message x and, by multiplying by the generator matrix G, turned it into a code word. That's what I mean by the encoding process.

Now, vectors of the form Gx are exactly the typical elements of the so-called column space of G. The column space can be defined in two ways. The definition that's appropriate here is: the set of all vectors Gx as x ranges over Z_2^m. You can also define it as the span of the columns of G, which is where the name comes from, but the first interpretation is the one that helps here. If you think of a matrix as a function, a linear transformation x ↦ Gx, then the column space is just the image of that function. So what we've now seen is that anything of the form Gx lives inside the null space of H. The column space of G is therefore a subgroup, or better yet a subspace, of the null space of H. So we have a subspace sitting inside another subspace, and I claim these two spaces are actually equal.

To see that, let's consider the dimensions of these spaces. G contains a copy of the identity on top, and that tells us the rank of G is m: with the identity already in place, row reducing just amounts to killing off all the entries of A on the bottom, which can always be done, so the reduced row echelon form of G is the identity sitting over a zero matrix. The rank, the number of pivot positions, is m. The rank is also the dimension of the column space, the dimension of the image, so the column space of G is m-dimensional.

So we have an m-dimensional subspace living inside the null space of H. How big is that null space? Look back at H: it also has an identity inside it. H has n columns, and the identity block gives n − m pivots. Because of that identity, every column of A can be written as a combination of the columns of the identity block, so we can treat all the columns of A as non-pivot columns. And how many non-pivot columns are there?
Well, A has m columns, so H has m non-pivot positions, and that count is the nullity of the matrix, the dimension of the null space. So the null space of H is m-dimensional too. What we're seeing is an m-dimensional space sitting inside an m-dimensional space, and from linear algebra, if one subspace sits inside another and they have the same dimension, that forces equality. Since the dimension of the column space of G, which is the rank, equals the dimension of the null space of H, which is the nullity, the two spaces are equal: the column space of G equals the null space of H, and the null space of H is the code. Which means that if you take a message and multiply it by G, you get a code word. The encoding process is essentially just multiplication by G.

Let's look at a specific example. Say we want to encode the eight words belonging to Z_2^3, so we have three-bit messages: 000, 001, 010, and so on; you get the idea. How can we encode these messages? There are two options. One option is to solve a system of equations, which I'll come back to in a second. The other option is to multiply each of these vectors by the matrix G. If you multiply G by 000, you get the zero vector 000000. If you multiply G by 001, you get 001 101. Let's verify that calculation first for a moment.
If we take our matrix G and multiply it by the word 001: the first row times the vector gives you a 0, the second row gives you a 0 again, and the third row gives you back a 1. (I'll put a space here just to make it a little easier to read.) The fourth row times the vector gives you a 1, the next one gives 0, and the last one gives 1. So you get 001 101: you just multiply your message by the matrix, and that's the encoding process. The same thing happens for each and every one of the eight messages.

Now, because of the way we've constructed our parity-check matrix and our generator matrix, one thing I want you to notice is that with this scheme the first three bits are always the original message. So you can see how decoding will be easy, right? If we know we received the message correctly, we just ignore the last three bits, and the first three give back the original message. Those bits are often referred to as the information bits, because they carry the information we actually want to transfer. But what do the other three bits do? Those are what we call the check bits, and the check bits are essential in the decoding process. Yes, we can get back the original message by ignoring the last three bits, but first we have to know: did we receive the message correctly, or was there an error somewhere in the process? The check bits will help us in that regard.
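Here is that encoding scheme as a short sketch (numpy), using the A from this lecture's worked example. It encodes all eight messages, confirms the first three bits of each code word equal the message, and cross-checks that the same eight words come out of the null space of H:

```python
import itertools
import numpy as np

# The A from this lecture's worked example (rows 011, 110, 101).
A = np.array([[0, 1, 1],
              [1, 1, 0],
              [1, 0, 1]])
H = np.hstack([A, np.eye(3, dtype=int)])   # parity-check matrix, 3 x 6
G = np.vstack([np.eye(3, dtype=int), A])   # generator matrix, 6 x 3

# Encode every message in Z_2^3; the code is the column space of G.
code = set()
for x in itertools.product([0, 1], repeat=3):
    y = tuple(int(b) for b in G @ np.array(x) % 2)
    assert y[:3] == x                      # information bits = the message
    code.add(y)
    print(''.join(map(str, x)), '->', ''.join(map(str, y)))

# Cross-check: brute-forcing the null space of H yields the same 8 words.
null_space = {y for y in itertools.product([0, 1], repeat=6)
              if not (H @ np.array(y) % 2).any()}
assert code == null_space and len(code) == 8
```

Among the printed lines is 001 -> 001101, the encoding we just verified by hand.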
We'll talk about that. Well, actually, we can just talk about it right now for a second. There's a theorem here: if H is a canonical parity-check matrix, then the code, the null space of H, consists of all the encoded messages y. Like I said, the first m bits are the information bits and the last n − m bits are the check bits, and if y is an authentic code word, we should have Hy = 0, as we saw. And we've seen that the null space of H equals the column space of G, the standard generator matrix. This just summarizes everything we've done so far: starting from a canonical parity-check matrix, we can create an (n, m) block linear code, and we can use the matrix G to encode messages. If you just want to encode a single message, multiply it by G; but if you want to produce the entire code all at once, it might actually be better to use the null space. We did this in a previous example. Take the parity-check matrix H whose first row is 0 1 1 1 0 0, and so on; notice that you have the identity sitting inside it.
You can see the identity block on the right, and the matrix A in play is the block on the left: its rows are 0 1 1, then 1 1 0, then 1 0 1. That is our matrix A here.

Okay, we can find the code by computing the null space of this matrix H. I'm going to skip over some of the details: if you write Hy = 0 as a homogeneous system of equations and reduce it with the usual row-reduction technique, Gaussian elimination, here's what it tells us. We should set x4, x5, x6 as our dependent variables and treat x1, x2, x3 as free variables. After all, x1 x2 x3 is the original message, which can be whatever it wants; the check bits x4, x5, x6 are then determined by the original message: x4 = x2 + x3, x5 = x1 + x2, and x6 = x1 + x3.

From this we can come up with a basis for the null space. A typical vector x looks like: x1, x2, x3 are whatever they want, x4 is x2 + x3, x5 is x1 + x2, and x6 is x1 + x3. If you break this up into three vectors, the vector that only involves x1 is x1 times 1 0 0 0 1 1, because x1 appears in the first position and in the last two check equations, the ones for x5 and x6.
For the second one, looking at x2: it appears in the second spot, the fourth spot, and the fifth spot, so we get the vector 0 1 0 1 1 0. And for x3, we have a 1 in the third spot, the fourth spot, and the sixth spot, so we get 0 0 1 1 0 1. So we get our three vectors, which form a basis for this null space, and if you look at the eight possible combinations of these three vectors, you get the entire code, which contains eight vectors. In fact, this gives us a (6, 3) linear code, and it's exactly the same code we got by multiplying by the matrix G. The two approaches do the same thing: you can either take the column space of the generator matrix or the null space of the parity-check matrix.

Now, why does H get the name parity check? Because of those check equations. Let's say we take the code word 101110, and when it's transmitted there's an error in the first bit, so we receive 001110. We can check whether our message got sent correctly. We know the first three bits are supposed to be the message, but did they arrive intact? We look at the check bits. The first check is to add together x2 and x3: we take 0 + 1, which should equal x4 = 1, and that is what we have, so that check passes. Okay, so we do the next one: x1 + x2 should equal x5. Adding those together we get 0 + 0 = 0, but that's not what happened; we got a 1 there. So at this moment, we've detected an error.
There's error detection right there. Now, we could have done that just by multiplying by the matrix H; we've seen that before. But notice what happens with the last check: x1 + x3 should equal x6. We take x1 + x3 = 0 + 1 = 1, but the sixth bit we received is 0, so the sixth bit also fails its check. So what do checks five and six have in common? Notice that x1 is involved in both of them, and since those are exactly the two failing checks, it looks like the error was in the first bit. Therefore the original code word should have been 101 110, which says we were trying to transmit the message 101. So we're able not just to detect the error but to correct it, based on these check bits. That's how they come into play. It turns out that in practice there's a much simpler way of doing this, which we'll see in the next lecture, but I wanted to show you right now that we choose the matrix A so that we have these kinds of checks in the transmission, so that if there is an error, we can potentially correct it.
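The detect-and-correct procedure from this example can be sketched in a few lines (numpy). Multiplying the received word by H evaluates all three check equations at once, and for a single flipped bit the pattern of failed checks equals the column of H for that bit, which is exactly the "x1 appears in both failed checks" reasoning above. This is only a sketch of the idea; the streamlined method promised for next lecture may look different.

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 1, 0],
              [1, 0, 1]])
H = np.hstack([A, np.eye(3, dtype=int)])   # parity-check matrix

received = np.array([0, 0, 1, 1, 1, 0])    # code word 101110 with bit 1 flipped

# H @ received evaluates every check at once; all zeros means all checks pass.
checks = H @ received % 2
print(checks)                              # [0 1 1]: the x5 and x6 checks fail

# A single-bit error in position j fails exactly the checks involving bit j,
# so the vector above equals column j of H.
if checks.any():
    j = next(i for i in range(6) if (H[:, i] == checks).all())
    corrected = received.copy()
    corrected[j] ^= 1
    print(corrected)                       # [1 0 1 1 1 0]
    print(corrected[:3])                   # recovered message: [1 0 1]
```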