Welcome to this lecture on digital communication using GNU Radio. My name is Kumar Appaya, and I belong to the Department of Electrical Engineering at IIT Bombay. In this lecture, we are going to have an introduction to error control coding. So far, we have seen various practical issues in digital communication systems: predominantly, noise and the presence of a channel that affects the transmission you send. For each of these, we have tried to find ways to learn and compensate for their effects. However, as you saw when we discussed equalization and operation in the presence of noise, there is no way to eliminate these effects entirely, so you have to come up with ways to further reduce their impact. In this context, error control coding is a very commonly used technique that can both detect the presence of errors and, to an extent, correct them. We will be looking at a specific set of error control coding tools, specifically linear block codes, over the next few lectures. So we will be talking about bit error detection and correction: all the discussions we conduct henceforth are at the bit level, before you perform the modulation operation, that is, the conversion to a symbol or constellation element. And we are going to talk specifically about binary linear codes. Binary linear codes are a class of error control codes that are very easy to implement and very useful; they are not the only type, but they are very commonly used. So we will restrict our consideration to binary linear codes, and then we will focus on one particular code called the Hamming code. Finally, we will make a remark about some more advanced error correction mechanisms and where they are applicable. In terms of preliminaries, we are going to focus on a specific set of binary operations.
Now, these binary operations are basically addition and multiplication, but modulo 2. In other words, the plus operation acts like an XOR, and the multiplication operation acts like an AND. Let me have you look at these as truth tables. The plus table can be looked upon as an XOR operation, while the multiplication table can be looked at as an AND operation. For the plus: 0 + 0 is 0, 1 + 1 is 0, and 0 + 1 and 1 + 0 are both 1. Similarly, in the multiplication case, 1 times 1 is 1, while 0 times 0, 0 times 1, and 1 times 0 are all 0. You can see that this looks exactly like the truth table of an AND gate, while the other looks like the truth table of an XOR gate. Now, the key idea for us is to utilize these binary operations to introduce some redundancy in the bits that you generate; this redundancy can be used to detect and correct errors. In other words, let us say that you have a bit sequence which is potentially coming from speech or from an image or whatever it is. Before you convert the bits to symbols for transmission, we are going to add some redundancy. What does this redundancy mean? Let us say you have 5 bits; you are going to make them 7 bits, where these additional 2 bits are going to be used to detect, or perhaps correct, some errors in the first 5 bits, and so on.
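As a small supplement to the lecture, the modulo-2 operations above can be sketched in a few lines of Python and checked against the XOR and AND truth tables. The helper names here are my own, not from the lecture:

```python
# Modulo-2 arithmetic on single bits: addition behaves like XOR,
# multiplication behaves like AND.

def add_mod2(a, b):
    """Modulo-2 addition of two bits (equivalent to XOR)."""
    return (a + b) % 2

def mul_mod2(a, b):
    """Modulo-2 multiplication of two bits (equivalent to AND)."""
    return (a * b) % 2

# Verify the full truth tables against Python's bitwise XOR (^) and AND (&).
for a in (0, 1):
    for b in (0, 1):
        assert add_mod2(a, b) == (a ^ b)
        assert mul_mod2(a, b) == (a & b)

print(add_mod2(1, 1), mul_mod2(1, 1))  # 0 1
```

Note in particular that 1 + 1 = 0 in this arithmetic, which is why subtraction and addition are the same operation.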
Now, these binary operations can also be looked at, in particular cases, as operations on a field, and there is a huge body of work on algebraic error control coding which is more general than this. But we will be working in this particular field, so to speak, where you have modulo-2 operations, and all the other operations that you typically know, like addition, subtraction, and multiplication, have their usual meanings; if you know these two tables, you can use them to good effect to perform error control coding. Let us now proceed in our discussion and see what error control coding is about. First, we will look at the adversary model, that is, the problem you are dealing with. The problem is that bits get flipped due to errors. That is, if you have noise, say with BPSK or whatever other constellation you use, noise causes a symbol to be detected incorrectly, and the symbol being detected incorrectly results in a bit being detected incorrectly. Thus, we need a channel model that characterizes the kind of errors that can occur at the bit level, so we focus on the binary symmetric channel. Intuitively, the binary symmetric channel says that if you transmit a 0, then hopefully a 0 is received, but sometimes a 1 is received; if you transmit a 1, hopefully a 1 is received, but sometimes a 0 is received. If the probability of a 0 being flipped to a 1 and the probability of a 1 being flipped to a 0 are equal, we call it a binary symmetric channel; that is what symmetry means. So, this is a channel model that takes in bits one at a time and yields one bit out with an error probability p: the channel flips each bit with probability p. Typically we say 0 ≤ p ≤ 0.5.
Now, this picture is a good way to understand the channel model: you have 0 and 1 being sent on one side, and 0 and 1 being received on the other. Suppose you have a binary symmetric channel with bit flip probability p. A 0 is received as a 0 and a 1 is received as a 1, each with probability 1 − p, but a bit flip can happen, each with probability p. It is as though, whenever you get a bit, you toss a biased coin: if the coin comes up with the higher-probability outcome, you deliver that bit to the receiver unchanged; if it comes up with the lower-probability outcome, you flip the bit and deliver it. Now, why did we say that p should be less than half? The reason is that if the probability of a bit flip is more than half, say p = 0.9, then with probability 0.9 sending a 0 results in a 1 and sending a 1 results in a 0. So you can simply swap the 0's and 1's at the receiver, and then you get a binary symmetric channel with a bit flip probability of 0.1. That is, taking p to be more than half means you can just flip the 0's and 1's at the receiver and obtain a binary symmetric channel with flip probability 1 − p, which is less than half. This is something you can think about. So we will be dealing with binary symmetric channels with 0 ≤ p < 1/2. The value p = 1/2 is also a special case. To see why, let me call the sent value x and the received value y. What you have is y = x XOR n, which we can write as y = x + n in our modulo-2 arithmetic, where n is a noise bit that, when p = 1/2, equals 0 with probability half and 1 with probability half.
In this particular case, if you work out the relationship between y and x, you can actually show that y and x are uncorrelated; in fact, they are even independent. Why? If you are given n, then from y you can always figure out x. But if you are not given n, this operation is essentially saying: I am going to toss a fair coin and flip the bit with equal probability. So it is as though y carries no information about x. In other words, given y in this situation, the probability of x being 1 and the probability of x being 0 are both half; that is the intuition. Using consistent notation, I can also write x = y + n, because minus is the same as plus in this binary operation regime. I am going to leave it as an exercise for you to prove, but the intuition is that n just flips bits randomly, so x does not depend on y and the receiver gets no information. That is why we will always consider the case 0 ≤ p < 1/2. So, the binary symmetric channel flips bits with probability p, typically with 0 ≤ p strictly less than half. A binary symmetric channel with p equal to half results in the problem that you get no information at the receiver.
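The binary symmetric channel model described above is easy to simulate; here is a small sketch in Python (the function name and the seeded generator are my own choices, not from the lecture):

```python
import random

def bsc(bits, p, rng=random.Random(0)):
    """Pass a bit sequence through a binary symmetric channel:
    each bit is flipped independently with probability p."""
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

# With small p most bits survive; with p = 0.5 the output would carry
# no information about the input at all.
tx = [0, 1, 1, 0, 1] * 2000          # 10000 transmitted bits
rx = bsc(tx, p=0.1)
flips = sum(a != b for a, b in zip(tx, rx))
print(flips / len(tx))               # empirical flip rate, close to 0.1
```

Running this with different values of p is a good way to build intuition for the "biased coin toss per bit" picture.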
Now, a practically useful model is BPSK over AWGN: in this case you can model the system as a binary symmetric channel with p = Q(√(2Eb/N0)). How does this work? In the case of BPSK you are sending the symbols −1 and +1. When you send −1, there is a probability Q(√(2Eb/N0)) that the −1 is detected as +1; when you send +1, the same probability Q(√(2Eb/N0)) applies that the +1 is incorrectly detected as −1 due to noise, and this is a binary symmetric channel. In this case you can also see that as N0 goes to infinity, p goes to Q(0), which is half. This makes complete sense, because when you have an infinite amount of noise there is no way you can detect your symbol as plus 1 or minus 1. There are, of course, similar simplifications you can do for other modulations as well, but this is a very intuitive way of understanding it: a BPSK-over-AWGN system essentially reduces to a binary symmetric channel under this kind of assumption. Let us now go towards our first useful example of an error control code. Notice that I have deliberately said error control code and not necessarily error correction. Coding can be used for both error detection and correction, but error control is a general term signifying that some codes can only detect errors, maybe without correcting them, while other codes can both detect and correct errors. We are going to start with a code that can be used to detect errors; this is called the parity check code. The key idea is that you take k bits and add a (k+1)-th bit so that the whole string has even parity. There is an example, which is a (3, 2) parity code. Let us actually just write that out once and then go through it.
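The crossover probability p = Q(√(2Eb/N0)) for the equivalent binary symmetric channel can be computed with the standard identity Q(x) = ½·erfc(x/√2); a minimal sketch (the function names are my own):

```python
import math

def qfunc(x):
    """Gaussian Q-function via the complementary error function:
    Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bsc_p_for_bpsk(ebn0_db):
    """Crossover probability of the equivalent BSC for BPSK over AWGN:
    p = Q(sqrt(2 * Eb/N0)), with Eb/N0 given in dB."""
    ebn0 = 10 ** (ebn0_db / 10)      # dB -> linear
    return qfunc(math.sqrt(2 * ebn0))

print(qfunc(0))              # 0.5: infinite noise gives a useless channel
print(bsc_p_for_bpsk(9.6))   # roughly 1e-5 at about 9.6 dB
```

Note that qfunc(0) = 0.5 confirms the limiting case discussed above: as N0 grows without bound, the channel degenerates to p = 1/2.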
So, let us just expand our description: this is actually a single parity check code. Let us say that you get your bits 0 0 1 0 1 1 0 1 0 1 and so on; these are the bits to be transmitted. In the (3, 2) notation, 2 is the number of information bits, that is, the original data bits that you take in, and 3 is the number of coded bits produced for every 2 (in general, k) information bits. So in this case you take 2 bits at a time and output 3 bits at a time, before you send them to the constellation modulator or whatever symbol modulator it is, convert them to RF form, and do all those operations. The number of information bits is 2, the number of coded bits is 3, and what is the rule? The rule is: for each group of 2 bits, whether 0 0, 0 1, 1 0, or 1 1, we add one parity bit such that the block has an even number of 1's; remember, we are now in the domain of modulo-2 addition and multiplication. If you want all the digits in the block of 3 to add up to 0, that means the block must have either no 1's or exactly two 1's, because 1 + 1 is 0. So for 0 0 you have to append 0, because 0 + 0 + 0 is 0; for 0 1 you have to append 1, because 0 + 1 + 1 is 0 (remember, in the system we are operating in, 1 + 1 is 0); for 1 0 you again append 1 for the same reason; and for 1 1 you append 0, because 1 + 1 is already 0. So this is the particular code book that we have: 0 0 maps to 0 0 0, 0 1 to 0 1 1, 1 0 to 1 0 1, and 1 1 to 1 1 0. Now, with this code book, let us say the information bits are 0 0 1 0 1 1 0 1 0 1 1 0, grouped two at a time with a gap between groups: 0 0, 1 0, 1 1, 0 1, 0 1, 1 0. This is enough; let us just add the parity bit for each group.
So, for 0 0, what is our parity bit? It is 0. For 1 0, the parity bit is 1; for 1 1 it is 0; for 0 1 it is 1; for the next 0 1 it is again 1; and for 1 0 it is 1. Now, instead of sending 12 bits, you end up sending 18 bits; this is where the redundancy comes in: the parity bit is the redundancy. Therefore, you incur overheads. In fact, the block codes and binary codes I am talking about have a concept of rate: the number of information bits divided by the number of coded bits. In this case you can compute 12 over 18, or better yet 2 over 3, because for every 2 information bits you have 3 coded bits. Earlier you were sending 1 information bit per transmitted bit, so to speak, in an uncoded fashion; now you are sending 2 information bits in every 3 transmitted bits. So the rate is 2/3, about 66.7 percent, and one-third of the transmitted bits are for redundancy. Technically, you should be careful while defining overhead: if you define it as the extra bits over and above the information bits, you could call it 50 percent overhead, because for every 2 information bits there is 1 extra bit. But maybe I would not call it overhead; let me just call it redundancy: one-third of the bits that you transmit are redundancy. In general, an (N, k) code, which we will go through later, has rate k/N. Now, let us actually analyze what happens with this code. Say that 0 0 0 is sent, and exactly 1 bit error occurs among these 3 bits, that is, exactly one bit gets flipped. Then you receive either 0 0 1, 0 1 0, or 1 0 0; these are the 3 possible single-bit error patterns for this block of 3 coded bits.
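The (3, 2) encoding rule just described can be written as a short Python sketch (the function name is mine, not from the lecture):

```python
def spc_encode(bits):
    """(3, 2) single parity check encoder: for every 2 information bits,
    append one parity bit so that each 3-bit block has even parity."""
    assert len(bits) % 2 == 0, "need an even number of information bits"
    out = []
    for i in range(0, len(bits), 2):
        b0, b1 = bits[i], bits[i + 1]
        out += [b0, b1, b0 ^ b1]   # parity bit = modulo-2 sum of the pair
    return out

# Reproduces the code book: 00 -> 000, 01 -> 011, 10 -> 101, 11 -> 110
print(spc_encode([0, 0]))   # [0, 0, 0]
print(spc_encode([1, 0]))   # [1, 0, 1]

# The 12-bit sequence from the lecture becomes 18 coded bits (rate 2/3).
coded = spc_encode([0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
print(len(coded))           # 18
```

Every 3-bit block produced this way sums to 0 modulo 2, which is exactly the property the receiver will check.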
Now, for all of these you can conclude that an error has occurred, because there is a parity mismatch: if you add up all the bits, you should get 0 modulo 2, but 0 + 0 + 1 is 1, 0 + 1 + 0 is 1, and 1 + 0 + 0 is 1, so in each case you end up with a parity of 1. The rule was that the bits should add up to 0, that is, there should be an even number of 1's; unfortunately, there is an odd number of 1's. So the parity check code can detect single errors (and the no-error case, of course). What is the benefit of this particular code? Say you take bits 2 at a time and code them into blocks of 3: if you get an error in a block of 3 bits, you can say there is some error in these bits and simply not consider them. This allows you to make decisions of the form: an error has happened here, so I will not consider this block. Let us take a particular example with the coded sequence from before, and let us come up with an error pattern, say 0 0 1, 0 0 0, 1 0 0, 0 0 0, 1 1 0, 0 0 1. With a binary symmetric channel, you basically just add these, one below the other: you have y = x + n, where n is 0 with probability 1 − p and 1 with probability p, independent of x. That is the model.
You add the realizations of n bit by bit. The first block, 0 0 0 plus the error 0 0 1, gives 0 0 1; it has odd parity because the number of 1's is odd, so we reject it, that is, we just do not consider it. The next received block, 1 0 1, has an even number of 1's: consider it. The next, 0 1 0, has an odd number of 1's: do not consider it. Then 0 1 1 has an even number of 1's: consider it. Now here is the interesting thing. Until now, every block we saw had at most 1 bit error. But look at the block where there are 2 bit errors: the resulting block actually passes the check, and I am marking it in red because it is wrong. Your (3, 2) single parity check code made a mistake here, because the 2 bit errors among the 3 bits restored the even parity. What was sent was 0 1, coded as 0 1 1, but what arrived was 1 0 1. How do you decode at the receiver? If the parity is correct, you drop the last bit. Say 1 0 1 is received and the parity is correct: the information bits were 1 0 and the parity bit appended was 1, so at the receiver I just take 1 0. Similarly, for 0 1 1, the information bits were the 0 1 in blue, the final 1 was the parity bit, I throw it out and just take 0 1; great. But over here there is a problem: 0 1 1 was sent, but because of 2 errors, 1 0 1 arrived, and I end up taking 1 0, which means I have ended up making 2 bit errors in the information. In other words, whenever there is more than 1 bit error in a block, the parity check code fails, and it actually gives you a wrong answer: it does not even detect that an error occurred; it looks as though no error has happened. This is the price you pay. Similarly, over here, can you guess what happens? There is 1
bit error in that block, the parity comes out odd, and the receiver will not take it. So this is something you can think about: when there are 2 bit errors, the parity check code fails, the failure mode being that it is fooled into thinking that no bit errors have occurred, and you end up making a mistake. The way we have designed the parity check code, all you do at the receiver is drop the last bit, which is the parity bit; if everything is correct, you get the information bits, but unfortunately you may end up making errors. The third bit acts like a check, and it catches single-bit errors, as we discussed. If you want to expand to other sizes, it is very easy. For example, you can do a (5, 4) code: take 4 bits at a time, say 1 0 0 0, 0 0 0 1, 0 1 0 0, and so on, and the last bit always gives you the parity. Let us take an example: if you take 1 0 1 0, it already has an even number of 1's, so add a 0 at the end; if you take 1 0 1 1, add a 1 at the end, so that you get four 1's. In general, the (k + 1, k) parity check code has rate k/(k + 1), and the redundancy is 1/(k + 1). Now, if you make the parity check code larger and larger, there is a problem: there will be many more error patterns with an even number of bit errors, which go undetected, and you will end up making a lot more mistakes. That is something you have to bear in mind.
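Putting the general (k + 1, k) encoder and the receiver-side check together, here is a sketch in Python; the function names are my own, and the receiver simply returns None for a rejected block:

```python
def parity_encode(bits, k):
    """(k+1, k) single parity check encoder: append one parity bit to each
    group of k information bits so that every block has even parity."""
    assert len(bits) % k == 0
    out = []
    for i in range(0, len(bits), k):
        group = bits[i:i + k]
        out += group + [sum(group) % 2]   # parity = modulo-2 sum of group
    return out

def parity_receive(block):
    """Receiver rule: if the block has even parity, accept it and return
    the information bits (dropping the parity bit); otherwise flag an
    error. An even number of flips restores even parity and slips by."""
    if sum(block) % 2 == 0:
        return block[:-1]    # looks clean: keep the information bits
    return None              # odd parity: error detected, discard block

# (5, 4) examples from the lecture
print(parity_encode([1, 0, 1, 0], 4))   # [1, 0, 1, 0, 0]
print(parity_encode([1, 0, 1, 1], 4))   # [1, 0, 1, 1, 1]

# (3, 2) failure mode: 0 1 1 sent, two flips give 1 0 1, accepted as 1 0
print(parity_receive([0, 0, 1]))        # None: single error detected
print(parity_receive([1, 0, 1]))        # [1, 0]: wrong, but undetected
```

The last line demonstrates the failure mode discussed above: a double error passes the parity check and delivers wrong information bits with no warning.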
Now let us look at the probability of making errors in the case of a parity check code; take the (3, 2) parity code. The probability that you make no error in a block of 3 is (1 − p)³. The probability of a single error is 3p(1 − p)². Why? You have exactly one error, and it can be in the first position, giving p(1 − p)², or in the second position, giving (1 − p)·p·(1 − p), or in the third, giving (1 − p)²·p. You have to add these up because they are mutually exclusive, so you get 3p(1 − p)². If either of these two events happens, the code works beautifully: either it detects no errors, in which case you accept the bit sequence and strip away the parity, or there is a single error and the code says, no, I got an error. But if there are more errors, the code fails. In other words, for a block of 3 bits, with 0 errors or 1 error the code works; with 2 errors or 3 errors, which together have probability 3p²(1 − p) + p³, the code cannot detect the errors correctly and will in fact give you incorrect answers. But there is still a benefit from using this code compared with an uncoded system. In the uncoded case, the probability of failure for a single bit is p, and if you take 2 bits, the probability that you make a mistake is p² (both wrong) plus 2p(1 − p) (exactly one wrong). Compared to this, the coded system's undetected-failure probability 3p²(1 − p) + p³, which is of order p², is generally much smaller for small p. You can verify this by doing a
calculation; in fact, there is an exercise you can do. So, finding out that there is a single error is actually useful, but you cannot tell when there are 2 errors or more, because then you are going to make a mistake. One more thing: you cannot correct any errors using a parity code; you can detect some errors, but you cannot correct them. The redundancy is 1/(k + 1), and the rate is k/(k + 1); this is something that we have seen. So this is an example of a parity check code, which can be used to check the parity and detect some errors, but this code cannot correct errors. In the next class, we will be discussing another family of simple error control codes, block codes, that can be used to correct errors as well: you can of course detect some errors, but also correct them, meaning that you can actually get back the information bits despite some of the coded bits being flipped. Let us see that in the next lecture. Thank you.
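As a closing supplement, the block-error probabilities derived above for the (3, 2) code can be checked numerically; a small sketch (the function name is mine, not from the lecture):

```python
from math import comb

def block_error_probs(p, n=3):
    """Probability of exactly m bit flips in an n-bit block over a BSC(p),
    for m = 0, 1, ..., n (a binomial distribution)."""
    return [comb(n, m) * p**m * (1 - p) ** (n - m) for m in range(n + 1)]

p = 0.01
p0, p1, p2, p3 = block_error_probs(p)
print(p0)        # (1 - p)^3: no errors, block accepted correctly
print(p1)        # 3p(1 - p)^2: single error, detected and rejected
print(p2 + p3)   # 3p^2(1 - p) + p^3: undetected, the code fails silently
```

For p = 0.01 the undetected-failure probability is on the order of 3 × 10⁻⁴, much smaller than the order-p failure rate of an uncoded pair of bits, which is the benefit claimed in the lecture.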