Welcome to this lecture on digital communication using GNU Radio. My name is Kumar Appaiah, and I belong to the Department of Electrical Engineering at IIT Bombay. This lecture is a continuation of our introduction to error control codes, in particular block codes. In the previous lecture, we discussed parity check codes. Recall that in a single parity check code, we added one bit of overhead so that the whole codeword became an even parity word: after adding the redundancy, the number of ones in the coded bit vector was even, or, viewed from the XOR perspective, XORing all the bits of the codeword gives 0. We saw that if there were no errors you could confirm as much, and if there was a single error you could detect it and discard that code block; but if there were two or more errors, the code fails and you end up making mistakes. We will now continue our discussion with a different class of codes: today we look at the repetition code. We will restrict our consideration to odd repetition codes, where each bit is repeated an odd number of times. You can of course repeat an even number of times as well, but with an odd number of repetitions the receiver can decode simply by majority logic, as we will see. Let us take the simplest example, the (3,1) repetition code. Here (3,1) means there are 3 coded bits for every 1 information bit: you take one bit and send 3 coded bits.
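As a small aside, the encoding rule just described can be sketched in a few lines of Python; the function name is my own illustration, not a GNU Radio block:

```python
def repetition_encode(bits, n=3):
    """Encode a list of information bits with an (n, 1) repetition code."""
    coded = []
    for b in bits:
        coded.extend([b] * n)  # repeat each information bit n times
    return coded

# One information bit becomes three coded bits.
print(repetition_encode([0, 1]))  # -> [0, 0, 0, 1, 1, 1]
```

The same function covers the (5,1) and (7,1) codes discussed later by passing n=5 or n=7.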
So this is actually a very inefficient code, in the sense that you are reducing the bit rate to one-third, from one down to one-third. In exchange, unlike the parity check code, you can not only detect a single bit error but also correct it. How? Let us have a small discussion. In general, an (n,1) repetition code just repeats every information bit n times. The rate is 1/n, and the redundancy, or overhead if you wish to call it that, is (n-1)/n = 1 - 1/n. How does this work? Let us take the (3,1) repetition code and the information bits 0, 1, 0, 0, 1, 0, 1, 1; that is enough, I think, because the coded bits will be three times as long. What will the (3,1) repetition code do? It produces 000, 111, 000, 000, 111, 000, 111, 111. Now let us add a BSC noise realization: we toss a biased coin with probability p for each bit and write down the noise sequence, say 001, 010, 010, 000, 110, 100, 000, 010. Remember it is a block code, so we take blocks of 3 and make a decision. What is the method? Essentially, compare against the valid codewords, and our codewords are 000 and 111. Take the first block: we sent 000 but receive 001, because 0 + 1 = 1 in the last position. One thing that is very clear is that the only codewords are 000 and 111, so if you do not receive either of these, something wrong has occurred. Here we receive 001, which is neither 000 nor 111. What conclusion should we draw? Typically, that the number of errors that occurred is the minimum possible; this is something we will talk about formally.
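The encode-then-add-noise step can be sketched in Python as follows; the XOR models the BSC, the hard-coded sequences are the ones from the worked example above, and the helper for generating fresh noise realizations is my own illustration:

```python
import random

def bsc_noise(length, p, rng=random):
    """Generate a BSC noise realization: each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

# Coded bits for information bits 0,1,0,0,1,0,1,1 and the example noise pattern.
coded = [0,0,0, 1,1,1, 0,0,0, 0,0,0, 1,1,1, 0,0,0, 1,1,1, 1,1,1]
noise = [0,0,1, 0,1,0, 0,1,0, 0,0,0, 1,1,0, 1,0,0, 0,0,0, 0,1,0]
received = [c ^ e for c, e in zip(coded, noise)]  # BSC output = codeword XOR noise
print(received[:3])  # first block: [0, 0, 1], which is neither 000 nor 111
```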
But the most likely error sequence in a block is 000: no error, no error, no error, which has probability (1-p)^3. Why is that the case? Let us think about it. We said 0 <= p < 1/2, so each factor 1-p is a number more than a half, and the no-error pattern has probability (1-p)^3. Any sequence with a single error, such as 001, has probability p(1-p)^2, which is obviously going to be smaller than (1-p)^3. Maybe I will write this again more carefully. Consider the probability of one error in a block of 3: with 3 binary symmetric channel realizations, you can have an error in the first, the second, or the third position, with probabilities p(1-p)^2, (1-p)p(1-p), and (1-p)^2 p respectively. You have to add these if you want the probability of exactly one error, but each individual term, say p(1-p)^2, is less than or equal to (1-p)^3. Why? Because (1-p)^2 is common to both; taking it away leaves p versus 1-p, and p < 1-p whenever p < 1/2. This means that if you want the maximum likelihood event, it is the one where there is no error.
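A quick numerical check of this inequality, with p = 0.1 as an illustrative value:

```python
p = 0.1
no_error = (1 - p) ** 3        # probability of noise pattern 000
one_error = p * (1 - p) ** 2   # probability of one specific single-error pattern, e.g. 001
print(no_error, one_error)     # about 0.729 vs 0.081: no error is far more likely
assert one_error < no_error    # holds for any 0 <= p < 1/2
```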
Therefore, your first guess would be that there is no error; but when 001 has been received, that is impossible, and the most likely occurrence is that there was one error. Of course, other events are possible, but one error is the most likely. Now, there are multiple ways you can get 001. For example, you could have sent 111 and the first and second bits got flipped to 0, but that corresponds to two errors. If you sent 000 and only the last bit got flipped, that corresponds to one error. Each two-error pattern has probability p^2(1-p), which is even smaller than p(1-p)^2. So the probability of having no errors is the highest, but given that an error did occur, the probability of one error is much higher than the probability of two errors. In other words, the most likely event here is that you sent 000 and received 001, and the less likely event is that you sent 111 and the first two bits got flipped. Writing out the Bayes-style comparison: sending 000 and getting 001 has probability p(1-p)^2; sending 111 and getting 001 has probability p^2(1-p). Since our p is less than a half, it is safe to conclude that 000 was the more likely transmission, and since 000 was sent, we say the information bit is 0.
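This likelihood comparison can be sketched as a small Python function; the code is my own illustration, and it assumes equally likely codewords so that comparing likelihoods is enough:

```python
def likelihood(sent, received, p):
    """P(received | sent) over a BSC(p): factor p per flipped bit, 1-p per unflipped bit."""
    flips = sum(s != r for s, r in zip(sent, received))
    return (p ** flips) * ((1 - p) ** (len(sent) - flips))

p = 0.1
r = [0, 0, 1]
l000 = likelihood([0, 0, 0], r, p)  # one flip:  p(1-p)^2 = 0.081
l111 = likelihood([1, 1, 1], r, p)  # two flips: p^2(1-p) = 0.009
print(l000, l111)  # 000 is the more likely transmitted codeword
```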
Note that we decide 0 not because the first bit is 0, but because there are more 0s: majority logic. For the second block, we receive 101; again, 101 is not a codeword, since the codewords are 000 and 111. What are the possible occurrences? Either 000 was sent and the first and last bits got flipped, with probability p^2(1-p), or 111 was sent and only the middle bit got flipped, with probability p(1-p)^2, which is more likely because p is less than a half. So again it boils down to majority logic: we conclude 1 was sent. Can you guess what happens in the third block? You receive 010, which is the same situation as the first block: decode 0. In the fourth block there are no errors, so you directly conclude 0. Something interesting happens, however, in the fifth block, where 111 is sent and the noise pattern is 110: the first two bits get flipped and you receive 001. Now there is a problem, because your maximum-likelihood type of detection says to assume the minimum number of bits got flipped, and the codeword for which the minimum number of flips produces 001 is obviously 000. So here you will make an error in spite of using the repetition code: the code is fine if 0 bits or 1 bit out of 3 get flipped, but if you flip 2 or 3, it makes a mistake. The remaining blocks decode correctly: 100 gives 0 by majority logic, 111 gives 1, and 101 gives 1. Comparing with the original bit stream: this 0 is correct, this 1 is correct, the two 0s are both correct, then unfortunately there is a mistake on the 1, and the final 0, 1, 1 are correct. An error occurs when 2 or 3 bits are flipped in a block of 3 bits.
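The majority-logic decoding of the whole received stream can be sketched as follows; the received sequence is the one from the worked example, and the function name is my own:

```python
def majority_decode(received, n=3):
    """Decode an (n, 1) repetition code by majority vote in each block of n (n odd)."""
    decoded = []
    for i in range(0, len(received), n):
        block = received[i:i + n]
        decoded.append(1 if sum(block) > n // 2 else 0)  # more ones than zeros -> 1
    return decoded

received = [0,0,1, 1,0,1, 0,1,0, 0,0,0, 0,0,1, 1,0,0, 1,1,1, 1,0,1]
print(majority_decode(received))  # -> [0, 1, 0, 0, 0, 0, 1, 1]
# The fifth bit is wrong (sent 1, decoded 0): two flips in that block defeat the code.
```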
So let us now compare. On a BSC, if you use uncoded transmission, the error probability for every bit is p. If you use the (3,1) repetition code, each bit is repeated 3 times, and to make a mistake you need 2 or more errors in a block of 3; that is, among 3 binary symmetric channel realizations, at least 2 must be errors. The probability of exactly 2 errors is 3p^2(1-p), and of 3 errors is p^3, so the error probability is 3p^2(1-p) + p^3. Let us actually calculate this with an example: p = 1/10, a BSC where every information bit has a one-tenth probability of getting flipped. Then the (3,1) error probability is 3 x (1/10)^2 x (9/10) + (1/10)^3 = 27/1000 + 1/1000 = 0.028, approximately 0.03. So from 0.1 you have reduced the error probability to 0.028, a significant reduction. If you now expand to, let us say, a (5,1) repetition code, you have a different scenario: you take majority logic over 5 bits, and you make an error only when 3 or more of the 5 coded bits are flipped. You can evaluate the probability; it will be (5 choose 3) p^3 (1-p)^2 + (5 choose 4) p^4 (1-p) + p^5, which is even smaller, because the leading term here is of order p^3, whereas for the (3,1) code the leading term was 3p^2, so you
can approximate the (3,1) error probability as approximately 3p^2 and, similarly, the (5,1) error probability as approximately (5 choose 3) p^3. So whenever p is a small number like 0.1, using a repetition code significantly reduces the error probability: the bit error probability 3p^2(1-p) + p^3, approximately 3p^2, is much lower than p, as we just showed. Now, the key issue is that the rate of an (n,1) repetition code is 1/n, and thus it is very inefficient. You get the benefit of correcting more errors: with (3,1) you can correct 1 error, with (5,1) you can correct 2, and with (7,1) you can correct 3. But the price you pay is that the rate drops from one-third to one-fifth to one-seventh. So the repetition code is useful in scenarios where you just have to send with great reliability and want a very simple decoding logic: at the receiver, all you need to do is take a block of n bits, where n is an odd number, check whether the number of zeros or the number of ones is greater, and conclude that that is what was sent. Given that, the repetition code is very simple to implement, but the price you pay is that the data rate becomes very, very low. In extreme scenarios where you have a very poor signal to noise ratio, with p very close to 0.5, the use of a repetition code may be very useful, but in practical scenarios there are better codes available. One thing we will see in the coming lectures is what more efficient codes there are which you can use. In the next lecture, we are going to introduce the concept of linear block codes and then go towards Hamming codes. Thank you.
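As a closing sketch of this rate-versus-reliability tradeoff, the error probability of a general (n,1) repetition code over a BSC can be tabulated with a short Python function; majority vote fails when more than n//2 bits flip, and p = 0.1 is just the example value from this lecture:

```python
from math import comb

def repetition_error_prob(n, p):
    """P(majority-vote error) for an (n, 1) repetition code over a BSC(p), n odd."""
    t = n // 2  # the code corrects up to t errors; it fails when t+1 or more bits flip
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1, n + 1))

for n in (1, 3, 5, 7):
    # columns: block length n, rate 1/n, bit error probability
    print(n, 1 / n, repetition_error_prob(n, 0.1))
```

Running this shows the error probability falling (0.1, then 0.028 for n=3, then smaller still) while the rate falls just as fast, which is exactly the inefficiency discussed above.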