Okay, so today we're going to continue talking about Reed-Solomon codes, and then compare them with the BCH codes that we did back in the day. So let me begin once again with the definition of Reed-Solomon codes. Suppose you want a t-error-correcting RS code over GF(2^m) of length n. What should n be? If you want to work over GF(2^m), then n should be at most 2^m - 1. And then how do I define the code? I can define the RS code as all multiples of what? Of the polynomial (x + β)(x + β^2) ··· (x + β^{2t}). So the codewords are all c(x) that are multiples of this, with deg c(x) ≤ n - 1. That is a perfectly valid definition of the Reed-Solomon code. This polynomial is called the generator polynomial, g(x), and it controls a lot of the properties of the code.

Like I said before, instead of starting from β and going up to β^{2t}, you can start at β^b: take the roots to be β^b, β^{b+1}, β^{b+2}, and so on up to β^{b+2t-1}. You don't even have to go one step at a time; you can go in an arithmetic progression, as long as the common difference is relatively prime to the order of β. For instance, it's very common to take β to be a primitive element of GF(2^m). So there are things like that you can do. The most common choice, starting the roots at β itself, defines what's known as a narrow-sense Reed-Solomon code; if you pick the roots like this, the code is narrow-sense. And similarly, instead of picking n less than 2^m - 1, you might want to pick n = 2^m - 1, in which case you get a primitive RS code. So those are just some terminology and jargon, not really any major theory; it's just to fix the ideas. Those are some additional things which I didn't mention before that you can do with Reed-Solomon codes.

And the parameters: the dimension k is basically n - 2t, the minimum distance d is 2t + 1, and the code is, of course, t-error-correcting. Now the code is over GF(2^m), it is of length n, and each coordinate of a codeword is an element of GF(2^m). So how many codewords are there in this code? The size of the code, the number of codewords, is (2^m)^k. So that's a rather large code.

Then, as far as the decoder is concerned, we have a bounded-distance decoder, and it can be described algebraically. The basic idea is this: you compute the syndromes S_1 through S_{2t}. Then you consider the error-locator polynomial (1 + X_1 x)(1 + X_2 x) ··· (1 + X_w x), where w is the number of errors. If you write this polynomial out as σ(x) = 1 + σ_1 x + ··· + σ_w x^w, you get linear equations relating σ_1, σ_2, ..., σ_w and S_1, S_2, ..., S_{2t}. Using these linear equations, you solve for σ_1 through σ_w, and then you go and find the roots of σ(x); they give you the error locations. Once you find the error locations, a simple computation gives you the error values. So those are the steps in the decoder.
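To make the definition concrete, here is a minimal sketch of how the generator polynomial g(x) = (x + β)(x + β^2) ··· (x + β^{2t}) can be built over GF(2^4). The primitive polynomial x^4 + x + 1 and the choice of β as the field element 'x' are assumptions made purely for illustration; nothing in the lecture fixes them.

```python
# A sketch, not a hardened implementation: build the RS generator polynomial
# over GF(2^4) defined by the primitive polynomial x^4 + x + 1 (an assumed
# choice), with beta = the field element 'x' (integer 2) as primitive element.

PRIM_POLY = 0b10011   # x^4 + x + 1
FIELD = 16

# log/antilog tables for GF(16): exp_table[i] = beta^i
exp_table = [0] * (FIELD - 1)
log_table = {}
val = 1
for i in range(FIELD - 1):
    exp_table[i] = val
    log_table[val] = i
    val <<= 1                      # multiply by beta = x
    if val & FIELD:                # reduce modulo the primitive polynomial
        val ^= PRIM_POLY

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % (FIELD - 1)]

def poly_mul(p, q):
    # polynomial product over GF(16), coefficients listed lowest degree first;
    # addition in a field of characteristic 2 is XOR
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

def rs_generator(t):
    # g(x) = (x + beta)(x + beta^2) ... (x + beta^2t)
    g = [1]
    for i in range(1, 2 * t + 1):
        g = poly_mul(g, [exp_table[i], 1])   # factor (beta^i + x)
    return g

print(rs_generator(2))   # degree-4 generator of the (15, 11) RS code over GF(16)
```

For t = 2 this prints the degree-4 generator of the (15, 11) Reed-Solomon code over GF(16); any t up to 7 works with the same tables.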
Another way to describe the decoder, and this is very commonly done, is to look at this product; in fact, this is how we derived those equations. Where do the linear equations come from? From the product σ(x) · S(x). If you view this product modulo x^{2t+1}, what should happen? You will get some polynomial; maybe I'll call it Z(x). What's special about this polynomial? Its degree is at most w. And in a way, in your decoder, you're also trying to minimize w. Now, S(x) is given to you; σ(x) you don't know, you have to find it; and you know what Z(x) = σ(x) · S(x) mod x^{2t+1} must look like. Why am I working modulo x^{2t+1}? Because beyond x^{2t} there will be some terms we cannot control: up to x^{2t} the coefficients are pinned down by the known syndromes, but from x^{2t+1} onwards they are not.

So another way of phrasing the same decoding problem is: find σ(x) such that σ(x) · S(x) mod x^{2t+1} has the least possible degree. What does modulo x^{2t+1} mean? Just throw away all terms of degree 2t + 1 and higher. That product should have the least possible degree; once you find such a σ(x), you're done with the decoding, and you simply take it as the error-locator polynomial. There is some minor proving needed to show that the two formulations are equivalent, but you can do that. In the method I wrote down first, we solved a set of linear equations: you start at w = t, and if that fails you keep going to t - 1, then t - 2, until you get to the lowest possible degree. So it's just the same as this formulation.

Most of the nicer decoding methods, the ones that can actually be implemented in practice, use this approach, and they are iterative. They start with a σ(x) of degree 1 (a σ(x) of degree 0 would mean there are no errors) and check whether the equation is satisfied. If it is not satisfied, there will be some higher-degree terms left over, so you go to second degree, third degree, and so on. You keep iteratively refining σ(x) until the equation holds. So think about that for a while. Berlekamp's decoding method, the Euclidean method, and related techniques all use this idea of solving this one equation. In fact, this equation is called the key equation; the name reflects its importance. So remember, you have to find the least-degree σ(x), the one which gives you the lowest-degree Z(x). In a way, there are several "least degree" conditions going on here. Find the σ(x) which gives the least possible degree for Z(x): that is the correct rule, and it will invariably mean that you find the least-degree σ(x) as well. Is that clear? So that's the idea behind Berlekamp decoding. I didn't emphasize this part because I think some people are doing projects on it; hopefully, when they make their presentations, they will go into some detail here.
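As a sanity check on the structure of the key equation, here is a tiny worked example. It runs over the prime field GF(7) purely for convenience (real RS decoders work over GF(2^m), but the algebra of σ(x) · S(x) mod x^{2t+1} is identical), and the locators, error values, and t = 2 are made-up toy data.

```python
# A toy check of the key equation over GF(7) (illustrative field choice):
# two errors, t = 2, so the coefficients of x^{w+1} .. x^{2t} in
# sigma(x) * S(x) must vanish, and the remainder Z(x) has degree <= w.

P = 7
t, w = 2, 2
X = [3, 5]    # error locators (a real decoder would not know these)
e = [2, 4]    # error values

# syndromes S_l = sum_k e_k * X_k^l, packed as S(x) = S_1 x + ... + S_2t x^2t
S = [0] + [sum(ek * pow(Xk, l, P) for ek, Xk in zip(e, X)) % P
           for l in range(1, 2 * t + 1)]

# sigma(x) = (1 - 3x)(1 - 5x) = 1 + 6x + x^2 over GF(7)
sigma = [1, 6, 1]

# multiply and truncate modulo x^(2t+1)
prod = [0] * (2 * t + 1)
for i, a in enumerate(sigma):
    for j, b in enumerate(S):
        if i + j <= 2 * t:
            prod[i + j] = (prod[i + j] + a * b) % P

print(prod)                               # [0, 5, 1, 0, 0] -> Z(x) = 5x + x^2
assert all(c == 0 for c in prod[w + 1:])  # x^3 and x^4 coefficients vanish
```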
Any questions on this? Okay, please. [A student asks whether the two formulations really coincide.] Yes, both of them end up solving the same problem. Like I said, there is some part of it which needs to be proved, but it will not be very difficult; you can prove it. The correct rule is definitely: find the σ(x) such that Z(x) has least degree, and that will invariably mean that σ(x) has least degree too. Some odd cases can arise, more than t errors for instance, so you have to rule all of those out when you prove it. Okay? So that's the bounded-distance decoder, and I am not going to go into too many more details on it.

The last thing I want to cover is correcting a mixture of errors and erasures; we saw this for the BCH decoder as well. But before that, I want to mention something very quickly. Suppose I don't pick n = 2^m - 1; I pick n smaller than 2^m - 1. What happens? What is nice about picking n = 2^m - 1? You get a cyclic code; maybe you're happy about that. It doesn't really matter, since the multiples-of-g(x) definition is valid all the time, but maybe you like having a cyclic code, and so on. So that's one reason. But if I pick n to be simply smaller than 2^m - 1, the code that I get is actually very closely related to the code that you get with n = 2^m - 1. What is the relationship? Can you see it? Just as we did for the BCH code, you can look at the whole parity-check matrix. When I restrict to length n, what am I doing? I'm simply setting the remaining coordinates to 0. So when I make n less than 2^m - 1, I have a shortened version of the original code. It's not really different from the primitive n = 2^m - 1 code: if you just append 0s to the end, you will get a codeword of the n = 2^m - 1 code. So it's a shortened version; that's one thing to keep in mind.

So let's forget about that for a while and move on to correcting errors and erasures. What's the model when I have errors and erasures? So far we've been looking at w errors, with w ≤ t. In general, we can have w errors and, say, e erasures. What do I mean by e erasures? There are e positions in my received vector that are marked as erased. So my received vector is now r = (r_0, r_1, ..., r_{n-1}), where each r_i belongs to GF(2^m) ∪ {ε}, and ε denotes an erasure. Each coordinate that I receive is either an element of GF(2^m) or this special erasure symbol. What does that mean? It means that some block before my decoder, some communication block upstream, has somehow concluded that those m bits are not very reliable, based on other information, soft information and so forth. So it is alerting my decoder to the very strong possibility that that position might be in error. So let's call this the erasure idea.
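The shortening remark above can be made concrete with a purely structural toy; no field arithmetic is involved, and the helper names and the codeword are hypothetical.

```python
# Purely structural toy: a shortened codeword is the full-length codeword
# with its leading zeros dropped; padding the zeros back recovers a codeword
# of the full-length (primitive) code. Names and values are hypothetical.

def shorten(full_codeword, drop):
    assert all(c == 0 for c in full_codeword[:drop])
    return full_codeword[drop:]

def unshorten(short_codeword, drop):
    return [0] * drop + short_codeword

full = [0, 0, 0, 5, 1, 7, 2]     # a made-up length-7 codeword
short = shorten(full, 3)         # the corresponding length-4 codeword
assert unshorten(short, 3) == full
```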
So now, when you have w errors and e erasures, there is some confusion at the decoder, and the reason is: what is the first step that you do in your decoder? You compute the syndrome, which assumes that you know r exactly, or at least that you have some value for every coordinate; with ε you can't compute anything. Right? So what do you do? You have to make some substitution. It turns out that in BCH we had to do something special. How many of you remember what we did for BCH? We ran the decoder with the erasures set to all 0s and again with all 1s, two decodings, and then everything could be sorted out. Here that's not needed. It's enough if you simply substitute all 0s, for example, or anything else; you can take arbitrary values for those erasures and proceed with your decoder. So the first step when you have erasures is: set the erased positions to 0. Whenever you have ε in your received vector, simply put 0. In fact, instead of 0, anything else would do; it doesn't matter, it won't change anything.

Now let's go and look at our locator polynomials. You have the error-locator polynomial σ(x) = (1 + X_1 x)(1 + X_2 x) ··· (1 + X_w x); the X_j mark the locations of the errors, and you don't know any of X_1 through X_w. And then you also have what I will call the erasure-locator polynomial, Γ(x) = (1 + X'_1 x)(1 + X'_2 x) ··· (1 + X'_e x). What are these X' values? The X_j are the unknown error locators, while the X'_j are the known erasure locators. Is that clear? So you now have two polynomials: σ(x), an unknown degree-w error-locator polynomial, and Γ(x), a known degree-e erasure-locator polynomial. Now what I can do is multiply σ(x) and Γ(x). What will the degree of the product be? w + e; but Γ(x) I know.

And when I multiply things out, I will again have very similar possibilities as before. Remember, what will e(x), the error that actually happened, look like? It will have an error part and an erasure part: e(x) = e_1 x^{i_1} + ··· + e_w x^{i_w} + e'_1 x^{j_1} + ··· + e'_e x^{j_e}, where the i's are the unknown error positions and the j's are the known erased positions. And the l-th syndrome S_l will be what? e(β^l). Remember, X_k = β^{i_k} and X'_k = β^{j_k}; there are too many indices here, so keep that picture in mind. These are my error locators and erasure locators. So when I evaluate e(β^l), what do I get? S_l = e_1 X_1^l + ··· + e_w X_w^l + e'_1 (X'_1)^l + ··· + e'_e (X'_e)^l. I am going a little fast, but I can slow down a little bit also, okay? So these are my error locators and these are my erasure locators; the error locators are unknown, the erasure locators are known. So now, if I define an S(x) from these syndromes and multiply S(x) by σ(x) · Γ(x), I can use my same simplification rule as before and conclude the analogous thing: each error term is killed by a factor of σ(x), since σ(1/X_k) = 0, and each erasure term is killed by a factor of Γ(x), since Γ(1/X'_k) = 0. All right?
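Here is a sketch of the erasure pre-processing just described, again over GF(7) for convenience: erased positions (marked None) are filled with 0, and the known erasure-locator polynomial Γ(x) is assembled from their positions. The primitive element β = 3 of GF(7) and the received word are toy assumptions.

```python
# Erasure pre-processing over GF(7) (toy field): fill erased coordinates
# (None) with 0 and build Gamma(x) = prod_j (1 - X'_j x), with X'_j = beta^j
# for each erased position j. beta = 3 is a primitive element of GF(7).

P, beta = 7, 3
received = [4, None, 2, 0, None, 5]   # made-up length-6 word, erasures at 1, 4

erased = [i for i, r in enumerate(received) if r is None]
filled = [0 if r is None else r for r in received]  # any value works; 0 is simplest

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % P
    return out

gamma = [1]
for j in erased:
    Xj = pow(beta, j, P)
    gamma = poly_mul(gamma, [1, (-Xj) % P])   # factor (1 - X'_j x)

print(filled)   # [4, 0, 2, 0, 0, 5]
print(gamma)    # [1, 0, 5]  i.e.  1 + 5x^2 over GF(7)
```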
So I can use my same method as before, multiply σ(x) · Γ(x) by S(x), and conclude that the coefficients of x^{w+e+1} through x^{2t} in that product will have to be 0. Is that okay? So now I can conclude that σ(x) · Γ(x) · S(x) has no terms from x^{w+e+1} to x^{2t}.

Okay, so what do we do next? I solve, but only for σ. That's the important thing; keep it in mind. Well, I know Γ(x) already, so I'll take Γ(x), happily multiply it with S(x), and get some other polynomial. Let me call it something else; I don't care what, the point is that it is known. And then I'll take my σ(x) and multiply by that new polynomial that I know. So this part is known; that's the crucial part. So how many unknowns do we have in these equations? Only w unknowns. And how many equations can you hope to get? 2t - w - e. So as long as 2t - w - e is at least w, I can hope to solve it, and you can show that you can indeed solve it. So that's the next step: I can solve for the w variables σ_1, σ_2, ..., σ_w whenever 2t - w - e ≥ w. If you turn that inequality around, you'll see w + e/2 ≤ t. It's the same as what we had before: the number of errors plus half the number of erasures should be at most the error-correcting capability, and then you can solve it.

And the method is even simpler than for the BCH code: you don't have to make any major modification to your decoder, and you don't have to run two different decoders, one with 0s and one with 1s, and get confused and all that. You simply set the erasures to any arbitrary value (for instance, if you want some simplicity, just set them to 0), compute the syndromes, take your erasure-locator polynomial, multiply it with your S(x), and then proceed with the same decoder. No major changes. So, okay? Any questions on this?

Okay, so erasure correction is very popular. For instance, you can imagine hard drives; not CDs so much, but hard drives in particular, where some sector is known to be bad. Once in a while a scan will go through and mark some sectors, some parts of the disc, as bad. Or imagine a scratch on the disc: there can be circuitry which finds out whether there is a scratch or not, and what we read out of a scratch is just nonsense. So whenever you encounter a scratch, the circuitry that is there to detect the scratch will tell your decoder: you can be quite sure these positions are bad, just erase them. So they will be erased and go as erasures to your decoder, and the decoder will use those erasures and try to correct. So what's the big deal about having erasures? For instance, if w is zero, you can correct twice as many erasures as errors, up to 2t of them, and that helps. And mostly, if you imagine a scratch on the disc, it's going to be sequential. A scratch on the disc will introduce what's known as a burst erasure. It's not like random errors: once the errors start, there will be many more errors for a while. And you can imagine, naturally, that this is very well suited to a Reed-Solomon code. Why is that? Because a long sequence of bit errors translates into very few symbol errors.
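Coming back to the counting condition derived a moment ago, 2t - w - e ≥ w, or equivalently 2w + e ≤ 2t, it can be written as a one-line check:

```python
# With w errors and e erasures there are 2t - w - e usable equations and
# w unknowns, so decoding succeeds exactly when 2w + e <= 2t.

def correctable(w, e, t):
    return 2 * w + e <= 2 * t

t = 8
assert correctable(8, 0, t)       # t errors, no erasures
assert correctable(0, 16, t)      # no errors, 2t erasures
assert not correctable(5, 7, t)   # 2*5 + 7 = 17 > 16
```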
If you go to GF(2^m) symbols, m bit errors in a sequence will most probably amount to just one or two symbol errors. One or two, that's it. So that gives you an advantage when using Reed-Solomon codes, and that's why, naturally, for these kinds of applications (optical drives, hard drives, magnetic storage) Reed-Solomon codes are used quite heavily. Today there have been some changes too, but Reed-Solomon codes have always been popular for this.

[A student asks how BCH and Reed-Solomon compare in terms of encoding.] Yes, I'm going to talk about BCH versus Reed-Solomon; the question is about how to compare BCH and Reed-Solomon in terms of encoding. I'll give you the comparison, and there are some very stark differences between the two; we're going to go over that next.

So one last thing I want to mention first is this notion of burst error-correcting capability. What is the burst error-correcting capability of a Reed-Solomon code? We know the error-correcting capability is t. What do I mean by that? It can tolerate t random symbol errors. So once again, I have to define this very carefully. Suppose I have an (n, k), t-error-correcting RS code over GF(2^m). What my encoder actually does is take mk bits and encode them into mn bits, and those bits are sent over the channel. Suppose first that the channel introduces random errors. What is the maximum error-correcting capability? It is t; you can't go beyond t, because there is some (t + 1)-symbol error pattern which cannot be corrected by my bounded-distance decoder.

On the other hand, suppose I say the channel is constrained to introduce errors in bursts: it starts at one point and can introduce errors for B bits starting at that point, and nowhere else. If I say that, what is the maximum B that can always be corrected by this decoder? That's my question. Did you understand? The channel can introduce errors only in bursts of some length B; what is the maximum B that is always correctable? For instance, when B = 1, clearly it's correctable. B up to t is also correctable: t bit errors, whether they come in a burst of length t or at random, can be corrected. So obviously up to t it's possible. Is anything more than t possible if you are restricted to a burst? That's the question.

[A student answers: (t - 1)m + 1.] Yes. So do you see why you get that? The tempting thing is to say m times t, that m·t is the burst error-correcting capability. Imagine what is happening: the codeword going through the channel is mn bits, but it is grouped as c_0, which is m bits, then c_1, which is another m bits, then c_2, another m bits, and so on, till the last one, c_{n-1}. Now, my burst can start anywhere; that is the important thing to keep in mind. My burst can start anywhere and go for a length B. If it were constrained to start only at a symbol boundary, then even if it is of length mt, I will only have t symbol errors. But it can start exactly one bit before the boundary; that's the worst case. So what can happen then? The safe length is (t - 1)m + 1.
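The worst-case alignment argument is easy to verify mechanically: a burst of B bits starting at bit offset s within a stream of m-bit symbols touches ceil(((s mod m) + B) / m) symbols, and (t - 1)m + 1 is the largest B that never touches more than t of them. A quick check, with m = 8 and t = 4 as arbitrary toy values:

```python
from math import ceil

def symbols_hit(B, s, m):
    # number of m-bit symbols touched by a B-bit burst starting at offset s
    return ceil(((s % m) + B) / m)

m, t = 8, 4
B = (t - 1) * m + 1                                                # 25 bits
assert max(symbols_hit(B, s, m) for s in range(m)) == t            # always OK
assert max(symbols_hit(B + 1, s, m) for s in range(m)) == t + 1    # one more bit fails
print(B)
```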
If you add even one more bit, the burst can touch one more symbol, and that takes you to t + 1. So only bursts of up to (t - 1)m + 1 bits can be guaranteed, 100%, to be corrected by this decoder. Is that clear? Do you see the problem? The burst can cover t - 1 full symbols, plus one bit that lands in a neighbouring symbol, so that symbol is also affected; if I add even one more bit, one more symbol is affected, and that takes me past t. So the maximum always-correctable burst is B = (t - 1)m + 1 bits. So that's a huge advantage of Reed-Solomon codes if you know the errors are going to be in bursts, and there are several reasons why they would be. Fading, for instance, is one more scenario you can think of: if you go into a fade, the channel stays in the fade for a while and then comes out. Those are just some different ways of motivating the burst model; of course, nobody uses Reed-Solomon codes in wireless, but there are ways in which you can justify it. So this is burst error-correcting capability.

Now, if you know your errors are going to come in bursts, there is a way to improve your burst error-correcting capability. It is what's known as interleaving, and it is a very common strategy, used in hard drives particularly. It's a very simple thing. Suppose I interleave to depth λ: I take λ codewords (I need some notation here), say c_0^(1), c_1^(1), ..., c_{n-1}^(1), then c_0^(2), c_1^(2), ..., c_{n-1}^(2), all the way down to c_0^(λ), c_1^(λ), ..., c_{n-1}^(λ), stacked as the rows of an array. Normally I would send one codeword at a time, row-wise, right? But instead of sending it row-wise, I can send it column-wise. If I send it column-wise, what happens to my burst error-correcting capability? It roughly gets multiplied by λ. Of course, you have to be slightly careful here; the burst can again position itself awkwardly, so the exact expression picks up the same kind of plus-and-minus-one adjustments. But roughly, your burst error-correcting capability gets multiplied by λ. Do you see why? If I send column-wise, then even if a whole run of λ consecutive transmitted symbols is wiped out, how many errors am I introducing in each codeword? Because the run is spread across all of the codewords, only one error is introduced in each codeword. And roughly I can afford t such wipe-outs per codeword, so a burst of about t·m·λ bits is what we can expect. Of course, there is an adjustment you have to make; it won't be exactly tmλ, it will be something like ((t - 1)m + 1) scaled by λ with a small correction, but roughly the capability multiplies by λ.

So this is a strategy that's also used. What's the penalty you pay when you implement this strategy, compared to the previous one? What is the penalty in an implementation of this code? Yes: you need to add memory, and there will be something called latency. Latency could be a critical thing in your application; if your application cannot tolerate that latency, it won't work. You have to wait for λ codewords before you can send anything out, and likewise, on the receiving side, you have to wait for λ codewords before you can decode. So those are problems you can run into in real life. But this is the standard way; interleaving is a very common thing that's done. Later on, in the next course, you'll see that interleaving is used for more fancy things than just this.
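Here is a minimal sketch of that block interleaver: depth-λ interleaving writes λ codewords as rows and transmits column by column, so a channel burst is shared out across the λ codewords. The function names and the toy sizes (depth 3, length 4) are illustrative only.

```python
# A sketch of depth-D block interleaving; names and sizes are illustrative.

def interleave(codewords):
    # codewords: D equal-length rows; returns the column-wise symbol stream
    return [row[i] for i in range(len(codewords[0])) for row in codewords]

def deinterleave(stream, depth):
    n = len(stream) // depth
    return [[stream[i * depth + d] for i in range(n)] for d in range(depth)]

cw = [[f"c{d}{i}" for i in range(4)] for d in range(3)]   # D = 3 codewords, n = 4
tx = interleave(cw)
print(tx)          # ['c00', 'c10', 'c20', 'c01', 'c11', 'c21', ...]
assert deinterleave(tx, 3) == cw
```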
But for now, we can think of burst error correction as the good reason for doing interleaving. So the next thing I'm going to do is the comparison between BCH and Reed-Solomon codes. It is a bit difficult to make a proper apples-to-apples comparison, and you'll see why: Reed-Solomon codes are over GF(2^m) and have a symbol error-correcting capability, while BCH codes are binary, and all that. So it's a bit confusing to make the comparison, and I'll set it up in a certain way. I will say: my block length is fixed at n bits. Somehow I fixed that; my block length has to be n bits. Not symbols or anything else: n bits. And let's say the error-correcting capability needed is t bit errors.

One can imagine why this is reasonable. If you're building a communication system, you might know that you cannot wait for more than n bits. There are latency requirements, and there will be circuitry in your communication system which tracks the timing: as the bits keep coming in, timing has to be tracked, and your timing can only be tracked for so long; errors can only be allowed to accumulate for so long. So n is usually fixed by those things, not by something else. You'll be fixed at some n bits; you can't negotiate that. And for those n bits, based on your error models, you might know that at most you'll get t errors, or that the probability of getting more than t errors is really, really small, so small that you don't have to design for it, you don't have to worry about it. So that's the way you can think about this setup: n bits is my block length, and I expect at most t errors.

So now suppose somebody says, "I can build a BCH code for this," and somebody else says, "I can build a Reed-Solomon code for the same thing." But remember, we are saying these are random errors, t random bit errors, because random-error models are the most common; if you have burst errors, of course, there are other things to worry about, but our comparison will be for random errors only. So I want to compare a Reed-Solomon design versus a BCH design and comment on various things: complexity, performance, which is better, and so on. That's the kind of approach I'm going to take. There are other approaches, like I said, but this is one approach, and we'll see how it works.

So what will happen if I have to design a BCH code for this? What is your first step? What would you do for a BCH code? [A student answers.] Right, you'll define a β whose order is at least n. One simple way of doing it is to take β to be a primitive element of GF(2^m), where 2^m is the smallest power of 2 such that 2^m - 1 is at least n. So you can do that; that's the very standard way of doing it. So we fix m such that 2^m - 1 ≥ n, and then I fix my β as a primitive element of GF(2^m). That gives me an element of order at least n, right? Is that fine? So how do you describe this smallest m such that 2^m - 1 ≥ n? There is a way to roughly say what it will be: m will be of the order of log₂ n. So if you want block length n, you have to go to an extension degree m of about log₂ n. Is that okay?
There might be specific cases where we can do something better, but let me just keep it generic like this: log₂ n is the kind of m you have to go to; maybe ⌈log₂ n⌉, maybe one more, something in that range. So we pick a β that is primitive. And then I want error-correcting capability t, so what will my k be? n - mt, right? Let's say we are in the regime where t is not so high that this formula is violated; that's an assumption, but a reasonable one. So let's take k = n - mt for the BCH design.

And now what happens for the RS code? What do you do in the RS code? Should I pick the same β from the same GF(2^m)? No. That's the important thing to keep in mind, because my block length is fixed as n bits; remember, that is very, very important. So what I can do is come up with a length-N RS code over some GF(2^M), capital M. What should the constraint be? N times M should be at least n, and mostly it's going to be equal. So there is a choice here. How do I give a nice flavor for what M will be? Any ideas? Remember, for a fixed M, the maximum N can go to is 2^M - 1. So let's say we take N to be the largest possible for that fixed M; then N can be roughly replaced by 2^M. So 2^M times M should be at least n. What does that give? M can be noticeably smaller than log₂ n. Am I right? 2^m ≥ n gave you m of about log₂ n; 2^M · M ≥ n will give you something smaller than log₂ n. Maybe not asymptotically smaller, but still definitely, significantly smaller it can be.

If you want a specific example, take n equal to this number I like a lot: 2040. If you wanted a BCH code, what is the smallest m that will do? 2^11 = 2048, so m = 11. What about the Reed-Solomon code? M = 8 is good enough: 255 × 8 = 2040. So this example gives m = 11 for the BCH code and M = 8 for the RS code; you get to a smaller field when you go to the Reed-Solomon code. Is using a smaller field an advantage? Today you may not think too much of GF(2^11), but GF(2^11) definitely takes more circuit complexity than GF(2^8). If you're already spending so much effort and power and circuit area for a GF(2^8) implementation, you'll be spending more for GF(2^11). For instance, the Chien search: the Chien search is going to be painful. In the BCH code you have to search over roughly 2047 field elements, while in the Reed-Solomon code you'll be searching over 255. A lot of things will be simpler in the Reed-Solomon code. So that's an advantage for RS.

But what is K, the dimension, for this RS code? N - 2t symbols. And that is an important thing; it's very, very important. When I convert to bits, I multiply by capital M: K · M = M·N - 2t·M. So the 2t gets multiplied by capital M. Okay, do you see that? So let's fix n = 2040 and work out a definite example.
So let's say, for instance, t = 20, with n = 2040. We saw before that the BCH design needs m = 11, so k = 2040 - 11 × 20 = 2040 - 220 = 1820 bits. What happens on the other side? You pick M = 8 for the RS field (excuse the abuse of notation). Then K = 255 - 40 = 215 symbols; remember, that 2t = 40 is the penalty you pay here. And 215 symbols translate into how many bits? 215 × 8 = 1720.

So for the same error-correcting capability and the same block length, you can send 100 more message bits if you use the BCH code; equivalently, you will send 100 fewer message bits if you're using the Reed-Solomon code. But what's the advantage? The Reed-Solomon code will be doing all its processing over GF(2^8), while the BCH code will be doing its processing over GF(2^11); the Reed-Solomon decoder searches for roots among 255 elements, while the BCH decoder searches among roughly 2047. So the cost is amplified significantly in the BCH code. This is the way you can compare the two; the numbers we just worked out are collected in the small sketch below.

And there are many other things you can do beyond this. For instance, a very common next step is to write expressions for the probability of block error in each of the two situations, with a bounded-distance decoder; we'll do that in a moment.
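Here are the numbers just worked out, packaged as a small sketch that uses the lecture's rough dimension formulas (k ≈ n - mt for BCH, and k = (N - 2t) · M bits for RS with N = 2^M - 1); the function names are mine, and the formulas ignore the fine print about exact BCH dimensions.

```python
# The two designs for fixed block length n bits and t bit errors, using the
# lecture's rough formulas (k_BCH ~ n - m*t; k_RS = (N - 2t)*M, N = 2^M - 1).

from math import ceil, log2

def bch_params(n_bits, t):
    m = ceil(log2(n_bits + 1))     # smallest m with 2^m - 1 >= n
    return m, n_bits - m * t       # (field degree, approximate k in bits)

def rs_params(M, t):
    N = 2 ** M - 1                 # full-length RS code over GF(2^M)
    return N * M, (N - 2 * t) * M  # (block length in bits, k in bits)

print(bch_params(2040, 20))        # (11, 1820)
print(rs_params(8, 20))            # (2040, 1720) -> 100 fewer message bits
```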
With a bounded-distance decoder, you know that up to t bit errors out of the 2040 bits can be corrected in the BCH case, while up to t symbol errors out of the 255 symbols can be corrected in the RS case; there it's a slightly more involved formula, because you have to go from bit errors to symbol errors first and then apply it. So there are two resulting formulas, and maybe it's not very easy to see how they behave just by looking at them, so I actually have a plot here; let me show you that. [The lecturer pastes the plot into the displayed notes.]

This is a plot of block error rate versus something which I call Eb/N0. I've not defined that in class yet, but it's something like this: the error probability p that you have over your BSC can be converted into an Eb/N0, and I have to tell you that going from left to right corresponds to lower values of p. On the left-hand side you have higher values of p; as you go to the right, you have lower values of p. Is that okay? All right. So one curve I have is called uncoded. What do I mean by uncoded? That will be p itself; it essentially gives you the mapping from p to Eb/N0. It is a known one-to-one map; you can ask me why I'm plotting it this way, and there are several reasons, but plotting against Eb/N0 is a very commonly accepted convention. You can see that p decreases from something like 10^-1 all the way to 10^-6, and I've used a log scale here to capture the plot much better, otherwise you can't see anything; the log scale blows this area up a little bit. So if I have, say, a p of 10^-2, I should go there on the uncoded curve, map it to the corresponding Eb/N0, and then look at the other curves at that point. That's how we read this plot.

So I've plotted several things here. For instance, I've plotted what I've called RS(255, 191, 65): length 255, dimension 191, minimum distance 65. [A student asks about the 65.] Yes, 65 is the minimum distance, not the error-correcting capability; with distance 65 it is a 32-error-correcting code over GF(256). And that is the solid curve. Then the dotted line is the comparable BCH code: a BCH code of the same block length, 2040 bits, and the same minimum distance, 65, which means it can also correct 32 errors, but the k is different: 1688. If you compute it, you'll see that 191 × 8 is smaller than 1688. Even 200 × 8 is only 1600, and 191 × 8 is smaller still: 1528. So it is definitely smaller than 1688. So in the Reed-Solomon code you're sending a smaller number of message bits, and in the BCH code you're sending more message bits; the BCH code has higher rate.

But then what else has happened here? The BCH code's error curve is to the left of the Reed-Solomon code's. What does that mean? Performance is better. For the same channel, the BCH code will give you a lower output probability of error than the Reed-Solomon code, because it's on the left-hand side. For instance, if I pick a target probability of error here, the BCH code achieves it at this Eb/N0, while the Reed-Solomon code needs a noticeably higher one. So not only is the BCH code better at sending more rate, it's also better at getting performance, in terms of what is called coding gain: you get a better coding gain. I also have a similar plot for minimum distance 129, RS versus BCH with the same block length; the k's there differ similarly, and you see once again that the BCH code is doing better for random errors.

These curves are basically the expressions that we derived earlier; remember, we derived expressions for the probability of block error for a bounded-distance decoder. So let me write down the expressions I've plotted. I have length n and error-correcting capability t. For the BCH code, the bounded-distance decoder corrects t or fewer bit errors out of the n bits, and anything more I'll count as a block error, so the probability of block error is P_block(BCH) = 1 - Σ_{i=0}^{t} C(n, i) p^i (1 - p)^{n-i}. That's a very simple expression for BCH. What do you have to do for Reed-Solomon? You have to first go from bits to symbols: given n bits, you design for N = n / M symbols, and the symbol error probability is p_s = 1 - (1 - p)^M, since a symbol is wrong if any of its M bits flips. Then, for the Reed-Solomon bounded-distance decoder, P_block(RS) = 1 - Σ_{i=0}^{t} C(N, i) p_s^i (1 - p_s)^{N-i}.
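For reference, here are those two block-error-rate expressions in executable form; this is a direct transcription of the formulas above, not a validated simulation, and the evaluation point p = 10^-3 is arbitrary.

```python
# Block-error-rate expressions for the bounded-distance decoders, as
# functions of the raw bit-error probability p on a BSC.

from math import comb

def p_block_bch(n, t, p):
    # block fails if more than t of the n bits are in error
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

def p_block_rs(n_bits, M, t, p):
    N = n_bits // M                 # block length in M-bit symbols
    ps = 1 - (1 - p)**M             # a symbol fails if any of its M bits flip
    return 1 - sum(comb(N, i) * ps**i * (1 - ps)**(N - i) for i in range(t + 1))

p = 1e-3
print(p_block_bch(2040, 32, p))     # BCH(2040, 1688), d = 65
print(p_block_rs(2040, 8, 32, p))   # RS(255, 191, 65) over GF(256)
```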
Those are the expressions that are plotted in the picture for Reed-Solomon and BCH. So I think that roughly gives you the point. The main summary is: BCH beats RS for a fixed n and t in terms of performance, but RS wins on complexity, and with a plain BCH code you don't get any special burst error correction. So this is the simple comparison I wanted to do, just to give you an idea. If your errors are random, most people will use BCH codes today, and today complexity is not a big deal. For instance, in the DVB-S2 satellite standard there is an inner code and an outer BCH code for correcting errors; for the same error-correcting capability, BCH codes are used there, not Reed-Solomon codes. On the other hand, in other communication systems, mostly older communication systems, people use Reed-Solomon, because the inner blocks produce block errors, burst errors. So that's the picture today. We will stop here for now, and this is pretty much going to be the end of Reed-Solomon codes. From next week onwards, we will do some other topic outside of Reed-Solomon codes.