Okay, so today we're going to look at decoding binary BCH codes. Let's start the topic for the day. The setup is this: a message m gets encoded by a t-error-correcting BCH code into a codeword c, the channel adds an error vector e, and you get a received vector r = c + e. Our first step is to compute the syndromes S_i = r evaluated at β^i, and I do this for i = 1 to 2t. So I have 2t syndromes, and I want to process these 2t syndromes smartly to figure out the locations of the errors. Like I said, if you want to do maximum likelihood decoding, you're not going to be able to do it — it's very hard; there are too many things to look at. So we restrict ourselves to bounded distance decoding. To that effect, assume e(x) is of this form: e(x) = x^{i_1} + x^{i_2} + ... + x^{i_w}, with w ≤ t. This is our bounded distance assumption. Once you make the bounded distance assumption, it turns out the syndromes can be written as follows:

S_1  = X_1 + X_2 + ... + X_w
S_2  = X_1^2 + X_2^2 + ... + X_w^2
...
S_2t = X_1^{2t} + X_2^{2t} + ... + X_w^{2t}

And what are these X_1, ..., X_w? X_j is simply β^{i_j}. And what is β? β is an element of GF(2^m) of order n; the block length of the BCH code and all those things are as before, so we won't repeat that. The question is: given the left-hand sides of these equations, given S_1 to S_2t, how do you find X_1 through X_w? Once you find X_1 through X_w, the problem is solved: you know each X_j is β to the power of something, and that power gives you the location of an error. Once you have the location of an error, you go there and flip the bit. These X_j's also have a name — the texts refer to them as error locators.
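To make the syndrome step concrete, here is a small sketch of my own (not from the lecture): I take m = 4, build GF(16) from the primitive polynomial x^4 + x + 1 — an assumption chosen for illustration — and compute S_i = r(β^i) for a length-15 received vector.

```python
# GF(16) from the primitive polynomial x^4 + x + 1 (illustrative choice).
# EXP[i] = beta^i; LOG is the inverse table.
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0x10:
        a ^= 0b10011          # reduce modulo x^4 + x + 1

for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def syndromes(r, t):
    """S_i = r(beta^i) for i = 1..2t; r is a length-15 list of bits."""
    out = []
    for i in range(1, 2 * t + 1):
        s = 0
        for j, bit in enumerate(r):
            if bit:
                s ^= EXP[(i * j) % 15]   # add beta^(i*j) for each 1-bit
        out.append(s)
    return out

# Received vector = all-zero codeword plus errors at positions 3 and 7,
# so each syndrome should come out as beta^(3i) + beta^(7i).
r = [0] * 15
r[3] = r[7] = 1
S = syndromes(r, 2)
```

Since the all-zero word is a codeword, each S_i here equals β^{3i} + β^{7i}, and for a binary code S_2 = S_1^2, as expected.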
So the error locators are elements of the field GF(2^m). They are not the locations directly, but if you write X_j as β to some power, that power gives you the location of the error. Now, like I said, the syndrome equations are nonlinear, so we're going to make a substitution to get linear equations — and it's a bit of a fancy substitution. What we'll do is define something called the error locator polynomial:

σ(x) = (1 + X_1 x)(1 + X_2 x) ... (1 + X_w x).

This is our crucial substitution. Notice that σ(x) is a polynomial in x, and the X_1 through X_w are the unknowns. What is the degree of σ(x)? It's w, and the constant term is 1. So we can write

σ(x) = 1 + σ_1 x + σ_2 x^2 + ... + σ_w x^w.

My substitution will take me from the set of variables X_1, ..., X_w to the set of variables σ_1, ..., σ_w. I have a set of equations in terms of the X's — the error locators — and those equations are nonlinear, and I don't like solving them directly. The coefficients of the error locator polynomial are symmetric functions of X_1 through X_w, so you can write them down without too much trouble. For instance, if you want a literal translation, what would σ_w be? The product X_1 X_2 ... X_w. Anything else you want to write down? What would σ_1 be? The sum X_1 + X_2 + ... + X_w.
So like I told you, you can keep writing these out. For instance, what will σ_2 be? X_1X_2 + X_1X_3 + ... + X_{w-1}X_w: all possible products of two variables at a time, summed over all of them. So the σ's are the elementary symmetric polynomials in X_1 through X_w, and any other symmetric polynomial in X_1 through X_w can be written in terms of σ_1 through σ_w. That is the crucial idea; that's what we will use here. But let me ask you one more question. Suppose I do the substitution — I go from the error locators, the capital X's, to the coefficients σ_1 through σ_w — and suppose I get some linear equations and somebody solves for σ_1 through σ_w. How do I go back to the capital X's? I should be able to go back. Here is what is important to notice: the roots of the error locator polynomial are X_1^{-1}, X_2^{-1}, ..., X_w^{-1}. So given the σ's, what do you do? You go to your field GF(2^m), find in that field all the roots of σ(x) — it is in GF(2^m), not in any other field, that you find the roots — and then take their inverses. That takes you back to X_1 through X_w. So we have a substitution, and we know how to go in both directions. This is a proper one-to-one kind of substitution: you can go from the X's to the σ's, and from the σ's back to the X's. No problem.
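Both directions of the substitution can be sketched in a few lines. This is my own illustration in the toy GF(16) field (primitive polynomial x^4 + x + 1 is an assumption; the table construction is repeated so the snippet stands alone): build σ(x) from given locators, then recover the locators by finding roots and inverting them.

```python
# Toy GF(16) field tables (illustrative primitive polynomial x^4 + x + 1).
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0x10:
        a ^= 0b10011
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 15]

def ginv(x):
    return EXP[(15 - LOG[x]) % 15]

def sigma_from_locators(Xs):
    """Multiply out (1 + X_1 x)...(1 + X_w x); ascending coefficients."""
    coeffs = [1]
    for X in Xs:
        nxt = coeffs + [0]
        for k, c in enumerate(coeffs):
            nxt[k + 1] ^= gmul(c, X)
        coeffs = nxt
    return coeffs

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):          # Horner's rule
        acc = gmul(acc, x) ^ c
    return acc

def locators_from_sigma(coeffs):
    """Roots of sigma(x) in GF(16) are the inverses of the locators."""
    roots = [e for e in range(1, 16) if eval_poly(coeffs, e) == 0]
    return sorted(ginv(r) for r in roots)
```

A round trip through both directions should return the locators we started from.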
This is good. So now I am going to apply this substitution in the set of syndrome equations. Some of the equations are easy. For instance, S_1 = σ_1. And you might be able to figure out how S_2 works as well — it's not too hard. The reason is that (X_1 + ... + X_w)^2 = X_1^2 + ... + X_w^2 in characteristic 2, so S_2 = S_1^2 = σ_1^2. You can also do S_3 with some work. But how to do this in general is not very obvious, and as t increases it's not clear what to do. So I am going to use a little trick to show you how this works. This is not the most general thing possible, but it's a useful little trick that gives you the picture immediately. I am going to define something I call the syndrome polynomial — there will be a lot of polynomials to keep track of. It's a simple definition:

S(x) = S_1 x + S_2 x^2 + S_3 x^3 + ... + S_{2t} x^{2t}.

On the face of it this seems like something ridiculous — we just took the syndromes and made them coefficients of a polynomial. But let's expand it and see what we can do. S_1 is X_1 + X_2 + ... + X_w; S_2 is X_1^2 + X_2^2 + ... + X_w^2; S_3 is X_1^3 + X_2^3 + ... + X_w^3; and so on, all the way down to S_{2t} = X_1^{2t} + ... + X_w^{2t}. So let me substitute all of that in and write it a little differently, regrouping by error locator.
S(x) = (X_1 x + X_1^2 x^2 + ... + X_1^{2t} x^{2t})
     + (X_2 x + X_2^2 x^2 + ... + X_2^{2t} x^{2t})
     + ...
     + (X_w x + X_w^2 x^2 + ... + X_w^{2t} x^{2t}).

Each row collects the powers of one error locator, with the power going all the way up to 2t. So I have written it out in this very painful form. The nice thing about this form is that it gives you a relationship between the coefficients of S and the coefficients of σ. That comes from the following observation — a pretty little observation; there's no obvious reason in advance why it should work this way. What you do is take S(x) in this form, multiply it by σ(x), and observe something about the coefficients of the resulting polynomial. So let me write σ(x) in its product form, (1 + X_1 x)(1 + X_2 x)...(1 + X_w x), and throw a little challenge at you: see if you can observe something nice about this product which gives a simplification. I'm going to take S(x) in this big nasty form and multiply it by each of these factors, and I claim we'll observe a very useful property of the product. Specifically, in S(x)σ(x), the coefficients of certain powers of x will always be 0. Once I make that observation, we get what we want: on the left-hand side, the coefficient of any power of x is an expression of the form Σ S_i σ_j — just linear in the σ's. If the right-hand side is 0, I get useful linear equations which I can solve. So stare at this for a while and tell me why a lot of cancellation will occur in this product.
Look at the first row here. What happens when I multiply the first row with the factor (1 + X_1 x) alone? The sum telescopes — we are in characteristic 2, so the middle terms cancel in pairs — and what I'm left with is just X_1 x + X_1^{2t+1} x^{2t+1}. Everything else gets washed out in that product. After that, I still have to multiply by the remaining factors of σ(x), but what is the maximum degree of those? It's w - 1, since σ(x) only includes factors up to w. So the first row contributes terms at degrees 1 through w, and at degrees 2t+1 and above — but from x^{w+1} all the way to x^{2t}, the first row produces no terms at all. You see that? You do the multiplication with (1 + X_1 x) first, get X_1 x + X_1^{2t+1} x^{2t+1}, then multiply by the remaining degree-(w-1) product, and the low part gives you terms from x up to x^w only. Now what about the second row? Start by multiplying with (1 + X_2 x), and you get the same effect as with the first row. Do this row after row after row, and you conclude very easily that in S(x)σ(x), the coefficients of x^{w+1} all the way to x^{2t} are 0. That is the crucial observation, and it gives us very nice linear equations. Yes, of course, there is a very general theory of Newton's identities and symmetric polynomials which will give you these equations. But in this simple case we can easily derive the Newton's identities ourselves — in fact that's essentially what we're doing, and it's not very hard. I am not going to go into more detail; you can try multiplying it all out if you want.
So here is the precise statement: in S(x)σ(x), the coefficients of x^{w+1} through x^{2t} are 0. Now let's write it down. S(x) is

S_1 x + S_2 x^2 + S_3 x^3 + ... + S_{2t} x^{2t}

— the reason I am writing out these terms is that they help us visualize and write down the correct thing in a little while — times

σ(x) = 1 + σ_1 x + σ_2 x^2 + ... + σ_{w-1} x^{w-1} + σ_w x^w.

So this is the product. What is the coefficient of x^{w+1}? You get the equation

S_1 σ_w + S_2 σ_{w-1} + S_3 σ_{w-2} + ... + S_w σ_1 + S_{w+1} = 0.

Is that okay? Is that clear? One can write down more equations the same way. Try repeating this for x^{w+2}; what will happen?

S_2 σ_w + S_3 σ_{w-1} + S_4 σ_{w-2} + ... + S_{w+1} σ_1 + S_{w+2} = 0.

Like this you can keep building. Just for fun, let's write the one at x^{2w}:

S_w σ_w + S_{w+1} σ_{w-1} + ... + S_{2w-1} σ_1 + S_{2w} = 0.

It all depends on which is greater, right? But since w ≤ t, we have 2w ≤ 2t, so I can always fit this in — all these syndromes are available. So how many equations do I have here? w equations. How many unknowns? w unknowns: σ_1 through σ_w. So let me write it down in matrix form.
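The vanishing-coefficient claim is easy to check numerically. The sketch below (same illustrative GF(16) setup as before) multiplies S(x) by σ(x) for a weight-2 error pattern and confirms that the coefficients of x^{w+1} through x^{2t} come out zero.

```python
# Toy GF(16) field tables (illustrative primitive polynomial x^4 + x + 1).
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0x10:
        a ^= 0b10011
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 15]

def poly_mul(p, q):
    """Product of two polynomials over GF(16), ascending coefficients."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] ^= gmul(pi, qj)
    return out

# w = 2 errors with locators X_1 = beta^3, X_2 = beta^7; t = 2.
t = 2
S = [0] + [EXP[(3 * i) % 15] ^ EXP[(7 * i) % 15] for i in range(1, 2 * t + 1)]
sigma = [1, EXP[3] ^ EXP[7], gmul(EXP[3], EXP[7])]   # (1+X1 x)(1+X2 x)
prod = poly_mul(S, sigma)
```

Here w = 2 and 2t = 4, so the coefficients of x^3 and x^4 in the product should vanish, while the x^1 coefficient is just S_1.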
And we'll stare at the matrix and make some comments about what can happen. Is that okay? Now, these are not the only identities one can write, but they are rather useful, and they are general — later on we'll see they extend easily. The reason I stopped at x^{2w} is that it gives me exactly w equations for w unknowns, and that's very nice. I'll make a lot of comments about implementation efficiency later; right now we just want a decoder which works. So let me write this in the standard matrix form A x = b for linear equations. On the right-hand side I'm going to have S_{w+1}, S_{w+2}, all the way down to S_{2w}. On this side, the unknown vector: I want σ_w down to σ_1. I could take any order, but this is the nicer order because the equations come out very cleanly. And now I can write the rows very easily: S_1 through S_w, then S_2 through S_{w+1}, and so on, the last row being S_w through S_{2w-1}:

[ S_1   S_2     ...  S_w     ] [ σ_w     ]   [ S_{w+1} ]
[ S_2   S_3     ...  S_{w+1} ] [ σ_{w-1} ] = [ S_{w+2} ]
[ ...                        ] [ ...     ]   [ ...     ]
[ S_w   S_{w+1} ...  S_{2w-1}] [ σ_1     ]   [ S_{2w}  ]

This is a set of linear equations which I can easily solve — no big deal — assuming it can be solved. And can I always solve it? I can give answers in terms of determinants, but only when this matrix has full rank, that is, when its determinant is nonzero, can I uniquely invert it, find σ_1 through σ_w, and be done. So I have to know when the determinant of this matrix is nonzero. One can do some computations and show that if w is the actual number of errors that occurred, the determinant is nonzero.
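Solving this system is ordinary Gaussian elimination, just with field arithmetic in place of rational arithmetic. Here is a sketch over the illustrative GF(16) field, applied to the w = 2 system for errors at positions 3 and 7 (my own worked instance, not one from the lecture).

```python
# Toy GF(16) field tables (illustrative primitive polynomial x^4 + x + 1).
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0x10:
        a ^= 0b10011
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 15]

def ginv(x):
    return EXP[(15 - LOG[x]) % 15]

def solve_gf(A, b):
    """Gauss-Jordan over GF(16); returns None when A is singular."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = ginv(M[col][col])
        M[col] = [gmul(inv, v) for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [v ^ gmul(f, M[col][k]) for k, v in enumerate(M[r])]
    return [row[n] for row in M]

# Syndromes S_1..S_4 for errors at positions 3 and 7, then the w = 2 system
# [S1 S2; S2 S3] [sigma_2, sigma_1]^T = [S3, S4]^T.
S = [EXP[(3 * i) % 15] ^ EXP[(7 * i) % 15] for i in range(1, 5)]
A = [[S[0], S[1]], [S[1], S[2]]]
b = [S[2], S[3]]
sol = solve_gf(A, b)                    # [sigma_2, sigma_1]
```

The solution should match the elementary symmetric functions of the true locators: σ_2 = β^3·β^7 = β^10 and σ_1 = β^3 + β^7.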
Then this matrix will actually have a nonzero determinant — you can show that. You can also show more interesting things. Let w' be the actual number of errors that occurred. Then — I'd have to double-check the exact statement, but I believe it is this — if w = w', the matrix is nonsingular, while if w > w', the determinant is zero. You can show such nice properties. So let me write that down: w' is the actual number of errors that occurred; then for w equal to w' the determinant is nonzero, and for w greater than w' it is zero. There's a way to prove it, and it's not very hard: once again you use a Vandermonde kind of argument. With w' as the actual number of errors, you write each syndrome as a power sum, and then you can write this matrix as a product of two or three matrices — Vandermonde-type matrices and a diagonal matrix — and observe the rank of each factor. When w is greater than w', columns effectively repeat and the determinant collapses to zero. It's the same Vandermonde style of writing, so it's not very illuminating and I'm not going to write it out. And these are also not the best equations possible — I'll come back to that later. Anyway, for now these are good enough; we can be happy with them. So this is the set of linear equations. What do we have now? Let me go back to the original picture that we had, tell you where we are, and try to complete the diagram.
So we went from the error locators X_1 through X_w to σ(x), and here we solve for σ(x). Remember, in our method of solution we are also going to find the number of errors along the way — we find w as well as σ. How do we do that? You start from w = t — anything beyond t you're going to give up on anyway — and keep decreasing w until you get a full-rank matrix. When you reach the point where the matrix has full rank, you invert and solve. That's a very simple method you can use to implement this. But remember, we are assuming at most t errors; the actual number of errors can be greater than t, and when it is, some crazy things will happen — I'll tell you what they are. So you solve for σ(x). What's the next step after that? You have to go back to the error locators. Say the degree is w: after solving for σ(x) of degree w, you find its roots in GF(2^m). How do we do this? You substitute the field elements one after the other — there are at most 2^m - 1 of them, so you can just quickly do it, compute, and find the roots. Now your problems can surface. There is a branching point here, with two outcomes: on one side you can have failure, on the other success. When will you have success? When there are exactly w distinct roots of σ(x) in GF(2^m). You are finding roots of σ(x) in GF(2^m); σ(x) has degree w, so it will have at most w roots. But a general polynomial can have any number of roots in a given field, it can have repeated roots — all kinds of crazy things can happen. So when will you succeed? Exactly when there are w distinct roots of σ(x). Why is that?
Because of the way we wrote down the equations: we assumed there were w errors at w distinct locations, and then we went and did the computation. If σ(x) is not giving us w distinct roots, something is wrong — something else has happened, there have been too many errors, and our equations are not working out properly. So failure is declared in any other situation. And in the success case, what are the w distinct roots? As I can tell you, they are X_1^{-1}, X_2^{-1}, ..., X_w^{-1}. Once I know the error locators, finding the positions of the errors is very easy, right? You find the i_j such that X_j = β^{i_j}. In fact, the roots themselves will come out in power notation, so you can quickly look at them and read off the i_j's. Once you do that, what do you do? You take r and XOR it with ê. What is ê? ê is the estimated error vector: 1 at the positions i_j and 0 everywhere else. And this gives you the decoded codeword. Notice that failure is a new thing that happens in the bounded distance decoder. In a maximum likelihood decoder there is no question of failure — you always find the closest codeword. Here, when I cannot find a consistent answer, I just say: I can't do it. So what we're going to do next is look at some examples and write down these equations concretely, to see what the linear equations look like. And like I said, there are much simpler equations involving the syndromes and the σ's than the ones I wrote down; the ones I wrote down are nice because they are consistent and complete in a way, but the other ones are very clever. So this is not really the optimal implementation of this decoder — if you go and pick up any paper or book, the implementation they talk about will be very different from this.
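The whole pipeline described above — syndromes, shrinking w until the matrix is nonsingular, root search, distinct-root check, bit flipping — can be put together in one sketch. This is again the illustrative GF(16) setup, and the all-zero codeword stands in for a real encoder (the all-zero word is a codeword of any linear code, so corrupting it and decoding back to zero is a fair test).

```python
# Toy GF(16) field tables (illustrative primitive polynomial x^4 + x + 1).
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0x10:
        a ^= 0b10011
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 15]

def ginv(x):
    return EXP[(15 - LOG[x]) % 15]

def eval_poly(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = gmul(acc, x) ^ c
    return acc

def solve_gf(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return None                  # singular: fewer than w errors
        M[col], M[piv] = M[piv], M[col]
        inv = ginv(M[col][col])
        M[col] = [gmul(inv, v) for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [v ^ gmul(f, M[col][k]) for k, v in enumerate(M[r])]
    return [row[n] for row in M]

def decode_bch(r, t):
    """Bounded-distance decode; returns corrected word, or None on failure."""
    S = [0] * (2 * t)                    # S[i-1] = r(beta^i)
    for i in range(1, 2 * t + 1):
        for j, bit in enumerate(r):
            if bit:
                S[i - 1] ^= EXP[(i * j) % 15]
    if not any(S):
        return r[:]                      # all syndromes zero: no error seen
    for w in range(t, 0, -1):            # shrink w until matrix is nonsingular
        A = [[S[i + j] for j in range(w)] for i in range(w)]
        b = [S[w + i] for i in range(w)]
        sol = solve_gf(A, b)             # [sigma_w, ..., sigma_1]
        if sol is None:
            continue
        sigma = [1] + sol[::-1]
        roots = [e for e in range(1, 16) if eval_poly(sigma, e) == 0]
        if len(set(roots)) != w:
            return None                  # not w distinct roots: failure
        out = r[:]
        for root in roots:
            out[LOG[ginv(root)]] ^= 1    # X_j = beta^(i_j): flip bit i_j
        return out
    return None
```

Flipping one or two bits of the all-zero word and decoding should give back all zeros.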
In practical implementations, mostly what people do is a version of what's known as Berlekamp's algorithm, which is an iterative way of doing it. It's a completely different algorithm from this; it uses a very different kind of equations, not these ones specifically. The idea there is simple, but the details are different, and I'll point them out in the examples. There's no point in doing the full theory here — it's classical material, it's in the books, people have implemented it everywhere; it's very commonplace, so I don't want to go through it in detail. But I'll take some examples and show you roughly how that method works: what the equations are, how you think about it, how you go about doing it. Still, the method I showed you is nice for another reason: this set of equations generalizes in a nice way even when you go to non-binary BCH codes. When I go to non-binary codes, the same kind of equations will hold — that's the power of this method; I don't have to redo the whole thing for the non-binary case. That's why I like it. Now, solving for σ(x) is the critical step here — everything else is not that complex — and there are lots of algorithms to do it: Berlekamp's algorithm, something called the Euclidean algorithm, and many variants. We'll not see all of them; that's time-consuming and not very illuminating. So to illustrate the Berlekamp method, I'll take some examples and show how it works. Let's start with an example: we'll pick m = 4, so n = 15, and work in GF(16). Let's pick t = 2 first.
That's the easiest possible case. But t = 2 is really easy — we already have another method for it, with the quadratic equation, which is very nice. So let's pick t = 3 instead. The zeros of the code would then be what? β, β^2, β^3, β^4, β^5, β^6. Those are the zeros. Now let me try to describe this Berlekamp method to you very briefly. I had S(x), and I had σ(x); I multiplied them, and in the product I observed that there are some terms up to a point — let me write the product as α_0 + α_1 x + ... going up to x^w (and α_0 is actually nothing, since S(x) has no constant term) — then a whole bunch of zero coefficients, and then terms again from x^{2t+1} onwards. By the way, people also sometimes look at (1 + S(x))σ(x); in some cases that's useful, but it doesn't matter — let's just keep it like this. Now, in your decoder, what you really want is for w to be minimal: you want to find the smallest possible w. In our matrix method we are starting with the largest possible w and coming down, which is very inefficient if you think about it — the smallest possible w is much more likely. So you would rather start with the smallest possible w and build up. To set that up, people phrase the question differently: they first view this equation modulo x^{2t+1}. Basically they just ignore everything that happens beyond degree 2t — I don't care about anything beyond 2t.
So I only worry about the terms up to degree 2t, and up to 2t I have to make the coefficients zero from x^{w+1} onwards; I don't care about anything beyond that. And then the idea is: find the least-degree σ(x) that satisfies the above equation. What do I mean by that? σ(x) has degree w, and the coefficients of x^{w+1} all the way up to x^{2t} in the product should be 0; you find a σ(x) of smallest degree which will do this, and when you've found it, you're done. And you do it iteratively. First you find a σ^(1)(x), of degree 1 say, that achieves what it can; then you check what happens at the larger powers, adjust, and move on to σ^(2)(x); and finally, when it converges, you get the correct answer. So this is the Berlekamp kind of method, and it's a bit difficult to describe cleanly. I can give you the algorithm — it's done one step after the other; you compute something and then you adjust for it — but there is no way, at least that I know, to put down some general principles and derive it on the board. That's the problem there. But as an implementation it's very good. So if some of you are interested in doing a computer project, you can take this up instead: implement Berlekamp's algorithm for a general field. I think MATLAB already has an implementation, but you might want to implement it yourself, step by step, and maybe present it in class so that people can see what it is. Two or three people did come and speak to me about projects, so this is one possibility. Berlekamp's algorithm is quite nice; there are a few other algorithms like I said, but this one is very good.
So we didn't take this approach — ours came from the other side. But notice there are some very simple relationships here. I want to look at S(x) times σ(x) with σ(x) = 1 + σ_1 x. Look at the first few terms of the product. The x term is S_1 x. What is the x^2 term? S_1 σ_1 x^2 plus S_2 x^2. Now, I want a degree-1 σ(x) which makes the x^2 coefficient zero: if the weight is 1, then w = 1, and the coefficient of x^{w+1} = x^2 should be zero. How do you make it zero? It's very easy: set σ_1 = S_2 / S_1. And remember what S_2 is — S_1 squared — so this is just σ_1 = S_1; the equation is trivial. So how do you assign σ^(1)(x)? I simply set σ^(1)(x) = 1 + S_1 x. Now I have to see if this is good enough. If it is good enough, what should happen? The degree-3 term of the product should also be zero. So I assume σ^(1)(x) is good enough and compute the degree-3 term. Suppose I see that it is nonzero. Then I have to adjust σ^(1)(x): I add another term, or modify it in a particular way. The adjustment is the slightly complicated part — I can't describe it very cleanly; it's just painfully algorithmic. You make a small adjustment that drives the offending coefficient to zero, and you continue. That's how you do it.
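The iterative procedure being sketched here is usually organized as the Berlekamp–Massey algorithm. For reference, here is a compact version of the textbook formulation over the illustrative GF(16) field — this is the standard algorithm, not a transcription of anything derived in class, so treat it as an outside sketch.

```python
# Toy GF(16) field tables (illustrative primitive polynomial x^4 + x + 1).
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0x10:
        a ^= 0b10011
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 15]

def ginv(x):
    return EXP[(15 - LOG[x]) % 15]

def berlekamp_massey(S):
    """S = [S_1, ..., S_2t]; returns sigma(x) coefficients, ascending."""
    C, B = [1], [1]          # current and previous connection polynomials
    L, m, b = 0, 1, 1        # current degree, shift since last update,
    for n in range(len(S)):  # and last nonzero discrepancy
        d = S[n]                          # discrepancy at step n
        for i in range(1, L + 1):
            d ^= gmul(C[i], S[n - i])
        if d == 0:
            m += 1
            continue
        coef = gmul(d, ginv(b))
        T = C[:]
        if len(B) + m > len(C):
            C = C + [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] ^= gmul(coef, Bi)    # C(x) += (d/b) x^m B(x)
        if 2 * L <= n:                    # degree must grow: save old C
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]
```

Fed the syndromes of a two-error pattern, it should return the same σ(x) the matrix method gives, and for a single error it stops at degree 1 with σ_1 = S_1.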
So it approaches the answer from the small-w side, as opposed to from the other side. Like I said, I don't want to go into too many details of the Berlekamp algorithm. I would be happy if somebody takes it up, makes a presentation in class, and also shows an implementation — that would be a nice way of learning this material. So this is how you proceed: you first start with σ^(1)(x) and compute the degree-3 term of S(x)σ^(1)(x). If it ends up being zero, well and good — a zero discrepancy never forces a change. If it is nonzero, then you have to adjust your σ^(1); maybe you have to go to a degree-2 polynomial. You do it step by step. At no point do you try all possibilities exhaustively — that you never do; the adjustment is always based on the previous σ's. It's quite a nice method and you can try it, but like I said, it's difficult to prove its correctness on the board, so I'll leave it there. The other method is just straightforward. So let's take five minutes: I'm going to write down the general equations and we'll see how it works. Say t = 3. First, you would assume the largest weight: start with w = t = 3. And your matrix equation is what? The same equation as before:

[ S_1 S_2 S_3 ] [ σ_3 ]   [ S_4 ]
[ S_2 S_3 S_4 ] [ σ_2 ] = [ S_5 ]
[ S_3 S_4 S_5 ] [ σ_1 ]   [ S_6 ]

Am I right? It's the same equation; S_1 through S_6 you have already computed from r. So write this down and find the determinant. If the determinant is nonzero, that's it.
You already know what to do then. If the determinant is nonzero, you're done — done as in done finding σ(x); you don't yet know whether decoding will succeed. If the determinant is zero, what do you do? You look at w = 2. If you put w = 2, what matrix will you get?

[ S_1 S_2 ] [ σ_2 ]   [ S_3 ]
[ S_2 S_3 ] [ σ_1 ] = [ S_4 ]

Again you check the determinant. Nonzero: we're done. Zero: then you assume a single error. A single error is very easy — you can handle it in many different ways which you can also check — but even the same method will work: you put S_1 σ_1 = S_2, so σ_1 = S_2 / S_1. Is that clear? If you want, you can take specific numerical examples and work this through; it's not very hard. But like I said, this is not the most efficient way of doing it — it's just an easily justified method. The only thing I didn't show you was the rank property: if w' errors actually occurred, then for w equal to w' the determinant is nonzero, and for w greater than w' the determinant is zero. That transition I didn't prove; everything else is easy to see. Okay? So for the people who wanted an assignment topic, I would suggest Berlekamp's algorithm. There's also one more algorithm, called the Euclidean algorithm. Somebody can take Berlekamp's algorithm, and somebody else the Euclidean algorithm for binary BCH codes — there are lots of simplifications you can try to implement. Meet me after this class and maybe we can discuss that. So I think that's all I wanted to say about the core of decoding BCH codes. Let's go back to the picture.
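The rank-drop behaviour behind this w = 3 → 2 → 1 fallback is easy to check numerically. In the illustrative GF(16) field, plant a single actual error and watch the w = 3 and w = 2 determinants vanish while the w = 1 equation recovers σ_1 directly (a sketch of the phenomenon, not a proof).

```python
# Toy GF(16) field tables (illustrative primitive polynomial x^4 + x + 1).
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0x10:
        a ^= 0b10011
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gmul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 15]

def ginv(x):
    return EXP[(15 - LOG[x]) % 15]

def det2(M):
    return gmul(M[0][0], M[1][1]) ^ gmul(M[0][1], M[1][0])

def det3(M):
    d = 0                    # cofactor expansion; characteristic 2: no signs
    for j in range(3):
        minor = [[M[r][c] for c in range(3) if c != j] for r in (1, 2)]
        d ^= gmul(M[0][j], det2(minor))
    return d

# One actual error at position 5, t = 3: syndromes S_i = beta^(5i), i = 1..6.
S = [EXP[(5 * i) % 15] for i in range(1, 7)]
M3 = [[S[i + j] for j in range(3)] for i in range(3)]
M2 = [[S[i + j] for j in range(2)] for i in range(2)]
sigma1 = gmul(S[1], ginv(S[0]))          # w = 1: sigma_1 = S_2 / S_1
```

With one error the syndrome rows are all proportional (row i+1 is β^5 times row i), so both larger determinants are zero and σ_1 comes out as the locator β^5 itself.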
Keep this picture in mind: there is a syndrome computation step, the step where you solve for sigma(x), and a root-finding step. And there are lots of simplifications in each of these steps. So what do you do when you compute syndromes? You know that r(x) is a polynomial of degree n - 1, and n could be very large. You don't want to evaluate it term by term over so many steps. So what people usually do is, when r(x) comes in, they have a bank of dividers. And what will you divide by? You divide by certain small polynomials to get a low-degree remainder, and you evaluate the syndrome on that remainder. How does that work? Suppose I want to find S1, which is r evaluated at beta. What can I divide r(x) by to simplify the evaluation? The minimal polynomial of beta, right? So you divide r(x) by the minimal polynomial of beta and keep the remainder. It is enough to substitute beta into the remainder, because the quotient term vanishes: writing r(x) = q(x) m1(x) + rem(x), we have m1(beta) = 0. So the first step will usually be a bank of dividers with minimal polynomials. And one minimal polynomial will give you several syndromes, depending on your t: beta, beta squared, beta power 4 and so on share the same minimal polynomial, so the same remainder serves for S1, S2, S4. Then the next one is beta power 3: you divide by the minimal polynomial of beta power 3 and again evaluate only the remainder. No point in evaluating the whole thing. That way you can simplify quite a bit.

For the root-finding step, there really is no clever method. You just have to substitute one field element after another and check. That method is called Chien search. Even though it has a name, and you might think it is some very smart way of searching, it is really just substitution.
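Both bookend steps can be sketched in a few lines, again assuming GF(2^4), n = 15, and beta = alpha for concreteness; the function names and the worked example are my own.

```python
# Sketch of the two "bookend" decoding steps over GF(2^4) (x^4 + x + 1):
# syndromes via remainder modulo a minimal polynomial, and Chien search.
# Illustrative choices (n = 15, beta = alpha), not lecture code.

EXP, LOG = [0] * 15, [0] * 16
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i
    a <<= 1
    if a & 0x10:
        a ^= 0b10011

def gf_mul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 15]

def poly_mod_gf2(bits, m):
    """Remainder of the binary polynomial `bits` (index = degree) mod m(x).

    m is an integer bitmask, e.g. x^4 + x + 1 -> 0b10011."""
    r = 0
    for b in reversed(bits):      # shift-and-reduce, high degree first
        r = (r << 1) | b
        if r & 0x10:              # degree reached deg(m) = 4: reduce
            r ^= m
    return r                      # 4-bit remainder, bit i = coeff of x^i

def syndrome_from_remainder(rem, i):
    """S_i = rem(alpha^i): evaluate the small remainder, not all of r(x)."""
    s = 0
    for deg in range(4):
        if (rem >> deg) & 1:
            s ^= EXP[(deg * i) % 15]
    return s

def chien_search(sigma, n=15):
    """Return error positions j with sigma(alpha^{-j}) = 0 (just trial
    substitution of every field element, as described above)."""
    errs = []
    for j in range(n):
        x = EXP[(-j) % 15]        # alpha^{-j}
        acc, xp = 0, 1
        for c in sigma:
            acc ^= gf_mul(c, xp)
            xp = gf_mul(xp, x)
        if acc == 0:
            errs.append(j)
    return errs
```

For errors at positions 2 and 7, r(x) = x^2 + x^7 leaves remainder x^3 + x^2 + x + 1 modulo m1(x) = x^4 + x + 1, and evaluating that 4-term remainder at alpha, alpha^2, alpha^4 gives S1, S2, S4; the Chien search over sigma(x) = 1 + alpha^12 x + alpha^9 x^2 then flags exactly positions 2 and 7.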
You can't really do anything else; you substitute one element after the other and check it out. And the critical step, like I said, is solving for sigma(x); that is where how quickly you can do it matters. With a Berlekamp-type method that step takes something like t squared operations, while the other steps, if I'm not wrong, are proportional to t or to the length. So think about that: how you implement this critical step changes a lot.

Like I said, nobody implements the method we just discussed in full. People implement Berlekamp, and in Berlekamp, like I said, you don't try to find w separately and all that; you build sigma(x) up iteratively. Something like that is what people do. So just to give you a flavour: the DVB-S2 standard, the Digital Video Broadcasting S2 satellite communication standard, has a BCH code in it. I believe the length is around 65,000; I'm not sure of the exact number, maybe it's actually less, something like 56,000, but in any case something very large, and the arithmetic is over the field GF(2^16). And I think they use t = 10, if I'm not wrong. So you have to correct 10 errors at that length. Now, if you think about what you are accomplishing here: how many error patterns are there with weight less than or equal to 10 when n is, say, 65,535? It is 1 plus (65535 choose 1) plus all the way up to (65535 choose 10). How big is that? You add all that up and you get a huge number.
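Just to make "huge" concrete, here is a quick sketch of that sum, assuming n = 65535 (the field-size bound for GF(2^16); the actual DVB-S2 block lengths are somewhat smaller and depend on the code rate):

```python
# Number of error patterns of weight <= t at length n: sum of C(n, i).
# n = 65535 is the GF(2^16) field-size bound, used here for illustration.
from math import comb

n, t = 65535, 10
total = sum(comb(n, i) for i in range(t + 1))
print(total)          # a roughly 42-digit number, dominated by C(n, 10)
```

A bounded-distance decoder implicitly distinguishes all of these patterns, which is why the scale of the computation is so striking.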
In this method, what you are effectively doing is considering every single error vector in that list and finding which one actually occurred. So in terms of scale it is quite a non-trivial operation; it is very, very big. It looks like just writing down some equations, but when you actually do it, it is quite impressive. So if somebody is taking up the implementation of BCH decoding, you should actually try it on this code. If you work over GF(2^16) in MATLAB it's quite easy; MATLAB has a Galois field implementation. If you don't care about speed or anything, you can actually show that when you add 10 errors, you can correct them. So it's quite a non-trivial algorithm.