Okay, so we're going to talk about decoding binary BCH codes next; that's our next big problem. I'll only give a brief introduction, show you some examples, and do some quick working to show you the general method. The exact proofs will sometimes be missing where they're not critical; wherever they're needed, I'll give you details. Okay, so hopefully the construction of binary BCH codes is clear enough to you by now, but let me repeat it once just to set the stage. We usually speak of an (n, k), t-error-correcting binary BCH code of length n. What do I mean when I say that? The length is n, and earlier I said n has to be odd, but let me state that more carefully. Remember, we need an element beta in GF(2^m) for step 1, and how does n enter the picture? We need the order of beta to be greater than or equal to n. If you want the order to be exactly equal to n, then n will have to be odd; that's the correct statement I should have made. But if you just say the order of beta is greater than or equal to n, then n can be anything; it doesn't matter. Okay, so that's the understanding: n plays a role in the design in that way. It has no other role, but it plays a role in picking beta.
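That order condition is easy to check numerically. A minimal sketch in Python, assuming the usual primitive polynomial x^4 + x + 1 for GF(16) (the field used in the examples later; helper names are mine): the powers of beta = x run through all 15 nonzero elements, so ord(beta) = 15 = 2^4 - 1, which is odd.

```python
# Order of beta = x in GF(16) built from the primitive polynomial x^4 + x + 1.
# Elements are 4-bit integers; bit k is the coefficient of x^k.
def times_x(a):
    """Multiply a GF(16) element by x, reducing with x^4 = x + 1."""
    a <<= 1
    if a & 0b10000:
        a ^= 0b10011          # subtract (= add, in GF(2)) x^4 + x + 1
    return a

a, e = 0b0010, 1              # a = beta = x, current exponent e
while a != 0b0001:            # stop when beta^e = 1
    a = times_x(a)
    e += 1
print("ord(beta) =", e)       # 15 = 2^4 - 1
```

Any m gives an odd 2^m - 1 here, which is why asking for ord(beta) = n exactly forces n odd.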
Okay, so once you do that, how does t enter the picture? t enters through the zeros of the code. What are the zeros? beta, beta^2, all the way to beta^(d-1), and what is d? d is 2t + 1, so we go all the way to beta^2t. The zeros are beta through beta^2t. What does that mean? It means you get a generator polynomial; that's the crucial entity here, the generator polynomial, and it plays lots of useful roles. What is g(x)? It is the LCM of the minimal polynomial of beta, the minimal polynomial of beta^2, all the way to the minimal polynomial of beta^2t. One very easy simplification is to get rid of all the even powers of beta in this list; that's something I've been talking about. You can simply write g(x) as m_beta(x) times m_beta^3(x) all the way to m_beta^(2t-1)(x), and that's good enough: every even power is a conjugate of something already included in this list. So this gives you at most t distinct minimal polynomials, and knowing that the maximum degree of each of these is m, the degree of g(x) is clearly less than or equal to mt. And we saw that g(x) plays a crucial role: the code itself is the set of all m(x) times g(x) with degree less than or equal to n - 1. From this description you see that n - k equals the degree of g(x). That's a crucial result; it controls everything. And going back to the LCM formula, you see that deg g(x) <= mt, which implies k >= n - mt. If you want to compute the dimension exactly, you have to list out all the conjugates of beta through beta^2t.
Identify the minimal polynomials, figure out which are repetitions and which are not, multiply the distinct ones together, find the degrees, add up all the degrees, and you get the degree of g(x) exactly; from there you get k exactly. But if you're not interested in the exact k and just want a quick estimate: k >= n - mt. And I said something about this bound: it is tight for small t. Small compared to what? Compared to n. If t is very small relative to n, then it is mostly tight, so k is about n - mt. All right, so this part is clear, right? The zeros of the code play a very useful role. Since we have this formula, it's clear that saying c(x) is a codeword, which we usually write as H c-transpose = 0, can equivalently be written as c(beta) = 0, c(beta^2) = 0, and so on until c(beta^2t) = 0. So every codeword polynomial has beta through beta^2t as roots, but the roots are over GF(2^m): even though the polynomial is binary, the roots are over GF(2^m). That's the trick in the definition. Clearly not all 2t conditions are needed here, right? All the even conditions can be dropped and you can keep only the odd ones, because you know the codeword is binary. That's very important: if you did not know the codeword is binary, you couldn't drop the even conditions. All right, so this is the general description; hopefully it's clear. We also saw some simple encoders; we saw how one achieves systematic encoding, and all the rest of it. So now we're going to see decoding. I'm going to start with a very simple example, we'll see the ideas there, and then we'll extend to the more general case; the ideas are very similar, but it's interesting.
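The dimension computation just described can be checked mechanically for the two-error-correcting length-15 code used in the decoding example later: g(x) = m_beta(x) * m_beta^3(x), with m_beta(x) = x^4 + x + 1 and m_beta^3(x) = x^4 + x^3 + x^2 + x + 1, so deg g = 8 and k = 15 - 8 = 7. A minimal sketch in Python (polynomials stored as integer bitmasks; helper names are mine):

```python
def gf2_mul(a, b):
    """Multiply two GF(2)[x] polynomials stored as integer bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, m):
    """Remainder of a modulo m in GF(2)[x]."""
    while a and a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

m1 = 0b10011   # m_beta(x)  = x^4 + x + 1
m3 = 0b11111   # m_beta3(x) = x^4 + x^3 + x^2 + x + 1
g = gf2_mul(m1, m3)
print(f"deg g = {g.bit_length() - 1}, k = {15 - (g.bit_length() - 1)}")
# g(x) must divide x^15 + 1, since every codeword is a multiple of g(x):
print("g(x) divides x^15 + 1:", gf2_mod((1 << 15) | 1, g) == 0)
```

The product comes out as g(x) = x^8 + x^7 + x^6 + x^4 + 1, so n - k = 8 and k = 7, matching the k >= n - mt = 15 - 8 bound with equality.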
Okay, any questions at this point? All right, so before we start looking at the decoding, let's just try to see how complex decoding can be. I've been saying decoding is not very easy. In the small examples we saw initially, decoding didn't seem all that difficult, but in general it can be much more complicated, and I want to convey that a little before we get to the decoder. So first, what happens to syndrome decoding? The main idea there is the syndrome table, right? How many syndromes will you have? You'll have 2^(n-k) of them if you do binary syndrome decoding. One of the examples I took was n = 4095 with t = 10, so n - k = 120. That itself begins to become scary: 2^120 syndromes. There's no way you can store that table. Another way to see why it has to be so complex: if you have a t-error-correcting code, the coset leaders have to include all the binary vectors of length n of weight less than or equal to t. How many such things are there? The number is at least 1 + (n choose 1) + (n choose 2) + ... + (n choose t). This grows very fast with t and n; roughly, it grows as fast as n^t, polynomial of degree t. If you go to really large n and t, there is a useful approximation: you can approximate it as 2 to the power n times H2(t/n), where H2 is the binary entropy function. So if t/n is constant, that is, t growing linearly with n, this is exponentially large; it's not going to happen.
But even if t is growing only logarithmically, the count can be large; n^t is the dominant term, and depending on how t grows with n it can become exponential. That's the point I'm making here. So the point is that syndrome decoding is not going to work directly: as soon as you go to t as large as 10 with a block length of 4095, you're at 2^120 or something, and you cannot do anything about that. So syndrome decoding is not practical; it does not scale with n and t. Is that clear? That's the first thing you have to convince yourself about: if you're not convinced, the fancier decoding I'll describe will not really be motivated. If you don't believe me, you can try it: go to the length-4095 code with t = 10 and try to build a syndrome table for it. You'll not succeed; it's going to be really hard, and no computer can give you that. So this is ruled out, and we need a smarter method, and the smarter method basically involves computations in GF(2^m). As it is, we had a parity check matrix over GF(2^m); why do you want to go back to a binary syndrome decoder? Why can't we do decoding in GF(2^m)? So that's the first idea: you compute syndromes in GF(2^m) and then try to use those syndromes in some intelligent solution. That's the first idea for simplifying the decoder. The second idea, which is equally important, is the following; let me state these two ideas. Simplifications: the first one is decoding in GF(2^m). This is a bit of an annoyance because m can become fairly large, and doing computations in Galois fields involves some big tables and may not be the cheapest thing that you can do.
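To put numbers on the table-size argument above, here is a quick sketch in Python (function names are mine): it counts the coset leaders a t-error-correcting code of length n must cover, and compares with the binary-entropy approximation 2^(n*H2(t/n)).

```python
import math

# A t-error-correcting code of length n needs a coset leader for (at least)
# every error pattern of weight <= t.
def patterns_up_to_t(n, t):
    return sum(math.comb(n, w) for w in range(t + 1))

def entropy_bound(n, t):
    """Upper bound 2^(n*H2(t/n)) via the binary entropy function (t <= n/2)."""
    p = t / n
    h2 = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 2.0 ** (n * h2)

# The lecture's example: n = 4095, t = 10, n - k = mt = 12 * 10 = 120.
n, t = 4095, 10
print(f"2^(n-k)               ~ {2.0**120:.2e}")
print(f"patterns of weight<=t ~ {float(patterns_up_to_t(n, t)):.2e}")
print(f"entropy bound         ~ {entropy_bound(n, t):.2e}")
```

Even the weight-limited count is around 10^29 entries, so storing a table is out of the question, exactly as argued above.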
But anyway, today's VLSI is so powerful that you don't have to worry about all these things; it's not a big deal. So you can do decoding in GF(2^m); that is the first idea which simplifies the decoding. The next one is: we will not do ML decoding. We will give up on maximum likelihood decoding. What is maximum likelihood decoding? Given any received word, you have to find the closest possible codeword. And here we have clearly seen that if you want to do syndrome decoding or full-blown minimum distance decoding, just from the complexity point of view it's going to be hopelessly complex. There are 2^k codewords, on the order of 2^3975 in our running example, and you can't just go and find the closest codeword that easily. And if you want syndromes, there are also too many: 2^120. It becomes very large. So we will give up on maximum likelihood decoding; we cannot do it. We will settle for something weaker called bounded distance decoding. This is a crucial idea, and a lot of simplification. What is bounded distance decoding? If the weight of the error vector is less than or equal to t, decoding will succeed; else there are no guarantees, anything can happen. I will not talk about what happens if the number of errors is greater than t. Remember, in syndrome decoding or ML decoding, what am I guaranteeing? Whatever the error vector is, I will find the closest possible codeword. But as n and k become very large, both these methods are not feasible. In fact, people have shown general hardness results on ML decoding, saying that it's very, very hard to solve; it would take too much effort. So what people do is say: I will come down a little bit.
If it happens that the error vector is within the error-correcting capability, then I will guarantee that the decoder corrects it. If it has gone beyond the error-correcting capability, I will not even try to find the closest possible codeword; I will simply throw up my hands and say I give up. That I am allowed to do, and that is the difference between ML decoding and this. The decoder can fail not only because the error vector pushes you from one codeword toward some other codeword; it can also fail because you have given up, given up from the computational point of view, saying I can't do beyond this. Both of these things will happen in bounded distance decoding. Bounded distance decoding is what makes BCH codes very, very practical. Even up to t = 10 you can happily implement bounded distance decoding, and that's good enough: you can correct 10 errors, and that's a lot of errors, with very little complexity. All right? Any questions on this? This is kind of a strange idea if you're not used to it, so let me give you a very simple example to illustrate bounded distance decoding, with a small n rather than a very big n, so you will see what I mean. For example, suppose this is my parity check matrix, and I want to build a syndrome table for it. I can go ahead and do this; it's not very hard. How many syndromes will there be? Eight different syndromes: 000, 001, 010, 011, 100, 101, 110, 111. Then what are the coset leaders?
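A table like the one being built here can be sketched concretely. The parity check matrix on the board wasn't captured, so the [6, 3] matrix below is a hypothetical stand-in with the same behavior: distinct nonzero columns (so d = 3 and t = 1) and 111 not among the columns, which forces a weight-2 coset leader.

```python
import itertools

# Hypothetical [6,3] parity check matrix, given by its columns; each column
# is the syndrome of the corresponding single-bit error.
H_COLS = [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1)]

def syndrome(e):
    """Syndrome H e^T over GF(2) for an error pattern e of length 6."""
    return tuple(sum(col[r] * e[i] for i, col in enumerate(H_COLS)) % 2
                 for r in range(3))

# ML syndrome table: for each syndrome keep the lowest-weight error pattern
# (patterns are scanned in order of increasing weight).
table = {}
for e in sorted(itertools.product((0, 1), repeat=6), key=sum):
    table.setdefault(syndrome(e), e)

for s in sorted(table):
    print(s, "->", table[s], "weight", sum(table[s]))

# A bounded distance decoder gives up exactly where the leader weight exceeds t:
print("give up on:", [s for s in table if sum(table[s]) > 1])   # [(1, 1, 1)]
```

Seven of the eight syndromes have coset leaders of weight at most 1; only 111 needs weight 2, and that is the single syndrome where the bounded distance decoder declares failure while the ML decoder still answers.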
For the all-zero syndrome the coset leader is the all-zero vector, and for the syndromes that match a column of H you take the weight-1 patterns: 100000, 010000, 001000, and so on, whichever single error produces that syndrome. All right, this is kind of a trivial example, so don't beat me up. What about the last one, 111? What do you do for that? You have to take a weight-2 coset leader; you pick some weight-2 pattern whose syndrome is 111, and that's fine. So this is how I build the table, and this is the maximum likelihood decoder. Now, d equals 3 for this code, which implies t is 1, so if you notice, the error-correcting capability is only one error. What is the ML decoder doing? It is correcting the weight-0 error, which is trivial, anybody can do that; it is correcting all six weight-1 errors; and it is also correcting one weight-2 error, which the ML decoder happens to be able to correct. If I have to implement bounded distance decoding for this code, what would I do if the syndrome is 111? I will give up. In this case it is too trivial to give up, you know, it's very easy to quickly compute the answer; but technically, in the bounded distance decoder, if the syndrome is 111 I will just give up: it is outside my error-correcting capability, so the decoder can fail. The maximum likelihood decoder will always give you some answer, the closest codeword in some sense. So the bounded distance decoder is clearly not optimal, but it has a slight computational gain over the ML decoder in this case, and in the general case it has a huge computational gain: it is a polynomial time decoder, usually low-degree polynomial time, which is a huge gain compared to the ML decoder. With larger numbers this becomes more clear; this is a very trivial example, so it is easy to see, but in general the gap becomes much bigger. So these are the two important things: we will decode in GF(2^m), and we will stick to bounded distance
decoding. In other words, I will assume that the weight of the error vector is less than or equal to t and proceed with my decoding; in case something goes wrong, my decoder will give up. Whether the actual weight was greater than t, I don't care; the decoder will fail somewhere. So let us begin with an example of decoding, and we will see the ideas; they're simple, and sometimes interesting. For example, I am going to take n = 15, beta a primitive element of GF(16), and t = 2. Remember, this is the first non-trivial case we're seeing: a two-error-correcting code. So far we have never seen a two-error-correcting code; one-error correction is usually very trivial. Why is one-error correction trivial? Mostly it's the Hamming code, or something similar, and you can do syndrome decoding: for correcting one error, how many syndromes are there? n + 1, which is very small; you can make a syndrome table and correct. So correcting one error is very, very easy. When you go to correcting two errors, how many syndromes are there? 1 + (n choose 1) + (n choose 2), so immediately you're in the n^2 range. And what's wrong with the n^2 range? Think about implementing the decoder: bits are coming in at a constant rate, and for every bit you receive you want to roughly send out one bit, so every time bits are clocked into the decoder at some rate, you want to process roughly in linear time; everything should keep flowing. If you have an n^2-complexity block sitting in there, you will start accumulating so many bits and will have to stall once in a while; it just will not work continuously and smoothly. So n^2 is very bad in general. You want at most n log n, which can be managed, but ideally linear time; usually
it is what you want: no more than some constant multiple of n, so 10n, 20n, 100n you can deal with, and n log n is okay, it's not very bad. Once you go to n^2, you will accumulate so much by the time you can finish and send things out that you can't keep up, so n^2 is usually very bad, and people will do crazy things to avoid n^2. So here is the first situation: if I have to do syndrome decoding, even bounded distance decoding, I will need a table with about n^2 entries. So let's see if we can do something else; that's what we're going to see. Let me do the setup once again. What will k and the rest be for this code? Let's do a quick computation. t = 2 means the zeros are beta, beta^2, beta^3, beta^4, up to beta^2t. So clearly g(x) is the minimal polynomial of beta times the minimal polynomial of beta^3, and we know what the degree will be: it will be 8. For beta the minimal polynomial is x^4 + x + 1, and for beta^3 it is x^4 + x^3 + x^2 + x + 1, so the degree is 8 and k is going to be 7. These details are not so important. So we do an encoding, let's say systematic encoding, and we produce a codeword c. Remember, I have been writing codewords both as vectors and as polynomials; both are equivalent for me. What is the equivalence? For a codeword (c0, c1, ..., c14), I write the polynomial c0 + c1 x + ... + c14 x^14. And what do I know about every codeword? It has four roots: beta, beta^2, beta^3, beta^4. Essentially beta and beta^3 are the only important ones; the other two are guaranteed by conjugacy. So now it goes through a channel. Remember how we modeled the channel: the channel basically adds the error vector, and the error vector I can also think of as a polynomial; that's also very common, and you will receive the error
polynomial added to the codeword polynomial: r(x) = c(x) + e(x), again binary. So you have a received polynomial, equivalently a received vector, and you have to process it. The first step is definitely a syndrome computation, but I don't want to compute the syndrome in binary; I don't want an 8-bit syndrome to deal with. I want the syndrome in GF(16). Essentially I will still have 8 bits of syndrome even in GF(16), but I will think of the syndrome as GF(16) elements as opposed to a binary 8-bit vector. Let me show you what it is. What are the two syndromes that I will compute? I will compute s1 = r(beta) and s3 = r(beta^3). Hopefully you can see why. What do I mean by syndrome? Once again, I can write it down: s1 = r(beta), and what is r(x)? c(x) + e(x). What happens when you put in beta? c(beta) gives 0, so the only thing left is e(beta). Of course I don't know e, so I can't evaluate e(beta) directly, but I know r, so I can evaluate r(beta), and I know the answer is the same as e(beta). So that is my syndrome. Similarly s3 = r(beta^3), which I know is also equal to e(beta^3). So we have two syndromes, and each of these syndromes is an element of GF(16), which means representing each syndrome needs how many bits? 4 bits, so the two syndromes together are still 8 bits. You don't reduce your syndrome length by any measure of imagination; you still need those 8 bits to physically represent them. But how do you think of it abstractly? Each syndrome is an element of GF(16): instead of thinking of 8 bits, I think of each group of 4 bits as one element of GF(16). That is the new thing in this idea.
So now the question is: given s1 and s3, I have to find e(x). If I find e(x), I am done. If you ask this question in full generality, you go back to your syndrome decoder and you can't do much. How do we avoid that and proceed further? We assume the bounded distance condition: we will say that the weight of e is less than or equal to t, and t is 2 in this case, so we assume wt(e) <= 2. Once you do that, what form does e(x) have? e(x) = x^i + x^j for some i and j. There is one caveat here: if I only assume the weight of e is at most 2, the weight could also have been 1, and writing it like this assumes weight exactly 2. The weight-1 case can be handled separately, and it's much easier; we will come to it later, I'm not saying we won't deal with it. So I will first assume weight exactly 2, the interesting case, and try to do my decoding. So e(x) = x^i + x^j; is that clear? As a vector, e is all zeros except a 1 in the i-th place and a 1 in the j-th place. Remember, I don't know i and j; don't think I know them. i and j are exactly what I have to find now. Once I find i and j, I am done.
What do I know? I know the syndromes. If I put x = beta into e(x), what will I get? s1 = beta^i + beta^j, and remember, that is r(beta). Of course I can also put x = beta^2, but then I simply get s1^2; that will not be anything new that I can use, and that is why I am not using s2: it doesn't give me any new information. The next one is s3 = beta^3i + beta^3j. So s1 and s3 I can compute, and from these two equations we have to find i and j. Clear? Before that, let me just quickly tell you how to deal with low-weight errors. First of all, if the weight of the error was 0, what happens to s1 and s3? Both will be 0, so that's one way you can quickly detect weight 0. What about weight 1? If there was only one bit error, what relationship is satisfied between s1 and s3? s3 will be s1^3. So you just check that: if s3 = s1^3, you decide there was only one error, and s1 itself locates it. Now, if it turns out s3 is not s1^3, and they are not both 0, then the weight has to be 2. So that is why I said the 0-error and 1-error cases are very easy, which you can quickly deal with; I will not worry about them, we will only look at the 2-error case. How do you solve this? It's not very hard; in the examples I told you how to solve this kind of equation. How do you solve it? You factor: s3 = beta^3i + beta^3j = (beta^i + beta^j)(beta^2i + beta^i beta^j + beta^2j). And what is the first factor? s1. So from
here, beta^2i + beta^2j together give s1^2, since squaring distributes over sums in characteristic 2, so you get immediately that beta^i times beta^j equals s3/s1 minus s1^2, and minus is the same as plus here, so beta^i beta^j = s3/s1 + s1^2. For instance, there is one case you would see immediately if you are paying really good attention: you can only do this if s1 is not 0. What happens if s1 = 0 and s3 is not 0? Can it happen? In the two-error case it cannot, but if there are three errors it can happen; crazy things like that can happen. So that is a situation where I have to give up: if I see that s1 is 0 but s3 is not 0, it means the number of errors has crossed the limit of t that I have put, and I give up. These are checks you can build into your decoder, so you can find out if you are going astray. So in general, if s1 is not 0, you can do this. What do I know now? I know beta^i + beta^j = s1, and I know beta^i times beta^j = s3/s1 + s1^2. What do I get out of this? A quadratic equation. What is the quadratic equation? beta^i and beta^j are the roots of y^2 + s1 y + (s3/s1 + s1^2) = 0. So if I solve this, I get beta^i and beta^j. Suppose the roots are, let me write it more carefully, y0 and y1. So I get a quadratic equation; how do you solve a quadratic equation here? The usual formula won't work. Why not? The formula comes from completing the square, and 2 equals 0 in this field, so you can't do it. So what do we do? We try every substitution. Why is brute-force substitution not too bad? Remember, n and 2^m are going to be close, so it will be at most about n operations, and it's not n^2; as long as nothing is n^2, you are okay. The n^2 problem
comes only if you do something quadratic; the root search is only about n steps, only n operations, so it's very easy to do. So you solve and find roots y0 and y1. If it turns out that you have two distinct roots y0 and y1, then you can declare that you have found the two error locations. But if you go through all this procedure and finally find repeated roots or some other crazy situation, what does it mean? Somewhere something went wrong: there were more than 2 errors and it didn't work out. That's why bounded distance decoding proceeds with the assumption that, let's say, 2 errors happened, and if any exception occurs, something is wrong and I give up. So let me repeat once again what you do: find the syndromes, write the syndromes in terms of beta^i and beta^j, and the point is that from these equations you are trying to solve for beta^i and beta^j. How do you do that? From the syndrome equations you go to a polynomial equation, y^2 + s1 y + (s3/s1 + s1^2) = 0. This is a quadratic equation; how do I solve it? I exhaustively try everything, I solve it, I get two roots y0 and y1. First of all, I may not even get two roots; remember, these are finite fields, this is some equation, and sometimes I may get only one root, or no root at all. In all those cases, you declare that something went wrong: I give up. If you find y0 not equal to y1, then you declare; declare what? You have to write y0 and y1 in the power notation: y0 = beta^i and y1 = beta^j. Remember, y0 and y1 are elements of GF(16); that's another thing I forgot to stress: get roots y0 and y1 in GF(16). It's very important that you look only in GF(16). If you cannot find two distinct roots in GF(16), something went wrong beyond our assumptions, so we give up. If
nothing like that happened, and you found two distinct roots, then you declare the first root to be beta^i and the second root to be beta^j; you look them up in the table. In fact, if you assume the log table is already given, y0 and y1 will already come out as powers of beta; you just go and find the corresponding powers of beta. That gives you i and j, you go to location i and location j, flip those bits, and that gives you the decoded word. Where we would otherwise have to do some n^2 kind of operation, everything here was just linear; that's the nice thing about this method. Is this clear? There are a lot of examples one can come up with, but I'm not going to work one out here; in your tutorials there will be some examples where you can work it out and see how this thing happens. You can also try it yourself, it's very easy: take the GF(16), t = 2 code, add some random errors, and try this process; see what happens. You can quickly write this in MATLAB; it's not a very hard decoder to write. So this is the essence of the BCH decoding idea: for t = 2, one can write it down very easily, and the simplification is very concrete. Any questions? Anything disturbing you about this? Perfectly happy? Like I said, at any point in time, if what you expect does not happen, you basically declare failure; you just give up and say that something weird has happened, I'm not going to decode it. In that way it's very different from ML decoding: in ML decoding you don't have that choice, you have to find the closest codeword. Here you try your algebraic method, and if it fails somewhere, you just give up. So these decoders are also called algebraic decoders, and you can see why they're called algebraic:
everything here involves algebra: you have some equations with some unknown variables, you do some cancelling, you eliminate one equation from the other, and you get a polynomial equation to solve; that's why it's called algebraic decoding. Any questions? You have a question? I'm sorry? Yes, I'll come to it. The point he's trying to make is: you have just two equations here, which you happily eliminated; what do you do if t is 10? There is a method for it; I'll come to it, that's the next topic. So what I'm going to do next, if this is reasonably clear to you, is to point out the generality: for a general t, what do you do? The method will not be so trivial, but we will use some standard machinery that has been worked out before. Do you have a question? No, no, the point is: you're assuming there are only two errors when you look for y0 and y1, but there can be four errors, five errors, six errors; so many error patterns are possible while you're assuming two errors. Exactly. Okay, so just to give you a rough idea: correcting beyond the error-correcting capability of BCH codes is considered very, very hard, and there are very few algorithms available for it anywhere. How hard is it to correct beyond t? There's a bunch of equations you can play around with, but it turns out it's very hard; it's not easy to correct errors beyond t. There are some probabilistic methods, and a few, not many, algebraic methods; one method is quite successful, and maybe if we have some time I'll try to talk about it, but the algebra involved becomes quite intense. Once you go beyond t errors, that's the problem. Okay, so we have about ten minutes; what I'm going to do is get started with the setup of the general algebraic decoder.
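The whole t = 2 procedure worked through above — syndromes s1 and s3 in GF(16), the weight-0 and weight-1 shortcuts, the quadratic y^2 + s1 y + (s3/s1 + s1^2), brute-force root search, and giving up on any exception — fits in one short sketch. This is a sketch in Python rather than MATLAB; the log/antilog table layout and names are mine, assuming the primitive polynomial x^4 + x + 1.

```python
# Bounded-distance decoder for the (15, 7), t = 2 binary BCH code,
# beta primitive in GF(16) built from x^4 + x + 1.

# --- GF(16) arithmetic via antilog (EXP) and log (LOG) tables ---
EXP, LOG = [0] * 15, {}
a = 1
for i in range(15):
    EXP[i], LOG[a] = a, i        # EXP[i] = beta^i, LOG[beta^i] = i
    a <<= 1
    if a & 0b10000:
        a ^= 0b10011             # reduce by x^4 + x + 1

def gmul(x, y):
    return 0 if 0 in (x, y) else EXP[(LOG[x] + LOG[y]) % 15]

def gdiv(x, y):                  # assumes y != 0
    return 0 if x == 0 else EXP[(LOG[x] - LOG[y]) % 15]

def poly_eval(bits, point):
    """Evaluate a binary polynomial (list of 0/1, index = degree) at a GF(16) point."""
    acc, p = 0, 1
    for b in bits:
        if b:
            acc ^= p
        p = gmul(p, point)
    return acc

def decode2(r):
    """Return the set of error positions, or None if the decoder gives up."""
    s1, s3 = poly_eval(r, EXP[1]), poly_eval(r, EXP[3])
    if s1 == 0 and s3 == 0:
        return set()                       # weight 0: no errors
    if s1 != 0 and s3 == gmul(s1, gmul(s1, s1)):
        return {LOG[s1]}                   # weight 1: s3 = s1^3, s1 = beta^i
    if s1 == 0:
        return None                        # s1 = 0, s3 != 0: more than t errors
    c = gdiv(s3, s1) ^ gmul(s1, s1)        # beta^i * beta^j = s3/s1 + s1^2
    # brute-force root search for y^2 + s1*y + c over GF(16)
    roots = [y for y in range(1, 16) if gmul(y, y) ^ gmul(s1, y) ^ c == 0]
    if len(roots) != 2:
        return None                        # no two distinct roots: give up
    return {LOG[roots[0]], LOG[roots[1]]}

# Flip bits 3 and 11 of the all-zero codeword and decode.
r = [0] * 15
r[3] ^= 1; r[11] ^= 1
print(decode2(r))                          # recovers positions 3 and 11
```

The root search over the 16 field elements is the only loop, which is the linear-time point made above; the general-t decoder set up next replaces this hand-eliminated quadratic with a systematic error-locator polynomial.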