So, we have been looking at binary cyclic codes, and I told you about this ring Rn, which is the set of all polynomials with binary coefficients, with addition and multiplication done modulo x^n + 1. Let me ask you a related question. Let's see if somebody gives me an answer. So, we saw that this was a ring. Is this a field? Yes or no? So, there is one person saying no, everybody else saying yes. The answer is no. The only field you can get out of this construction is the binary field {0, 1}, for n = 1. Once n becomes greater than or equal to 2, x^n + 1 can be factored. So, what does it mean when x^n + 1 is factored? There will be some element inside which does not have an inverse. So, it will not be a field. But it is definitely a ring, and we can check that. And I said I will define something called ideals. So, an ideal I of Rn is the same thing as a cyclic code. I think this needs proof; I am just writing it down. So, what is the cyclic code definition? If (c0, c1, ..., c_{n-1}) is in the code, then the cyclic shift (c_{n-1}, c0, ..., c_{n-2}) is also in the code. That is the definition of a cyclic code. What is the definition of an ideal? It is a subspace: an ideal I is a subspace of Rn, and then what else should be true? If a(x) is in the ideal and b(x) is in Rn, then a(x) times b(x) should be in the ideal. These are the two properties that define an ideal, and we saw that a cyclic code of length n is the exact same thing as an ideal of Rn. This is true always. And then we saw some powerful properties for this ring Rn. It is what is known as a principal ideal ring: every ideal here is a principal ideal. Just as the integers are a principal ideal domain, where every ideal is generated as the multiples of some single number, here every ideal is generated as all possible multiples of one polynomial.
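As a quick illustration (my own sketch, not from the lecture): the arithmetic of Rn is easy to code up with polynomials stored as Python ints, bit i holding the coefficient of x^i, and you can check that multiplying by x in Rn is exactly the cyclic shift of the coefficient vector.

```python
def pmul(a, b):
    """Multiply two GF(2)[x] polynomials; bit i of an int is the coeff of x^i."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, b):
    """Remainder of a modulo b in GF(2)[x] (long division with XOR)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def rn_mul(a, b, n):
    """Multiplication in R_n = GF(2)[x] / (x^n + 1)."""
    return pmod(pmul(a, b), (1 << n) | 1)

n = 7
c = 0b1001011                       # c(x) = 1 + x + x^3 + x^6
shift = rn_mul(c, 0b10, n)          # x * c(x) in R_7
# the top coefficient wraps around to the constant term: a cyclic shift
assert shift == ((c << 1) | (c >> (n - 1))) & ((1 << n) - 1)
```

Everything below, codewords, generator and parity check polynomials, reduces to these two helper operations.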
Likewise here, every ideal is generated as all possible multiples of one polynomial. In fact, a generating polynomial is not necessarily unique; there can be several polynomials whose multiples give you the same ideal. That is a bit different from the integer case. But one specific generator, the polynomial of minimal degree in the ideal, has a lot of powerful properties, and that one is called the generator polynomial: the minimal-degree polynomial in I. Like I said, the first thing we showed was that this minimal-degree polynomial is unique. How do you prove that? Suppose there were two of them; add them, and you get a nonzero polynomial of strictly smaller degree in the ideal, which violates minimality. So it has to be unique. Then what else did we show? Every codeword, every element of the ideal, is a multiple of it: this minimal-degree polynomial, which we call g(x), divides every polynomial in I. Divides over what? Over F2[x], over the binary field. You do not have to do any modulo x^n + 1 to make it divide; it just divides outright, no problem. Then we showed that g(x) also divides x^n + 1. That is very nice. This final property gives us a very firm handle on all possible cyclic codes. Suppose I want to list out all possible cyclic codes of length n; what should I do? I know g(x) divides x^n + 1. This property is crucial in tabulating all possible cyclic codes of length n. So it is very easy now. I know how to factor x^n + 1: I find an element of order n in some finite field, and all its powers will be roots of x^n + 1, all n of them. So it factors into linear factors in that field. And then what do you do? How do you get binary factors?
You group the roots into cyclotomic cosets, and each coset gives you a binary-coefficient factor. So you factor x^n + 1 into a product of irreducibles over F2[x]. Once you do that, all possible g(x) can easily be identified: you just take all possible products of factors of x^n + 1, and from there you get all possible cyclic codes. Another interesting point here is n minus k. So you have an (n, k) cyclic code; n minus k equals what? The degree of g(x). That just comes from the property that g(x) divides every codeword polynomial, and a codeword polynomial has degree at most n minus 1. So if g(x) has degree n minus k, the codewords are exactly the multiples m(x) g(x) with deg m(x) less than k, and k is the dimension of the code. We saw this before even for the BCH codes; the BCH codes are a special case of this result. So, I know this is Shastra week or whatever, and we might be totally distracted by various things, but if you have time, go back and look at this and think about it. Is there any question on this? Seems okay. Simple enough. So, that is the generator polynomial. One nice thing about it is that it gives us a good generator matrix. How do you find the generator matrix? You form the matrix G, which has to be k cross n. In the first row, I will put g(x). What do we mean by putting g(x) in a row? You see, g(x) is going to be g_0 + g_1 x + ... + g_{n-k} x^{n-k}, and you can show that g_0 and g_{n-k} cannot be anything other than one. Why can't g_0 be zero? If g_0 were zero, then x would divide g(x); but g(x) divides x^n + 1, and there is no way x divides x^n + 1. So g_0 has to be one. So when I put g(x) in the first row, the row reads (1, g_1, g_2, ..., g_{n-k-1}, 1, 0, ..., 0); everything after position n minus k is zero.
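To make the enumeration concrete, here is a small check of my own for n = 7. I hard-code the known factorization x^7 + 1 = (x + 1)(x^3 + x + 1)(x^3 + x^2 + 1) over F2 rather than implement cyclotomic-coset factoring; every subset product of the irreducible factors is then a valid g(x), so there are exactly 2^3 = 8 cyclic codes of length 7, each of dimension 7 minus deg g(x).

```python
from functools import reduce
from itertools import combinations

def pmul(a, b):
    """Multiply GF(2)[x] polynomials stored as ints (bit i = coeff of x^i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# x + 1, x^3 + x + 1, x^3 + x^2 + 1: irreducible factors of x^7 + 1 over F_2
factors = [0b11, 0b1011, 0b1101]
assert reduce(pmul, factors, 1) == (1 << 7) | 1   # verify the factorization

codes = []
for r in range(len(factors) + 1):
    for subset in combinations(factors, r):
        g = reduce(pmul, subset, 1)
        k = 7 - (g.bit_length() - 1)              # dimension = n - deg g(x)
        codes.append((g, k))

for g, k in sorted(codes, key=lambda t: -t[1]):
    print(f"g(x) = {g:08b}  ->  (7, {k}) cyclic code")
```

The list runs from g(x) = 1 (the trivial (7, 7) code) down to g(x) = x^7 + 1 (the zero code); g(x) = x^3 + x + 1 gives the (7, 4) code used as the running example below.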
The next row, what would I put? x times g(x). How do I write x times g(x)? It is g(x) shifted right by one. Likewise, you go all the way down to x^{k-1} times g(x); that will be the last row. That gives you a k cross n generator matrix for the cyclic code. So this is a nice binary generator matrix; you can do this for the BCH codes also. And the generator polynomial plays a crucial role here. How do you know that the rank of this matrix is k? Look at the staircase of ones: each row starts with g_0 = 1, one position further right than the previous row, with zeros below. That staircase pattern clearly has rank k; the rank cannot be less than k, so that is fine. Now, a lot of other questions we can ask. What about the binary parity check matrix? One way of answering that is to take this generator matrix, do Gaussian elimination to bring it to the form [I | P], and then use the [P^T | I] formula to get the parity check matrix. One can do that; there is no problem. But it does not seem very nice; it does not give you any idea of where the structure is coming from. So for that, we will use something called the parity check polynomial. Just like the generator polynomial, there is something called the parity check polynomial, h(x). It is defined as follows: h(x) is defined to be x^n + 1 divided by g(x). I know g(x) divides x^n + 1, so I do the division and I get h(x). So what will be the degree of h(x)? It is k: h(x) = h_0 + h_1 x + ... + h_{k-1} x^{k-1} + h_k x^k. And I know for sure that h_0 will be one, and h_k will also be one. How do I know that?
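Pausing to sketch the generator matrix just described (my own illustration, for the (7, 4) code with g(x) = x^3 + x + 1): the rows of G are the bit patterns of g(x), x g(x), ..., x^{k-1} g(x), and a GF(2) row reduction confirms the rank is k.

```python
def gf2_rank(rows):
    """Rank over GF(2) of a list of rows stored as ints."""
    rank = 0
    rows = list(rows)
    while rows:
        r = rows.pop()
        if r == 0:
            continue
        rank += 1
        msb = 1 << (r.bit_length() - 1)
        rows = [x ^ r if x & msb else x for x in rows]  # clear that leading bit
    return rank

n, g = 7, 0b1011                   # g(x) = x^3 + x + 1
k = n - (g.bit_length() - 1)       # k = 4
G = [g << i for i in range(k)]     # row i is x^i * g(x); bit j = column j

for row in G:
    print(f"{row:0{n}b}"[::-1])    # print with the x^0 coefficient first
assert gf2_rank(G) == k
```

The printed rows show the staircase pattern: each row is the previous one shifted right by one position.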
Okay, so it's very easy. g(x) times h(x) has to equal x^n + 1. Equate the coefficients of x^n on both sides, and equate the constant terms: g_{n-k} h_k = 1 and g_0 h_0 = 1, so these two guys, h_0 and h_k, have to be one. The middle coefficients I do not know; some of them will be non-zero, some will be zero, depending on g(x). That we cannot control. Now the claim is the following, and we can easily prove it: c(x) is in the code if and only if c(x) times h(x) equals zero in Rn. It is not zero in F2[x]; it is zero in Rn. What does that mean? In F2[x], it means x^n + 1 divides c(x) h(x). The forward direction is almost a trivial statement. Why is it trivial? c(x) is already a multiple of g(x): c(x) is some m(x) times g(x). And what is g(x) times h(x)? x^n + 1. So c(x) h(x) = m(x)(x^n + 1), which is clearly zero in Rn. The converse is also true: if c(x) h(x) is zero in Rn, then x^n + 1 = g(x) h(x) divides c(x) h(x) in F2[x], and cancelling h(x) you get that g(x) divides c(x), so c(x) is in the code. You can think about it for a while and write it down; the proof is easy. So now what I am going to do is write c(x) = c_0 + c_1 x + ... + c_{n-1} x^{n-1} and h(x) = h_0 + h_1 x + ... + h_k x^k, and use the fact that c(x) h(x) is zero in Rn. So let me do the multiplication in Rn. Multiplication in Rn is very easy. How do you do it? You do not need any division; you simply set x^n equal to one.
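A brute-force check of the claim, in the same bitmask style (my own sketch): for the (7, 4) code with g(x) = x^3 + x + 1, compute h(x) = (x^7 + 1)/g(x) by polynomial division and verify that c(x) h(x) = 0 in R_7 for all 16 codewords.

```python
def pmul(a, b):
    """Multiply GF(2)[x] polynomials stored as ints (bit i = coeff of x^i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdivmod(a, b):
    """Quotient and remainder in GF(2)[x]."""
    q, db = 0, b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        s = a.bit_length() - 1 - db
        q ^= 1 << s
        a ^= b << s
    return q, a

n, g = 7, 0b1011
h, rem = pdivmod((1 << n) | 1, g)
assert rem == 0 and h == 0b10111          # h(x) = x^4 + x^2 + x + 1; h_0 = h_k = 1

for m in range(1 << 4):                   # all messages m(x) of degree < k
    c = pmul(m, g)                        # codeword c(x) = m(x) g(x)
    _, z = pdivmod(pmul(c, h), (1 << n) | 1)
    assert z == 0                         # c(x) h(x) = 0 in R_7
print("all 16 codewords satisfy c(x) h(x) = 0 in R_7")
```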
Any time you get a power of x that is n or more, you reduce it by that much: x^{n+1} becomes x, x^{n+2} becomes x squared. So do the multiplication in Rn and collect the coefficients of x^0 up to x^{n-1}; each of those coefficients must be zero. So let us write them down. Let us write down the coefficient of x^0; let me see who gets it right. What will be the first term? h_0 times c_0. What else will you get? h_1 times c_{n-1}, which wraps around through x^n to give a constant term. And then h_2 times c_{n-2}, all the way up to h_k times c_{n-k}. Is that okay? So x^0 is a bit confusing because of the wraparound; maybe let us start somewhere else. Let us start with x^{n-1}; I think this is better. What is the coefficient of x^{n-1}? It is h_0 times c_{n-1}, plus h_1 times c_{n-2}; we do not get any wraparound situation here. We go all the way down to h_k times c_{n-1-k}. This will have to be equal to zero, right? The product is zero in Rn, and I am doing the multiplication in Rn, so every coefficient must go to zero. What about x^{n-2}? That is h_0 times c_{n-2} plus h_1 times c_{n-3}, all the way down through
to h_k times c_{n-2-k}. Alright, so you can see what is happening here: the coefficients h_0 to h_k remain the same, and the window of c's slides. Initially I am multiplying against c_{n-1} down to c_{n-1-k}; in the next step, against c_{n-2} down to c_{n-2-k}. And I keep going all the way down to x^k, where there is still no wraparound. There are n minus k such coefficients, right? What is the coefficient of x^k? It is h_0 times c_k plus h_1 times c_{k-1}, all the way to h_k times c_0, equal to zero. So you have n minus k equations. The reason I am looking for n minus k equations is that in a parity check matrix you will have n minus k linearly independent conditions. So what I am going to do is write this in matrix form, in a particular way. I want an (n minus k) cross n matrix H so that H times c transpose equals zero. So I am going to put c here as the column (c_0, c_1, ..., c_{n-1}); I am writing it out just to aid the writing down. Take the x^{n-1} equation as the first row: my c_{n-1} gets multiplied by h_0, so I should have h_0 on the right-most side, then h_1 to its left, then all the way down to h_k, and zeros before that. What will be the next row? The same coefficients shifted left by one: a zero at the right end, then h_0, h_1, ..., h_k each moved one position to the left. I am keeping c the same and moving the h pattern to the left, row by row, until h_k reaches the left end. Is that clear?
So the same equations have been written in a matrix form. Now this is an (n minus k) cross n matrix such that H times c transpose is zero for every single codeword in my code. So what kind of a matrix is this? It is a parity check matrix. How do I know that the rank is equal to n minus k? You can see that h_0 and h_k are one, and you clearly have a triangular staircase pattern; there is no way the rank is going to be less than n minus k. So the rank is n minus k, and this is a valid parity check matrix. So now let us set up some notation for this. One confusing part is that the first row is the mirror image of h(x). It would be nice to have h(x) itself, but it is not quite h(x); it is the mirror image of h(x). So let me define the reciprocal h~(x) as x to the power of the degree of h(x), times h(1/x). What does that do? It does the flip: if you have h_0 + h_1 x + h_2 x^2 all the way to h_k x^k, this operation gives you h_k + h_{k-1} x + h_{k-2} x^2 all the way to h_0 x^k. And then I can write this parity check matrix in a nice form. In the first row I put h~(x); I am putting what was the last row first, just to make it look similar to my generator matrix. What is the next row? x times h~(x). And so on, all the way down to the last row, x^{n-k-1} times h~(x). Aha! This is the parity check matrix of my cyclic code with generator polynomial g(x). It also looks very much like the generator matrix of a cyclic code that we had before, except the only thing we are not sure of is whether h~(x) divides x^n + 1. Well, does h~(x) divide x^n + 1? Yes or no? Yes. How do you show that?
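Before the proof, the matrix itself can be checked numerically, continuing the (7, 4) example (again my own sketch): h(x) = x^4 + x^2 + x + 1, so the reciprocal is h~(x) = x^4 + x^3 + x^2 + 1, and the rows h~(x), x h~(x), x^2 h~(x) form an H with H c^T = 0 for every codeword.

```python
def pmul(a, b):
    """Multiply GF(2)[x] polynomials stored as ints (bit i = coeff of x^i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def dot2(a, b):
    """GF(2) inner product of two bit-vectors stored as ints."""
    return bin(a & b).count("1") & 1

n, k, g = 7, 4, 0b1011
h = 0b10111                            # (x^7 + 1) / g(x), from polynomial division
h_rev = int(format(h, "b")[::-1], 2)   # reciprocal: coefficient order flipped
assert h_rev == 0b11101                # x^4 + x^3 + x^2 + 1

H = [h_rev << j for j in range(n - k)]  # rows h~, x h~, x^2 h~
for m in range(1 << k):
    c = pmul(m, g)
    assert all(dot2(row, c) == 0 for row in H)
print("H c^T = 0 for all 16 codewords")
```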
So the claim is: h~(x) divides x^n + 1. It is true, and it is quite easy to show. Start from x^n + 1 = g(x) times h(x), and do the reversal operation on this equation. What will you get if you do the reversal on the left-hand side? The reversal means x to the power of the degree of the polynomial, times the polynomial evaluated at 1/x. The degree of the left-hand side is n, so you get x^n times (x^{-n} + 1), which is 1 + x^n: the same thing. x^n + 1 is like a palindromic polynomial; if you do the reversal on a palindromic polynomial, you get the same thing back. On the right-hand side, the reversal is x^n times g(1/x) h(1/x). How do I distribute this x^n? Put x^{n-k} with g(1/x) and x^k with h(1/x). And what are these? x^{n-k} g(1/x) is the reversal of g(x), call it g~(x), and x^k h(1/x) is the reversal of h(x), that is h~(x). So x^n + 1 = g~(x) h~(x), and clearly h~(x) divides x^n + 1. So this h~(x) is a valid generator polynomial for a cyclic code of length n: it divides x^n + 1, so you can generate an ideal with it, and it will be a proper cyclic code. There is no problem. So what we have shown, by looking at the rows of this matrix carefully, concerns the dual of a cyclic code. Remember, what is the dual of a code?
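The reversal argument can be checked numerically too (my sketch, same bitmask helpers): reversing both sides of x^7 + 1 = g(x) h(x) gives g~(x) h~(x), and the product of the reciprocals indeed comes back to x^7 + 1.

```python
def pmul(a, b):
    """Multiply GF(2)[x] polynomials stored as ints (bit i = coeff of x^i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def reciprocal(p):
    """x^(deg p) * p(1/x): reverse the coefficient string."""
    return int(format(p, "b")[::-1], 2)

n, g, h = 7, 0b1011, 0b10111
assert pmul(g, h) == (1 << n) | 1                          # x^7 + 1 = g(x) h(x)
assert pmul(reciprocal(g), reciprocal(h)) == (1 << n) | 1  # = g~(x) h~(x)
assert reciprocal((1 << n) | 1) == (1 << n) | 1            # x^n + 1 is palindromic
```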
It is the code for which this parity check matrix is the generator matrix. So the dual of a cyclic code is also a cyclic code, with generator polynomial given by this h~(x). That is what we have shown, and that is the final result: the dual of a cyclic code is also cyclic. If the code is generated by g(x), the dual is generated by h~(x). How do you find h~(x)? You first find h(x) = (x^n + 1)/g(x), and then reverse it: h~(x) = x^{deg h(x)} times h(1/x). Is that okay? So anyway, it is good that the cyclic theory all ties up nicely, but one thing that is very nice in itself is that for cyclic codes one can come up with a binary parity check matrix which is very, very easy to describe. Why is that so nice? Like I said: easy to describe. The matrix has so many entries, (n minus k) cross n; if n is 1000 and k is 500, you might think it is a big matrix, but I only have to describe the first row, and then you can quickly generate the rest as shifts. So the parity check matrix is very nice. And like I said, the parity check matrix is used in several decoders and smart decoding ideas, so it is good to know the parity check matrix in binary form. You can do this for a BCH code also, but there is a slight subtlety. If n equals the order of beta, you can do it directly. If n is smaller than the order of beta, what happens? You are getting a shortened code. For a shortened code, the generator matrix is easy; the parity check matrix is a bit more involved. It is not that easy when you shorten, and it will not be systematic either, but you will be able to do it. Is that okay? So I think this is all that I want to do about cyclic codes in general. I am going to make the connection between BCH codes and cyclic codes more explicit next.
I don't think we have to really look at anything more besides this. Let me just write down some simple points of principle, which are quite useful if you are using cyclic codes: encoding and error detection are very easy to implement. That is the big-picture view to take away, even if you forget the equations. Why is encoding easy? How can we do encoding? Compute m(x) times g(x); multiplication by a fixed polynomial is a simple shift-register circuit. In case you want systematic encoding, what do you do? Take x^{n-k} times m(x), divide by g(x), take the remainder, and append it. That is also very easy, just a shift register. Why is error detection easy? It is not enough to evaluate one row of the parity check matrix; you have to evaluate all the rows. But all the other rows are simply shifts of the first row. So you can build a very easy shift-register kind of circuit: keep shifting it and clocking out. If everything comes out zero, it is a valid codeword; otherwise it is not. So error detection and encoding can be done very, very easily. In fact, you can also do something even simpler, without using h(x) at all. What can you do? Take the received polynomial r(x) and divide by g(x). Like I said, division is also something that can be very easily implemented. You divide by g(x) and see if you get a remainder. So for error detection you can simply divide by g(x) and check whether the remainder is zero. For division there is a very standard LFSR kind of circuit available; one can do it very easily.
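Those two operations can be sketched in a few lines (my illustration for the (7, 4) code; real implementations use LFSR circuits, but the arithmetic is the same): systematic encoding appends the remainder of x^{n-k} m(x) mod g(x), and error detection just divides the received word by g(x).

```python
def pmod(a, b):
    """Remainder in GF(2)[x]; ints with bit i = coeff of x^i."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

n, k, g = 7, 4, 0b1011

def encode_systematic(m):
    """c(x) = x^(n-k) m(x) + (x^(n-k) m(x) mod g(x)): a multiple of g(x)."""
    shifted = m << (n - k)
    return shifted ^ pmod(shifted, g)

def is_codeword(r):
    """Error detection: r(x) is a codeword iff g(x) divides it."""
    return pmod(r, g) == 0

c = encode_systematic(0b1101)
assert is_codeword(c)
assert c >> (n - k) == 0b1101          # message bits appear unchanged (systematic)
assert not is_codeword(c ^ (1 << 4))   # a single flipped bit is caught
```

Adding the remainder to x^{n-k} m(x) cancels it modulo g(x), which is why the result is always a multiple of g(x) while the top k bits still carry the raw message.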
Here, on the encoding side, you multiply by g(x); on the other side, you divide by g(x). (This noise is getting on my nerves; there is major work going on upstairs.) Okay, so a couple of things are not really addressed explicitly. One is how to construct a cyclic code with a guaranteed minimum distance; we have not done that very explicitly. The other thing is how to do decoding. We can do syndrome decoding, no problem, but it becomes hard in general, as we motivated before. A nice solution for both of those problems is the BCH code. BCH codes are cyclic codes which have the easy encoding and error detection guaranteed for any cyclic code; on top of that, you have guaranteed error correction capability, and also a simple algebraic implementation of the decoder. For both of those, guaranteed minimum distance and ease of algebraic decoding, you have to pick the g(x) very carefully. That is why this notion of the zeros of a cyclic code matters. What are the zeros of a cyclic code? They are the roots of g(x): the field elements at which every codeword polynomial evaluates to zero. These are called the zeros of the cyclic code. So what is so special about the zeros of a BCH code? If you look at the t-error-correcting BCH code built on the element beta, what are the zeros? Beta, beta squared, beta cubed, and so on up to beta to the power 2t. So the powers of beta that are zeros of the BCH code come in what you could call an arithmetic progression, and a very simple one in this case: 1, 2, 3, up to 2t. They are consecutive. You have 2t consecutive powers of beta as the zeros of the BCH code.
And if you go back and look at the derivation of the minimum distance, the fact that these were powers of beta and that they were consecutive were both very, very critical. Powers of beta, of course, it has to be. But the fact that they were consecutive, with no break in the middle, was used very strongly in the proof. If you go back, we reduced to a Vandermonde matrix, right? And the Vandermonde argument works only when you have consecutive powers; otherwise it will not work. So that is the crucial idea here. And then you can also go back and look at our algebraic decoder: it will not work if you do not have consecutive powers. Maybe I did not show you all the simplifications, the Berlekamp iterative method and so on, but all these things will not work that easily if you do not have consecutive powers as zeros. So having consecutive powers of beta as zeros is the crucial idea in this construction. You can extend this a little bit. In fact, you do not have to take beta, beta squared, beta cubed; you can start at some power b, and take beta^b, beta^{b+1}, beta^{b+2}, all the way to beta^{b+2t-1}, for some b. That is why I said it is not necessary to start at the first power; all you need is some set of 2t consecutive powers as zeros. If you start at b equal to 1, it is called a narrow-sense BCH code. It is just terminology, jargon, not essential; but usually one takes b equal to 1, and that is called the narrow-sense BCH code. Those are the terminologies relevant to us. The crucial idea I want to emphasize is that the zeros are consecutive powers. In fact, as a further generalization beyond consecutive, you can also take an arithmetic progression of powers whose common difference is relatively prime to the order of beta; that also gives you the same result.
Zeros consecutive implies guaranteed minimum distance; it also gives you an implementable algebraic decoder, up to the designed distance. Bounded-distance decoding, right? Decoding within the designed distance will work. So that is the crucial connection between cyclic codes and BCH codes. Initially cyclic codes were studied with great enthusiasm, hoping there would be a lot of other good codes in this family, not necessarily with consecutive zeros, or that they would be building blocks for many other codes; but as it has turned out, that has not really worked. If you want to approach capacity or things like that, as we will see later, cyclic codes are not that useful. So today I would say not many people are looking at general cyclic codes very seriously, except in some storage contexts, where there are some interesting ideas. So that is the cyclic codes story, and the next thing I want to do is make the BCH code connection more precise and give some examples. If you look at the construction of a BCH code, you take n equal to the order of beta. I did this when I showed the cyclic property: you can see clearly that once you pick n equal to the order of beta, the BCH code becomes cyclic. This is one of the properties I proved just from the definition, and that is the main connection. So let's take some explicit examples just for fun. Pick n equal to the order of beta, and then let's say you pick t equals 1. What happens if you pick t equals 1? The parity check matrix of the one-error-correcting code is the single row (1, beta, beta squared, ..., beta^{n-1}). A very common choice is to pick n equal to 2^m minus 1. So what happens if we set n equal to 2^m minus 1? How can you pick beta?
We simply take beta to be a primitive element of GF(2^m). A BCH code with n equal to 2^m minus 1 is called a primitive BCH code. So the codes we have been looking at are basically primitive narrow-sense BCH codes: narrow-sense because of the choice of b, and primitive because we pick n equal to 2^m minus 1. That is the special case we have been looking at; these choices are not necessary in general, but that is the common special case. So if you do that, what kind of code is this t equals 1 BCH code? We know it by some other name also. Can anybody identify it? n equals 2^m minus 1, t equals 1, so minimum distance 3. What kind of code is that? It is the Hamming code. So you might say I described the Hamming code differently before. How did I describe it? I said each column of the parity check matrix has m bits, I pick all the non-zero m-bit vectors, put them one after the other, and you get such a code. And I am doing the same thing here. Remember each power of beta also has a vector notation with m bits, and the powers 1, beta, ..., beta^{n-1} run through all the distinct non-zero m-bit vectors. What is nice about this order is that in this order the code is cyclic. The Hamming code is the same code, but cyclic in that order. What do you mean by order, and why is the order suddenly important now? If I pick my m-bit columns in the natural counting order 1, 2, 3, and so on, the code need not be cyclic: I have permuted the coordinates of the code, so the cyclic-shift property can be lost, even though the distance guarantees remain. But if I pick my columns in the order suggested by 1, beta, beta squared, ..., then not only do I get a Hamming code, I get a Hamming code in cyclic form. So let's see one specific example. What is the generator polynomial for this Hamming code? It is the minimal polynomial of what? Beta. It is the minimal polynomial of beta. Is that okay? Right?
How do I know that this is the smallest-degree polynomial in my code? From the cyclic-code side: for the BCH code I originally defined the parity check matrix, but now that we have the generator polynomial machinery, we know the generator polynomial is the polynomial of least degree in the cyclic code. So how do I know that M_beta(x), the minimal polynomial of beta, is the least-degree polynomial in my cyclic code? Well, that is essentially the definition of a minimal polynomial: the least-degree polynomial with binary coefficients that has beta as its root. Every codeword polynomial must have beta as a root, so M_beta(x) has to be the minimal-degree codeword; nothing else can happen. Now you can also go back and check this against the earlier definition for the t-error-correcting case; I will come back to that. But let's keep this in mind and do one explicit example. We will pick m equals 3, so n equals 7: the (7, 4) Hamming code. If you build the Hamming parity check matrix with columns in the natural counting order, the columns would be 001, 010, 011, 100, 101, 110, 111. If you list out the codewords of the code defined by this parity check matrix, it will not be cyclic: you can find codewords whose cyclic shifts are not in the code. How do I make it cyclic? I have to order the columns in the order suggested by the powers of beta. For that, we will take beta cubed equals 1 plus beta, and then list out the elements of GF(8). I get 0, 1, beta, beta squared, then beta cubed which is 1 plus beta. Let me write down the vector notation, writing each element as (coefficient of 1, coefficient of beta, coefficient of beta squared): 1 is (1, 0, 0), beta is (0, 1, 0), beta squared is (0, 0, 1), and beta cubed is (1, 1, 0). And beta to the fourth equals what? Beta times beta cubed, which is beta plus beta squared: (0, 1, 1). And beta to the fifth equals beta squared plus beta cubed, which is 1 plus beta plus beta squared. Is that right?
(1, 1, 1). And then beta to the sixth is 1 plus beta squared: (1, 0, 1). So now I put my parity check matrix columns in the order of the powers of beta: the first column is 1, which is (1, 0, 0); the next is beta, which is (0, 1, 0); the next is beta squared, which is (0, 0, 1); already we are seeing a difference from the natural order. Then what? Beta cubed, which is (1, 1, 0), am I right? Then beta to the fourth, which is (0, 1, 1); beta to the fifth, which is (1, 1, 1); and beta to the sixth, which is (1, 0, 1). If I put the columns in this order, the code becomes cyclic; list out the 16 codewords of this code and you will see it. So that is a nice thing to remember about the Hamming code: in this particular order of columns, suggested by the powers of beta, you can make it cyclic. And what is the generator polynomial for this Hamming code? The minimal polynomial of beta, which here is x cubed plus x plus 1. So not only do I know the code is cyclic, I know that every codeword is a multiple of x cubed plus x plus 1, in this column order; in the natural order I do not know that. So this is what is nice about the Hamming code. Okay, so let's go to the general case. If you look at the t-error-correcting BCH code, I gave you a definition for the generator polynomial. What was it?
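Here is the whole example in code (my own sketch): build GF(8) with beta^3 = beta + 1, take the parity check columns in the order 1, beta, ..., beta^6, and verify both that the resulting (7, 4) code is cyclic and that every codeword is a multiple of x^3 + x + 1.

```python
def pmod(a, b):
    """Remainder in GF(2)[x]; ints with bit i = coeff of x^i."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def times_beta(v):
    """Multiply a GF(8) element (3-bit int, bit i = coeff of beta^i) by beta."""
    v <<= 1
    if v & 0b1000:
        v ^= 0b1011        # reduce using beta^3 = beta + 1
    return v

cols, v = [], 1
for _ in range(7):         # columns 1, beta, beta^2, ..., beta^6
    cols.append(v)
    v = times_beta(v)
assert cols == [1, 2, 4, 3, 6, 7, 5]   # vectors 100,010,001,110,011,111,101

def syndrome(c):           # c is a 7-bit int, bit i = c_i
    s = 0
    for i in range(7):
        if (c >> i) & 1:
            s ^= cols[i]
    return s

code = {c for c in range(128) if syndrome(c) == 0}
assert len(code) == 16     # a (7, 4) code: the Hamming code

g = 0b1011                 # x^3 + x + 1, the minimal polynomial of beta
for c in code:
    shifted = ((c << 1) | (c >> 6)) & 0b1111111
    assert shifted in code           # cyclic in this column order
    assert pmod(c, g) == 0           # every codeword is a multiple of g(x)
```

The cyclicity check works because c(beta) = 0 implies that the shifted word evaluates to beta times c(beta) = 0 as well.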
It was basically the LCM of m_β(x), m_β²(x), and so on up to m_β^(2t)(x). Does that make sense now, in light of our new understanding of cyclic codes? To build the BCH code, I am looking for the least degree polynomial in the code, and I know that β, β², all the way to β^(2t), are roots of every code word. So the least degree one, the LCM of these minimal polynomials, has to be the answer. So you see, the generator polynomial definition I gave before makes sense from the cyclic code point of view, and that is why all the code words are generated as multiples of g(x); everything works out. I did it kind of in reverse, but when you do the cyclic codes later you see that there is a unity between the two views. The reason why I did the matrix first is that what is most critical about the BCH code is the fact that it can be decoded very easily, and that comes just from the parity check matrix and the consecutive powers; it has got nothing to do with the cyclic structure. The cyclic part is nice, it ties everything together, but in practice the decoder comes really from the consecutive roots and has not much to do with cyclicity.

Okay, so that is the two-error-correcting BCH code, and that is pretty much all I wanted to say about BCH codes. There is a lot of work that was done in the 60s and 70s to flesh this out, so almost all of the problems in BCH codes are well understood; understood in the sense that we know whether they can be solved or not. Problems that cannot be solved, nobody looks at. So for instance, one of the difficult problems is this: somebody gives you an arbitrary cyclic code and says, this is my g(x); you did not pick consecutive powers or anything like that, just some arbitrary g(x). Can you compute anything about it, say the minimum distance? Those are considered difficult problems that nobody tries to solve. So what is interesting, in my opinion, even today is: can you correct more than t errors?
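Returning to the LCM definition above, here is a sketch that computes g(x) for a double-error-correcting BCH code explicitly. I use length n = 15 with GF(16) built from x⁴ + x + 1; this is a standard textbook choice, but it is my assumption, since the lecture does not fix the field here. Because m_α(x) and m_α³(x) share no roots, their LCM is just their product over GF(2):

```python
def gf16_mul(a, b):
    """Multiply two GF(16) elements (bit i = coefficient of x^i),
    reducing by the primitive polynomial x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011               # subtract x^4 + x + 1
        b >>= 1
    return r

def minimal_poly(elem):
    """Minimal polynomial of elem over GF(2), coefficients highest degree
    first: the product of (x + c) over all conjugates c of elem."""
    conjugates, c = [], elem
    while c not in conjugates:
        conjugates.append(c)
        c = gf16_mul(c, c)             # Frobenius map: squaring gives the next conjugate
    poly = [1]
    for c in conjugates:
        new = poly + [0]               # poly * x
        for i, p in enumerate(poly):
            new[i + 1] ^= gf16_mul(p, c)   # ... + poly * c
        poly = new
    return poly

def poly_mul_gf2(p, q):
    """Multiply two binary polynomials (highest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= a & b
    return r

alpha = 0b0010                          # alpha = x, a primitive element of GF(16)
m1 = minimal_poly(alpha)                # x^4 + x + 1
m3 = minimal_poly(gf16_mul(alpha, gf16_mul(alpha, alpha)))  # x^4 + x^3 + x^2 + x + 1
g = poly_mul_gf2(m1, m3)                # lcm = product, since m1 and m3 are coprime
print(g)                                # g(x) = x^8 + x^7 + x^6 + x^4 + 1
```

The resulting degree-8 generator gives the classical (15, 7) double-error-correcting BCH code.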
What guarantees can you give? Suppose you can come up with some decoder which maybe corrects more than t errors once in a while. Can you give any guarantees? Can you say that, for instance, 50% of the weight t+1 error patterns can be corrected? I do not know if that is possible or not, but if such a fraction of the t+1 error patterns can be corrected, can you come up with a decoder like that? That could be something which is still open and in which people are still interested. Like I said, in DVB, the digital video broadcasting standards, BCH codes are used, and if you can come up with a practical algorithm which decodes more than t errors, that is great. But go back to our algebraic decoder and see that correcting t errors was all that was possible: we used the 2t syndromes to correct t errors, and if you have to correct t+1 errors, there is no way you can correct all of them. And it turns out that for getting close to capacity and doing very well, you do not have to correct all of them. That is the interesting part. Because most channels are stochastic, it is not as if the worst possible pattern of t+1 errors will happen all the time; most of the time things will be okay. So what you need is this: all error patterns of weight up to t I can correct; a sufficient fraction of the weight t+1 patterns I can correct; a sufficient fraction of the weight t+2 patterns; maybe even up to weight 2t. So you should be able to correct larger error patterns, not all of them (there is no way you can correct all of them), but a sufficiently high percentage of them. Then it turns out you can get very good performance guarantees; you can get close to capacity and do all these kinds of fantastic things which modern codes are able to do. And modern codes do not even have a guaranteed minimum distance. That is the interesting point of contrast here: with a BCH code you have a guaranteed minimum distance and you can correct up to the error-correcting capability, but beyond that you cannot correct anything. So it turns out that what is interesting in practice today is to either modify the code construction or modify the decoder so that you can correct large fractions of errors beyond the error-correcting capability. Some of the projects that people might sign up for are about exactly that; for instance, the Chase decoding algorithm is about that: you correct beyond the error-correcting capability. And there is also one more algorithm which is like that. What can you do to go beyond the error-correcting capability of this decoder? That is still of interest today in practice. Of course, we do not have the time to look at it here, but maybe when one of the people doing that project makes a presentation, you can remember this idea.

Another idea which is very important is this notion of a soft decoder. Like I said, decoding beyond the error-correcting capability is an open and interesting issue as of today; it might change later on. The point is that these things change: today some things are of interest, tomorrow something else is of interest, and if you do not learn the fundamentals properly, you will not be able to manage those changes. That is also good to know. Decoding beyond the error-correcting capability: like I said, there is no guarantee you can correct everything, because that is not possible, but can I correct a large fraction of errors beyond t? That is an interesting question which is open. The next idea is this notion of a soft decoder, and this has become very, very important today. So far our decoder model was very, very simple. We said: for every bit that we put into the channel, how many bits do we get out? In our channel model, if you remember, I write down c, and then I say plus e, and then I write down r. In this there is one implicit assumption. What is that assumption? For every bit that I put into the channel, I get only one bit
out. Such models are called hard decision models, and the decoders you build from them are called hard decision decoders. Today, given our advancement in signal processing and all that, this restriction is no longer true. For every bit that you put in, you can definitely get more than one bit out; let us say 8 bits. What do I mean by 8 bits? You have to know a little bit of digital communication here; in the next course I will talk more about it, but if you are doing the communication course now, or have done it in the past, you will know what I mean. The bits are converted into signal vectors, in signal space notation; you can write the signal vectors in some, say, two-dimensional or one-dimensional space, so they have coordinates. The noise gets added to these coordinates: if you are transmitting a signal vector, noise gets added to its coordinates, and you can have more than one bit of accuracy in observing them. So suppose the code bit 0 or 1 gets mapped into +1 or -1 in the signal space, some signal vector or its negative, something like that. So -1 and +1 is what is actually being transmitted, and the noise will add, let us say, a real number to this -1 or +1. What you get at the receiver can therefore be thought of as a signal which has a lot of information in it. You do not have to quantize it to one bit. Say you get the entire range -10 to +10; you do not have to quantize that to just one bit (greater than 0 is one value, less than 0 is the other). You can do, say, an 8-bit quantization of -10 to +10, so you get 8 bits of information for each bit you put into the channel. Can your decoder use those 8 bits? If you can use those 8 bits, you get what is called a soft decoder, and the Chase decoder is again an example of this; so hopefully things repeat, and when you study the Chase decoder you will see how it uses these 8 or more bits of information. In fact, loosely, in theoretical models you will assume that the received value is a real number and then see how to use that real value in the decoder. In implementation you have to use either floating point or fixed point, but eventually you can work with it, and as a model it is good. So these two, decoding beyond the error-correcting capability and soft decoding, are still very much open and interesting problems, in my opinion at least, on BCH codes, so people will be interested in them. Beyond this there are not too many interesting current open problems. So this kind of brings us to the end of BCH codes, and we are going to move towards Reed-Solomon codes, which I will do in the next class. They are more interesting, and they are much more widespread than BCH codes. We will make a comparison between BCH and Reed-Solomon codes, and you will see there are some interesting points of comparison between them. So we will stop here.
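As a pointer for the projects mentioned above, here is a minimal sketch of the Chase idea, applied to the (7,4) cyclic Hamming code with g(x) = x³ + x + 1 from the example earlier. The decoder structure, the parameter p, the squared Euclidean metric, and all names are my illustrative choices, not from the lecture; a real Chase decoder would plug in the algebraic BCH decoder where this sketch uses a single-error syndrome corrector.

```python
# Chase-style soft decoder for the (7,4) cyclic Hamming code (illustration only).
from itertools import combinations

G = [1, 1, 0, 1]                            # g(x) = 1 + x + x^3, lowest degree first
POW = [(1, 0, 0), (0, 1, 0), (0, 0, 1),     # beta^0, beta^1, beta^2
       (1, 1, 0), (0, 1, 1), (1, 1, 1), (1, 0, 1)]  # beta^3..beta^6, beta^3 = 1 + beta

def encode(msg):
    """Multiply the 4-bit message polynomial by g(x) to get a 7-bit code word."""
    c = [0] * 7
    for i, m in enumerate(msg):
        if m:
            for j, gj in enumerate(G):
                c[i + j] ^= gj
    return tuple(c)

def hard_decode(word):
    """Bounded-distance hard decoder: fix at most one error via the syndrome."""
    w = list(word)
    s = (0, 0, 0)
    for i, bit in enumerate(w):
        if bit:
            s = tuple(a ^ b for a, b in zip(s, POW[i]))   # syndrome = w(beta)
    if s != (0, 0, 0):
        w[POW.index(s)] ^= 1                # the syndrome points at the error position
    return tuple(w)

def chase_decode(r, p=2):
    """r: 7 real values (+1 carries bit 0, -1 carries bit 1). Flip every subset
    of the p least reliable positions, hard-decode each candidate, and keep the
    code word closest to r in squared Euclidean distance."""
    hard = [0 if v > 0 else 1 for v in r]
    weak = sorted(range(7), key=lambda i: abs(r[i]))[:p]
    best, best_metric = None, float("inf")
    for k in range(p + 1):
        for flips in combinations(weak, k):
            cand = hard[:]
            for i in flips:
                cand[i] ^= 1
            cw = hard_decode(cand)
            metric = sum((ri - (1 - 2 * ci)) ** 2 for ri, ci in zip(r, cw))
            if metric < best_metric:
                best, best_metric = cw, metric
    return best
```

With p = 0 this degenerates to ordinary hard-decision decoding; increasing p lets the decoder use the soft reliabilities to recover some error patterns beyond t = 1, at the cost of up to 2^p hard-decoding calls.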