Okay, so let's begin. We're going to continue with Reed-Muller codes, so let me remind you once again how we think about these codes. To construct a Reed-Muller codeword of length 2^m, you start with m binary variables v1, ..., vm and a polynomial f(v1, ..., vm) of total degree at most r; when I simply say degree, I mean total degree. Then what do you do? You go from the polynomial to a codeword: the codeword is the evaluation of f at every vector in {0,1}^m. How many points do you get? There are 2^m vectors, so evaluating f at all of them gives a codeword of length 2^m. So that is one view of a codeword. Another thing to say is that a codeword is just the truth table of a Boolean function, so there is that view too, and then you can also think in terms of generator matrices and all that. And we've been seeing a bunch of properties. One is the (u, u + v) construction: if you want the r-th order Reed-Muller code of length 2^m, you can build it with the (u, u + v) construction, where u comes from the r-th order Reed-Muller code of length 2^(m-1) and v comes from the (r-1)-th order Reed-Muller code of length 2^(m-1). So u belongs to RM(r, m - 1) and v belongs to RM(r - 1, m - 1), and from this you can build up the generator matrix recursively if you like. From this property we showed that the minimum distance equals 2^(m - r), which is another property that is quite nice to show. What we're going to show today is a few more interesting properties.
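The evaluation map described above can be sketched in a few lines of code. This is a minimal illustration (function name and monomial encoding are my own choices, not from the lecture): a polynomial over GF(2) is given as a list of monomials, each monomial a tuple of variable indices, and the codeword is its evaluation at all 2^m points.

```python
from itertools import product

def rm_codeword(m, monomials):
    # monomials: list of tuples of variable indices, e.g. [(), (0,), (1, 2)];
    # () is the constant monomial 1. The codeword is the evaluation of the
    # sum of these monomials at every point of {0,1}^m, in natural binary order
    # (the first bit of the index goes into v1, as in the lecture).
    word = []
    for v in product([0, 1], repeat=m):
        bit = 0
        for mono in monomials:
            term = 1
            for j in mono:
                term &= v[j]
            bit ^= term
        word.append(bit)
    return word

# f(v1, v2, v3) = 1 + v1*v2 has total degree 2, so its evaluation
# lies in RM(2, 3): a vector of length 2^3 = 8.
print(rm_codeword(3, [(), (0, 1)]))   # [1, 1, 1, 1, 1, 1, 0, 0]
```

Note the output has even weight, consistent with RM(2, 3) being the even-weight code, a fact used later in the lecture.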
And once again, we will play with this connection between the polynomial and its evaluation vector; the two views go hand in hand. You think of a polynomial, and then the vector that results when you evaluate it at every possible value of the variables; you get a vector and then you work with that vector. This relationship is what gives all the nice properties of Reed-Muller codes; that's where everything comes from, so we'll continue with that. The property we see today is about the dual of RM(r, m). You can show that if you take the r-th order Reed-Muller code RM(r, m), its dual is again a Reed-Muller code, namely RM(m - r - 1, m). So this is the result. The first thing to check is that the number of vectors we expect in the dual is the same as claimed, i.e. that the dimensions agree. What is the dimension of the dual of RM(r, m)? It is 2^m - k(r, m). What is k(r, m)? It is 1 + C(m, 1) + C(m, 2) + ... + C(m, r). That's the dimension of the dual. What is the dimension of RM(m - r - 1, m)? It is 1 + C(m, 1) + C(m, 2) + ... + C(m, m - r - 1). Are these two the same? Well, they have to be the same; I gave you the result. But how will you show it? Use 2^m. See, 2^m is actually 1 + C(m, 1) + C(m, 2) + ... + C(m, m). Write it that way, subtract, and all the first terms cancel; then use the identity C(m, k) = C(m, m - k) and you get the result. It's not very hard; it's a straightforward computation. So at least as far as dimension is concerned, these two match. Therefore, if I show that one of these codes is contained in the other, I'll be done: since the dimensions are the same, both have the same number of elements.
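The dimension identity argued above can be checked numerically. A small sketch (the helper name `k_rm` is mine):

```python
from math import comb

def k_rm(r, m):
    # dimension of RM(r, m): number of monomials of total degree <= r
    return sum(comb(m, i) for i in range(r + 1))

# 2^m - k(r, m) should equal k(m - r - 1, m) for every 0 <= r < m,
# by 2^m = sum of C(m, i) and the symmetry C(m, k) = C(m, m - k)
for m in range(1, 10):
    for r in range(m):
        assert 2**m - k_rm(r, m) == k_rm(m - r - 1, m)
print("dimension identity holds for m = 1..9")
```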
So if I show that the right-hand side, the codewords of the (m - r - 1)-th order Reed-Muller code, is contained in the dual of the RM(r, m) code on the left-hand side, I'll be done. What do I mean by saying a codeword is contained in the dual? It should be orthogonal to every single codeword of the original code. So every codeword of RM(m - r - 1, m) should be orthogonal to every single codeword of RM(r, m). Once I show that, I'm done. So that's what we will show. Let's say some a belongs to RM(m - r - 1, m), so it's a codeword there, and b belongs to RM(r, m). What do we want? We want a . b = 0. This is what I want to show. Now there are various ways of doing this, and of course we are going to use the polynomial idea. Remember the vector a is associated with a polynomial f of degree at most m - r - 1, and the vector b is associated with a polynomial g of degree at most r. So what happens if I multiply these two polynomials? What will the evaluation of the product polynomial correspond to, and how will it be related to a and b? Do you see that? Write a = (a_0, a_1, ..., a_{2^m - 1}), where a_i is f evaluated at the point i. What is i here? i is just an integer from 0 to 2^m - 1, and every such integer has an m-bit representation. I take its m-bit representation, take the first bit and use it for v1, the second bit for v2, and so on. So instead of thinking of substituting m bits for the variables, I can think of substituting an integer from 0 to 2^m - 1; nothing wrong with that.
How many bits are there in an m-bit representation? m bits: first bit, second bit, third bit, all the way to the m-th bit. I simply use v1 equal to the first bit, v2 equal to the second bit, v3 equal to the third bit, and so on. That is what I mean by this substitution: I can think of i as a value taken by v1 through vm. Is that clear? Suppose m is 4: I can think of the sixteen 4-bit vectors, or equivalently of the integers 0, 1, 2, ..., 15. If I say I substitute, let's say, 9, what does it mean? I'll actually be substituting 1001; both are the same thing. So this is my vector a, and we know the degree of f is at most m - r - 1. Likewise, b = (b_0, b_1, ..., b_{2^m - 1}), where b_i is g evaluated at the same i, and the degree of g is at most r. Now what happens when I multiply f and g? I get some other polynomial, and its degree is at most (m - r - 1) + r = m - 1. Now if I evaluate that product at i, what do I get? I get a_i times b_i. So if I evaluate f(v1, ..., vm) times g(v1, ..., vm) at i, what do I get? a_i times b_i. Is that okay? So let me define a new vector c = (c_0, c_1, ..., c_{2^m - 1}), where c_i is the product polynomial evaluated at i; and you know c_i = a_i b_i. So what is a . b? It is the sum of the a_i b_i, that is, the sum of the c_i, modulo 2. So if I show that the weight of c is even all the time, then I'm done, am I right? All I have to show is that a . b is zero, and a . b is zero exactly when the weight of c is even. How do I know the weight of c will be even? What is the degree of the product polynomial?
It is at most m - 1. So what happens if I take a polynomial of degree at most m - 1 and evaluate it at all possible points? Will I get an even-weight vector? Yes, I will. That implies c belongs to the Reed-Muller code RM(m - 1, m), and that clearly implies the weight of c is even. So it's a very simple proof, except that you go back and forth between the vector and the polynomial, and you have to be consistent; the first time you see it, it may not really be clear, but if you think about it, it's a very obvious, easy proof. Okay? So let me summarize. You take a vector from the first code: it is the evaluation of a polynomial at all points. Take a vector from the other code: it is the evaluation of a polynomial at every point. If I take the two corresponding coordinates of a and b and multiply them together, it's the same as evaluating the product polynomial at the same position. But interestingly, the product polynomial has degree at most m - 1, which means its evaluation belongs to RM(m - 1, m), and that, you know, is the even-weight code, isn't it? Every vector in that code has even weight. And that shows you that the two codewords are orthogonal. Now notice this logic would also work for m - r - 2: if the degree bound is m - r - 2, the same argument goes through. So why can I not conclude that RM(m - r - 2, m) is the dual? The dimension will not agree; it will be smaller. There are more codewords in the dual. But when you go to m - r - 1, interestingly, the dimension matches, which means this is the whole dual; there's nothing more, so you can stop there. That also makes sense, right? RM(m - r - 2, m) is clearly contained in RM(m - r - 1, m); all these codes are nested inside each other, so it has to work that way. Okay.
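The key lemma of the proof, that any polynomial of total degree at most m - 1 evaluates to an even-weight vector, can be verified exhaustively for a small case. A sketch (helper names are mine), checking all 128 codewords of RM(2, 3):

```python
from itertools import combinations, product

def evaluate(m, coeffs):
    # coeffs: dict mapping monomial tuple -> 0/1, a polynomial over GF(2);
    # returns its evaluation at all points of {0,1}^m
    word = []
    for v in product([0, 1], repeat=m):
        bit = 0
        for mono, c in coeffs.items():
            if c:
                term = 1
                for j in mono:
                    term &= v[j]
                bit ^= term
        word.append(bit)
    return word

m = 3
# all monomials of total degree <= m - 1 = 2: there are 7 of them,
# so these polynomials are exactly the 2^7 = 128 codewords of RM(2, 3)
monos = [t for d in range(m) for t in combinations(range(m), d)]
for mask in range(2 ** len(monos)):
    coeffs = {mono: (mask >> i) & 1 for i, mono in enumerate(monos)}
    assert sum(evaluate(m, coeffs)) % 2 == 0   # every codeword has even weight
print("all", 2 ** len(monos), "codewords of RM(2,3) have even weight")
```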
So this makes a very interesting family. Think about what we have seen. For linear codes, the dual of a linear code is also a linear code, so the family is nicely closed under duality. Likewise, for cyclic codes, interestingly, the dual is also cyclic. And for Reed-Muller codes, again, what do you get? The dual is a Reed-Muller code. For BCH codes you can also show the dual is BCH, and for Reed-Solomon codes you can show the dual is again Reed-Solomon. All those things you can show. So it's a nice, how do you say it, closure property with respect to the dual, and you can see it as a bonus: even if you don't have any immediate use for it, it's good to know that the dual also has good properties; in decoding, we can use this. All right, any questions on how I did this? The first time you see it, this proof can be confusing; that's how all proofs about Reed-Muller codes go: we use the polynomial-evaluation view of the code and go back and forth. Okay, is that all right? So let's take a simple example, m = 3, and see what happens. Let's write down the basis vectors of length 8: the all-ones vector 1 = 11111111; v3 = 00001111; v2 = 00110011; v1 = 01010101. Then we have the products: v1 v2 = 00010001, v1 v3 = 00000101, v2 v3 = 00000011, and finally v1 v2 v3 = 00000001.
So if I want generator matrices for the Reed-Muller codes: well, the 0-th order code is really not that interesting, but anyway let us do it. What is RM(0, 3)? Only the first row, so you get the repetition code, the n = 8 repetition code, an (8, 1, 8) code. Then you have the first-order code RM(1, 3), which is an (8, 4, 4) code; what is its generator matrix? The first four rows. And then if I want the second-order code RM(2, 3), it is an (8, 7, 2) code, the even-weight code. Let me not talk about RM(3, 3), which is the trivial (8, 8, 1) code, not really interesting. So you can use our result now: what is the dual of RM(0, 3)? Put r = 0 and m = 3 in m - r - 1 and you get RM(2, 3). And what happens if I put r = 1? You get the same code back at the same length: RM(1, 3) is in fact a self-dual code; the code is its own dual. So it is an interesting little code. Let us look at its generator matrix: it has a very nice symmetric-looking structure, so self-dual. One property of a self-dual code is that k will be equal to n/2; that is correct, since for a self-dual code k = n - k, so k must equal n/2, and that is satisfied here. Another way to check the self-dual property: any two rows of the generator matrix must be orthogonal to each other, and that is essentially enough to show self-duality. So look at any two rows of the generator matrix here; they are all orthogonal to each other. Because G equals H here, and G times H transpose has to be 0, G times G transpose has to be 0. You can check both conditions very easily. And you can do m = 4: you would list the binary vectors of length 16; it is a bit more painful to write down, but you can do it and you will get the corresponding list.
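The self-duality check just described, every pair of generator rows orthogonal over GF(2), takes only a few lines for RM(1, 3). A sketch (variable names are mine):

```python
from itertools import product

m = 3
points = list(product([0, 1], repeat=m))
ones = [1] * 8
v = [[p[j] for p in points] for j in range(m)]   # rows v1, v2, v3
G = v + [ones]                                    # generator matrix of RM(1, 3)

# self-dual check: every pair of rows (including each row with itself)
# must be orthogonal over GF(2), i.e. G * G^T = 0 mod 2
for a in G:
    for b in G:
        assert sum(x & y for x, y in zip(a, b)) % 2 == 0
print("RM(1,3) generator rows are pairwise orthogonal: self-dual")
```

Together with k = 4 = n/2, this confirms the self-duality claimed in the lecture.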
So what I am going to do is just write down the pairs for m = 4. RM(0, 4) and RM(3, 4) are duals of each other: RM(0, 4) is the repetition code and RM(3, 4) is the even-weight code. And then RM(1, 4) and RM(2, 4) are duals of each other. What are the parameters of RM(1, 4)? It is a (16, 5, 8) code. And what is the dimension of RM(2, 4)? It has to be 11, and the minimum distance is 4, so it is a (16, 11, 4) code. Is that right? Now, there is another useful and simple rule you can keep in mind if you are worried about how to select the basis vectors. You have 16 basis vectors: 1, v1, v2, v3, v4, the products of pairs, and so on, all the way to v1 v2 v3 v4. If I want the basis vectors for RM(2, 4), here is the rule: look at the 16 basis vectors and select all the vectors whose weight is at least 4. They will be exactly the top 11. It is not very difficult to see why: once you go to a product of three variables like v1 v2 v3, the weight drops to 2; only monomials in at most two variables have weight at least 4. So that is a useful rule to keep in mind: among the 16 vectors, all the vectors whose weight is at least 4 make the generator matrix of the (16, 11, 4) code. The minimum distance plays a nice role that way. In this construction it is not so essential, because you can anyway go from the top and pick the rows; but in the Kronecker-product construction I gave you, the rows may show up in some shuffled order, and there you can use this rule very nicely. If I want the r-th order Reed-Muller code, what do I do?
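The weight-selection rule can be verified directly: a monomial of degree d has weight 2^(m-d), so keeping weight at least 2^(m-r) keeps exactly the monomials of degree at most r. A sketch for RM(2, 4) (helper names are mine):

```python
from itertools import combinations, product

def monomial_vector(m, mono):
    # evaluation of the monomial prod_{j in mono} v_j at all points of {0,1}^m
    vec = []
    for v in product([0, 1], repeat=m):
        term = 1
        for j in mono:
            term &= v[j]
        vec.append(term)
    return vec

m, r = 4, 2
all_monos = [t for d in range(m + 1) for t in combinations(range(m), d)]
# selection rule from the lecture: keep the basis vectors of weight >= 2^(m - r)
selected = [mono for mono in all_monos
            if sum(monomial_vector(m, mono)) >= 2 ** (m - r)]
# a monomial of degree d has weight 2^(m - d), so this keeps degree <= r exactly
assert all(len(mono) <= r for mono in selected)
print(len(selected))   # 11 = dim RM(2, 4)
```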
I look at all the rows of that matrix whose weight is at least 2^(m - r), and that will give me my generator matrix. So that is a rule that is useful to remember; just keep it aside as side information. Now let us quickly do m = 5 as an aside. The dual pairs are: RM(0, 5) with RM(4, 5), RM(1, 5) with RM(3, 5), and then RM(2, 5). What happens to RM(2, 5)? It will be self-dual. In fact, you can show that every time m is odd, RM((m - 1)/2, m) is self-dual. And let us quickly do the dimensions, because these may be new. RM(1, 5) is (32, 6, 16). What is its dual RM(3, 5)? (32, 26), with minimum distance 4, right? And RM(2, 5)? The dimension should be 16; you can do the addition, it has to be 16, and the distance is 8. So there is a nice order to these numbers. What is the nice order? The minimum distances go 2^5, 2^4, 2^3 as the order goes 0, 1, 2. That is a nice thing to keep in mind, and in fact it is generally true. So what are the parameters of RM((m - 1)/2, m) in general? The length is 2^m. The dimension is 2^(m - 1); why should it be 2^(m - 1)? Because I know it is self-dual, so k has to equal n/2. And what is the minimum distance? 2^(m - (m - 1)/2) = 2^((m + 1)/2), right? So these are the parameters. Of course, for this to make sense m has to be odd, and then m - (m - 1)/2 works out to (m + 1)/2. So for odd m this is a self-dual code. Okay, so that is all I wanted to say about duals and this part of the structure. Let us move on to encoding; we will use one more interesting property here. The first encoding we can do is to use the generator matrix. What is likely to be the problem if we use the generator matrix? You might say it could be a huge matrix that you have to store.
But we have the recursive structure, right? So maybe you do not have to store too much; maybe you can get away with some savings there. But there is one more problem which you cannot get around if you use the generator matrix as given: it is not systematic. The encoding is not systematic, and maybe you do not like that too much. Okay. So it turns out we can use one more structural property of Reed-Muller codes: they are very, very closely related to cyclic codes. In fact, you can puncture one bit of a Reed-Muller code and get a cyclic code. That is the fact. So the idea is to use puncturing: you puncture one bit of the Reed-Muller code to get a cyclic code. How do you do that? I will tell you how it is done. I will not prove it very rigorously; the full proof is too painful and unnecessary here, but I will show you how it works, and you will see that it is plausible that it works out that way. So how do you use this idea? Suppose I tell you it is true: puncture the Reed-Muller code in the right position and you get a cyclic code. Then what can you do? You encode with the cyclic code, even systematically. Then how do you regenerate the unpunctured code? You add an overall parity bit, since the punctured bit is a parity of the rest. So this can be used very nicely to build an encoder; this property can be used to build a systematic encoder. That is the idea. So which position should we puncture? That is the first question. I will tell you the answer, and then we will do this first by example: a very simple example, and then we will see why it should be true in general. We will begin with the simplest example, RM(1, 3).
We will take the first-order Reed-Muller code of length 8. What should we do? Let me take the dual of this code. What is the dual? RM(1, 3) itself is the dual; it is self-dual, so it does not matter. That is nice. So let us write the generator matrix, and I am going to write the all-ones row 1111 1111 at the bottom, because that is what we are maybe used to, with the remaining rows above it: v1 = 01010101, v2 = 00110011, v3 = 00001111, and then 11111111. Now, this is a generator matrix, and since the code is self-dual, I suddenly know it is also a parity-check matrix; originally I did not know that, but now I do. When you stare at it, you should be reminded of something. What are you reminded of? The Hamming code, right? The (7, 4) Hamming code. Are you reminded of that? Which part reminds you of it? This part: the rows v1, v2, v3 restricted to the last seven columns give exactly the seven nonzero columns of length 3, which is the parity-check matrix of the (7, 4, 3) Hamming code. So what is the additional operation that has been done? We saw this before: we are extending the code, adding an overall parity bit. Extension and puncturing are opposites of each other, right? So if you take the extended version and puncture, what will you get? Let me be careful there: puncturing the generator matrix is easy, but puncturing via the parity-check matrix is a bit more painful. What happens when I puncture this? The all-zero first column will go away, and the all-ones parity row will also go away. So we get back the (7, 4, 3) Hamming code.
What do I know about the (7, 4, 3) Hamming code? It is a cyclic code — but not in this column order. In what order is it a cyclic code? Take a primitive element alpha of GF(8) and order the columns as 1, alpha, alpha^2, alpha^3, all the way to alpha^6. Not in the natural order 001, 010, 011, and so on: the first column will be 1, which is 001; the next will be alpha, which is 010; the next will be alpha^2, which is 100; the next will be alpha^3, which is 011; etc. So you put the columns in a different order and you get a cyclic code. Okay. So here is the idea: I take the Reed-Muller code, puncture, and then evaluate the polynomial at the points in a different order, the order given by powers of the primitive element. What do you get when you do that? You get the Hamming code as a cyclic code. Essentially, that is the idea: this code is cyclic after a column permutation. The same thing can be shown for an arbitrary Reed-Muller code, though there it is a little more difficult to prove; it is not as easy as for the extended Hamming code. So the recipe is: the dual of a Reed-Muller code is again a Reed-Muller code; puncture the position corresponding to the zero vector, so I am left with the evaluations at the nonzero vectors; then reorder the evaluations according to the primitive element. I can always take a primitive element of GF(2^m) and reorder: previously I was evaluating in the natural binary order, and now the points come in some crazy order following 1, alpha, alpha^2, and so on. Once you reorder and evaluate, it turns out that the punctured Reed-Muller code is cyclic. That is the idea. So let me write down the result, and I will give you a hint as to how you prove it, without really proving it. Okay, so this is the result.
The main result is: you take a Reed-Muller code, in general RM(r, m), and puncture the first bit. What do I mean by the first bit? Puncture the evaluation corresponding to the all-zero point (0, 0, ..., 0); it is quite easy to do this: simply do not evaluate there. So you work only with the nonzero vectors for v1 through vm. And then what do I do? Evaluate in the order 1, alpha, alpha^2, alpha^3, ..., alpha^(2^m - 2). And what is alpha? It is a primitive element of GF(2^m). What do I mean by evaluating a polynomial in m binary variables at alpha^i? How can I evaluate f(v1, v2, ..., vm) at alpha^i? Previously we saw evaluating it at the integer i, where i was between 0 and 2^m - 1. Now what do you do for alpha^i? Well, I take its vector representation; I know it is m bits. So you put v1 equal to the first bit of the vector representation, v2 equal to the second bit, and so on. Except that now, whereas the simple binary representation gave the points in the natural binary order, they will come in some crazy order according to the primitive element's powers in the field arithmetic. That is the only difference, apart from one thing I skip: I am not evaluating at 0. Zero I am skipping; I am only evaluating at 1, alpha, ..., alpha^(2^m - 2). If you do that, it turns out this punctured Reed-Muller code is cyclic. After puncturing we usually put a star: RM(r, m)* is the punctured version, with the first position punctured; evaluated in this sequence, you get a cyclic code. So this becomes cyclic. All right, do you have a question? Is it okay?
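The claim can be checked by brute force for the example RM(1, 3). A minimal sketch (assuming the primitive polynomial x^3 + x + 1 for GF(8) and one fixed bit order for the substitution; any fixed choices work, and the helper names are mine): build all 16 codewords of the punctured code by evaluating every affine polynomial at 1, alpha, ..., alpha^6, then verify closure under cyclic shifts.

```python
from itertools import product

def gf8_powers():
    # alpha^0, ..., alpha^6 in GF(8) with x^3 = x + 1, as bit triples
    # (coefficients of 1, x, x^2)
    elems, cur = [], (1, 0, 0)
    for _ in range(7):
        elems.append(cur)
        c0, c1, c2 = cur
        cur = (c2, c0 ^ c2, c1)   # multiply by alpha and reduce x^3 = x + 1
    return elems

points = gf8_powers()             # the nonzero field elements, zero skipped

# punctured RM(1,3): evaluate every affine f = a0 + a1 v1 + a2 v2 + a3 v3
# at the nonzero field elements, ordered by powers of alpha
code = set()
for a0, a1, a2, a3 in product([0, 1], repeat=4):
    word = tuple(a0 ^ (a1 & p[0]) ^ (a2 & p[1]) ^ (a3 & p[2]) for p in points)
    code.add(word)

# cyclic check: every cyclic shift of a codeword is again a codeword
for w in code:
    assert w[1:] + w[:1] in code
print(len(code), "codewords; punctured RM(1,3) is cyclic")
```

The 16 codewords of length 7 with minimum weight 3 are exactly a (7, 4, 3) Hamming code, matching the example worked earlier.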
That's the idea. It's just a notation: RM(r, m) is the code, and if I put a star, it means I am puncturing; the punctured version is denoted RM(r, m)*. So, for instance, the (7, 4, 3) Hamming code is RM(1, 3)*. Yes — the first bit is dropped. How can it be a cyclic code? What do you mean, which codeword is the cyclic shift of which? See, the way I think of cyclic codes is: you have a codeword; if it is cyclically shifted, it should continue to be a codeword. That's all. Don't think of obtaining every codeword as a cyclic shift of some other fixed codeword; that is not required. And remember, in the polynomial view a cyclic shift is multiplication by x modulo x^n - 1; together with addition that gives you the whole structure. Okay? This stands as a difficult result to prove in general, so I am not going to prove it. Now, what will be the parameters of RM(r, m)*? RM(r, m) is a (2^m, k(r, m), 2^(m - r)) code, so RM(r, m)* has length 2^m - 1. What will k be? k will remain the same; you can quickly see that this is true, since puncturing one position does not reduce the dimension here — you will have the same number of polynomials. It's not really difficult to see that. And the minimum distance 2^(m - r) will become, definitely, 2^(m - r) - 1: take a minimum-weight codeword that is nonzero in the punctured position and you lose one from its weight; you can show that's all. And remember, this can be made a cyclic code; that is the advantage: it has a generator polynomial g(x), which you can use for systematic encoding. Very nice. And what is also nice, although we won't derive it, is that for the generator polynomial of this code there is a formula — some way to write down g(x). Somebody may ask me: what is the formula?
If you want, you can look at the book — the Lin and Costello book has this formula, so you can see it there. It's a complicated formula; we're not going to do the derivation in class, but there is a formula, so it's not very scary: it tells you how to find g(x) for any case. But it is not a BCH code: in many cases it's not a BCH code. In some cases it is, but very few; mostly it's not a BCH code. It will be a subcode of a BCH code — a cyclic, non-BCH code. Okay. Now let me give you an indication of why this has to be true, why the punctured code works out to be cyclic. Remember what I want to show: every codeword corresponds to a polynomial evaluation, and this codeword is basically the evaluation at the points alpha^i in order. If I cyclically shift it to the left, what happens? Position i of the shifted word is the evaluation at alpha^(i + 1). Right? So I have to show that f(v1, ..., vm) evaluated at alpha^(i + 1) is the same as some other polynomial g(v1, ..., vm) evaluated at alpha^i. If I show that, I'm done; that is exactly the cyclic property. Now, it turns out that instead of thinking of this as a polynomial in binary variables, you can also think of it as a polynomial in a single variable with coefficients from GF(2^m). It's a somewhat peculiar kind of polynomial, and a bit more complicated to explain here, but it is always possible: you can think of it as a univariate polynomial over GF(2^m) with coefficients from GF(2^m); you put the argument alpha^i directly into the polynomial and you get back a bit. So you have a polynomial over GF(2^m) with GF(2^m) coefficients, but when you plug in any alpha^i, what will you get? You'll get a bit.
You'll never get another element of GF(2^m); it's a crazy kind of polynomial, right? Think about it — there are polynomials like that, and it's quite interesting to think about what they look like. So you can think of this polynomial evaluation not as a polynomial in multiple binary variables, but as a polynomial in a single variable, where that single variable ranges over GF(2^m). Once you go to that setting, moving from evaluation at alpha^i to alpha^(i + 1) is very simple: you take f(x) and, instead of x, you put alpha x. You get g(x) = f(alpha x), and f(x) evaluated at alpha^(i + 1) is the same as g(x) evaluated at alpha^i. So it's very easy once you have a univariate polynomial over GF(2^m). We won't see the proof, but that's the idea. What I'd like you to do is this: in the BCH and RS codes assignment — go to my web page and look at the BCH codes and RS codes problem set — there is a last problem. That problem basically describes the construction of Reed-Solomon codes by polynomial evaluation. Try it for a while. Instead of going to the Reed-Muller case, which is much fancier, with multiple variables and all that, the Reed-Solomon case is much easier. There are hints for that problem; try solving it and you'll see exactly what I mean. There is a way to show Reed-Solomon codes are cyclic along these lines, and it comes out nicely. Try that, and it will give you, I think, a taste of how this proof works. In general, if you're interested, come and talk to me later; I can give you references. So that's the idea, and the main result we have is quite useful and powerful: the punctured version is cyclic.
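The x -> alpha x substitution argument is easiest to see in the Reed-Solomon setting the lecture points to. A sketch over GF(8) (assuming the primitive polynomial x^3 + x + 1 and an arbitrary example polynomial; all names are mine): a codeword is (f(1), f(alpha), ..., f(alpha^6)), and its cyclic shift equals the evaluation of g(x) = f(alpha x), which has the same degree, so the code is closed under shifts.

```python
# GF(8) arithmetic via log/antilog tables, primitive polynomial x^3 + x + 1
exp = [1]
for _ in range(6):
    a = exp[-1] << 1          # multiply by alpha (i.e. by x)
    if a & 0b1000:
        a ^= 0b1011           # reduce modulo x^3 + x + 1
    exp.append(a)
log = {a: i for i, a in enumerate(exp)}

def mul(a, b):
    if a == 0 or b == 0:
        return 0
    return exp[(log[a] + log[b]) % 7]

def evaluate(coeffs, x):
    # Horner evaluation of a polynomial with GF(8) coefficients
    acc = 0
    for c in reversed(coeffs):
        acc = mul(acc, x) ^ c
    return acc

f = [3, 1, 6]                                     # some degree-2 polynomial
word = [evaluate(f, exp[i]) for i in range(7)]    # RS codeword, n = 7
shift = word[1:] + word[:1]                       # left cyclic shift
g = [mul(c, exp[j]) for j, c in enumerate(f)]     # g(x) = f(alpha * x)
assert shift == [evaluate(g, exp[i]) for i in range(7)]
print("cyclic shift realized by the substitution x -> alpha*x")
```

The same mechanism, applied to the univariate rewriting of the multivariate Reed-Muller polynomial, is what makes the punctured code cyclic.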
And it has a generator polynomial which you can compute; somebody knows the formula for it, so you can compute it and then do encoding very easily. All right, so that's the Reed-Muller encoding part; as far as encoding is concerned, we'll simply stop there and move toward decoding. Of course, the main reason I did encoding is to convince you that the encoders are simple: even for a very long Reed-Muller code, encoding is very, very simple. The next thing we're going to talk about is decoding, and the decoder for Reed-Muller codes is really, really interesting — one of the most interesting and useful decoders, because it is a kind of decoder we have never seen so far: a very implementable decoder which can correct errors beyond half the minimum distance. If you think about it, that's a scary kind of claim; we never saw a decoder like that. And it's a very simple decoder; you'll probably say it's the simplest thing in the world, much simpler than Berlekamp-Massey, error locators, erasures, and all that — no such complications. But it turns out it can correct errors beyond the guaranteed bound, so it's powerful. On the flip side, it won't correct all the error patterns it ought to: if the error-correcting capability is t, there can be patterns with fewer than t errors that it fails to correct. So in many ways it's a very interesting kind of decoder. And in the next course, if we do the next course, many of the decoders we'll see will be like that: they will not have a guaranteed error-correcting capability, but they will perform very, very well, because most of the time they can correct a much larger number of errors. So that is the idea; it's an interesting creature.
And this is the first kind of decoder we're seeing where that plays a role. So, I'm going to first describe the concept behind the decoding, and then tell you how to implement that concept for Reed-Muller codes. What is the concept behind the decoding? The concept is called majority logic: you decode by majority logic. Let's say we have a code C, which is an (n, k) code, and a codeword c1, c2, ..., cn is sent over some channel. Whatever the channel does to it, we receive r = r1, r2, ..., rn. Now, here is the first difference between this decoder and the bounded-distance decoders and so on: we will not be attempting to find the entire codeword directly, or the entire error vector directly. We will try to decode bit by bit. As it turns out, surprisingly, that makes a big difference: whether you try to find it bit by bit, or whether you try to find the codeword directly. In a way, if you look at the bounded-distance decoder, or even the syndrome decoder, we were trying to find the codeword directly, the error vector directly. What we will do instead is try to find each bit on its own: I will be concerned only with decoding c1, say. In fact, we won't even try to make sure that all the individual bits we decode, taken together, actually form a codeword. We don't even have to care about that. We will simply say: I decoded these bits; since the encoding is systematic, I will simply take a subset of those bits and say, that's my message. I don't even have to worry about ensuring that the entire decoded word I put out is actually a codeword. So that is called bitwise decoding. This is a bitwise decoding method: we will try to find an estimate ci-hat for every i. All right.
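The bitwise scheme just described can be sketched in a few lines (the names are mine, and estimate_bit is a placeholder for the majority-logic rule the lecture defines later):

```python
def bitwise_decode(received, estimate_bit, message_positions):
    """Bitwise decoding: estimate every bit independently, then read the
    message straight off the systematic positions.  No attempt is made to
    check that the estimates taken together form a codeword."""
    estimates = [estimate_bit(received, i) for i in range(len(received))]
    return [estimates[i] for i in message_positions]

# With a trivial per-bit rule (just trust the received bit), the message
# is simply read off the systematic positions.
msg = bitwise_decode([1, 0, 1, 1], lambda r, i: r[i], [0, 1])
```

The point is structural: the decoder's output is a per-position estimate plus a projection onto the message coordinates, nothing more.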
And like I said, there is no effort to ensure that c1-hat, c2-hat, and so on, up to cn-hat, actually form a codeword. We won't even try to do that; we'll just find estimates for each bit of the codeword. So that's the first idea. The next idea, and again these are all powerful ideas which you'll see later in the second course, and which people have used to build very powerful decoders. Majority logic itself is not that powerful, but the concepts here are very useful in greater generality. So, first: bitwise. What is really nice about bitwise decoding is that you are not worried about n. n could be very large, 100,000, but I am only worried about one bit at a time, so maybe it's not too scary. The next idea, which is very useful, is to use some local structure around the i-th bit. What do I mean by local? I'll come to it. The local structure can be defined in many ways, and it is used to estimate ci. These two ideas have proved to be very powerful today: in modern implementations of codes, you do bitwise decoding, and how do you do it? You use only the structure around ci, so to speak: local structure. Again, at very large block lengths, local structure is much easier to deal with than the global, huge structure; things like the 2^(n-k) syndromes you won't even see. You only look at one bit and what's around that bit. What do I mean by what's around that bit? Let me clarify. At least in majority-logic decoding, this is how we interpret local structure: you look for parity checks that involve ci. This is the important thing, and it is used even in modern codes: you always look for parity checks that involve the i-th bit. That is local structure. So what do I mean by a parity check? What is a parity check?
Every row of the parity check matrix is a parity check. Are those the only parity checks? No: every codeword in the dual is a parity check. So when I say parity check, I mean a codeword in the dual. And among the codewords in the dual, for a parity check that involves ci, what should the codeword in the dual have? A 1 in the i-th position. Okay. So we have this Reed-Muller code, and we are stuck with decoding it. How do we do it? We try to exploit the local structure of the Reed-Muller code. Another approach people take is to not start with the Reed-Muller code at all. They say: I will define a code with very nice local structure. When I define the code itself, I will build in a local structure that is very nice. Then you don't have to go hunting for codewords in the dual with a 1 in the right position, because you demanded it from the start; you just see it and pick it up. With the Reed-Muller code, since we are doing, in a way, a kind of legacy-compatible implementation, you have the Reed-Muller code and you are stuck with it; you don't have the choice of defining your own code. So you take the Reed-Muller code and try to find its local structure. What local structure can you find? Try to find codewords in the dual which have a 1 in the right position. Now I want to remind you of one powerful consequence of the cyclic property that you saw. Because of the cyclic property, what is nice is that, at least for the punctured version, if I find local structure for one particular bit, say i = 1, the first bit, what have I also found? Local structure for every single bit. Why is that? You simply cyclically rotate it. You know that the dual of a cyclic code is also cyclic. So if I have a parity check, some codeword in the dual, I simply cyclically shift it.
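As a preview, here is a hedged sketch of how parity checks involving bit i can vote on its value, and of how a cyclic shift moves a check from position 0 to position s. The example check sets are mine, and counting the received bit as one extra vote is my convention; the actual Reed-Muller decoding rule comes in the next class:

```python
def vote_for_bit(received, i, checks):
    """One-step majority-logic estimate of bit i.  Each check is a set of
    positions (the support of a dual codeword) containing i; it votes
    c_i = XOR of the received values in its other positions.  The received
    bit itself is counted as one extra vote (my convention here)."""
    votes = [received[i]]
    for check in checks:
        est = 0
        for j in check:
            if j != i:
                est ^= received[j]
        votes.append(est)
    return 1 if 2 * sum(votes) > len(votes) else 0

def shift_check(check, s, n):
    """The dual of a cyclic code is cyclic, so shifting a parity check that
    contains position 0 by s gives a parity check containing position s."""
    return {(j + s) % n for j in check}

# Toy example: the length-3 repetition code {000, 111}.  The dual codewords
# 110 and 101 give the checks {0, 1} and {0, 2} for bit 0.
checks0 = [{0, 1}, {0, 2}]
bit0 = vote_for_bit([1, 1, 0], 0, checks0)   # one error at position 2: votes 1, 1, 0
shifted = shift_check({0, 1}, 1, 3)          # now a check containing position 1
```

The repetition code is only an illustration; for Reed-Muller codes, the point of the lecture is that finding such check sets for one bit of the punctured code hands you check sets for all bits.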
I'll get another codeword of the dual. And because I cyclically shifted it, a one in the first position becomes a one in the second position, the third position, whatever. So that's where the Reed-Muller code is nice: you only have to find the local structure for one particular bit, and the same structure applies to every bit through cyclic shifts. Now, if you work in the cyclic order, the coordinates come in as alpha^0, alpha^1, alpha^i and so on, and the local structure might actually be easier to find there than in the normal bit order, the 00...00, 00...01 order. So the cyclic order is slightly tricky, but as long as you handle that, it will do. That's the idea. So these are the crucial ideas; like I said, in the very modern interpretation they turn out to be very useful, but we'll apply them to Reed-Muller codes first. We'll try to find parity checks that involve the i-th bit, and I'll tell you what majority logic is. That we'll do in the next class, sometime next week; this is the last class for this week. Enjoy your Prashatabh break. I don't know if you're doing something interesting for Prashatabh. Anyway, have fun. We'll meet on Monday.