Anyway, so we are going to continue our study of Reed-Muller codes. We saw that there are several ways to view them, and that different views of the same Reed-Muller code give you different properties, which we can explore and exploit. So first, let me remind you once again: what was the Reed-Muller code RM(r, m)? The basic idea is, when you think of a codeword, you think of it coordinate by coordinate: c_0, c_1, c_2, all the way to c_{2^m − 1}. And this indexing 0, 1, 2, etc. — we think of the indices as different objects. That's something new in Reed-Muller codes. Usually the indices just go from 0 to 2^m − 1; what's the big deal, right? They didn't really matter in any of the codes we saw so far. But Reed-Muller codewords are basically evaluations on the indices themselves. So the indices play an important role, and how they tie up with the evaluation gives us a lot of nice properties. How do we think of these indices? Essentially, we think of each index as an m-bit binary vector. The index runs from 0 to 2^m − 1, taking 2^m possible values, and you think of each c_i as the evaluation of some m-variable Boolean polynomial at the i-th binary vector. What is the property of this polynomial? Its degree is at most r. So for the Reed-Muller code RM(r, m), the degree of the polynomial is less than or equal to r. Now, the order in which you do the evaluation could be, for instance, the natural binary order. What is the natural binary order? 0, 1, 2, and so on.
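To make the evaluation view concrete, here is a minimal Python sketch. The helper name `rm_codeword` and the monomial encoding are my own for illustration, not from the lecture: a polynomial of degree at most r in m variables is given as a set of monomials, and the codeword lists its evaluations at all 2^m points in the natural binary order.

```python
from itertools import product

def rm_codeword(m, r, monomials):
    """Evaluate a Boolean polynomial of degree <= r at all 2^m points.

    `monomials` is a set of variable-index tuples, e.g. {(), (0,), (1, 2)}
    meaning 1 + x0 + x1*x2.  The codeword c_0 .. c_{2^m - 1} lists the
    evaluations in the natural binary order of the points (x0 = most
    significant bit of the index, by convention here).
    """
    assert all(len(mon) <= r for mon in monomials), "degree must be <= r"
    code = []
    for point in product((0, 1), repeat=m):   # natural binary order
        bit = 0
        for mon in monomials:                 # XOR the monomial values
            val = 1
            for var in mon:
                val &= point[var]             # product of selected coords
            bit ^= val
        code.append(bit)
    return code
```

For example, in RM(1, 3) the polynomial x0 evaluates to 0 on the first four points and 1 on the last four, giving the codeword 00001111.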
Yeah, the integers — both are the same, right? m-bit binary vectors are the same as the integers from 0 to 2^m − 1. So you can do it that way. Or you can do it in any other order. Of course, you should be motivated by some reason for trying an order other than the natural binary order. What can be one motivation? One way is to view these m-bit binary vectors as elements of GF(2^m). That gives you a different order. What is the order? 0, 1, α, α², and so on, where α is a primitive element. So that changes the order in which you evaluate. And we saw — I didn't fully prove it, but I gave you some motivating arguments for why, if you evaluate in that order and puncture the first coordinate (the one corresponding to 0), the code you get is a punctured, very closely related version of the Reed-Muller code, which we called RM(r, m)*. The nice property is that this code is cyclic. And like I said, people have come up with formulas for the zeros of that cyclic code — the zeros determine the generator polynomial — so it's possible to use that for encoding. So that was one use for this view. And the last thing I was talking about was decoding. I said there's something called majority logic decoding, which is going to be useful. Majority logic decoding involves finding suitable parity checks from the dual code. And to pick those suitable parity checks from the dual code, we will view these m-bit binary vectors as yet another kind of entity: points of what's called a finite geometry.
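The 0, 1, α, α², … order can be sketched in a few lines of Python. This is a small illustration under an assumption: I represent field elements as m-bit integers and use a primitive polynomial supplied as a bit mask (for GF(8), x³ + x + 1 is the usual choice); the function name is hypothetical.

```python
def gf2m_evaluation_order(m, prim_poly):
    """Index order 0, 1, alpha, alpha^2, ... with field elements as m-bit ints.

    `prim_poly` is the primitive polynomial as a bit mask, e.g. 0b1011 for
    x^3 + x + 1 over GF(2).  Multiplying by alpha is a left shift, reduced
    modulo the primitive polynomial whenever a degree-m term appears.
    """
    order = [0, 1]                  # the zero element, then alpha^0 = 1
    x = 1
    for _ in range(2 ** m - 2):     # alpha, alpha^2, ..., alpha^(2^m - 2)
        x <<= 1                     # multiply by alpha
        if x >> m:                  # overflow: reduce mod prim_poly
            x ^= prim_poly
        order.append(x)
    return order
```

For m = 3 this produces the permutation [0, 1, 2, 4, 3, 6, 7, 5] of the indices 0–7; puncturing the 0 coordinate leaves the coordinates 1, α, …, α⁶, where a cyclic shift corresponds to multiplying every index by α.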
Okay, so we'll think of them as points of a geometry, and we'll think of lines in the geometry, planes in the geometry, and that will give us a good handle on constructing these dual-code vectors that are good for majority logic decoding. So that is the crucial idea. But before that, I'm going to talk about majority logic decoding in general, without reference to Reed-Muller codes, so that we get a handle on what it is about. And like I said, versions of majority logic decoding are used even today in modern decoders, so it's a very relevant thing to study. Any questions on any of this? Seems fine. Okay, one thing I don't have is a tutorial on Reed-Muller codes; I'll try to add one this week — maybe not today, hopefully tomorrow I'll upload it. Okay, so here's majority logic decoding. I think I started this before we closed last week, right? Let me restate it. Suppose you have an (n, k) code C. The dual is an (n, n − k) code C⊥. If you have a codeword c in C and a codeword v in C⊥, then c · v = 0. What does that mean? c_1 v_1 + c_2 v_2 + ⋯ + c_n v_n = 0. So if you think about it: if I fix a v in the dual code, this gives me an equation that is satisfied by every single codeword of C. For a fixed v, this is one equation satisfied by all codewords. That's the property of the dual — in fact, it's the definition of the dual. So in a way, every codeword of the dual specifies a parity check that is satisfied by the codewords of the original code. Why do I say this is a parity check? Remember, each v_i is either a zero or a one.
Okay, so once I fix a v, some c_i's will show up in the sum and some won't. Another way to write the same equation is: the sum of c_i over all i with v_i = 1 equals zero. In this form you see clearly that this is a parity check: wherever you have a one in v, those bits of the codeword should XOR to zero — they should satisfy even parity. That is the idea of a parity check once again. Now, usually, if you have an unknown quantity — for instance, in a decoding problem the codeword is unknown; codeword plus error vector is what you receive — it's good to have equations involving the unknown bits. Dual codewords like this give you exactly such equations, and you try to solve for the unknown bits using them. So that's how we'll use these things. Specifically, what we'll do is try to find orthogonal parity checks — parity checks orthogonal on, say, the i-th bit. So when is a set of parity checks orthogonal on the i-th bit? Suppose I have v1 and v2, both belonging to the dual. First, v1_i = v2_i = 1. And outside of the i-th place, v1 and v2 should not have any one in common. If I try to write it down, I'll write it like this: the set {j : v1_j = 1} and the set {j : v2_j = 1} should intersect exactly in {i} and nowhere else. And v1 and v2 should both be non-zero — I won't allow them to be zero, because the zero vector is not a check at all.
Zero dotted with anything is zero, so the checks should be non-zero. In the i-th place both should be one, and in every other place, if v1_j is one, v2_j should be zero. There are other ways of writing it; let me try one more. Look at the componentwise product (v1_1 v2_1, v1_2 v2_2, …, v1_n v2_n). What should this vector be? It should be the canonical i-th basis vector e_i. What are the canonical basis vectors? Zero everywhere except a one in the i-th place. There are various ways of writing it, but they are all equivalent. So v1 and v2 are said to be orthogonal parity checks on the i-th bit. Now, the word "orthogonal" needs careful interpretation. It does not mean that v1 · v2 = 0 — it will not be zero; in fact it will be one. So "orthogonal on the i-th bit" should be thought of as one phrase; you can't interpret "orthogonal" separately. When two checks are orthogonal on the i-th bit, if you remove the i-th place they do not overlap at all — that's much stronger than merely being orthogonal as vectors. So you can imagine: if I'm trying to decode the i-th bit, it's unknown to me. v1 gives me one equation involving the i-th bit. Why is this specific type of v2 interesting? I already have v1, one check on my codeword c_1, …, c_n. Why would a v2 that is orthogonal on the i-th bit be interesting to me? In a way, the information about the i-th bit given by the check v1 and the information given by the check v2 are independent — statistically independent.
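The componentwise-product condition is easy to state in code. Here is a small sketch (the function name `orthogonal_on_bit` is my own): two checks are orthogonal on bit i exactly when both are non-zero and their componentwise AND is the canonical vector e_i.

```python
def orthogonal_on_bit(v1, v2, i):
    """Test whether two parity checks are orthogonal on position i.

    v1, v2 are 0/1 lists of equal length.  Condition: both non-zero, and
    their componentwise product equals e_i -- they share position i and
    nothing else.
    """
    if not any(v1) or not any(v2):
        return False                          # zero vector is no check
    prod = [a & b for a, b in zip(v1, v2)]
    return prod == [1 if j == i else 0 for j in range(len(v1))]
```

For instance, [1,1,1,0,0] and [1,0,0,1,1] are orthogonal on bit 0, but [1,1,1,0,0] and [1,1,0,1,0] are not, since they also share position 1.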
Why is that? Because they involve different positions of the codeword, and they are corrupted by different parts of the error vector, which are independent. So they are independent. Whenever I have independent information like that, I can combine it much more easily. That's one way to think about it, and that's how it's used in modern decoders — people like these kinds of checks. But for majority logic, we'll do something far simpler. So suppose there are J parity checks orthogonal on the i-th bit, for some J. Let me first tell you what majority logic decoding is, and then I'll give you the claim. What do these J parity checks look like? Every check will involve c_i. The first check will be something like c_i + c_{j_1} + c_{j_2} + ⋯ + c_{j_{n_1}} = 0. The second one will be c_i + c_{j_{n_1+1}} + c_{j_{n_1+2}} + ⋯ + c_{j_{n_1+n_2}} = 0, and so on — the notation is clumsy, but you know what I mean. The crucial point: what do you know about the bits in the first check and the bits in the second check, apart from c_i? They have nothing in common — no intersection. That's why I'm using different index sets for each check. Only c_i is common to all of them.
Okay, so what can I do with the first equation? Suppose the decoder receives r = c + e — I receive r, which is codeword plus error. What I can do is estimate the i-th bit. Like I said before — I think I mentioned this in the last class — these decoders do not try to decode the whole block; they decode one bit at a time. So right now the decoder will try to decode the i-th bit alone. How can it do that? It can take the first equation. I know c_i + c_{j_1} + c_{j_2} + ⋯ + c_{j_{n_1}} = 0. So one thing I can do is use the first equation to form an estimate ĉ_i^(1). What is that estimate? I simply take the received bits and do the same addition: ĉ_i^(1) = r_{j_1} + r_{j_2} + ⋯ + r_{j_{n_1}}. You might be willing to believe that this is a reasonable estimate based on the equation. What about the second equation? It gives me another estimate: ĉ_i^(2) = r_{j_{n_1+1}} + r_{j_{n_1+2}} + ⋯ + r_{j_{n_1+n_2}}. You see what I'm doing, right? I go back to these checks and replace all the c's by r's, except for c_i itself; what remains I take as an estimate of c_i. So how many estimates do I have? J estimates. How do I combine these J estimates? That's the one idea in majority logic: simply take a majority. So ĉ_i = majority(ĉ_i^(1), ĉ_i^(2), …, ĉ_i^(J)). For simplicity, you might want to think of J as odd. If it's even, you wonder what to do when there's a tie — that's the confusion.
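The estimate-and-vote step above can be sketched directly. This is a minimal illustration (the name `majority_logic_bit` is mine): each check contributes the XOR of the received bits in its positions other than i, and the bit decision is the majority of those J estimates, with J assumed odd.

```python
def majority_logic_bit(r, checks, i):
    """Estimate bit i of the codeword from the received word r.

    `checks` is a list of J parity checks (0/1 lists) orthogonal on bit i.
    Each check yields the estimate XOR of r_j over its positions j != i;
    the answer is the majority of the J estimates (J assumed odd).
    """
    estimates = []
    for v in checks:
        est = 0
        for j, vj in enumerate(v):
            if vj and j != i:      # replace c_j by r_j; skip c_i itself
                est ^= r[j]
        estimates.append(est)
    return 1 if sum(estimates) > len(estimates) // 2 else 0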
So let's just say J is odd; then there's a clear majority, and the majority gives you ĉ_i. It seems like a nice idea. Now my question is: you know these checks are orthogonal on the i-th bit. How many errors can this majority logic decoder correct? What is its error-correcting capability? Does it have any at all? It does. How many errors can it correct? You're saying ⌊J/2⌋? That would make sense. Here is the claim: if the weight of the error vector e is at most ⌊J/2⌋, then ĉ_i = c_i. Do you believe me if I say that? Say J is odd; don't worry about ties. Correct for the whole word? No — I'm only talking about the i-th bit. Don't worry about decoding the entire codeword; this is the error-correcting capability for the i-th bit. I have J orthogonal parity checks; up to what weight of e will my i-th bit be accurate? The statement is true — you see that? If you have only ⌊J/2⌋ errors, and you have J equations, and these equations involve different bits — they don't overlap at all outside position i — then at most how many of them can be corrupted by the error? ⌊J/2⌋ of them. For more than ⌊J/2⌋ checks to be corrupted, the weight of e would have to exceed ⌊J/2⌋. Now, when I say "corrupted", I should be careful: corruption can also happen in a way that is favorable to me. If two errors hit the same check, they cancel and I'm fine. Only when an odd number of errors hits a check does that check's estimate actually go wrong. So I'm being very loose here — I'm counting a check as corrupted whenever any error touches it. Even with that loose counting, the error-correcting capability is clearly ⌊J/2⌋.
So if the weight of e is at most ⌊J/2⌋, I'm guaranteed to recover ĉ_i = c_i. But you can already see why, even with more than ⌊J/2⌋ errors, I might still decode correctly in several instances. That's the nice thing about these kinds of simple decoders — they don't rely on heavy algebraic machinery, matrices, ranks and all that. Even if you have more than ⌊J/2⌋ errors, there can be several instances where this decoder works: for example, if an even number of errors falls within the positions of a check, the error has no effect on that estimate at all. So in fact you can correct beyond ⌊J/2⌋ in several instances, though not for all error patterns. But all error vectors of weight at most ⌊J/2⌋ are definitely corrected. Yes? [Student: do you know beforehand what J is and what the checks are?] Sure — the question is, can I know J beforehand? I can, right? I know the code already. I can go and search over the dual codewords and collect all the orthogonal parity checks. It's a difficult problem to collect them, but once you've collected them, you know your J. So J can be a design parameter; that's not wrong. [Student: what about the complexity of this process?] See, the search for the J parity checks happens only once. After that, you store the orthogonal parity checks. After that, what is the complexity? You're taking J XORs and then finding a majority. Majority is nothing — you just count whether it exceeds J/2. If you compare this with something like Reed-Solomon decoding with Berlekamp-Massey, this complexity is trivial. Don't think of the search for the checks as part of your decoding complexity — you don't have to do it online. You can sit down at home and do it slowly, take a year over it if you like; the decoding itself then happens in microseconds. You don't have to do the search in real time.
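The guarantee, and the "sometimes more" behavior, can be checked exhaustively on a toy example. Here is a sketch (names and setup are mine, not from the lecture): for the all-zero codeword, it confirms that every error of weight up to ⌊J/2⌋ decodes bit i correctly, and counts how many patterns of weight ⌊J/2⌋ + 1 happen to decode correctly anyway.

```python
from itertools import combinations

def verify_capability(n, checks, i):
    """Exhaustively check the floor(J/2) guarantee for the all-zero word.

    Returns (guaranteed_ok, heavier_ok): whether every error pattern of
    weight <= floor(J/2) decodes bit i correctly, and how many patterns of
    weight floor(J/2) + 1 still decode correctly.
    """
    J = len(checks)
    t = J // 2

    def decode(e):
        # majority of the J estimates computed from r = 0 + e = e
        ests = [sum(e[j] for j in range(n) if v[j] and j != i) % 2
                for v in checks]
        return 1 if sum(ests) > J // 2 else 0

    guaranteed = all(
        decode([1 if j in pos else 0 for j in range(n)]) == 0
        for w in range(t + 1)
        for pos in combinations(range(n), w))
    heavier = sum(
        decode([1 if j in pos else 0 for j in range(n)]) == 0
        for pos in combinations(range(n), t + 1))
    return guaranteed, heavier
```

With the length-4 repetition code and checks 1100, 1010, 1001 on bit 0 (J = 3, t = 1), every weight-1 error is corrected, and 3 of the 6 weight-2 patterns also happen to decode correctly — exactly the "beyond ⌊J/2⌋ in several instances" behavior.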
Right — you don't have to do the search in real time. Yes? [Student: here J is fixed for one bit; do you have to search for orthogonal checks for every i?] So far I've told you how to decode only one bit — I'm decoding bit by bit. So the question is: do you have to keep searching for the orthogonal checks for each i every time? That becomes painful. And the related question: each bit has a different set of orthogonal checks, so storing all of them is painful too. That's where the cyclic condition comes and helps you. If C is a cyclic code, what is nice? The dual code is also cyclic. So I find orthogonal checks on one bit, and I'm done: I just keep cyclically shifting them, and I get orthogonal checks on the other bits. Using those ideas, the complexity can be handled nicely; it's not that hard. [Student: are these decoders heavily implemented today?] Like I said, in exactly this form they are not very common today, but they're not hard to implement — you can definitely implement them. If you are constrained by space, or power is a big concern, majority logic options are worth thinking about. Okay, so this claim is clear, right? I don't have to really prove it. And think about what happens if there are more than ⌊J/2⌋ errors — there are real cases where correction is possible even slightly beyond ⌊J/2⌋. So now, how do you find orthogonal parity checks for all the bits? What's the first thing? If the code is cyclic, simply shift the checks: you find orthogonal checks for one bit and keep shifting them, and it works out. There are ways to implement this so that it works out quite well. If it's not cyclic, then it's a bit of a problem.
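The shift trick can be sketched in one line of Python. This is an illustrative helper of my own naming; the point it demonstrates is that, when the dual code is cyclic (closed under cyclic shifts), shifting a set of checks orthogonal on bit i by s positions yields checks orthogonal on bit (i + s) mod n.

```python
def shifted_checks(checks, s):
    """Cyclically shift every check right by s positions.

    If `checks` lie in a cyclic dual code and are orthogonal on bit i,
    the shifted vectors lie in the same dual code and are orthogonal on
    bit (i + s) mod n.
    """
    n = len(checks[0])
    return [[v[(j - s) % n] for j in range(n)] for v in checks]
```

For example, shifting the checks 1100, 1010, 1001 (orthogonal on bit 0) by one position gives 0110, 0101, 1100, whose pairwise componentwise products are all e_1 — orthogonal on bit 1.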
Okay? And when will you say that majority logic decoding achieves the full error-correcting capability of the code? If you can find J = d_min − 1 orthogonal checks, then ⌊J/2⌋ = ⌊(d_min − 1)/2⌋, and you can be sure it achieves the full error-correcting capability of the code. If J is smaller than that, you cannot be sure. And if the code is not cyclic, you have to find J such checks for every single bit — orthogonal on bit 1, orthogonal on bit 2, and so on — before you can claim that ⌊J/2⌋ errors can be corrected for the whole block. But if it's cyclic, it's enough to find them for one bit, and everything is fine. Okay, so that's majority logic decoding. There is some analysis you can do. For instance, assume for simplicity in the analysis that the weight of all the checks is uniform — say every check has weight 5 or something like that. Then, when each bit flips independently with probability p, you ask: what is the probability that an odd number of errors occurs within a check's positions, making that estimate wrong, and then that this happens for more than ⌊J/2⌋ of the checks? It's basically a binomial-counting problem. You can come up with expressions in terms of binomial probabilities and even compute them. It's not very hard, but there is some calculation involved to compute the fraction of errors beyond ⌊J/2⌋ that get corrected. Up to ⌊J/2⌋, you can always correct. All right, any questions on this? Majority logic — it seems simple enough.
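That binomial calculation can be written out explicitly. This is a sketch under the stated simplifying assumptions (my own function names): every check reads w disjoint positions other than bit i, each bit flips independently with probability p, so the J estimates are independent, and bit i is decided wrongly when more than J/2 estimates are wrong.

```python
from math import comb

def check_error_prob(w, p):
    """Probability that one estimate is wrong: an odd number of the w bits
    the check reads are flipped (crossover probability p)."""
    # classic identity: P(odd number of flips) = (1 - (1 - 2p)^w) / 2
    return (1 - (1 - 2 * p) ** w) / 2

def bit_error_prob(J, w, p):
    """Probability that the majority vote on bit i is wrong, assuming J
    independent checks of w disjoint positions each (J odd)."""
    q = check_error_prob(w, p)
    # more than J/2 of the J independent estimates must be wrong
    return sum(comb(J, k) * q ** k * (1 - q) ** (J - k)
               for k in range(J // 2 + 1, J + 1))
```

For instance, with J = 3 checks of one position each and p = 0.1, each estimate is wrong with probability 0.1 and the majority fails with probability 3(0.1)²(0.9) + (0.1)³ = 0.028.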
So, we're running out of time today. Let me give a rough plan for the rest of the week. We're going to do an example for Reed-Muller codes: we'll take the simple (8, 4) Reed-Muller code and find its orthogonal parity checks. We'll start with that and see how it looks. Then I'll give you some general results on how, in general, to find orthogonal parity checks for every bit of a Reed-Muller code — it's an interesting topic to look at. And for that, we will need this view of the m-bit index vectors as forming a geometry, and we'll want to find lines in it. What's so nice about lines, and what's the connection to orthogonal parity checks? If you think about it for a while: any two distinct lines intersect in at most one point, right? So if the lines are the checks, and all the lines pass through a common point, they cannot intersect anywhere else — and that is exactly the condition for being orthogonal on one bit. So we'll use that to find the orthogonal checks next time. We'll stop here for today — a short class. We'll continue from here tomorrow.