Okay, so this is lecture 27. The last thing we were looking at was the update rule at the check node: how do you compute it quickly, how do you write it down? We tried to simplify it, and finally we had an expression involving this function. I want to talk a little about that function. What was it? Log tanh of mod x by 2. I think people had some trouble with signs when trying to invert it, so I want to talk about that first. Another thing to point out: we only worry about the magnitude here; the sign of f(x) is not that relevant. So you might even think of the modulus as being outside. Usually you don't write it that way, and I'll tell you why it isn't really necessary. So how does tanh(x/2) look if you plot it versus x? What is the value at 0? It's 0, not 1; as x goes to infinity it approaches 1. And what is its slope at 0? Something non-zero. So it starts at 0 and rises towards 1. If you plot tanh(x/2) for negative x as well, it goes down to minus 1; if you plot tanh(|x|/2), it only goes up. Either way, what's important is that it lies between plus 1 and minus 1. To be able to take logarithms I have to take the modulus, but what happens when I take the logarithm of tanh(|x|/2)? Will I get a positive value or a negative value? It will always be negative, because tanh(|x|/2) is less than 1. So log tanh(|x|/2) is always negative. So you might as well take absolute values and add; that's what we want.
It's the same as just adding the negative values and then ignoring the sign at the end, keeping only the magnitude. So if you had to plot this function f(x) carefully, x versus f(x), for the positive part it looks like a curve that is symmetric about the x = y line. That's what I claim by saying it's its own inverse: if you swap the x and y axes, the function doesn't change, it looks exactly the same, and you can show that. The value is always negative, so strictly you might have to write minus f(x) here and there, but it doesn't matter: if you worry only about the magnitude, you might as well flip it up and put it back. So there will be some minus-sign issues, but for simplicity I'll simply write f as being its own inverse, with f(x) = log tanh(|x|/2). The sign problem is understood, and you have to take care of it when you implement; hope that's clear. So when we deal with this function, I'll write f(x) = log tanh(|x|/2) and say f(f(x)) = x. Up to sign these things are true; we'll take only magnitudes, and since all of it is negative, this works out properly. That's the one comment I wanted to make about the derivations we did last class. So let me rewrite what we had, as a recap. Suppose a bit x is the modulo-2 sum of bits b1 through bd, let p_{i,0} be the probability that bit b_i is 0, and let y_i be the log likelihood ratio, log(p_{i,0} / p_{i,1}).
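To make the sign convention concrete, here is a minimal Python sketch of the function described above (the name `f` and the test value are mine, just for illustration). It checks numerically that f is its own inverse up to sign:

```python
import math

def f(x):
    # f(x) = log tanh(|x|/2), as in the lecture: the modulus is absorbed
    # inside, and the result is always negative since tanh(|x|/2) < 1.
    return math.log(math.tanh(abs(x) / 2.0))

# Up to sign, f is its own inverse: |f(f(x))| recovers |x|.
x = 1.7
assert f(x) < 0
assert abs(abs(f(f(x))) - abs(x)) < 1e-9
```

Because f is always negative, working with magnitudes and restoring the sign at the end is consistent, exactly as argued above.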
Okay, then we were able to write y_x, the LLR of x, in two parts. The magnitude of y_x is f(Σ_{i=1}^{d} f(y_i)). And what about the sign? The product of the signs of the y_i. Is that clear? There's one more minor point here; I don't want to belabor it, but it's interesting to observe. You typically think of sign(x) as +1 if x ≥ 0 and -1 if x < 0, and with that definition the product rule holds. Now suppose I do something else: I define sign-bar(x) to be 0 if x ≥ 0 and 1 if x < 0, another function which is 0 when positive and 1 when negative. Then the same product rule can be written in a different way. In fact, notice that sign(x) = (-1) to the power sign-bar(x). If you make that replacement, you see (-1)^{sign-bar(y_x)} equals the product of (-1)^{sign-bar(y_i)}, and when you multiply it out, the exponents add; and the addition can be done modulo 2, because (-1) squared is 1, so the whole equation stays consistent. So sign-bar(y_x) = sign-bar(y_1) + ... + sign-bar(y_d), modulo 2. The reason you do this is that in your implementation or your program, you can deal with the sign separately and the magnitude separately, which is an advantage in some cases: the sign goes one way, the magnitude goes another way, and then you multiply sign and magnitude to get the final actual value.
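The sign / sign-bar correspondence can be sketched like this in Python (the function names are mine, not from the lecture): the product of ±1 signs equals (-1) raised to the XOR, i.e. the mod-2 sum, of the 0/1 sign bits:

```python
def sign(x):
    # sign(x) in {+1, -1}; treat 0 as positive, as the lecture suggests.
    return 1 if x >= 0 else -1

def sign_bar(x):
    # sign_bar(x) in {0, 1}: 0 for positive, 1 for negative,
    # so sign(x) == (-1) ** sign_bar(x).
    return 0 if x >= 0 else 1

ys = [0.8, -1.3, 2.1, -0.4]

# Product of the +/-1 signs...
prod = 1
for y in ys:
    prod *= sign(y)

# ...equals (-1) to the XOR (mod-2 sum) of the sign bits.
parity = 0
for y in ys:
    parity ^= sign_bar(y)

assert prod == (-1) ** parity
```

This is why an implementation can route a single parity bit through the check node instead of multiplying ±1 values.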
Okay, so those are things to keep in mind when you want to implement; you can deal with it in different ways. A bit is always easier: storing minus 1 and plus 1 can be a little clumsy in an implementation, but 0 and 1 is always easy to handle. Okay, those were a few comments. So let me go back to the soft message passing decoder now. When we last left it, what were we looking at? Step B of iteration 1. So let me write down what that is. I was looking at a check node, a check node of degree E, and I said the messages it would have received in step A are y_{i1}, y_{i2}, ..., y_{iE}: the log likelihood ratios of the corresponding bits connected on the other side of those edges. That's what it would have got. Now some processing needs to be done. What is the processing? Concentrate on the first edge; we want to figure out the message that needs to go there. Do you remember the notation we used for Gallager A? v is the message from check node to bit node, and I indicated the iteration number there too. So I'll write v_1^{(1)}, with the subscript for the edge number: the first edge, in the first iteration, connected to this degree-E check node. So what do I have to send? We already did this computation: it has to be the log likelihood ratio of bit i1, calculated using the other log likelihood ratios, like before, using the relationship that bit i1 is the XOR of all the remaining bits. So how will I do that?
I will do it in two parts: I will compute the magnitude of v_1 and the sign of v_1. How do I compute the magnitude? I use this f function: the magnitude of v_1^{(1)} is f(f(y_{i2}) + f(y_{i3}) + ... + f(y_{iE})). Notice I leave out y_{i1} itself, and I do not have to write a modulus; it is already absorbed inside the function. I could have written a sigma, I just wanted to write the whole thing out. But remember, this is my magnitude; with our convention of keeping only magnitudes, f gives me positive values. How do I do the sign? I can do it in both ways, sign(v_1) or sign-bar(v_1); both carry the same information, and you can convert between them if you want. With sign: sign(v_1^{(1)}) is the product sign(y_{i2}) · sign(y_{i3}) · ... · sign(y_{iE}). So that is what I do for the first edge. How do I do v_2^{(1)}, the message on the second edge? I skip y_{i2} in all the sums but include y_{i1}: the magnitude is f(f(y_{i1}) + f(y_{i3}) + ... + f(y_{iE})), and likewise the sign product runs over y_{i1} through y_{iE} but without y_{i2}. Continuing the same way, the magnitude of v_E^{(1)} uses the sum from y_{i1} to y_{i(E-1)}.
Okay, so those are my update rules in step B of the first iteration for a degree-E check node: for all E messages it has to send back to the bits connected to it, you do this. If you had to actually implement this brute force, computing each magnitude separately, it would involve a lot of additions. How do you simplify the computation? Add all of them once and subtract one at a time to get each magnitude; since all the terms are positive, it works out. Same for the sign: take the product of all the signs once, and then multiply back by each individual sign, one after the other, to get the individual results. Those are simplifications you can make in an actual implementation. So it does not look too scary now; for any E one can do this without too much confusion. One remark on notation: I am taking an arbitrary check node and saying it is connected to bits i1, i2, i3, and so on till iE. If I had just put y_1 through y_E, that would clash with the way I wrote down the channel LLRs, where y_i is the LLR of bit i; the first bit on this check is not bit 1 in general, it is some bit i1. Initially I did put y_1 through y_E, people objected, and I changed it to y_{i1} through y_{iE}; that tells you something about consensus. Okay, so let us move to iteration L now, step A. What is going to happen at a degree-D bit node which received y_i from the channel? Remember, the channel is always there; it got y_i as the channel likelihood ratio. So in step B of iteration L-1, I need to see what the same bit node would have received.
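The add-all-then-subtract trick at the check node might be sketched as follows in Python. `check_node_update` is a name I've chosen, and the sketch assumes the f and sign-bit conventions defined above (inputs must be nonzero LLRs, since f is undefined at 0):

```python
import math

def f(x):
    # f(x) = log tanh(|x|/2), always negative; x must be nonzero.
    return math.log(math.tanh(abs(x) / 2.0))

def check_node_update(y):
    """Messages a degree-E check node sends back on each edge, using the
    add-all-then-subtract trick: `y` holds the incoming LLRs
    y_{i1}, ..., y_{iE}; entry e of the result is v_e."""
    total = sum(f(yi) for yi in y)       # f-sum over ALL edges, once
    parity = 0                           # XOR of all the sign bits, once
    for yi in y:
        parity ^= (1 if yi < 0 else 0)
    out = []
    for yi in y:
        mag = abs(f(total - f(yi)))      # subtract out this edge's term
        own = 1 if yi < 0 else 0
        s = -1 if (parity ^ own) else 1  # strip out this edge's sign
        out.append(s * mag)
    return out
```

Each outgoing message should agree with the direct product form 2·atanh(∏_{j≠e} tanh(y_j/2)), which gives a handy way to sanity-check an implementation.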
It would have received D different messages, and in my notation there will be a slight ambiguity here: I am going to simply call them v_1^{(L-1)} through v_D^{(L-1)}. Hopefully you won't confuse this with the earlier suffix; I am calling them this just for ease of notation. It is possible to be fully consistent: you could number all the edges of the graph and use the edge number as the suffix. I am just not doing that; I am simply calling them v_1 through v_D for convenience. Those are its incoming messages. So now I have to figure out how this bit node processes them. Once again, look at the first edge, and we will figure out the others as we go along. I will call the outgoing message u_1^{(L)}, the message from bit node to check node. So what are all these v^{(L-1)}s? Are they bits? No, they are all LLRs. LLRs of what? Estimates of bit i, the bit at this node. And what is y_i? Also an LLR, again an estimate of the same bit. Now I am going to make my iid assumption: all these estimates are independent; they have come from different parts of the codeword and do not reuse any received value. This goes with the tree-like neighborhood assumption: if your neighborhood is tree-like up to depth 2L, then all these LLRs were computed from different received values of your codeword; there is no repetition in my neighborhood, so all of them are independent. So when I get probabilities from independent events, what can I do to get a consolidated probability? Multiply all of them.
Okay, so I can multiply the probabilities, and what does that become in terms of LLRs? After multiplying, taking the log turns the product into a sum. So the message u_1^{(L)} should be an updated LLR that uses y_i and v_2^{(L-1)} through v_D^{(L-1)}, and the simple, natural thing to do under the iid assumption is u_1^{(L)} = y_i + v_2^{(L-1)} + ... + v_D^{(L-1)}: I simply add all the remaining messages. The reason I can multiply those probabilities is that they all come from independent constraints and from different received values; they do not overlap. Maybe I will give a different interpretation based on the neighborhood later and be a little more careful about this independence, but hopefully the slightly hand-waving argument is clear enough. Was that a question? Okay, so the question is about how this compares to the check node. Notice what I am doing here is very different from what I did at the check node. Here, all of these LLRs are estimates of the same bit. The check node does not receive LLRs all for the same bit; it receives LLRs of different bits and has to update one of them according to the parity condition. Here, all the LLRs are for the same bit. The way to think about it, so you get a picture in your head, is the neighborhood tree. v_2^{(L-1)} is coming from one part of the tree, which means it incorporates all the constraints of the check nodes in that part of the tree and all the received values from the bit nodes in that part. v_3^{(L-1)} comes from another part of the tree, and it incorporates all the constraints and all the received values in that part.
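The step from "multiply independent probabilities" to "add LLRs" can be checked with a small Python sketch (the helper names and probability values are mine, for illustration only):

```python
import math

def llr_from_p0(p0):
    # LLR = log( P(bit = 0) / P(bit = 1) ).
    return math.log(p0 / (1.0 - p0))

def p0_from_llr(llr):
    # Inverse map: P(bit = 0) implied by an LLR.
    return 1.0 / (1.0 + math.exp(-llr))

# Three independent estimates that the same bit is 0.
p = [0.9, 0.7, 0.6]

# Combining by multiplying likelihoods under each hypothesis,
# then normalizing over the two hypotheses...
num, den = 1.0, 1.0
for pi in p:
    num *= pi            # likelihood of the evidence if the bit is 0
    den *= (1.0 - pi)    # likelihood of the evidence if the bit is 1
combined_p0 = num / (num + den)

# ...is exactly the same as simply adding the LLRs.
llr_sum = sum(llr_from_p0(pi) for pi in p)
assert abs(p0_from_llr(llr_sum) - combined_p0) < 1e-12
```

So the additive bit node rule is just probability multiplication written in the log domain.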
As long as these parts don't overlap, I can simply multiply those probabilities, and the overall value I get after multiplication includes the entire tree: all the constraints have been included and all the received values have been included. Remember what I am trying to compute: the APP LLR, the probability that bit i is 0 given the entire received vector and given that all the constraints of my parity checks are satisfied. The way this goes is depth by depth on my neighborhood tree: I consolidate at each level, including more and more received values and more and more constraints. So that's one way to think about it, and I think that's clear enough; but you can be convinced at several levels, and you should be convinced at this level too. Why do I multiply? The way to think about it is that each of these quantities is a ratio of probabilities. y_i is the LLR of the event C_i = 0 given R_i alone. What is v_2^{(L-1)} the LLR of? That's the question to ask: C_i = 0 given what? Given all the checks and bit node values in the part of the tree it came from; call that conditioning event M_2. Likewise for each of the others: v_j^{(L-1)} is the LLR of C_i = 0 given something else, its own part of the tree, call it M_j, all the way up to M_D. So what do I want u_1 to be?
I want it to be the LLR of the probability that C_i = 0 given R_i and M_2, and so on till M_D: all those conditions have to be included in the computation. That's what I want to do. And since all these conditioning events are independent, when I write it down in terms of probabilities of events, I can multiply those probabilities. You must be used to independence as P(AB) = P(A)·P(B); this is something very similar: I am conditioning on independent pieces of evidence, evaluating probabilities independently, and to consolidate everything I can multiply. When you do that, it works out this way. Think about it; maybe I'll write down an actual tree example later, maybe sometime next week, where these things work out very nicely. Okay, so the next question was: what do I really want, finally? I want P(C_i = 0) given the entire received vector R. So he is asking: can I stop when my neighborhood includes all the bits? The point is that one needs to be a little careful. Remember, you are assuming that your neighborhood has no repetition, and all your computations are based on that. These probabilities are accurate only when there is no repetition; if there is repetition, the calculations you did are not accurate. See what you did at the check node: you made the independence assumption to be able to multiply all of those things. If those assumptions are not true, then your computation is wrong; the probability that you are sending is not the actual probability. That is why this is not the MAP decoder, but some message passing decoder which is an approximation of the MAP decoder.
So when I write this down, you should be careful: I am not saying these are the accurate probabilities. They are accurate if my tree-like neighborhood assumption holds; the moment that fails, they are not accurate. But what I want to be able to do in real life is run the message passing decoder even when the assumption fails. You remember Gallager A: I showed you simulations, and even when those independence assumptions fail, what happens to the message passing decoder? It succeeds. I want to do a similar thing in soft decision decoding: even if those assumptions are not true, I want to keep the decoder ready, hoping it will do something good for me, and that actually happens in soft decoders. So that's why you make the iid assumption, come up with rules that are valid under the iid assumption, and hope that even when it is violated, things won't go too far wrong. Any other question on this? Okay, so we can only approximate this; and I'll put "approximate" in brackets too, because nobody has really shown precisely in what sense it approximates. All right, so after all that explanation, hopefully the rule is a little clearer. Notice I am always including y_i. I have to include y_i always, because the way my assumption went, y_i was not included in any of those incoming messages; that's how I did my calculations. So let's do u_2 now. It's very easy: u_2^{(L)} = y_i + v_1^{(L-1)} + v_3^{(L-1)} + ... + v_D^{(L-1)}, and so on till u_D^{(L)} = y_i + v_1^{(L-1)} + v_2^{(L-1)} + ... + v_{D-1}^{(L-1)}. And maybe I'll reproduce u_1^{(L)} here just for completeness: y_i + v_2^{(L-1)} + v_3^{(L-1)} + ... + v_D^{(L-1)}. So the update at the bit nodes is decidedly simpler than the update at the check node, right?
You don't do any non-linear lookup or anything; it's just simple addition. And how would you actually implement this addition efficiently? Add all of them once and subtract one at a time; that is always more efficient than doing each sum individually. So that's the update rule at the bit node. So I can write down step B for iteration L now, and it's not very different from what I wrote before. Instead of y_{i1} through y_{iE}, what will I have? u_1^{(L)} through u_E^{(L)}, that's all. Just replace y_{i1}, y_{i2}, ..., y_{iE} with u_1^{(L)}, u_2^{(L)}, ... and repeat the same steps. So now you can keep on doing iterations, as many as you want, and you will keep getting better and better values. Okay, so there's also the question of decisions: what about output LLRs after a particular iteration? I want to be able to make a decision after every iteration. How do you do that? Let me write down the decision after iteration L. After step B of iteration L, what will a particular bit node get? Suppose it's a degree-D bit node: it gets v_1^{(L)} through v_D^{(L)}. Now, when I wanted to pass a message back to a check node, I had to leave out one piece of information; but when I want to make an overall decision, I don't have to leave out anything, I simply take everything together. So the output LLR is y_i + v_1^{(L)} + v_2^{(L)} + ... + v_D^{(L)}. And what is ĉ_i after L iterations? 0 if this is positive, 1 if it is negative. And how do you decide whether you can stop after iteration L? You take your ĉ and compute H ĉ^T; if it works out to 0, what can you say? You can stop, saying you've reached a codeword.
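The bit node side, the decision rule, and the optional syndrome stop can be sketched in a few lines of Python (the function names and the tiny parity check structure are mine, for illustration):

```python
def bit_node_updates(y_i, v):
    """Outgoing messages u_1..u_D from a degree-D bit node with channel
    LLR y_i and incoming check messages v (from iteration L-1), using the
    add-all-then-subtract trick."""
    total = y_i + sum(v)
    return [total - v_e for v_e in v]   # leave out one edge at a time

def decide(y_i, v):
    # Decision after step B: include EVERYTHING, leave nothing out.
    out_llr = y_i + sum(v)
    return 0 if out_llr > 0 else 1

def syndrome_zero(H_rows, c_hat):
    # Optional stopping condition: H * c_hat^T == 0 (mod 2).
    # H_rows lists, for each check, the indices of the bits it touches.
    return all(sum(c_hat[j] for j in row) % 2 == 0 for row in H_rows)
```

For example, `bit_node_updates(0.5, [1.0, -0.3, 0.2])` sends `0.5 - 0.3 + 0.2` back on the first edge, matching the leave-one-out rule computed directly.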
Okay, so ĉ_i after iteration L is 0 if the output LLR is greater than 0, and 1 otherwise, and ĉ^{(L)} = (ĉ_1^{(L)}, ĉ_2^{(L)}, ..., ĉ_n^{(L)}). So you can have a stopping condition, and other things are possible too. Typically in a VLSI implementation there is usually no point in stopping early: you have a clock and you always wait for the whole thing to end. You can say: after 10 iterations I'll stop, and I don't care about checking this in the middle. But you can also have an optional stopping condition; in software this might be useful. If you're writing a C program for simulation, H ĉ^{(L)T} = 0 can be a stopping condition: then you know your output is actually a codeword and you can stop. Those are ways of implementing this. It's also good to visualize the soft decision decoder in different ways. On the Tanner graph, what's happening? You get soft values at the bit nodes from the channel. In step A of the first iteration, soft values go out on the edges; then in step B, soft values come back on the edges; in step A of iteration two, soft values go out again. In step A of the first iteration, all the soft values going out of a bit node are the same, but from iteration two onwards that will no longer be true: the values can be different on each edge. That's the way to visualize it on the Tanner graph. In an actual implementation in C or something, sparse matrices are easy to represent, so you can think of the edges as positions in my sparse parity check matrix, just like the animation I showed you with Gallager A: each 1 of the parity check matrix can hold the message flowing on that particular edge, the 0s don't carry anything (they'd be black in the animation), and the messages can be positive or negative.
You can visualize it on the matrix as well: it goes back and forth, back and forth, and you update the matrix. In fact, a lot of people think of the bit node update as column processing and the check node update as row processing, because in implementation you usually think in terms of the matrix. The graph is a useful tool for visualization, but in an implementation, the matrix is perhaps what's easier to think of, so people say row update, column update, row update, column update, things like that. There are many other modifications when it actually comes to implementation. I could have done an animation for this also, but I don't know if there's much point in it. Yes, question. Okay, so the question is: what happens to the messages as the iterations go along? Can we say they converge, et cetera? Things like that are very, very difficult to answer; it's an immensely complicated system that depends on too many parameters. So the only thing we'll do is try an analysis like we did before. Remember what we did for Gallager A: we first assumed the all-zero codeword, and then we worried about the probability that a particular message is 1 at iteration l. We'll do a similar thing here. But now the messages are soft values, real numbers, so exactly what quantity we have to look at is something we have to think about. If you want to look at the whole system in terms of convergence of the set of messages, I don't know if you'll get very meaningful results; it's tough. If you actually do the simulations and look at what happens, all kinds of crazy stuff happens. It's tough to analyze these kinds of complex systems. Any other question?
So in an actual implementation, in C or something, you can use floating point representation for the messages; actual VLSI implementations usually use fixed point. And it's okay: it all works out quite well. All right, so let's start with the analysis. Density evolution for soft message passing decoding is the tricky thing, and we'll do that. There was a question about how you would show this in an animation: yes, you can do it with intensity of color. See, another thing to keep in mind about these LLRs: suppose I say the LLR for a particular bit is plus one. What does that mean? How confident am I that the bit is zero and not one? I have a certain degree of confidence. But if I say the LLR is plus 100, what does that mean? The bit is zero, essentially for sure; you can do the calculation, e to the power 100 and all, and you don't have to worry about the other probability at all. So the magnitude of the LLR tells me how sure I am about that particular bit. The same thing the other way: if the LLR is minus 100, the bit is one, essentially for sure. So the magnitude of the LLR tells me how strongly I believe that the bit is zero or one, and the sign tells me which it is; that's not a problem. So for the animation, when I want to show something, I should use different colors. Previously I had, I think, green for zero and red for one; now I should have a whole range of colors between green and red, mixed in different proportions. As the LLR increases and is positive, I should go towards green; as the magnitude increases and it's negative,
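The "LLR of 100 means essentially certain" remark is easy to verify numerically; here is a small Python sketch (the helper name is mine):

```python
import math

def p_zero(llr):
    # P(bit = 0) implied by an LLR = log(p0/p1).
    return 1.0 / (1.0 + math.exp(-llr))

# Moderate confidence versus near-certainty:
# llr = +1   -> P(bit = 0) is about 0.73: leaning towards 0.
# llr = +100 -> P(bit = 0) is 1 up to ~1e-44: the bit is 0, for sure.
# llr = -100 -> P(bit = 0) is ~1e-44: the bit is 1, for sure.
confidences = {llr: p_zero(llr) for llr in (1.0, 100.0, -100.0)}
```

So the sign of the LLR carries the hard decision and the magnitude carries the confidence, which is exactly what the color intensity in the animation would encode.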
I should go towards red, with a whole bunch of other colors in the middle. That's the advantage of soft decoding; in fact, that's the whole point of it: you're not just worried about whether the bit is zero or one, you're worried about how confident you are that the bit is zero or one. All right. So now density evolution becomes more difficult. Previously I only had to track the messages flowing on the edges, and for Gallager A they were very simple random variables taking just two values, zero or one. So I was actually tracking the PMF of that random variable, and the PMF of a binary random variable is just one number: once I know one value, I know the other. Now, I will again try to track the random variable representing the message flowing on an edge; that's what I'll try to track once again. But now that random variable is a continuous random variable, so I have to track its PDF. That's what density evolution will try to do in this case: track the PDF of the messages. So it's good to start with the regular graph and think this through properly.
So on a regular graph, what happens? The first question to ask is the same question I asked last time. It would be possible for me to just hand-wave through this, but before that, I want to point out why the things we're doing are meaningful. In the regular case, every neighborhood is identical, and the message after l iterations is a function of that neighborhood. All the neighborhoods are identical, and if they are all tree-like, the random variable for the message will be identically distributed across all edges. So it's enough to define one random variable for the message flowing on an edge. That's the first question you should ask; maybe you should have asked it for Gallager A itself, but maybe people don't worry about it. If you have a graph, there are messages flowing on different edges, and in real life, for a practical code with finite n, the different messages will probably have different distributions. But we don't worry about that: we let n tend to infinity, and since the graph is sparse, for a fixed l all my neighborhoods are exactly the same, so the PDF of the message coming out at the top is exactly the same; it depends on the exact same things in the exact same way. So if I denote a random variable for a message, I can use the same random variable for every edge; I don't have to keep different random variables to track all of them. Another thing that helps me here is the all-zero assumption.
If I cannot make the all-zero assumption, all these things become problematic. So we'll once again make the all-zero assumption. Again, this is justified by the symmetry of the channel: whether the input is plus one or minus one, the channel doesn't care; it adds the same Gaussian random variable with the same mean and the same variance. Okay. Another assumption we'll make is that neighborhoods are tree-like. Since I have a regular graph and the neighborhoods are tree-like, all my neighborhoods are exactly the same, and all the messages follow the same distribution, so I don't need a different distribution for different messages. If we go to the irregular case, what do I do? How do I take care of the neighborhoods being different? You average over all possible neighborhoods, and then there is a concentration result that says any one instance is very close to the average, so I don't have to worry about it. Okay. That's the way to handle the irregular case; for the regular case, you make all these assumptions and proceed. So that's one thing. You're tracking the PDF of the message, so things will get more complicated. You have to worry about what happens when you add random variables: when you add independent random variables, their PDFs get convolved. Right. And when you apply some nonlinear function, like the f function we had, to a random variable, what happens to the PDF? How will you find it? Our function f is a nice monotonic function, log tanh, so you can use the standard Jacobian formula to convert from one PDF to another. All those tools we already have, so one can imagine tracking the PDF through the check node update and the bit node update. Okay. Previously, it was very easy, just simple discrete probability.
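The bit-node side of this, where adding independent LLRs means convolving their PDFs, can be checked numerically. A small sketch, where the two Gaussian densities and the grid are purely illustrative:

```python
import numpy as np

# Symmetric grid with an odd number of points, so that the 'same'
# mode of np.convolve stays aligned with the original grid.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def gaussian(x, m, s):
    return np.exp(-((x - m) ** 2) / (2 * s * s)) / (s * np.sqrt(2 * np.pi))

p1 = gaussian(x, 2.0, 1.0)   # PDF of Y1 ~ N(2, 1)
p2 = gaussian(x, 3.0, 4.0**0.5)   # PDF of Y2 ~ N(3, 4)

# PDF of Y1 + Y2: convolution of the two PDFs (scaled by dx)
p_sum = np.convolve(p1, p2, mode="same") * dx

mean = np.sum(x * p_sum) * dx                 # ~ 2 + 3 = 5
var = np.sum((x - mean) ** 2 * p_sum) * dx    # ~ 1 + 4 = 5
```

The same grid representation then lets you push the density through the check-node update as well, via the change-of-variables formula for the monotone f.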
Now you have continuous random variables, but the random variables are only being added, and some nonlinear function is being applied to them. You know how to handle all those operations on random variables in terms of PDFs, so we should be able to track the messages. That's one thing. The other thing is: what can I say about probability of error from the message and its PDF? Suppose I give you the PDF of a message. How will I compute the probability of error, the probability that the message is in error? What does it mean for a message to be in error? I've already made the all-zero assumption, so I know my codeword was all zero. So when is the message in error? When it is negative. Okay. Messages are LLRs, so any message is in error if it is negative. Once I have a PDF for the message, I can quickly find the probability that the message is less than zero. What do you do for that? Integrate the PDF from minus infinity to zero, and you get the answer. So that's another thing to keep in mind: a message is in error if it is negative. Okay. So what should the PDF of the message look like if my probability of error needs to be zero? Its support will be purely positive; there will be nothing on the left side of zero. So that's the thing that we will be tracking. Previously, for Gallager A, we had this p_l, and we wanted p_l to tend to zero as l tends to infinity. What will we want for the PDF as l tends to infinity? We will want it to move to the right. Okay. Initially, it will not be entirely on the right.
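As a sketch of reading the error probability off a message PDF, here is a numerical check of "integrate from minus infinity to zero" against the Gaussian closed form; the mean and standard deviation below are illustrative values, not from the lecture:

```python
import math
import numpy as np

m, s = 3.0, 2.0                      # illustrative message density N(m, s^2)
x = np.linspace(-40.0, 40.0, 80001)
dx = x[1] - x[0]
pdf = np.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

# Probability of error = mass of the message PDF on the negative axis
pe_numeric = np.sum(pdf[x < 0]) * dx

# Closed form for a Gaussian message: P(X < 0) = Q(m/s)
pe_exact = 0.5 * math.erfc(m / (s * math.sqrt(2)))
```

For a general (non-Gaussian) message density on the grid, only the `pe_numeric` line is needed; the closed form is just the sanity check here.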
You know how to find the PDF of the LLR for BPSK over AWGN; it will have some part which is negative also. Right. Slowly, as you iterate, you will want that PDF to move to the right. As it keeps moving to the right, eventually the probability of error will become zero. Well, we'll see how it works out. Okay. So I want to stop here; I don't want to start something new and pick it up halfway later. We'll start looking at this next week.
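That initial PDF can be made concrete. For BPSK over AWGN with the all-zero codeword sent (so the symbol is +1) and noise variance σ², the channel LLR is L = 2y/σ², which is Gaussian with mean 2/σ² and variance 4/σ², so part of its mass sits below zero. A quick Monte Carlo sanity check; the value of σ is just an illustrative choice:

```python
import numpy as np

sigma = 0.8
rng = np.random.default_rng(0)

# All-zero codeword -> BPSK symbol +1; channel adds N(0, sigma^2) noise
y = 1.0 + sigma * rng.standard_normal(1_000_000)

# Channel LLR for BPSK over AWGN
L = 2.0 * y / sigma**2

# L should be N(2/sigma^2, 4/sigma^2); its negative mass is the raw
# channel error probability Q(1/sigma) before any iterations
mean_L, var_L = L.mean(), L.var()
pe0 = np.mean(L < 0)
```

Density evolution then asks whether the iterations push this density far enough to the right that the negative mass vanishes.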