So this is lecture 24, am I right? We've been talking about quite a few things, mostly in connection with the Gallager A decoder for LDPC codes, and we saw a whole bunch of properties; hopefully it's all clear to you. The first thing is, it's an iterative decoder. In each iteration there are two steps: a bit-to-check step and a check-to-bit step. Those are the terms people use to describe the different steps; I called them step A and step B, but step A is basically the bit-to-check stage and step B is the check-to-bit stage. So this is a specific example of what is in general called a message passing decoder. Roughly the way to think about it is: you have a graph which represents some constraints, and you decode by passing messages between nodes along the edges of that graph. Any time you do that, you have a message passing decoder. You can imagine, for instance, viewing the very familiar Viterbi algorithm as a message passing decoder, where you are sending something along the edges of a graph. So Gallager A is an instance of message passing decoding, and it's iterative.

Then we saw some analysis. The entire analysis hinged on one recursion, for the probability of an erroneous message from bit to check at iteration l: we were able to write that as a function, parameterized by the row weight and the column weight, whose arguments are the crossover probability p and the same error probability at the previous iteration. A step like this is called density evolution. Why density? Well, probability density; in our case it's only a probability, so there's no real density going on. But when we move to soft decoders, you'll see that instead of iterating with probability values you'll be iterating with PDFs, which are probability densities, and at that point it genuinely becomes density evolution. In general, any analysis procedure of this kind is described as density evolution.

To do density evolution properly, we needed a few assumptions to hold. One was the all-zero codeword assumption. How was this justified? I didn't fully justify it; I said that because of the symmetry of the channel, something like this has to happen. We didn't run through the entire proof, but the proof essentially depends on the symmetry of the channel. The other thing we needed was the IID assumption, which in turn comes from the cycle-free neighborhood assumption. I'll call it the tree-like neighborhood assumption: a graph with no cycles or closed loops is called a tree. Most density evolution that you do will depend on these two assumptions, the all-zero codeword and the tree-like neighborhood. In very recent work people have managed to do density evolution in some other scenarios as well, but that's way more complicated than what we can do in a class, so this is just the standard density evolution.
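To make the recursion concrete, here is a minimal sketch of it in Python. The recursion itself is the standard Gallager A density evolution for a (wc, wr)-regular ensemble on the BSC; the function names and the sample values are my own, purely for illustration.

```python
# Sketch: Gallager A density evolution for a (wc, wr)-regular LDPC
# ensemble on a BSC with crossover probability p. p_l is the probability
# of an erroneous bit-to-check message at iteration l.

def gallager_a_step(p, pl, wc, wr):
    """One step of the recursion: returns p_{l+1} from p and p_l."""
    a = (1.0 - 2.0 * pl) ** (wr - 1)         # (1 - 2 p_l)^(wr - 1)
    q_plus = ((1.0 + a) / 2.0) ** (wc - 1)   # all wc-1 check messages correct
    q_minus = ((1.0 - a) / 2.0) ** (wc - 1)  # all wc-1 check messages wrong
    # wrong channel bit not corrected, or correct channel bit flipped:
    return p * (1.0 - q_plus) + (1.0 - p) * q_minus

def density_evolution(p, wc, wr, iters=50):
    """Iterate the recursion starting from p_0 = p."""
    pl = p
    for _ in range(iters):
        pl = gallager_a_step(p, pl, wc, wr)
    return pl

# For the (3,6)-regular ensemble:
print(density_evolution(0.03, 3, 6))   # below threshold: tends to 0
print(density_evolution(0.05, 3, 6))   # above threshold: nonzero fixed point
```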
The high point of density evolution is the notion of a threshold, which in our case was some p*. If p is less than p*, p_l tends to 0 as l tends to infinity; if p is greater than p*, p_l tends to a nonzero constant value. So the threshold is almost like a single value which determines the performance of your code. Suppose you decide to use a (wc, wr)-regular LDPC code on a binary symmetric channel. How will you decide whether decoding is going to be successful with high probability? Look at the transition probability and compare it with your threshold: if the transition probability is less than the threshold, you're going to succeed with high probability, otherwise you're not. One can make that statement more or less precise; it's not exactly true, as we saw in the plots, but it's more or less true.

Now, about how crucial these two assumptions are. The all-zero codeword assumption will hold anyway; there's no problem with that one. But the tree-like assumption, we saw, was not so crucial in practice. I showed you plots where I took codes which I knew had cycles of length 6, which is quite small, and I still ran 10 iterations, and you could see that the behavior tracked the threshold quite closely. So while this assumption is necessary for the technical validity of density evolution, the prediction might hold in practice even when the tree-like assumption is violated. Maybe somebody very smart will come up with a proof of why that is so, or some other development will happen in the future; as long as it doesn't, we can always fall back on simulation to justify the assumptions. I wanted to illustrate the threshold p* with another plot, but I don't seem to have it with me, so I'll show it to you later.

A question came up about cycle lengths: I said length-6 cycles seem to be okay, so what about length-4 cycles? Typically you'll see that length-4 cycles are a bit of a problem: performance gets slightly worse, and they also give you some other problems with the BER plot. Maybe that's something to think about if you try to extend the proof of density evolution to graphs with cycles: why do length-4 cycles matter more?

So, the question was, how do you find p* once again? It's not too difficult; you basically use the definition itself. What is the defining property of the threshold? p < p* implies p_l tends to 0, and p > p* implies p_l does not tend to 0; it tends to some finite nonzero value. So you just use the definition: start with a very low value of p and you'll see p_l tend to 0; keep increasing p slightly until you reach a point where it does not tend to 0. The place where that transition occurs is your threshold. It's very easy to find the threshold with this method. Is that fine?

The next few things we're going to do will slowly extend all of this. First, another point we observed, which I just showed you: the (3,6)-regular, rate-half code had a p* of about 0.04.
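Here is that sweep written out, again just as a sketch: it reuses gallager_a_step from the snippet above, and the iteration count, tolerance, and the use of bisection rather than a linear sweep are my choices for illustration.

```python
# Sketch: locate the threshold p* as the boundary between "p_l -> 0"
# and "p_l bounded away from 0". Assumes gallager_a_step from above.

def converges_to_zero(p, wc, wr, iters=2000, tol=1e-10):
    """Run the recursion and test whether p_l has effectively reached 0."""
    pl = p
    for _ in range(iters):
        pl = gallager_a_step(p, pl, wc, wr)
    return pl < tol

def find_threshold(wc, wr, lo=0.0, hi=0.5, steps=40):
    """Bisect on p: density evolution succeeds below p*, fails above it."""
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if converges_to_zero(mid, wc, wr):
            lo = mid   # still succeeding, so p* is above mid
        else:
            hi = mid   # failing, so p* is below mid
    return (lo + hi) / 2.0

print(find_threshold(3, 6))   # roughly 0.04 for the (3,6)-regular ensemble
```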
So that's one thing we saw in that example of the (3,6)-regular LDPC code. I'll keep saying "the code has a threshold", but you should realize that strictly speaking that has not much meaning: when I say (3,6)-regular, it's really an ensemble of codes, and if you make a random selection from that ensemble, you will get this threshold. So this ensemble has threshold 0.04. The rate of this code is half, and the capacity crossover, if you remember, is 0.11. So there is a gap; it's not very far from capacity, but quite far. Maybe we would like a code with a threshold better than 0.04. So what else can we do? If you want to keep the rate at half, what other regular codes will have rate half? For what other (wc, wr) do you get rate half? Yes: wr equal to 2 times wc is the only constraint you need. So if you put 4 here, (4,8) will also have rate half. If you keep doing that, you'll see that for (4,8)-regular LDPC codes the threshold under this decoding becomes slightly better, but after that it starts becoming worse. So roughly, this is the best you can do with regular codes: (4,8) is slightly better in some cases, but in general, with regular LDPC codes, you cannot get any closer. If you want to do better than that, you have to study some modifications. Remember the way I defined LDPC codes: any code with a sparse parity-check matrix, and then I said I'm going to look at regular codes. You'll have to relax that restriction and look at other sparse parity-check matrices, possibly to close this gap.

What is your question? Sorry? Yes, "cap" is capacity: I'm saying 1 - h(0.11) equals half. I've done this calculation before; I showed you the plot sometime, and 0.11 is where the capacity of the BSC gets down to half.
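Both of the numbers quoted here can be checked numerically. The sketch below reuses find_threshold from above to compare the rate-half regular ensembles, and finds the capacity crossover by solving 1 - h(p) = 1/2 by bisection; the specific degree pairs tried are my own choice of illustration.

```python
# Sketch: thresholds of rate-half regular ensembles (wr = 2*wc), plus
# the Shannon-limit crossover for rate 1/2 on the BSC.
from math import log2

def h(p):
    """Binary entropy function."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

for wc in (3, 4, 5):
    # (4,8) comes out slightly better than (3,6); larger degrees get worse
    print((wc, 2 * wc), find_threshold(wc, 2 * wc))

# Capacity crossover: the p solving 1 - h(p) = 1/2, found by bisection.
lo, hi = 1e-6, 0.5
for _ in range(50):
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if 1.0 - h(mid) > 0.5 else (lo, mid)
print((lo + hi) / 2.0)   # about 0.11
```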
All right, so this is the scene with regular codes; maybe you need something else from the ensemble. But the definition is still very general, right? Sparse parity-check matrices. There are so many sparse matrices that you need more structure, and there's a very nice way of introducing that structure, based once again on column weights and row weights. So that's one direction in which we'll move: what are called irregular codes. The other direction we need to move towards is soft decoding. This is a very big and real selling point for LDPC codes in practical applications: the fact that you have very efficient soft decoders which are implementable in today's VLSI and DSPs and such devices. That's a major plus point and we'll have to cover it, but I was debating which to do first, because the moment you jump into irregular codes, things become more complicated: there's a lot of notation and lots of things to understand. So here is the decision: I'll do irregular codes first, describe them a little bit and get you used to the idea, then figure out how to do density evolution for irregular codes and all that, and then we'll move to soft decoding. Let's just complete this picture for the BSC and then move to soft decoding; that's better. Both of these are quite important, and we'll begin by looking at irregular codes.

So what am I going to do now? I want to generalize the requirement that every column has a constant weight and every row has a constant weight. But I know that ultimately the row weights and column weights are going to control the performance of my decoder. Why do I know that? Because I'm always going to assume Gallager A decoding: whatever matrix I get, I'll run Gallager A on it. If you go back and look at Gallager A decoding very carefully, it depends on the column weight and the row weight, and if you go back and look at the way we analyzed it, the density evolution is parameterized by the column weight and row weight. So you know the behavior of Gallager A decoding is going to be controlled by the column weights and row weights, and you'll see that the way we define irregular codes will also be by constraining column weights and row weights. Earlier we said all columns should have weight three and all rows weight six, and we got the (3,6)-regular construction. You'll now say: so many columns should have weight two, so many columns weight three, so many columns weight five, and likewise you'll define some constraints for the rows. Then look at all codes which satisfy those constraints; those become irregular codes. It's a very simple definition. The extra degree of freedom is the distribution of the weights of columns and the weights of rows: instead of being one value, it's now a selection of values. How do you choose those values? For that, more work is needed, and we'll slowly come to it; we'll begin by assuming somebody gives me the constraints. Somebody tells me there are so many columns of weight two, so many columns of weight three, etc., so many rows of weight six or seven or whatever, and I'm going to study the set of all LDPC codes which satisfy those constraints. First I'll do that, and then we'll see how to come up with the constraints.

So let's slowly work up to this notion of irregular LDPC codes, starting with notation and specification. Usually when we specify codes we start with the block length, but for LDPC codes we don't do that: we start with just the column weights and row weights, and then go to a block length large enough that they can be realized. So I'm going to start by saying how many columns have weight two, three, and so on. For that I'll use the notation L_i: L_i is the fraction of columns of weight i. What values can i take? You don't want i to be zero, but one can start with i = 1, 2, 3, etc. You'll see later on that typically even i = 1 will not be allowed.
But one can imagine that's how I define my L; later we'll see that allowing i = 1 is not a good idea, but right now we'll just say it's fine. Similarly, I'll have R_j, the fraction of rows of weight j, with j likewise running over 1, 2, and so on. For regular codes, how would this look? L_{wc} would equal 1 with all other L_i equal to 0, and R_{wr} would equal 1 with every other R_j equal to 0. So regular codes are a special case of irregular codes. Maybe "irregular" is a bad name; maybe we should call these general LDPC codes, with regular codes as a subset. That's the way to think about the regularity.

Instead of writing down vectors like this, it's also convenient to collect the fractions into a polynomial: L(x) = sum over i of L_i x^i, and R(x) = sum over j of R_j x^j. It's just a way of thinking about it; really it's just a vector, that's what's most important, but if you think of it as a polynomial it becomes a degree distribution polynomial. These two are distributions from the node perspective: L_i is the fraction of bit nodes of degree i, and R_j is the fraction of check nodes of degree j. In the denominator I have the total number of columns (respectively, the total number of rows), and that is what corresponds to the node perspective. As we move along, we'll also define an edge perspective degree distribution; that's a little more involved, so we'll start with this first and then move on to the other one.

The next thing is the rate. How did it work out for the regular case? There the rate became a very simple function of just wc and wr: 1 - wc/wr. In this case too, you'll see that you can write down the rate as a function of the L_i and R_j. I'm going to give you a few minutes to try to do that. What was useful in the previous case? Counting the number of ones was useful. So people are telling me that the rate, the designed rate (remember this is only the designed rate: the actual rate can be larger, because some of your rows can be linearly dependent, in which case the actual rate will be larger) works out to 1 minus the quantity (sum over i of i L_i) divided by (sum over j of j R_j). Is that fine? Simple enough, everybody agrees, good. For the regular case it reduces to 1 - wc/wr, as before.

Now you'll see there is some advantage to the polynomial notation for the rate. If I write L(x) = sum L_i x^i and R(x) = sum R_j x^j, you can simplify this nicely: the design rate is 1 - L'(1)/R'(1), where the prime denotes the derivative. It's more succinct: instead of writing fancy sigmas, you just write L'(1) and R'(1). Maybe you think it's not that crucial, but it's there anyway and we can use it. That's the design rate; the actual rate, as I said, can be larger than this.
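As a small illustration, here is that formula in code, with the distributions stored as dictionaries mapping a degree to its fraction; the representation is my choice, the formula is the one we just derived.

```python
# Sketch: design rate of an (L(x), R(x)) LDPC ensemble, node perspective.

def design_rate(L, R):
    """1 - L'(1)/R'(1), with L and R given as {degree: fraction}."""
    Lp1 = sum(i * Li for i, Li in L.items())   # L'(1) = sum of i * L_i
    Rp1 = sum(j * Rj for j, Rj in R.items())   # R'(1) = sum of j * R_j
    return 1.0 - Lp1 / Rp1

# Regular (3,6) as a sanity check: L_3 = 1, R_6 = 1 gives 1 - 3/6 = 0.5.
print(design_rate({3: 1.0}, {6: 1.0}))
```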
So that's the definition; is it clear? Good. Now it's good to think of some examples; I always find it difficult to give examples. The first examples are the regular ones, and I won't talk much about them. The second example should be some very simple irregular one. Suppose I take L_1 = 0, and L_2 = 0.5. What else shall we take? Let's say degree 4 with L_4 = 0.25, and degree 8 with L_8 = 0.25, just for fun. Can I make L_8 equal to 0.3 instead? No. Why? Because the L_i have to add up to 1; those conditions need to be satisfied. In polynomial form, L(1) = R(1) = 1.

Now, for R, I'm going to allow only two consecutive degrees: for my row weights I don't want to allow all kinds of weights, only some w and w + 1. So R_w is nonzero, R_{w+1} is nonzero, and all other R_j are zero. I want you to figure out, for a rate-half code, what those two weights have to be and what the fractions will be. I haven't tried this before, so I'm running it cold on you, but I know it has to work out just from what's on the board; you don't need any more information. You have to use two things: R(1) = 1, and rate = 1 - L'(1)/R'(1). Two variables, two equations.

It's not working out? With L_2 = 0.5, L_4 = 0.25, L_8 = 0.25 you get L'(1) = 4, so rate half forces R'(1) = 8, and then w R_w + (w+1)(1 - R_w) = 8 leaves one of the two fractions equal to zero, which is not what we wanted. That's sad. Okay, so maybe we change something; there's a contention that we don't want this 8, we want a 7. Try it with L_7 = 0.25 instead: then L'(1) = 2(0.5) + 4(0.25) + 7(0.25) = 3.75, so R'(1) must be 7.5, and you get w = 7 with R_7 = 1/2 and R_8 = 1/2. Please check that this works out; we have enough people agreeing, so we can proceed. It looks like for exact solutions you need the numbers to be nice, but typically you can live with 0.5001 as your rate, you don't need 0.5 exactly; if you allow for such things, you'll see this works out for most distributions.

Anyway, so here is an irregular code. Now, how do we denote irregular codes? For regular codes we had the pair (wc, wr); the notation for irregular codes is basically an (L(x), R(x)) LDPC code. So you see, this is another advantage of the polynomial representation.
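The little exercise we just did can be written as a short solver. This is only a sketch of the calculation above: the function name is mine, and it assumes the required R'(1) is not an integer, since otherwise one of the two fractions degenerates to zero, which is exactly what went wrong with L_8 = 0.25.

```python
# Sketch: given bit-degree fractions L and a target design rate, find the
# two consecutive check degrees w, w+1 and fractions R_w, R_{w+1}.
from math import floor

def two_degree_check_distribution(L, rate=0.5):
    """Solve R'(1) = L'(1)/(1 - rate) with R supported on {w, w+1}."""
    S = sum(i * Li for i, Li in L.items()) / (1.0 - rate)  # required R'(1)
    w = floor(S)                # only choice giving 0 < R_w <= 1
    Rw = (w + 1) - S            # from w*R_w + (w+1)*(1 - R_w) = S
    return {w: Rw, w + 1: 1.0 - Rw}

# The example from class: L_2 = 1/2, L_4 = 1/4, L_7 = 1/4.
print(two_degree_check_distribution({2: 0.5, 4: 0.25, 7: 0.25}))
# -> {7: 0.5, 8: 0.5}, matching what we found on the board
```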
So in our examples, what would a regular code be? For instance, an (x^3, x^6) LDPC code would be the regular (3,6) LDPC code; when you have only one degree there's no point writing x^3, you might as well write it as (3,6). And the other example we just saw is L(x) = x^2/2 + x^4/4 + x^7/4 with R(x) = x^7/2 + x^8/2. Both of these codes have design rate half. So that's how examples of irregular codes will be written.

Now suppose I ask you the question: can you characterize all L(x) and R(x) for which the design rate equals half? What kind of a characterization will that be? All you have to do is set one half equal to 1 minus (sum over i of i L_i) divided by (sum over j of j R_j). By the way, I'm using capital R for two things; hopefully it's clear from context: when I say r alone it's the rate, and R_j or R(x) is the degree distribution. I'm sorry for that; it just worked out that way. So what does the condition become? It works out to a linear equation: sum over j of j R_j minus 2 times (sum over i of i L_i) equals 0. A nice linear equation in the L_i and R_j; maybe the coefficients i and j make it look less clean, but it is at least linear.

Now I can go a step further and make the same restriction as before, which is very, very typical: R_w nonzero, R_{w+1} nonzero, and R_j = 0 for all other j, that is, for j not equal to w or w + 1. So for R you essentially have only one variable, R_w: what is R_{w+1}? It's 1 - R_w, and everything else is 0, so R takes only two nonzero values. You might ask me why I want to do that; it just turns out to be good enough, and we'll see why later on. It's good to concentrate on those cases. For L, it's typical to restrict the maximum degree: L_i = 0 for i strictly greater than some d_L, which is my maximum left degree. So beyond a certain weight I'm going to say L_i is 0. It's very typical to restrict your space this way. And then how many variables do you actually have?
You have L_1 through L_{d_L}, but that's only d_L - 1 free variables, because they have to add up to 1; and then w and R_w. So you have only that many variables, and one very nice linear equation which characterizes what they have to satisfy. That gives you the set of all distributions which have rate half and satisfy all these requirements. Typically we'll confine ourselves to such codes: even among the set of all irregular codes, we will restrict ourselves to codes with these types of distributions. Again, this is what is typically used, and it's good enough for our purposes. d_L will typically be 10 or 20, not more than that; you don't want too large a degree, because a large degree means more calculations here and there, so it's not a very nice thing to have. Yes, most of these restrictions are justified by simulation, if that's what you're asking. For instance, the degree distribution that gets to within 0.0045 dB of capacity has d_L equal to some 8000 or so, but otherwise it satisfies the same kind of restrictions. So that's how it goes. These are the definitions for irregular codes from the node perspective; you should get some degree of comfort with these L_i and R_j. I know it's not too difficult, just basic counting and simple arithmetic, but it can get a little bit tricky.

The next thing to worry about is construction. Suppose I give you this L and R, and an n large enough; how do you construct an LDPC code which satisfies all of this? Can we use the Gallager construction? Is it easy? It's not easy, right? The Gallager construction worked well only for the regular case. So you'll have to go back to the socket construction, and in the socket construction you'll have different numbers of sockets for different nodes, depending on the fractions that you have. But the socket construction will always work: do you agree that the total number of sockets on the left-hand side equals the total number of sockets on the right-hand side? So if you pick any permutation, you will get a valid Tanner graph, and from there you can go to a parity-check matrix which satisfies all these constraints. So the socket construction works in the general case; that's a nice advantage to have. That's about construction; I don't want to say more about it, we'll just stop there, but it's possible to do.
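Here is a small sketch of that socket construction for the general case. Everything about the code organization is my own illustration; it assumes n times each L_i and m times each R_j come out to whole numbers, and it makes no attempt to avoid repeated edges, which a practical construction would resample away.

```python
# Sketch: sample a Tanner graph from an (L(x), R(x)) ensemble by pairing
# bit-node sockets with check-node sockets via a random permutation.
import random

def expand_degrees(count, dist):
    """List of node degrees: round(count * fraction) nodes of each degree."""
    degs = []
    for d, frac in sorted(dist.items()):
        degs += [d] * round(count * frac)
    return degs

def socket_construction(n, L, R, seed=0):
    """Return the edge list (column, row) of a random Tanner graph."""
    rng = random.Random(seed)
    Lp1 = sum(i * f for i, f in L.items())   # L'(1)
    Rp1 = sum(j * f for j, f in R.items())   # R'(1)
    m = round(n * Lp1 / Rp1)                 # rows, since n*L'(1) = m*R'(1)
    # a degree-d node contributes d sockets:
    bit_sockets = [b for b, d in enumerate(expand_degrees(n, L)) for _ in range(d)]
    chk_sockets = [c for c, d in enumerate(expand_degrees(m, R)) for _ in range(d)]
    assert len(bit_sockets) == len(chk_sockets)  # socket counts must match
    rng.shuffle(chk_sockets)                     # the random permutation
    return list(zip(bit_sockets, chk_sockets))   # each pair is a 1 in H

# The class example at n = 16: m = 8 checks, 60 sockets on each side.
edges = socket_construction(16, {2: 0.5, 4: 0.25, 7: 0.25}, {7: 0.5, 8: 0.5})
print(len(edges))   # 60
```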
The next thing I want to talk about is the same thing from an edge perspective; it's possible to define these fractions from an edge perspective too. The capital L and R are nice, but it turns out that for density evolution you will need the edge perspective quantities: density evolution is much nicer to describe if you have edge perspective degree distributions. So, just as L_i is the fraction of nodes of degree i, I'm going to define lambda_i to be the fraction of edges connected to degree-i bit nodes. (I almost wrote rho for this, but the usual notation is lambda for the bit side and rho for the check side.) If I have to calculate this fraction, what goes in the numerator and what in the denominator? Let's start with the denominator: the total number of edges, which is the total number of ones in the parity-check matrix. In the numerator you have the number of edges coming from degree-i bit nodes, in other words the number of ones coming from the weight-i columns. This is defined for i = 1, 2, and so on. Similarly, there is rho_j, the fraction of edges connected to degree-j check nodes. If you want to calculate this from the matrix, the denominator is again the total number of ones, and the numerator is the number of ones in the degree-j, that is, weight-j, rows.

What's the connection between lambda_i and L_i, and between rho_j and R_j? There is a very simple one; one just needs to do a little work for it. Suppose I have to calculate lambda_i; what should I do? For starters, think of the block length as n. What does it mean to be a degree-i bit node? The degree of a bit node is the number of edges connected to it, so the degree-i bit nodes are exactly the weight-i columns: the weight of a column is equal to the degree of the corresponding node. I must have made that remark sometime; maybe I made it in passing and people didn't register it. In the Tanner graph, the columns correspond to the bit nodes, the rows correspond to the check nodes, and each one in the matrix corresponds to an edge. So if you look at a weight-i column, that bit node has degree i, with i edges connected to it; if you look at a weight-j row, the corresponding check node has j edges connected to it. All these correspondences should be clear.

So how will you compute lambda_i for an (L(x), R(x)) LDPC code? It's i L_i divided by the sum over i of i L_i. Do you agree? Just multiply numerator and denominator by n to see what happens. What is L_i times n? The total number of bit nodes of degree i, since L_i is the fraction of bit nodes of degree i. Multiply that by i to get the total number of ones from the weight-i columns. And why am I summing over all i in the denominator? Because n times the sum of i L_i gives me the total number of ones. The n cancels, so you get lambda_i = i L_i / (sum over i of i L_i). Is that clear? Similarly, what is rho_j?
Same way: rho_j = j R_j divided by the sum over j of j R_j. Now, so far there is no apparent reason why you need the edge perspective degree distribution; it seems like more arithmetic to confuse and complicate matters. Eventually we will have a real use for it, and I'll show you when that happens; at that point you'll see that the edge perspective is really, really useful. For now, it's good to work through this arithmetic whenever you get time; make sure you understand it very, very clearly.

Just like before, we can define polynomials, but for a change we will write rho(x) = sum over j of rho_j x^(j-1), and similarly lambda(x) = sum over i of lambda_i x^(i-1), with exponent i - 1 instead of i. Whether you use x^i or x^(i-1) doesn't really matter, since I know how to convert from one to the other; you'll see in a moment why the i - 1 is convenient.

Now I want you to spend some time on something which will really test the arithmetic and your understanding: get an expression for the design rate in terms of rho and lambda. Let's see; people are still struggling with the derivation. I could just write down the expression, but then there's not much learning in that, so you have to struggle with it a bit. It's a very simple calculation, but there are so many i's and l's and r's and lambdas and rhos that it can get confusing. Note that lambda cannot come in the numerator; make sure that for the regular case it works out. Yes, you will get integrals and all that. The final answer looks like this: the design rate works out to 1 minus the quantity (sum over j of rho_j / j) divided by (sum over i of lambda_i / i). And there's another, very simple way of writing it, which is why I put j - 1 in the exponents: the design rate equals 1 minus (integral from 0 to 1 of rho(x) dx) divided by (integral from 0 to 1 of lambda(x) dx). Make sure you can get to this formula.

Now, for the example we had, which was L(x) = x^2/2 + x^4/4 + x^7/4 and R(x) = x^7/2 + x^8/2 in the node perspective, what is the degree distribution in the edge perspective? For lambda you will get an x term, an x^3 term and an x^6 term; for rho you will get an x^6 term and an x^7 term. What are the constants that come here? You can just plug in and solve: the numerators are 1, 1 and 1.75, over a common denominator of L'(1) = 3.75, so lambda(x) = (1/3.75) x + (1/3.75) x^3 + (1.75/3.75) x^6. You might want to check this calculation; it's a simple back-and-forth. What about the next one? After scaling, the denominator can be made the same, and the numerators are 1.75 and 2: rho(x) = (1.75/3.75) x^6 + (2/3.75) x^7. Those are the edge perspective degree distributions. So instead of specifying (L(x), R(x)), it's also possible to specify (lambda(x), rho(x)). Another exercise that you should try: given rho(x) and lambda(x), how do you compute the L_i and R_j, that is, how do you go back to L(x) and R(x)? Make sure you have a set of formulas for that.
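All of this back-and-forth arithmetic fits in a few lines of code. This is a sketch of the conversions and the rate formula just derived, checked on the class example; the dictionary representation and the function names are mine.

```python
# Sketch: node <-> edge perspective conversions and the edge-perspective
# design rate, with distributions stored as {degree: fraction}.

def node_to_edge(dist):
    """{i: L_i} -> {i: lambda_i = i*L_i / L'(1)} (same formula for R -> rho)."""
    total = sum(d * f for d, f in dist.items())
    return {d: d * f / total for d, f in dist.items()}

def edge_to_node(dist):
    """{i: lambda_i} -> {i: L_i}: divide by the degree, then renormalize."""
    total = sum(f / d for d, f in dist.items())
    return {d: (f / d) / total for d, f in dist.items()}

def design_rate_edge(lam, rho):
    """1 - (sum rho_j / j) / (sum lambda_i / i), the integral formula."""
    return 1.0 - sum(f / j for j, f in rho.items()) / \
                 sum(f / i for i, f in lam.items())

L = {2: 0.5, 4: 0.25, 7: 0.25}
R = {7: 0.5, 8: 0.5}
lam, rho = node_to_edge(L), node_to_edge(R)
print(lam)                          # 1/3.75, 1/3.75, 1.75/3.75 as above
print(rho)                          # 1.75/3.75 and 2/3.75 as above
print(design_rate_edge(lam, rho))   # 0.5
print(edge_to_node(lam))            # recovers L
```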
Once again there will be a little work involved, but it's not too difficult. The previous formulas took you from L_i and R_j to lambda_i and rho_j: i L_i divided by the sum of i L_i, and j R_j divided by the sum of j R_j. How do you do the reverse? It's a simple thing: divide by the degree instead of multiplying, and renormalize, so L_i = (lambda_i / i) divided by the sum over i of lambda_i / i, and similarly for R_j. One can write it down and check it; make sure you can do that also. So we will stop here.