Okay, so this is lecture 20. The last thing we were looking at was coding gain and capacity. I am not sure it all sank in, and it is not too crucial for this course, but you should know that there is such a notion. The crucial idea was that, at least for BPSK over AWGN, there is something called capacity, denoted C, and it is a function of SNR. So what is capacity? Capacity is a rate; since this is BPSK, capacity lies between 0 and 1. It is the maximum rate at which you can transmit reliably. C is a monotonically increasing function of SNR; I even showed you a plot, and you can calculate and plot it yourself if you want. Usually you think of fixing the SNR and asking for the maximum rate at which you can transmit, but there is also the dual way of looking at it, through the inverse. The function is nice and monotonically increasing, so the inverse obviously exists: invert it and you get SNR as a function of C. Suppose you want to transmit at a particular rate: what is the minimum SNR you need to achieve arbitrarily low probability of error? And what do you mean by achieving? You do not know the code explicitly, but you know that there exists a code which will achieve arbitrarily low probability of error. This is called the capacity-achieving SNR. Both views are the same; that is why, for rate half for instance, I said the minimum SNR you need is around 0.2 dB. Coding gain is defined in essentially this form: C = 1/2 implies SNR of at least about 0.2 dB; that was one point I gave you. For uncoded transmission you wanted an SNR of around 13 dB to get a 10^-6 bit error rate, so if you take "arbitrarily low bit error rate" to be the same as 10^-6, you see that with coding one can potentially get a big reduction in SNR. But the problem is that this comparison does not account for rate normalization: when you use rate half you are spending more energy, because you are actually using the channel more, so you have to normalize for that. After normalizing, the right way to compare is in terms of Eb/N0. This SNR is the capacity-achieving SNR; likewise there is a capacity-achieving Eb/N0. What do you do to SNR to get Eb/N0? Divide by 2 times the rate, and here the rate is C, so you divide by 2C. It is a very simple way of moving to Eb/N0, and then you can compare on a normalized footing. For C = 1/2 there is no difference: the factor 2C equals 1 and cancels, so the Eb/N0 is the same as the SNR. But the uncoded Eb/N0 comes down: for uncoded systems you need around 10.5 dB of Eb/N0 to get 10^-6. Is that clear?
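To make the rate normalization concrete, here is a minimal sketch in Python (a helper of my own, assuming the lecture's relation Eb/N0 = SNR / (2 x rate), with everything in dB):

```python
import numpy as np

def ebn0_db_from_snr_db(snr_db, rate):
    """Convert SNR (dB) to Eb/N0 (dB) for BPSK with a rate-`rate` binary code,
    using the relation from the lecture: Eb/N0 = SNR / (2 * rate)."""
    return snr_db - 10 * np.log10(2 * rate)

# At rate 1/2 the factor 2*rate equals 1, so nothing changes:
print(ebn0_db_from_snr_db(0.2, 0.5))    # -> 0.2 dB, same as the SNR
# Uncoded (rate 1): 13 dB of SNR becomes roughly 10 dB of Eb/N0.
print(ebn0_db_from_snr_db(13.0, 1.0))   # -> about 10 dB
```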
So for uncoded transmission, Eb/N0 is about, let us say, 11 dB, just to give you a round number, for a 10^-6 BER. For rate half, the capacity-achieving Eb/N0 is around 0.2 dB, again for 10^-6 BER; arbitrarily low is what we actually know, but let us say 10^-6. So you notice that for essentially error-free transmission, around 10 dB of coding gain is possible. What do I mean by that? In the BER versus Eb/N0 plot, the uncoded curve will hit 10^-6 at around 11 dB, and you can have an actual code whose curve hits 10^-6 close to 0.2 dB; that difference is the coding gain. So around 10 or 11 dB, let us keep it as 10 dB, of coding gain is possible by constructing a good rate-half code, and as I told you recently, people have constructed rate-half LDPC codes which get very, very close to this point. A 10 dB coding gain is huge, even after accounting for the rate normalization. That should be more than enough on capacity for this class: the notion exists and it is important. Any questions on this? I also showed you, as two simple calculations, that for the repetition code you would not get any coding gain.

So let us see a few more plots. I will start with the Hamming code; you know the Hamming code. This is the BER versus Eb/N0 plot for the (7,4) Hamming code, the simplest code out there. I have plotted three curves. One is the uncoded curve; maybe you cannot see the colors very clearly, but that is the uncoded one. The green curve is maximum likelihood decoding for the Hamming code, that is, soft-decision ML decoding. I have marked the coding gain at a BER of 10^-3 as around 1 dB, but I may have measured against the wrong curve there, so that is not quite correct; it is less than 1 dB. The Hamming code will not give you that much. In any case, that gap is the coding gain for the Hamming code, and it has to be read against the uncoded curve. But remember, the green curve is soft-decision ML decoding.
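As an aside, the uncoded curve in all these plots is the standard expression for BPSK over AWGN, BER = Q(sqrt(2 Eb/N0)). A quick sketch (scipy is my own choice here) confirms the 10.5 to 11 dB figure quoted above:

```python
import numpy as np
from scipy.special import erfc

def uncoded_bpsk_ber(ebn0_db):
    """BER of uncoded BPSK over AWGN: Q(sqrt(2*Eb/N0)),
    written via erfc since Q(x) = 0.5 * erfc(x / sqrt(2))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * erfc(np.sqrt(ebn0))

print(uncoded_bpsk_ber(10.5))   # about 1.1e-6: the curve hits 10^-6 near 10.5 dB
```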
So how is soft-decision maximum likelihood decoding done for the Hamming code? Have you forgotten? You do correlations. How many correlations do you have to do? It is a (7,4) code, so there are 2^4 = 16 codewords, hence 16 candidate symbol vectors: you correlate the received vector with all 16 and pick the one which gives you the maximum correlation. So it is quite complex; it is not a simple decoder. On the other hand, the syndrome decoder for the Hamming code is very simple, but it is a hard-decision decoder: you make hard decisions first and then do the syndrome decoding, and that is the third plot, the hard-decision syndrome decoder. You see it is not giving you much coding gain at all; doing hard decisions is quite suboptimal. In fact it gives no gain up to some point and only crosses over after that. Eventually, at very high SNRs, it will give you some coding gain, but at low SNRs it gives you a loss. What was the question? Is it ever better than soft decision? No, at no point is it better than soft decision; the green curve stays sufficiently far away. It is the uncoded curve that crosses over with the hard-decision decoder, and that can happen: hard decision is quite suboptimal, and uncoded can be better. Will you always see this crossover? I don't know; it depends on the code. But at lower Eb/N0 there is really not much coding gain to be had; in general the gain appears at lower BERs, that is, at higher Eb/N0.

So how do you think I got this plot? I did correlations with 16 vectors and counted the errors, and of course closed-form analysis of that is not possible. This plot was obtained by simulation: I generate a lot of codewords, do the BPSK modulation, add Gaussian noise vectors, basically do the whole thing in MATLAB, then do the correlations and count the number of bit errors that actually happened. The number of bit errors divided by the total number of bits transmitted gives the bit error rate at a particular Eb/N0. It is a very easy simulation to do, at least for the Hamming code; for bigger codes it gets tough. That is how I got this plot. The uncoded curve you can obtain analytically, and in fact even for hard-decision decoding you can get an expression; that is why those curves go all the way down below 10^-6, while the ML curve stops at 10^-3.
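Here is a minimal sketch of that kind of simulation in Python rather than MATLAB. The systematic generator matrix below is one standard choice for the (7,4) Hamming code, and the specific numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# One standard systematic generator matrix for the (7,4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# All 16 messages, their codewords, and the BPSK map 0 -> +1, 1 -> -1.
msgs = np.array([[(i >> j) & 1 for j in range(4)] for i in range(16)])
codewords = msgs @ G % 2
symbols = 1.0 - 2.0 * codewords        # 16 x 7 bank of candidate vectors

def simulate_ber(ebn0_db, n_words=200_000):
    """Monte Carlo BER of soft-decision ML (correlation) decoding."""
    rate = 4 / 7
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (2 * rate * ebn0))     # noise std per dimension
    bit_errors = 0
    for _ in range(n_words):
        i = rng.integers(16)                       # random message
        r = symbols[i] + sigma * rng.standard_normal(7)
        # All candidates have equal energy, so max correlation is the ML rule.
        j = int(np.argmax(symbols @ r))
        bit_errors += int(np.count_nonzero(msgs[i] != msgs[j]))
    return bit_errors / (4 * n_words)

print(simulate_ber(6.0))   # one point on the soft-decision ML curve
```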
Why did I stop the ML curve at 10^-3? See, to estimate a particular bit error rate reliably I need good statistics from the simulation, so I should collect at least 100 errors; unless I get around 100 errors I cannot really say anything. If I have to get 100 errors at a 10^-3 bit error rate, how many bits do I have to send? 10^5. So very low BERs mean a lot of simulation and a lot of computer time, and I do not want to run that kind of simulation, so I stop at some point. When you do simulations, all these things become important. Let me also come back to the earlier question: can we say that linear codes will achieve capacity? Yes; for binary symmetric channels, and even for binary-input AWGN channels, linear codes are enough.

Anyway, that is the plot for the Hamming code. You might say it is a very simple code, but at least it gives you something. For the repetition code you might be totally disappointed, since it lies on top of the uncoded curve; the Hamming code is just as simple and it gives us something, so it is not too bad. The next picture I want to show you is for other codes, some we have seen in class and some we have not. This graph is not from my simulation; it is from a book on convolutional codes by Zigangirov and Johannesson, I believe Fundamentals of Convolutional Coding. Stare at the picture for a while and try to draw your conclusions. You have uncoded BPSK, with Eb/N0 in dB on the x-axis, and 10^-6 as the reference BER; you notice everybody does the same thing, 10^-6 is the usual reference point. The uncoded line cuts it at around 10.5 or 11 dB. Then there is this code, which you might recognize: a (256, 192), rate-3/4 Reed-Solomon code, with, I believe, hard-decision decoding. It cuts the 10^-6 line at around 6 dB, so you are getting about 4 dB of coding gain with a Reed-Solomon code. Then there is a whole bunch of curves labeled Viterbi: those are convolutional codes, which we have not talked about yet; we will, after we do LDPC codes. And these vertical lines are capacity: for R = 3/4 the capacity is at 0.86 dB, and for R = 1/2 it is approximately 0 dB; there is one more capacity line on the plot which is not too relevant to us. So you see: while a coding gain of close to 10 dB is possible even at R = 3/4, your good old Reed-Solomon codes are able to give you only about 4 dB of it. There are still another 5 to 6 dB out there which the Reed-Solomon codes are not able to give, so you need other techniques. Even the Viterbi-decoded convolutional codes only get down to around 5 dB; you are not able to reach 1 dB or 2 dB, those kinds of numbers, with convolutional codes and Reed-Solomon codes. In fact, for a long time people believed it was not possible to do any better, that you cannot really get all this coding gain in practice; until the discovery of turbo codes and the rediscovery of LDPC codes, it was not considered possible. So forget about the curves for a moment and just look at the capacity lines, at 0 dB and 0.86 dB.
The remaining lines in that plot are not relevant to us. Well, Reed-Solomon codes are not bad; you do not have to be upset about using them. They give you good coding gain, not great coding gain, but it is not too bad. And convolutional codes, as we will see, are very easy to implement and not very bad either, but they do not give you too much coding gain. So you should get used to this kind of picture. From now on, most of the analysis we do will be based on simulation, and many of the motivations will come from simulation; I will say "in simulations we observe that this is what happens." You should know how to produce this kind of plot: given any code and any decoder, you should be able to simulate it and get the BER curve. It is very easy to do: you know the definition of Eb/N0 and you know the definition of BER, so you just simulate. If you can analyze as well, great; even if you cannot, simulation is good enough. Any questions on these things?

What I want to convey by all this is that there is some hope out there: even traditional codes give you some coding gain, but if you really want to get to capacity, you have to do something more. For instance, we have been designing Reed-Solomon codes with distance as the main criterion: minimum distance was the criterion, and we wanted to get as close as possible to the best possible minimum distance. You can do all that, even with BCH codes, and you will see they still sit around 5 dB away. BCH and Reed-Solomon codes have great minimum distance properties, but they do not give you the coding gain necessary to take you to capacity. What we do from now on, when I define LDPC codes, will look like one of the strangest definitions out there: there is almost no care taken over the parity-check matrix; in fact, we will generate the parity-check matrix randomly. Remember the care and concern with which we generated the BCH and Reed-Solomon parity-check matrices; for LDPC codes I will simply say: generate it randomly. And you will see that this gives you performance that pushes you towards that full coding gain. It sounds contradictory, but the trick has to do with the proof of the capacity theorem, so let me briefly mention that before I introduce what LDPC codes are and we see some basic definitions. The proof of capacity is the key to most of this, and it involves what are called random codes; it does not involve Reed-Solomon or LDPC codes. What do I mean by random codes? You basically generate the code at random: take, for instance, a generator matrix and fill each entry with a random bit, 0 or 1. If you do that, it turns out that many codes are good; many random codes end up being very good from a capacity point of view. These are things one can show, though the derivation is long and I will not do it here. One thing people have known for a long time is that a random code is good when you make the block length very large. Suppose I want rate-half codes: as I keep making the block length very large, I do not have to worry so much about constructing the parity-check matrix very carefully; I can generate it at random, and in many cases it will be good.
That was also known. But what is the problem when implementing it in practice? Suppose you generate a random 0-1 matrix: decoding is the problem. How do you decode? Whenever we say those random codes are good, we assume ideal decoders, either the soft-input ML decoder or the hard-input ML decoder; but decoding is a big problem, you simply cannot decode. That is why Reed-Solomon codes are very nice: all that structure helps you in the decoding. With random codes you can in fact do way better than Reed-Solomon codes and get close to capacity in many cases, if only you can decode; that was the problem. I know I have not proved this to you, but take it as an axiom: if you construct a code at random, it is very likely to be good, except that you cannot decode it. So the main motivation behind LDPC codes and the like is: can we have a code that is as random as possible but still decodable, decodable even by some approximate algorithm? Can that approximate algorithm get close enough to the optimal one, and can we then claim that it will get to capacity? That will be the logic, and it will seem strange in many places, because it deviates a lot from the very rigorous way in which we have been constructing parity-check matrices. But believe me, when you actually implement it and look at the BER versus Eb/N0 plot, these codes are way better than the Reed-Solomon codes and the convolutional codes that were designed with so much care.

So let me write just a few lines on how it works. It is known that many random codes are good; many random codes can approach capacity at large block lengths. When I say "many," what I mean is: generate any one code at random and there is a good probability that it gets you close to capacity. So if you could decode them, what would you do? Just generate one random code after another, do your BER versus Eb/N0 plot, and check which one is close to capacity; you would quickly find one, quickly in the sense that you would eventually find one. Is there a connection between entropy and this? I do not want to make such connections; it is basically a combinatorial property. When you generate at random, it works well; that is all. So this was also known, and the problem, again, is decoding: the problem with random codes is that implementable decoding is very, very hard. So in modern codes the approach taken is the following; you might say it is not a very satisfying approach, but at least it gets you to capacity. Modern codes have a random part in their construction, and their decoding is efficient. Efficient within today's technology: you can build chips, or write code on a DSP or whatever, to decode these things very fast. That is what I mean when I say efficient. So the philosophy is very different.
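Just to make "take a generator matrix and put random entries for each bit" concrete, here is a purely illustrative snippet; the comment is the whole point:

```python
import numpy as np

k, n = 500, 1000                                    # a rate-1/2 code, block length 1000
G = np.random.default_rng().integers(0, 2, (k, n))  # random binary generator matrix

# The catch: ML decoding means comparing the received vector against all
# 2^k codewords, here 2^500 (about 3e150 of them). Not decodable in practice.
print(f"candidate codewords: 2^{k}")
```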
Ideally it would be great if I could have a nice deterministic construction, like the Reed-Solomon codes, which also takes you to capacity, but it seems that so far people have not succeeded in doing that. If you have nice deterministic constructions, they do not take you close to capacity with efficient decoders; if you have these pseudo-random constructions, then you have efficient decoders and you get to capacity. So there is a certain degree of trust required, though analysis is also possible for these codes; I will hint at how to analyze them, and we will do one or two of the analysis steps in detail, but it is not possible to do the entire thing in great detail. So this is a prelude to what we are going to see next: LDPC codes. To gather all the thoughts together: we have moved slowly away from the BSC to BPSK over AWGN; we worried about capacity and saw what the maximum possible coding gain is; we saw very quickly that the traditional codes do not take you very close to capacity; and I have given you loose arguments for why some degree of randomness might be needed in your code construction to get close to capacity. It is not very rigorous, and unless you work through a whole lot of theory carefully it is difficult to make it rigorous, so I have gone through it fast, to give you some motivation.

With all that background, let us move into low-density parity-check codes; the shortened name is LDPC codes. I will give you the definition, and it will be surprising: there is really almost nothing to define. Unlike the codes we have seen so far, LDPC codes do not have a specific parity-check matrix. Any linear code (let me say in a moment why "any" should not confuse you) with a sparse parity-check matrix is said to be an LDPC code. What is sparse? Very few ones, lots of zeros. There is really nothing in this definition, right? So how do you construct an LDPC code? Take a parity-check matrix, put a few ones here and there, and you get an LDPC code. But it is not the case that every LDPC code will be good; just because I put down a few ones, I cannot expect my code to be any good. For instance, if I have an all-zero column, obviously the code will not be good. So there are further classifications and careful study to figure out which of these LDPC matrices are likely to be good. Again, we will not have a deterministic construction which says "this one will be good"; we will say: if your parity-check matrix is sparse, and in addition it satisfies some further constraints, then it is very likely to be good. That is the kind of statement we will make, very different from the usual statements you saw in terms of minimum distance. One such low-density parity-check matrix construction, which has produced reasonably good codes, not the best possible LDPC codes but quite good ones, is what is called the regular construction, and we will concentrate a lot on regular LDPC codes. Regular LDPC codes are parametrized by three quantities. One is the block length n. What makes the code regular is that each column of the parity-check matrix has to have the same weight, weight meaning the number of ones; that is the regularity condition.
So I need that common weight for all the columns; I will call it wc, the weight of each column. For regular LDPC codes, call the parity-check matrix H. In addition, I am going to require that each row of the parity-check matrix also has a constant weight, and that constant weight I will call wr. These are the three parameters that define my parity-check matrix, and it has to be sparse. How do I make it sparse? wc and wr should be small in comparison to n; then there are very few ones in the parity-check matrix. Why have I not specified the number of rows? Because these three parameters are enough to tell you how many rows the parity-check matrix will have. Why is that? Given n, wc and wr, you can actually calculate the number of rows. How? The trick is to count the total number of ones in the parity-check matrix in two ways, column-wise and row-wise. Counted column-wise, the number of ones is wc times n. Counted row-wise, it is the number of rows times wr. Both counts have to be equal, otherwise something has gone wrong in your construction. So the number of rows has to equal n wc / wr. Immediately that means I cannot pick wc and wr to be arbitrary numbers; it depends on n as well: n wc / wr should be an integer, since the number of rows has to be an integer; I cannot have 2.5 rows. So these three parameters are enough: the number of rows of H equals n wc / wr, and this has to be an integer, not a fraction. Given a block length n and a wc, there are constraints on wr: you cannot pick every possible wr, only those which divide the product n wc. The next question: do these three numbers completely define the parity-check matrix, or do they define a set of parity-check matrices? A set, clearly. Just because I fix the block length, the column weight and the row weight, I have not completely specified the parity-check matrix; there can be more than one H which satisfies all three constraints. That will be the level of randomness in my code. I will say: I do not care about the exact code you pick with these parameters; pick any code with these parameters and it is very likely to be good. That is the kind of statement we will make, and that is the randomness in the regular LDPC construction. So let us see some examples. The most common value for wc, the weight of the columns, is three; two is too low, so you usually pick three. And n is usually not given so much importance; the reason is that you want to be able to go to larger and larger block lengths, so n is not so crucial. Whatever block length you have, it is going to be very large, so we will not bother too much about it; we will usually only worry about wc and wr. And suppose I say I want a particular rate, some capital R.
Say, for instance, the rate is half. From these two, wc and the rate, you can compute what wr will be. Can you do that? You cannot always hit the exact rate, some other things come in, but you can get a nice relationship for wr, something that can be guaranteed. How do you do it? Remember the number of rows was n wc / wr. Assuming all the rows are linearly independent, what is the rate of the code? The rate of the code equals 1 minus the number of rows of H divided by the number of columns of H; that is the definition of rate. (You cannot just take the number of rows divided by the number of columns; that would be the rate of the dual.) Assuming all the rows are linearly independent, this works out to (n - n wc / wr) / n = 1 - wc / wr. Now, what happens to the rate if the rows are not all linearly independent? Does it go up or down? It can only increase. So 1 - wc / wr is a lower bound on the rate: if the rows are not linearly independent, in general you can only say the rate is greater than or equal to 1 - wc / wr. In most cases, when the matrix is randomly constructed, the rate will be at most marginally higher, so we take 1 - wc / wr to be the designed rate of the code; I will just call it the rate. If at all there are some dependent rows in a random construction, it will be only one or two, and that will not change the rate much when n is very large. So we will say 1 - wc / wr is the rate; there can be a small change, but it will not be significant in the random construction. So once I say wc = 3 and I want rate half, what will wr be? Six. wc = 3 and wr = 6 gives design rate half. Is that clear? Now, for what block length n can I construct a wc = 3, wr = 6 regular LDPC code? Can I construct it for all n, or is there some restriction? n has to be even. How did I get that? n wc / wr = 3n/6 = n/2 has to be an integer. But can it be any even number? No, it has to be a sufficiently large even number. You cannot have n = 2: obviously I cannot have rows of weight 6 in a parity-check matrix with only 2 columns. So n has to be sufficiently large and, in addition, even; otherwise I cannot satisfy both constraints, column weight 3 and row weight 6. Usually, when you want to specify this code, it will be denoted as a (3, 6)-regular LDPC code; that is the notation. And you should remember that this is not even a code: it actually specifies a set of codes. Remember that always. Whenever people say "LDPC code," it is usually a set of codes with some randomness built into it, and the understanding is that any one code you pick from that set, as long as it is random enough, is likely to be good. I have not proven those statements, but we will see some analysis of why that is so. That is the way to interpret the definition of LDPC codes.
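The bookkeeping above, the number of rows and the design rate, written out as a small helper (a sketch of my own; the function name is made up):

```python
def regular_ldpc_params(n, wc, wr):
    """Rows and design rate of an (n, wc, wr)-regular parity-check matrix.
    Counting ones two ways gives: rows * wr = n * wc."""
    assert (n * wc) % wr == 0, "n*wc/wr must be an integer"
    rows = n * wc // wr
    return rows, 1 - wc / wr       # design rate, a lower bound on the true rate

print(regular_ldpc_params(1000, 3, 6))   # (500, 0.5): the (3,6) ensemble is rate 1/2
```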
Nobody will actually specify the exact LDPC code; they will only specify weights like this: every column should have so much weight, every row should have so much weight, and you go and generate one for a particular n, large enough. Say n = 1000 or so: you can generate a (1000, 500) code from the (3, 6)-regular ensemble, and that, we are saying, is likely to be good. I have not shown it to you, and I have not even told you how to decode it; so far this is just the construction. Ten minutes left, so, is this clear? It is very easy now. Suppose I say I want rate 1/4: what do I do? You notice a (3, 4)-regular code will give you rate 1/4, and a (3, 5)-regular code will give you rate 2/5. You can keep calculating like this, but you should remember that for each of these there will be some constraint on the block length. What is the constraint for (3, 4)? n should be a multiple of 4. The understanding is that you only pick those kinds of n, and large enough. If you pick n to be very small you have to worry about the combinatorics: can I have an actual construction with three ones per column and four ones per row? The combinatorics might not work out. Once you go to large enough n it will work; there will be at least one such code, and you can in fact prove there will be lots of codes like this. That is why the assumption here is always: choose an n such that wc n / wr is an integer, and n large enough. And reasonable n suffices; you do not have to go to really, really large n like 10,000 or 20,000. But it depends on wr and wc. Suppose now I look at a (3, 30) code: what design rate do I get? 0.9. I get a rate-0.9 code, but for this (3, 30) code you might have to go to a larger n. The only divisibility constraint that comes out is that n should be a multiple of 10, but you will have a tough time constructing an n = 50 code satisfying (3, 30); it may not be that easy. First of all, at n = 50 it will not even be sparse, so you will have to go to a large n to make it sparse; maybe a thousand is not a bad idea, or maybe five thousand. You do all that and you can get this code; the understanding is always that you pick an n large enough. So these are regular LDPC codes, and among the good regular LDPC codes you will see, most have wc = 3.
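Plugging the examples just mentioned into the same kind of counting (again only a sketch):

```python
import math

# Design rates and block-length constraints for the examples above:
# n * wc / wr is an integer exactly when n is a multiple of wr / gcd(wc, wr).
for wc, wr in [(3, 4), (3, 5), (3, 30)]:
    step = wr // math.gcd(wc, wr)
    print(f"({wc},{wr}): design rate {1 - wc/wr:.2f}, n a multiple of {step}")
# -> rates 0.25, 0.40, 0.90; n a multiple of 4, 5, 10 respectively
```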
So that is the next thing I am going to talk about: how do you actually construct one? There are several ways of constructing it; I will provide two constructions, and I think we will have time for one today; the other one we will see tomorrow and pick up from there. Suppose I pick an n large enough, so that I know, at least intuitively, that the construction should be possible; how do I actually go about constructing it? One construction is due to Gallager. In fact, regular LDPC codes were introduced by Gallager in the 1960s, in his PhD work, and he gave a construction for them; this is Gallager's construction, which we will see now. You know it is enough to choose n such that wr divides wc n, but Gallager's construction works when wr divides n itself. That is one further restriction, but in return it is a very simple construction and it works in a very nice way. So the parity-check matrix for an (n, wc, wr) LDPC code is constructed according to Gallager as follows. I will fix wc = 3 just for convenience; it works for other values too, and there is no reason why you must fix it to be three. Let l = n / wr; I know that is an integer, since I said this works only when n is a multiple of wr. What I do is take the overall parity-check matrix and divide it into parts: rows 1 through l are one part, rows l + 1 through 2l are another part, and so on. How far does it go? The total number of rows is wc times l, so it goes all the way to row wc l, and you have wc parts. Call these parts H1, H2, and so on up to H_wc. I will show you how to construct H1, and you will see the rest works out. H1 is constructed as follows: the first row contains wr ones placed in a run at the start; the second row again contains wr ones, but starting from the next column, where the first run ended; the third row contains the next run of wr ones; and so on until the last row. How do I know I can divide the n columns into runs of wr? Because wr divides n. So each row is a run of wr ones, H1 is l rows tall, and its width is n. How do you construct H2? You randomly permute the columns of H1. How do you construct H3? Another random permutation of the columns of H1. And H_wc? Yet another random permutation of the columns of H1. (I said I was fixing wc to be three, but you can see I have not used that at all.) Will the result satisfy all the properties I want? Yes: each column will have exactly wc ones and each row will have exactly wr ones; you just needed the additional assumption that wr divides n. So this is one ensemble, you can call it one set of LDPC codes, for a particular set of parameters, according to Gallager's construction; the permutations you usually pick at random. Is this construction clear? We will see a small example in the next class; I will show you a very simple example with small numbers, and at that point it will be slightly clearer. We will pick up from that example in the next class. There are also other constructions for the most general case, which we will actually use: for theoretical analysis Gallager's construction is a little bit difficult, and there is another construction which is easier for theoretical analysis. I will do that one too, but it is a little more abstract; we will slowly come to it.
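To fix the idea before the worked example next class, here is a sketch of the construction in Python (my own function and parameter choices; the small (n, wc, wr) = (20, 3, 4) check at the end is just illustrative):

```python
import numpy as np

def gallager_ldpc(n, wc, wr, rng=None):
    """Parity-check matrix of an (n, wc, wr)-regular LDPC code via
    Gallager's construction; requires wr to divide n."""
    assert n % wr == 0, "Gallager's construction needs wr to divide n"
    rng = rng or np.random.default_rng()
    l = n // wr
    # H1: row i is a run of wr consecutive ones starting at column i*wr.
    H1 = np.zeros((l, n), dtype=int)
    for i in range(l):
        H1[i, i * wr:(i + 1) * wr] = 1
    # H2 ... H_wc: random column permutations of H1, stacked below it.
    blocks = [H1] + [H1[:, rng.permutation(n)] for _ in range(wc - 1)]
    return np.vstack(blocks)

H = gallager_ldpc(20, 3, 4)
print(H.sum(axis=0))   # every column has weight wc = 3
print(H.sum(axis=1))   # every row has weight wr = 4
```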