Okay, so let's summarize briefly. Today should be the last lecture on LDPC codes; there's nothing else I want to do with them, and we'll move on to other topics as we go along. So let's summarize all that we did with LDPC codes. The first thing we saw was the BSC with the Gallager A decoder, and we did density evolution for that, starting with regular codes and then moving to irregular codes. Then we moved on to BPSK over the AWGN channel and saw the soft message-passing decoder. This decoder is also called the belief propagation decoder. One thing you have to understand about today's research world, if you decide to move into research: when an area becomes hot, huge numbers of people work on it at the same time, so you'll see lots of work around this. If you just search for LDPC you'll hit so many papers it's quite mind-boggling. The soft message-passing decoder has been analyzed from many perspectives: statistical mechanics, nonlinear dynamics and even chaotic dynamics, good old probability, and, at the end of the day, almost everything is backed by simulations. Belief propagation and inference are one such way of viewing it. And then we also saw density evolution for the soft decoder.

So at the end of the day, what should you worry about in LDPC codes? The most important thing is the degree distribution; hopefully you agree with me. The degree distribution controls performance. That is the biggest moral of the story. If anybody talks to you about an LDPC code, the first question you should ask is: what is the degree distribution? Based on that, you find the threshold, and based on that, you decide how good the code is compared to capacity, or how good it is in simulation. In simulations, the performance will track the threshold as long as the block length is large enough. The way we do the analysis, we make a lot of approximations: we assume the neighborhood of a node is tree-like (that's the main approximation), we assume the all-zero codeword along with a symmetric channel, and so on. Once you do all that, it seems like the analysis is so approximate that it should not work. But in reality, even when those assumptions are violated, the threshold is a very good measure of how good or bad the code is; just make your block length large enough. And it's quite remarkable that you can get really, really close to capacity and actually implement the soft decoder at such large block lengths. If you remember our computations from before, at large block lengths it is nearly impossible even to run the syndrome decoder, forget about running a soft ML or MAP type decoder. This soft decoder, while clearly suboptimal (it is not the MAP or ML decoder), is good enough to get you very, very close to the Shannon limit, so you don't need anything else. Many people pronounced coding dead a few years back: LDPC codes are there, and for anything you want, all you have to do is design a suitable LDPC code.
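To make the density evolution idea concrete, here is a minimal sketch (my own illustration, not code from the course; function names like `gallager_a_de` are made up) of the Gallager A recursion for a regular (dv, dc) ensemble on the BSC, plus a bisection search for the threshold:

```python
# Minimal sketch: Gallager A density evolution on the BSC for a regular
# (dv, dc) LDPC ensemble, and a bisection search for the threshold eps*.

def gallager_a_de(eps, dv, dc, iters=1000):
    """Message error probability after `iters` iterations of density evolution."""
    p = eps  # error prob of a variable-to-check message; initially the channel error
    for _ in range(iters):
        # check-to-variable message is wrong iff an odd number of the other
        # dc-1 incoming messages are wrong
        q = 0.5 * (1.0 - (1.0 - 2.0 * p) ** (dc - 1))
        # Gallager A: send the channel bit unless all dv-1 incoming check
        # messages agree on the opposite value
        p = eps * (1.0 - (1.0 - q) ** (dv - 1)) + (1.0 - eps) * q ** (dv - 1)
    return p

def threshold(dv, dc, tol=1e-5):
    """Bisection for the largest eps whose error probability still goes to zero."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gallager_a_de(mid, dv, dc) < 1e-9:
            lo = mid   # still converges to zero error
        else:
            hi = mid
    return lo

print(threshold(3, 6))  # (3,6)-regular ensemble: roughly 0.039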
What I have not done is LDPC codes for other channels. This is an area that is still active. One channel that is very common in practice is the ISI channel: you have inter-symbol interference in wireless, and even in wireline networks, DSL-type links have lots of inter-symbol interference. So how do you design LDPC codes for that? How do you compute thresholds for that? All of that is interesting theory. For instance, OFDM is a very popular method today to combat ISI. How do you design LDPC codes in an OFDM environment? In OFDM you have lots of subcarriers, and different bits go through different subcarriers, so the Yi which had one distribution for you will end up having several distributions. How do you compute thresholds for those kinds of things? These are still open problems and interesting areas to work on. If you want to get into those areas, you should be very good at coding, and by coding here I mean programming. Most of these things involve programming, and if you can't write fast programs, it may not make much sense. If your program takes five days to run before it gives you an answer, then by the time you check whether your idea is correct, five days have passed and you will have forgotten what your idea was; what you do for those five days is another question. It's a big problem to work in areas where the result comes back after five days. You need a short cycle, and you should write good programs for it; there's no other way around it. So that's important.

If you look at the standards, there are a few that already use LDPC codes. There is the DVB-S standard (Digital Video Broadcasting, S for satellite): video broadcast from satellite uses LDPC codes. There is also the WiMAX standard, IEEE 802.16e, which also uses LDPC codes. In practice, to make the implementation easier, people use specific types of LDPC codes; the construction is not arbitrary. They construct a smaller base matrix and then do some replacements, using what are called protograph LDPC codes, and these are the most useful in practice. If you look at the LDPC code in WiMAX, it is actually irregular: they design for a particular degree distribution and then use permutation matrices very smartly to simplify the construction process. Otherwise it becomes very complicated. In fact, it's one thing to construct a 500 by 1000 sparse matrix; it's another thing to remember it. How do you represent it in hardware and software? It takes a lot of memory, and if it is not very structured, it becomes very painful. These protograph codes are ideas to make the matrix more structured: you put more structure on top of the randomness. Remember, one of the philosophies when we moved to capacity-approaching codes was that you need a random element; with a purely deterministic construction, at least so far, people have not succeeded in getting very close to capacity.
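As a rough illustration of the protograph idea (my own toy example, not the actual WiMAX construction), here is how a small base matrix can be "lifted" into a larger quasi-cyclic parity-check matrix by replacing each entry with a shifted identity block; only the base matrix and the shift values need to be stored:

```python
# Minimal sketch: lift a small base (proto) matrix into a larger quasi-cyclic
# parity-check matrix by replacing each entry with a Z x Z circulant
# permutation matrix (a cyclically shifted identity) or an all-zero block.

import numpy as np

def lift(base, Z):
    """base[i][j] = -1 for a zero block, otherwise the cyclic shift amount."""
    m, n = len(base), len(base[0])
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            s = base[i][j]
            if s >= 0:
                # circulant permutation: identity with columns shifted by s
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, s, axis=1)
    return H

# A toy 2 x 4 base matrix with lifting factor Z = 4 (shift values made up):
base = [[0, 2, -1, 1],
        [3, -1, 1, 0]]
H = lift(base, 4)
print(H.shape)  # (8, 16): only the base matrix and shifts need to be stored
```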
Okay, so those are the random parts; you do a mix and match between the two to get good performance. That's one thing. The other thing you'll see is that nobody actually implements the full-blown soft message-passing decoder in practice. It's possible, one can do it, but the most popular decoder is what's called the min-sum decoder. The idea behind the min-sum decoder is the following. The bit node update is nothing: it doesn't involve anything, you just have to add, and adding is very easy. The check node update, on the other hand, is a major pain. You need a big lookup table, and the other problem with the lookup table is that F is a nonlinear function involving a log, which is a very poorly behaved function near zero, so you cannot do uniform quantization and expect good behavior. You might have to increase your quantization precision just for F, and it becomes messy in VLSI. It's possible to do it, people have done it, but there is an approximation called the min-sum decoder which is much simpler. Look at the check node update: you are computing F(F(x1) + F(x2) + ... + F(xd)), where the xi are the incoming messages and F(x) = log tanh(x/2). This is the most complicated part; everything else is easy. Now tanh(x/2) is between 0 and 1, and if you take the log of that number you get a negative number. What does the log amplify? Values close to 0; if the value is close to 1, the log will not distinguish anything. So out of all these F(xi), which one has the largest contribution? The one that corresponds to the minimum xi. Do you agree? The one corresponding to the minimum xi has the maximum contribution. So what you do is keep only that maximum contribution and ignore everything else, just delete everything else from consideration. Instead of computing the full sum, I approximate the update as F(F(x_min)), and applying F twice just gives back x_min. That is the approximation; it's called the min-sum decoder, and it simplifies the whole thing. You can show that the min-sum decoder, with some further careful approximations, loses only about 0.7 dB or so compared to the full soft message-passing decoder. So you pick the xi which is minimum, ignore everything else, and you get the min-sum decoder. This can be implemented very easily for all the outgoing messages: you find the two smallest values, and every outgoing message is the overall minimum, except the message on the edge that carried the minimum, which gets the second minimum. It only involves finding minimums, which is not at all scary in VLSI; you can do this without any problem. So people implement something like this in practice; the min-sum decoder is the more popular one. There are many other practical issues to take care of.

Another thing I have not talked about is encoding. What we get from the LDPC degree distribution and construction is only the parity-check matrix H, which, as you know, is sparse. But when you want to encode, what should you do?
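Here is a small sketch contrasting the exact check node update with the min-sum approximation. It uses the sign/magnitude decomposition, with the convention phi(x) = -log tanh(x/2) for the magnitudes (this phi is its own inverse on positive arguments); the numbers and function names are my own illustration, not code from the lecture:

```python
# Minimal sketch: one check-to-variable message, exact "tanh rule" vs min-sum.

import numpy as np

def phi(x):
    # clip to avoid log(0) and overflow for very small or very large LLRs
    x = np.clip(x, 1e-12, 50.0)
    return -np.log(np.tanh(x / 2.0))

def check_update_exact(llrs_in):
    """Exact check node update from the incoming LLRs on the other edges."""
    sign = np.prod(np.sign(llrs_in))
    mag = phi(np.sum(phi(np.abs(llrs_in))))
    return sign * mag

def check_update_minsum(llrs_in):
    """Min-sum approximation: same sign rule, magnitude = smallest |LLR|."""
    sign = np.prod(np.sign(llrs_in))
    return sign * np.min(np.abs(llrs_in))

msgs = np.array([2.3, -0.4, 5.1, 1.8])   # incoming LLRs on the other edges
print(check_update_exact(msgs))          # roughly -0.23
print(check_update_minsum(msgs))         # -0.4 (slightly overconfident)
```

The min-sum output magnitude is always at least the exact one, which is why the further corrections mentioned above (scaling or offsetting the minimum) recover much of the loss.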
You have to go from H to G in systematic form. To get systematic form, you would convert H into the form [I P], put your message in the systematic part, and get the parity part as p = mP (with transposes wherever they are needed; make sure you take care of those things). So you get p = mP. But when you convert that part of H into I, you are doing a lot of row operations, and even though each row of H is sparse, after all those row operations P becomes dense. The problem is that, since it is essentially a random matrix, for encoding you have to remember this entire dense matrix. If H is 500 by 1000, P will be 500 by 500, and remembering a dense 500 by 500 matrix is a huge amount of memory; it's a waste. So you have to do something smarter. There are a lot of ways of doing this. One approach is approximate upper triangularization: instead of bringing H to [I P], you bring it to [T P'], where T is approximately upper triangular. T is triangular for most of its extent, and then there is a small gap of rows at the bottom where it is not, and you can make this gap very small. You achieve this purely by row and column swaps, without any row operations. What's the advantage of only swapping rows and columns? All the matrices stay sparse; nothing becomes dense. Then, with some smart linear algebra and maybe one small dense matrix, you can get away with encoding. That's one way of doing it. But you'll see that in the practical codes, the WiMAX and DVB standards, they actually design H in a way that is suitable not just for decoding but also for easy encoding: they use what's called a dual-diagonal structure, which makes encoding very easy. That is also possible. All of these are ways of keeping the encoding simple, because at the end of the day, if you can't encode very fast, there's no point in being able to use these codes.

Okay, so I think that wraps up LDPC codes; I don't want to do anything more. Most of the other areas are being explored in the term papers and programming assignments, so when those are presented in April, you'll get exposed to many of the other areas in LDPC codes. As I said, it's a recent area, and it has exploded into so many different directions that you'll have a tough time assimilating all of it. It's interesting stuff. Any questions? So what we'll do from now on is move to convolutional codes, and if we have time towards the end, I'll do some turbo codes.
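For completeness, here is a minimal sketch of the "dense" systematic encoding route described above: row-reduce a small H over GF(2) to [I P] (tracking any column swaps), then compute the parity from the message. This is exactly the approach that becomes impractical at large block lengths, since P turns dense; it is only meant to make the p = mP step concrete, and the example H is made up:

```python
# Minimal sketch: systematic encoding from a small parity-check matrix over GF(2).

import numpy as np

def systematic_encode_setup(H):
    """Return (P, perm) with the column-permuted H row-reduced to [I | P] over GF(2)."""
    H = H.copy() % 2
    m, n = H.shape
    perm = np.arange(n)
    for r in range(m):
        # find a pivot 1 in row r at or after column r, swapping columns if needed
        pivot = r + np.argmax(H[r, r:])
        if H[r, pivot] == 0:
            raise ValueError("H is not full rank; this sketch does not handle that")
        H[:, [r, pivot]] = H[:, [pivot, r]]
        perm[[r, pivot]] = perm[[pivot, r]]
        # clear the other 1s in this column (row operations mod 2)
        for rr in range(m):
            if rr != r and H[rr, r]:
                H[rr] ^= H[r]
    return H[:, m:], perm              # P is m x (n - m)

def encode(msg, P, perm):
    """Codeword c with c[perm] = [parity | msg]; parity = P @ msg mod 2."""
    p = P.dot(msg) % 2                 # the p = mP step (transpose convention)
    c = np.empty(P.shape[0] + P.shape[1], dtype=int)
    c[perm] = np.concatenate([p, msg])
    return c

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
P, perm = systematic_encode_setup(H)
c = encode(np.array([1, 0, 1]), P, perm)
print(c, H.dot(c) % 2)                 # syndrome should be all zeros
```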