Okay, so we have been looking at linear block codes, and I want to remind you of a number of things you should know. The first thing is that you should be very comfortable with the two matrices that describe a linear code, G and H. You should know their dimensions, their ranks, and the relationships between all these quantities: what is the row space of G, what is the column space? The column space will typically never enter the picture; the row space of G is the code, the row space of H is the dual, and you should be comfortable with all of that. You should also be able to go back and forth from one matrix to the other very easily using systematic reduction. The next thing you should be very comfortable with is the quantity D, the minimum distance. You should know its definition very clearly, particularly for linear codes. Then the relationship between D and H is crucial; this is what is usually used in practice and in design. So what is the connection between the minimum distance and the parity check matrix H? D is the minimum number of linearly dependent columns of H. You should contrast that with the rank, which is a completely different kind of entity; they do have a relationship, as I said, but they are not the same. The next thing is the syndrome decoder. What is the syndrome decoder? It is the maximum likelihood decoder for linear codes over the binary symmetric channel; that is one of its crucial properties. And how do you go about actually implementing a syndrome decoder? What is the important equation to solve? s equals H times e transpose.
How do you find s, the syndrome? H times r transpose. And then you find e; what is the characterization of e? Among all solutions, you want the minimum weight one. If you do that, you are doing the optimal thing. As I said, it is very complex in general, but that is the idea. All right, so I want to close out the syndrome decoder with one final example: we will pick a code and go through the syndrome decoder once again, just to remind you. Then I will do a final, simple calculation to show how to find the probability of block error, and then we will put it to rest and move ahead. So this is the final example, just to keep you up to speed. What we were doing in the past class was modifying codes; I will get back to that soon enough, but I want to quickly do an example to drive home the point once again. I will pick my standard example; I may have done this several times already, but it is a good one for illustrating what is going to happen. So if you were to build a syndrome decoder for this code, you would be trying to solve the equation s equals H times e transpose, and you could solve it for each s. How many different s's do you have? It is a three-bit vector, so you have eight possibilities. As I said, for these small cases you can do this ahead of time and build a table—call it the syndrome table if you want—which has s in one column and e-hat in the other. So let me write down all the possible syndromes. There is no reason to write them in this particular order; you can write them in any order you want.
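As a concrete sketch of building such a syndrome table, here is a small brute-force construction. The [6, 3] parity-check matrix H below is a hypothetical stand-in, not necessarily the code on the board:

```python
from itertools import product

# Hypothetical [6, 3] parity-check matrix H (3 x 6) over GF(2).
H = [
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
]
n = 6

def syndrome(e):
    # s = H e^T over GF(2)
    return tuple(sum(h * x for h, x in zip(row, e)) % 2 for row in H)

# Syndrome table: for each of the 8 syndromes, keep a minimum-weight error.
table = {}
for e in product([0, 1], repeat=n):
    s = syndrome(e)
    if s not in table or sum(e) < sum(table[s]):
        table[s] = e

print(len(table))        # 8 syndromes for 3 parity bits
print(table[(0, 0, 0)])  # the all-zero syndrome maps to the all-zero error
```

With these particular columns, every syndrome except 1, 1, 1 is matched by a weight-0 or weight-1 error; the syndrome 1, 1, 1 needs a weight-2 pattern, and the table keeps one of the several possibilities.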
For each syndrome, you can easily solve for e-hat. What is the minimum weight solution that gives this syndrome? By inspection, one can solve it very easily, and I am going to do it quickly. What about the syndrome 1, 1, 1? There are several possibilities, so I have to pick one; I will pick this one. So these are my e-hats. Now, to be the optimal decoder over the binary symmetric channel, what should I do? Whenever I get a received vector r, I compute s as H times r transpose, look up e-hat in the table, and output r plus e-hat as my c-hat. This is the optimal decoder over a binary symmetric channel. What will be the probability of error—that is, the probability that c-hat is not equal to c—for this decoder? That is the key point to note: if you keep thinking in terms of c, you will keep running around in circles; it is not easy to figure out. Instead, immediately jump to e. These e-hats are the error vectors that I am going to output. If the actual error that happened is not equal to any of these e-hats, then I will definitely make an error. Do you see why? The actual error that happened is some e; if I decide e-hat was my error and e is not equal to e-hat, I make an error, and that is a very easy thing to calculate. So the only thing you have to calculate is the probability that the actual error that occurred is not one of these e-hats. Another way of writing it: the probability of error is one minus the probability that the error vector e equals one of the e-hats from the table.
These e-hats in the table can be called the correctable error vectors, or correctable errors if you want. Those are the only correctable errors according to my syndrome decoder. There could be some change possible here—you can alter the table and still remain optimal—but whatever is in the table, those are the only error vectors that are correctable. For instance, if I introduce an error in the first bit and the last bit and run this decoder, will I be able to correct it? Not with this decoder, no. What will happen? For instance, if e equals 1, 0, 0, 0, 0, 1, what is e-hat? Certainly not e, but what will it be? Can you calculate that? You simply go and find the syndrome. How can I find s given only e, when I am not given r? I said s should be calculated as H times r transpose—but H times r transpose is the same as H times e transpose. Of course, nobody will give you e in reality; I am just doing this to show an example. So you calculate the syndrome—say it comes out to 1, 0, 0—you look it up in the table, and you conclude that there was an error in the fourth position, while the actual error was something else. So you conclude e-hat is 0, 0, 0, 1, 0, 0, and you make an error. This is a simple example of how the decoder makes an error. Now, how do you compute this quantity, the probability that e equals an e-hat from the table? It is very easy: simply find the probability of each vector in the table. What is the probability that e equals the all-zero e-hat? One minus p, to the power 6. Likewise, you can find each of these cases. For instance, what is the probability of a weight-one error? p times one minus p to the power 5. I gave you this formula, right?
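The decode-and-fail scenario can be sketched in code. The [6, 3] parity-check matrix H here is hypothetical (not the lecture's exact code), so the specific syndromes differ from the board example, but the mechanism is the same: an error pattern outside the table gets "corrected" to a different pattern.

```python
from itertools import product

# Hypothetical [6, 3] parity-check matrix over GF(2).
H = [
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
]
n = 6

def syndrome(e):
    # s = H e^T over GF(2)
    return tuple(sum(h * x for h, x in zip(row, e)) % 2 for row in H)

# Build the minimum-weight syndrome table by brute force.
table = {}
for e in product([0, 1], repeat=n):
    s = syndrome(e)
    if s not in table or sum(e) < sum(table[s]):
        table[s] = e

e_actual = (1, 0, 0, 0, 0, 1)       # two-bit error: first and last positions
e_hat = table[syndrome(e_actual)]   # the decoder's guess for this syndrome
assert e_hat != e_actual            # the decoder corrects the wrong pattern
print(e_hat)
```

Since H times r transpose equals H times e transpose, only the error pattern matters: the decoder maps this two-bit error's syndrome to some other minimum-weight pattern, so c-hat differs from c.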
In general, the probability of a particular error vector e is p to the power weight of e, times one minus p to the power n minus weight of e. This is a very simple formula, and you can use it to quickly compute this probability. So in this case, what will the final answer be? If you go through the computation, what is the probability that c-hat is not equal to c? It is 1 minus the sum of these terms: one minus p to the power 6, then 6 times p times one minus p to the power 5, and then p squared times one minus p to the power 4. That is the exact expression; if you want, you can compute it numerically. I know I wrote it down only at the very end, but it is a simple application in this case. All right, so I will sign off on the syndrome decoder at this point. We are not going to go back to it again, though you might see more of it in your examples or assignments, and I urge you to work through those. A slightly non-trivial calculation is finding the bit error probability. The probability I calculated was that c-hat is not equal to c, not that one bit of c-hat differs from the corresponding bit of c. That is a more difficult calculation, and there is not much point in doing it here: if one goes to zero, the other will also go to zero—maybe in a different way, but it will go to zero. Okay, so let us go back to what we were doing. The last thing we saw was operations on codes, and we saw two things. What are the two things we saw? One I introduced as a very important tool for designing codes for D equals four: extension. So what do you do for extension? How do you extend a code? You add an overall parity bit.
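Here is a quick numerical check of that expression, under the assumption (hypothetical, matching the running example) of a single-error-correcting [6, 3] code whose syndrome table contains the zero vector, the six weight-1 errors, and one weight-2 error:

```python
from math import isclose

p = 0.01  # BSC crossover probability (illustrative value)
n = 6

def prob(e, p, n):
    # P(error vector e on a BSC) = p^wt(e) * (1-p)^(n - wt(e))
    w = sum(e)
    return p**w * (1 - p)**(n - w)

# Correctable errors: zero vector, six weight-1 patterns, one weight-2 pattern
# (the weight-2 pattern's position is an assumption for illustration).
correctable = [(0,) * n] \
    + [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)] \
    + [(0, 0, 1, 1, 0, 0)]

p_block_error = 1 - sum(prob(e, p, n) for e in correctable)

# Closed form from the lecture: 1 - [(1-p)^6 + 6 p (1-p)^5 + p^2 (1-p)^4]
closed = 1 - ((1 - p)**6 + 6 * p * (1 - p)**5 + p**2 * (1 - p)**4)
print(isclose(p_block_error, closed))  # True
```

The sum over the table and the closed form agree term by term: one weight-0 pattern, six weight-1 patterns, one weight-2 pattern.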
It is very easy to see that if D equals three and you extend, you actually get a code with D equals four. So that is a simple design for D equals four: extension is adding an overall parity. The other thing we saw was puncturing; that was the next operation I described. What do you do when you puncture? In words, you drop some bits—if you want to be very specific, you drop some parity bits. It is useful to write down a few things. What happens to the rate when you extend? Does it decrease or increase? It decreases. So one might think of extension as something that costs rate, and the benefit you could potentially get is an increase in minimum distance. It is not guaranteed, though: the minimum distance increases only when it is odd; if it is even, it stays the same. What about puncturing? The rate definitely increases if you drop parities—just make sure you drop parities. And the minimum distance, what happens to that? It is likely to decrease. It could stay the same, but in most cases it will decrease. Those are the things to keep in mind. Puncturing is used a lot in practice. The reason is that you do not want to keep designing a different code for every rate, and in real life you will always need multiple rates for your codes. For instance, think of the famous wireless situation: fading and other effects make your instantaneous signal-to-noise ratio vary over time, so you might see different signal levels—or, in terms of the binary symmetric channel, a different p at different times. You will hardly ever find one code fitting all p's, and it is also expensive to keep designing different codes.
So what you do is design a big code of very low rate that will meet the worst condition, and then keep puncturing it to get higher-rate codes for better conditions. That is a very useful trick, used a lot in practice; puncturing is a very common technique. Another thing that is very common in practice is what is called shortening, which I will describe now. Shortening is also very easy to describe. Suppose I start with an (N, K, D) code C, described by a generator matrix G and a parity check matrix H. How is a codeword formed? As (m0, m1, ..., m_{K-1}) multiplied by G. I will assume G is in systematic form. If I do that, my codeword is (m0, m1, ..., m_{K-1}) followed by parity bits. How do I describe those parity bits? Simply as m times P, where I take G to be in [I | P] form, so m multiplies P. I will begin with the simplest version of shortening; it is a more general idea. When you shorten, the shortened code C_S will have block length N minus S and dimension K minus S. Codewords of the shortened code are obtained as follows: the first K minus S bits, m0 through m_{K-S-1}, are the message bits, and the remaining S message bits I set to zero. So when I shorten, the last S message bits are set to zero. I have chosen the last S just for ease of description; you can choose any S of the message bits and set them to zero. I am setting the last S because it gives this nice description, but if you want something else, you can change that too.
Of course, if those bits are set to zero, I do not have to transmit them in my codeword, so I will not. I will transmit the remaining message bits and the parities. How do I compute the parities now? I simply pad with zeros: I compute m times P, keeping the parity computation the same, where m is m0 through m_{K-S-1} followed by enough zeros to bring it back to length K. Then I send those parities. Is that clear? So shortening—you have to think of it as shortening the message. Since we are always thinking of the systematic version, if you shorten the message, you also shorten the codeword: if you pick S bits of your message and set them to zero, the corresponding S bits in the codeword also become zero. That is what happens in shortening. So the parameters are clear: N minus S is clear, because there are S bits that I made zero and dropped. Is K minus S clear as well? It has to be. If you want to think about it more carefully, start with the generator matrix G, which was [I_K | P]. What am I doing to the generator matrix when I shorten? In my message, the last S bits become zero, so the last S rows of G are not involved in any codeword, and I might as well drop them and retain only the first K minus S rows. And in the identity part, if I chop off the last S rows, the last S columns of the identity become all zero—that is exactly the statement that those codeword bits are zero—so I might as well drop those columns too. So starting from [I_K | P], the last S rows are removed, and likewise S columns are removed.
Those S columns become zero, so you drop them. Is that clear? You get the generator matrix of the shortened code, which is [I_{K-S} | P(1 to K-S)]: you do not take the entire P, only its first K minus S rows. You simply chop it down. That is what you do to the generator matrix, and it is very clear why this generator matrix has rank K minus S. So the shortened code is (N minus S, K minus S). What about the minimum distance? That is a little more tricky. The minimum distance of the shortened code will be lower bounded by D: it will be greater than or equal to D. Why? How can I say it will be at least D? Because I am not changing any of the codewords: each codeword of the shortened code, with the zeros inserted back, becomes a codeword of the original code. Do you see that? So if the original code had a certain minimum distance, and you only remove zeros from the codewords, the weight does not change, and you get at least the same minimum distance. Can you get more? Yes—it depends on the code and on which positions you shorten; it is possible to get more. In fact, one can argue that many of the codes we use in practice are shortened versions of some other code, and the minimum distance can be strictly higher. If you want another way of thinking about it, consider the parity check matrix. What is the parity check matrix originally? It is (N minus K) by N, of the form [P transpose | I_{N-K}]. What will the parity check matrix of the shortened code be? It will still have N minus K rows, and N minus S columns.
The identity part I_{N-K} remains as it is, but in P transpose, the last S columns are removed. You can also see this from how you compute the codeword using the parity check matrix: you have m0 through m_{K-S-1}, then the zeros, and then you calculate the parities. Do you remember this? Each bit of the codeword multiplies a column of the parity check matrix. I have set the last S bits of the message to zero, which means the last S columns of P transpose are completely irrelevant for me, so I chop them out, while the I_{N-K} part remains the same. The crucial observation is that D of the shortened code is greater than or equal to the original D. The reasoning: if you take a shortened codeword and add the zeros back, you get a codeword of the original code, and that must have weight at least D. Is that clear? The last thing I want to say is that shortening is very, very useful. The main reason is this: some good codes that we will see later, algebraic codes like Reed-Solomon codes, are most natural to construct at particular block lengths, like 255 or 511 or 1023. At those block lengths these codes become very nice to construct and encode, and very simple to describe mathematically. But you may not always want those block lengths in practice. You might want the same error-correcting capability that these good codes give you, without compromising on your block length choice.
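The chop-the-matrix argument can be checked mechanically. A minimal sketch, using the standard systematic [7, 4] Hamming code as the starting code (an assumption for illustration; any systematic G works) and S = 1: every shortened codeword, padded back with zeros, is a codeword of the original code, so the shortened minimum distance is at least the original one.

```python
from itertools import product

# Systematic generator matrix [I_4 | P] of a [7, 4, 3] Hamming code.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
K, N, S = 4, 7, 1

def encode(m, G):
    # c = m G over GF(2)
    return tuple(sum(mi * g for mi, g in zip(m, col)) % 2 for col in zip(*G))

codewords = {encode(m, G) for m in product([0, 1], repeat=K)}

# Shorten: drop the last S rows, and the last S columns of the identity part.
G_short = [row[:K - S] + row[K:] for row in G[:K - S]]

for m in product([0, 1], repeat=K - S):
    c_short = encode(m, G_short)
    # Re-insert the S zeros at the shortened message positions.
    c_padded = c_short[:K - S] + (0,) * S + c_short[K - S:]
    assert c_padded in codewords  # shortened codewords sit inside the original code

d = min(sum(c) for c in codewords if any(c))
d_short = min(sum(encode(m, G_short)) for m in product([0, 1], repeat=K - S) if any(m))
print(d, d_short)  # 3 3 for this example; in general d_short >= d
```

Here the shortened [6, 3] code keeps minimum distance 3; shortening can only preserve or increase it.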
You do not want the code to dictate your block length—the block length might be limited by many other factors in practice—so you shorten. When you shorten, what happens? The minimum distance does not decrease; in fact, it can increase, though in most cases it stays the same. And what do you gain control over? The block length. What about the rate? What happens to (K minus S) over (N minus S) compared to K over N? Do a simple computation and tell me: in what case does it decrease, and in what case does it increase? You have to use the fact that K is less than N. Do not get confused—do the computation. It decreases. So the rate goes down. Any time the minimum distance can increase, the rate has to go down; that has to happen. (If K were greater than N, the rate could increase, but of course that cannot happen.) So finally, one can say that for shortening, the rate decreases. Yes, there was a question: can the minimum distance increase? It is possible—very much possible. For instance, I can give you a simple example, a very silly example just to drive home the point. Suppose my G is this one on the board and I shorten the last bit; what happens? You see my example? If I shorten the last bit, my code suddenly has minimum distance four. Think in terms of some non-systematic cases to get the right picture; if you want a systematic example, that is possible too—you can come up with one. All right, so that is where we will stop as far as shortening is concerned. Let us keep moving. Before we go ahead and look at designs for D equals five, maybe we will first see an example of a D equals four design, just for fun. We know how to do it, right? How do you do D equals four?
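The rate computation is one line of algebra: (K−S)/(N−S) < K/N iff N(K−S) < K(N−S) iff −NS < −KS iff K < N. A quick check with exact fractions, using illustrative parameters (N = 31, K = 25, S = 5 is an arbitrary choice with K < N):

```python
from fractions import Fraction

N, K, S = 31, 25, 5  # illustrative parameters with K < N

rate_original = Fraction(K, N)       # 25/31
rate_short = Fraction(K - S, N - S)  # 20/26 = 10/13
print(rate_short < rate_original)    # True: shortening lowers the rate when K < N
```

Using Fraction avoids any floating-point doubt in the comparison.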
Extend: design for D equals three and then extend. So let us try an example for D equals four. I want you to construct a code with these two properties. Of course, what should you do with K? Maximize K—or, equivalently, minimize the number of rows you put in the parity check matrix. Go ahead and try it. We will try this first, and then we will see some nice bounds that relate N, K and D; I will show you how that works. So what code will you first try to design? Block length 30, because extension increases the block length by one—remember that. You design a minimum distance three code at block length 30, then extend it to get block length 31 with minimum distance four. I have written greater than or equal to four, but aim for D equals four. Do not give me the (31, 1, 31) repetition code and say you are finished; I want you to maximize K, so K should not be one. A few of you should be done by now. So how do you think about this? First, look at N prime equals 30, D equals three. What is the minimum number of rows you need in H for block length 30 and minimum distance three? Five. You cannot do it with fewer than five. I do not see enough people nodding their heads with conviction—are you convinced you need five? You cannot do it with four: with four rows there are only 15 different nonzero columns, and you would have to repeat a column; you cannot do anything more with it. So you need five rows here, and there are various ways of filling in the columns. Call this H prime. Then you extend it: put an all-ones row at the bottom and a column of zeros on top for the new parity position, and you get block length 31. So what do you get? You have a (30, 25, 3) code, and you extend it to get a (31, 25, 4) code. That is fine.
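A sketch of that construction: pick 30 distinct nonzero 5-bit columns for H prime (there are 31 available, so this is possible), which guarantees d is at least 3 at block length 30; some triple of columns is dependent, so d is exactly 3, and extending lifts it to 4. The particular choice of columns below is an arbitrary illustration:

```python
# 30 distinct nonzero 5-bit columns, out of the 31 available patterns.
cols = [tuple((v >> i) & 1 for i in range(5)) for v in range(1, 31)]

def xor(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

# d >= 3: no zero column, and no two columns are equal.
assert all(any(c) for c in cols)
assert len(set(cols)) == len(cols)

# d == 3 exactly: the columns for the values 1, 2, 3 are dependent (1 XOR 2 = 3),
# so three columns of H' sum to zero.
assert xor(cols[0], cols[1]) == cols[2]

# Extending this [30, 25, 3] code with an overall parity bit gives [31, 25, 4].
print(len(cols), 30 - 5)  # 30 columns, k = 25
```

The distinct-nonzero-columns condition is exactly the statement that no one or two columns are linearly dependent.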
In fact, for most of these you might ask: how do I know this really, absolutely maximizes K? Maybe there is some other magical construction that does better. One has to prove such things. The best way is that people maintain tables: there is a person called N. J. A. Sloane who maintains a table of the maximum K possible for a given N and given D. I believe for N equals 31 and D equals four it is 25—I think so; somebody might want to check that. People have worked on it and figured out the best possible values, and proving that requires some work. So the next thing we are going to take up is a couple of techniques for looking at the relationships between N, K and D. Obviously they are interrelated; one can easily see that. D measures how far apart two codewords are, so think again in terms of the geometric view—visualize it. Your stars are the codewords, and they have to be at least some distance apart, which means the spheres around the stars should not overlap. All these spheres have to be placed in this big space without overlapping, and that gives a nice bound. If you keep increasing T, the error-correcting capability, the number of codewords has to go down; you cannot keep putting in as many codewords as you want and insist on a very large T. We will use these ideas to come up with some simple bounds on N, K and D, and we will see how good our code is. The first bound is called the Hamming bound, or the sphere packing bound; we will call it the Hamming bound. It is based on this sphere idea. Take an (N, K) code; the whole space is {0, 1}^N, and all your vectors live there.
Then you have your codewords, and you draw a sphere of radius T around each one. The stars are the codewords of an (N, K, D) code. How many stars are there? Two to the power K. And how many vectors are there inside each sphere of radius T, where T is the floor of (D minus one) over two? The codeword itself is there; then the vectors at distance one from the codeword—how many of those? N choose one. We did this computation once before; I want to remind you of it. Then the vectors at distance two, N choose two, and so on, until distance T: N choose T. So the number of vectors in each sphere is the sum of N choose i, for i from zero to T. That is the first ingredient. Now I am going to write down two quantities, and I want you to put an inequality between them: multiply this sphere volume by two to the power K, and compare it with two to the power N. What happens? There should be a less-than-or-equal-to, because the spheres are non-overlapping: if you count the total number of vectors in all the spheres and add them up, you get a number that is at most two to the power N. There you go—that is the first bound relating N, K and D. So for N equals 31 and D equals four, I want you to evaluate this and figure out how close we were with the previous construction. For D equals four, what is T? One.
Try to evaluate it and see whether the number we got satisfies this bound—it has to—and then see how close we are. The bound works out to K less than or equal to 26, and we got K equals 25, so we are pretty close. Maybe 26 is possible; who knows? This is how you use a bound: you come up with some code with some D, then you go back to the bound, see what it says for the same parameters, and check how close your K is. If you are close enough, you can feel happy about yourself—you have done something quite good. This is how you should approach most things in life: design something, come up with a bound that tells you how good you could possibly be, and see how close you are. So that is the first bound, the Hamming bound. The next bound, which is also very easy, is the Singleton bound. I am going to prove it using a combinatorial argument; if you have not seen it before, try to keep up. There is also a simple linear-algebra rank argument, but I want to use this one because it introduces a useful tool: something called the pigeonhole principle. It sounds very fancy, but it is a very simple combinatorial rule, and if you have not seen it before, it is a good thing to see. Suppose somebody tells me they have an (N, K, D) code. What am I going to do? I am going to write down all the codewords of the code, one below the other. Say my codewords are c1, c2, and so on—how many are there? Two to the power K. I will write c1 as c10, c11, up to c1,N-1, and likewise for the rest.
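The Hamming bound evaluation above can be scripted: find the largest K with 2^K times the sphere volume at most 2^N. A minimal sketch:

```python
from math import comb

def hamming_bound_max_k(n, d):
    # Largest k such that 2^k * sum_{i=0}^{t} C(n, i) <= 2^n, t = floor((d-1)/2).
    t = (d - 1) // 2
    volume = sum(comb(n, i) for i in range(t + 1))  # vectors per sphere of radius t
    k = 0
    while (2 ** (k + 1)) * volume <= 2 ** n:
        k += 1
    return k

print(hamming_bound_max_k(31, 4))  # 26: our [31, 25, 4] code is one short of the bound
print(hamming_bound_max_k(7, 3))   # 4: the [7, 4, 3] Hamming code meets the bound (perfect)
```

For N = 31, D = 4 we get T = 1 and a sphere volume of 32, so 2^K times 32 at most 2^31 gives K at most 26.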
So the next row is c20, c21, up to c2,N-1, and so on down to the last row, codeword number two to the K, from its bit zero to its bit N minus one. It seems like we are far from a bound, but we will get there eventually. This looks like a small list, but it is actually a very large list: all two to the K codewords of my code, one row each, with all their bits written out. Now I want to look at the first K minus one positions—note that I have chosen that number very carefully. Look at those K minus one columns. How many codewords do I have? Two to the power K. How many possible (K minus one)-bit vectors are there? Two to the power K minus one. So what must happen in these K minus one columns? There must be at least one repetition. That is the pigeonhole principle: if you have 10 pigeonholes and 11 letters to put in them, at least one hole must hold two letters; otherwise it will not work. Likewise here: you have only two to the power K minus one possibilities for the first K minus one bits, but two to the power K rows, so there must be at least two codewords that agree in the first K minus one positions. Do you see that? Two to the K rows, but only two to the K minus one possible prefixes. That implies there exist two codewords c_i and c_j, with i not equal to j, that agree in the first K minus one positions. So what can the distance between c_i and c_j be? If they agree in the first K minus one positions, they can disagree in at most the remaining N minus K plus one positions. So the minimum distance of the code must be less than or equal to N minus K plus one. That is the logic.
D is at most the distance between c_i and c_j, which, as we just saw by the pigeonhole principle, is at most N minus K plus one. So D is less than or equal to N minus K plus one; if you want to write it differently, bring K to the other side: K is less than or equal to N minus D plus one. This is called the Singleton bound, a very famous bound. Codes that achieve equality in this bound are called MDS codes—MDS expands as maximum distance separable. MDS codes satisfy D equals N minus K plus one. It is a very special bound; it has been studied a lot, and, for instance, the Reed-Solomon codes meet it, so one can construct such codes—it is not as if it is a very difficult bound to meet. Compare with the Hamming bound: codes that meet the Hamming bound are called perfect codes—I did not elaborate on that—and there are very few of them; one can list out all the perfect codes. The Hamming codes, for example, meet the Hamming bound; all of them do. All right, so if you want examples of MDS codes: what is the only (N, 1, N) code? The repetition code. That is the only one, and it satisfies the Singleton bound with equality—calculate it if you want: D equals N minus 1 plus 1, which equals N. So it is MDS. What is the dual of this code? The answer already came, but what is the dual of the repetition code, and what are its parameters? The block length is N; what will K be for the dual?
N minus one, and what will be the minimum distance? How do you describe the dual of a repetition code? The dual is all vectors whose dot product with every original codeword is zero. What's the only nonzero codeword in the repetition code? All ones. So what belongs to the dual? Any vector with even parity: all the bits have to add to zero, right? So all vectors of even weight belong to this code, okay? This is the dual of the repetition code, and it's called the even weight code, okay? One can check that this is also MDS. In fact, one can prove that in general: if a code is MDS, its dual is also MDS. It seems like a surprising property, but one can prove it. In fact, one can also prove that there are no nontrivial binary MDS codes. What does that mean? These two are the only ones, except of course for the other trivial one, the (N, N, 1) code. What is the (N, N, 1) code? It's just the message itself: you don't add any parity, you get the whole space, and the minimum distance is one, okay? That's also MDS. Other than these three, there are no binary MDS codes. Seems like a very difficult thing to prove, right? One can prove it; it's not too difficult, okay? But you can, for instance, go through and check the code that we constructed. What was that code? (31, 25, 4), right? What does the Singleton bound say for it? You'll get K is less than or equal to 28. But you know anyway that K equals 28 is not achievable, because of that result I told you: there are no binary MDS codes, okay? So K less than or equal to 27 is the only real bound here, okay? So you can see this bound is not very tight here, okay? But it is important, and we'll come back and revisit it later. Okay, so the last thing I want to do: so far we've seen a couple of results. These are bounds, okay?
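The claims above are easy to check on small examples. A quick sketch, using a small N of my own choosing: the repetition code meets the Singleton bound with equality, its dual (the even-weight code) is an (N, N minus 1, 2) code and is MDS too, and for the (31, 25, 4) code from the lecture the bound is loose.

```python
from itertools import product

N = 5  # small illustrative length

# Repetition code: (N, 1, N), and d = N = N - K + 1 with K = 1.  MDS.
assert N == N - 1 + 1

# Even-weight code: all length-N binary vectors of even weight.
even_weight = [v for v in product([0, 1], repeat=N) if sum(v) % 2 == 0]
assert len(even_weight) == 2 ** (N - 1)              # dimension K = N - 1
d_dual = min(sum(v) for v in even_weight if any(v))  # min nonzero weight
assert d_dual == 2 == N - (N - 1) + 1                # d = N - K + 1: MDS

# The (31, 25, 4) code from the lecture: Singleton only says K <= 28,
# so with K = 25 the bound is not tight there.
assert 25 <= 31 - 4 + 1
print("even-weight code has d =", d_dual)
```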
If you have an (N, K, D) code, these two relationships have to be satisfied; they cannot be violated, okay? The next result we'll show is kind of a positive result, okay? It's not something that has to be satisfied; it's not a rule. It's something that is possible. We'll find a relationship between N, K and D such that whenever three numbers satisfy that relationship, an (N, K, D) code is guaranteed to exist. I won't tell you how to construct it, but I know such an (N, K, D) code exists, okay? It's a positive result, okay? But it's still called a bound, okay? For some reason it's called a bound, but don't get confused by it. Even if N, K, D don't satisfy that bound, what can happen? The code might still exist. It's only an existence result, okay? So you should remember that. It's called the Gilbert-Varshamov bound. It's a very, very famous bound, and the argument behind it is also very interesting. Like I said, this is not really a bound in the usual sense; it connects to what are called very good codes, but we won't worry about that, and as far as we are concerned here it's only an existence result, okay? So it starts by looking at how we can construct a parity check matrix column by column. Suppose you want an (N, K, D) code. If you want minimum distance D, what should happen in terms of the parity check matrix? No set of D minus one or fewer columns should add to zero, right, in the binary case? They should not be linearly dependent, okay? So can I construct the matrix column after column, making sure that that never happens? If you look at that very carefully, you'll get the Gilbert-Varshamov bound, shortened as the GV bound, okay? So let me try to see how that works out. Suppose I have R rows, okay? I fix the number of rows. And suppose my target minimum distance is D.
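The criterion just stated, that minimum distance D requires no D minus one or fewer columns of H adding to zero, can be brute-forced for small codes. A minimal sketch, where the H below is the familiar parity check matrix of the (7, 4) Hamming code, used here purely as a test case:

```python
from itertools import combinations

# Parity check matrix of the (7, 4) Hamming code: the columns are the
# seven nonzero 3-bit vectors.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]
cols = list(zip(*H))

def no_dependent_subset(cols, t):
    """True iff no set of t or fewer columns sums to zero over GF(2)."""
    for size in range(1, t + 1):
        for subset in combinations(cols, size):
            if all(sum(bits) % 2 == 0 for bits in zip(*subset)):
                return False
    return True

# d >= 3: no 2 or fewer columns add to zero (columns distinct, nonzero).
assert no_dependent_subset(cols, 2)
# d is exactly 3: some set of 3 columns does add to zero.
assert not no_dependent_subset(cols, 3)
print("Hamming (7, 4) has minimum distance 3")
```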
Okay, by the way, minimum distance is also called D min by many people, okay? So, suppose I have R rows and my target minimum distance is D, okay? Suppose by some magic I've already constructed N minus one columns. The bound is obtained by asking: when can I add the Nth column? Assume these N minus one columns already satisfy the minimum distance criterion; what is that? No set of D minus one or fewer columns among these N minus one adds to zero. That is already satisfied. Now, when can an Nth column be added? If you answer this question, you'll get the GV bound, okay? So I'm going to write it down and then we'll see how it all works out, okay? One column that I can never add is what? The all-zeros column, right? So I'll keep accounting for the columns that cannot be added, okay? And if that count happens to be less than what? Two power R, then there is always a column left that I can add, you see that? So that's my logic, okay? Given that I've added N minus one columns already, I'll count the number of columns that have been eliminated, okay? Of course the all-zeros column is eliminated. And then what else? All of the N minus one existing columns themselves: they've been eliminated, I can't repeat them, because a repeated column would give me two columns adding to zero. Then, and here I'm going to do an over-count, okay, I'm going to say N minus one choose two, the sums of any two of the chosen columns, okay? So that's the all-zeros column, each existing column, and next, I should not add the sum of any two columns, right?
If I choose two columns from these N minus one and add them, that sum should not be added as the new column, okay? So I should include those in the count. But why do I say that putting N minus one choose two here is an over-count? Maybe, the way the sums work out, there are repetitions among them, and maybe those repetitions are actually okay for me, right? For instance, if my target minimum distance is four, two different pairs of columns can have the same sum without my minimum distance being violated. But I don't want to get into the nitty-gritty of calculating all that. I'll simply avoid all N minus one choose two sums, because I only want an existence result, not an exact count, okay? But keep this point in mind; this over-count is very important. At the root of this over-count lies one of the most fascinating unsolved problems in coding theory, okay? So, sum of two columns. And then I keep doing this: I'll say N minus one choose three, the sums of three columns, and avoid those too, okay? Here again you see how this can be suboptimal: maybe one sum of three columns equals another sum of three columns, and if my target minimum distance is only six, I wouldn't care, right? So keep that in mind, but I'll count it that way, okay? Okay, it looks like I'm losing battery, so I'll have to save, and we'll suspend here.
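The count being built up here can be finished off in code. Continuing the same pattern up to sums of D minus two columns (the recording cuts off before the count is completed, so this completion is my reading of the standard GV argument, not something stated in the lecture), an Nth column can always be added as long as 1 + C(N-1, 1) + C(N-1, 2) + ... + C(N-1, D-2) is less than 2 power R. A minimal sketch with illustrative numbers:

```python
from math import comb

def can_add_column(n, r, d):
    """GV counting argument: with r parity rows and target distance d,
    an n-th column can be added to a valid set of n-1 columns whenever
    the (over-counted) forbidden set is smaller than 2^r.
    The i = 0 term, comb(n-1, 0) = 1, accounts for the all-zeros column."""
    forbidden = sum(comb(n - 1, i) for i in range(0, d - 1))
    return forbidden < 2 ** r

# Illustrative numbers (my own choice): r = 10 parity checks, target d = 5.
# Grow the code greedily until the counting argument no longer guarantees
# another column.
n = 2
while can_add_column(n, 10, 5):
    n += 1
print("GV guarantees a code of length", n - 1, "with r = 10, d = 5")
```

With these numbers the forbidden count at length 19 is 1 + 19 + 171 + 969 = 1160, which exceeds 2^10 = 1024, so the greedy construction is guaranteed only up to length 19.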