Okay, so this is lecture 31, and we are going to continue from where we left off. Hopefully all of you remember this picture. This is the trellis on which we have been decoding the four-state convolutional code, the (1 + D + D^2, 1 + D^2) code, and we came up to the third stage, where something interesting started to happen. I was looking at the first state, state 0, after stage 3. If you look at that, you see there are two possibilities now. For the first time, you are getting two paths coming into state 0: one from the previous state 0, another from the previous state 1. So now you ask the question: from state 0 before stage 1 to state 0 after stage 3, those are the paths I am concerned about, which is the path of minimum metric? If you ask that question, then all you have to worry about is this: out of the paths that came through state 0 and state 1 in the previous stage, you pick the path of minimum metric. Then you go to state 0, and you see it is a really simple problem, because each candidate path extends by only one branch. So you look at the minimum metric path at state 0, which has metric 3.24, and the minimum metric path at state 1, which has metric 7.64. From each of these two paths, you try to join a branch to come to state 0: you can either join the top branch or the branch below. You see that if you join the top branch to the previous state 0, you get a total metric of 4.3, which is smaller than the metric you would get if you chose the other branch, so you pick that path. Then you use the same logic for every single state; you have to do it for every single state, and that is quite important.
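The compare-and-select step just described can be sketched in a few lines of Python. This is a minimal sketch, not the exact board computation: the branch metric values (1.06 and 0.5) are made-up numbers chosen only so that the winning total matches the 4.3 from the lecture.

```python
# Add-compare-select (ACS) step of the Viterbi algorithm, sketched for one
# trellis stage. For each new state, keep the incoming path of minimum metric.

def acs(state_metrics, branch_metrics, predecessors):
    """state_metrics   : dict state -> accumulated metric after previous stage
    branch_metrics  : dict (prev_state, state) -> metric of connecting branch
    predecessors    : dict state -> previous states with a branch into it
    Returns the new state metrics and the surviving previous state per state."""
    new_metrics, survivors = {}, {}
    for s, prevs in predecessors.items():
        # compare the candidate path metrics coming into state s ...
        candidates = [(state_metrics[p] + branch_metrics[(p, s)], p) for p in prevs]
        # ... and select the minimum: that path survives
        new_metrics[s], survivors[s] = min(candidates)
    return new_metrics, survivors

# Illustration in the spirit of the lecture: state 0 is reached from
# previous states 0 (metric 3.24) and 1 (metric 7.64).
metrics = {0: 3.24, 1: 7.64}
branches = {(0, 0): 1.06, (1, 0): 0.5}   # illustrative branch metric values
new_metrics, surv = acs(metrics, branches, {0: [0, 1]})
```

Here `new_metrics[0]` comes out to 4.3 via the top branch, so state 0's survivor extends the previous state-0 path, exactly the decision made on the board.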
So let us go to the other branches. I do not know if someone has been industrious enough to go back and work out the branch metrics for the other branches; if you have not done it, please do it now, for the received pair 0.5, 0.1. Tell me what the branch metrics are for the other branches. You have to compute only two branch metrics, right: one for (plus one, minus one) and the other for (minus one, plus one). You compute those two, and then you can just fill them in at each of these places. What is the metric for the topmost branch here? 3.06, that is fine. What about the one below? 1.46, yes. The next one is also 1.46, then 3.06. Do you get that? Then you repeat the same process for the other two states. So which one would you pick? This one will clearly be the winner, and for this state, again, this one will be the winner. So we come up to this stage and we will stop here; I am going to ask a few questions now. If you have to enumerate all the possible paths starting from stage 0 up to the end of stage 3, how many different paths will you get? One answer is 4. Any other answer?
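The two branch metrics worked out above can be checked directly. This sketch assumes the metric in use is the squared Euclidean distance between the received pair and the BPSK symbols labelling the branch, which is consistent with the numbers on the board.

```python
# Branch metric as squared Euclidean distance between the received pair
# and the (+/-1, +/-1) BPSK symbol pair labelling a branch.

def branch_metric(r, symbols):
    return sum((ri - si) ** 2 for ri, si in zip(r, symbols))

# Received pair (0.5, 0.1) against the two symbol pairs from the lecture:
m_pm = branch_metric((0.5, 0.1), (+1, -1))   # -> 1.46
m_mp = branch_metric((0.5, 0.1), (-1, +1))   # -> 3.06
```

Every branch in the stage reuses one of these two values, which is why only two computations are needed.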
If you have to enumerate all the paths, forget about metrics, all the valid paths, how many paths do you have? I do not care about minimum metric or maximum metric, just give me the total number of paths. How do you compute that? Every path corresponds to a codeword, so all you have to do is compute the total number of codewords, and the total number of codewords equals the total number of possible messages. So all you have to do is find out how many different messages you can have up to stage 3. There are only 3 message bits, so 8 possibilities: you should have 8 different valid paths, corresponding to valid codewords, at the end of stage 3. What I have actually done is kept only the paths in red ending in each of those 4 states, and there are only 4 of them; I have already eliminated the other 4 paths, and I never have to consider them again. Why is that true? The same logic once again: the path of minimum metric has to flow through one of these 4 states after stage 3, and it is enough if I look at the minimum metric path coming into each of these states, because if the path of minimum metric goes through a particular state, I can split it into two. That is the logic, and you use the same logic over and over again, and you get very simple results out of it. So you have only 4 paths. Let us do one more stage, just to go through this rigorous exercise, and you will see one more interesting thing happening. So what are the metrics here? 1.64 plus 3.06, which is 4.7; this one here is 6.7 and this one here is 3.1. So you write down the metrics; those
metrics are called the state metrics after stage 3, and so on. So do one more stage and we will stop. Sorry, what is the question? You want the received vector values? I will just pick the easiest one: 0.5, minus 0.5. So the branch metrics: 2.5 for the top branch, everybody agrees? What about the bottom? 0.5. And then 4.5 and 0.5: the topmost here is 4.5, then 0.5, 0.5, 4.5. Am I right? Go ahead and do the computations. So now repeat the same computation once again. After stage 4, how many different total paths do I have? 16. How many paths am I considering because of this algorithm? Only 4. That 4 remains 4 as you keep going, while the total number of valid paths keeps getting multiplied by 2. So while the total number of valid paths grows exponentially in K, the message length, the number of paths you actually keep after a particular stage is only 2^mu, where mu is the memory; it does not keep increasing beyond that. You will eventually have to store the survivors across all K stages as well, so the complexity is roughly K times 2^mu. It is good to have a name for these paths: the paths that you are actually keeping are called survivor paths. These are the survivor paths after a particular stage. So suppose I have to list the survivor paths after stage 4: I want you to write down the survivor paths after stage 4. How will we write the paths? We will write each one as a sequence of states; that is the easiest thing to do. So I want you to write down the survivor paths after stage 4. What is the survivor
path ending at state 0? I need a notation for this, so I will say SP(4, 0): after the fourth stage, the survivor path ending at state 0. So what is this? 0 0 0 0 0. I should have 5 states defining my path, am I right? It is the top path, which just goes all the way through the all-zero states. Likewise, write down SP(4, 1), SP(4, 2) and SP(4, 3). The easiest way to find a survivor path is to backtrack. If you start at stage 0 and go forward, you will get confused; instead, backtrack from the end state to the previous state, because from a particular state the surviving previous state is always unique. Going forward it is not unique, because both branches out of a state can lie on survivor paths, but going backward that cannot happen, so it is always unique. So you can backtrack, or if you can see it sufficiently in advance, you can just read it off. SP(4, 1) is going to be 0 0 2 3 1, do you agree? What about SP(4, 2)? 0 0 0 0 2. What about SP(4, 3)? 0 0 0 2 3. So maybe you stare at these and notice something. This is only one example, nothing spectacular, but you can already see one thing happening. What is happening to all the survivor paths? They are beginning to merge in the initial stages. That is an interesting thing that will always happen; one can imagine why it should happen, and you can see it happening in this example. All four survivor paths are the same in the first two states, so they have merged there; in fact, many of them are the same in the first three states. This is something that will always happen in the Viterbi algorithm: as you keep going further and further down the trellis, you will see all the survivor paths merge up to a point. So what does it mean once the survivor paths have merged? Yes, your decoding is complete up to there. Do you agree?
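The backtracking just described can be sketched as follows. The survivor table here is illustrative, filled in only so that it is consistent with the lecture's SP(4, 1) = 0 0 2 3 1; it is not taken from the board.

```python
# Backtracking a survivor path: from any state the surviving previous state
# is unique, so walk backwards one stage at a time and then reverse.

def backtrack(survivors, end_state):
    """Recover the state sequence ending in end_state after the last stage.

    survivors is a list with one dict per stage:
    survivors[i][s] = surviving previous state of s after stage i+1."""
    path = [end_state]
    for stage in reversed(survivors):
        path.append(stage[path[-1]])
    path.reverse()
    return path

# Illustrative survivor entries, only the states this path visits:
survivors = [{0: 0}, {2: 0}, {3: 2}, {1: 3}]
```

Calling `backtrack(survivors, 1)` walks 1 back to 3, 3 back to 2, 2 back to 0, 0 back to 0, and returns the sequence 0 0 2 3 1.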
Suppose at some point you observe that your survivor paths are exactly the same up to some previous stage; then up to that point your decoding is over, because whatever happens later, you will definitely come back to that as part of your chosen path. So if your survivor paths have merged up to a certain point, you can happily conclude your decoding up to that point. In fact, what happens in practice is something suboptimal: you observe that typically, some 10 or so stages back, the survivor paths have merged, so you do not actually check. You just truncate there and declare your decisions, whatever happens later. It is a suboptimal thing you do in practice, but it is usually very, very good, because the survivor paths will typically merge once you go a certain distance down the trellis. So that is something interesting which helps you in your decoding. Now we have to complete the decoding. At any point you are ending up with four paths, but finally I want only one path, not four. You will see that happen when you use termination. Suppose I say I am going to terminate after this. What I am going to do is cut and paste this trellis into the next picture and move it slightly to the left; that is the nice thing about doing it with a computer, you can do all kinds of fancy stuff like this. So I have moved it to the left, and I am going to try to terminate now: do this for a couple more stages, and you will see that finally the decoder just outputs one valid codeword. So let us do that. We have to do one more stage, but I am going to terminate now. What does it mean when I terminate? There are no more message bits; the message bit is forced to 0.
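The merge check behind that practical truncation can be sketched in a few lines, using the four survivor paths after stage 4 from the example (each written out as its full sequence of five states):

```python
# Check the merge phenomenon: if all survivor paths agree on a common prefix
# of states, decoding is final up to that depth.

def merged_prefix_len(paths):
    """Length of the common state-prefix shared by all survivor paths."""
    n = 0
    for states in zip(*paths):
        if len(set(states)) != 1:   # paths disagree at this depth: stop
            break
        n += 1
    return n

# The four survivor paths after stage 4:
survivor_paths = [
    [0, 0, 0, 0, 0],  # SP(4, 0)
    [0, 0, 2, 3, 1],  # SP(4, 1)
    [0, 0, 0, 0, 2],  # SP(4, 2)
    [0, 0, 0, 2, 3],  # SP(4, 3)
]
```

For these four paths the common prefix has length 2: all of them agree on the first two states, so decoding up to that depth is already final.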
I know the message bits, but I do not know the actual codeword bits, so I do not know how the received values actually came out. When I put in 0s, what are the next states I can be in? 0 and 1. Then finally, when I put in one more 0, I will be in state 0. So I am going to give you some more received values; I will just pick some random ones: 0.1, 0.7 and minus 0.2, minus 0.8. Go ahead and do the complete decoding; you will have to alternate state metrics, branch metrics, state metrics, branch metrics, like that. I am going to do it on the board, but you do not have to do the same thing. What are the branch metrics for the first terminating stage? 0.9 for the top one, yes. The next one is 4.1, and the ones below: 1.3 and 3.7. So the state metrics become 7.7 for the top and 8.1. And what about the branch metrics here, at the last stage? 0.68 and 4.68. Clearly you would pick this one and make the final metric 8.7. This is just an arbitrarily cooked-up example; maybe in real life it would not be this crazy. So finally, what do you do? What is your output path, your output codeword? You start from the last state and backtrack. How many stages do we have? 1, 2, 3, 4, 5, 6. So it will be SP(6, 0). What is SP(6, 0)? 0, 0, 0, 0, 2, 1, 0. So the output path is SP(6, 0), which in our case is 0, 0, 0, 0, 2, 1, 0. So what is the message? You do not need to go back to the trellis and look at the input; this is a feed-forward encoder. The input at a particular stage is always the first bit of the next state. So all you have to do is extract the first bit from each next state, ignoring the first state, and you will get your u-hat: it is 0, 0, 0, 1, 0, 0.
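That read-off of the message from the decoded state sequence can be sketched directly. The sketch assumes the state numbering used in the lecture's trellis, with the most recent input bit as the most significant bit of the state, and an encoder memory of mu = 2:

```python
# Recover u-hat from the decoded state sequence of a feed-forward encoder:
# the input at each stage is the first (most significant) bit of the next
# state, and the last mu inputs are the termination zeros.

MU = 2  # encoder memory (4-state code)

def states_to_message(path):
    """Extract the message bits from a decoded state sequence."""
    u_hat = [(s >> (MU - 1)) & 1 for s in path[1:]]  # MSB of each next state
    return u_hat[:-MU]                                # drop termination zeros

# The decoded path SP(6, 0) from the example:
message = states_to_message([0, 0, 0, 0, 2, 1, 0])   # -> [0, 0, 0, 1]
```

The full u-hat here is 0, 0, 0, 1, 0, 0, and dropping the two termination zeros leaves the message 0, 0, 0, 1.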
The last two bits are not message; they are the termination zeros, the zeros that drive you back to the all-zero state. So this is your u-hat. If your encoder had feedback, some crazy shift-register arrangement with feedback, then you could not do this very simple thing; you would have to go back to some table and figure it out. But if it is a feed-forward system, then the input is very, very easy and simple to figure out: the next state pretty much holds the input. All right, any questions on this process? It is a very simple process. Any observations, any comments? So the next thing I am going to do, just for completeness' sake, is introduce some notation and describe what happens at stage i in slightly abstract terms. If you understood the algorithm, you do not need this, but just to put everything together it is very useful. Now, there was a question: how do you compute the probability of error for Viterbi decoding? In general, for soft decoders, ML decoders, MAP decoders and so on, you typically just simulate, and you are happy with simulation; today's computers are fast enough. But at high SNR, for Viterbi decoding, you can estimate, and the estimate is very good, using union bound techniques. It involves finding the dominant error paths: basically, you find the codewords closest to the all-zero codeword, for instance, figure out the probability of deciding in favour of each, and then take a union bound over those events. The exact computation is tough, but union bounds are possible, and at high SNR, when you come down to error rates of 10^-6 and so on, those estimates are quite reasonably good; at low SNR the bound can fail. So, this is what you do at stage i. What do you have when you come to stage i?
When you reach stage i, you will have the received values corresponding to that stage; using those, you compute your branch metrics. So you have received values r0(i), r1(i); I will stick to just two. If you have a rate 1/m code, you will have m of these; it is not a problem. The first step is to compute the branch metrics, which I will write BM_i(s, t): s goes from 0, 1, 2 up to 2^mu minus 1, and for each s, t takes two values, corresponding to the two branches out of s in the trellis. For a simple feed-forward encoder, given s, how will you find the two t's? You shift and then put in a 0 or a 1. So it is very easy to do: given any state as an integer, find its mu-bit binary representation, shift it right by 1, and put in a 0 to get one value of t, and a 1 to get the other. In this computation you use the output corresponding to that branch. You can also optimize this computation and do it very fast. So once you have the branch metrics ready, you have the whole thing set up. Another thing you should have before you come to stage i is the result of processing all the previous stages; do not just jump straight to stage i, process all the previous stages first. Once you are done with stage i minus 1, what will you have? Two things: the survivor paths and the state metrics. So I will write SP_{i-1}(s) for the survivor path ending at state s, and SM_{i-1}(s) for the state metric. These two are available from the previous computations, and r, the received value, comes from the channel.
So the whole point of the computation in stage i is to figure out SP_i(t) and SM_i(t). That is very easy to do; one can easily describe it. For t = 0, 1, up to 2^mu minus 1, you do the computation: you find SP_i(t) and SM_i(t). I say t because I already used s for the previous state. For a particular t, you look at the two possible previous states s from which you could have come. How will you do that in terms of bit computation? If I give you a particular t, what are the two states from which it could have come? You shift left by one and then put a 0 or a 1 in at the other end. So that is a very simple computation you can do. If you think in terms of hardware addressing and so on, all these things are easy: a simple shift tells you what index the value will be sitting at. All of this is useful, though in today's MATLAB world you probably do not even worry about these things; but if you actually want your code to run fast, you have to worry about them. Anyway, that is what you do in stage i. Once you reach stage k plus mu, SP_{k+mu}(0) is your final output, the ML path. So that is it; maybe I will write that down somewhere here. I am out of space, so I will put a box here and write it: the final ML path is SP_{k+mu}(0), the survivor path at state 0 after k plus mu stages, and only at state 0. Now, if you have a rate 1/m encoder, the only thing that changes is that you will have m different r's and your branch metric computation will change; nothing else changes, everything else remains the same. And if I have two input streams and several output streams, then what will happen?
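Before answering that, the bit-shift bookkeeping just described can be sketched. The bit ordering, with the newest input bit in the MSB of the state, is an assumption consistent with the earlier examples on the board:

```python
# Shift tricks for a feed-forward encoder with memory MU (2**MU states):
# the next state drops the oldest bit and pushes the new input bit in at
# the top; the two possible previous states are found by the reverse shift.

MU = 2  # memory; here 4 states

def next_state(s, u):
    """Shift right by one and put the new input bit u in the MSB."""
    return (u << (MU - 1)) | (s >> 1)

def prev_states(t):
    """The two states that can transition into t: shift left, append 0 or 1."""
    base = (t & ((1 << (MU - 1)) - 1)) << 1
    return base | 0, base | 1
```

For instance, `next_state(0, 1)` gives state 2, and `prev_states(0)` gives states 0 and 1, matching the trellis picture where state 0 receives branches from previous states 0 and 1.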
The number of branches going out of each state will increase: instead of two branches you will have four. But again, the steps are the same. You will have four branches going out of each state, four branches coming into each state, each branch corresponding to different outputs, and all that. So you compute branch metrics for all the branches and do the same thing: at each state, you look at the branch which gives you the lowest possible metric for the survivor path, pick that, and ignore all the rest. Yes, for any number of input streams you can do this. Even if you have a rate m/n encoder, how many survivor paths will you have after stage i? 2^mu. It is always 2^mu, the number of states; there is nothing more. So the number of states has to be quite small if you want to decode. Yes, the branch metric computation grows a little, but if you are smart about it, it will not increase very much: you only have so many different output symbols possible at a particular stage, and you have to do only that computation. And for BPSK and the like, you do not even have to compute (r minus s) squared; it is enough to compute the correlation and maximize it instead of minimizing the distance. There are lots of ways of simplifying the computation. Okay, so that is fine. Now, if you want some systems which use these kinds of decoders: until recently, maybe about 10 years or so ago, people were using convolutional codes in deep-space communication, convolutional codes and Reed-Solomon codes in a concatenated sequence; I will talk about this a little bit more later. So convolutional codes were being used in deep space. They are now slowly getting replaced by other codes, so for space communication you will probably see turbo codes or some such newer codes soon enough.
I think some of the satellites already use turbo codes, if I am not wrong. Another place where convolutional codes were used is telephone-line modems. I was about to say hard disks, but that is not a place where they are used for coding: the reason I got confused is that hard drives use the Viterbi algorithm for something called equalization. If you have done a digital communication course, you will have seen that the Viterbi algorithm can also be used for optimal equalization, and that is used in several places. So I got confused, sorry. But for telephone-line modems, a long time back, they were using convolutional codes; I do not think today's DSL modems use them, though I am not sure. Convolutional codes do have a place in every wireless standard out there. Maybe there are people here who know wireless standards better than I do, but any one of them you take, 3GPP, LTE, any of those, always has a convolutional code. I am not sure what rate it is, maybe rate 1/3 or something, and I believe some standards even have 64-state convolutional codes. So these are used in your cell phones; when you make a call, for instance, they are used a lot. In terms of the state of the art, one can implement 64-state Viterbi decoders at very high speeds; I have seen a lot of 64-state Viterbi decoders implemented very fast. But beyond that it is tough today: if you go beyond 64 states, to 128 states or more, it becomes too nasty, too much memory moving around, so people usually never go beyond 64 states in implementations.
And we will see later on that when used in what is called a turbo configuration, these codes become really powerful, and for that people use only 8-state convolutional codes: take an 8-state code and use it in a turbo configuration. There is some other recent work, but convolutional codes by themselves are usually considered old in the literature; people do not work very actively on new problems there, it is all considered reasonably mature. All right. So let us wrap up the Viterbi algorithm with that and move on towards turbo codes. One ingredient we need before we get there is recursive convolutional encoders. So far we have not seen recursive, feedback convolutional encoders; now we will, again through examples. So far, what have we been seeing? We have been looking at generator matrices of the form (1 + D + D^2, 1 + D^2). What does this ultimately mean? It gives you a relationship between the output at time n and the inputs at times n, n-1 and n-2: some kind of difference equation between the input and the output, very similar to filtering. When implemented, it looks very much like DSP filtering. So one can also imagine a feedback-type filter, where delayed outputs are also used. So far we have never used delayed outputs, only delayed inputs; it was a purely feed-forward system. But you can add feedback and use delayed outputs as well. In fact, you can make a small modification to this G(D) and get what is called a systematic, recursive (feedback) version of this encoder. What I am going to do is divide each term by 1 + D + D^2. Now, you should worry about what it means to divide, and where this division suddenly comes from.
So far we had only polynomials in D; now we will have rational functions in D, with a polynomial numerator and a polynomial denominator. All of this has a proper theoretical framework; if you are interested, I can give you references for looking it up. But we will just do it in an approximate way, to give you a feel for how it works out. So I will convert to what I will call a systematic form by dividing throughout by 1 + D + D^2. What you get as a result is G_s(D) = [1, (1 + D^2)/(1 + D + D^2)]. I will give you interpretations for it, but before that, a brief remark about systematic encoders. Systematic is always useful. Previously, our encoders were not systematic: the message did not appear by itself in the codeword. There is some advantage in having a systematic encoder. You may ask why, because on the decoder side we never really suffered from the lack of systematic form: even though the encoder was not systematic, we could quickly figure out what the message was, so there was really no extra complexity as far as the message was concerned. But that is not the main issue; there can be other reasons for wanting systematic form. For instance, your input message might have certain statistics which you like, maybe equiprobable 0s and 1s, and if you do not retain the message in the codeword, maybe you lose all that. So there are advantages to having systematic encoders, and that is one thing this form readily gives you. The disadvantage is that you are not yet sure what this new G_s(D) means. Previously, we could easily implement 1 + D + D^2 and 1 + D^2 using D flip-flops and a shift register.
Can we do the same thing here? That is the first question you should ask. One can do it: it is just a difference equation with feedback from delayed outputs. But before we do that, let us look at what these equations mean. How do I encode? I multiply U(D) with G_s(D). If I do that, my V(D) is going to be U(D) times G_s(D), and V(D) has two components, V0(D) and V1(D). Doing the multiplication, what do I get? V0(D) = U(D). This I can very easily implement: my message goes by itself to v0, no problem. Look at the next part: V1(D) = (1 + D^2)/(1 + D + D^2) U(D). This looks a little scary in this form, but how do you make it look simple? Take the denominator and bring it to the left-hand side; just multiply throughout by the denominator. You see it readily simplifies: (1 + D + D^2) V1(D) = (1 + D^2) U(D). In this form, it is nothing but a difference equation. Let me write it in the time domain for you. Out of the D domain, that equation means v1(n) + v1(n-1) + v1(n-2) = u(n) + u(n-2). From this I can quickly compute v1(n): simply move the delayed terms to the other side. So at least conceptually, you can see that it can be very easily done. But from an implementation point of view, I do not want my number of states to be too large; I do not want too many D flip-flops. If I keep delaying the input with two D flip-flops and the output with two more, then instead of four states I will have sixteen, and I do not want that. So I have to be smart about my implementation, and this is actually quite standard.
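The time-domain recursion just derived can be computed directly, as a sanity check. This is the straightforward (not delay-minimal) sketch over GF(2), with all signals zero before the start:

```python
# Direct computation of the recursive parity equation over GF(2):
#   v1(n) = u(n) + u(n-2) + v1(n-1) + v1(n-2)   (mod 2)
# i.e. the delayed terms of (1+D+D^2) V1 = (1+D^2) U moved to the right side.

def recursive_parity(u):
    v1 = []
    for n in range(len(u)):
        un2  = u[n - 2]  if n >= 2 else 0   # u(n-2), zero before the start
        v1n1 = v1[n - 1] if n >= 1 else 0   # v1(n-1)
        v1n2 = v1[n - 2] if n >= 2 else 0   # v1(n-2)
        v1.append(u[n] ^ un2 ^ v1n1 ^ v1n2)
    return v1
```

Feeding in a single 1 followed by zeros gives the parity stream 1, 1, 1, 0, 1, 1, ... — unlike the feed-forward encoder, the impulse response goes on forever, which is exactly the "recursive" behaviour.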
Even for digital filters, people worry about reducing the number of delays and doing it properly; it is quite standard in linear systems theory how you implement these difference equations with the minimum number of delay elements. So I am not going to go into great detail here. Instead, I am going to throw you a puzzle before we wind up; maybe we will solve it today itself, we have time. I am going to draw a circuit which I claim implements exactly these same equations, and you have to come up with reasons: tell me whether it is true, or whether I am just pulling your leg. So I am going to draw it; let us see how it works out. Here is how we should do it. V1 is the only non-trivial part; v0 is just connected directly to the input. The question is, do you believe me or not? Maybe many of you believe me, but you have to check: does it even make sense? Happy, not happy? Some people are happy, some are not; think about it for a while. Yes, exactly: if you remember the filter implementations you might have studied in systems theory or DSP, when you have these IIR filters, there are different ways of implementing them, some very standard canonical forms. This, I believe, is one such form. I hope I am right; if I made a goof here it would look a little embarrassing, but I think it is correct. This circuit will give you what you want. The trick is to call the signal at this node w, and first figure out what w is. Then you delay w: if this is W(D), this is W delayed by 1. So this is w(n), this is w(n-1), this is w(n-2). And what is w(n)? w(n) = u(n) + w(n-1) + w(n-2). So what is W(D) in terms of U(D)? It is U(D) divided by 1 + D + D^2. And what is v1 now?
v1(n) = w(n) + w(n-2). So that is V1(D) = (1 + D^2) W(D) = (1 + D^2)/(1 + D + D^2) U(D). So once you realize that this w is the key, one can very easily show this is true; it comes straight from the difference equation the circuit implies. Write w(n), w(n-1), w(n-2) at the delay elements, write out the difference equation, and you will immediately get (1 + D + D^2) W(D) = U(D). Then you see V1(D) is simply (1 + D^2) W(D), which is (1 + D^2)/(1 + D + D^2) U(D). Typically in filters you will see gains multiplying the taps; here we have no gains, just XORs, all the coefficients are 1, it is all bits, no problem. So you get that. There is enormous benefit in doing this: if you end up delaying v1 separately, you get two more D flip-flops, which makes it a 16-state encoder, as opposed to the 4 states it should actually be. The non-systematic feed-forward encoder had only 4 states, and this one also has only 4 states, so you might as well do it this way. The next thing is to draw the trellis for this; I have not come to that yet. So the question is, how do we compare this encoder with the previous feed-forward encoder that we had? There are formal ways of properly defining the code itself: notice I never defined the convolutional code, the set of codewords. You have to define it properly, make sure the denominators go away, and then you can show that these two codes are pretty much equal: if you list out all the codewords for a particular length, you get the same set. Yes, do you have a question? So, what will happen? There will definitely be one difference between this encoder and the previous encoder.
The mapping from messages to codewords will not be the same, but the contention is that the list of codewords is exactly the same. You can show this; maybe we will look at it for small lengths and convince ourselves of the rest. One can draw a trellis here, but now we have to be very careful about this trellis, because it can be a little convoluted and confusing: your next states are not as easy to determine, your next states are different. So it is important to have practice drawing the trellis for this. There will also be some race-condition type situations which can confuse you: you say, okay, this is already zero, but this has become one, so that will change, so this will change. You should have an understanding of those kinds of things. This is very standard in digital systems; you must have seen it before: when you have feedback with D flip-flops, there are some standard arrangements, and you have to be careful about when the inputs change and when the outputs change. Those are things to worry about. So maybe in the next class, I will start out by drawing the trellis, and then give you some description of the differences between this encoder and the previous one. Those differences are important for us: when we make the transition to turbo codes, I will use some of those differences to justify why turbo codes work. That is what we will do. So, I will stop here for today.
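To close out the circuit puzzle from the board: the whole four-state systematic recursive encoder can be sketched with exactly two delay elements holding w(n-1) and w(n-2), as claimed. This is a sketch of the realization discussed above, not taken verbatim from the lecture:

```python
# Systematic recursive encoder for G_s(D) = [1, (1+D^2)/(1+D+D^2)],
# realized with two D flip-flops via the intermediate sequence w:
#   w(n)  = u(n) + w(n-1) + w(n-2)   (feedback, mod 2)
#   v0(n) = u(n)                      (systematic output)
#   v1(n) = w(n) + w(n-2)             (parity output)

def encode_rsc(u):
    w1 = w2 = 0             # the two D flip-flops: w(n-1), w(n-2)
    v0, v1 = [], []
    for un in u:
        w = un ^ w1 ^ w2    # feedback node
        v0.append(un)       # message passes straight through
        v1.append(w ^ w2)   # parity taps: w(n) + w(n-2)
        w1, w2 = w, w1      # shift the register
    return v0, v1
```

For the impulse input 1, 0, 0, 0, 0, 0 this produces v0 = 1, 0, 0, 0, 0, 0 and v1 = 1, 1, 1, 0, 1, 1, which agrees with the direct computation of the difference equation, so only two flip-flops (four states) really do suffice.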