So, this is lecture 29 and we are going to start a new topic today: convolutional codes. So relative to what we have been studying so far, if you look at the codes we saw so far, among classical codes one can say Reed-Solomon codes were the most important codes we saw, and then we saw LDPC codes. The way it worked out was, for Reed-Solomon codes you could prove very strong distance properties; you could show that they were MDS, met the Singleton bound and all that, but on the decoding side the only thing you could do is bounded distance decoding, right? You cannot even do ML decoding or anything. If the number of errors is within the error-correcting capability, one can decode; that was the Reed-Solomon part. There are a lot of nice constructions, encoding was easy, decoding was implementable, no problem, okay? So then we moved on to LDPC codes, where the construction was random, not deterministic or anything like that, but there was a decoder, the message passing decoder, which is reasonable to implement and in fact gives you capacity-approaching performance, right? We saw that LDPC codes give great performance, but the encoding is a little bit complicated, okay? So there were two differences, and the main thing is the decoding, right? The fact that you can do efficient soft decoding, which can give you all this great performance. Then the construction also plays a part in it, but the construction is essentially random, right? So there is not really anything great about the construction. So convolutional codes, in my opinion, fall somewhere in the middle, okay? The construction is deterministic, okay? So deterministic construction, with very, very simple encoders, okay? Some of the simplest encoders out there, next to repetition, okay?
One can say convolutional codes have some of the simplest encoding algorithms out there, okay? In fact the encoder is simply a shift register, a binary shift register with very small memory, like 4 or 5, things like that; there is absolutely nothing in that encoder, okay? And it turns out you can do soft ML decoding efficiently, okay? So you can implement the best possible decoder, the optimal decoder, which is the soft maximum likelihood decoder, exactly, in a very nice implementable way. It's a simple algorithm which will run for you and give you the ML decoding. So the performance, in many cases you'll see, will be slightly better than Reed-Solomon, one might say, but we saw some plots, you remember, right? So when you go to rates like 3/4 and all that, both of them are reasonably the same. I don't know if you remember this plot that I showed you some time back. So maybe we'll come back and see that: what the comparison is between convolutional codes and Reed-Solomon codes, for instance, okay? But since the encoding is deterministic and simple and all that, you won't get capacity-achieving performance, okay? It won't get you very close to capacity even though you can do optimal decoding, okay? So you can see, both are important; I mean you need to have good codes also, okay? It's not just enough to be able to optimally decode; a repetition code you can optimally decode, right? But it will not give you any coding gain, right? The coding gain is obtained by a combination of both. You need a reasonably good code and soft decoding, okay? So you need both, okay? And convolutional codes kind of lose out on one, okay? They're not great codes, but they have good decoders. And in practice, this gives you a good trade-off in many situations, okay? So you can imagine, soft ML decoding will definitely be complex, okay? But the encoder is really, really simple.
So if you have a situation like deep space communication, where there's a satellite far, far away, drawing all its energy from the sun or something like that, then what do you want that encoder to be? The encoder should be the simplest possible encoder, right? The decoder here on earth can be as complex as it wants, there's no problem. So in those kinds of situations, these codes are really useful, okay? And in many other situations like this, even your cell phone for that matter, one can imagine the encoding should be simple because it's running on battery, it's a very small device. Today you can put a lot of power in there, but still it's a small device; you probably want the encoding to be very simple. While the base station probably has power from the mains supply and can do a lot of complicated arithmetic; it can run complex decoders. In fact, convolutional codes sit in every one of your cell phones today, okay? So yeah, the downlink will have to be different. Maybe the same thing can be done, but it's usually different. You see, the uplink is power constrained; maybe one can imagine the downlink being so as well, but that's not the right way of viewing it — still, that's what's done, okay? So your cell phone will convert your speech into a digital signal, then encode it with a suitable convolutional code and send it to the base station, okay? So you see there's a lot of benefit there. So this is the philosophy, and beyond this, there's really nothing much to teach. The soft ML decoding, you might have seen some version of it; it's basically Viterbi decoding. And for the constructions — see, for instance, for Reed-Solomon codes there is a specific construction for the parity check matrix. Here, there is no specific construction. People have already done the search; they'll tell you this is the best construction and you just have to take it. There's no question of deriving it or anything like that.
Even for the LDPC codes, we at least had some notion that there is some optimization which exists to find degree distributions. Here, there's nothing like that, okay? So once you have the construction, it's very simple and straightforward. So really there's nothing much; I won't take more than two or three lectures on convolutional codes. It's not a big deal. But the slightly non-trivial thing which we have to see is turbo codes, okay? So relatively recently, people found very smart ways of using several convolutional codes together and making much better codes out of them, okay? You already know convolutional codes have reasonably efficient decoders. Now when you make better codes out of them, you can get closer to capacity. So turbo codes can take you very close to capacity, not as close as the LDPC codes, but within 1 dB or so in practice; in simulations one can check. So it's pretty good. So this is what we'll do. And let's see, okay? The best way to describe convolutional codes is to start by describing their encoding. And in fact, the best way to describe their encoders is to just start giving examples, okay? The generalization will be very obvious from the example; anything else I do will be exactly the same thing, so it's enough if I do examples. So I'll follow that: I'll pretty much do example after example, wherever necessary, and the general statements, if you're interested, you can read from the books, okay? There are general statements also, but I will not do too much of them in this class, okay? So we'll begin by seeing an example of a convolutional encoder, okay? The encoder I'm going to show you looks like this; I'll simply draw the diagram. It's a finite state machine, so I can draw it with a set of D flip-flops. I'll draw it like that, okay? So let me write that down: it's a finite state machine.
I'll do a D flip-flop implementation. Okay, it's in fact a very simple shift register based finite state machine, so there's nothing much to do here. In this particular example, we'll have three D flip-flops, okay? And the input connecting here. I won't show things like the clock, okay? You have to assume that all these things are clocked and synchronous and all that, okay? I have two outputs. I'll put an XOR gate here; this is a multi-input XOR gate, this plus with a circle around it. It's basically modulo-2 addition, right? I'll connect it to the different inputs. Okay, so the first one will connect to this one, this one, this one; that will be the output. The second one I'll connect to all four. Okay, this is your input, this is output one. Okay, so of course we'll have notation for all these things, but at the end of the day it's very clear what's happening, right? Your input bits are going to be clocked in from the left side of this picture, okay? And as they keep getting clocked in, you will start getting outputs also, right? So it's a very simple encoder; I don't even have to describe what this is, but I will use some notation and give you a description. Okay, so the most important thing in a finite state machine is the state itself, right? So what is the state here? How will you describe the state of this finite state machine? Yeah, the bits at the output of each of the flip-flops. So I'll call these s0, s1, s2. Okay, so how many bits do I need to describe the state? Three bits. So how many possible states can there be? 2 power 3, which is eight possible states. So it's an eight-state encoder, okay? That's the first thing. So the memory of this encoder is three, the number of states is eight, okay? This number eight is very typical, okay? People will always talk about the number of states when they talk about a finite state machine, right?
So it's an eight-state encoder; that's the way you talk about it. Okay, my input comes in here, and my input will actually be a sequence, right? I'll imagine my input to be a sequence of bits, maybe a finite length sequence — whether it is finite is not very relevant, you'll see; what matters is what happens at a particular time instant — but I will think of my input as a sequence. Similarly, my output will also be a sequence. Okay, so my input sequence I'll denote as u: u0, u1, and so on. My first output sequence I'll denote v with a superscript zero; it's going to be v0 0, v0 1, and so on. My second output sequence I'll denote as v with a superscript one, which will be v1 0, v1 1, and so on. Okay, so I'm writing u as u0, u1, and so on, but what is the first input that the finite state machine will see? u0. Okay, so you should remember that on the wire this thing appears in the reverse direction, okay? We're used to writing the vectors as u0, u1, but don't think of that as u1 coming before u0; u0 will come first. At the next time instant you'll get u1, then u2, like this, okay? That's how it's clocked in. Is that clear? Okay, so what will s0 be? So at the n-th time instant, how will the entire picture look? What is the input bit? u n. What is s0? u n minus 1. What is s1? u n minus 2. What is s2? u n minus 3. Okay, so the state is in fact, since I'm just shifting, the three previous bits that went through that input: u n minus 1, u n minus 2 and u n minus 3, okay? And u n is the current input.
Okay, so the current input bit is u n, s0 is u n minus 1, s1 is u n minus 2, and s2 is u n minus 3. So you see why I say the system has memory three, right? It remembers the three previous inputs and uses them in computing the output. What is the output now? What is v0 n? It will be u n plus s1 plus s2, but I already know s1 and s2 in terms of u, so I can write the whole thing in terms of u itself. Okay, it's going to be u n plus u n minus 2 plus u n minus 3. What is this plus now? This plus is modulo-2 addition, okay? It's XOR. Likewise, I can also write v1 n as u n plus u n minus 1 plus u n minus 2 plus u n minus 3. Okay, so different people like different things. For instance, some people might say all you need is the last two expressions; I don't care about this picture, it's just a waste of ink, right? At the end of the day it's just these two equations. But some people don't like just those equations; they like the picture better, okay, so they have the whole thing in front of them. So you can either think of it as a picture with bits being clocked in and out, or just implement these equations. In software, you might want to just implement the equations as opposed to thinking of some picture like this. Okay, so it's a standard finite state machine, really, really simple; there's nothing much to say. The encoder cannot get any simpler than this; it's one of the simplest encoders. Okay, the current bit — yeah, we think of that as input. The state is only whatever happened previously. So at time n, the system is at state s0 s1 s2. You should not include the current input in the state; the system is already in that state by the time the input comes in. It's got to do with negative-edge or positive-edge triggering; I don't know enough about these things. But anyway, that's how you think about it.
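Since the two update equations completely define the encoder, they can be turned directly into code. Here is a minimal sketch (my own illustration, not from the lecture; `encode_step` is a hypothetical name) of one clock tick of this 8-state encoder:

```python
# Sketch of the 8-state, rate-1/2 encoder from the update equations:
#   v0[n] = u[n] + u[n-2] + u[n-3]          (mod 2)
#   v1[n] = u[n] + u[n-1] + u[n-2] + u[n-3] (mod 2)

def encode_step(u_n, state):
    """One clock tick: state = (s0, s1, s2) = the three previous inputs."""
    s0, s1, s2 = state
    v0 = u_n ^ s1 ^ s2           # taps from the first XOR gate
    v1 = u_n ^ s0 ^ s1 ^ s2      # taps from the second XOR gate
    new_state = (u_n, s0, s1)    # shift register: u_n moves in, s2 falls out
    return v0, v1, new_state

# Drive the encoder from the all-zero state; the trailing zeros clear the state.
state = (0, 0, 0)
for u_n in [1, 1, 0, 1, 1, 0, 0, 0]:
    v0, v1, state = encode_step(u_n, state)
    print(v0, v1)
```

Note that the output is computed from the *old* state, and only then does the register shift, exactly as the lecture insists.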
Okay, whatever happened before that is the state. Alright, so there's one more way in which you can view this, okay? Just like we wrote it in so many different ways, one might want to write this expression in yet another way. This is just a difference equation, right? Maybe we are not very happy with difference equations. You see, for instance, when you do filters, digital filters in DSP, at the end of the day it's all difference equations when you do finite impulse response filters, but you're never happy with just the difference equation. You want something called an impulse response; you want to figure out what happens when the input is an impulse, okay? And then the impulse response can be converted into the frequency domain, and you can view it in so many different ways. Okay, so maybe we'll do that also with this thing, right? At the end of the day, it's just a difference equation. Of course, the difference is you're doing modulo 2, right? Usually in filters you don't do modulo 2. Okay, you're doing modulo 2, but still it's only a difference equation. So one can even think of this as a linear system and ask what its impulse response will be. It's got one input and two outputs. So suppose my input is an impulse — what is an impulse now? One followed by a whole bunch of zeros. What will the two outputs be? That will be the impulse response of my encoder, okay? So I'm going to write that next. I'll denote the first impulse response as g0 and the second impulse response as g1. Okay, so as you very well know, in a linear system, once you give the impulse response, using that one can describe the response to any other input. How will you do that? Yeah, you convolve with the impulse response. All those things are very simple extensions, as we'll see.
Okay, so what will g0 be in this example? If I put one followed by a whole bunch of zeros as the input, what will g0 be? You can go back and look at the picture if you want; the initial state is assumed to be zero. It will be 1 0 1 1; that will be g0, right? The impulse response you see at output one will be 1 0 1 1, and after that, what? It's all zeros, so one can stop there. You can say the impulse response is 1 0 1 1. What about g1? 1 1 1 1. Okay, it just depends on the connections. So g0 is going to be 1 0 1 1; I'll just stop there, I know it's going to be zero after that. g1 will be 1 1 1 1. In fact, if you're very comfortable with these difference equations and viewing them as FIR and IIR: unless you have feedback, you will never have an infinite impulse response. Okay, here you don't have any feedback, right? The way I wrote it down was a simple feed-forward structure, so I'll definitely have a finite impulse response. In fact, I can even say the length of my finite impulse response will be equal to three plus one, which is four; it will never be greater than four, right? So one can define new quantities for that if you want. Okay, so it's also common to just give these two things, right? If I just give g0 and g1, I have completely defined everything I want; I can go back from here to even that picture very easily, right? You see, these ones are basically the connections, and the length minus one is your memory, okay? Well, of course, if it goes to infinite length there's some trouble, but for finite impulse responses with feed-forward structures alone, this is enough. Okay, so all these things are the standard systems stuff that you might have learned; it's very easy to do. Okay, so the next step is to say, once I have the impulse response, v0 can be written as u convolved with g0, and v1 can be written as u convolved with g1.
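To check the claim that the impulse responses are exactly the tap patterns, one can clock an impulse through the same update equations. A small sketch (my own illustration; `impulse_response` is a made-up helper name), where the tap tuples select which of u n, u n−1, u n−2, u n−3 feed the XOR gate:

```python
# Feed 1 followed by zeros through a feed-forward tap pattern, mod 2.

def impulse_response(taps, length=4):
    """taps[k] = 1 if u[n-k] is connected to the output XOR gate."""
    u = [1] + [0] * (length - 1)   # the impulse: 1, 0, 0, 0, ...
    out = []
    for n in range(length):
        bits = [u[n - k] if n - k >= 0 else 0 for k in range(len(taps))]
        out.append(sum(b for b, t in zip(bits, taps) if t) % 2)
    return out

print(impulse_response((1, 0, 1, 1)))  # g0: taps for output 0
print(impulse_response((1, 1, 1, 1)))  # g1: taps for output 1
```

As expected for a feed-forward structure, the response is finite and just reproduces the connection pattern itself.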
This is convolution modulo 2, okay? You do the same convolution — you do the flip and slide — and you do modulo-2 addition; you'll get your outputs, okay? In fact, you can recover the difference equation from this. It's very easy to see: u n plus u n minus 2 plus u n minus 3 will be the first one. It's all the same thing, put in different language, okay? So the next step comes when you're tired of doing convolution, flipping and sliding and all that; you want to think of transforms, right? So the transform that's very useful in DSP is what? The Z transform, right? But the Z transform is for real numbers. So instead of Z now, we'll do the D transform, because we're doing everything modulo 2, okay? So what is the D transform of a sequence? I have to define that. If I have a sequence — I'll define it for a general sequence — x equals x0, x1, x2, and so on; remember I only care about causal sequences, right? So I don't worry about negative indices at all; we'll only write causal sequences. My D transform x of D is just motivated by the Z transform, okay? We won't do D inverse and all that; we'll just say x0 plus x1 D plus x2 D squared, and so on. That's my D transform, okay? So the advantage with the D transform now is, instead of convolution, you'll get what? Multiplication, okay? Polynomial multiplication is the same as convolution. You write it differently, you get the same thing, okay? So once I go to the D transform domain, I can write down everything in terms of D transforms. For instance, what will g0(D) be? 1 plus D squared plus D power 3. What is g1(D)? 1 plus D plus D squared plus D power 3, okay? And then what will v0(D) be? It will be u(D) times g0(D). The same proof for why convolution becomes polynomial multiplication in the Z transform will carry over here with the D transform, except that you'll do everything modulo 2 now, okay?
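The statement that polynomial multiplication over GF(2) is the same as mod-2 convolution is easy to see in code. A minimal sketch (my own illustration; `poly_mult_gf2` is a made-up name), assuming coefficient lists ordered with the lowest power of D first, demonstrated on a short hypothetical input u = 101:

```python
# Binary polynomial multiplication = mod-2 convolution of coefficient lists.

def poly_mult_gf2(a, b):
    """Multiply two polynomials over GF(2), given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj   # coefficient arithmetic is mod 2
    return out

g0 = [1, 0, 1, 1]   # g0(D) = 1 + D^2 + D^3
g1 = [1, 1, 1, 1]   # g1(D) = 1 + D + D^2 + D^3
u  = [1, 0, 1]      # a hypothetical input, u(D) = 1 + D^2
print(poly_mult_gf2(u, g0))   # coefficients of v0(D) = u(D) g0(D)
print(poly_mult_gf2(u, g1))   # coefficients of v1(D) = u(D) g1(D)
```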
So that's one more thing that's important: this polynomial multiplication, everything, will be modulo 2. Likewise, v1(D) will be u(D) g1(D). So this is particularly useful. Suppose I give you an input sequence; it's easy to convert it into a polynomial and then do polynomial multiplication, as opposed to letting it go through the finite state machine one by one and being careful about which is one, which is zero, which is your state, which is not your state. You don't have to worry about all that. Convert the input into a D transform; then you know your g0(D) and g1(D), multiply, and you'll get v0(D) and v1(D). And then what should you do? From v0(D) and v1(D), how will you get back the actual output? Yeah, take the coefficients, okay? But then you have to do a parallel-to-serial type conversion, because at each time instant both the outputs are occurring, okay? How you translate that is important; I didn't talk about it, I'm coming to it. Okay, so another thing to keep in mind is, finally, when it goes into the channel, the bits can go only one at a time, right? So you have to convert them into one at a time. What people usually do is, the final output v will be v0 0, v1 0, v0 1, v1 1, and so on. Okay, this will be the final output. I know I didn't write it down carefully earlier; I should have paid attention to it. So you take the two outputs at each time instant and multiplex them one after the other into the channel. So that's what it is. So here again, when you compute this, you'll have to interleave. So let's do a simple example just to get started. Suppose you say u is 11011, okay? I want you to give me v. So what's v? Maybe we'll do v0 first. What is v0? 11110101. This is v0. Okay, what is v1? 10011001. Okay, so the first thing you'll notice is the length of u and the length of v0 are different.
Why does that happen? How many more bits are there? Three more. Yeah, the state has to clear out, so you have to wait till the state becomes zero again. Okay, so you'll see that extra length of 3 will be there. That's something you have to accept; it will come out and you have to take that into account. So that's the first thing. Okay, so if u has got length 5, the length here will be 5 plus 3. Where does this 3 come from? It's the memory. Okay, here it might seem simple, but even if my last bit in u had been 0, so that this last output would not have been needed, you would still have to insist on taking that extra tail. Okay, so this plus 3 will always come in v0 and v1; you have to clear out the state. We'll see later on why that's important. Okay, and what will v be? 1110101101100011. Okay, so for instance if I ask a question: what is the rate of this code? If I set my message length to 5, I'll have 2 power 5, which is 32 different messages. How many codewords will I have? Of course, 32 codewords, one corresponding to each of those messages. What will be the length of each codeword? 16. Right, so what's the rate? 5 by 16. Okay, if I set my message length as 6, what will be the rate? 6 by 18. So in general, if my message length is k, what will be the rate? k by 2k plus 6, right, in this case. Okay, so as k becomes really, really large, what will this tend to? It tends to half for large k. So the extra 6 is because of clearing the state. Okay, you could skip that, but you'll see later on it's important: for your ML decoder to work well, you need to clear the state. So that will work out. For finite length, the rate will be slightly less than half. Alright, so even though I described it as a finite state machine and all that, one can write down very easily the list of all messages and the list of all codewords.
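The worked example can be checked end to end: convolve u = 11011 with g0 and g1 mod 2, multiplex the two streams time instant by time instant, and compute the rate. A sketch with assumed helper names (`conv_mod2` is my own):

```python
# Reproduce the worked example: u = 11011 through the g0 = 1011, g1 = 1111 encoder.

def conv_mod2(u, g):
    """Mod-2 convolution of two bit lists."""
    out = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            out[i + j] ^= ui & gj
    return out

u  = [1, 1, 0, 1, 1]
v0 = conv_mod2(u, [1, 0, 1, 1])   # length k + 3 = 8: the +3 is the memory
v1 = conv_mod2(u, [1, 1, 1, 1])

# Multiplex: at each time instant send v0[n] then v1[n].
v = [bit for pair in zip(v0, v1) for bit in pair]

k = len(u)
rate = k / (2 * k + 6)            # 5/16 here; tends to 1/2 as k grows
print(v0, v1, v, rate)
```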
One can do that, okay? So you will have 2 power k messages here and you will have 2 power k codewords of length — length what? Length 2k plus 6. In fact, overall it will be a (2k+6, k) linear code; in fact, a linear block code. Once I fix a particular message length, if I say my message length stops here, it becomes a (2k+6, k) linear block code. Why am I saying it's linear? If I add any two outputs, will I get another output in the same list? You have to, right? So it's entirely a linear system. I can describe my entire operation as a polynomial multiplication, u(D) times g(D). If I add u1(D) and u2(D), I will definitely get another codeword, the one which corresponds to the sum of the messages. So it's in fact a block code in disguise when you fix k as a constant. Okay, but usually it helps to think of this as a sequence, without fixing k as a constant; k can be any large number that you want. In most practical implementations, it's useful to think of it that way. So people usually think of convolutional codes as different from block codes, okay? But in practice, when you restrict your block length — and you'll always restrict your block length — it becomes a block code, really. There's nothing wrong with that. So I'm going to start with the all-zero state, and I'm going to say I'm going to end at the all-zero state. That's understood. Yeah, you have to clear out your state after the k message bits. Once you do that, it comes out. Okay, so from all-zero state to all-zero state, it becomes a linear block code. As was pointed out, if you don't clear out your state, then all kinds of things can happen; it will become very different, you can't say anything. But if you go from the all-zero state and, for every k, you clear out, then it works. But what you clock in for clearing out is not really message; you have to clock in three extra zeros. It is not really message, it's just zeros, so you can't count it as message.
So that's why you get that extra 6 in the length. Okay, alright, so this is the example. Now, though the rate is less than half, it's typical to say that this is a rate-half encoder, okay? Because for every bit that's going in, you're putting out two bits, and the design rate will become half if k becomes very large, right? If k becomes like 1000, 1000 by 2006 is very close to half; you don't have to worry too much about that. So it's usual to call this a rate-half convolutional encoder: an eight-state, rate-half convolutional encoder. Okay, that's how you talk about this encoder. So I can give you so many other pictures. Now, you can imagine what the various possibilities are for an eight-state encoder, if I don't want any feedback connections, only feed-forward connections. It's just a question of varying those connections, right? There are eight possible connections, and you can keep any one of them on or off, so you have 2 power 8, which is 256 possibilities. So you can enumerate all the possible convolutional codes with a given number of states; it's possible, for feed-forward. And we'll see later on it's enough if you do feed-forward — well, we may not see it; I'll just maybe state the result saying feed-forward is good enough, okay? So one also talks about the generator matrix. What do you do with the generator matrix? It's all quite simple; there's really nothing more to say here. I'll say the generator matrix is G(D). What do you think I'll do here? I'll simply collect g0(D) and g1(D) and keep them here, okay? And the logic behind why I call it the generator matrix is that [v0(D) v1(D)] can be written as u(D) times [g0(D) g1(D)]. Okay, this is how this works. So this is for a rate-half encoder. Now I can generalize this rate half, okay? For every one input I can produce three outputs, four outputs, five outputs, any number of outputs I want. So what will happen if, for instance, I do a rate 1 by m encoder? How many g's will I have? g0, g1, up to g m minus 1. How many v's will I have?
v0, v1 to v m minus 1, and I'll keep multiplying like this and I'll get it. It's also possible to do m by n convolutional codes — you can have multiple input streams and multiple output streams. We will not see that in this class. It's not enormously difficult, just that it gets a little bit confusing and we don't want to deal with that. So we'll only deal with rate 1 by m convolutional codes, and in fact even turbo codes are typically constructed with only rate 1 by m codes. So you don't really need rate m by n; there is some technique called puncturing which you can use to get the other rates. So we'll use only rate 1 by m in this class. Okay, so we'll stick to these encoders. A general description for them will be given by G(D) being equal to [g0(D), g1(D), and so on till g m minus 1 of D]. Okay, and I'll say all this is just feed-forward; there's no feedback involved anywhere. Again, we'll restrict ourselves to feed-forward. Maybe later on, when we do turbo codes, I'll show you why you need feedback and all that; there will be some points where feedback will be needed, but as far as convolutional codes are concerned, we'll restrict ourselves to feed-forward for simplicity. Okay, suppose I give you a G(D). Can you go back and draw the picture for the encoder? Does it fully specify the encoder? Is it sufficient? How will you know the memory? How will you know the number of D flip-flops you need? The maximum degree among these g's should tell you, right? The maximum degree will give you the number of D flip-flops you need; just put them out there and you will get the picture. Okay, going from here to the picture is easy. So it's enough if I specify this. Okay, so mostly only this will be specified when people describe a convolutional encoder: they'll only specify the coefficients of these g(D)'s. Once you have that, you can design your convolutional encoder.
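The point that the coefficient lists of the g(D)'s fully specify a feed-forward rate 1/m encoder can be made concrete: the memory is the maximum degree, the taps are the coefficients, and the zeros needed to clear the state can be appended automatically. A generic sketch (my own illustration; the `encode` helper is hypothetical):

```python
# Generic feed-forward rate-1/m convolutional encoder from generator
# polynomial coefficient lists (lowest power of D first).

def encode(u, gens):
    """Encode bit list u with generator coefficient lists gens; returns
    the multiplexed output v0[n], v1[n], ..., v_{m-1}[n] per time instant."""
    memory = max(len(g) for g in gens) - 1    # number of D flip-flops needed
    u = list(u) + [0] * memory                # extra zeros clear the state
    outputs = []
    for n in range(len(u)):
        for g in gens:
            bit = 0
            for k, tap in enumerate(g):
                if tap and n - k >= 0:
                    bit ^= u[n - k]           # mod-2 sum over the taps
            outputs.append(bit)
    return outputs

# The lecture's rate-1/2 example: g0 = 1011, g1 = 1111, u = 11011.
print(encode([1, 1, 0, 1, 1], [[1, 0, 1, 1], [1, 1, 1, 1]]))
```

The same function handles a rate 1/3 encoder by passing three coefficient lists, which is the whole point of the generalization.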
Okay, so there's one more way to visualize how your convolutional encoder is functioning, and that's crucial for the decoding, okay? That is the trellis representation. It's very, very crucial, and again I'll do it for the example; you'll see it's easy enough for the general case as well. Okay, one way you know of visualizing a state machine is what? You might have seen this before: the state diagram, right? You put the states in circles and then put the connections labeled with input/output. Okay, but one thing that's missing in that state diagram is the time axis. You don't have the time axis, right? You have to kind of keep track of time in your head: if you start at some state, you have to say I'm going here. Keeping time in your head is a little bit difficult sometimes; particularly for engineers, anything in the head is difficult. So what we do is put the time axis also explicitly in the diagram, and that makes it a trellis representation. So instead of just having one state diagram which is the same for all time, you draw the states at different times as different entities and connect them up. You get what's called a trellis diagram, and that's particularly useful when you want to decode. Okay, so that's the next thing we'll see: the trellis representation. Okay, so remember my update: v0 n is u n plus s1 plus s2, and v1 n is u n plus s0 plus s1 plus s2. Okay, this is the same example as before. With this I'll show you how to do the trellis diagram. Okay, so initially you're starting at the all-zero state; at the initial starting time I have the all-zero state, so I'll put 0 here. Okay, so I'll denote my state as a 3-bit number, 0 to 7, right? 0 is 0,0,0. 1 is what? 0,0,1. 2 is 0,1,0, and likewise; you can do 0 to 7, it's very easy.
Okay, this is my state. Now I can have two different inputs: my input can either be 0 or 1. If my input is 0, what will happen? I'll go to the same state, and what will my output be? I'll get two outputs: 0, 0. Okay, so this is how I'll denote it; it is the standard way in state diagrams: input is 0, output is 0,0, and then I go back to the same state. In a state diagram, typically what would you have done? You would have looped back, right? But I want to bring out the time axis explicitly, so I'll push my state to the next time instant, which becomes the next circle on the right-hand side. Okay, so it goes to the next stage. Okay, what if my input is 1? Now there is some confusion about how I'm going to write the state. I'm going to write my state as s0, s1, s2; this is the convention I'm choosing. You can also choose s2, s1, s0; it doesn't change anything. You can even choose s1, s2, s0 if you want; it doesn't change anything except that your head will go for a spin. Just make one simple choice and stick to it, okay? I'm going to choose s0, s1, s2. So if my input is 1, what will my next state be? 1,0,0, which is 4. Okay, so I have to go to 4. So maybe I put 1 here, 2 here, 3 here, and 4 will come here. Notice that you don't have to compute the output to figure out the next state, right? Once you know the input, you know the next state immediately. Okay, then I have to figure out the output. The output is going to be 1, 1 in this case, right? My initial state was 0, so with input 1, I get output 1,1. Is that clear? Use these update equations if you're getting confused. See, remember, this is the state after the transition.
So when you compute the output, what state should you use? The original state that you were in. Don't use the new state — then you'll get what's called a race condition, right? So you don't do such things; you do very simple things here. Then what do you do? You just keep repeating this process and you'll get the full trellis. Eventually the trellis becomes complete, as in all the states show up, and after that the same pattern just repeats over and over again. So do the next stage. Spend some time, don't look at the board, keep doing it on your own and check at the end whether you've done everything correctly. I hope you guys are drawing a better diagram than what I'm doing here on the board. I'm not going to put all the outputs; you have to put the outputs. Okay, my diagram is getting nasty, so let me stop and check. Did I make a mistake somewhere? Yes — six goes to three, that's the mistake, and two comes to five. See, it's very easy to make a mistake here. So I'll stop at this stage and you can extend it. After that, you'll see zero and one have the same behavior: there's no difference between zero and one as far as which states they go to. Two goes to one and five. Four goes to two and six. Six goes to three and seven. I'm not putting the outputs; you can put the outputs. And after a while you don't even have to label the states — it's clear they run from zero to seven, and in the next time instant the trellis gets full and complete. So zero and one have the same behavior as far as transitions are concerned, because the last bit is the one getting shifted out now: if two states differ only in the last bit, it doesn't really matter, they'll transition the same way.
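The process being repeated on the board can be tabulated once and for all: one transition table that every steady-state stage of the trellis repeats. A sketch under the same conventions as before (state (s0, s1, s2) read as a 3-bit number, v0 = u + s1 + s2, v1 = u + s0 + s1 + s2 mod 2; the helper name `trellis_stage` is mine):

```python
def trellis_stage():
    """Map (state, input) -> (next_state, (v0, v1)) for all 8 states."""
    table = {}
    for s in range(8):
        s0, s1, s2 = (s >> 2) & 1, (s >> 1) & 1, s & 1
        for u in (0, 1):
            v0 = (u + s1 + s2) % 2
            v1 = (u + s0 + s1 + s2) % 2
            nxt = 4 * u + 2 * s0 + s1    # shift: next state is (u, s0, s1)
            table[(s, u)] = (nxt, (v0, v1))
    return table
```

This reproduces the corrections made at the board — state 6 goes to 3 under input 0, state 2 goes to 5 under input 1 — and confirms that states 0 and 1 have identical transitions (to 0 and 4), since they differ only in the bit being shifted out.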
The output will be different, though. The outputs will obviously differ, but the states they go to will be the same. Okay? So one can fill it out; maybe I'll fill it out for these two branches: input 0 gives output 0, 1, so I label it 0/01, and input 1 gives output 1, 0, labeled 1/10. So the outputs will be different. You can go through and complete the trellis; this is how it works out. Now, it's common to identify each stage of the trellis. This is a stage; these are all stages. Every stage corresponds to a time instant: this is the stage at time zero, this is the stage at time one, time two, et cetera. At time zero you get an input, which is u0, and you produce two outputs, v0,0 and v1,0. That's how you label the axis: u0 stroke v0,0, v1,0. At time one your input is u1 and the outputs are v0,1 and v1,1. And the trellis actually captures all possible inputs and all possible outputs at every stage. Up to time k minus 1 you'll be feeding in actual message bits. After that, your input will become only zero, so that you go back to the all-zero state; I'm going to show that. So from here on the stages will be complete, and I need more space, so let me count them out: two, three, four, five, six, seven, eight. Say this is the time instant k minus 1. Let me just move this over here — there's no good place for it, but it's not too bad. So the last thing you would have done was put in u(k-1), and the outputs would be v0,k-1 and v1,k-1. After this there's no more message input. So what should your input be? Zero. The input will be zero at time k.
Then how many more inputs will you need? Three more zero inputs, at times k, k+1, and k+2 — a total of three zeros, one for each memory element. So once the input is only zero, the number of possible states will drop to four after time k. With input zero, from the eight states, where do you go? From zero and one you'll go to zero. From two and three you'll go to one. From four and five you'll go to two. And from six and seven you'll go to three, because only zero is allowed as the input. But the output can be non-zero, and that's very important: you can't throw away the output, the output is part of your code word. Then at time k+1, what will happen? One more zero, so you're left with states zero and one: these two guys go here, these two guys go here. And at the last time instant you'll get back to state zero; this final node is at time k+3. So this is the complete trellis representation for every possible code word that can come out of your convolutional code with k input message bits. Every code word is represented on this trellis. How will I find an arbitrary code word? Just follow the route the message takes through the trellis and read off the outputs; you'll get the code word. So one statement one can make on this trellis is: every valid path corresponds to a code word. You can't take all kinds of directions — only some directions are allowed. Once you are in a particular state, you can only go to two other states, not to all of them. Every valid path on the trellis corresponds to a code word. So you can imagine doing some kind of decoding on this trellis later on, and that's what we'll see in the next class.
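The terminated encoding just described — k message bits followed by three zero tail bits, so the trellis path starts and ends at the all-zero state — can be sketched as follows. Same equations as before; the name `encode` is my own label, not from the lecture.

```python
def encode(msg):
    """Terminated convolutional encoding: message bits plus three zero tail bits."""
    s0 = s1 = s2 = 0                        # start in the all-zero state
    out = []
    for u in list(msg) + [0, 0, 0]:         # message, then the zero tail
        out.append((u + s1 + s2) % 2)       # v0 at this time
        out.append((u + s0 + s1 + s2) % 2)  # v1 at this time
        s0, s1, s2 = u, s0, s1              # shift the register
    assert (s0, s1, s2) == (0, 0, 0)        # terminated: back at state zero
    return out
```

Notice that the tail outputs are kept — they are part of the code word, as emphasized above — and that each distinct message traces a distinct valid path from state zero back to state zero, which is exactly the statement that every valid path on the trellis corresponds to a code word.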
There's no class tomorrow, so we'll meet again next Wednesday and continue from there.