Hello, and welcome to this lecture on digital communication using GNU Radio. My name is Kumar Appaiah, and I belong to the Department of Electrical Engineering, IIT Bombay. This lecture is a continuation of our discussion of maximum likelihood sequence estimation. If you remember, in the previous lecture we were looking at the problem of identifying a sequence of symbols sent through a channel with a convolutional characteristic. In particular, since the symbols get mixed up, you have inter-symbol interference, and the gist was that if you write down the noise model and work out the optimal detection strategy, it boils down to maximizing this particular expression: lambda(b) = Re<y, s_b> - ||s_b||^2 / 2. Here s_b is essentially the noiseless received signal, which contains the effect of the channel, including the convolution. Now, we mentioned that if you try all possible combinations of symbol sequences b that account for the channel and modulation, the unfortunate fact is that you need a huge number of computations. For example, say you have 1000 QPSK symbols to detect. QPSK implies four possible symbols per position, so you would have to try all 4^1000 combinations, and that is prohibitive. But one hint we can use is that y and s_b are both built from signals that sum over time. Could there be an incremental approach, where we start making decisions without having to look at all combinations? That is what we are going to see. So let us start by looking at the two parts in isolation. Take Re<y, s_b>: it turns out to be the sum over k of Re(b_k* z_k). How do we get that? Essentially, you perform matched filtering on y, and z_k is the matched filter output sampled at time kT, that is, z_k = (y * p_mf)(kT).
Let us actually write this down and do it in a formal manner. So, lambda(b) = Re<y, s_b> - ||s_b||^2 / 2. Now, if you recall, the way we constructed s_b was s_b(t) = sum over k of b_k p(t - kT); that is basically what s_b is. So let us define z_k as the quantity corresponding to the kth term: z_k = (y * p_mf)(kT). This should be very evident, because <y, s_b> is the integral of y(t) s_b*(t) dt, and the kth term boils down to the kth sample of the matched filter output. If you remember from our previous discussion, p_mf(t) = p*(-t). Since p_mf is p*(-t), you can check that this becomes z_k = integral y(t) p*(t - kT) dt. This quantity z_k is easy to compute: you pass y through the matched filter and take samples every T. So z_k has the same meaning as in the case where you did not have any channel: z_k corresponds to the sampled versions of the matched filter outputs, no difference. Of course, I will remark that "no difference" means no difference in the definition of z_k; in practice z_k has information about the current symbol and potentially past symbols, because of the presence of the channel. The next thing we look at is <y, s_b>, and the claim is that Re<y, s_b> = sum over k of Re(b_k* z_k). How do we get this? It should be evident if you write the definition down: it is integral y(t) s_b*(t) dt, and since y and s_b involve the same shifted pulses, each term of the sum produces exactly one z_k.
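To make the matched-filter sampling concrete, here is a small discrete-time sketch of my own (not part of the lecture): the combined pulse p is stored as samples at some oversampling factor sps, and z_k is computed as the inner product of y with the pulse shifted by k*sps samples. The function name and parameters are made up for the illustration.

```python
import numpy as np

def matched_filter_samples(y, p, sps, num_symbols):
    """z_k = integral y(t) p*(t - kT) dt, in discrete time:
    the inner product of y with the pulse delayed by k symbols."""
    z = np.empty(num_symbols, dtype=complex)
    for k in range(num_symbols):
        shifted = np.zeros_like(y)
        start = k * sps
        stop = min(start + len(p), len(y))
        shifted[start:stop] = p[: stop - start]
        z[k] = np.vdot(shifted, y)  # vdot conjugates its first argument
    return z
```

Equivalently, you can convolve y with the matched filter conj(p[::-1]) and read off every sps-th sample starting at index len(p) - 1, which is exactly the "filter, then sample" picture from the lecture.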
In other words, it is basically a summation of these z_k's, because each term of s_b contributes one p(t - kT): <y, s_b> = integral y(t) [sum over k of b_k p(t - kT)]* dt, and if you take y(t) inside the summation, you essentially get sum over k of b_k* z_k. Now, this is what we have, and we are looking for an additive or incremental form, so that we can just keep adding terms and make decisions on symbols as we progress. Next we are going to make a definition: the autocorrelation sequence of the channel, or rather of the sampled combined pulse. What does this mean? See, if you think about it, y has the channel convolved into it. So if you correlate y against the candidate signal and integrate, the channel ends up correlated with itself; roughly speaking, you get the channel response g_c convolved with its reversed conjugate. We are going to take samples of that quantity and use them wisely; this will come in handy as our derivation progresses, so for now just bear with me. I am going to define h_m = integral p(t) p*(t - mT) dt; always remember this capital T is the symbol duration. Now, h_m is an autocorrelation sequence, and because it is an autocorrelation sequence it satisfies the conjugate-even property, even for complex channels: h*(-m) should equal h_m. To check this, h*(-m) = integral p*(t) p(t + mT) dt, and since the integral runs over all t, I can always substitute u = t + mT.
So this becomes integral p*(u - mT) p(u) du, which is the same as h_m. Therefore h*(-m) = h_m, as claimed. We are going to use this h_m, the deterministic autocorrelation of the combined pulse, throughout the rest of the derivation, because it is far more convenient to handle than writing out "p convolved with p*(-t), sampled" every time; h_m is exactly the sampled version of p convolved with p*(-t). With this autocorrelation sequence, we can move on to the next term, which is ||s_b||^2. Now, ||s_b||^2 is intuitively just the energy, the integral of the squared magnitude of the combined signal, but it has your symbol information in it, so we must keep taking that into account term by term. Let us start. Remember my definition of s_b: s_b(t) = sum over k of b_k p(t - kT). I am going to write ||s_b||^2 as <s_b, s_b>, but using two different summation indices, so that I can do the manipulations in the double summation correctly: <s_b, s_b> = <sum over k of b_k p(t - kT), sum over m of b_m p(t - mT)>. No surprises here; I am just writing the same definition twice, using a different index in the second copy. In fact, think about it: it would be wrong, or at least very confusing, to use the same index k in both, because then your double summation would go incorrectly. Now let us simplify this a little. Notice that the inner product is a linear operator, so I can pull the summations out and write this as a double summation over k and m of b_k times the inner product of the shifted pulses.
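As a quick numerical sanity check of the conjugate-even property (again my own sketch, with illustrative names), we can compute the sampled autocorrelation h_m directly from the pulse samples:

```python
import numpy as np

def pulse_autocorrelation(p, sps, max_lag):
    """h_m = sum_n p[n] conj(p[n - m*sps]): the deterministic
    autocorrelation of the combined pulse, at lags of m symbols."""
    h = {}
    for m in range(-max_lag, max_lag + 1):
        shift = m * sps
        h[m] = sum(p[n] * np.conj(p[n - shift])
                   for n in range(len(p)) if 0 <= n - shift < len(p))
    return h
```

For any complex pulse you should find h[-m] equal to conj(h[m]), and h[0] equal to the pulse energy, matching the derivation above.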
Now remember, when you write the angle bracket notation for the inner product, the second argument gets conjugated, so I get b_m*, and the inner product is actually an integral. Putting the integral inside: ||s_b||^2 = sum over k, sum over m of b_k b_m* integral p(t - kT) p*(t - mT) dt. Now, if you look at this inner term, it has a relationship with the definition of h_m, because h_m was integral p(t) p*(t - mT) dt. I can get that form by shifting the integration variable by kT: integral p(t - kT) p*(t - mT) dt = integral p(t) p*(t - (m - k)T) dt = h(m - k). So I can do this manipulation and significantly simplify the expression: ||s_b||^2 = sum over k, sum over m of b_k b_m* h(m - k). You can verify the index: it is indeed m - k. Now I am going to make a slight manipulation: I am going to split this into three parts based on m, namely m equal to k, m less than k, and m greater than k. For m = k, the term becomes |b_k|^2 h(0), because I am primarily interested in the kth symbol. So ||s_b||^2 = sum over k of |b_k|^2 h(0), plus sum over k, sum over m < k of b_k b_m* h(m - k), plus sum over k, sum over m from k+1 onwards of b_k b_m* h(m - k). I have written the same terms, except split into three parts: first the terms with m equal to k, then the terms with m less than k, then the terms with m greater than k.
So, in this manner I am essentially splitting the summation into three parts, and the second and third terms have something in common. We can exploit that by playing a small trick: let us swap the roles of k and m in the third term. So ||s_b||^2 = sum over k of |b_k|^2 h(0), plus sum over k, sum over m < k of b_k b_m* h(m - k), plus, and here is the trick, sum over m, sum over k > m, which after relabeling is the same as sum over k, sum over m < k of b_m b_k* h(k - m). Did you see what I just did? I simply swapped the roles of k and m, and now the two double summations run over the same index set, so I can combine terms: ||s_b||^2 = sum over k of |b_k|^2 h(0), plus sum over k, sum over m < k of [b_k b_m* h(m - k) + b_k* b_m h(k - m)]. I just combined the terms carefully, and if you look, the two terms inside the bracket have the same form; in fact they are complex conjugates of each other, because h(m - k) = h*(k - m), and b_k b_m* is the conjugate of b_k* b_m. Now here is the interesting trick: the bracket is therefore twice the real part of either term, so ||s_b||^2 = sum over k of |b_k|^2 h(0), plus sum over k, sum over m < k of 2 Re(b_k* b_m h(k - m)).
I am choosing to write it with the second term; you can equivalently write it as 2 Re(b_k b_m* h(m - k)), no problem, no difference between the two. So we have done the hard work of expanding the second term. There are b_k-related terms here: the |b_k|^2 h(0) term is the main term, while the cross term is the one that carries the interaction between the inter-symbol interference components. This is where the mixing due to the channel comes in: if there were no channel, h would be nonzero only for k equal to m, you would have only the b_k term, and you would get back your original AWGN detection approach. That is something you should bear in mind. Now my lambda(b), the overall metric for the whole group of symbols, becomes, going back and substituting both parts: lambda(b) = sum over k of [Re(b_k* z_k) - |b_k|^2 h(0) / 2 - sum over m < k of Re(b_k* b_m h(k - m))]. Note the minus signs: remember the metric had minus ||s_b||^2 / 2, and the factor of 2 in front of the real part cancels the 1/2 in the cross term. We are done with this expansion. Now we will use the fact that, in general, our channel has a finite length; in other words, your channel typically acts like an FIR filter, or can be approximated by a finite impulse response filter. So we are going to assume that h_m is nonzero only for m = 0, 1, 2, up to L - 1 (and the corresponding negative lags, by the conjugate-even property). So L is basically the length of the channel: it has L taps, or L nonzero filter coefficients. We are performing all these operations for an LTI channel, so the channel can be expressed as a filtering operation.
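Since the three-way split and recombination is easy to get wrong, here is a brute-force numerical check (my own sketch, not from the lecture) that the expansion really equals ||s_b||^2, for a random complex pulse and random QPSK symbols:

```python
import numpy as np

rng = np.random.default_rng(0)
sps, K, Lp = 4, 6, 8
p = rng.normal(size=Lp) + 1j * rng.normal(size=Lp)            # combined pulse
b = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=K)  # QPSK symbols

# s_b(t) = sum_k b_k p(t - kT), built on the sample grid
s = np.zeros((K - 1) * sps + Lp, dtype=complex)
for k in range(K):
    s[k * sps : k * sps + Lp] += b[k] * p
direct = np.vdot(s, s).real                                   # ||s_b||^2

def h(m):  # sampled autocorrelation h_m of the pulse
    sh = m * sps
    return sum(p[n] * np.conj(p[n - sh])
               for n in range(Lp) if 0 <= n - sh < Lp)

# sum_k |b_k|^2 h(0)  +  2 sum_k sum_{m<k} Re(b_k* b_m h(k-m))
expanded = sum(abs(b[k]) ** 2 * h(0).real for k in range(K)) \
         + 2 * sum((np.conj(b[k]) * b[m] * h(k - m)).real
                   for k in range(K) for m in range(k))
```

The two numbers agree to machine precision, which is exactly the identity derived above.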
So, we are going to make this assumption. Now your lambda(b), with this notation, is lambda(b) = sum over k of [Re(b_k* z_k) - |b_k|^2 h(0) / 2 - sum over m from k - L + 1 to k - 1 of Re(b_k* b_m h(k - m))], where the minus sign on the last term comes from the minus ||s_b||^2 / 2 as before. And here is the interesting simplification we get: because h(k - m) vanishes once k - m reaches L, the inner sum starts at m = k - L + 1; we do not need to go all the way back, all we need is the last L - 1 symbols of memory. See, if your channel introduces a finite amount of inter-symbol interference, then your decision on b_k should be affected only by those past symbols, and that is exactly what is reflected here: the inner sum accounts for the contribution of the past symbols to this b_k. So, we were all pining for an additive metric. Is this an additive metric? Yes: it is additive because it is a summation over k, and we can go incrementally, since every one of these terms adds up. Therefore, we can write lambda(b) = sum over k of lambda_k, where the per-symbol term is lambda_k = Re(b_k* z_k) - |b_k|^2 h(0) / 2 - sum over m from k - L + 1 to k - 1 of Re(b_k* b_m h(k - m)). This is the additive metric you can use to evaluate lambda(b), and, so to speak, to figure out your decisions on the b_k's. Now here is the good part: this additive metric can be easily computed without having to go through all possible symbol sequences, because you only have to keep the past few symbols in mind. Now, how do we use this?
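Here is a sketch (illustrative code of my own, not the lecture's) that checks the decomposition end to end: it builds a noisy received signal, computes the matched-filter samples z_k and the autocorrelation h_m, and verifies that the per-symbol terms lambda_k really add up to the full metric Re<y, s_b> - ||s_b||^2 / 2:

```python
import numpy as np

rng = np.random.default_rng(1)
sps, K, Lp = 4, 6, 8
p = rng.normal(size=Lp) + 1j * rng.normal(size=Lp)            # combined pulse
b = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=K)  # QPSK symbols

N = (K - 1) * sps + Lp
s = np.zeros(N, dtype=complex)
for k in range(K):
    s[k * sps : k * sps + Lp] += b[k] * p
y = s + 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))  # noisy receive

def h(m):  # sampled pulse autocorrelation
    sh = m * sps
    return sum(p[n] * np.conj(p[n - sh])
               for n in range(Lp) if 0 <= n - sh < Lp)

# matched-filter samples z_k = integral y(t) p*(t - kT) dt
z = np.array([sum(y[n] * np.conj(p[n - k * sps])
                  for n in range(N) if 0 <= n - k * sps < Lp)
              for k in range(K)])

full = np.vdot(s, y).real - np.vdot(s, s).real / 2  # Re<y,s_b> - ||s_b||^2/2
lam = [(np.conj(b[k]) * z[k]).real - abs(b[k]) ** 2 * h(0).real / 2
       - sum((np.conj(b[k]) * b[m] * h(k - m)).real for m in range(k))
       for k in range(K)]
```

Note the inner sum over m can safely run over all m < k here, because h(k - m) is zero once k - m exceeds the channel memory; that is the finite-length observation made above.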
This additive metric by itself does not tell you what the optimal b_k is; it gives you a hint on how to combine the various possibilities for the past symbols b_{k-1}, b_{k-2}, and so on, to arrive at b_k. Let me give you an example. Let us say you are doing BPSK, and let us say the channel memory is just one symbol, so that only h(0) and h(1) matter, just for simplicity. Then lambda_k = Re(b_k z_k) - h(0)/2 - Re(b_k b_{k-1} h(1)), where conjugation does nothing because the b's are real. For BPSK, |b_k|^2 = 1, so the h(0)/2 term is a constant; it does not affect the comparison between candidate symbols, so we can actually ignore it. And the sum has only one term, the k - 1 term, with lag k - (k - 1) = 1, hence h(1). Now, what we are going to do is evaluate this metric for b_k = +1 with b_{k-1} = +1, then b_k = -1 with b_{k-1} = +1, and so on. In other words, we enumerate all possible pairs (b_{k-1}, b_k), namely (+1, +1), (+1, -1), (-1, +1), (-1, -1), and evaluate all those metrics. Then we do the same for k and k + 1, then for k + 1 and k + 2, and so on, until we reach some point where we can make a decision on the past symbols. The question is: can you do that efficiently? This is where the Viterbi algorithm comes in.
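As a concrete illustration of the enumeration (with made-up numbers for z_k, h(0), and h(1); none of these come from the lecture), here are the four branch metrics for one step:

```python
# BPSK with one symbol of memory:
#   lambda_k(b_{k-1}, b_k) = b_k * z_k - h0 / 2 - b_k * b_{k-1} * h1
# (a real pulse is assumed, so z_k, h0, h1 are real numbers)
z_k, h0, h1 = 0.7, 1.0, 0.3   # illustrative values only

branches = {}
for b_prev in (+1, -1):
    for b_k in (+1, -1):
        branches[(b_prev, b_k)] = b_k * z_k - h0 / 2 - b_k * b_prev * h1

for (b_prev, b_k), lam in branches.items():
    print(f"b_(k-1)={b_prev:+d}, b_k={b_k:+d}: lambda_k = {lam:+.2f}")
```

Each of the four (b_{k-1}, b_k) pairs gets its own metric, and these are exactly the branch labels that will decorate the trellis in the next step.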
So, what we do is draw what is called a trellis diagram. Let us say that at time k your b_k can be +1 or -1, at time k + 1 it can be +1 or -1, at time k + 2 it can be +1 or -1, and so on. What we are going to do is evaluate the branch metric lambda_k for each transition: +1 to +1, +1 to -1, -1 to +1, -1 to -1, and keep doing that at every time step. And we note that these metrics are additive, because the overall metric is a summation over k. So you can add the branch metrics along a path: adding one set of branches gives you one path through the trellis, adding another set gives you a different path, and when two paths merge into the same node, only the one with the higher accumulated metric (assuming you want to maximize) survives. In other words, once two paths reach the same state, you compare their accumulated metrics, say this one is higher and that one is lower, and you simply discard the losing path. When you go through the trellis diagram and implement this in practice, you will see that these comparisons allow you to make decisions about past symbols, which prevents you from having to go through all M^N possibilities, the 4^1000 searches we worried about. That is where the Viterbi algorithm comes in, and we will discuss the Viterbi algorithm in the next lecture.
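The pruning argument above is essentially the whole of the Viterbi algorithm, and it is short enough to sketch here for the BPSK, one-symbol-memory case (my own sketch; the lecture's formal treatment comes next time). The state is the previous symbol, the branch metric is the additive lambda_k from before, and at each merge only the higher-scoring path survives. A brute-force search over all 2^K sequences gives the same answer, which is the whole point:

```python
import itertools
import numpy as np

def viterbi_bpsk(z, h0, h1):
    """Maximize sum_k [b_k z_k - h0/2 - b_k b_{k-1} h1] over b in {+-1}^K,
    with b_{-1} fixed to +1. State = previous symbol; at each step the
    lower-metric path into each state is discarded (the survivor rule)."""
    states = (+1, -1)
    score = {+1: 0.0, -1: -np.inf}          # start from b_{-1} = +1
    back = []
    for zk in z:
        new_score, choice = {}, {}
        for s in states:                    # s = candidate b_k (next state)
            cands = {sp: score[sp] + s * zk - h0 / 2 - s * sp * h1
                     for sp in states}      # sp = previous symbol
            best = max(cands, key=cands.get)
            new_score[s], choice[s] = cands[best], best
        score = new_score
        back.append(choice)
    s = max(score, key=score.get)           # best final state
    path = [s]
    for choice in reversed(back[1:]):       # trace the survivor back
        s = choice[s]
        path.append(s)
    return path[::-1], max(score.values())
```

Because each merge discards half of the candidate paths, the work grows linearly in the number of symbols instead of exponentially, which is exactly what rescues us from the 4^1000 search.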
Now, just to summarize what we did: we split Re<y, s_b> and ||s_b||^2 into convenient parts, and used them to come up with an additive metric. We will use this additive metric to detect the received symbols incrementally, and optimally, although that is not yet obvious. In the next class we are going to cover the Viterbi algorithm and discuss an example, so that it will become clear to you how you can implement it. So, in the next lecture we will continue this discussion with the example problem. We will stop here. Thank you.