Hello, welcome to this lecture on digital communication using GNU Radio. My name is Kumar Appaya, and in this lecture we are going to continue our discussion on synchronization, parameter estimation and non-coherent communication that we started in the previous lecture. Just to recollect, in the previous lecture we were looking at parameter estimation, in particular the unknown amplitude of transmission. The intuition was that if the symbols being sent are known, that is, if you are in a training mode, then you can use something like a weighted averaging, depending on the method, to estimate the parameter A. If instead you have a blind approach, the maximum likelihood estimate translates to an approach in which you take soft estimates: you weight the received values by their reliability to get the optimal maximum likelihood estimate of the amplitude. We will now continue with that discussion. The recipe we want to use is the so-called likelihood function, which we just discussed. Whenever we have a signal in AWGN, one question we can ask is: was s(t) actually sent, or was 0 sent? This is very similar to the binary signaling we studied earlier, except that here we only want to detect whether a particular signal s(t) was sent or not. To do this, we can appeal to the same hypothesis testing discussion we used for detection, where we were detecting symbols; here we want to decide on the presence or absence of a signal. Take the hypothesis H_S to be y(t) = s(t) + n(t), the situation where s(t) was sent, and H_N to be y(t) = n(t), where nothing was sent and only noise is received. In the absence of noise, this is very simple.
For any nonzero s(t), all you need to do is check for some amplitude and you are done. Unfortunately, in the presence of noise there is some amplitude even when nothing was sent, so it is not obvious whether s(t) was sent, and this depends on the amplitude of s(t): if s(t) is very weak and the noise is comparable or higher, it is very difficult to tell whether s(t) was actually sent. So let us use the signal space ideas and write this out. The likelihood function under AWGN, for deciding whether s(t) was sent or not: H_S is y(t) = s(t) + n(t), and H_N, nothing sent, is y(t) = n(t), only noise received. We observe only y(t), and from it we have to decide whether s(t) was sent. Let us get our sufficient statistic: Z = ⟨y, s⟩, the integral of y(t) against s(t). This gives two different situations. Under H_S, Z = ||s||² + ⟨s, n⟩; under H_N, Z = ⟨s, n⟩ alone. Now we have to use Z to decide which of the two happened. To do this, always remember, we just need to look at the distributions and decide, because we want the maximum likelihood approach: write the likelihood function and find out what maximizes it. So let us write the distribution of Z under these two hypotheses. Under H_S, Z is ||s||², which is a number, plus ⟨s, n⟩, which is a random variable.
If you remember from our discussion of random variables arising from the projection of white Gaussian noise onto a signal s, you can verify that ⟨s, n⟩ has mean 0 and variance σ²||s||²; we covered this in the signal space discussion. So under H_S, Z is Gaussian with mean ||s||² and variance σ²||s||². Similarly, under H_N, Z is Gaussian with mean 0 and variance σ²||s||². Our decision is very simple: for the given Z, find out which density is higher. Let us write down those densities, ignoring the constants since the variances are the same. Under H_S it is exp(−(Z − ||s||²)²/(2σ²||s||²)), and under H_N it is exp(−Z²/(2σ²||s||²)). Our task is to figure out which of these is higher, because we want the maximum likelihood decision. If you take the ratio of the two, the Z² terms cancel and you are left with exp((2Z||s||² − ||s||⁴)/(2σ²||s||²)); a factor of ||s||² cancels, so you just have to check the sign of 2Z − ||s||², that is, whether Z is greater than or less than ||s||²/2.
This makes complete sense: if Z is above ||s||²/2, that is, closer to ||s||², it is more likely that s was sent; if Z is closer to 0, it is more likely that s was not sent. This is consistent with our discussion of binary signaling: taking the ratio of the densities, it is evident that ||s||²/2 is the decision threshold. So this is a way to tell whether s was sent or not. It is just an ingredient, and a similar strategy will allow us to perform various other estimations as well; keep a note of how we took the sufficient statistic Z and used it to evaluate the likelihood. What we just did is the working in the case of real AWGN. In the case of complex AWGN I am not going to work it out, but if you repeat the same exercise, taking the distribution of the noise as complex Gaussian, you end up with the statistic (1/σ²)(Re⟨y, s⟩ − ||s||²/2). This makes sense: if you rotate the signal so that you are aligned with the dimension along s, that is all that matters; the dimensions not along s are irrelevant, which is why only the real part of ⟨y, s⟩ matters. Similarly, you can extend this to the vector scenario, where y₁, y₂, y₃, … are s(t) plus noise₁, s(t) plus noise₂, and so on. Using the fact that the noise values are independent and identically distributed, you combine the distributions, and maximizing the resulting function is the same as asking whether Re⟨y, s⟩ lies to the left or right of ||s||²/2.
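As a quick numerical check of this threshold rule, here is a minimal sketch in Python. The rectangular pulse, its length, the noise level and the helper name detect_presence are my own illustrative assumptions, not from the lecture; the decision rule itself is the Z versus ||s||²/2 comparison derived above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative template and noise level (my assumptions, not from the lecture).
s = np.ones(64)   # real template s(t), sampled; ||s||^2 = 64
sigma = 2.0       # per-sample noise standard deviation

def detect_presence(y, s):
    """Declare 'signal present' (H_S) when Z = <y, s> exceeds ||s||^2 / 2."""
    Z = np.dot(y, s)
    return Z > 0.5 * np.dot(s, s)

y_hs = s + sigma * rng.standard_normal(s.size)  # one realization under H_S
y_hn = sigma * rng.standard_normal(s.size)      # one realization under H_N

print(detect_presence(y_hs, s))  # True with high probability at this SNR
print(detect_presence(y_hn, s))  # False with high probability
```

Since Z has mean ||s||² under H_S and mean 0 under H_N, with the same standard deviation σ||s|| in both cases, the margin between the means and the threshold sets the error probability, just as in binary signaling.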
Now we come to the first problem of practical importance. We have already looked at amplitude; let us now discuss maximum likelihood phase estimation. If you remember, at the beginning of the previous lecture we saw that when you have a phase offset of, say, φ, the m(t) part picks up a cos φ; similarly, the Q part picks up a sin φ. The effect of this kind of phase offset is that your s(t) gets multiplied by e^(jθ), where θ is an unknown phase, and that is what we need to estimate. In other words, at the transmitter you have e^(j2πf_c t), while at the receiver you have e^(j(2πf_c t + θ)), that is, a cosine and a sine of the form cos(2πf_c t + θ) and sin(2πf_c t + θ); for now we will assume that f_c itself is the same at both ends. Therefore y(t) = s(t) e^(jθ) + n(t). I will make a remark: in several references, the receiver is taken to have cos(2πf_c t) and sin(2πf_c t) while the transmitter has cos(2πf_c t + θ) and sin(2πf_c t + θ). This is a very minor difference; just make a note of the offset at one location and work it out, and you will get the same answers. In this case we have complex AWGN of variance N₀, and θ is the unknown we want to estimate.
Based on the discussion we just had, the quantity to maximize is Re⟨y, s e^(jθ)⟩ − ||s e^(jθ)||²/2. Let ⟨y, s⟩ = Z = |Z| e^(jφ) = Z_C + j Z_S. Then Re⟨y, s e^(jθ)⟩ = Re(e^(−jθ) Z), since the conjugate of e^(jθ) comes out of the inner product. Notice that the second term does not depend on θ, because ||s e^(jθ)|| is the same as ||s||. So we look at Re(e^(−jθ) Z), just as we did earlier, except that the way to understand this is that we are now checking, for various values of θ, what this statistic becomes: writing Z with phase φ, Re(e^(−jθ) Z) = |Z| cos(θ − φ). So if you write the likelihood function, L(y | θ) = exp((1/σ²)(|Z| cos(θ − φ) − ||s||²/2)); like I mentioned, the second term does not depend on θ, so we just have to choose the value of θ that maximizes |Z| cos(θ − φ). The obvious choice is θ = φ, where φ, like I told you, is just the argument of Z.
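The conclusion above, that the maximum likelihood phase estimate is the angle of Z = ⟨y, s⟩, can be checked numerically with a short sketch. The tone used for s, the noise level and the offset value are my own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup (my assumptions, not from the lecture).
n = 128
s = np.exp(2j * np.pi * 0.05 * np.arange(n))  # complex baseband template, ||s||^2 = n
theta = 0.7                                   # unknown phase offset to recover
sigma = 0.3

noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
y = s * np.exp(1j * theta) + noise

Z = np.sum(y * np.conj(s))   # Z = <y, s> = Z_C + j Z_S = |Z| e^{j phi}
theta_hat = np.angle(Z)      # equivalently np.arctan2(Z.imag, Z.real)

print(theta_hat)  # close to theta = 0.7 at this noise level
```

Since Z ≈ ||s||² e^(jθ) plus a complex Gaussian perturbation, the angle of Z concentrates around θ as the signal energy grows.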
So φ is basically tan⁻¹(Z_S/Z_C): the optimal choice of θ is the angle of Z, whose tangent is Z_S/Z_C. Intuitively this makes sense, because if you take the inner product between y and s and keep rotating s, the rotation that maximizes the peak gives you the phase offset. This is something we can verify on GNU Radio very easily as well. Next we will consider maximum likelihood delay estimation. In this case we have y(t) = A s(t − τ) e^(jθ) + n(t), with complex AWGN of variance N₀. Now we are actually going to solve a multi-pronged problem, because we need to estimate the amplitude, the phase and the delay; it is a three-pronged problem and we need to find all of these. Let us collect these parameters into γ, a tuple (τ, A, θ). Intuitively, what we are saying is: take versions of s for different values of τ, A and θ, keep performing the overlap integral, and find the choice that maximizes it. Unfortunately, that involves a search over a very large space, so we will look at the structure of the problem and maximize it by doing something a little smarter than a brute force search. The first ingredient is the matched filter, which amounts to performing this kind of integral: ⟨y, s_γ⟩ = A e^(−jθ) ∫ y(t) s*(t − τ) dt. So first we are taking the matched filter; the e^(jθ) on s comes out conjugated as e^(−jθ), the signal is delayed by τ, and it is also scaled by the amplitude A.
So in a sense we are taking A e^(jθ) s(t − τ), conjugating it, and we get A e^(−jθ) ∫ y(t) s*(t − τ) dt. Now we are going to keep ||s_γ|| aside for a couple of reasons, the main one being that ||s|| does not really depend on these parameters: if you delay s, ||s|| does not change, and if you add a phase, ||s|| does not change either. So we will not consider that quantity for the moment; we just want to maximize the first term, as we did in the previous part. Define the matched filter output (y ⋆ s_MF)(τ) = ∫ y(t) s*(t − τ) dt; plugging it in, the statistic is A e^(−jθ) (y ⋆ s_MF)(τ). We have the amplitude here, the phase here and the delay here, and all of these are unknowns. Our aim here is to find the delay; to find A we had a method which we just saw, and to find θ we had another method which we just saw, so our aim now is τ. Over large intervals we are simply going to take ||s_γ||² = A²||s||². Remember, s_γ was s modified to have an amplitude scaling of A, a delay of τ, and an extra phase of e^(jθ). The e^(jθ) goes away in the norm, and the delay does not matter because, over large intervals, ||s|| does not change if you shift the signal. So the likelihood function L(y | γ) boils down to exp((1/σ²)(Re[A e^(−jθ) (y ⋆ s_MF)(τ)] − A²||s||²/2)). Let us now work on this: we want to maximize over τ, and the σ² is just a common scaling, so all we need to do is maximize the exponent.
So we want to maximize Re[A e^(−jθ) (y ⋆ s_MF)(τ)] − A²||s||²/2; for the purposes of delay, the second term does not matter and we can just ignore it. So for the purposes of delay we have to maximize the first part. How do you do it? It is very simple: A is a positive number, so we can take it out. We then have to maximize Re[e^(−jθ) (y ⋆ s_MF)(τ)], and that is intuitively very simple: if you choose θ to be the phase of (y ⋆ s_MF)(τ), the real part becomes the modulus |(y ⋆ s_MF)(τ)|. Now, to maximize this over τ, all you need to do is take the value of τ at which |(y ⋆ s_MF)(τ)| is maximum. What does this mean intuitively? You keep shifting the matched filter, constantly performing overlap integrals, and find the delay where the peak is the maximum. This makes complete sense; let me give an intuitive picture for delay estimation in the presence of Gaussian noise. Say our template signal is a pulse, and the received signal is a shifted version of it, wavy because of the noise. What is our matched filter? It is the template flipped in time, s_MF(t) = s*(−t), because if you flip the template and overlap it with itself, you get zero delay. So you convolve the received signal with this matched filter, and the convolution is likely to give you a peak; convolving the template with its own matched filter gives a triangle-like output with a peak corresponding to zero delay.
If the received pulse is delayed, the matched filter has to be brought forward by that much, so the peak of the output shifts by τ, and the location of that peak gives you τ. Even in the presence of noise, where the output is slightly wavy, the shifting of the pulse due to delay shows up as a shift of the peak, and finding the peak of the magnitude gives you the delay. This is something we will verify in GNU Radio as well: if you add a random delay and then do the matched filtering, finding by how much the matched filter output is shifted gives you the delay. In other words, you do not have to multiply by the matched filter at this time, then at that point, then at another point, and collect all those values separately, because multiplying by the matched filter and integrating at various points is just convolution. So you convolve with the matched filter, find the location of the peak magnitude, and that gives you the delay. This of course works for a single pulse; if you have a signal with multiple pulses and so on, you have to modify this slightly, but it matches our intuition: wherever you get the highest correlation peak is where the delay is, and just looking at that point gives you the value of the delay. Now, for tracking frequency offsets things are a little tricky, so let us take an intuitive approach. Say you have your signal cos(2πf_c t + θ). We multiply it by the output of a voltage controlled oscillator, pass the product through a loop filter, and feed the result back to the oscillator; this feedback based system gives you a way to track the offset. For simplicity, let us first say that the frequency is the same at both ends.
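The convolve-and-find-the-peak recipe above can be sketched numerically. The PN pulse used as the template, the delay, phase and noise values are my own illustrative assumptions; the estimator itself is the peak of the matched filter magnitude described in the lecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup (my assumptions, not from the lecture).
pulse = rng.choice([-1.0, 1.0], size=32).astype(complex)  # template with a sharp autocorrelation
true_delay = 100                                          # unknown delay tau, in samples
theta = 1.1                                               # unknown phase offset
sigma = 0.2

N = 256
y = np.zeros(N, dtype=complex)
y[true_delay:true_delay + pulse.size] = pulse * np.exp(1j * theta)
y += sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Matched filtering: correlating y against the template at every shift is the
# same as convolving y with s_MF(t) = s*(-t).
mf_out = np.correlate(y, pulse, mode='valid')  # mf_out[k] = sum_n y[n+k] * conj(pulse[n])
tau_hat = int(np.argmax(np.abs(mf_out)))       # location of the peak magnitude

print(tau_hat)  # recovers the injected delay of 100 samples
```

In this sketch the peak magnitude is about A||s||² and the phase of mf_out[tau_hat] is close to θ, so the same matched filter output also hands you the amplitude and phase estimates, consistent with the three-pronged problem discussed above.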
Look at this particular multiplication: cos(2πf_c t + θ) × (−sin(2πf_c t + θ̂)); I am only looking at a phase offset for the moment and will get to the frequency offset momentarily. If you perform this multiplication using your cos a sin b formula, you can verify that you get (1/2)[−sin(4πf_c t + θ + θ̂) + sin(θ − θ̂)]. The first term I have put in red because it can be eliminated by a low pass filter; what remains is sin(θ − θ̂). Now, when θ and θ̂ are very close, that is, when the phase offsets are very close, sin(θ − θ̂) ≈ θ − θ̂. So you can track θ̂ and try to keep matching the phase; that is what the loop filter does. The voltage controlled oscillator changes its frequency in response to the applied voltage: change the input slightly and the frequency changes, giving you a cosine or sine in proportion to the input. Because sin(θ − θ̂) is close to θ − θ̂ for small errors, if you keep doing this over time, (θ − θ̂)/t is nothing but the frequency offset. In other words, this phase locked loop, provided the frequency offset is within some suitable range, is essentially finding the phase difference and averaging it over time, and that average change of phase is the frequency offset; the loop thereby allows you to track the frequency. Now, one issue is that this phase locked loop works very well when you have a pure carrier, that is, just a cosine or sine whose frequency you are tracking; it works well even in the presence of noise, because of the averaging that low pass filters perform.
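The loop just described can be sketched as a first-order digital phase locked loop. The sample rate, carrier frequency, offset and loop gain below are my own assumptions; a single small gain stands in for the loop filter, since with a small gain the double-frequency ripple averages out over many updates.

```python
import numpy as np

# Illustrative parameters (my assumptions, not from the lecture).
fs = 10000.0   # sample rate (Hz)
fc = 1000.0    # carrier frequency (Hz), assumed known at both ends here
theta = 0.8    # unknown phase offset to be tracked
mu = 0.05      # loop gain, a crude stand-in for the loop filter

t = np.arange(2000) / fs
x = np.cos(2 * np.pi * fc * t + theta)  # received pure carrier

theta_hat = 0.0
for k in range(t.size):
    # Phase detector: multiply by -sin of the local oscillator. The product is
    # (1/2) sin(theta - theta_hat) plus a double-frequency term; the small gain
    # averages the ripple out over many samples, like the low pass filter.
    e = x[k] * (-np.sin(2 * np.pi * fc * t[k] + theta_hat))
    theta_hat += mu * e

print(theta_hat)  # settles near theta = 0.8
```

A second, integrating branch in the loop filter would additionally accumulate the average phase change and thereby track a small frequency offset, which is exactly the averaging idea described above.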
Unfortunately, when you have practical signals with data modulated onto them, you have to be very careful, because you cannot use this phase locked loop as is. Sometimes you may use a modification, and sometimes you may actually have to adapt to the data stream. If you use a training based approach, there are two options: sometimes you send a bare carrier, and sometimes you send a carrier with higher power; that particular portion is used to train the phase locked loop, and the locked frequency is then used for some time, until there is a drift. Typically, people say that you have about 10 to 100 ppm (parts per million) of offset between two clocks. What that intuitively means is that at 1 GHz, 10 ppm corresponds to (10/10⁶) × 1 GHz = 10 kHz of offset, so your offsets can be as much as tens of kilohertz. Sometimes phase locked loops are able to search and acquire, and once you are close to the actual frequency, the tracker kicks in and the loop keeps the frequency locked. So even when f_c is not equal to f_c′ but is close, that is, nominally f_c and f_c′ are close, the offset looks like a slowly changing phase offset: you just keep correcting the phase offset, and averaging this change of the phase gives you the extra frequency compensation that is needed. So this is the intuition with which your phase locked loop works. In the following class we will look at the phase locked loop from a bit of an optimization based perspective, and we will look at differential modulation as well. Thank you.