Welcome to this lecture on digital communication using GNU Radio. My name is Kumar Appaiah, and I belong to the Department of Electrical Engineering at IIT Bombay. This lecture is on synchronization and non-coherent communication. In this lecture and the sequence of lectures ahead, we will relax one more assumption about communication systems. Earlier we dealt with noise, and we saw how noise affected the detection of your symbols. We will now look at a practical aspect of receivers: suppose you do not have the correct phase of the carrier, that is, the phase of the local oscillator, available at the receiver, or there is a frequency offset. What can we do then? Similarly, we will also have to estimate the parameters of the communication system. So, in this lecture we will discuss some of these aspects and follow it up with some experiments in GNU Radio. There are several practical receiver issues, such as local oscillator offset: the carrier available at the receiver does not have exactly the same frequency as the transmitter's; there is always some offset, and there can be a phase offset as well. You also need to know how to perform synchronization, that is, where the symbol begins, so that you can actually start recovering symbols: what is the sampling location, the correct time at which to grab the samples after matched filtering? These are aspects that have to be figured out at the receiver. These receiver parameters affect performance significantly, in addition to noise: if you do not sample at the correct location, or do not have the correct carrier frequency and phase, you stand to incur a large number of errors simply because things are not set up right at the receiver. So, one aspect we will look at is the concept of parameter estimation. In other words, how do you estimate the amplitude of the signal at the receiver? How do you estimate the delay?
How do you estimate, let us say, the phase offset or frequency offset? These all fall into the category of parameter estimation: there are several unknowns, and you have to figure them out in the presence of noise as well. So we have to come up with a statistically sound way of getting the best estimates in the presence of noise. Finally, we will look at non-coherent communication, where we ask: if we do not know the precise phase or frequency of the local oscillator, can we still go ahead with communication as long as it is nominally close? When it comes to receiver design, we will make some simplifying assumptions just to get the concepts across. Recall that the transmitted baseband signal is x(t) = Σ_k b_k g_tx(t − kT): your symbols b_1, b_2, b_3, … (or b_{−1}, and so on) are drawn from your constellation, say QPSK or BPSK or any such constellation, and g_tx(t) is your template pulse shape. It could be a sinc, a root raised cosine, a rectangular pulse, whatever pulse you have selected based on the power and bandwidth constraints imposed on you. The passband version is Re[x(t) e^{j2πf_c t}]: you do the upconversion, and if you remember, this is x_c(t), the I component, times the cosine, minus x_s(t), the Q component, times the sine. That is the waveform you get, and it sits nominally at f_c, occupying f_c ± W/2, assuming the baseband bandwidth of x(t) is W.
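The construction above can be sketched numerically. This is a minimal illustration, not from the lecture itself: all parameter values (sampling rate, carrier frequency, samples per symbol) are arbitrary demo choices, and a rectangular pulse stands in for g_tx. Since BPSK symbols are real, the passband signal reduces to x(t) cos(2πf_c t).

```python
import numpy as np

# Sketch: build x(t) = sum_k b_k g_tx(t - kT) with a rectangular pulse,
# then upconvert to passband Re[x(t) e^{j 2 pi f_c t}].
# fs, fc, sps are arbitrary demo choices, not values from the lecture.
fs = 8000.0          # sampling rate (Hz)
fc = 1000.0          # carrier frequency (Hz)
sps = 8              # samples per symbol, so T = sps / fs

rng = np.random.default_rng(0)
b = rng.choice([-1.0, 1.0], size=32)      # BPSK symbols b_k
g = np.ones(sps)                          # rectangular template pulse g_tx
x = np.kron(b, g)                         # sum_k b_k g_tx(t - kT), sampled

t = np.arange(len(x)) / fs
x_bp = np.real(x * np.exp(2j * np.pi * fc * t))   # passband waveform

# For a real baseband signal, Re[x e^{j theta}] = x cos(theta)
assert np.allclose(x_bp, x * np.cos(2 * np.pi * fc * t))
```

In GNU Radio the same chain would be built from a symbol source, a pulse-shaping filter, and a multiply with a signal source acting as the carrier.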
Some key issues we are concerned with are, first, delay: where do our symbols begin, where do we start taking our samples? For example, if you do not catch the peak correctly, you will incur a lot of symbol errors. Second, sampling offset: if you do not sample precisely at the right point, you will again incur symbol errors. And one more important thing: is there a carrier offset? That is, are you getting f_c correctly at the receiver, and are you getting the phase correctly? In fact, we will see that if you do not have the phase or the frequency right, you will start having issues. Let us do a very quick exercise to check that. Let us title this "phase offset". I will simplify x(t) to be just a real signal m(t), so the passband signal is x_bp(t) = m(t) cos(2πf_c t). Now suppose at the receiver you only have cos(2πf_c t + φ); I am ignoring the Q part and sticking to the real signal, but the concept is similar. So let us do this: multiply m(t) cos(2πf_c t) by cos(2πf_c t + φ). This is of the form cos A cos B, and if you remember, 2 cos A cos B = cos(A − B) + cos(A + B). Ignoring the scaling, this equals m(t) times the cosine of the difference, cos φ, plus the cosine of the sum, cos(2π · 2f_c t + φ). Now, this 2f_c term can be filtered out, because it is at twice the carrier frequency: if you just apply a low-pass filter, it is gone.
So, what you end up getting is m(t) cos φ, and this is actually not very good. If φ is unknown, say because the receive carrier has a phase lag or phase difference relative to the transmit carrier, then if φ is close to π/2 it essentially nulls your signal; and even if φ is not close to π/2, as long as it is not exactly 0, your SNR is degraded by a factor of cos²φ. So this is dangerous. Similarly, there is the concept of frequency offset. In the case of a phase offset you have only one number to figure out, the phase; in the case of a frequency offset, the local oscillator at the receiver is not at f_c but at f_c + Δf. So you have m(t) cos(2πf_c t), and at the receiver you multiply by cos(2π(f_c + Δf)t + φ). If you evaluate this using the same trick as before, 2 cos A cos B = cos(A − B) + cos(A + B), and ignore the factor of 2, you get m(t) times the cosine of the difference, cos(2πΔf t + φ), since the f_c terms cancel in the difference, plus a term near 2f_c, which I am not writing because it can be filtered out. Now you have to figure out Δf and φ, because if you do not, and Δf is, say, 50 or 60 hertz, you are going to have an oscillating signal. You want m(t) back at the receiver so that you can extract the symbols, but this m(t) now carries an additional modulating term: some parts are going to be high, some parts are going to be low, and figuring out your symbols is going to be impossible unless you zero out Δf and figure out φ.
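The two effects just derived can be checked numerically. In this sketch, all parameter values are arbitrary demo choices, and a simple moving average over one carrier period stands in for the low-pass filter; it is not a GNU Radio flowgraph, just a verification of the algebra above.

```python
import numpy as np

# Sketch: downconverting with a carrier that has a phase offset phi scales the
# recovered baseband by cos(phi); a frequency offset delta_f leaves a residual
# oscillation cos(2 pi delta_f t + phi). All values are demo choices.
fs, fc = 100_000.0, 10_000.0
t = np.arange(0, 0.01, 1 / fs)
m = np.ones_like(t)                        # constant "message" m(t), for clarity

def downconvert(rx_carrier):
    """Mix with the receiver's carrier, then crudely low-pass filter
    (moving average over one carrier period) to remove the 2*fc term."""
    mixed = m * np.cos(2 * np.pi * fc * t) * rx_carrier
    taps = int(fs / fc)
    return np.convolve(mixed, np.ones(taps) / taps, mode="same")

mid = slice(len(t) // 4, 3 * len(t) // 4)  # ignore filter edge effects

# Phase offset only: recovered baseband is (1/2) m(t) cos(phi)
phi = np.pi / 3
y_phase = downconvert(np.cos(2 * np.pi * fc * t + phi))

# Frequency offset: the recovered signal oscillates at delta_f, corrupting m(t)
delta_f = 200.0
y_freq = downconvert(np.cos(2 * np.pi * (fc + delta_f) * t + phi))
```

With φ = π/3, the phase-offset output settles at (1/2)cos φ = 0.25 instead of 0.5, the cos²φ SNR loss in action; the frequency-offset output swings through zero at a 200 Hz rate, which is exactly why the symbols become unreadable.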
So, in other words, whenever you have phase and frequency offsets, getting back your symbols is a challenge, and this is something you have to address by figuring these parameters out. One more thing: suppose your m(t) undergoes a scaling by some amplitude a. This a also has to be figured out. Say your constellation is 16-QAM with its decision regions: if the received constellation is scaled relative to what the receiver assumes, the points land in the wrong decision regions and your decisions are going to be wrong. So the amplitude is also needed. In other words, you have to figure out these parameters: where to sample, what the frequency offset is, what the amplitude is. We are going to see how to achieve this in steps. Let us proceed. Now, an introduction to parameter estimation. Recall that maximum likelihood (ML) and MAP, that is, maximum a posteriori, were used when we wanted to figure out which symbol was transmitted. What we did was essentially find the appropriate probability density function: given the received y, which of the symbols was most likely to have been sent? That is one way to look at MAP, and in the case of equiprobable symbols we could do ML, asking which symbol maximizes the probability that this y was received. So we formulated those problems. The difference between detection and parameter estimation is that in detection your symbols were one of finitely many; they came from a constellation like QPSK or 16-QAM.
So with 16-QAM you had to go through 16 symbol choices and find the one that maximizes your metric, the arg max, or minimizes it if you put it in the distance format: you find the point that is closest. In estimation, there is a difference. Here we are looking at estimating a parameter θ, where θ can be any of the parameters: the amplitude, the phase, the delay, whatever. We have a y that depends on this θ, and θ can be a continuous quantity; for example, in the case of phase, θ is a number between −π and π, or 0 and 2π. Therefore, in the parameter estimation problem we have a continuum of values: we are choosing one particular value from an infinite set. That is where the difference lies, and that is why we call this an estimation problem. Much like in detection, the ML estimate is θ̂_ML = arg max_θ p(y | θ): we basically keep moving θ and find the θ that maximizes p(y | θ). Of course, a probability density is non-negative by definition, so we can take the log; since log is a monotonic function, taking the arg max of the log gives the same answer. This comes in handy for functions like the Gaussian, because when you have an e^{−(·)}, taking the log makes things much easier. The Bayesian version applies when you have a prior on θ: let us say you are doing phase estimation and you know the phase lies between −π/2 and π/2 with very high probability; then you can plug in that prior distribution, and p(θ | y) is proportional to p(y | θ) p(θ).
Now, if p(θ) is the same for all θ, these two criteria become equal. This is consistent with our discussion of detection theory, where MAP and ML were the same for equiprobable symbols. So these are things to keep in mind; always remember it is sometimes convenient to take the log if your expressions become simpler. Let us look at the first parameter estimation problem: estimating the amplitude a. As a simplified example, say you are doing BPSK communication, so b is either +1 or −1, and you have a system where you receive just a number y = a b + n and have to figure out what a is; you are affected by Gaussian noise n with mean 0 and variance σ². This problem is actually just a reformulation of your BPSK problem. We use lowercase a to denote the actual value. If b is known to be +1, then y is a Gaussian random variable with mean a and variance σ²; if b is −1, then y is Gaussian with mean −a and variance σ². That is what is mentioned here: if b is known, p(y | a, b = +1) = e^{−(y−a)²/(2σ²)} / (σ√(2π)), and p(y | a, b = −1) = e^{−(y+a)²/(2σ²)} / (σ√(2π)). From the previous slide, the parameter we are estimating is a; we just have to do an arg max, and it is very easy to do: the first expression is maximized when you put a = y, and similarly the second is maximized when you put a = −y.
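As a sanity check on the known-b, one-sample case, one can maximize the log-likelihood numerically over a grid of candidate amplitudes and verify that the arg max sits at a = b·y. This is a minimal sketch; the particular values of y, b, and σ are arbitrary demo choices.

```python
import numpy as np

# Sketch: for y = a*b + n with known b, the likelihood
#   p(y | a, b) = exp(-(y - a*b)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))
# is maximized over a at a_hat = b * y. Demo values below are arbitrary.
sigma, y, b = 1.0, -1.7, -1.0

a_grid = np.linspace(0.0, 5.0, 5001)      # candidate amplitudes (a > 0)
# Log-likelihood up to constants; taking the log removes the exponential
log_lik = -(y - a_grid * b) ** 2 / (2 * sigma**2)
a_hat = a_grid[np.argmax(log_lik)]        # approx. 1.7, i.e. b * y
```

The grid search is of course unnecessary here since the maximum is available in closed form; it is shown only to mirror the "keep moving θ and find the arg max" picture from the previous paragraph.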
It is very evident with just one sample; let us look at it intuitively in the absence of noise. With no noise, you receive exactly ab: if b is +1 you get a, and if b is −1 you get −a. So if b is known, which corresponds to training-based estimation, that is, you tell the receiver, "I am going to send you +1", then the receiver knows what to expect and can use that to figure out a more accurately; or you tell the receiver "I am sending −1", and it can just take the negative. The ML estimate is simply what you receive, even in the presence of noise, because you do not know what else to do: noise takes you to the left or right equally probably, so your best guess is the mean. That is why you get â = y when b = +1 and â = −y when b = −1. Now, if b is not known (apologies, the slide should read "not known"), we have a situation where we need to average over b. Say you are the receiver and you only know that the transmitter sends +1 or −1, not which one. There is still an intuition you can use: suppose the value you receive is very high, a large positive value; then you can be reasonably sure that +1 was sent. By the way, I did not mention this earlier: a is a positive number; we are not concerned with negative a, because you can always swap your b's appropriately to handle it.
Let us say the transmitter sends something you do not know, and the noise variance is known, say σ² = 1. If you receive +7, you are into reasonable positive territory: you can be sure that +1 was sent, and you can guess that a should be close to 7 (if you got 7.1, your estimate should be 7.1). Similarly, if you get −6, a large negative value, you can take a to be close to +6, because at such a high negative value the probability that +1 was sent is very unlikely. The problem occurs when you are close to 0: then you do not know what was sent, and that is where the cosh starts coming into the picture. If you simplify the averaged likelihood, e^{−(y−a)²/(2σ²)} plus e^{−(y+a)²/(2σ²)}, you can see that e^{−y²/(2σ²)} is a common factor, e^{−a²/(2σ²)} is a common factor, and the remaining terms, e^{−ay/σ²} and e^{+ay/σ²}, combine into the hyperbolic cosine, cosh(ay/σ²). This cosh, which arises precisely because both signs of b are possible, accounts for the uncertainty in b when the received signal is close to 0 in comparison with σ: if y is only slightly positive, you do not have a reliable estimate, and the value of a you get will be quite small. Intuitively, if you are 3σ or 4σ away from 0, you just know that the transmitter sent +1 or −1, as appropriate.

Now, typically, estimation based on just one sample, or one-shot estimation, is never enough. Say you are using your cell phone, which is essentially calibrating itself to the base station. If you are really close to the base station you can get a signal very easily, but suppose you are standing inside a building or a tunnel: you will not get enough SNR, because your noise will be quite high in comparison with a. In such a situation you want to use multiple symbols. The vector version of the problem is y_k = a b_k + n_k, for k = 0, 1, …, K−1, with a the unknown we want to estimate. It is the same problem, except we now repeat it multiple times: we get K measurements y_0, y_1, …, y_{K−1}. Since the noise realizations n_k are independent and identically distributed Gaussians, intuitively, if we just average appropriately, we will get a proper estimate. Each b_k is either +1 or −1. Say I give you the values b_0, b_1, b_2, …: if b_k is −1 you take the negative of y_k, and if b_k is +1 you take y_k as is. In the extreme case where all the b's are +1, you get y_0 = a + n_0, y_1 = a + n_1, y_2 = a + n_2, and so on. The best thing you can do is average, because averaging also averages the noise, and that reduces the noise variance. To do this formally, write y as a column vector of the stacked values y_0, y_1, …, y_{K−1}, and similarly stack vectors b and n. Your equation is then y = a b + n, which is what is being used here.
Now, in this case you can see that y is a Gaussian random vector with mean a b and covariance σ²I. If you know b, all you need to do is perform the weighted average we discussed: if you multiply y by bᵀ, you will find that bᵀy is a sufficient statistic (you can prove this), and you get the estimate â = (1/K) Σ_k b_k y_k very easily. The intuition, like I mentioned, is to flip the signs and average; every sample gets equal weight because the noise is independent and identically distributed, so averaging does the job for you. Now suppose you want to do the same parameter estimation in the situation where, unfortunately, you do not know the b_k. By the way, this is also very practical, because in some communication paradigms the transmission is sent to multiple people. Say 10 people are receiving a signal: you cannot waste time calibrating the first user, then the second, then the third, and so on. You keep sending the transmission and everyone wants to receive it, so they should come up with a way to manage without training; for example, if user 1 is already receiving, you do not want to waste time training user 2, because user 1's reception would be affected. So, is there a way to estimate a without having to know the b's in advance? This is called the blind estimation approach: the same formulation, except that b is not known at the receiver. In that case you set up the problem much like before, but you also have to average over the b's, that is, marginalize over the b's to get rid of them, and you can work that out.
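The training-based (known-b) estimate above is easy to simulate. In this sketch, the true amplitude, noise level, and number of training symbols are arbitrary demo choices; the point is that the "flip signs and average" estimate concentrates around the true a with variance σ²/K.

```python
import numpy as np

# Sketch: training-based amplitude estimation for y_k = a * b_k + n_k
# with known BPSK training symbols b_k. The ML estimate is
#   a_hat = (1/K) * sum_k b_k * y_k  (flip signs, then average).
# a_true, sigma, K are arbitrary demo choices.
rng = np.random.default_rng(1)
a_true, sigma, K = 2.0, 1.0, 10_000

b = rng.choice([-1.0, 1.0], size=K)       # known training symbols
n = rng.normal(0.0, sigma, size=K)        # i.i.d. Gaussian noise
y = a_true * b + n

a_hat = np.mean(b * y)                    # (1/K) b^T y
```

Since b_k² = 1, we have b_k y_k = a + b_k n_k, so â is the true a plus an average of K i.i.d. noise terms, and its standard deviation is σ/√K, here 0.01.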
I am not going to do that derivation as part of this lecture, but if you do, you end up with a form reminiscent of what you saw just previously. Earlier the estimate was (1/K) Σ_{k=0}^{K−1} y_k b_k, which was flipping the signs and adding. In the blind case, the b_k is replaced by tanh(â y_k / σ²), the hyperbolic tangent. This tanh is a kind of soft estimate of b. If you look at the hyperbolic tangent, it is a very nice function: it is −1 at −∞ and +1 at +∞. If you receive a y that is highly positive or highly negative, several times σ in magnitude, you essentially give that sample full weight, with the appropriate sign; if y is close to 0, the tanh is close to 0, and you do not give it much weight. So it is almost as if the estimator is implicitly estimating the b_k's while giving you an estimate of a. This kind of function is also called a sigmoid function, and it has the property of providing a soft estimate: if a sample is very reliable, you add its weight; if it is not reliable, you do not add much weight for that particular sample. Intuitively, if you call b̂_k = tanh(â y_k / σ²), then whenever you can reliably say a sample has high positive or high negative amplitude, you give it more weight. That is the approach we take here. Let us stop here for this lecture.
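The blind estimate can be computed by solving the resulting equation as a fixed point. This is a sketch of one common way to do it, not a construction taken verbatim from the lecture: marginalizing over the b's yields the condition â = (1/K) Σ_k y_k tanh(â y_k / σ²), which we iterate from a moment-based starting guess (E[y²] = a² + σ² suggests â₀ = √(mean(y²) − σ²)). All parameter values are demo choices.

```python
import numpy as np

# Sketch: blind amplitude estimation without knowing the symbols b_k.
# Marginalizing over b_k = +/-1 gives the fixed-point condition
#   a_hat = (1/K) * sum_k y_k * tanh(a_hat * y_k / sigma^2),
# where tanh acts as a soft estimate of each b_k. Demo values below.
rng = np.random.default_rng(2)
a_true, sigma, K = 2.0, 1.0, 50_000

b = rng.choice([-1.0, 1.0], size=K)       # unknown to the receiver
y = a_true * b + rng.normal(0.0, sigma, size=K)

# Moment-based starting guess: E[y^2] = a^2 + sigma^2
a_hat = np.sqrt(max(np.mean(y**2) - sigma**2, 1e-12))
for _ in range(50):                       # fixed-point iteration
    b_soft = np.tanh(a_hat * y / sigma**2)   # soft symbol estimates
    a_hat = np.mean(y * b_soft)
```

At this SNR (a/σ = 2) most soft decisions are near ±1, so the blind estimate lands close to the training-based one; at low SNR the tanh shrinks unreliable samples toward zero, exactly the behavior described above.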
In the next lecture we will continue our discussion of parameter estimation and look at how to estimate other parameters as well, for example those corresponding to phase, frequency, delay, and so on. Thank you.