Hello, welcome back to this lecture on digital communication using GNU Radio. My name is Kumar Appaiah, and I belong to the Department of Electrical Engineering at IIT Bombay. In this lecture we are going to continue from where we left off in the last lecture, where we were discussing optimal reception in the additive white Gaussian noise (AWGN) channel. Before we go to our problem of binary signaling, let us recollect the tools that we developed in the previous lecture. In particular, if you consider M-ary signaling in an AWGN channel, you have a vector y = sᵢ + n. Remember, how do we get these vectors? We get them by projecting the signals onto the basis signals, so you can treat your signals as vectors. The effective noise that affects your signaling is the vector obtained by projecting the noise onto your basis signals, and we said that, by the theorem of irrelevance, the other part of the noise, n-perpendicular, does not affect your detection. So you have the relationship y = sᵢ + n, where n is a zero-mean Gaussian noise vector with covariance matrix σ²I, meaning that component-wise the noise has variance σ², and the components are uncorrelated and, being jointly Gaussian, independent.

The ML detection rule that we derived came from writing the likelihood function, which is Gaussian, and maximizing it. Since the relationship between y and sᵢ appears only through the factor exp(−‖y − sᵢ‖²/(2σ²)) once you substitute y = sᵢ + n into the PDF, the only part that affects you is ‖y − sᵢ‖². Because you have to maximize exp(−‖y − sᵢ‖²/(2σ²)) over the different i's, you just have to minimize ‖y − sᵢ‖², which is why the ML detection rule reduces to the minimum distance detection rule. We also added a simplification: if you want to write it in terms of inner products, you can expand ‖y − sᵢ‖²; there is a common ‖y‖² term that you can eliminate, and after flipping the sign the rule becomes arg maxᵢ ⟨y, sᵢ⟩ − ‖sᵢ‖²/2. These two forms are equivalent, and you can use either; sometimes one will be easier than the other. For the minimum probability of error (MPE) rule, the only difference is that when you write the likelihood function the prior probability πᵢ also comes in, which is why an extra term σ² ln πᵢ appears in the expansion. If ln πᵢ is equal for all i, then this term is common to all hypotheses, the arg max does not depend on πᵢ, and the rule reduces to ML. So ML and minimum probability of error are equivalent in the case of equally likely transmitted symbols. To give you an idea of why this actually matters, suppose there is a scenario where one symbol has a much higher probability: say the probability of sending 1 is 0.9 and the probability of sending 0 is 0.1. I will give you a simple exercise to think about: the probability of sending 1 is 0.9, which means that if the receiver just closes its eyes and always declares "1 was sent", it is going to be right 90 percent of the time.
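To make these rules concrete, here is a minimal NumPy sketch (my addition, not from the lecture); the two-dimensional constellation, noise level, and priors are all illustrative placeholders. It checks that the minimum-distance form and the inner-product form of the ML rule agree, and shows how the σ² ln πᵢ term turns ML into the MPE (MAP) rule for unequal priors like those in the exercise above.

```python
import numpy as np

# Minimal sketch (not from the lecture): ML and MPE detection for a
# made-up 4-point constellation in two dimensions. All parameter
# values below are illustrative placeholders.
rng = np.random.default_rng(0)
S = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # rows are the s_i
sigma2 = 0.25                                # per-component noise variance sigma^2
priors = np.array([0.9, 0.05, 0.03, 0.02])   # unequal priors, as in the exercise

i_true = 0
y = S[i_true] + np.sqrt(sigma2) * rng.standard_normal(2)   # y = s_i + n

# ML, form 1: minimum distance, arg min ||y - s_i||^2
i_ml_dist = int(np.argmin(np.sum((y - S) ** 2, axis=1)))

# ML, form 2: correlator, arg max <y, s_i> - ||s_i||^2 / 2
metric = S @ y - 0.5 * np.sum(S ** 2, axis=1)
i_ml_corr = int(np.argmax(metric))
assert i_ml_dist == i_ml_corr               # the two forms always agree

# MPE (MAP): add sigma^2 * ln(pi_i); reduces to ML when priors are equal
i_mpe = int(np.argmax(metric + sigma2 * np.log(priors)))
print(i_ml_dist, i_mpe)
```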
In that situation you can definitely see that doing ML is not optimal, and if you actually work out the MPE rule, it will give you a probability of success of a little more than 0.9. That is an exercise you can look at. We saw the proof of the MPE rule in the previous class, so we will not delve into it again.

So today let us focus on binary signaling. Binary signaling is a very basic ingredient: you can think of it as the act of sending bits, but it will also serve as a building block for signaling with a larger number of signals. To understand binary signaling and its performance, we will consider the simple case of on-off signaling. In this scheme, either you switch on the voltage and send a signal, or you send nothing; those are the two modes, essentially a 1 or a 0. That is the model with which we are sending. We are going to inspect only one symbol, although you can imagine that this transmission is repeated again and again, once every symbol interval T, as you saw in the discussion of pulse shaping. So let us take the two scenarios. Under hypothesis H1, s(t) is sent and y(t) = s(t) + n(t) is received; this is the situation where we send the bit 1. Under hypothesis H0, we send the bit 0, that is, we send nothing, and y(t) = n(t); if you do not send anything, at the receiver you are only measuring the noise. Now we want to use the knowledge we have gained, since we have proved the optimal detection mechanism, and we are going to assume that 1 and 0 are equally likely, so minimum probability of error is the same as maximum likelihood. What we are going to do is find the optimal detector. When we closed the last class, we guessed that if we use the inner product Z = ⟨y, s⟩ as the number by which we make our decision, the decision rule essentially becomes: is Z more or less than ‖s‖²/2? We made this guess for good reason: when there is no noise and you send s(t), this inner product ⟨y, s⟩ is going to be ‖s‖², and when there is no noise and you send 0, it is going to be 0. So we are guessing that the threshold is the midpoint. That is basically what we are getting at, and we will look at the ML error probabilities and so on after checking whether this is the correct decision criterion. Whenever we go into a problem like this, until you get comfortable it is a good idea to match it with our vector picture and see what kind of signaling is happening. In this case we have only s(t) as our signal of interest. This s(t) can be anything; let us say s(t) is a pulse that equals 1 between t = 0 and t = 1. You can take a sinc, you can take anything, but for simplicity I am taking this.
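Here is a small sketch of this guessed rule (my addition; the rule itself is verified later in the lecture). The pulse is the unit rectangle just described; σ and the time step dt are illustrative, and the noise samples are a standard discrete stand-in for white noise with PSD σ², that is, per-sample variance σ²/dt, which makes Var(⟨n, s⟩) = σ²‖s‖², matching the derivation that follows.

```python
import numpy as np

# Sketch of the guessed on-off rule (my addition; verified later in the
# lecture). s(t) is the unit pulse on [0, 1); dt and sigma are
# illustrative. Noise samples have variance sigma^2/dt, a discrete
# stand-in for white noise with PSD sigma^2.
rng = np.random.default_rng(1)
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
s = np.ones_like(t)                        # s(t) = 1 on [0, 1)
sigma = 0.5

bit = 1                                    # try bit = 0 as well
n = sigma / np.sqrt(dt) * rng.standard_normal(t.size)
y = bit * s + n

z = np.sum(y * s) * dt                     # Z = <y, s> = integral y(t) s(t) dt
threshold = 0.5 * np.sum(s * s) * dt       # ||s||^2 / 2
bit_hat = int(z > threshold)
print(z, threshold, bit_hat)
```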
Now, the way I have taken s(t), if you want to do something like Gram-Schmidt orthonormalization, it is very easy, because there is only one signal and you only need the first step: ψ(t) = s(t)/‖s‖, and for the unit pulse s(t) is itself the basis function ψ(t). Or, if you want to have some fun, you need not take the amplitude as 1; you can take it as 10, in which case ψ(t) = s(t)/10, and so on. So imagine that there is a basis in the background: this basis is ψ(t) = s(t)/‖s‖, obviously, because if you just take s(t) and divide it by the square root of ∫s²(t)dt, then ψ(t) is s(t) scaled to unit energy. That is the picture you should have. Here we have one-dimensional signaling, because there is only one s(t); if there were an s₁(t) and an s₂(t) lying in different dimensions, you could have had multi-dimensional signaling, but in this case we have one-dimensional signaling. Fine, let us now actually work this out in a rather neat way and find out whether this decision rule is correct. So let me go to my piece of paper.

Now what I am going to do is work out this particular scenario and see how it pans out. Let us sketch the probability density functions of Z under hypothesis H0 and under hypothesis H1. Under H0, nothing is sent and only noise is received; under H1, s(t) is sent, noise is added, and that is what is received. These are projected onto s: if you check what we did, you had this inner product ⟨y, s⟩, so you are essentially just going to take the inner product with s, and we are going to start with that picture. If you send nothing, you get a density centred at 0; this is the PDF of Z under hypothesis H0. We are going to write these as f(z|·), where the dot is H0 or H1; that is, one curve is the PDF of Z under H0, the other is the PDF of Z under H1, and the centre of the second curve is going to be the mean of Z when H1 is true. So let us write those things down, and let me take this a little more slowly so that you can get a better idea. The question is: what are the means and variances of these two Gaussians? First of all, why are these Gaussian at all? Because s(t) is a known signal: if you take s(t) + n(t) and convert it to a number via Z = ⟨y, s⟩, the only randomness is the Gaussian noise, and everything else is a fixed quantity. Under H0, y(t) is just n(t), so Z = ⟨y, s⟩ = ⟨n, s⟩; there is a slight catch in that this equals 0 only in the situation where there is no noise at all, and in general it is a zero-mean random variable.
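Here is a quick numerical illustration (my addition) of the basis construction mentioned above: take the pulse with amplitude 10 and check that ψ(t) = s(t)/‖s‖ comes out with unit energy. The time step is an illustrative discretization.

```python
import numpy as np

# Quick numerical illustration (my addition) of the normalization
# psi(t) = s(t) / ||s||, using the amplitude-10 pulse mentioned above.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
s = 10.0 * np.ones_like(t)                 # pulse of amplitude 10 on [0, 1)

norm_s = np.sqrt(np.sum(s ** 2) * dt)      # ||s|| = sqrt(integral s^2(t) dt) = 10
psi = s / norm_s                           # unit-energy basis signal

print(norm_s)                              # ~10.0
print(np.sum(psi ** 2) * dt)               # ~1.0: psi indeed has unit energy
```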
The other thing we are going to do is say that under H1, Z = ⟨y, s⟩ = ⟨s + n, s⟩ = ‖s‖² + ⟨n, s⟩. So this is the situation we have: if nothing is sent, we measure only the noise; if something is sent, we will be measuring s + n. Now once we go back, we have these two scenarios, and we will find the ML error probabilities by looking at the value of Z. But before that, let us first get clear about what we have written. Z is conditionally Gaussian. Why? Because if you take ⟨y, s⟩, the y contains n(t), which is Gaussian, and everything else that gets added is a fixed number.

So E[Z|H0] = E[⟨n, s⟩] = 0, and Var(Z|H0) = Cov(⟨n, s⟩, ⟨n, s⟩) = σ²‖s‖², which is something we have seen earlier. Similarly, E[Z|H1] = E[⟨s + n, s⟩] = ‖s‖², and Var(Z|H1) = Cov(⟨s + n, s⟩, ⟨s + n, s⟩), which, using our result for inner products of the noise with fixed signals, comes out the same; this is something we will now verify. So let us first go back and derive those results, and for simplicity I am just going to write them in a nice way.

What do we have? Our definition is Z = ⟨y, s⟩, and this Z is a random variable because it depends on n. Now, what does E[Z|H0] mean? Under H0, y(t) is just n(t), so E[Z|H0] = E[⟨n, s⟩]. This is not at all surprising, because you only measure noise, and Z has only the noise component overlapped with s. If you write the integral and take the expectation inside, this is E[∫ s(t)n(t)dt] (taking the integral from −∞ to ∞ in the more general case), which becomes ∫ s(t)E[n(t)]dt = 0, because your noise was zero mean, so E[n(t)] = 0. I am going to leave this as is, and I am not going to repeat these integrals for the other cases. Now, since we need to find the error probability, we also need to characterize this Gaussian under H0 more carefully. The mean, as I have drawn it, is at 0, so that guess was correct. What about the variance; how fat is this Gaussian? For that we need to find Var(Z|H0). To do this, remember that we had the result E[⟨n, v₁⟩⟨n, v₂⟩] = σ²⟨v₁, v₂⟩, where v₁ and v₂ are any two fixed signals. We derived this two lectures ago, and it will come in handy in the work below. So what is Z under hypothesis H0? y is just n, so Var(Z|H0) = E[⟨n, s⟩⟨n, s⟩]. Actually, rather than writing it as an expectation, let me write it as a covariance, because that will make it easier to use this formula; you know that Cov(X, X) is the variance, and ⟨n, s⟩ here is a zero-mean real random variable. So I am writing this as Cov(⟨n, s⟩, ⟨n, s⟩).
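Before finishing the variance computation, here is a quick Monte Carlo check (my addition) of the identity E[⟨n, v₁⟩⟨n, v₂⟩] = σ²⟨v₁, v₂⟩, using the same discrete white-noise stand-in as before; the test signals v₁, v₂ and all parameters are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo check (my addition) of E[<n, v1><n, v2>] = sigma^2 <v1, v2>,
# with a discrete stand-in for white noise (per-sample variance sigma^2/dt).
# v1, v2 and all parameters are arbitrary illustrative choices.
rng = np.random.default_rng(2)
dt, sigma = 1e-2, 0.7
t = np.arange(0.0, 1.0, dt)
v1 = np.ones_like(t)                       # two fixed test signals
v2 = t.copy()

trials = 20000
n = sigma / np.sqrt(dt) * rng.standard_normal((trials, t.size))
z1 = n @ v1 * dt                           # <n, v1>, one value per trial
z2 = n @ v2 * dt                           # <n, v2>

print(np.mean(z1 * z2))                    # empirical E[<n,v1><n,v2>]
print(sigma ** 2 * np.sum(v1 * v2) * dt)   # sigma^2 <v1, v2> ~ 0.245
```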
Applying that formula, this gives me σ²⟨s, s⟩, which equals σ²‖s‖². This means that the distribution of Z under hypothesis H0 is completely characterized: when 0 is sent, the Gaussian has mean 0 and variance σ²‖s‖². That is the full characterization of the PDF of Z under hypothesis H0, which is another way of saying 0 was sent.

Our next task is to repeat this for H1. Under H1, our reference is that y(t) = s(t) + n(t). So Z = ⟨s + n, s⟩ = ⟨s, s⟩ + ⟨n, s⟩ = ‖s‖² + ⟨n, s⟩, fine. What is the expectation of Z under hypothesis H1? You just apply the expectation: ‖s‖² is a fixed number, because s is a deterministic signal, so this is ‖s‖² + E[⟨n, s⟩], and we just proved above that E[⟨n, s⟩] = 0; the same result applies here. So E[Z|H1] = ‖s‖²: the mean of Z under hypothesis H1 is ‖s‖². For the variance, Var(Z|H1), I am just going to use the covariance approach again. I will write it as Cov(Z, Z) = Cov(‖s‖² + ⟨n, s⟩, ‖s‖² + ⟨n, s⟩); if you want to do it directly there is no problem, but I have the same formula: E[⟨n, v₁⟩⟨n, v₂⟩] equals Cov(⟨n, v₁⟩, ⟨n, v₂⟩), since the noise is zero mean, and this is σ²⟨v₁, v₂⟩. Unfortunately, here it is not immediately in the same form, because there is an extra ‖s‖² term, but we can always use the linearity of the inner product, and ‖s‖² is a fixed number: for the purposes of covariance we can subtract fixed numbers out. So this is just Cov(⟨n, s⟩, ⟨n, s⟩), which, not surprisingly, equals σ²‖s‖², the same as before. Therefore our conclusion here is that the Gaussian under hypothesis H1 has its mean at ‖s‖² and has the same variance as the other Gaussian. Therefore, by symmetry, you should be able to safely conclude that the crossover point of the two densities is the midpoint between 0 and ‖s‖², which is ‖s‖²/2. So now, when you want to make a decision, how do you decide, given Z, which hypothesis is more likely? It turns out to be very easy: you just have to find out which side of ‖s‖²/2 you are on. If you are on the right side of ‖s‖²/2, it is more likely that H1 happened; if on the left, it is more likely that H0 happened.
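Before turning this into error probabilities, here is a short simulation check (my addition, same discrete white-noise stand-in, illustrative parameters) of the two conditional distributions we just derived: Z should have mean 0 under H0, mean ‖s‖² under H1, and variance σ²‖s‖² under both.

```python
import numpy as np

# Simulation check (my addition, illustrative parameters) of the two
# conditional distributions of Z: mean 0 under H0, mean ||s||^2 under H1,
# variance sigma^2 ||s||^2 under both.
rng = np.random.default_rng(3)
dt, sigma = 1e-2, 0.5
t = np.arange(0.0, 1.0, dt)
s = np.ones_like(t)
norm_s2 = np.sum(s ** 2) * dt              # ||s||^2

trials = 50000
n = sigma / np.sqrt(dt) * rng.standard_normal((trials, t.size))
z0 = n @ s * dt                            # Z under H0: y = n
z1 = (s + n) @ s * dt                      # Z under H1: y = s + n

print(z0.mean(), z0.var())                 # ~0,        ~sigma^2 ||s||^2
print(z1.mean(), z1.var())                 # ~||s||^2,  ~sigma^2 ||s||^2
```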
In other words, if Z is closer to ‖s‖², you are better off saying that 1 was sent; if it is closer to 0, that is, to the left of ‖s‖²/2, you are better off saying that 0 was sent. This is the optimal decision for this kind of binary signaling, and this is where our slide comes in. We use the fact that under hypothesis H0, Z has mean 0 and variance σ²‖s‖², and under hypothesis H1, Z has mean ‖s‖² and the same variance. Now we have the recipe for making the correct decision. Unfortunately, even with what we think are correct decisions, there can be errors. Why? It could so happen that, even under hypothesis H0, when 0 was sent, the noise can carry you over to the right side of ‖s‖²/2. In a similar fashion, 1 may have been sent, and a large negative realization of the noise may carry you to the left of ‖s‖²/2, meaning that under both hypotheses there is a chance that you end up making an incorrect decision. These incorrect decisions are what we call errors, or symbol errors. So what we want to find out is: in this scenario, what is the probability of making symbol errors?

Let us do that afresh and move to the next page. So, symbol errors: we want to find the symbol error probability. I will draw, with some spacing, the density under hypothesis H0 and, in the other colour, the density under hypothesis H1, along with our decision region. The key points on the axis are ‖s‖², which is the same as ∫s²(t)dt, then ‖s‖²/2, and 0. Now, what is the probability of an error, say under hypothesis H0? An error happens if you send 0 but Z falls into the region to the right of the threshold. So there is a symbol error probability that you can find by computing the probability that, under H0, Z crosses ‖s‖²/2. Let us now evaluate that probability. Under H0, remember the parameters of Z: it has mean 0 and variance σ²‖s‖². Therefore, the probability of error under hypothesis H0, which I write as P(e|0), is

P(e|0) = ∫ from ‖s‖²/2 to ∞ of (1/(σ‖s‖√(2π))) · exp(−z²/(2σ²‖s‖²)) dz,

because I want to find the area of this tail. If you remember, the Gaussian density is (1/(σ√(2π)))e^(−x²/(2σ²)), and here the standard deviation is σ‖s‖. Why did I write this? Because under hypothesis H0 the mean is 0 and the variance is σ²‖s‖²; that is the distribution, and we are asking for the probability, under this distribution, that the realized random variable falls above ‖s‖²/2.
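As a quick numerical sanity check of this tail integral (my addition; the values of σ and ‖s‖ are illustrative), here is a SciPy sketch. It integrates the Gaussian tail directly and compares it against the closed form Q(‖s‖/(2σ)) derived next; SciPy's norm.sf(x) is exactly the standard normal tail probability Q(x).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Numerical sanity check (my addition; sigma and ||s|| are illustrative):
# integrate the Gaussian tail above ||s||^2/2 directly, and compare with
# Q(||s||/(2 sigma)); norm.sf(x) is the standard normal tail Q(x).
sigma, norm_s = 0.5, 1.0
var = (sigma * norm_s) ** 2                # variance of Z under H0

pdf = lambda z: np.exp(-z ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
p_int, _ = quad(pdf, norm_s ** 2 / 2, np.inf)

p_q = norm.sf(norm_s / (2 * sigma))        # Q(||s|| / (2 sigma))
print(p_int, p_q)                          # both ~0.1587 = Q(1)
```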
Let us now evaluate this analytically; it should not be too difficult, because we are good with Gaussians. We always want the standard normal, so to reduce this to the standard normal I am going to make a substitution, and I am choosing it very carefully: u = (z − mean of Z)/(square root of the variance of Z). The mean of Z is 0, so u = z/(σ‖s‖). Under this substitution, when z = ‖s‖²/2, the value of u is ‖s‖/(2σ): just substitute ‖s‖²/2 and you will get ‖s‖/(2σ). Therefore, let us evaluate the integral:

P(e|0) = ∫ from ‖s‖/(2σ) to ∞ of (1/√(2π)) · e^(−u²/2) du.

The upper limit of ∞ does not change, because the substitution is just a scaling by a fixed positive constant, and since du = dz/(σ‖s‖), the σ‖s‖ in the density gets absorbed into the dz. Now your eyes should light up, because this is exactly the Q function: this is Q(‖s‖/(2σ)). So the Q function gives you the probability of having a symbol error when the symbol 0 is sent; you can also say bit error, because we have 0 and 1. This is the probability of making an error under hypothesis H0, namely Q(‖s‖/(2σ)); if you evaluate this, you get the answer.

Before we talk about some intuitions related to this, let us also look at the probability of error under hypothesis H1. I am not going to go back and show it again, but we have seen that the distribution of Z under hypothesis H1 has mean ‖s‖² and variance σ²‖s‖², and in this situation you make an error if you go to the left of ‖s‖²/2. Intuitively, because of the symmetry, you can clearly see that the two tail areas are equal, but for the first time it is a good idea to evaluate it and verify that this is indeed the case. So I am just going to write it down quickly, and we can confirm that it is the same error probability. Under hypothesis H1 we have mean ‖s‖² and variance σ²‖s‖². Therefore our probability of error given 1 (remember, we have to be to the left of ‖s‖²/2) is

P(e|1) = ∫ from −∞ to ‖s‖²/2 of (1/(σ‖s‖√(2π))) · exp(−(z − ‖s‖²)²/(2σ²‖s‖²)) dz,

where we have to be very careful with the (z − ‖s‖²)² in the exponent. Here again we make a similar substitution: u = (z − ‖s‖²)/(σ‖s‖), so du = dz/(σ‖s‖), which takes care of the constant in front. To change the limits: when z = ‖s‖²/2, u = (‖s‖²/2 − ‖s‖²)/(σ‖s‖), which is −‖s‖/(2σ). This means my P(e|1) is

P(e|1) = ∫ from −∞ to −‖s‖/(2σ) of (1/√(2π)) · e^(−u²/2) du,

and even though the integral runs from −∞ to −‖s‖/(2σ), because of the symmetry of the Gaussian it is equivalent to integrating from +‖s‖/(2σ) to ∞. So this will also be Q(‖s‖/(2σ)).
Therefore, the probability of error under hypothesis H1 and the probability of error under hypothesis H0 are both Q(‖s‖/(2σ)), and since the two hypotheses are equiprobable, the overall probability of symbol error, or in this case bit error, is (1/2)P(e|0) + (1/2)P(e|1), which is Q(‖s‖/(2σ)).

So this is where we are going to stop for this lecture. The key takeaway is that whenever you have this kind of additive white Gaussian noise channel, the relationships between the noise and the signal, especially in the case of binary signaling, allow you to make these computations very easily, and it is essential to understand that, wherever possible, you should take advantage of symmetry to find the error probabilities more easily. You do not always have to work the full thing out; sometimes by inspection you may be able to write the answer, but just be careful. The other thing is that whenever you take a metric like Z = ⟨y, s⟩, evaluating that metric and finding the point at which you change your decision is key. So, to conclude, what we said is that under hypothesis H0 we found the distribution, under hypothesis H1 we found the distribution, and our decision threshold is at ‖s‖²/2. If the value of Z is exactly ‖s‖²/2, we cannot make a decision either way; of course, the probability that the noise will take you exactly there is zero anyway. To the left you conclude 0 was sent; to the right you conclude 1 was sent. If the noise realization is very large, you will incur symbol errors, and this symbol error probability is given by Q(‖s‖/(2σ)) under this on-off binary signaling approach, where a 0 corresponds to sending nothing and a 1 corresponds to sending s(t). In the next lecture we will extend this to binary signaling with two different signals, so that you can contrast and compare the resulting errors, and we will also talk about the energy per symbol and the energy per bit and their implications for this kind of signaling. Thank you.
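As a supplement to the lecture (my addition, with illustrative parameters and the same discrete white-noise stand-in as in the earlier sketches), here is an end-to-end Monte Carlo check: push random bits through the on-off channel, apply the correlator and the ‖s‖²/2 threshold, and compare the measured bit error rate against Q(‖s‖/(2σ)).

```python
import numpy as np
from scipy.stats import norm

# End-to-end Monte Carlo supplement (my addition, illustrative parameters):
# random bits through the on-off channel, correlator plus ||s||^2/2
# threshold, measured BER compared against Q(||s||/(2 sigma)).
rng = np.random.default_rng(4)
dt, sigma = 1e-2, 0.4
t = np.arange(0.0, 1.0, dt)
s = np.ones_like(t)
norm_s = np.sqrt(np.sum(s ** 2) * dt)      # ||s||

n_bits = 50000
bits = rng.integers(0, 2, n_bits)
noise = sigma / np.sqrt(dt) * rng.standard_normal((n_bits, t.size))
y = bits[:, None] * s + noise              # y(t) = bit * s(t) + n(t)

z = y @ s * dt                             # correlator output per symbol
bits_hat = (z > norm_s ** 2 / 2).astype(int)

print(np.mean(bits_hat != bits))           # measured BER
print(norm.sf(norm_s / (2 * sigma)))       # Q(||s||/(2 sigma)) ~ 0.1056
```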