Hello, and welcome back to this lecture series on digital communication using GNU Radio. My name is Kumar Appaiah, and I belong to the Department of Electrical Engineering, IIT Bombay. This will be the last of our lectures on demodulation, at least with respect to the theoretical part. In this lecture we will see a little more about M-ary signal demodulation, and subsequently we will move to trying these methods out and checking how they work in GNU Radio. To recall, in the last lecture we had a detailed discussion of on-off signaling, that is, binary signaling where the two possible transmitted signals are 0 and s(t). For this mode of signaling we performed an analysis of the optimal detection scheme and found the symbol error rate as well. There is one minor remark I wish to make. We chose our decision rule based on the quantity Z = ⟨y, s⟩; using Z we plotted our PDFs and found the decision rule on Z: compare it with ‖s‖²/2 and decide according to whether it falls to the left or the right of that threshold. There is also a slight modification I wanted to talk about. If you remember, we mentioned that ψ(t), the normalized version of s(t), can also serve as a decision mechanism. To give you some perspective: instead of defining Z = ⟨y, s⟩, define U = ⟨y, ψ⟩. You then get another statistic U, which is the same as Z/‖s‖.
This makes no real difference, but if you repeat the analysis with U, some quantities change slightly. Recall the relationships we had with Z: E[Z | H0] = E[⟨n, s⟩] = 0, and Var(Z | H0) = Cov(⟨n, s⟩, ⟨n, s⟩) = σ²‖s‖². With U this undergoes a slight and interesting change. Var(U | H0) = Cov(⟨n, ψ⟩, ⟨n, ψ⟩), since under H0 the received y is just n. Using our formula Cov(⟨n, v₁⟩, ⟨n, v₂⟩) = σ²⟨v₁, v₂⟩, this becomes σ²⟨ψ, ψ⟩ = σ². Similarly, Var(U | H1) also turns out to be σ², so if you draw the PDFs of U under hypotheses H0 and H1, both have the same height and width. Of course, we should calculate the means as well. E[U | H1] = E[⟨y, ψ⟩], and under H1 we have y = s + n with ⟨s, ψ⟩ = ‖s‖, so E[U | H1] = ‖s‖. What you realize, then, is that with U the mean under H1 is ‖s‖, the mean under H0 is 0, the decision threshold is ‖s‖/2, and the variance is just σ².
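These moments of U can be checked with a quick Monte Carlo simulation. This is a minimal sketch, not part of the lecture: the 2-dimensional signal vector, noise level, and sample count are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2-D signal with ||s|| = 5; sigma is the per-dimension noise std.
s = np.array([3.0, 4.0])
psi = s / np.linalg.norm(s)                # unit-norm version of s
sigma = 2.0
n = rng.normal(0.0, sigma, (500_000, 2))   # noise realizations

u0 = n @ psi            # U under H0: y = n
u1 = (s + n) @ psi      # U under H1: y = s + n

# Both variances should be ~ sigma^2 = 4; the means ~ 0 and ~ ||s|| = 5.
print(np.var(u0), np.var(u1), np.mean(u0), np.mean(u1))
```

The sample variances come out near σ² = 4 under both hypotheses, and the sample means near 0 and ‖s‖ = 5, matching the computation above.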
So, effectively, you are scaling the whole problem by ‖s‖. There is a small advantage here — not in performance: if you find the probability of symbol error you get the same result as you did with Z, which makes complete sense — but with U you can directly read off that the argument of the Q function is the distance between the two means divided by twice the standard deviation of the noise. If you remember, the Pₑ you found, under equiprobable signaling of course, was Q(‖s‖/2σ). The two means of U are 0 and ‖s‖, corresponding to the actual values being sent — if you send 0 you are at 0, if you send s(t) you are at ‖s‖ — and the distance between them on this axis, divided by 2σ, is ‖s‖/2σ. In the case of Z, from the previous lecture, the means were 0 and ‖s‖², and the standard deviation of the noise was σ‖s‖; if you do the same thing — distance between the two means divided by twice the standard deviation — you get ‖s‖²/(2σ‖s‖) = ‖s‖/2σ, the same as before. So what you can see is this: I took ⟨y, s⟩, but if you take ⟨y, ψ⟩, or the inner product with any scaled version of s, you get the same result. The only minor advantage of taking ⟨y, ψ⟩ is that you get a number automatically scaled by 1/‖s‖; there is no other difference. This is just a useful thing to know: the choice of ⟨y, s⟩ was not unique, and any scaled version of s gives you the exact same correct result.
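The equivalence of the Z and U statistics can also be verified by simulating on-off signaling end to end. This is a small sketch with illustrative parameters of my own (signal vector, noise level, trial count); the point is only that the two rules make identical decisions.

```python
import numpy as np

rng = np.random.default_rng(0)

# On-off signaling in a 2-D signal space (hypothetical example): ||s|| = 5.
s = np.array([3.0, 4.0])
sigma = 1.0                                        # noise std per dimension
n_trials = 100_000

bits = rng.integers(0, 2, n_trials)                # 0 -> send nothing, 1 -> send s
noise = rng.normal(0.0, sigma, (n_trials, 2))
y = bits[:, None] * s + noise                      # received vectors

# Two equivalent decision statistics:
z = y @ s                                          # Z = <y, s>,  threshold ||s||^2 / 2
u = y @ (s / np.linalg.norm(s))                    # U = <y, psi>, threshold ||s|| / 2

dec_z = (z > np.dot(s, s) / 2).astype(int)
dec_u = (u > np.linalg.norm(s) / 2).astype(int)

ser = np.mean(dec_z != bits)
print(f"symbol error rate ~ {ser:.4f}")            # ~ Q(||s|| / 2 sigma) = Q(2.5)
```

Both rules produce the same decisions on every trial, and the measured error rate is close to Q(‖s‖/2σ).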
So, now we move on to a more general form of binary signaling: the case where you have two signals s₀ and s₁. Here also you have to be very careful, because just writing s₀ and s₁ does not automatically mean there are two dimensions. For example, suppose s₀ is ψ₁ and s₁ is −ψ₁, or 4ψ₁, or something like that. So the dimension can be at least 1 or at most 2, depending on your choice of s₀ and s₁. We are now going to look at the general scenario, and, as I mentioned earlier, you can take the inner product with other signals related to s₀ and s₁ as well; I am just going to analyze by taking the inner products with s₀ and s₁, much like we took the inner product with s in the on-off case. So let us write this in a general way — general binary signaling. You receive y(t) = s₀(t) + n(t) if 0 is sent, and y(t) = s₁(t) + n(t) if 1 is sent. No prizes for guessing that if you set s₀ to the zero signal, you get back the on-off situation from before. Let us now use the exact same tools to understand how to find the optimal decision region. Here also there is almost no difference: compute ⟨y, s₀⟩ − ‖s₀‖²/2, because this was our decision criterion. If you remember, you took ‖y − sᵢ‖², wanted to minimize it, expanded it, removed the common ‖y‖² term, and absorbed the negative sign, leaving you with this quantity to maximize. Similarly, here you have to find out the better one among these — better meaning the one that is larger.
So you take the arg max: if ⟨y, s₀⟩ − ‖s₀‖²/2 is larger, conclude that s₀ was sent; if ⟨y, s₁⟩ − ‖s₁‖²/2 is larger, conclude that s₁ was sent. It is very simple. Alternately, if you do not like this, you find the arg min of the distances ‖y − sᵢ‖ — minimum distance decoding. Now, here also you can define a statistic Z, say Z = ⟨y, s₀⟩. Then E[Z | H0] is very easy: under H0, y is s₀ + n, so it is E[⟨s₀ + n, s₀⟩] = ‖s₀‖². Similarly, and not surprisingly, Var(Z | H0) = Cov(⟨s₀ + n, s₀⟩, ⟨s₀ + n, s₀⟩) — here you just have to be careful that under H0, y is s₀ + n. If you expand this, the deterministic term ⟨s₀, s₀⟩ drops out of the covariance and you are left with Cov(⟨n, s₀⟩, ⟨n, s₀⟩) = σ²‖s₀‖². If you do the same under the other hypothesis, you get something similar; you just have to work it out. Typically you choose ‖s₀‖² = ‖s₁‖², so those terms stop mattering and you end up with a much simpler decision region. Let us not go through the full evaluation here, but if you do the full math, the decision region is given as follows.
So, let us look at our slide here. Is ⟨y, s₀⟩ − ‖s₀‖²/2 greater than or less than ⟨y, s₁⟩ − ‖s₁‖²/2? If you simplify by taking the s₀ and s₁ terms to each side, you get: check whether ⟨y, s₁ − s₀⟩ is more or less than ‖s₁‖²/2 − ‖s₀‖²/2. This is going to be your decision rule. Now, there is an intuition here, because if you evaluate the probability of symbol error in this case, you end up getting Q(‖s₁ − s₀‖/2σ), that is, Q(d/2σ), where d is the distance between your pair of signals s₀ and s₁. This again satisfies our intuition, because in the previous exercise with on-off signaling, s₀ was essentially 0, and there you had Q(‖s‖/2σ) — the same formula. Therefore, this Q(d/2σ) is a very effective thing to know — a good sanity check to remember: whenever you have binary signaling under Gaussian noise, the optimal detector has a probability of symbol error of Q(d/2σ), where σ is the standard deviation of the noise and d is the distance between the two signals, evaluated as d = ‖s₁ − s₀‖. Now, what is important is that this is true only when you have equiprobable signaling, because we are doing ML, and ML is the same as MAP under equiprobable signaling. If it is not, you have to be very careful — the decision region is going to shift; you can see that as an exercise.
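The decision rule ⟨y, s₁ − s₀⟩ ≷ (‖s₁‖² − ‖s₀‖²)/2 and the Q(d/2σ) prediction can be checked together in simulation. This is a minimal sketch with a hypothetical signal pair and noise level of my own choosing.

```python
import numpy as np
from math import erfc, sqrt

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

rng = np.random.default_rng(1)

# Hypothetical signal pair, written as coefficient vectors in an orthonormal basis:
s0 = np.array([1.0, 1.0])
s1 = np.array([1.0, -1.0])
sigma = 0.6
n_trials = 200_000

tx = rng.integers(0, 2, n_trials)
signals = np.where(tx[:, None] == 0, s0, s1)
y = signals + rng.normal(0.0, sigma, (n_trials, 2))

# Decision: compare <y, s1 - s0> against (||s1||^2 - ||s0||^2) / 2.
diff = s1 - s0
thresh = (np.dot(s1, s1) - np.dot(s0, s0)) / 2
dec = (y @ diff > thresh).astype(int)

d = np.linalg.norm(s1 - s0)                  # distance between the two signals
ser = np.mean(dec != tx)
print(f"simulated {ser:.4f} vs Q(d/2sigma) = {q_func(d / (2 * sigma)):.4f}")
```

The simulated symbol error rate lands very close to Q(d/2σ), as the analysis predicts.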
So, this is something that can be worked out — I am not doing the full evaluation here — but the key idea is: once you have these two Gaussians, you have to find out where the crossover between them occurs; that is the point at which the decision flips, and it gives you the decision region. It will typically be at the midpoint if s₀ and s₁ are well chosen. Now, in general binary signaling you have Q(d/2σ), and this will serve as a recipe for M-ary signaling as well. So let us see. With binary signaling, s₀ and s₁ can be taken to correspond to the bits 0 and 1. We have not yet discussed how much energy we are consuming. From your basic circuits you know that whenever you have a signal, integrating the square of that signal from 0 to T gives the energy contained in the signal between 0 and T. So, integrating the square of the signal — which is what we are doing when we write the norm — s₀'s energy is ∫ s₀²(t) dt = ‖s₀‖², and s₁'s energy is ∫ s₁²(t) dt = ‖s₁‖². Since we send these two signals equiprobably — 0s occurring half the time and 1s occurring half the time, as we have discussed — the energy per bit Eᵦ can be defined as (‖s₀‖² + ‖s₁‖²)/2. Now, an interesting conclusion follows. Look at the symbol error rate (in binary signaling the bit error rate is the same) with σ² = N₀/2 — I mention the choice N₀/2 because we want the noise variance per dimension to be N₀/2.
The probability of error under maximum likelihood is Q(d/2σ) = Q(‖s₁ − s₀‖/2σ), which we can write as Q(√((d²/Eᵦ) · (Eᵦ/2N₀))). This is interesting, because Eᵦ/N₀ is like an SNR quantity — it is the bit-wise SNR, the SNR per bit, so to speak — and d²/Eᵦ can be considered the power efficiency of the signaling scheme. What does this mean? Intuitively, go back to our situation of binary signaling with s₀ = 0; then the average energy you spend is ‖s₁‖²/2. How do you increase d? If you keep pushing s₁ farther and farther from 0, your energy increases, because making s₁ go farther intuitively means the amplitude of s₁ has to go higher and higher. The moment that happens, you are spending more energy — more joules, or, viewed as joules per second, more power. Therefore, increasing d has a cost: if you take your so-called constellation — your arrangement of points s₀, s₁, s₂, … — and make them really far apart, you get great performance, because your bit error rate goes very close to 0. But at what cost? You are spending a lot of energy. The other point is that the SNR per bit — effectively, the signal-to-noise ratio — is what determines your performance, not the signal power or the noise power alone. In other words, if you have a very low amount of noise, you can get away with a very small amount of signal power for the same bit error rate. Let me give you an example. Say you have a communication system where you need a bit error rate of 10⁻⁹.
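A small sketch of this Pₑ = Q(√((d²/Eᵦ)(Eᵦ/2N₀))) relationship: on-off keying has d² = ‖s₁‖² = 2Eᵦ, so d²/Eᵦ = 2, while antipodal signaling (seen shortly) has d²/Eᵦ = 4. The Eᵦ/N₀ value below is an arbitrary illustrative choice.

```python
from math import erfc, sqrt

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

def pe(d2_over_eb, eb_over_n0):
    """Pe = Q( sqrt( (d^2/Eb) * (Eb / (2 N0)) ) )."""
    return q_func(sqrt(d2_over_eb * eb_over_n0 / 2))

eb_over_n0 = 4.0   # example Eb/N0 (linear scale, ~6 dB); hypothetical value

# On-off keying: d^2/Eb = 2 ; antipodal signaling: d^2/Eb = 4.
pe_onoff = pe(2.0, eb_over_n0)
pe_antipodal = pe(4.0, eb_over_n0)
print(pe_onoff, pe_antipodal)
```

At the same Eᵦ/N₀, the scheme with the higher power efficiency d²/Eᵦ gives the lower error rate — equivalently, antipodal signaling needs half the energy per bit of on-off keying for the same Pₑ.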
If the noise variance is 1, suppose the amount of signal energy you need in some scenario is, say, 100 joules or so. If the noise variance reduces to 0.1, you may need only 10 joules; if it reduces to 0.01, you need only 1 joule. This goes to show that signal power and noise power individually are not the quantities that determine the performance of your communication system — it is their ratio that determines it. To give you a practical example, say you are communicating in a very narrow frequency band, so the amount of noise affecting your signal is really, really small; your signaling also has to fit in this narrow band, so for the same bit error rate the required signal power will also be small. But suppose instead you are in a situation with a very high amount of noise, because the environment has lots of radiation and things like that. To overcome this kind of noise, you need to transmit at higher and higher power. If you are sending across the room, you may need only some small amount of power, but if you are sending to a satellite several tens or sometimes hundreds of kilometers away, you may need a much higher amount of power to overcome the noise you incur. This is something to keep in mind. So SNR — the signal-to-noise ratio — is what determines your performance, and d²/Eᵦ is the power efficiency, because the larger d is, the more power you are consuming. Having a lower d is great in the sense of being power efficient, but you have to match it to whatever your bit error rate requirement is — that is key.
So, sticking to binary signaling — because once you understand binary signaling, M-ary signaling is just an extension of it — I mentioned that the dimension of your signaling can be at least one and at most two. For example, the signaling we saw just now is on-off keying, where you place s₀ at 0 and s₁ at ‖s₁‖ along one axis; in that case you saw the error probability is Q(‖s₁‖/2σ) in this notation. But you can also put your signals in different locations — for example, with s₀ and s₁ orthogonal. Can you think of an example of binary orthogonal signaling? There are many. For example, take s₀ to be a pulse on the first half of the interval and s₁ a pulse on the second half: with the time axis marked at 0, 1/2, and 1, s₀ is 1 on [0, 1/2] and 0 afterward, while s₁ is 0 on [0, 1/2] and 1 afterward — you can see these two signals are orthogonal. If you choose the basis vectors carefully — remember, I gave you an intuitive way — one choice of coefficient vectors is (1, 1) and (1, −1), which are evidently orthogonal. In this particular situation, though, I am not choosing those: with the half-interval pulses, the coefficient vectors are (1, 0) and (0, 1), so these two are orthogonal signals. When you use orthogonal signals, you end up with a constellation like this, and this kind of constellation gives you a different error performance, depending on how far s₀ and s₁ are from the origin and so on. The detector will also have slight differences — of course, the recipe of computing ⟨y, s₀⟩ and ⟨y, s₁⟩ does not change, but the actual implementation mechanics may change.
This antipodal case is very, very interesting. The reason antipodal signaling is interesting is that it is just like an extension of on-off keying, but here you have the signals s₀ and s₁ typically equally far from the origin, on opposite sides. What is the dimension of the signaling in this case? If you think about it carefully, you will find the dimension is exactly 1, just as in on-off keying, and it is very easy: you choose any signal — a triangle, a sinc, whatever — and the negative of that is the other signal. Now, in the binary antipodal case something nice happens: if you sit and write out the detector with ⟨y, s₀⟩ and ⟨y, s₁⟩ and do all those things, you will get something very neat. You will find that under equiprobable signaling the decision boundary is just at 0, and you simply take the sign of the Z you get from ⟨y, s₁⟩: if ⟨y, s₁⟩ is positive, decide s₁; if negative, decide s₀. That is it. So in this case also you have a very neat way of performing detection, where the sign of Z alone gives you the answer, and again Q(d/2σ) gives you the error probability — this is something you can check. One thing you can also check is the average energy usage for each of these cases.
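The sign detector for antipodal signaling can be checked directly. This is a minimal sketch with an illustrative amplitude and noise level; the theoretical Q(d/2σ) value is computed alongside for comparison.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(7)

# Antipodal signaling on one dimension: s1 = +a, s0 = -a.
a = 1.0
sigma = 0.5
n_trials = 200_000

bits = rng.integers(0, 2, n_trials)
tx = np.where(bits == 1, a, -a)
y = tx + rng.normal(0.0, sigma, n_trials)

# Detection is just the sign of <y, s1> (here s1 is a positive scalar):
dec = (y > 0).astype(int)

ser = np.mean(dec != bits)
d = 2 * a                                          # distance between the signals
pe_theory = 0.5 * erfc(d / (2 * sigma) / sqrt(2))  # Q(d / 2 sigma)
print(ser, pe_theory)
```

The simulated error rate matches Q(d/2σ), confirming that the sign rule is the whole detector here.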
Turning now to M-ary signaling: the ML rule does not undergo much change. It is δ_ML(y) = arg max over i = 0, …, M−1 of Zᵢ, where Zᵢ = ⟨y, sᵢ⟩ − ‖sᵢ‖²/2 — no difference, except that there are more than 2 sᵢ's. You can also just work with distances: write it as arg min over i of Dᵢ, where Dᵢ = ‖y − sᵢ‖. This is just saying: find me the signal point at the least Euclidean distance — the least sum of squared coefficient differences — from the received point. So the key idea is that the space of all received signals y can be partitioned into regions, each mapping to one of the sᵢ's, and using that you can also easily look at the geometry and predict the symbol error rates. For example, take 4-PAM. This is one-dimensional signaling: s₀, s₁, s₂, s₃ are all scaled versions of the same pulse, with coefficients −3, −1, 1, 3. The optimal decision thresholds are at −2, 0, and 2: to the left of −2 decide −3, to the right of −2 (up to 0) decide −1; similarly, between 0 and 2 decide 1, and to the right of 2 decide 3. The only slight catch is in finding the symbol error rate. For the point 3 it is very easy: you place a Gaussian centered there, and the error is just the probability of lying past the threshold at 2, which is a single Q term. But for the point 1, you have to account for straying past the threshold on either side, so you get a combination of Q's. Similarly for −1 — although with equiprobable signaling you do not need to redo it, because it will be the same as for 1 — and finally, −3 and 3 are the same.
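The 4-PAM minimum-distance detector can be sketched as follows, with an illustrative noise level of my own. The nearest-point rule and the threshold (slicing) rule at −2, 0, 2 are the same detector, which the code checks explicitly.

```python
import numpy as np

rng = np.random.default_rng(3)

levels = np.array([-3.0, -1.0, 1.0, 3.0])   # 4-PAM constellation coefficients
sigma = 0.4
n_trials = 100_000

idx = rng.integers(0, 4, n_trials)
y = levels[idx] + rng.normal(0.0, sigma, n_trials)

# Minimum-distance detection: pick the nearest constellation point ...
dists = np.abs(y[:, None] - levels[None, :])
dec_min = np.argmin(dists, axis=1)

# ... which is the same as slicing with thresholds at -2, 0, +2:
dec_thresh = np.digitize(y, [-2.0, 0.0, 2.0])

ser = np.mean(dec_min != idx)
print("SER:", ser)
```

With equiprobable symbols, the measured SER is close to (3/2)·Q(1/σ), the average of one Q term for each outer point and two Q terms for each inner point.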
The only thing you have to keep in mind is that for 3 and −3 you can use a single Q directly, while for 1 and −1 you have to use 2 Q's, because the noise can push you out on either side. So you have to be careful — this is an exercise you should perform. For QPSK it is interesting: the decision regions are the four quadrants. Why? If the received signal lands here, this point is obviously the closest; but if you stray slightly across a boundary, the neighboring point becomes the closest. So between each pair of points, the boundary is where the decision changes. You can think of this signaling, depending on how you look at it, as two-dimensional if you view it as real, or one-dimensional if you view it as complex. But looking at it as a combination of real signals makes it easy: you have to decide whether you are above or below the x axis, and whether you are to the left or right of the y axis. This leads to something interesting when you think about how bits are assigned. For example, do a bit assignment like 00, 01, 11, 10 — I deliberately did this: the bit 0 is common to the points in the top half, and the bit 1 is common to the points in the bottom half. So checking whether you are above or below the x axis decides the most significant bit. Similarly, the least significant bit is the same for the points to the left of the y axis, and the same for the points to the right; so checking whether you are to the left or right of the y axis gives you the decision on the least significant bit. In fact, one important relationship you will see is that QPSK is actually two BPSKs in disguise when you look at it in the bit picture — this is something you will work out, and we will also see it in future classes.
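The "QPSK is two BPSKs in disguise" observation can be sketched directly: decide the MSB from the sign of the imaginary part and the LSB from the sign of the real part. The particular bit-to-axis mapping below is one illustrative Gray labelling, not necessarily the one drawn in the lecture.

```python
import numpy as np

rng = np.random.default_rng(5)

sigma = 0.4
n_trials = 100_000

msb = rng.integers(0, 2, n_trials)
lsb = rng.integers(0, 2, n_trials)

# One Gray labelling (assumption): bit 0 -> +1, bit 1 -> -1 on each axis,
# MSB on the imaginary axis, LSB on the real axis.
tx = (1 - 2 * lsb) + 1j * (1 - 2 * msb)
y = tx + rng.normal(0.0, sigma, n_trials) + 1j * rng.normal(0.0, sigma, n_trials)

# Per-axis BPSK-style decisions: each bit depends on one axis only.
dec_msb = (y.imag < 0).astype(int)
dec_lsb = (y.real < 0).astype(int)

ber = 0.5 * (np.mean(dec_msb != msb) + np.mean(dec_lsb != lsb))
print("per-bit error rate:", ber)
```

Each bit sees an independent one-dimensional antipodal problem, so the per-bit error rate is just the BPSK value Q(1/σ) for this constellation.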
So, to summarize: in the past several lectures we have gone into detail on how demodulation works and how it can be used for detecting the signals that were sent. We first saw that it is enough to project the received signal onto the modulation signal space — the ψ's — and that, in general, the detection problem transforms into identifying the region in which the received vector lies. This is a natural consequence of the optimal decision being the one at minimum distance. For example, for QPSK, you saw that each region corresponds to one signal being at minimum distance, so finding where your y lies automatically partitions the space into decision regions. Under maximum likelihood detection with additive white Gaussian noise, minimum distance decoding is optimal. The same is true for minimum probability of error under equiprobable signaling — when you have equiprobable symbols, MPE and ML are the same. As for further thoughts: we will first do a GNU Radio exercise, and then come back to what happens at the bit level. Symbols are fine, but what happens when you start considering bits? I gave you the QPSK example, but for general signaling, how do you find bit errors? After all, everyone who does digital communication talks about bit errors. How do bit errors occur, and how do we characterize bit errors rather than symbol errors? That is something we will see subsequently. In the next few lectures we will have GNU Radio work related to this. Thank you.