Hello, welcome back to this lecture on digital communication using GNU Radio. My name is Kumar Appaiah, and I belong to the Department of Electrical Engineering at IIT Bombay. In this lecture, we are going to look closely at signal space again, something we have seen earlier, but with the demodulation perspective in mind, and we will see how using signal space to convert waveforms into statistics, that is, numbers, makes the demodulation process much easier.

So, if you look at what we will see in this lecture, we are going to determine optimal receivers for M-ary signaling in additive white Gaussian noise channels. In particular, rather than just sending one bit, we send one of M symbols, and the question of how to optimally decide at the receiver which symbol was sent is what we will answer in this lecture, under additive white Gaussian noise. We will use the concepts of hypothesis testing, signal space, and the irrelevance of those parts of the received statistic that we do not need. In particular, we will show that only the part of the noise that lies along the components of the signal matters when you want to decide which symbol was sent. We will see this closely in this lecture. We will also look at the performance of various detectors and find out how we can compute symbol error probabilities. These concepts will take a bit of time, so we will cover them over a few upcoming lectures.

So, the signal space approach is something we have already seen. Recall that whenever we want to send messages, one of, let us say, M messages, these M messages are sent as signals: for message 1, there is one potential waveform that is sent; for message 2, there is a different waveform; and at the receiver, your task is to determine which message was sent by looking at the received waveform. Now, this is the process of demodulation as we have seen, but the problem is that the received waveform is never an exact replica, because the signal invariably undergoes some transformation or the other as it passes through a medium, and there is always noise added at the end. That means the receiver has only a distorted copy of the waveform. And signals are complicated: if you start looking at modifications of the signal, there is an infinite set of possible modifications, because the signal can be altered at any time instant, so there is a massive number of modifications one can make. So the question is: what is the best guess for what was sent, given this received signal? Of course, the assumption we are going to make is that we have additive Gaussian noise, and that too white, meaning there is no correlation across time.

So, the hypotheses H_i we are considering are these: you observe Y(t) = s_i(t) + N(t), where i is one of 0, 1, ..., M-1, and the question is, given Y(t), which s_i, that is, which message, was sent. N(t) is white Gaussian noise that is added, and its power spectral density, as we discussed, is N_0/2, because it is always N_0/2 across one real dimension. Now, we have seen the signal space approach with regard to signal design and waveform design, but at the receiver we must appeal to the same tools to find out which signal was actually sent.
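Before going further, here is a minimal simulation sketch of this channel model, just to fix ideas; it is not from the lecture, and all parameter values and signal choices in it are made up for illustration. It generates Y(t) = s_i(t) + N(t) on a sampled time grid, where the delta correlation of white noise translates to a per-sample variance of (N_0/2)/dt.

```python
import numpy as np

# A minimal sketch (made-up parameters and signals) of the channel
# model in this lecture: the receiver observes one of M waveforms
# plus white Gaussian noise of power spectral density N0/2.

rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0, 1, dt)
N0 = 1.0                                  # noise PSD is N0/2 per real dimension

# M = 2 candidate waveforms (antipodal signaling as an example)
signals = [np.sin(2 * np.pi * t), -np.sin(2 * np.pi * t)]

i = rng.integers(len(signals))            # transmitted message index
# Discretized white noise: delta correlation -> per-sample variance (N0/2)/dt
noise = rng.normal(0.0, np.sqrt(N0 / 2 / dt), size=t.size)
y = signals[i] + noise                    # received waveform Y(t) = s_i(t) + N(t)
```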
So, linear processing of Gaussian noise actually yields a Gaussian random variable. Why does this happen? Let us say that you receive Y(t) and you perform some integration or filtering on it. The N(t) component undergoes that same filtering, because of the fact that it was added when Y(t) was constructed. If you integrate N(t) against some waveform v(t), that is, compute ∫ N(t) v(t) dt, the result is a number, and we of course define this as the inner product <N, v>. So this is now a number. Let us say you want to find out which waveform was sent by integrating Y(t) against some waveform, obtaining a number, and using that number as your metric. Unfortunately, the noise also undergoes the same transformation and contributes a number of its own.

Now, you know that N(t) has mean 0, so the expectation of z = <N, v> can be shown to be 0. But what about the variance? Essentially, the noise variance is what affects your ability to determine what was sent: if the noise variance is very high, there is a chance that your decisions will jump all over the place; if the noise variance is low, maybe you will just get the correct message all the time.

So, let us say that you have two waveforms v1 and v2 which are finite energy signals; finite energy is just another proxy for saying that the relevant integrals exist. If N(t) is a zero mean white Gaussian noise signal whose power spectral density is sigma^2 = N_0/2 for all frequencies, then <N, v1> = ∫ N(t) v1(t) dt and <N, v2> = ∫ N(t) v2(t) dt are jointly Gaussian, and the covariance of these two random variables is sigma^2 <v1, v2>. This is a very powerful result, because N(t) is actually a complicated signal: it is white, so its statistics are very difficult to quantify, in the sense that N(t1) and N(t2) are uncorrelated, indeed independent, whenever t1 is not equal to t2. So if you filter it, or perform any multiplication and integration with waveforms, what can you say about the resulting random variables? It turns out that there is a very nice characterization: the covariance is just sigma^2 times the inner product <v1, v2>, which is a quantity that is finite and which you can always determine, because v1 and v2 are known finite energy signals. For the specific case of <N, v>, if you find the covariance of <N, v> with itself, that turns out to be the variance of <N, v>, and it can be shown to be sigma^2 ||v||^2. Of course, remember that ||v||^2 = ∫ v^2(t) dt.

How do you prove this? It is not very difficult. If you look at the expectation of <N, v1> <N, v2>, you write the expectation in terms of integrals, but just be careful to write the second integral using a different dummy letter: we write ∫ N(t) v1(t) dt and ∫ N(s) v2(s) ds, so the product becomes the double integral ∫∫ v1(t) v2(s) E[N(t) N(s)] dt ds. Remember, v1 and v2 are known finite energy signals, so they come out of the expectation. Now, N(t) and N(s) are independent if t is not equal to s, and only when t = s are they not. And you know that the power spectral density of N(t) is essentially sigma^2.
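Here is a quick Monte Carlo sketch of this result, assuming sampled waveforms on a grid; it is illustrative only, with made-up signals and parameters. On a grid of spacing dt, the delta-correlated noise is approximated by independent samples of variance sigma^2/dt, so the Riemann-sum inner products should exhibit the covariance sigma^2 <v1, v2>.

```python
import numpy as np

# Monte Carlo check (made-up waveforms) that projections of white
# Gaussian noise onto two waveforms have covariance sigma^2 * <v1, v2>.

rng = np.random.default_rng(0)
dt = 1e-3                      # sampling interval
t = np.arange(0, 1, dt)        # time grid on [0, 1)
sigma2 = 0.5                   # noise PSD, sigma^2 = N0/2

# Two finite-energy waveforms; these happen to be orthogonal on [0, 1)
v1 = np.sqrt(2) * np.sin(2 * np.pi * t)
v2 = np.sqrt(2) * np.cos(2 * np.pi * t)

# Discretized white noise: the delta-function autocorrelation means
# each sample has variance sigma^2 / dt.
trials = 20000
noise = rng.normal(0.0, np.sqrt(sigma2 / dt), size=(trials, t.size))

# z_i = <N, v_i> approximated by a Riemann sum
z1 = noise @ v1 * dt
z2 = noise @ v2 * dt

print("var(z1)   ~", z1.var(), "  expected:", sigma2 * np.sum(v1**2) * dt)
print("cov(z1,z2)~", np.cov(z1, z2)[0, 1], "  expected:", sigma2 * np.sum(v1 * v2) * dt)
```

Since v1 and v2 here are orthogonal, the empirical covariance should come out near zero, while var(z1) should come out near sigma^2 ||v1||^2.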
So, the autocorrelation function is going to be sigma^2 delta(t - tau); that is what we are going to use. You can therefore write the expectation E[N(t) N(s)] as sigma^2 delta(t - s), and that is exactly what is being used over here. This also works because the noise is wide sense stationary: you know that the autocorrelation depends only on the lag, that is, the difference between the two times. Now, we integrate with respect to s, and here I make another small change: because delta(t - s) is nonzero only when t = s, I silently swap the s in v2(s) for a t. If you now perform the integration ∫ delta(t - s) ds, the delta goes away, because the integral of the delta is essentially 1, and you are left with sigma^2 ∫ v1(t) v2(t) dt, which is sigma^2 <v1, v2>.

Now, there is one important point. If v1 and v2 are orthogonal signals, that is, <v1, v2> = 0, that leads to something interesting: the expectation E[<N, v1> <N, v2>] is 0, which means that the resulting random variables are uncorrelated, and since they are jointly Gaussian and uncorrelated, they are independent. The fact that we have jointly Gaussian random variables all over the place is very important, because these neat results come together only because of joint Gaussianity.

So, what do we have in the signal space approach? If we look at the geometric interpretation of the projection of the white Gaussian noise, the signal space spanned by the M signals is finite dimensional, with dimension less than or equal to M. What does this mean? If you have the signals s_0, s_1, ..., s_{M-1}, the maximum dimension they can occupy is M, and potentially less. Let us say that you are going to signal using just some s_1(t), and s_2(t) = -s_1(t); that is, you are doing something like binary signaling, where you send either s_1 or -s_1 (or you send s_1 or 0). In this case M is 2, but the dimension is only 1, because it is the same signal flipped and the signals are essentially linearly dependent: the second signal is a linear combination of the first. Or suppose you are going to send one of 3 potential signals, say s_1, s_2, or s_3 = s_1 + s_2, where s_1 and s_2 are orthogonal. Then s_3 is definitely not orthogonal to s_1 and s_2; in fact, it is a linear combination of them. So again, M is 3 and your dimension is 2. The signal space is always finite dimensional, and its dimension is less than or equal to M.

Now, the components of the white Gaussian noise orthogonal to the signal space are independent of the component in the signal space, and thus irrelevant. This is a very powerful statement, and it is a statement we will spend a little bit of time understanding. The thing is, whenever you take the projection of the noise onto a particular signal, that is, if you compute ∫ N(t) v1(t) dt, you get a number. If you compute ∫ N(t) v2(t) dt, you get another number, and if v1, v2, v3 are all orthogonal signals, you get multiple numbers.
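To make the dimension statement concrete, here is a small numerical sketch with made-up sampled signals, where M = 3 waveforms satisfy s3 = s1 + s2 and therefore span only a 2-dimensional signal space.

```python
import numpy as np

# A small sketch (made-up sampled signals) showing that M signals can
# span fewer than M dimensions: here s3 = s1 + s2, so M = 3 but the
# signal space is only 2-dimensional.

dt = 1e-3
t = np.arange(0, 1, dt)

s1 = np.sin(2 * np.pi * t)   # orthogonal to s2 over [0, 1)
s2 = np.cos(2 * np.pi * t)
s3 = s1 + s2                 # a linear combination of s1 and s2

S = np.vstack([s1, s2, s3])  # one sampled signal per row
print("M =", S.shape[0], ", dimension =", np.linalg.matrix_rank(S))
# -> M = 3 , dimension = 2
```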
Now, what about that component of N(t) which is not along these signals? That also carries some information, but what is important is that those components of N(t) that are not along the signal space in which the modulating signals live are independent of the components in the signal space, and therefore they are irrelevant: they do not make any difference to your detection problem of finding the best guess of which symbol was sent. The other way to put this is that the components of this noise along the signal space, that is, the components of N(t) along the v1, v2, v3 that span the signal space, are sufficient statistics: they are enough to make an optimal decision about which s was sent, and the problem can be reduced to a set of Gaussian random variables. So the key insight we are stating is that this complicated N(t) that affects your signal can be reduced to a set of numbers, and these numbers are going to give you the answer of how to decide which message was sent. Rather than facing the complicated waveform question, we have reduced it to a question about numbers. This makes things a lot simpler, and you can now be very happy that we are able to make optimal decisions by looking at a set of numbers, by reducing the waveforms to just these numbers.

To get an idea of the signal space picture, let the signal space spanned by s_0(t), s_1(t), ..., s_{M-1}(t) be script S. An orthonormal basis for this signal space can be represented as psi_1, psi_2, ..., psi_N. Notice that over here I have said psi_N: we have N ≤ M because of the fact that the dimension is always less than or equal to the number of signals. You will have equality only when you do orthogonal signaling like FSK; otherwise, in general, N ≤ M. The other thing is that, because these psi's are orthonormal, you have these two properties: <psi_1, psi_2> = 0 and <psi_1, psi_1> = 1. This is true for all the basis signals, not just psi_1 and psi_2: pairwise they are orthogonal, and each has unit energy. Now, all of these s_i can be expressed as linear combinations of these psi's, that is, s_i(t) = a_{i1} psi_1(t) + a_{i2} psi_2(t) + ... + a_{iN} psi_N(t). So, essentially, what we are saying is that the s_i can be represented as linear combinations of the orthonormal basis signals, and the coefficients a_{ik} are obtained directly by projecting the s_i onto the respective psi_k, that is, a_{ik} = <s_i, psi_k>.

Now, this is something we have mentioned before, but we want to do it properly. The question is: how do we find the psi_k from the s_i(t)? The answer, if you want a systematic procedure, is to perform Gram-Schmidt orthogonalization. Gram-Schmidt involves finding phi_{k+1}(t) = s_k(t) - sum over i from 1 to k of <s_k, psi_i> psi_i(t), and then normalizing. So what does this mean? We are essentially initializing with psi_1(t) = s_0(t)/||s_0||. This is the initialization step: you now have a signal which, for a single signal let us not say orthogonal, has unit energy; it is essentially s_0(t) scaled to make it unit energy.
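As a concrete illustration of projecting onto the psi's, here is a short sketch assuming an orthonormal pair psi1, psi2 on a sampled grid; the signals and coefficients are made up. It computes a_ik = <s_i, psi_k> by a Riemann sum and then reconstructs the waveform from those two numbers.

```python
import numpy as np

# Sketch (made-up orthonormal basis and signal): reducing a waveform
# to numbers via projection coefficients a_ik = <s_i, psi_k>.

dt = 1e-3
t = np.arange(0, 1, dt)

def inner(x, y):
    """Riemann-sum approximation of the inner product <x, y>."""
    return np.sum(x * y) * dt

# Orthonormal basis on [0, 1): <psi_i, psi_j> = 1 if i == j, else 0
psi1 = np.sqrt(2) * np.sin(2 * np.pi * t)
psi2 = np.sqrt(2) * np.cos(2 * np.pi * t)

# An example signal living in the span of psi1, psi2
s = 3.0 * psi1 - 1.5 * psi2

a1, a2 = inner(s, psi1), inner(s, psi2)   # projection coefficients
print("a1 =", a1, ", a2 =", a2)           # -> ~3.0 and ~-1.5

# Reconstruction from the two coefficients recovers the waveform
s_hat = a1 * psi1 + a2 * psi2
print("max reconstruction error:", np.max(np.abs(s - s_hat)))
```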
Now, the next question is: if we take s_1(t), how do we get that part of s_1(t) that is not related to s_0, that is, not related to psi_1? How do we do that? We will now define phi_2(t) = s_1(t) - <s_1, psi_1> psi_1(t); that is, we remove the component of psi_1 present in s_1. Now, I am going to claim that this phi_2 is orthogonal to psi_1. How? It is very simple. If you take the inner product of psi_1 and phi_2, you get <psi_1, s_1> - <s_1, psi_1> <psi_1, psi_1>, and since <psi_1, psi_1> is 1, these two terms cancel out. So this phi_2 is a signal that is orthogonal to psi_1, and if I now call psi_2 = phi_2(t)/||phi_2||, I get my second basis element.

This process can be iteratively performed to obtain a valid orthonormal basis from any s_0, ..., s_{M-1} you give. So, if you give me any M waveforms, I can use the Gram-Schmidt method to find an orthonormal basis for their signal space. This is an important process, because when you design your signals, you have to be cognizant of what the receiver design is going to be, and being able to properly find the psi's, so that you can project the noise and get these numbers, is an important part of that. So, we will actually spend some time on a reasonably big example to find out how this process works, and then I will also tell you some methods by which you can look at the signals directly and write down your psi_1 and psi_2 without much effort. This is something that we are going to look at in the coming lecture.
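To close, here is a minimal sketch of this iterative procedure on sampled waveforms; it is not the lecture's code, and the example signals are made up. Signals whose residual phi is numerically zero are linearly dependent on the earlier ones and get skipped, which is exactly why N can be less than M.

```python
import numpy as np

# A minimal Gram-Schmidt sketch on sampled waveforms (made-up signals).

dt = 1e-3
t = np.arange(0, 1, dt)

def inner(x, y):
    return np.sum(x * y) * dt          # <x, y> as a Riemann sum

def gram_schmidt(signals, tol=1e-9):
    basis = []
    for s in signals:
        # Subtract the projection onto every basis element found so far
        phi = s - sum(inner(s, psi) * psi for psi in basis)
        energy = inner(phi, phi)
        if energy > tol:               # skip linearly dependent signals
            basis.append(phi / np.sqrt(energy))
    return basis

# M = 3 signals spanning a 2-dimensional space (s2 = s0 + s1)
s0 = np.sin(2 * np.pi * t)
s1 = np.cos(2 * np.pi * t)
s2 = s0 + s1

basis = gram_schmidt([s0, s1, s2])
print("N =", len(basis))                            # -> 2
print("<psi1, psi2> =", inner(basis[0], basis[1]))  # -> ~0
print("<psi1, psi1> =", inner(basis[0], basis[0]))  # -> ~1
```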