This is Lecture 7. In the last lecture we saw quite an important fact: the waveform channel model we have been considering, y(t) = x(t) + n(t), looks at first like an infinite-dimensional model, where the signal at every time instant matters, so you might think there is an infinite amount of data to process to extract the information. But since we are only sending a finite number of bits, we only need a finite number of different x(t)'s, and using basic linear algebra we can write them in terms of a basis and immediately convert each one into a vector. Once it is a vector, the problem is finite-dimensional, and that gives you the true picture of how the information is represented in the signal: not in each time instant, but only in the coefficients that multiply the basis signals. Let me briefly run through this, because it is quite important that we understand it. In general you would think n(t) lives in some space, x(t) in some space, and so on; I went through a series of simplifications for the case where a sequence of bits is mapped by the transmitter into x(t) and noise is added to it. So what is the first thing you should do at the receiver when you receive y(t)? You know ahead of time the basis signals that were used to generate all the x(t)'s, and the first thing to do is correlate y(t) with each of those basis signals, to find the projection of y(t) onto that space.
So you do the projection first, and then you do your decoding — project by correlation. We also saw that this correlation can be accomplished by filtering. What type of filtering? Matched filtering: the impulse response is matched to the correlating signal you need, for instance φ*(−t), so you can also think of the correlator as a filter. Once you do this, you get a vector y, and this vector holds all the information you could possibly want about b. If all you care about is b, everything is contained in y; you need nothing else. Remember that when you correlate, both x(t) and n(t) pass through the correlator. By saying y is enough, I am saying it is enough to keep the component of n(t) in your signal space; any component of n(t) orthogonal to it can be completely ignored as far as decoding is concerned, because given y, everything else you observe is independent of x(t). That is the main idea. So the basis φ1(t), ..., φM(t) is quite important: you project onto the space spanned by these basis elements.

This picture lets us go from the waveform channel to a purely vector channel: the bits b are mapped by the transmitter into a vector x, each component of which is the complex number that multiplies the corresponding basis signal to produce x(t). The noise is also represented by a vector: the noise that appears in y is the noise after it has gone through the correlator, and we did some calculations to show that it is normal with zero mean and covariance matrix (N0/2)·I, with I the identity matrix. So you get a vector y = x + n.

The important thing to remember is that everything here is random. b is one of 2^n possibilities, and we assume a uniform distribution to start with — maybe we will change that later — which makes x a discrete random vector whose distribution is given very simply by the distribution of b; the map from b to x is, hopefully, one-to-one. The distribution of n is known: i.i.d. normal, zero mean, variance N0/2 per component. From all this I can compute f_Y(y), and in fact the conditional distribution of y given a particular x as well; these two quantities, the conditional one in particular, will prove to be very important. So the problem has become a simple vector problem: x and y have a certain joint distribution, you observe y, and you must decide which of the possible x's was transmitted. That is the classic version of what is called a detection problem. We will define a metric for it: the goal is to minimize the probability of error, that is, to make the probability of declaring the wrong x as small as possible.
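Since we will lean on this project-by-correlation step constantly, here is a minimal numerical sketch of it. The sampling grid, the particular two-signal basis, and the noise level are illustrative assumptions of mine, not anything fixed by the lecture: we synthesize y(t) = x(t) + n(t) from known coefficients and recover the coefficient vector by correlating against each basis signal.

```python
import numpy as np

rng = np.random.default_rng(0)
T, fs = 1.0, 1000                      # symbol duration and sampling rate (assumed)
t = np.arange(0, T, 1 / fs)
dt = 1 / fs

# Two orthonormal basis signals on [0, T] (illustrative choice)
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * 1 * t / T)
phi2 = np.sqrt(2 / T) * np.cos(2 * np.pi * 2 * t / T)

x_vec = np.array([0.7, -1.3])          # coefficients defining x(t)
x_t = x_vec[0] * phi1 + x_vec[1] * phi2
n_t = 0.05 * rng.standard_normal(t.size)   # additive noise (assumed level)
y_t = x_t + n_t

# Project by correlation: y_k = integral of y(t) * phi_k(t) dt
y_vec = np.array([np.sum(y_t * phi1) * dt, np.sum(y_t * phi2) * dt])
print(y_vec)                           # close to x_vec plus in-space noise
```

The recovered vector differs from the true coefficients only by the noise component that lies inside the signal space; everything orthogonal to the basis is discarded, exactly as the argument above says it can be.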
All of that we will see later on: how a detection problem is defined carefully, and how one can solve it efficiently. For now I want you to be thoroughly convinced of this picture, because from now on we will deal almost exclusively with the vector version. The vector version carries all the useful information about the waveform version, and for decoding it is complete — there is nothing else you need. Any questions on this? Anything that is bothering you? This is crucial; you should know where it comes from.

Let us do one simple calculation. One thing you might worry about is the parameters of interest in the original waveform channel — power, bandwidth, bit rate, and probability of error. Can I compute all of them faithfully in the vector model as well? Once I can, I can throw away the waveform model completely and happily deal only with the vector model. We will start with one of them. It turns out not everything can be easily computed — for one or two you have to go back to the waveform model — but most things can be readily calculated. The way x was defined, you can easily compute the average power. What is the average power in x(t)? The way I wrote it down, x(t) equals x_i(t) with probability 1/2^n, and each x_i(t) is a linear combination Σ_j x_ij φ_j(t), with j running from 1 to M, where the φ_j(t) are orthonormal.

Now compute the power in x_i(t) — or, to be careful, the energy: take x_i²(t), the instantaneous power, and integrate it over all time. Can we express this simply in terms of the x_ij's? Yes: square the right-hand side and integrate from −∞ to ∞; all the cross terms vanish because the φ_j(t) are orthogonal, and the squared terms are simply x_ij², because each φ_j is normalized — you get all of that for free. So the energy of x_i(t), defined as ∫ x_i²(t) dt, works out to the squared norm ‖x_i‖² of the vector x_i. If I now want the average energy in x(t), I average over all the x_i(t)'s, and in terms of the random vector X this reads: the average energy of x(t) equals E[‖X‖²]. Since X is a discrete random vector, this expectation is a simple summation and very easy to compute. So energy is taken care of.
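The orthonormality argument can be checked numerically. In this sketch — an arbitrary two-signal, two-dimensional example of my own — the waveform energy ∫ x_i²(t) dt matches the squared norm of the coefficient vector, and averaging ‖x_i‖² over equiprobable signals gives E[‖X‖²].

```python
import numpy as np

T, fs = 1.0, 2000
t = np.arange(0, T, 1 / fs)
dt = 1 / fs

# Orthonormal basis (illustrative): phi_j(t) = sqrt(2/T) sin(2*pi*j*t/T)
phi = np.vstack([np.sqrt(2 / T) * np.sin(2 * np.pi * (j + 1) * t / T)
                 for j in range(2)])

X = np.array([[1.0, 2.0],              # coefficient vectors of two signals
              [-0.5, 1.5]])
for x_vec in X:
    x_t = x_vec @ phi                  # x_i(t) = sum_j x_ij phi_j(t)
    energy_wave = np.sum(x_t**2) * dt  # integral of x_i(t)^2 dt
    energy_vec = np.sum(x_vec**2)      # ||x_i||^2
    print(energy_wave, energy_vec)     # the two agree

avg_energy = np.mean(np.sum(X**2, axis=1))   # E[||X||^2], equiprobable signals
print(avg_energy)
```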
You might say "power" rather than energy, but energy is good enough — they are essentially the same once you know the time interval. So energy I can compute within the discrete model itself. Bandwidth is a little more difficult: given only the vectors, bandwidth is not easy — you really need to know the shape of the signaling waveform — so bandwidth cannot be computed immediately. What else was important? Bit rate, which one can compute knowing the support: if you know the support of the x(t)'s, you can compute the bit rate. The last one is probability of error, and that can be computed accurately from the random vector model alone; you do not need to go back to x(t). I have shown you one of these; the others can be done in specific cases, and I will show you how as we go.

I am going to postpone the detection problem for a little while. Instead, we will see a whole bunch of examples of designing transmitters — what to do at the modulator, and how it translates into the vector model — for several simple cases that you can relate to in an intuitive way. That is what we will do for the remainder of this class.

The first and simplest modulator — these are all examples of modulators — is what I will call binary phase shift keying, BPSK for short. The name may not be clear to you yet, but we will see why it fits. It is also called binary antipodal signaling, among other names, but BPSK is what I will use. In all these examples I will assume that the bandwidth W of the channel is much, much larger than 1/T, where 1/T is roughly my signaling rate — roughly the bit rate. You will see that, designed this way, my modulated signals have a huge bandwidth; for now I will almost assume infinite bandwidth and not worry about it. In other words, whatever x(t) I put in, the same x(t) arrives at the receiver without distortion: there is no convolution with the channel response — we already got rid of that — and even the constraint that 1/T be comparable to the bandwidth will play no role for now. Later we will have to modify this, because it must be modified: you want 1/T to be as close as possible to your bandwidth. Bandwidth is a very precious commodity, particularly in wireless today — you pay a lot of money for it — so you want to use every hertz very efficiently. We will see later that this assumption is wasteful, but for now we accept it just to understand how the signaling works.

In practice something else is typically done; nevertheless, there are situations where the assumption still holds. Can you name one very popular and common transmission medium where it does? Optics, for instance: even at the fastest data rates you can generate electronically, the optical frequencies available are huge, so the assumption still holds there, and in the optical domain people largely still use some form of this BPSK-type signaling without doing anything very fancy. In wireless, certainly, it does not hold, and people optimize every hertz. So, what is BPSK?
The first choice is how many bits to handle at a time: we set n = 1, one bit at a time. With one bit, the entire vector B reduces to a single bit, which I will call B. How many waveforms do I need? Just two. The first I will call x0(t) — slightly odd numbering, but you will see why. Within the support [0, T], x0(t) is constant, equal to √(Es/T); outside the support it is zero. Why √(Es/T)? Because then the energy of x0(t) works out to exactly Es, which I can think of as my signal energy — a convenient notation. And x1(t) I take to be −√(Es/T) on the same support. Plotting them is trivial: between 0 and T, x0(t) sits at √(Es/T) and x1(t) (drawn dotted) at −√(Es/T). Do you see why "phase shift keying" makes sense? Multiplying by −1 is a 180° phase reversal — that is what I am doing here.

You can also see why the bandwidth must be much, much larger than 1/T if I am to have any hope of receiving this exact signal: a rectangular pulse corresponds to a sinc in the spectrum, and unless at least 6 or 7 lobes of the sinc get through, you have no hope of seeing anything close to this x(t) at the receiver. Remember, you tend to think of x(t) as the signal at the transmitter, but the way I wrote the model, x(t) is actually the signal at the receiver, just before the noise is added — everything the channel does has already happened, and I even took the path loss to be 1. So the channel needs a huge bandwidth for this x(t) to travel from transmitter to receiver without distortion. That assumption is needed; later we will relax it — there are smart ways of doing so — but for now this is our x(t): the bandwidth is quite large, and the average energy is Es. Even a bandwidth of only 10 times 1/T is already a huge waste of spectrum.

Now, if you do Gram-Schmidt on this signal set, how many basis elements do you need to span it? Just one: the dimension is 1, and you do not have to go through the whole complicated procedure — it is clear that this is a one-dimensional signal space. All the signals are multiples of a single basis signal, namely √(1/T) between 0 and T; that is the orthonormal signal Gram-Schmidt generates, and the dimension works out to 1. What are the vector representations under this basis? x0 = +√Es — one dimension, a single number — and x1 = −√Es. These vectors are said to live in what is called signal space: x0(t) and x1(t) are signals, and the corresponding vectors lie in signal space — that is just the name we give that vector space.
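A quick numerical confirmation of this one-dimensional picture — Es, T, and the sampling grid are arbitrary choices for the sketch: correlating each BPSK waveform against φ(t) = √(1/T) on [0, T] recovers the scalars ±√Es.

```python
import numpy as np

Es, T, fs = 2.0, 1.0, 1000             # assumed signal energy, duration, grid
t = np.arange(0, T, 1 / fs)
dt = 1 / fs

phi = np.full(t.size, np.sqrt(1 / T))  # the single basis signal on [0, T]
x0 = np.full(t.size, np.sqrt(Es / T))  # bit-0 waveform
x1 = -x0                               # bit-1 waveform: 180-degree phase reversal

# Vector (here scalar) representations: integral of x_i(t) * phi(t) dt
v0 = np.sum(x0 * phi) * dt
v1 = np.sum(x1 * phi) * dt
print(v0, v1)                          # +sqrt(Es) and -sqrt(Es)
```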
So how does the signal space look? It is just one-dimensional; let me mark the origin to be sure. Where are the signals? One at +√Es, the other at −√Es. That is my entire signal space — much simpler than looking at every single t and worrying about the whole waveform. Simply correlate at the receiver, get one number for each received waveform, and that is good enough.

In vector form, x has just one component, so the random vector X becomes a random variable taking the value +√Es with probability 1/2 and −√Es with probability 1/2. You can check that E[X²] — everything is real here, so no modulus is needed — works out to Es as well, just as we argued it should.

What happens at the receiver? Let me draw the waveform model too, for the complete picture. You receive a waveform r(t). What do you do with it? Correlate with φ1(t), which amounts to integrating r(t) from 0 to T, with a scaling factor 1/√T. There are many assumptions hidden here: what is my 0, and what is my T? You have to know where your 0 is and you have to know the length T — there is a synchronization problem you will typically face. Assuming all that has been solved — you know where 0 is and you know the accurate value of T — all you do is integrate from 0 to T (there is some constant scaling here and there, but do not worry too much about it), and you get one value, which we call y, and which we know equals x + n. What is this n? Normal with zero mean and variance N0/2. How do I know N0/2? As I said, at the receiver it is possible to measure such things; somebody will tell you your N0/2, so you can simply use it in the model.

Is that clear? The entire BPSK model, assuming plenty of bandwidth, is really simple. In signal space I transmit one of two values, +√Es or −√Es — in fact you can normalize Es to 1 and think of ±1 — according to whether the bit is 0 or 1. Noise is added to that point in signal space, normal with mean 0 and variance N0/2, and you get some y. It is enough to process that y; you do not have to look at r(t) at each and every time instant. Simply integrate from 0 to T and look only at that value — it is enough, and it is optimal: you lose no information.
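In vector form the whole BPSK channel is one line. This sketch, with assumed values of Es and N0, draws a bit, maps it to ±√Es, and adds the N(0, N0/2) noise the correlator produces.

```python
import numpy as np

rng = np.random.default_rng(1)
Es, N0 = 1.0, 0.1                       # signal energy and noise density (assumed)

b = rng.integers(0, 2)                  # the single bit, equiprobable
x = np.sqrt(Es) if b == 0 else -np.sqrt(Es)   # BPSK point in signal space
n = rng.normal(0.0, np.sqrt(N0 / 2))    # noise out of the correlator
y = x + n                               # the entire vector (scalar) channel
print(b, x, y)
```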
Convince yourself that this is true. Now, if you repeat this experiment several times and plot the y you get each time, that plot shows the received points in signal space. The transmitted points are always ±√Es; the received points scatter around +√Es and −√Es depending on the value of N0/2 — very close to those values if N0/2 is small, reasonably spread out if it is large. When do you think you will make errors? Even without going through the detection problem in detail, what is the intuitive, simple thing to do with y to determine b? If y is positive, declare bit 0 (say bit 0 corresponded to +√Es), and if it is negative, declare bit 1. It seems like the obvious rule, and later we will derive it formally and confirm that the intuition is justified in theory. So when do you make an error? When the noise is large enough to move the point to the other side of the origin. This picture is called a constellation — a signal constellation: the signal space together with the points that represent your signals. On the transmitter side the constellation is very simple, containing only two points; on the receiver side, after several runs, it contains all kinds of points and looks much more random.
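You can already estimate the error rate of this sign detector by simulation. The closed-form answer, Q(√(2Es/N0)), is the standard result we will derive formally when we treat detection; here I only compare a Monte Carlo estimate against it, with sample size and parameters chosen arbitrarily.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
Es, N0, trials = 1.0, 0.5, 200_000      # assumed parameters and sample size

bits = rng.integers(0, 2, trials)
x = np.where(bits == 0, np.sqrt(Es), -np.sqrt(Es))
y = x + rng.normal(0.0, np.sqrt(N0 / 2), trials)
bhat = np.where(y > 0, 0, 1)            # the simple sign detector
ber = np.mean(bhat != bits)

Q = lambda z: 0.5 * math.erfc(z / math.sqrt(2))   # Gaussian tail function
print(ber, Q(math.sqrt(2 * Es / N0)))   # simulated vs. closed-form error rate
```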
The first interesting thing to do in this problem is to compute f(y | x). What is this distribution? Whenever I condition, I have to specify the conditioning value. There are two possibilities: y is normal with mean +√Es and variance N0/2 when x = +√Es, and normal with mean −√Es and variance N0/2 when x = −√Es. (I first wrote the means as ±1 — I am used to thinking of BPSK as ±1, and it feels strange to carry √Es around — but with our normalization the means are ±√Es.) And what is f(y), the pdf of y alone? You convolve the distributions of x and n, and you get what is called a mixture Gaussian: two Gaussians centered at +√Es and −√Es, each with weight 1/2. If you plot f_Y(y), you see two bumps, at −√Es and +√Es.

What is the height of each bump? Give me an approximate answer first: 0.5 divided by √(2π)·σ, where σ = √(N0/2). Why only approximate? Because the other Gaussian also contributes at that point — a term carrying e to the power of minus something times Es. It will be very small, but depending on the value of Es it might be significant; adding it gives the accurate expression, which you can write down if you want. What about the height at y = 0? There both Gaussians contribute equally, so you get 2 times a term with an exponential factor — a little ugly, but one can do it. What is the probability that y is positive? One half, by symmetry — easy. What is the probability that y exceeds +√Es? That takes some computation, but roughly the value is 1/4: it will be close to 1/4 when the noise is small compared to Es, with another Q term — of large argument, hence small — added to it; you need not worry much about it, but it is always there.

These are just questions to check that somebody is taking my plea to read up on random processes and Gaussian random variables seriously. Later there will be much more complicated questions, so you had better have Gaussian random variables at your fingertips. So this is BPSK — any questions? Once again, I have described a very simple version, one that assumes a lot of bandwidth. Is this baseband or passband signaling? It is baseband: the spectrum dies down eventually, so up to some approximation you can think of it as baseband — technically the bandwidth is huge, but it is clearly centered at baseband, not around any carrier.

There are some modifications. One is called on-off keying: instead of sending +√Es and −√Es, you send +√(2Es) and 0 — either on or off — which is what is usually done in optics: you either shine light or you shut it off.
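The heights and tail probabilities just discussed can be written exactly. This sketch evaluates them for assumed values of Es and N0, confirming that the approximate answers — peak height 0.5/(σ√(2π)) with σ = √(N0/2), and P(y > √Es) ≈ 1/4 — are accurate up to a small correction from the other Gaussian in the mixture.

```python
import math

Es, N0 = 1.0, 0.2                       # assumed values for illustration
s, sig = math.sqrt(Es), math.sqrt(N0 / 2)

gauss = lambda y, m: math.exp(-(y - m)**2 / (2 * sig**2)) / (sig * math.sqrt(2 * math.pi))
f_y = lambda y: 0.5 * gauss(y, s) + 0.5 * gauss(y, -s)   # the mixture-Gaussian pdf

peak_approx = 0.5 / (sig * math.sqrt(2 * math.pi))       # ignores the far Gaussian
print(f_y(s), peak_approx)              # exact height vs. approximation

Q = lambda z: 0.5 * math.erfc(z / math.sqrt(2))
p_pos = 0.5                             # P(y > 0), by symmetry
p_above = 0.25 + 0.5 * Q(2 * s / sig)   # P(y > sqrt(Es)): 1/4 plus a small Q term
print(p_pos, p_above)
```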
That is on-off signaling. In what way will it differ from BPSK? If you look at a single signaling interval [0, T], essentially nothing changes. Your pdf for y changes: the whole picture shifts, and the bumps sit at 0 and √(2Es) — note it is √(2Es), not 2√Es, a slight difference — but otherwise nothing seems to change. When you string together many signaling intervals, though, some things do change: the repeated BPSK process and the repeated on-off process have different kinds of spectra — the DC component, for instance, is very different. All of this has to be taken into account, but typically it can be managed. So that is BPSK; I have a picture of the waveform, but we do not really need it.

The next scheme is a passband signaling method — a very elementary and simple example — which I will call frequency shift keying, or FSK for short. Once again we pick n = 1, one bit at a time, so B is a single bit, 0 or 1 with equal probability. I need to define two signals. This time I will write them explicitly: x0(t) = √(2Es/T) cos(2π m0 t / T) for t in [0, T], and I will denote the frequency m0/T by f0. Likewise x1(t) = √(2Es/T) cos(2π m1 t / T) — again a cosine, not a sine — on [0, T], with f1 = m1/T. Here m0 and m1 are integers — you can take them to be positive integers without any trouble — and they must not be equal, since otherwise the two signals are identical. Why the factor √(2Es/T)? Because then the squared norm of x0(t) over [0, T] works out to exactly Es, which is what I want: the signal energy is Es. You can plot them for, say, m0 = 1 and m1 = 2 — I have a plot here, but it is not too critical.

What about bandwidth? How does the spectrum of x0(t) look? A sinc, shifted to both sides. The trick is to think of x0(t) as a rect multiplied by a cosine that extends over all time: the cosine transforms to two deltas, the rect transforms to a sinc, and you convolve in the frequency domain, so the sincs move out to ±f0 (and ±f1 for x1). So technically this is a passband signal, with its bandwidth concentrated around f0 and f1. I will assume a very large bandwidth is available around f0 and f1 — today that is a very bad assumption; there is pretty much no free spectrum out there in such large quantities. So think of the inverse assumption instead: whatever spectrum you have, take T so large that 1/T fits nicely inside it. That is the practical interpretation — I will be signaling very slowly, but I will stay within my spectrum. So once again we need a large
bandwidth to justify this signaling, we will see later on how to get rid of that but for now we will assume that and go ahead and proceed and understand it and then we will come back and apply these ideas, okay, so the bandwidth is large around f0 and f1, typically you take f0 and f1 to be reasonably close, not too far away, okay, so you want to occupy some band, right, you do not want to be very far away, alright, so what do you think will happen if you do Gram-Smith on this, yeah, so you will get two different things, do you see that, okay, so it is very easy to see also which ones they will be, okay, so what will be phi 1 of T, okay, so you will get m equals 2 and phi 1 of T you can take to be root 2 by T cosine 2 pi f0 T between 0 and T and phi 2 you can take as what, same thing, root 2 by T cosine 2 pi f1 T between 0 and T, right, is that fine, so f0 and f1 being integer multiples of 1 by T, these two will be orthogonal as well, so if you multiply them and integrate over the 0 to T interval, you will get them to vanish and each of them will be also unit norm, okay, so you can see that also will hold, okay, so this is a valid orthogonal signal, okay, all right, so once I do this, what will be my vectors x0 plus root es 0, okay, so I have been thinking of transpose all the time, so I will write transpose, my vector x1 is 0 plus root es transpose, so you see I already had to get a two dimensional signal space, okay, my signal space is two dimensional, if I want to draw it, how will it look, okay, my origin is here, so my one signal will be here, say bit 0 and bit 1 will be here, okay, so this is my constellation, fsk constellation, all right, okay, so you can once again compute the expected value of the random variable x, which denotes random vector x, which denotes signal, you will see the average energy works out to es, okay, so remember I have the vector that represents my signal is actually x, which would be x1, x2, okay, all right, so what is the 
distribution for X1? Okay, for X you know what the distribution is: it takes the value root Es comma 0, transpose, with probability half and 0 comma root Es, transpose, with probability half. What about the distribution for X1? It takes the values 0 and root Es, each with probability half; similarly X2 takes the values 0 and root Es, each with probability half, okay. Are X1 and X2 independent? No, they are not, okay, they cannot be independent: if they were independent, X would have to take four different values, it cannot take just two values, okay, so they are not independent, okay. So this is one of those cases where you cannot reduce it to some kind of a one-dimensional problem; it will only be two-dimensional, okay. So what do you do at the receiver then? Okay, so once again I will do that in detail for this case, and then we will maybe stop bothering about it, maybe we will, I do not know, okay. So at the receiver, when you get an r of t, what should you do? You should do two correlations, okay: one is a correlation with phi 1 of t, the other is a correlation with phi 2 of t, and you get two values out, Y1 and Y2, okay, and then you go through a detection process with this Y, okay. So my Y becomes X plus N. What is my N? Okay, this N is N1, N2, iid normal with 0 mean and variance N0 by 2, okay, is that fine, okay. So what is the joint distribution for Y1 and Y2, or basically what is the distribution for Y? Is it going to be jointly Gaussian? How many of you think Y1 and Y2 will be jointly Gaussian? Yes, no, maybe; how many of you think no? No, right, it cannot be jointly Gaussian. What will it be? Yeah, a sum of two jointly Gaussian pieces, a mixture of Gaussians, a mixture of jointly Gaussians you can maybe say, but it is definitely not jointly Gaussian, right. So what will it look like, okay, in the signal space how will it look? You have one point at 0 comma root Es, another point at root Es comma 0; now all your received points will be around both of those points, and there will be a Gaussian centered around each of those signal
points at the receiver, okay. So when you try to plot the density f Y, I cannot plot it here, but if you try to plot the distribution of Y, you will have two Gaussians, one centered at each transmitted signal point, okay, both of them circularly symmetric around that point, okay, but you will have to add both of them up, each weighted by half, okay, so you will get a crazy-looking expression, okay, so that is how the received constellation will look, okay. So one question I want to ask before we proceed, just to see how comfortable you have become, okay: suppose I ask you to derive some kind of baseband equivalent picture of this passband signaling scheme, how many of you think you will be comfortable doing it? Do you think that makes sense, should I worry about a baseband equivalent picture for this passband signaling scheme? I showed in the first class, the second class, that every real passband signal, which is what we are dealing with here, has a complex baseband signal as its equivalent, okay, so start off with that. Let us go back to this picture: tell me what the complex envelope of x0 of t is; take a couple of minutes and try to derive it. The complex envelope is a sinc? I can see several people not even thinking about trying, okay, I am sorry. What do you mean, the complex envelope in the frequency domain? I am not interested in all that, I want x0 tilde of t, which is the, take your time, I mean it is not something that you can think your way through; think about it, write down some things, we will get an answer. Sorry, a rect? Okay, is it a real function? It is real, the complex envelope is real? How many of you think the complex envelope will work out to be real? Approximately real? What is approximately real, give me a precise answer, man, you are giving me all kinds of answers. What is x0 tilde of t? Okay, so that is the assignment for tomorrow's, for next week's class, okay, so Monday morning 8 o'clock, it is the first thing I am going to ask,
and if nobody answers that question, we are not proceeding, okay, so we will wait till everybody finishes that, okay, somebody has to come up with an answer for me, okay. What is x0 tilde of t, x1 tilde of t? Can we get any kind of expression, does it seem to work or does it not work, what is the baseband equivalent picture for this passband signaling scheme, okay, so that is the question you have to deal with. We will stop here for this class; the next thing is something else which is new, okay, think about it, come back.
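For reference, the "crazy-looking expression" for the density of Y mentioned above can be written out explicitly; it is just the equal-weight two-dimensional Gaussian mixture implied by the discussion, with signal points s0 = (root Es, 0) and s1 = (0, root Es) and per-component noise variance N0/2 (so the 2-D normalizing constant 1/(2 pi sigma^2) becomes 1/(pi N0)):

```latex
f_Y(y) \;=\; \frac{1}{2}\cdot\frac{1}{\pi N_0}
             \exp\!\left(-\frac{\lVert y - s_0\rVert^2}{N_0}\right)
      \;+\; \frac{1}{2}\cdot\frac{1}{\pi N_0}
             \exp\!\left(-\frac{\lVert y - s_1\rVert^2}{N_0}\right)
```

Each term is circularly symmetric about its own signal point, and the half weights come from the equiprobable bits; no single jointly Gaussian density can produce this two-bump shape.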
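As a quick numerical check on what was said above about the FSK signals, here is a minimal sketch that builds x0 of t and x1 of t, and verifies that the signal energy over 0 to T is Es and that phi 1 and phi 2 form an orthonormal pair. The specific values T = 1, Es = 1, m0 = 1, m1 = 2 and the sampling grid are illustrative choices of mine, not from the lecture.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch): symbol time T, signal
# energy Es, and frequencies f_i = m_i / T with integer m0 != m1.
T, Es = 1.0, 1.0
m0, m1 = 1, 2
f0, f1 = m0 / T, m1 / T

fs = 10_000                           # dense sampling grid for numerical integrals
t = np.arange(0, T, 1 / fs)
dt = 1 / fs

# FSK signals x_i(t) = sqrt(2 Es / T) cos(2 pi f_i t) on [0, T)
x0 = np.sqrt(2 * Es / T) * np.cos(2 * np.pi * f0 * t)
x1 = np.sqrt(2 * Es / T) * np.cos(2 * np.pi * f1 * t)

# Orthonormal basis phi_i(t) = sqrt(2 / T) cos(2 pi f_i t)
phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * f0 * t)
phi2 = np.sqrt(2 / T) * np.cos(2 * np.pi * f1 * t)

energy_x0 = np.sum(x0 ** 2) * dt      # integral of x0^2 over [0, T]: ~ Es
cross = np.sum(phi1 * phi2) * dt      # inner product of phi1, phi2: ~ 0
norm_phi1 = np.sum(phi1 ** 2) * dt    # norm square of phi1: ~ 1

print(energy_x0, cross, norm_phi1)    # ~ 1.0, ~ 0.0, ~ 1.0
```

Because f0 and f1 are integer multiples of 1/T, the cosines complete whole cycles on the interval, which is exactly why the cross term vanishes and the energies come out clean.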
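The claim that Y = X + N is a mixture of two Gaussians rather than jointly Gaussian can also be seen by simulation. This is a sketch under assumed values Es = 1 and N0 = 0.1; the sample size, seed, and the choice to look at the correlation of Y1 and Y2 are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative values: Es = 1, noise variance N0/2 = 0.05 per dimension.
Es, N0 = 1.0, 0.1
s0 = np.array([np.sqrt(Es), 0.0])     # vector for bit 0: [sqrt(Es), 0]^T
s1 = np.array([0.0, np.sqrt(Es)])     # vector for bit 1: [0, sqrt(Es)]^T

n_sym = 100_000
bits = rng.integers(0, 2, n_sym)                        # equiprobable bits
x = np.where(bits[:, None] == 0, s0, s1)                # transmitted vectors
n = rng.normal(0.0, np.sqrt(N0 / 2), size=(n_sym, 2))   # iid N(0, N0/2) components
y = x + n                                               # vector channel output

# The mixture mean is (s0 + s1) / 2 = [sqrt(Es)/2, sqrt(Es)/2], even though
# the density of Y is a half-half mixture of two circularly symmetric
# Gaussians centered at s0 and s1, not a single jointly Gaussian density.
print(y.mean(axis=0))                                   # ~ [0.5, 0.5]

# One symptom of the mixture: Y1 and Y2 are strongly negatively correlated,
# because when the transmitted point is s0, Y1 is large and Y2 is small,
# and vice versa for s1 -- mirroring the dependence of X1 and X2.
print(np.corrcoef(y[:, 0], y[:, 1])[0, 1])              # clearly negative
```

Plotting a 2-D histogram of `y` would show the two circular clouds, one around each signal point, exactly as described for the received constellation.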