So this is lecture 14. A couple of things before we continue. One thing I should have emphasized more explicitly at the beginning is that we have to keep in mind the picture we had for our communication system. We always said: I have n bits which I map into some symbol, some a + jb belonging to the constellation, and then I multiply with phi(t), which is 1/sqrt(T) on [0, T], and transmit. That is the picture so far, for M-QAM. This goes through a channel and so on. Now, one more thing I should say. For such a system, what is my bit rate going to be? I'm expecting n bits per T seconds. Once I say this, what does it mean? If I want this to be my bit rate, I should be able to operate this transmission continuously, without stopping: every T seconds I should repeat it, without any breaks or delays or waiting for something to happen. Only then can I say I have a continuous bit rate of n/T. So when I say my bit rate is n/T, I'm implicitly assuming I'm operating this every T seconds, continuously, without stopping. Is that feasible? That's a question we quietly sidestepped. How did we sidestep it? We said there's really no channel: we assumed the channel response to be pretty much a delta. Ideal. The bandwidth is so large that the channel response is almost a delta; that was our assumption. Once you make this assumption, whatever you put out is exactly what's available at the receiver: if you transmit x(t), the same x(t) is available after the channel, you add noise to it, and you continue. So what happens from T to 2T?
Another symbol gets transmitted, and there will be no interference between the signal from T to 2T and the one from 0 to T. The reason is the assumption that h(t) is a delta; then there's obviously no question of interference, and none of these issues arise. The moment you relax this assumption and let h(t) not be a delta, some strange things happen, so one needs to look at it carefully. So if what I talked about towards the end of last class about inter-symbol interference was not clear, maybe this is another way of viewing it. So far we didn't worry about it because we assumed the channel was a delta, a very, very ideal large-bandwidth assumption. Under that assumption there's no question of any interference: y(t) from 0 to T depends only on x(t) from 0 to T, y(t) from T to 2T depends only on x(t) from T to 2T, and there's no other dependence. The signal corresponding to one symbol never interferes with the signal corresponding to another symbol; that never arises in our ideal picture. Now I want to relax this assumption a little and let h(t) be not quite a delta, but still sharply concentrated: its support is very, very small compared to T, so the channel bandwidth is very large compared to 1/T. If that is the case, I'll show you an illustration for BPSK of how it actually works and why ISI doesn't really play a role. I'll show this in a slightly non-ideal scenario just to emphasize the point, because it's important to see where this is coming from. So I'll spend some time on it. I'll take BPSK: just one bit, 0 and 1 mapping to +1 and -1, and I'll assume I'm operating it continuously.
My x(t) might look like a square wave: one level over [0, T], another over [T, 2T], then 3T, 4T, 5T, and so on. That's continuous operation: every T seconds I put out one bit at the transmitter. What am I assuming about the channel? Very large bandwidth compared to 1/T. So if I draw both on the same scale, how should h(t) look? Very narrow compared to T. If you want to draw it causally, just to be safe, it will look like a tiny pulse, much, much narrower than a symbol; I'm exaggerating it a little in the drawing. On a scale where 0 to T is one division, it's extremely small. That's how my h(t) is going to be, and it's slightly more practical than saying delta, which is a very idealized theoretical assumption. What happens if I convolve these two? If I convolve a tiny-support signal with this square wave, the output is almost the same square wave, except that the corners get a little rounded, that's all. You can visualize this convolution. Since I'm drawing h(t) causally, there will also be an overall delay, but we'll say the delay is known at the receiver and adjusted for. In the ideal case, with h(t) a delta, you get exactly x(t); when it's not so ideal, it's still not too bad, and the output looks essentially like x(t) with rounded corners. There's no reason it would look like anything else. So it's an okay approximation to say the received signal, before noise, equals x(t), and then you can process as before.
For instance, when you integrate from 0 to T, the part due to the small rounding at the edges contributes essentially nothing; the bulk of the pulse is what contributes. And when you threshold at zero, it'll be very clear whether it's positive or negative; that little rounding will not matter. If there's no noise, of course; noise will do its own thing, but the signal itself is clean as far as the correlation is concerned. So even when I operate with a bunch of bits back to back, things work out, as long as the time scales are such that the support of h(t) is really, really tiny compared to my signaling interval T. What's the interpretation in the frequency domain? Support being very small means the bandwidth of the channel is very large compared to the signaling bandwidth of x(t). These are equivalent statements; convince yourself you can view it in both domains. So even when I operate continuously, one bit after another, as long as my large-bandwidth assumption holds, I can treat the received signal as just x(t) itself plus noise. And this was a crucial step. Remember, when I write y(t) = x(t) + n(t), I'm assuming x(t) is the signal at the receiver after the convolution has happened. Then, what is the first step at the receiver, to go from continuous time to a discrete view? Correlate with an orthonormal basis. And how do we find the orthonormal basis? We find it for x(t), not for x(t) convolved with h(t). What should you really find it for? For x(t) convolved with h(t); that's what you actually receive. But since I know the received signal is pretty much x(t) itself, I correlate with the orthonormal basis for x(t).
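To make this concrete, here is a minimal numerical sketch (not from the lecture; the sampling rate, symbol time, and channel width are all assumed for illustration) of a BPSK square wave passing through a channel whose support is tiny compared to T, followed by integrate-and-threshold detection:

```python
import numpy as np

fs = 1000                                   # samples per second (assumed)
T = 1.0                                     # symbol interval
Ns = int(fs * T)
bits = np.array([1, -1, -1, 1, 1])          # BPSK symbols +/-1
x = np.repeat(bits, Ns).astype(float)       # rectangular pulse train x(t)

# Channel h(t): a narrow causal pulse of support T/50 with unit area,
# standing in for the "almost a delta" assumption.
Lh = Ns // 50
h = np.ones(Lh) / Lh
y = np.convolve(x, h)[:len(x)]              # received signal, noise-free

# Integrate over each symbol interval and threshold at zero; the small
# rounded corners near the edges contribute essentially nothing.
decisions = np.array([np.sign(y[k*Ns:(k+1)*Ns].sum())
                      for k in range(len(bits))])
print(decisions)                            # recovers the transmitted bits
```

Away from the symbol boundaries, `y` equals `x` exactly; only a band of width about T/50 around each transition is distorted, which is why the per-symbol integral barely notices the channel.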
So even when I operate continuously, say I send 10 bits at a time, to decode all 10 bits jointly, what should I do? I should look at the signals corresponding to all 10 bits: 2^10 possibilities, find the orthonormal basis for them, and go about it. But that problem breaks down into a symbol-by-symbol detection problem because of the structure of the orthonormal basis; I'll go through that once again so you see what I mean. But remember, the assumption we made is crucial at the receiver: I'm assuming my orthonormal basis can be used as if it were for x(t) itself, without worrying about the convolution. Ideally, you'd have to account for the convolution and then find your orthonormal basis, because that's what you receive. Hopefully this picture is clear in your mind. This is what I mean when people ask about the infinite-bandwidth assumption or the passband equivalent; this is the way to picture it: the approximation works out in this regime. Any questions? Note this is a baseband picture. You can also think of a passband picture. What happens in passband? This signal becomes the envelope of your passband waveform: you'll have a real part and an imaginary part riding on cos and sin, the signal will look more passband-like, but the envelope will be this. All right. So what's the logical next question? What happens when I want to shrink my T? How small can I make T relative to the support of h(t)? I want to keep reducing T. What am I doing in terms of bandwidth? Using more and more of it. And I want to use more bandwidth, because, as you'll learn soon when this 3G auction happens, bandwidth is a very, very precious commodity; you pay a lot of money for it. So you want to use as much of the bandwidth as is out there.
So you want to decrease T. But you can see clearly that when T becomes comparable to the support of h(t), the convolution will hardly look like x(t) anymore. You can try this in MATLAB; it's not difficult to picture. When the support of h(t) becomes close to T, x(t) convolved with h(t) looks drastically different from x(t), and you cannot run the same receiver as before; you can't just happily use the orthonormal basis for x(t). Things have to change. That is the situation where ISI happens: y(t) from T to 2T will depend on x(t) over, you could say, pretty much all time, depending on what your h(t) is. If h(t) is causal, you can think of all the previous symbols being involved. That's the crucial picture to keep in mind. T comparable to the support of h(t) is the time-domain statement; the equivalent frequency-domain statement is that the bandwidth of x(t) becomes comparable to the bandwidth of h(t). Then y(t) over [T, 2T], for instance, depends on x(t) over a large range, because x(t) is convolved with h(t) and h(t) extends beyond T, so it influences many things. This is what's called ISI, or Inter-Symbol Interference: the signal you put out for the symbol on [0, T] also plays a role in y(t) on [T, 2T], which you expected to carry only symbol 2. That fundamentally changes your receiver. The reason is that you now have to build the orthonormal basis for x(t) convolved with h(t); you can't just happily build it for x(t) itself, and you can't treat each symbol individually. You have to group all the symbols together and look at them jointly.
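The breakdown is easy to see numerically. In this sketch (all parameters, the sampling grid and the symbol values, are assumptions for illustration), the channel support is made equal to T, and the sample at the middle of a symbol becomes an average of the current and previous symbols, which is genuine inter-symbol interference:

```python
import numpy as np

fs = 1000
T = 1.0
Ns = int(fs * T)
bits = np.array([1, -1, 1, -1])             # alternating BPSK symbols
x = np.repeat(bits, Ns).astype(float)

# Now let the channel support be a whole symbol interval: h(t) is a
# unit-area pulse of width T, no longer "almost a delta".
h = np.ones(Ns) / Ns
y = np.convolve(x, h)[:len(x)]

# Middle of symbol 1: the channel averages half of symbol 1 and half of
# symbol 0, so the opposite-signed neighbours nearly cancel.
mid1 = Ns + Ns // 2
print(y[mid1])                              # close to 0, not to -1
```

With alternating symbols the received sample at mid-symbol collapses towards zero, so a threshold detector built for the wide-bandwidth case has essentially no margin left.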
Hopefully that's also clear. Each symbol cannot be handled individually, because one symbol interferes with another; you can't do it independently like we did before, you have to look at everything together. Next, I'm going to write down carefully why we could get away with doing each symbol independently in the earlier case, under the large-bandwidth assumption. I wanted to continue from last time, and then I realized I hadn't emphasized this notion of continuous operation and the intricacies involved, so I thought I should spend some more time on it. So far, when we correlated, we treated each symbol independently: if you're sending 10 symbols one after the other, we never really bothered to look at all 10 together. The reason, as you'll see, is very easy. Let me illustrate it once more, for continuous operation in the previous case, meaning the ideal large-bandwidth case. Intuitively it's clear, but I want to make sure it works out from an orthonormal-basis point of view as well. So suppose in my picture I have a sequence of n-bit vectors B0, B1, ..., B(L-1). That's nL bits, grouped n at a time. I'll illustrate it for M-QAM; anything else works the same way. I map it to a sequence of symbols s(k) = a(k) + j b(k), with k running from 0 to L - 1.
So this is my symbol sequence, and this is my bit sequence of nL bits. What constellation am I using? With n bits per symbol, you have to use a 2^n-QAM constellation; that's the case I'm considering. So you've got L symbols, hence 2^(nL) possibilities for the bit sequence, and the same 2^(nL) possibilities for the symbol sequence. Then what did I do? I'm going to describe it as filtering by phi(t), where phi(t) = 1/sqrt(T) on [0, T]. So I'm filtering, with impulse response phi(t), a discrete-time sequence with one value every T seconds. What do I get out? My transmit signal x(t) = sum over k = 0 to L - 1 of s(k) phi(t - kT). That is the signal that goes out. According to my ideal assumption, the channel is a delta, so it does nothing; then noise gets added, and since the channel is a delta, y(t) is exactly x(t) + n(t). So when I want to correlate at the receiver, I can use an orthonormal basis for x(t). How many possible x(t) do I have? 2^(nL). It seems like a huge number, but how many basis functions will you need? One? It can't be one. If you do Gram-Schmidt on all these 2^(nL) signals x(t), what basis should you get? If L = 1, the basis is just phi(t). If L = 2? Yes, just the shifted versions. So if you do an orthonormal decomposition with all 2^(nL) possible x(t), you get only L basis elements. What are they?
phi(t), phi(t - T), phi(t - 2T), and so on. That is crucial: the fact that the basis elements are time shifts of one another is extremely important, and it happened because I took phi(t) this way. So I want to emphasize that. The orthonormal basis works out to be phi(t - kT) for k between 0 and L - 1. Convince yourself this is true; it's very important. Once the basis works out this way, everything else we did is justified: we can treat each symbol separately, simply correlating with phi(t - kT). Do you see why? When I correlate y(t) with phi(t), all the other symbols drop out; I'm only picking up the first symbol. So I might as well do it after T seconds; I don't have to wait LT seconds. After T seconds I correlate with phi(t); after 2T seconds I correlate with phi(t - T), and what I get involves only s(1). That's the nice thing about the way this works out. When you receive y(t), imagine running it through a bank of correlators: correlate with phi(t), with phi(t - T), and so on up to phi(t - (L-1)T). What do you get out? From the first, nothing but s(0) plus a noise term, call it n(0); from the next, s(1) plus noise; and so on down to s(L-1) plus noise. And what is the PDF of all these noise terms? They are i.i.d. Gaussian with zero mean and variance N0/2, the level of the PSD; all of this you can show, there's no problem. So the detectors can be run independently; they are all independent.
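Here is a small numerical version of that correlator bank (a sketch with assumed parameters: rectangular phi(t) = 1/sqrt(T) on [0, T), QPSK symbols, and no noise added), showing that correlating with each shifted basis function returns exactly one symbol:

```python
import numpy as np

fs = 100
T = 1.0
Ns = int(fs * T)
dt = 1 / fs
phi = np.ones(Ns) / np.sqrt(T)              # phi(t) = 1/sqrt(T) on [0, T)

s = np.array([1+1j, -1+1j, -1-1j, 1-1j])    # QPSK symbol sequence (assumed)
x = np.concatenate([sk * phi for sk in s])  # x(t) = sum_k s(k) phi(t - kT)

# Correlate y(t) = x(t) with phi(t - kT): since each shifted pulse lives
# only on [kT, (k+1)T), all the other symbols drop out of the integral.
est = np.array([np.sum(x[k*Ns:(k+1)*Ns] * phi) * dt
                for k in range(len(s))])
print(est)                                  # recovers s
```

Because the pulses have disjoint supports, the same correlator can be reused every T seconds, which is exactly the circuit-reuse point made below.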
All this is really intuitive, but I wanted to show the pure mathematical basis for it. It's very intuitive why each symbol is independent once I make the delta(t) assumption; but I wanted to show that even if you do an orthonormal decomposition for a bunch of symbols together, you essentially get an independent basis element per symbol, so there's no problem. And if you look closely, you'll see you don't need all these correlators as separate circuits. The same circuit can be reused every T seconds, because phi(t) acts only on [0, T], phi(t - T) only on [T, 2T], and so on; you can happily reuse the same circuitry and simplify the hardware. So many things depended on the basis becoming phi(t - kT): phi(t) and its shifted versions formed an orthonormal basis, and because of that the decoding problem simplified and became symbol-wise independent. That's crucial. Right now it seems we can do this only when h(t) is a delta, or more practically, when h(t) has a huge bandwidth compared to x(t). But it turns out one can look at this same system closely, allow h(t) to have a general bandwidth, and then ask whether there exists a phi(t) that fits inside the bandwidth of h(t) more tightly while still having phi(t), phi(t - T), phi(t - 2T), ..., phi(t - kT) orthonormal. If so, I can repeat the same story. So the ISI case is a generalization of the above: we will generalize this and try to fit it into a finite bandwidth. If you understood why the ideal case, with delta(t) as the channel response, splits into symbol-wise detection, it will also be clear to you why the ISI case splits in a similar way.
It's definitely possible for the ideal channel assumption, and it's definitely possible for the ideal finite-bandwidth channel assumption too. Any questions? Is it clear? I'll come to that; when I deal with the ISI case, I'll answer that question. Your question is essentially about this capital L: it will become irrelevant. The way we choose things, L can be any number, so it doesn't matter; maybe you'd want to optimize over it, but that's a different question. Now I want you to convince yourself of a couple of things. In the time domain, it's clear and obvious why the phi(t - kT) form an orthonormal basis. Can you convince yourself of the same thing in the frequency domain? These are all L2 functions, and the Fourier transform is inner-product preserving, so if I take the Fourier transform of each phi(t - kT), what should I get? A set of orthonormal functions again. What do those transforms look like? You get a sinc, multiplied by e^(-j2*pi*f*kT), a different linear-phase factor for each k. Are all of them orthonormal? Yes, and you should be comfortable with that fact. It's slightly non-trivial; it may not be as obvious as in the time domain, but try to think about it that way. It's useful because designing orthonormal signals in the time domain was very easy here, but when you want to optimize your bandwidth, you'll want to design in the frequency domain, so you should have reasonable comfort going back and forth. All right, if there are no more questions, let me proceed.
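You can check the frequency-domain claim numerically. The sketch below (the DFT stands in for the Fourier transform, and the grid sizes are assumptions) uses Parseval's relation: numpy's DFT preserves inner products up to a known scale, so the shifted rectangular pulses, whose spectra are sincs with different linear-phase factors, remain orthogonal in frequency:

```python
import numpy as np

fs, T, L = 100, 1.0, 4
Ns = int(fs * T)
N = Ns * L
dt = 1 / fs

def shifted_phi(k):
    """phi(t - kT): rectangular pulse of height 1/sqrt(T) on [kT, (k+1)T)."""
    p = np.zeros(N)
    p[k*Ns:(k+1)*Ns] = 1 / np.sqrt(T)
    return p

p0, p1 = shifted_phi(0), shifted_phi(1)
time_ip = np.sum(p0 * p1) * dt              # time-domain inner product

# Frequency domain: for numpy's DFT convention, sum F0*conj(F1) equals
# N * sum p0*p1 (Parseval), so the rescaled inner product must agree.
F0, F1 = np.fft.fft(p0), np.fft.fft(p1)
freq_ip = np.sum(F0 * np.conj(F1)) * dt / N
print(time_ip, freq_ip)                     # both ~0: orthogonal in both domains
```

The same computation with `shifted_phi(0)` against itself gives 1 in both domains, confirming orthonormality, not just orthogonality.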
So, to summarize: in the large-bandwidth case, I could happily pick phi(t) with a very large bandwidth. I only had to worry about orthonormality, not about the bandwidth of phi(t) itself; the channel bandwidth was much, much larger, so there was no problem. (In the ISI case it will be different: the way we handle orthonormality will change, we'll convert to discrete time and do it very comfortably there. I just wanted you to start thinking about it.) But if I want to bring the bandwidth of phi(t) close to the bandwidth of h(t) and constrain it by that, then I do have to worry about bandwidth. So what case are we considering now? H(f) flat from -W/2 to W/2. If I say finite bandwidth, then whatever I do, phi(t) must fit inside this band, and preferably fill it. If I use only a very small bandwidth, I know how to do it: I can take the pulse essentially flat and not choose phi(t) carefully at all. But I don't want that, because I want to occupy more and more of the bandwidth. So I want the bandwidth of phi(t) close to W/2, pushed as high as possible. That is the significant change from before. If I didn't make that change, I might as well pick a flat pulse with T really large compared to 1/W and be happy. I don't want that: I want the bandwidth tending to W/2, but still have what?
But still have the phi(t - kT) orthonormal. Is this possible? Can it be done? What is the largest bandwidth of phi(t) for which it can be done? All these questions are answered by what is called the Nyquist criterion. Was there a question? Yes: the way I define bandwidth for baseband signals is the largest positive frequency. For passband I'll define it differently, but bandwidth is always defined over positive frequencies only. I don't want to say I'm using bandwidth from -100 to 0; you can use negative frequencies in theory, but in practice you don't pay for them. It's very difficult to sell negative bandwidth to companies; they won't buy it. So we define bandwidth over positive frequencies only: for baseband it is W/2, and for passband it naturally becomes W. I want my bandwidth to be as high as possible. Once you push the bandwidth up, you'll see the requirements become more stringent, but it's still possible; there's no problem. We'll see how this can be done, and that's answered by the Nyquist criterion. Before that, let me do the setup a little more carefully, because soon enough we'll extend it to other cases and I want to be able to do that. The setup is quite important, so grasp it carefully. Now, instead of looking at one bit vector B, I will look at B0, B1, ..., B(L-1). I know I'm repeating this, but it's fairly important. You have nL bits, and they are mapped into a signal constellation. What size should the constellation have? 2^n: the constellation size is 2^n.
The map takes n bits into a constellation point; that's the only condition on my signal map. And it's a 2-D map: it puts out a complex number. Those complex numbers I'll collect into what I'll call a symbol sequence s(k), complex, for k between 0 and L - 1. This picture we'll use several times in this course, so I want you to get used to it. Then comes what I'll call a transmit filter. This pulse we've always thought of as a basis function before, and what I write down can still be thought of as one, but you'll see that in the general case, once you convolve with h(t), it doesn't make sense to think of this alone as a basis function. So to give it a general name, I'll call it the transmit filter and use the notation g(t). It might turn out to be the basis, or play a role in the basis, but in general it's just a transmit filter. What do I feed it? I'll treat the symbol sequence as an impulse train with spacing T: in continuous time, sum over k of s(k) delta(t - kT). Then I filter it. What do I get? My transmit signal. I need a notation for it: I'll call the entire bit vector B and index the signal by it, so x_B(t) = sum over k = 0 to L - 1 of s(k) g(t - kT). That is my transmitter in complex baseband. You might say that maybe you want g(t) to be complex, but typically people choose g(t) real; s(k) is complex, and that's good enough.
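The transmitter description above, an impulse train filtered by g(t), can be sketched directly; the rectangular g(t) and the symbol values here are assumptions purely for illustration:

```python
import numpy as np

fs, T = 100, 1.0
Ns = int(fs * T)
s = np.array([1+1j, -1+1j, 3-3j])           # complex symbols s(k) (assumed)

# Impulse train: sum_k s(k) delta(t - kT), on a discrete-time grid.
impulses = np.zeros(len(s) * Ns, dtype=complex)
impulses[::Ns] = s

g = np.ones(Ns)                             # toy real transmit filter g(t)
x = np.convolve(impulses, g)[:len(impulses)]  # x_B(t) = sum_k s(k) g(t - kT)
print(x[0], x[Ns], x[2*Ns])                 # symbol values reappear at t = kT
```

Because this toy g(t) has support exactly [0, T), the shifted copies don't overlap and each symbol occupies its own slot; a general g(t) with longer support would make the copies overlap, which is precisely where the orthonormality question comes in.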
Maybe in the future somebody will come up with a clever method where g(t) itself is complex, but so far it's not really necessary, so g(t) can be chosen real. Now notice one thing: I will not insist that the support of g(t) lies between 0 and T. Why? Because once I start insisting on that, I know my bandwidth is going to become much, much larger, and supporting that will be very difficult. So I'll allow g(t) to have general support; I won't insist on anything about the support. But I will definitely insist on the bandwidth of g(t): it should be within W/2, that is, bandwidth of g(t) less than or equal to W/2. The reason is that this signal goes through my channel, which has bandwidth W/2, and I don't want to exceed that. So that's one thing. The other thing is a word about implementation; I want to point that out too. In reality today, the easiest way to realize these transmit filters is with a very powerful processor. The whole thing can be built into a processor: it takes the bits as input, converts them into a complex number a + jb (just two real numbers; very easy), and then multiplies a by g(t) and b by g(t); that's what a + jb times g(t) means. And g(t) is heavily oversampled by the processor, giving samples of g(t) at very, very close intervals. You can afford to oversample because your processor works at tens of megahertz or more today, while g(t) is a baseband signal. So you oversample it and put out, say, an 8-bit version of each sample of g(t).
Then you have a D/A converter (in fact, many processors have one built in) which converts the 8-bit samples with a sample-and-hold. So it's a very processor-style implementation; you don't actually have to worry about realizing g(t) in analog. g(t) can now be an arbitrary shape: any arbitrary shape can be implemented at baseband easily with this processor-plus-D/A combination. I don't have to restrict myself to a specific form of g(t) that can be implemented in analog with filters; that would constrain me, and I won't be constrained that way. Any g(t), any waveform is fine with me, and for implementation I'll keep this processor-backed idea in mind. You don't even need a processor, actually: just store the samples in a RAM and read them out; it's not that difficult today. One D/A is pretty much enough. So don't worry if I give you a complicated expression for g(t) involving several terms; it can be done very easily today, almost like software, even though it's hardware. All right. So that's my x_B(t), and pretty much the only constraint is bandwidth. Of course, there are also power constraints; those are always there, but we'll think of g(t) as normalized. What's the advantage of normalizing? If the g(t - kT) are also orthonormal, then s(k) g(t - kT) has the same energy as s(k) itself, so the constellation energy maps directly to the transmitted energy. So we typically pick the norm of g(t) to be one; that's a good convention. All right, that's the way I'm going to think about this.
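A toy version of that processor-plus-D/A idea (every number here, the oversampling rate, the stored pulse shape, the 8-bit word length, is an assumption for illustration): store an oversampled g(t), scale it by the symbol component, and quantize each sample before the D/A:

```python
import numpy as np

fs = 1000                                   # oversampling rate (assumed)
t = np.arange(0, 1, 1 / fs)
g = np.sinc(2 * t - 1)                      # some stored pulse shape (assumed)

a = -0.7                                    # one real symbol component
samples = a * g                             # a * g(t), read out of memory

# 8-bit signed quantization before the D/A sample-and-hold: round each
# sample to the nearest multiple of 1/127.
q = np.round(samples * 127) / 127
err = np.max(np.abs(q - samples))
print(err)                                  # at most half a quantization step
```

The point is only that an arbitrary waveform reduces to a lookup table plus one D/A; the quantization error is bounded by half a step, 1/254 on this scale.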
Once these things are in place, my x(t) goes through h(t). What is the filter response? Flat between -W/2 and W/2. And I'll have to insist on linear phase, because I'm drawing only the magnitude response and saying it's flat. Linear phase only causes a delay, and I can adjust for a delay without any problems; but I can't have a weird phase response making the output look bad. So that has to hold. Once the phase is linear and the magnitude constant, what happens to y(t)? What comes out of the filter is essentially x_B(t), up to a delay, and I'll say we adjust for any delay, so we don't worry about it. So once the bandwidth condition is satisfied, the signal comes through the filter unharmed. And why is that crucial for the receiver? Because the receiver can then correlate with the orthonormal basis for x_B(t) itself, without worrying about the convolution with h(t). That's the crucial part. To this, noise gets added, and then you run your receiver correlation; we'll work this out as we go along. So one assumption you need in the design of g(t) is knowledge of W/2 and knowledge of the channel: you should know the channel is flat in your band. In practice, how would you know something like that? It's not easy to find out. What people do is sound the channel from the transmitter; the receiver uses a really low-rate communication link to tell the transmitter roughly what the channel looks like. From there you make a choice and decide what g(t) is. So some training can be done.
So these things are practical and can be done, and you don't need that much side information. And if you know your channel is fixed and not going to change much, you can measure it ahead of time and design for that specific h(t). But you have to know the magnitude response is flat and the phase response is linear to proceed. Linear phase is a typical assumption you can make; the magnitude response, though, has to be flat, and you should know that. Is that clear? Okay. So the next thing: once you know the received signal is simply x_B(t), that's good news — but what's the next step? In principle the next step is always the same, and there's no confusion about it: take all your possible x_B(t) waveforms, find an orthonormal basis, correlate y(t) with that basis, and from what you get build a detector from the joint pdf and proceed. But that can get complicated very quickly. Suppose I have n = 6 bits per symbol, which is 64-QAM, and I want to send a block of, say, 1000 symbols. Immediately everything goes out the window — you can't expect to run Gram-Schmidt over such a huge number of waveforms, and you can't come up with any rule for picking g(t) that way. So we'll again go back to the previous case and use it as our motivation. There, everything simplified because the shifts g(t − kT) formed an orthonormal set of signals, and that simplified everything. So now I'm going to ask: can I design g(t) so that g(t − kT) is an orthonormal set for k = 0 to L − 1? Is that possible? Once I have that, everything becomes dirt cheap — I only have to do correlations with g(t) and its shifts g(t − kT).
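To see why orthonormal shifts make everything "dirt cheap", here is a small noise-free sketch (pulse shape, symbol values, and rates are assumed for illustration): when the shifts don't overlap or are orthogonal, the receiver recovers every symbol with the same single correlation, repeated slot by slot — no Gram-Schmidt over the whole block.

```python
import numpy as np

# Symbol-by-symbol detection when g(t - kT) are orthonormal: one correlator,
# reused every symbol interval. Noise-free, discrete-time illustration.
sps = 100                                  # samples per symbol interval T
g = np.ones(sps) / np.sqrt(sps)            # unit-energy rectangular pulse
symbols = np.array([3.0, -1.0, 1.0, -3.0]) # e.g. one axis of a QAM constellation
y = np.concatenate([s * g for s in symbols])   # received signal sum_k s_k g(t-kT)
# correlate each slot with the SAME pulse g
est = [np.dot(y[k * sps:(k + 1) * sps], g) for k in range(len(symbols))]
assert np.allclose(est, symbols)
```

Each inner product picks off one s_k exactly because ⟨g(t − kT), g(t − lT)⟩ = δ_{kl}; with non-orthogonal shifts, every estimate would be contaminated by the neighbouring symbols — which is precisely ISI.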
In fact, my circuits can be repeated, because I'm doing the same correlation over and over again. I'll show you how to make that happen — it's a little more involved here, but it can be done. All those simplifications go through, everything becomes easy, and you can almost do symbol-by-symbol detection without worrying about ISI at all. Is there any question? [Student: what decides T?] I'm going to come to it — this orthonormality condition will determine T; we haven't gotten there yet. Right now I've just picked a symbol time: capital T determines how often you put out a new symbol, and everything else follows. And how do you determine capital T? It will turn out to be fixed by this orthonormality condition — there will be a certain minimum T that you can possibly pick. Any other question? Okay, so what's the condition? From here it's just simple signals-and-systems analysis. What we want is ⟨g(t − kT), g(t − lT)⟩ = 0 if k ≠ l, and 1 if k = l. Now I want to vary over all possible k and all possible l, but by shift invariance that's the same as requiring ⟨g(t), g(t − lT)⟩ = 0 for l = 1 to L − 1. Is that okay? Those are the only relative shifts that really matter to me, and you can show the two conditions are the same. In fact, instead of restricting ourselves to l up to L − 1 — which, you'll see, will not give us readily easy answers — we'll simply require it for all l ≠ 0, going on and on in both directions. It seems like a more stringent condition, but it will give you a nice clean closed-form answer, which is much better than looking at just 1 to L − 1.
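The reduction from pairs (k, l) to a single shift l is worth checking numerically. This sketch tests ⟨g(t), g(t − lT)⟩ = δ_l by discretized integration, using an assumed example pulse — a half-sine supported on one symbol interval, whose shifts are orthonormal simply because their supports don't overlap:

```python
import numpy as np

# Numeric check of <g(t), g(t - lT)> = delta_l for a unit-energy half-sine
# pulse g(t) = sqrt(2/T) sin(pi t / T) on [0, T) (an assumed example shape).
T, fs = 1.0, 1000
dt = 1.0 / fs
t = np.arange(0, 8 * T, dt)              # grid long enough for several shifts

def g(t):
    return np.where((t >= 0) & (t < T),
                    np.sqrt(2 / T) * np.sin(np.pi * t / T), 0.0)

def inner(l):
    return np.sum(g(t) * g(t - l * T)) * dt   # Riemann sum for the integral

assert abs(inner(0) - 1.0) < 1e-3             # unit energy
assert all(abs(inner(l)) < 1e-6 for l in (1, 2, 3))  # orthogonal shifts
```

Pulses confined to one interval satisfy the condition trivially; the interesting question, taken up next, is what the condition demands of pulses that overlap their neighbours.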
Another thing: this capital L may not be something you fix ahead of time. Sometimes we might want to send 100 symbols, sometimes 1000, maybe 10,000 continuously, and you want to be able to adjust to all of that without re-examining the ISI and orthonormality condition. So if you require it for all l ≠ 0, it simplifies both the analysis and the practice — you probably want to pick that. Now let's write this out in integral form: ∫_{−∞}^{∞} g(t) g*(t − lT) dt = 0 for all l ≠ 0. You know a correlation can also be written as a convolution with the matched filter: this works out to g(t) ⊛ g*(−t) — g will be real for us, but let's keep the conjugate to be complete — evaluated at lT, which must be 0 for l ≠ 0. So it looks like this signal g(t) ⊛ g*(−t) is important to me. Suppose I define c(t) = g(t) ⊛ g*(−t). What should I get when I sample it? If I define c[l] = c(lT), it should be δ_l. Right. Now, what is the Fourier transform of c(t)? C(f) = |G(f)|². And what is the discrete-time Fourier transform of the samples c[l]? By the aliasing relation, it is (1/T) Σ_{m=−∞}^{∞} C(f − m/T) — the DTFT written in the variable e^{j2πfT}. And what does this have to equal? The DTFT of δ_l, which is 1 for all f. That is it — that is your Nyquist criterion: your g(t) must be such that |G(f)|², when aliased with period 1/T, gives you a flat response.
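The matched-filter view can be made concrete: the sketch below (again using the unit-energy rectangular pulse as an assumed example) discretizes c(t) = g(t) ⊛ g*(−t) as an autocorrelation and checks that its samples at multiples of T come out as δ_l:

```python
import numpy as np

# Sample c(t) = g(t) * g*(-t) at multiples of T and check c(lT) = delta_l
# for the unit-energy rectangular pulse (assumed example).
T, fs = 1.0, 1000
dt = 1.0 / fs
sps = int(T * fs)                          # samples per symbol interval
g = np.zeros(4 * sps)
g[:sps] = 1.0 / np.sqrt(T)                 # rect pulse embedded in zeros
c = np.correlate(g, g, mode='full') * dt   # discretized g(t) * g*(-t)
mid = len(g) - 1                           # index corresponding to c(0)
# c(0) is the pulse energy (= 1); the triangle c(t) hits zero at t = +/- T
assert abs(c[mid] - 1.0) < 1e-6
assert all(abs(c[mid + l * sps]) < 1e-9 for l in (-2, -1, 1, 2))
```

For the rectangular pulse, c(t) is a triangle of width 2T, so it vanishes exactly at every nonzero multiple of T — the time-domain face of the flat-aliased-spectrum criterion derived above.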
So remember what happens then: g(t), g(t − T), g(t − 2T), and so on will all be orthonormal to each other — and it's an if-and-only-if; all these conditions have to be satisfied. Is that okay? Now, the bookkeeping with the 1/T and the 1 is relevant if you want to be strictly mathematically accurate, but up to constants all you need is: |G(f)|² aliased at 1/T should be flat. Then you can normalize and adjust to get everything else you want. This is what I meant earlier when I said a frequency-domain type constraint — it's a different way of looking at it. Is everybody convinced they understand what this means? Okay. So the first thing you should check, to get used to this, is that the φ(t) we had before satisfies this condition. What was that φ(t)? It was 1/√T between 0 and T and 0 elsewhere. That one is certainly orthonormal to its shifts φ(t − T), φ(t − 2T), and so on. In the frequency domain, can you convince yourself this is true? What is the Fourier transform of a rectangular pulse? A sinc. Its magnitude squared? Sinc squared. And when that is aliased, do you get 1? There you go — that's your answer, and if you didn't follow it, go and work it out. It's a good lesson in getting used to DTFTs, Fourier transforms, and aliasing. So it's easy to check that it works at least for that case. But that is not the only case — many other situations are possible. Now, there was a question about the choice of T, and this criterion tells us something about it. Notice what happens when G(f) vanishes outside [−W/2, W/2]. What is my G(f) going to look like, assuming I have a real signal?
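That aliasing claim for the rectangular pulse is a good first exercise, and it can also be checked numerically. For φ(t) = 1/√T on [0, T], one finds |G(f)|² = T sinc²(fT), so the Nyquist check (1/T) Σ_m |G(f − m/T)|² = 1 reduces to the identity Σ_m sinc²(fT − m) = 1 for every f. This sketch truncates that sum and verifies it at a few (assumed) frequencies:

```python
import numpy as np

# Check sum over m of sinc^2(fT - m) = 1, which is the Nyquist criterion
# for the rectangular pulse phi(t) = 1/sqrt(T) on [0, T].
M = 20000                         # truncation of the (infinite) alias sum
m = np.arange(-M, M + 1)
for fT in (0.0, 0.1, 0.25, 0.37, 0.5):
    s = np.sum(np.sinc(fT - m) ** 2)   # np.sinc(x) = sin(pi x)/(pi x)
    assert abs(s - 1.0) < 1e-3         # tail of the sum decays like 1/M
```

The shifted sinc² copies overlap heavily, yet their sum is exactly flat — aliasing to a constant does not require the individual copies to butt up neatly against each other.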
It is going to be something symmetric about 0, and it vanishes outside [−W/2, W/2]. What happens when I take |G(f)|²? That also vanishes outside [−W/2, W/2]. And what am I requiring? That when I shift it by multiples of 1/T and add, the result becomes flat. So what happens if I choose 1/T to be greater than W? The shifted copies of |G(f)|² leave gaps between them, so there is no way I can satisfy the no-ISI condition — the Nyquist orthogonality criterion cannot be satisfied. So any g(t) band-limited to W/2 cannot satisfy the Nyquist criterion if 1/T > W. This automatically constrains how fast your symbol rate can be: symbol rate 1/T ≤ W for a baseband bandwidth of W/2. Remember this is baseband, so you should be careful. So that is your way of fixing T. Now, what is the next question to ask? Given symbol rate ≤ W, what would you desire? Equal to W, right? You want the fastest symbol rate that can possibly happen. So can you have symbol rate equal to W? If the symbol rate is equal to W, what is the only possible |G(f)|²? Flat between −W/2 and W/2. So there you go — that is how you keep constraining these things, and then you try to relax that and see if you can get something better. We will do that in the next class.
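At the extreme rate 1/T = W, a flat |G(f)|² on [−W/2, W/2] corresponds to g(t) = √W sinc(Wt). The following sketch (rates and truncation lengths are assumed; the slowly decaying sinc tails make the check only approximate) verifies numerically that its T-shifts are orthonormal:

```python
import numpy as np

# At symbol rate 1/T = W, the flat-spectrum pulse is g(t) = sqrt(W) sinc(W t).
# Check (approximately) that its shifts by T are orthonormal.
W = 1.0
T = 1.0 / W
dt = T / 8.0                              # simulation grid spacing
t = np.arange(-100 * T, 100 * T, dt)      # truncate the slowly decaying tails

def g(t):
    return np.sqrt(W) * np.sinc(W * t)    # np.sinc(x) = sin(pi x)/(pi x)

ips = [np.sum(g(t) * g(t - l * T)) * dt for l in range(3)]
assert abs(ips[0] - 1.0) < 1e-2           # unit energy (up to truncation)
assert all(abs(v) < 1e-2 for v in ips[1:])  # orthogonal shifts
```

The sinc pulse achieves the maximum rate but decays only like 1/t, which is one reason the next step in the course is to relax the "exactly flat" requirement and look for better-behaved pulses.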