So, this is lecture 2. In the last class we saw a very high-level picture of what a digital communication system is supposed to do, along with definitions of the major parameters, so let me remind you of that. The problem we have at hand I will describe with a block diagram, and you will see that this kind of block-diagram description is very useful for imagining and visualizing such systems. You have a transmitter and a receiver: the transmitter gets, say, a vector of bits b and is supposed to convert it into a signal x(t), which goes through a channel; the output of the channel is y(t), which is received by the receiver, and the receiver has to produce an estimate of the bits that were transmitted. This is the block diagram we saw before, and it essentially completely defines the problem of digital communication. So, what do we have to do? We have to design two functions, if you will: you can think of Tx and Rx as two functions. x(t) is the transmitter function applied to the vector b, and b-hat is whatever the receiver does to y(t) to produce the estimate. Given b, and given information about the channel (the channel is assumed known to you), you have to design Tx and Rx so that you achieve communication at a very low error rate: you want b-hat to equal b without any serious problem. That is what this picture captures; hopefully it is clear. We also saw the main parameters we will have to play with and be careful about. The first is the power of x(t). There are ways to write it down mathematically: typically, when you have a voltage or a current, the power is the square of that quantity. If you think of x(t) as a voltage or current in some circuit, the power would be the square of that quantity.
That comes from the relationships V²/R and I²R: the square of the quantity gives the instantaneous power, so x²(t) is the instantaneous power. Typically we will worry about the average power of x(t), not the instantaneous power; in some systems the instantaneous power is also important, but usually it is the average. How do you compute average power? You integrate x²(t) and divide by the total time; that gives you a way of calculating the power of the transmit signal x(t). Once again, remember this is the average, not the instantaneous, power; call it Px. Usually there will be an upper bound on this: you will be constrained to transmit at an average power less than a certain maximum. That is the first thing. The next thing is the bandwidth, or spectrum, of x(t). This will also be constrained, because of the channel and the other factors I talked about. So these two quantities are constrained, and this is part of what I mean by knowing the channel: you know what your constraints are. Given these two constraints, you have to design the functions Tx and Rx. And what is your objective? To get a reasonable bit rate, or, to be more specific, some kind of trade-off between the bit rate and the next quantity of interest, which is the probability of error, the error rate of your system. If you think of the vector b as b1, b2, and so on, and b-hat as b-hat-1, b-hat-2, and so on, then the probability of error is the probability that b-hat-i is not equal to b-i. One assumes this is identically distributed for each bi.
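As a quick aside (not part of the lecture), the average-power calculation just described is easy to sketch numerically; the sampling step and the 5 Hz test tone below are my own choices for illustration.

```python
import math

def average_power(samples, dt):
    """Approximate Px = (1/T) * integral of x^2(t) dt for a sampled real signal."""
    T = len(samples) * dt
    return sum(v * v for v in samples) * dt / T   # i.e. the mean of x^2

# A unit-amplitude sinusoid has average power 1/2 over whole cycles.
dt = 1e-4
x = [math.cos(2 * math.pi * 5 * k * dt) for k in range(10000)]  # 5 Hz, 1 second
print(average_power(x, dt))   # ≈ 0.5
```

Note that the answer is the mean of x², which is why the average over an integer number of cycles comes out to exactly one half for a unit cosine.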
So, all bits are going to have similar behavior, and you get a certain probability of error. What would you desire for the bit rate? As fast a bit rate as possible. And for the probability of error? As low as possible. Those two things are related, and there is going to be a trade-off between them; in fact, there are trade-offs everywhere here, so all four quantities (power, bandwidth, bit rate, error probability) are closely interrelated. There is also another quantity which I have not written down: the receiver noise power, which I will call Pn (and for the bandwidth of x(t), let me put a symbol on it, W). Noise power is also important: we saw it is another quantity that affects the probability of error, and in fact it is related to the bandwidth as well. So you will see that all these quantities are interrelated, and our goal in this course, roughly, is to study, given a set of constraints on these quantities, how you go about designing Tx and Rx, and what the interrelationships are. One simplification we made, to get rid of all the physical properties of the channel, is to come up with a mathematical model for the channel itself. The model I wrote down last class, which we will follow pretty much throughout the course, the most general model, is the linear Gaussian model. Let me write it right here: y(t) = x(t) * h(t) + n(t), that is, x(t) convolved with an impulse response h(t), plus noise. This is my linear Gaussian model, and, like I said, it is a very, very powerful model.
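A minimal discrete-time sketch of this linear Gaussian model may help make it concrete; the tap values and noise level below are my own toy choices, not from the lecture.

```python
import random

def channel(x, h, noise_std):
    """y = x * h + n: convolve the input with the impulse response, add Gaussian noise."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return [v + random.gauss(0.0, noise_std) for v in y]

random.seed(1)
x = [1.0, -1.0, 1.0, 1.0]        # a short bipolar "bit" sequence
h = [1.0, 0.5]                   # toy two-tap impulse response
y = channel(x, h, noise_std=0.1)
print(len(y))                    # 5 output samples (4 + 2 - 1)
```

Setting noise_std to zero recovers the clean convolution, which is a handy sanity check on the model.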
You design for such a model and it is directly translatable into several actual physical systems: wireless, wireline, optical, you name it. So it is a very powerful model. Then we made a few simplifying assumptions to bring this linear Gaussian model into a slightly simpler form to start off with, which was the ideal band-limited model. What is ideal about the channel? H(f), the frequency response corresponding to h(t), is 1 within the bandwidth of interest, say from -W to W; it is ideal there, but band-limited. With that understanding we could move to y(t) = x(t) + n(t), the ideal band-limited additive Gaussian noise model, where n(t) is Gaussian. It is quite surprising that with such an abstract model a lot of things are known; for instance, the ideal trade-off between all these quantities is known. It was found long back, in 1949, by Claude Shannon, in the first papers to ever appear in this area. He showed this very surprising and nice result: suppose you want the probability of error to be arbitrarily low, meaning as low as you want; then your rate will have to satisfy R <= W log2(1 + Px/Pn) bits per second. Quite a surprising and powerful result to have come up so early in the theory. So, given that you want the probability of error driven arbitrarily low, which is what you want, your rate will have to be less than this for this model.
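The formula is easy to play with numerically. Here is a small sketch; the bandwidth and SNR numbers below are made-up examples, not values from the lecture.

```python
import math

def shannon_rate(W, Px, Pn):
    """Upper bound on the rate: R <= W * log2(1 + Px/Pn), in bits per second."""
    return W * math.log2(1 + Px / Pn)

# Made-up numbers: 1 MHz of bandwidth, Px/Pn = 1000 (i.e. 30 dB)
r = shannon_rate(1e6, 1000.0, 1.0)
print(round(r / 1e6, 2), "Mbit/s")          # about 9.97 Mbit/s

# Only the ratio Px/Pn matters: scaling both powers leaves R unchanged.
print(shannon_rate(1e6, 2000.0, 2.0) == r)  # True
```

The second check previews the point made next: the bound depends on Px and Pn only through their ratio.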
So, it is a surprising and nice result, and such results have given shape to most of communication theory as it happens today. I just wanted to write it down so you know the kind of powerful results you can derive with simple models like this. So that is the formula, and you will notice a couple of things. First, the rate seems to grow linearly with the bandwidth W, which seems good. Second, it depends only on the ratio Px/Pn: the average power of x(t) divided by the average noise power. It does not depend on Px or Pn separately, only on the ratio. (You will see that Pn actually depends on W as well; that is something I have not shown here.) So that ratio is important; we will define it properly later on and proceed. This is roughly the system we will be dealing with pretty much throughout this course: our goal will be to change some of these assumptions, design different types of transmitters, define suitable receivers for them, and define suitable quantities to analyze the probability of error, all these things.
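To see why the dependence of Pn on W matters, here is a hedged sketch. I am assuming (my assumption, anticipating later material, not something stated in this lecture) that the noise power is flat over the band, Pn = N0 * W; under that assumption, widening W does not increase the rate linearly forever.

```python
import math

N0, Px = 1e-9, 1e-3     # made-up noise density (W/Hz) and transmit power (W)

def rate(W):
    """R = W * log2(1 + Px / (N0 * W)): here Pn = N0 * W grows with W."""
    return W * math.log2(1 + Px / (N0 * W))

for W in (1e5, 1e6, 1e7, 1e8):
    print(f"W = {W:9.0f} Hz  ->  R = {rate(W) / 1e6:8.3f} Mbit/s")

# As W grows, R saturates near (Px / N0) * log2(e) instead of growing linearly.
print(round(rate(1e9) / ((Px / N0) * math.log2(math.e)), 2))  # close to 1
```

So the "rate grows linearly in W" reading only holds while Pn is held fixed; once the noise scales with the bandwidth, the gain from extra bandwidth flattens out.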
So, we will keep coming back to this picture over and over again. These block diagrams are very powerful tools: we start with a big block diagram, split it up into several smaller blocks, and then design each block separately. It ends up being a very powerful approach. Any questions or comments on any of this? It is a provable, precise theorem; maybe we will see a proof of it towards the end if we have time. A couple of other things, just to give you the big picture in case you are seriously thinking about becoming a communications engineer or getting into research in this area: here is how the different fields, or the courses you have taken so far, interact. I am going to put digital communications at the center, because you are doing that course right now (maybe it is not at the center, depending on where you stand). There are two related areas which feed into it and guide the design of digital communication systems. One is information theory, which is what enables you to get results like R <= W log2(1 + SNR); those kinds of results come from information theory, and it is closely related to error-control coding and coding theory. So information theory and coding theory feed into the fundamental design of communication systems: how you go about it, what the right metric to look at is, and so on. Once you know the high-level design, it has to be translated into a lot of algorithms, and those algorithms are typically done in signal processing. So you have signal processing feeding in as well; in particular, today it is almost entirely digital signal processing. There is some analog part at the very front end, but everything
else is completely digital. Of course, there are parts you still cannot build without RF and analog design, particularly at high frequencies, so the RF/analog part feeds into the very front end; and for very fast, compact implementations you have digital VLSI feeding in too. All these things play together: the areas on the right-hand side are tools for translating the design you have into algorithms and implementations, and the areas on the left are fundamental tools for designing the system itself. There is also an area called communication theory, which puts all of these together; you can imagine communication theory at the intersection of all three: how do you build communication systems? Depending on where you want to be in the scheme of things, you might want to specialize in one of these areas and know the others just well enough. Of course, these are not strict divisions: people who do analog design claim to know a lot of digital communication too these days, and there is all kinds of give and take. All right, that is pretty much it as far as the introduction is concerned. The rest of this lecture we are going to spend on the preliminaries you need to understand most of this course. These are all basics which you should already know and be very familiar with at this point; I am going to assume that and go through them at a fast pace. Like I said, it is only to fix notation, nothing else. Most of it I will state and move through quickly, and hopefully it is all very familiar; you may not even
want to write these things down, because I am not really trying to make you understand them afresh; I am going to state them quickly. In case something is not familiar, something you have never seen before, stop me at that point and maybe we will go into some detail. The first thing is complex vector spaces. The notation I will use for a complex vector space of dimension n is C^n (the blackboard C, the C with a stroke through the middle). A vector in C^n I will denote z, which is a column vector, but for compactness I will write it as z = [z1, z2, ..., zn] transpose; the transpose makes it a column vector. This is an arbitrary vector in C^n. You can think of your discrete-time signals, for instance: of course, a sequence can have infinite length, but in practice you always deal with finite lengths, so your discrete-time signals will belong to some complex vector space. What is each zi? A complex number, which hopefully you are very familiar with: an ordered pair of real numbers. The root of -1 I will denote j, so a complex number is x + jy, and you can also write it in polar form as r e^{j theta}. All of this is standard, and you know how to go from one form to the other. The inner product defined on this vector space is <z1, z2> = sum over i from 1 to n of z1i times z2i*, where star is the complex conjugate. It has a more compact notation, z2^H z1; the Hermitian z2^H means you take the transpose and then conjugate each element. And you have the norm of the vector z, the square root of the inner product of z with itself, which is the sum of the magnitude-squares of the elements. So that is the
complex vector space; hopefully that is very familiar. That was for discrete-time signals. For continuous-time signals we need a vector space whose elements are signals, or functions of one variable; usually that variable is time for you, though you can also keep it as frequency, and you know how to go from one to the other as well. Typically, most of the signals we see live in the function space called L2. L2 is a special function space: it collects all finite-energy signals together, and finite energy covers pretty much all the signals you have. The definition is very simple: L2 is the set of all x(t) such that the integral from minus infinity to infinity of |x(t)|^2 dt is finite, i.e., it does not go to infinity. Here x(t) is complex-valued, so it can have a real part as well as an imaginary part, and the absolute value is the magnitude of the complex number. Like I said, L2 collects together all finite-energy signals: for instance, signals that last only a finite time and are bounded belong to L2, and likewise signals that occupy only a finite band of frequencies and are bounded in frequency also belong to L2. L2 is a very nice, well-behaved space; lots of things happen very comfortably there, and you do not have to worry about weird pathologies. In L2 you also have an inner product: <x(t), y(t)> is the integral from minus infinity to infinity of x(t) y*(t) dt. This is a valid inner product, it satisfies all the properties you need, and you also get a norm from it, the so-called 2-norm, which will work out to: if you take the inner product of x(t) with
x(t), it works out to the integral of |x(t)|^2 dt. So all finite-2-norm signals belong to L2; that is L2 for you, and it has got a nice inner product as well. Like I said, L2 contains the continuous-time signals we will be dealing with in this course. Sometimes it is also useful to have signals which are not finite-energy: infinite energy, but finite power. You deal with those frequently; what is a very good example of a finite-power, infinite-energy signal that you use all the time? Sinusoids. You use sinusoids all the time, and it is a good abstraction to have a theory for them; we will say they belong to the function space L-infinity. It contains many more things than sinusoids, but I will simply define L-infinity because it is the one that is nicely defined in theory: it is the space of bounded functions. The function itself is bounded, it does not blow up; a sinusoid, for instance, stays within bounds, so sinusoids live here. This is not such a nice space: there is no inner product; you can define a norm, but it does not come from an inner product, so a lot of strange things can happen in this space. We will not worry too much about that; for us, the main use is that it contains sinusoids: for instance e^{j omega t}, which is an indispensable function in any signals course, and sin(omega t), cos(omega t), and all that. So these are the two spaces our signals will live in; we will not take anything that is not in one of them. Then what else? Once you have an inner product, the inner product obeys
what is called the Cauchy-Schwarz inequality, which is quite useful; we will use it several times in our proofs. How does Cauchy-Schwarz work? If you have two vectors, say s and r, in an inner product space (the inner product takes two vectors to a complex number; that is how I am defining my inner product), then the absolute value of the inner product between s and r is at most the product of their norms: |<s, r>| <= ||s|| ||r||. When will you have equality? Equality is a strong condition: the two vectors have to differ by a complex constant. If s = a r for some complex a, then you have equality in Cauchy-Schwarz, and there is no equality otherwise. This is something we will use quite often to arrive at results in proofs. The next thing (I will drop the numbering; I have been numbering, but I think I will lose it pretty soon) is convolution of functions, which is quite important. If you have two functions s(t) and r(t), the convolution is another function of time, properly denoted (s * r)(t); its value at t is the integral from minus infinity to infinity of s(u) r(t - u) du. That is the very standard definition: the value taken by the convolution at t. But usually there is an abuse of notation which helps a lot in terms of ease of working with convolution: you typically write q(t) = s(t) * r(t). You should know that this is tricky notation and can sometimes lead to all kinds of problems, but it is an abuse of notation we will use in this course. The next thing I want to mention is delta functions. I think enough has been said about delta
functions as far as we are concerned. One property is very useful: the sifting property, which says the integral of s(t) delta(t - t0) dt is defined to be s(t0). The main use of delta functions for us is to define Fourier transforms for L-infinity functions: for functions in L2 the Fourier transform is properly defined and there is no problem, but once you go out to L-infinity (sinusoids, say) it is not defined properly, so you use delta functions to define the Fourier transform there and keep the whole thing consistent with the definitions in L2. The other thing I will talk about often is the linear filter; we will use these blocks quite often. A linear filter is nothing but an LTI system, one which takes, in this case, a continuous-time input x(t) and is fully characterized by an impulse response h(t); what it produces at the other side is x(t) convolved with h(t). That is what happens in a linear filter, and we will use this quite often. If any term I used seemed strange, that is all right: if you have not seen L2 and L-infinity very formally, there is nothing to be scared about; one has an inner product, the other does not, and that is the main thing you should know. All right, the next item, a small aside which will be important for us, is computing an inner product by filtering and sampling. Maybe you have not seen this very formally; it is quite important. How did I define the inner product for two functions? If I have two functions x(t) and s(t), the inner product between them is the integral from minus infinity to
infinity of x(u) s*(u) du. That is how I defined my inner product, and if you stare at it closely, it looks quite similar to the convolution integral: if you set t = 0 in the convolution it is almost the same, and the only adjustment you have to make is to take the filter's impulse response to be s*(-u). You see, a linear filter convolves whatever input it gets with its impulse response, so if you select a suitable impulse response based on s(t), you can achieve the inner product calculation through filtering and sampling, sampling at t = 0. That is the notion of computing an inner product, and the filter is called the matched filter, a filter matched to s(t). How do you do it? You take x(t) and filter it with s*(-t); that is the impulse response matched to s(t). It turns out that if you sample the output at t = 0, you end up getting the inner product between x(t) and s(t), which will be some complex number, some a + jb. It is easy to prove this, not too difficult, but I am not going to write the detailed proof; try to write it down if you are interested. So a filter whose impulse response equals s*(-t) is called a matched filter, matched to s(t). Once in a while I will use a separate notation for it, just to ease the confusion: s*(-t) has a lot going on, a conjugation and a time reversal, and when it is used inside bigger systems people get confused by the notation. So I will typically also use the notation s_mf(t), defined as s-star of
minus t; the subscript mf stands for matched filter. In several receivers you will have to do correlations, that is, inner product computations, and they are typically done using matched filtering; it is a very common way of doing this. All right, a couple of things I want to point out about this, because if you are seeing it for the first time, it requires some effort to translate into real life. I said all my signals are to be imagined as voltages or currents in a circuit; what will I do with a complex-valued x(t)? Can you have complex current, complex voltage? What you do is take two signals, that is all. When I say a complex x(t), I mean I have two real signals, x_r(t), the real part, and x_i(t), the imaginary part, and I am looking at both of them together as an ordered pair; that makes a complex signal. So this x(t), when it is complex, is actually two wires carrying two different signals, x_r(t) + j x_i(t). If I have to implement the matched filter, I have to do a fairly complicated implementation; I will show you how it works. Similarly, s_mf(t) will also have a real part and an imaginary part, which I will simply call s1(t) + j s2(t). With such a picture, this entire filtering, x(t) convolved with s_mf(t), is complex-valued, and you will have to do four different real convolutions to execute it. You will have two wires coming in, one carrying x_r(t) and the other carrying x_i(t), and both of them get filtered: I will draw four different filters (maybe you can do it smartly with just two), x_r(t) through s1(t) and s2(t), and x_i(t) through s1(t) and s2(t). And then how do
you finish the convolution? One pair has to be subtracted: x_r convolved with s1, minus x_i convolved with s2, gives the real part of what goes out, y_r(t). And how will you get y_i(t)? You take the other two, x_r convolved with s2 and x_i convolved with s1, and add them. So it is not hard to imagine an actual LTI system which does all this: four filters, then addition and subtraction, all of which can be done in analog, and definitely in digital, without any problem. This whole box is what that one single small box I draw with s*(-t) represents. That is also how we think of complex signals: two wires carrying two real signals make one complex signal. It is just mathematically easier to write down a single convolution with complex-valued signals than four different convolutions with additions and subtractions, keeping everything real. And when you sample, of course, you have to sample both outputs to get the complex sample. Any questions on this? Okay. We are slowly going to move towards spectrum and the Fourier transform and all that, but before we go there, we will see a few popular signals and give them names. The first one is the sinc. What is the sinc?
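Before defining it, here is a small discrete-time sketch (toy sequences of my own choosing) tying together the two points just made about the matched filter: first, that filtering x with s_mf[n] = conj(s[-n]) and sampling at "t = 0" yields the inner product of x with s; and second, that the complex convolution can be carried out as four real convolutions, with a subtraction for the real part and an addition for the imaginary part.

```python
def conv(a, b):
    """Full linear convolution of two (possibly complex) sequences."""
    y = [0j] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            y[i + j] += ai * bj
    return y

x = [1 + 1j, 2 + 0j, 0.5 - 1j, -1j]
s = [0.5 + 0j, 1 - 1j, 2j, 1 + 0j]

# (1) Matched filter: time-reverse and conjugate s, convolve, then sample.
s_mf = [v.conjugate() for v in reversed(s)]
y = conv(x, s_mf)
sample = y[len(s) - 1]                     # "t = 0" lands at index N-1 here
direct = sum(a * b.conjugate() for a, b in zip(x, s))
print(abs(sample - direct) < 1e-12)        # True

# (2) The same filtering done with four real convolutions.
xr, xi = [v.real for v in x], [v.imag for v in x]
s1, s2 = [v.real for v in s_mf], [v.imag for v in s_mf]
yr = [a.real - b.real for a, b in zip(conv(xr, s1), conv(xi, s2))]   # subtract
yi = [a.real + b.real for a, b in zip(conv(xr, s2), conv(xi, s1))]   # add
print(all(abs(complex(r, m) - c) < 1e-12
          for r, m, c in zip(yr, yi, y)))  # True
```

With that aside done, back to defining the standard signals.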
The sinc in this class is going to be defined like this: sinc(t) = sin(pi t)/(pi t) for t not equal to zero, and at t = 0 you take the continuous extension to get sinc(0) = 1. That is my definition of sinc. Then I will also define rect: rect_[a,b](t) is 1 for t between a and b, and 0 outside. Next we have the unit step, and I think this is pretty much all we will need; if we need more, I will introduce it wherever it comes up. You know what the unit step is: u(t) is defined as 1 for t >= 0, and 0 elsewhere. So those are the standard functions. The sinc might be slightly different from what you are used to; some of you may have seen sin(t)/t. It is not a big difference, the same shape, but some of the transform properties change, so if you are used to sin(t)/t you might want to read up a little and make sure you know how things change for sin(pi t)/(pi t). All right, the next thing we will define is the Fourier transform; this is the continuous-time Fourier transform, and there is also a discrete-time version which we will see a little later. You start with x(t), a possibly complex-valued signal which we call the time-domain signal because t is the free variable. I will typically denote the Fourier transform by the capital letter of the signal: X(f) is the integral from minus infinity to infinity of x(t) e^{-j 2 pi f t} dt. I will use f, not omega, which might change the way you are used to writing some formulas. That is the definition of the Fourier transform, and I am sure you have read enough about it to know its value; it is quite useful. It also has an inverse relation, valid for well-behaved functions: x(t) is the integral from minus infinity to infinity of X(f) e^{j 2 pi f t} df. If you are used to
omega, you get a 1/(2 pi) in this inverse relation; with f, you do not. And if x(t) is in L2, the Fourier transform is very well defined, it converges, everything works beautifully, and X(f) will also be an L2 function in the variable f. In fact, the transform preserves inner products: the inner product between x1(t) and x2(t) is the same as the inner product between X1(f) and X2(f). So it is a very nice transformation from one L2 space to another. But very often we will also use L-infinity functions as x(t) and expect a Fourier transform, and that is defined using delta functions; there are all kinds of definitions for that which I am not going to go into, since I am assuming you are familiar with them. So that is the first quick property: if x(t) is in L2, it is very well behaved. You can first show that X(f) is also in L2 (you should note this L2 is in the variable f rather than t; both are complex-valued function spaces, so there is no big difference), and then show that the inner product of x1(t) and x2(t) is the same as that of X1(f) and X2(f). In particular, if you let x1 = x2, you get the familiar Parseval relation, which says the 2-norm of x(t) is the same as the 2-norm of X(f). All of that is valid in L2; for L-infinity these relationships are more delicate, and we will not worry about them except to say you will need deltas and it is all consistent. Now, a quick roundup of the properties of the Fourier transform that we will use. The first one is a
very standard Fourier transform pair, which is used for deriving pretty much every other Fourier transform pair that you want. One more piece of notation here: if we write two arrows between two functions and put FT on them, it means they form a transform pair; very often I will omit the FT when it is clear from context. So the basic pair is delta(t) paired with 1: the delta function transforms to the constant one for all f. Clearly the constant one is not an L2 function, you cannot integrate it, so that is exactly where the delta-function extension is needed. This pair, along with some of the properties I am going to state and a few additional ones I will not state, will give you pretty much every other Fourier transform pair you want. This is good enough, so you should remember it by heart; everything else you can derive. The second property: if x(t) is a Fourier transform pair with X(f), then a whole bunch of other pairs can be obtained immediately. Remember that everything is complex-valued here, so conjugation and such operations have a meaning. The first is what is called duality: you can show that X(t), the capital-X function with t as the variable instead of f, is a Fourier transform pair with x(-f), the small-x function frequency-reversed. Then, if you conjugate in time, you get the mirror-image conjugate in frequency: x*(t) pairs with X*(-f). And if you apply the matched-filter transformation to x, that is, form x*(-t), which is the matched-filter response for x(t), then you get X*(f). Remember this, it is very important: the filter matched to x has a frequency response which is the conjugate of the original response. When I say conjugate, the real part remains the same and the imaginary part
gets negated. All right, if we know that x(t) is real-valued, then it turns out there is a nice symmetry. What is x*(t) if x(t) is real? It is equal to x(t) itself, so X(f) must be the same as X*(-f). What this means is that the positive frequencies are enough to completely describe the spectrum; the negative side can be obtained from the positive side by conjugation and frequency reversal. This translates into two conditions on the real and imaginary parts of X(f): Re X(f) = Re X(-f), which is even symmetry for the real part, and odd symmetry for the imaginary part, Im X(f) = -Im X(-f). This is called conjugate symmetry. There are quite a few other properties; I will just quickly list them as we go along. x1(t) convolved with x2(t) is a Fourier transform pair with the product X1(f) X2(f); used along with duality, this also gives you the pair for the product: x1(t) x2(t) pairs with X1(f) convolved with X2(f). The next properties, which will be very useful for us, are delay and multiplication by a complex exponential. If you delay a time-domain signal by t0, you get multiplication by e^(-j 2 pi f t0) in the frequency domain. What is more important to us is the other direction: x(t) e^(j 2 pi f0 t) pairs with X(f - f0). This is extremely useful in building systems, particularly communication systems, because as I said, the constraint on x(t) will often be that it must live in a particular frequency band, and that frequency band may not be amenable to the small circuit board you might want to build at the transmitter side. So what do you do?
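Before getting to the answer, the shift property just stated can be sanity-checked numerically. This is a minimal sketch in Python; the sampling rate, pulse shape, and shift frequency f0 are arbitrary choices for illustration, not anything fixed by the lecture.

```python
import numpy as np

# Numerical check of the shift property: x(t) e^(j 2 pi f0 t)  <->  X(f - f0).
fs = 1000.0                               # sample rate (Hz), illustrative
t = np.arange(-1.0, 1.0, 1.0 / fs)        # time grid
x = np.exp(-t**2 / (2 * 0.05**2))         # a smooth baseband test pulse

f0 = 100.0                                # shift frequency (Hz), illustrative
x_shifted = x * np.exp(2j * np.pi * f0 * t)

f = np.fft.fftfreq(t.size, d=1.0 / fs)    # frequency grid for the DFT
X = np.fft.fft(x)
X_shifted = np.fft.fft(x_shifted)

peak = f[np.argmax(np.abs(X))]                   # where |X| peaks: near 0 Hz
peak_shifted = f[np.argmax(np.abs(X_shifted))]   # where the shifted |X| peaks: near f0
```

The magnitude spectrum of the original pulse is centered at 0 Hz; after multiplying by the complex exponential it is centered at f0, which is exactly the X(f - f0) behaviour.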
You build your signal first in baseband, in whatever system you have, and then you use this property to multiply by a suitable sinusoid and shift it to whatever frequency you want just before transmission. That is a very useful trick which simplifies your design a lot, and it is used constantly. Another one is the scaling property; I will not write it down in full, since x(at) becoming (1/|a|) X(f/a) should be more than enough. All right, the next topic, and I do not know what number I am at, so maybe I will say 3, is autocorrelation and the energy spectrum. Maybe this is not all that familiar. It is particularly useful when you go to random signals, but for now we are dealing with deterministic signals; even so, let me define it formally so that we will know what to do when these things show up. You have x(t), a deterministic function. The autocorrelation evaluated at tau (you usually write tau rather than t, just to differentiate it from time) is R_x(tau), the integral from minus infinity to infinity of x(u) x*(u - tau) du. Once again, this is closely related to the inner product and, obviously, to convolution. For instance, at tau = 0 you pretty much get the squared norm of x, and for every other tau you get the inner product of x(t) with a delayed version of itself. So the autocorrelation is basically an evaluation of inner products over all time delays, and it gives you a lot of information about the function. As I said, it is closely related to convolution: you can show R_x(tau) equals x(tau) convolved with x*(-tau). So if you send x(t) through a filter matched to itself, you get the
autocorrelation at the output, the matched filter x_mf. Based on this property one can quickly show that the Fourier transform of R_x(tau) is |X(f)|^2: x(tau) goes to X(f), x*(-tau) goes to X*(f), and multiplying the two gives |X(f)|^2. In particular, if you take an autocorrelation function, its spectrum is not only real-valued but nonnegative as well, which is a very good property for a spectrum to have. So that is that. The next thing is the energy spectrum. It is a very simple definition, but I want to write it down. Suppose you take x(t) and pass it through a very, very narrow bandpass filter centered at f0, with one edge at f0 + delta_f/2 and the other edge at f0 - delta_f/2, and height one; you get an output y(t). The energy spectrum at f0 (once again, this is much more useful for random signals, and we will come to that later) is defined as the energy in y(t), its squared two-norm, divided by delta_f, in the limit as delta_f tends to zero; delta_f is taken to be very, very small. If you have a reasonably well-behaved x(t), this will tend to |X(f0)|^2. So the energy spectrum is very closely related to the Fourier transform of the autocorrelation function, and you can see how you would define similar things for random signals. When x(t) is random, each realization is not fixed, so there is no point in computing the energy of each realization separately. What you do is average it out and compute the average
autocorrelation: in the autocorrelation integral you put an expectation around the product, and you get an average autocorrelation. Once you have that, it is a deterministic function of the delay, and you compute its Fourier transform to get the spectrum. In the random case this extension makes a lot of sense, but in the deterministic case one can make similar definitions too; nothing stops you. I think I have done reasonably well as far as time is concerned and have pretty much finished all that I wanted to say here. The next thing we will see is baseband and passband signals. We have about five minutes left, so maybe I will just quickly define these quantities and proceed further later. These are definitions which you might already know, but in this context they become very, very useful. When I introduced the ideal band-limited AWGN channel, I said I was going to assume my channel has a frequency response which is flat between -W and W. Typically that may not be the case: based on the requirement, your channel might actually be flat in some other frequency band, so you might be required to operate only in a certain band. Then you will need a certain passband characteristic for x(t): its spectrum should be nonzero only in that passband. For this, passband signals are very useful, so let me define them formally. x(t) is said to be a baseband signal if |X(f)| = 0 for |f| > W, for some W. Now, by that condition alone, every band-limited signal would count as baseband for a large enough W, so you also need the other conditions: X(f) should actually be nonzero within the band around zero, since only then is it a proper baseband signal, and you want to think of W as a reasonably
small number compared to the actual frequency at which you might be transmitting. So all that is understood here: a baseband signal is band-limited to a small band around zero frequency. A similar definition holds for an LTI system: an LTI filter with impulse response h(t) is a baseband filter if h(t) itself is a baseband signal. We have the analogous definition for passband: x(t) is passband if |X(f)| = 0 for ||f| - fc| > W, for some fc > W > 0. So around a certain center frequency, at +fc and -fc, you have nonzero spectrum in a small band, and everywhere else it is zero; that is a passband signal. Similarly, an LTI system with impulse response h(t) is passband if h(t) is a passband signal. Now, throughout this course we will consider real baseband signals and real passband signals. I have been talking about complex signals also, and you will see why soon enough: it turns out real passband signals can also be thought of as complex baseband signals, and that is why it is useful to have complex signals in our midst. Still, in the entire course we will be concerned with real baseband and passband signals; for communication purposes there is really no reason to consider anything other than these. It turns out this is good enough, and these passband signals we will represent in complex baseband. I will do that derivation, maybe in the next class; you can show a complex baseband signal is representative of a real passband signal. So in essence, what will we be dealing with?
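Before answering that, here is a small numerical sketch, with made-up parameters, of the passband definition just given: a real baseband pulse multiplied by a carrier becomes a real passband signal whose energy sits around plus and minus fc, and whose spectrum shows the conjugate symmetry discussed earlier.

```python
import numpy as np

# A real baseband pulse times a real carrier gives a real passband signal.
fs = 1000.0                                     # sample rate (Hz), illustrative
t = np.arange(-1.0, 1.0, 1.0 / fs)
baseband = np.exp(-t**2 / (2 * 0.05**2))        # real baseband pulse, a few Hz wide
fc = 100.0                                      # carrier frequency (Hz), illustrative
passband = baseband * np.cos(2 * np.pi * fc * t)

f = np.fft.fftfreq(t.size, d=1.0 / fs)
X = np.fft.fft(passband)

# Fraction of energy within +/- 20 Hz of the carrier, counting both +fc and -fc.
in_band = np.abs(np.abs(f) - fc) < 20.0
frac = np.sum(np.abs(X[in_band])**2) / np.sum(np.abs(X)**2)

# Conjugate symmetry X(f) = X*(-f), since the passband signal is real.
sym_err = np.max(np.abs(X - np.conj(X[-np.arange(t.size) % t.size])))
```

Essentially all the energy ends up in the two bands around +fc and -fc, and the symmetry error is at floating-point level, matching the conjugate-symmetry property of real signals.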
Baseband signals: real and complex baseband signals. Those are the only things we will be dealing with in this class. There is a lot of advantage in making this passband-to-baseband jump. The reason is that while the passband signal seems to depend on fc, once you come to baseband there is no fc in the picture anymore; all you have to do is a multiplication at the end to get to any fc you want. So you can do your design independently of the center frequency if you concern yourself only with the baseband signal, and that is the advantage. We will stop here; if there are any questions, it is a good time to ask. Fine, we will begin next class.
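As a wrap-up, the Parseval relation, the autocorrelation at zero lag, and the energy-spectrum definition from this lecture can all be sanity-checked on a discrete grid. The grid is only a rough numerical stand-in for the continuous-time integrals, and the test pulse and filter parameters are arbitrary choices.

```python
import numpy as np

fs = 1000.0
dt = 1.0 / fs
t = np.arange(-1.0, 1.0, dt)
x = np.exp(-t**2 / (2 * 0.05**2))           # a well-behaved (L2) test pulse

X = np.fft.fft(x) * dt                       # Riemann-sum approximation of X(f)
f = np.fft.fftfreq(t.size, d=dt)
df = fs / t.size                             # frequency-grid spacing

# Parseval: integral of |x(t)|^2 dt equals integral of |X(f)|^2 df.
energy_t = np.sum(np.abs(x)**2) * dt
energy_f = np.sum(np.abs(X)**2) * df

# Autocorrelation at zero lag is the signal energy: R_x(0) = ||x||^2.
R = np.correlate(x, x, mode="full") * dt     # R_x on a grid of lags
R0 = R[t.size - 1]                            # center of the "full" output is lag 0

# Energy spectrum: energy out of a narrow ideal filter at f0, divided by its
# width delta_f, approximates |X(f0)|^2 for small delta_f.
f0, delta_f = 2.0, 1.0
Y = np.where((f >= f0 - delta_f / 2) & (f < f0 + delta_f / 2), X, 0.0)
estimate = np.sum(np.abs(Y)**2) * df / delta_f
target = np.abs(X[np.argmin(np.abs(f - f0))])**2
```

The two energies agree to machine precision, R_x(0) reproduces the energy, and the narrowband estimate approaches |X(f0)|^2 as delta_f shrinks, just as the definitions in the lecture say.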