So we begin lecture 5 in the midst of confusion. Is that okay? Is it working out? Can you see? Okay, good. So this is lecture 5. In the last class, the last thing I did was write down some definitions for random processes, discrete time as well as continuous time. But I want to step back real quick and make a couple of plots of the Gaussian distribution. I think it's important that we plot the Gaussian distribution; several times you'll be asked to visualize it, so it's good to see plots. For Gaussian random variables, I'll show a very typical plot. Take N(0, 1) for instance. If you plot the PDF of this variable, the mean is 0 and the variance is 1. What will be the value at x equals 0? What's f_X(0)? The PDF is 1 by root 2 pi times e power minus x squared by 2, so at x = 0 it's 1 by root 2 pi, which is roughly 0.4 or so. And it'll die down pretty fast on both sides. Will it be symmetric? Yes, it'll be symmetric around 0. So it'll look like a bell with peak value about 1 by root 2 pi. That's how the plot looks. If you do normal with mean mu and variance sigma squared, the center is going to move to mu, and it'll also scale a little bit, because you divide by sigma. So the one-dimensional case is not too bad; you can quickly do it. That's the first case. The two-dimensional case is a little more tricky. Suppose I take x, y to be jointly Gaussian, IID unit normal. How will the joint PDF look? You have to draw it in two dimensions, right? It's tough to draw a picture like that on the screen, so I'll roughly do a support-type picture. If you do f_XY(x, y), it'll have its peak at (0, 0), and then it will die down in all directions in a circularly symmetric way. That's the way to visualize it.
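As a quick numerical companion to this picture, here is a minimal sketch in plain Python (no plotting library assumed) that evaluates the Gaussian PDF and confirms the peak value 1/root(2 pi) and the symmetry about the mean:

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """PDF of a normal random variable with mean mu and standard deviation sigma."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Peak of the standard normal at x = 0 is 1/sqrt(2*pi)
print(round(gaussian_pdf(0.0), 4))            # 0.3989
# Symmetric about the mean
print(gaussian_pdf(-1.0) == gaussian_pdf(1.0))  # True
```

Feeding these values to any plotting tool gives the bell curve described above; increasing sigma flattens and widens it.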
So it'll have a peak, the peak value will be something, and then it will die down circularly symmetrically in all directions. If your variances are not the same, that circular symmetry will be lost. The mean itself doesn't make a difference, but if the variances are different, the circular symmetry is lost; it will still die down in all directions, just in an asymmetric fashion. So if you have access to some plotting software, try some 2D plots of these functions with different means and variances. It'll give you a picture of how the PDF looks in two dimensions. It's an important picture to keep in mind. Okay, there is some weird thing happening on the screen here, I don't know why, but I'm going to hide this. All right, so one last question. Before we go on, I want to close the probability review with this last question, and I want you to tell me what the answer is. Suppose I define a random variable y as x plus n. n is normal with, say, 0 mean and variance sigma squared. x is uniform in the set {minus 1, 1}. When I say curly brackets, it's a finite set. So x uniform in the finite set means what? x takes value 1 with probability half and minus 1 with probability half, equal probability. But if I use square brackets, [a, b], what does that mean? If x is uniform in that, it's a continuous random variable uniform between a and b, so the PDF of x is 1 by (b minus a) between a and b. It's a small, subtle variation in notation, but the answers can totally change depending on which it is. And I'll say x and n are independent. What's the PDF of y? How do you go about finding such things? You can do convolution, right? Even though x is discrete, you write its PDF in terms of deltas.
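This convolution picture can be sketched numerically. Below is a small plain-Python check, with assumed parameter values (sigma, grid size), covering both bracket conventions: the discrete x gives a half-half mixture of two Gaussians, and the continuous x gives a numerically convolved flat-times-Gaussian density:

```python
import math

def normal_pdf(y, mu, sigma):
    """Gaussian PDF with mean mu and standard deviation sigma."""
    return math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def pdf_y_discrete(y, sigma):
    """f_Y for Y = X + N with X uniform on {-1, +1}: two Gaussians, each scaled by 1/2."""
    return 0.5 * normal_pdf(y, -1.0, sigma) + 0.5 * normal_pdf(y, 1.0, sigma)

def pdf_y_continuous(y, sigma, grid=2000):
    """f_Y for X ~ Uniform[-1, 1]: convolve the flat PDF (height 1/2) with the Gaussian,
    by midpoint-rule numerical integration over x in [-1, 1]."""
    dx = 2.0 / grid
    return sum(0.5 * normal_pdf(y - (-1.0 + (i + 0.5) * dx), 0.0, sigma) * dx
               for i in range(grid))

print(round(pdf_y_discrete(1.0, 0.5), 3))    # peak near +1, about half a Gaussian peak
print(round(pdf_y_continuous(0.0, 0.05), 3)) # small noise: close to the flat height 1/2
```

Plotting `pdf_y_discrete` over a range of y shows exactly the two half-height Gaussian bumps at minus 1 and plus 1 described next.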
And you know the PDF of n. So you convolve those two, and you get it. In terms of visualization, it's easy: it'll be two Gaussians centered around minus 1 and plus 1, each with variance sigma squared, but with a scaling of half on each to make sure it integrates to one. Suppose instead of uniform discrete I say it's uniform continuous on the same interval, minus 1 to 1. How will you do this? It's a considerably messier convolution, but still a convolution: take the PDF, which is the continuous PDF 1 by (b minus a) between a and b, and convolve it with the Gaussian. It'll be a little messier, but you can always get it; there's no problem doing that. If you want, you can go to the Fourier domain, do the convolution as multiplication, and come back. That's also possible. We'll be dealing with these kinds of things often, so I want you to get used to them. A similar thing can happen in two dimensions also: if I have a vector of y's, y1 is x1 plus n1, y2 is x2 plus n2, then I can think of it in two dimensions and a similar picture will emerge. All right, I think that's pretty much all I wanted to say. Let's go back to random processes. I've been distinguishing between two types of processes, discrete time and continuous time. Once again, notice that I'm not saying the random variables themselves are discrete or continuous; I'm only saying time is discrete or continuous. In the discrete time case I have the notation x_k; in the continuous time case I have x(t). Basically the index is either discrete or continuous, and we think of the index as time. It's not a big deal. So how do you specify a random process? You specify it by specifying what I called the finite distributions: you take a finite number of random variables from this collection that you have.
You have a huge collection of random variables; you take a finite number of them and specify their joint PDF. That's the proper way of specifying the random process. But in practice we will typically never do that. So how will we do it? We'll use what's called the sample function method of specifying the random process: you specify x(t) like a function of time involving random variables. Based on what the random variables are, you actually get a new random variable for each t. That's an indirect way of specifying the finite distributions as well, because theoretically one can compute the finite distributions from such a specification. All right, so that's where I'll stop; hopefully this is clear. A very common way of specifying a continuous time process is the one I wrote down: x(t) = a cos(omega t plus phi), some phase phi, where a, omega, and phi can in general be random variables with a certain joint distribution that I specify. Those random variables themselves don't change with time, but I have the variable t appearing explicitly in my sample function definition. So if I give you a set of times t1 through tn, you can plug them in, find x(t1) through x(tn) as functions of a, omega, and phi, and then evaluate their joint PDF. How would you go about doing something like that? How many of you are confident that, given t1 and t2 and a joint distribution for a, omega, and phi, you can get the joint PDF in, say, half an hour? Is it an easy problem? In this case it's not too bad; think about it. It's not all that terrible, and it can be done. In general, if you increase the number of time samples, it becomes more complicated.
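The sample function idea is easy to see by simulation. Here is a small sketch: a and omega are held fixed and only phi is drawn at random (an assumed special case, just for illustration); each draw of phi gives one realization of the whole waveform, evaluated at a few times:

```python
import math
import random

def sample_path(a, omega, phi, times):
    """One realization of x(t) = a*cos(omega*t + phi), evaluated at the given times."""
    return [a * math.cos(omega * t + phi) for t in times]

random.seed(0)
times = [0.0, 0.25, 0.5, 0.75]
# phi ~ Uniform[0, 2*pi); a = 1 and omega = 2*pi are fixed here (an assumption)
paths = [sample_path(1.0, 2 * math.pi, random.uniform(0, 2 * math.pi), times)
         for _ in range(3)]
for p in paths:
    print([round(v, 3) for v in p])
```

For fixed t1, ..., tn, the printed columns are realizations of the n random variables x(t1), ..., x(tn); their joint PDF is what the transformation-of-random-variables machinery computes.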
But in general it's more difficult; still, it can at least be done. If you don't even know how to start a problem like that, then you're in some trouble: go back and revise the notes you might have taken in 356 about transformation of random variables, how to deal with these kinds of indices and all that. All right, let's keep proceeding. To proceed further, I'm going to define two specific types of random processes. The first is the strict sense stationary random process. I'll keep doing this: whatever I write on the left part of the screen will be for discrete time, whatever I write on the right part will be for continuous time; I'll do both together as we go. Strict sense stationarity has a very simple definition. The joint PDF of x_{k1}, x_{k2}, and so on till x_{kn}, for a set of n samples of my discrete time random process, should be identical to the joint PDF of x_{k1 plus delta}, x_{k2 plus delta}, ..., x_{kn plus delta}, for all delta, all n, and all choices of k1 through kn. So for any set of n samples I take, it doesn't matter if I delay all of them by the same amount: I should get the exact same joint PDF. As long as that holds, my discrete time random process is said to be strict sense stationary. There's also a continuous time version. What will I do in the continuous time version? Wherever I have k, I replace it by t, and my delta can be a continuous index. I'm not going to write down the continuous time definition; it's similar, and you can figure it out. All right, so strict sense stationarity is very strong. And typically, when you do a practical study of these random processes, you're not worried about the PDF at all.
The PDF is there, of course, but usually the signals are bounded within some range, and you're not too worried about the entire PDF; knowing the exact probability distribution will not be so crucial to you. The kinds of things you need to know are the mean value, the power, and the spectrum: the average power, the average spectrum, and the mean values. Those are the things that are critical when you design. And all of those are just first and second order things: you don't need to know the complete joint PDF all the time. You're only worried about first order, which is the mean, and second order, which involves products of two samples; in fact the spectrum will come from delay and multiplication. So ultimately you're only interested in second order statistics of your entire random process, and that's enough for you to design systems. So we'll go ahead and define those first and second order statistics, and we'll be happy with that: not the entire joint PDF, only the first and second order statistics of my random process. So what's the first order statistic? The mean. In the discrete time case it's very easy to define: m_x(k), in general, is the expected value of x_k. In the continuous time case I'll have a similar definition: m_x(t) is the expected value of x(t). What is this expected value over? It's over the distribution of x_k, or the distribution of x(t). And it's easy to evaluate in most cases. Now, what kind of quantity is the left hand side? Will it be a random variable? It'll actually be a deterministic signal: a deterministic discrete time signal here, and likewise a deterministic continuous time signal there. So already the first order statistic is much simpler than the entire random process definition.
We don't have to worry about so many things; we just specify a discrete time or continuous time signal, which is much better for us. This is deterministic, and that makes it simpler to study. Next, the second order statistic. It turns out it's not enough to look at the expected value of x_k squared. That's enough for average power, but usually you're interested in the spectrum: how does my power profile change with time? That's what you study in frequency. To capture all that, you need the proper correlation-type second order statistic, and that's the definition of autocorrelation. Here's the definition: r_xx(k1, k2) is the expected value of x_{k1} x_{k2}. One more thing I should point out: I'm defining this for real random processes, so my x_k is real. Usually you think of random processes as real. It's possible to define a pair of random variables and think of them as complex valued; in that case the autocorrelation definition will change, and you'll have to conjugate one of the two factors. So remember that. In most cases in this course, at least, we'll be dealing with real random processes; if at all we need complex, I will put a conjugate on one of those two factors. The continuous time case has a similar definition: r_xx(t1, t2) is the expected value of x(t1) x(t2). I think my x(t2) has gone off the screen; I'm sorry about that, try to keep track of it. All right, so this is the definition of autocorrelation, and you can see it captures, loosely, the variation of power with time: you multiply two samples together and take the expectation. And if k1 equals k2, what do you get? You get the average power itself, the average energy so to speak, at that time.
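The expectation in this definition is over the ensemble of realizations, and that is easy to estimate by simulation. A sketch with an assumed toy process (i.i.d. plus-or-minus-1 samples, so r_xx(k1, k2) should be 1 on the diagonal and 0 off it):

```python
import random

def autocorr(realizations, k1, k2):
    """Ensemble estimate of r_xx(k1, k2) = E[x_{k1} * x_{k2}] from many realizations."""
    return sum(r[k1] * r[k2] for r in realizations) / len(realizations)

random.seed(1)
# Toy process: i.i.d. zero-mean +/-1 samples (an assumption, chosen so the answer is known)
paths = [[random.choice([-1.0, 1.0]) for _ in range(10)] for _ in range(20000)]
print(round(autocorr(paths, 3, 3), 2))  # k1 == k2: average power, close to 1
print(round(autocorr(paths, 3, 7), 2))  # distinct i.i.d. samples: close to 0
```

The same estimator works for any process you can simulate; only the theoretical values change.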
So that's once again instantaneous, but it's okay; at least you get something. All right, the next definition is what's called a wide sense stationary random process, or WSS for short. Pretty much all the random processes we'll be dealing with in this course will be wide sense stationary. Maybe one of them will not be, but we'll quickly make it wide sense stationary and work with it. The reason is that you can define a spectrum for these types of processes, so you want wide sense stationarity. So what is it? Two conditions need to be satisfied. The mean, instead of being a deterministic function of k, has to be a constant m_x: for all k you should have the same mean in your random process. And the autocorrelation, whether you evaluate it at (k1, k2) or at (k1 plus delta, k2 plus delta), should remain the same: you shift both arguments by a constant and the autocorrelation doesn't change. There are similar definitions in continuous time; I'm not going to write them down. So this is wide sense stationarity. Like I said, we'll pretty much be dealing with wide sense stationary processes because the spectrum can be nicely defined; I'll go ahead and define it in the next step. Before that, a quick question: which is stronger, strict sense stationary or wide sense stationary? Just by the name, strict sense stationary should be stronger, and indeed there can be wide sense stationary random processes that are not strict sense stationary. Does that make sense? Yes, it can happen; wide sense is the weaker notion, and usually that's what we'll be doing. In the Gaussian case, you'll see, the two coincide. But there can also be strict sense stationary random processes which are not wide sense stationary. Does that make sense to you? Is it possible?
Not possible? The trick lies in picking a random process for which the mean and autocorrelation don't exist. It's possible to have random variables for which the mean doesn't exist, and if the mean doesn't exist, you can't even check the wide sense stationarity conditions. So you can cook up a strange process like that which is strict sense stationary but not wide sense stationary. So, strictly speaking, you can't think of strict sense stationary as the stronger version. But in most cases, since we'll be dealing with random variables which have a mean, we don't have to worry about that kind of case. It's a minor technicality which you might come across sometime later. All right, can you name a popular distribution which doesn't have a mean? The Cauchy distribution: its mean doesn't exist. Okay, so let's go ahead and redefine the autocorrelation for the wide sense stationary case. It turns out, since shifts don't matter, you don't have to take k1 and k2; it's enough to have one variable, the shift. So that's how it's defined for wide sense stationary random processes: r_xx of simply one variable m, which is the expected value of x_{k plus m} x_k. And why can I write just m on the left hand side? Because this is going to be independent of k: whatever k I choose, the quantity on the right hand side, for a wide sense stationary process, has to evaluate to the same thing. So it can only depend on m and not on k, and I can write just one definition. In the continuous time case I'll use tau to denote this quantity: r_xx(tau) is the expected value of x(t plus tau) x(t). So that's my wide sense stationary definition.
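On that side question about distributions with no mean: the Cauchy distribution is the standard example, and you can see the problem numerically. In this sketch the running sample mean refuses to settle no matter how many samples you draw, while the sample median (which does exist) stays near 0:

```python
import math
import random

random.seed(2)

def cauchy_sample():
    """Standard Cauchy via the tangent of a uniform angle; its mean does not exist."""
    return math.tan(math.pi * (random.random() - 0.5))

# Sample means keep jumping around instead of converging; the median stays near 0
for n in (100, 10000, 100000):
    xs = [cauchy_sample() for _ in range(n)]
    print(n, round(sum(xs) / n, 2), round(sorted(xs)[n // 2], 2))
```

This is why the law of large numbers gives no comfort here: the occasional enormous sample keeps dragging the average around.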
So now the mean becomes a constant and the autocorrelation becomes a deterministic signal: a deterministic discrete time signal here, and a deterministic continuous time signal there. Once it becomes deterministic and nice, a simple one-variable thing, you can define a spectrum for it. You think of the Fourier transform of the autocorrelation function as the spectrum of your random process. So you see, wide sense stationarity is crucial in the spectrum definition; otherwise you won't get a nice spectrum definition. It's possible to make this rigorous and show it is indeed a spectrum; we'll do that later, in a somewhat inverse way. For now I'll just give the definition, which is very easy. In the discrete time case, you take the autocorrelation function and do a DTFT on it, and you get S_x(e^{j omega}): this is the PSD, the power spectral density, of my random process. In the continuous time case, you take a Fourier transform and you get S_x(f); once again, the power spectral density of my continuous time random process. One can show rigorously that this represents some kind of power; maybe I'll motivate it later, but for now you can take it as a definition. One can also take a z transform of the autocorrelation; I don't know quite what to call that object, it's a more general thing than the power spectral density, but it's also defined sometimes, and we'll deal with it several times. All right, so that's the spectrum. One last thing is cross correlation. Should I define it? Yes, I should define it.
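The PSD-as-DTFT definition can be checked with a tiny numerical sketch. Two assumed autocorrelation sequences are used: the delta sequence, whose PSD is flat, and a made-up valid autocorrelation 0.5^|m|, whose PSD concentrates at low frequencies:

```python
import cmath
import math

def psd(r, omegas):
    """PSD as the DTFT of an autocorrelation sequence, supplied as {lag: value}
    (the dict format is just a convenience assumed here)."""
    return [sum(val * cmath.exp(-1j * w * m) for m, val in r.items()).real
            for w in omegas]

omegas = [k * math.pi / 4 for k in range(-4, 5)]

# r_xx(m) = delta(m): a white sequence, flat PSD
print([round(s, 3) for s in psd({0: 1.0}, omegas)])   # all 1.0

# r_xx(m) = 0.5**|m| (assumed example): low-pass-shaped PSD
vals = psd({m: 0.5 ** abs(m) for m in range(-20, 21)}, omegas)
print(vals[4] > vals[0])   # more power at omega = 0 than at omega = -pi: True
```

Note that both computed PSDs come out real and non-negative, which is a general property we will lean on shortly.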
Cross correlation is between two random processes, both of which will be assumed to be what's called jointly wide sense stationary; we'll come to that in a moment. So you have x and y being two random processes, and r_xy(k1, k2) is defined as the expected value of x_{k1} times y_{k2}. These are all just mundane definitions, and there's a similar definition in continuous time. Now, x and y are said to be jointly wide sense stationary if, first, each of them is individually wide sense stationary, and in addition the cross correlation is only a function of k2 minus k1. So, jointly WSS if r_xy(k1 plus delta, k2 plus delta) equals r_xy(k1, k2). In that case you can define the cross correlation as a function of just one variable: r_xy(m) is the expected value of x_{k plus m} times y_k, similar to what we had before. There are also similar definitions possible for the continuous time case; I'm not going to spend too much time on that. All right, let's proceed further. The next important thing is filtering a wide sense stationary random process. From now on, when I say random process, we'll assume it's wide sense stationary; I won't keep repeating that. Suppose you have a random process coming into a filter with impulse response h(k); maybe the discrete time Fourier transform is H(e^{j omega}), maybe the z transform is H(z). You can show that what you get out will also be a random process. Oh, I'm changing notation here; I should not do that. Sorry.
So, x_k going in, and you can show that what you get out, y_k, will also be a random process. How do you visualize this? How do you think of it in practice? At any given time you will have an instance of this random process coming in; there can be several instances. You model that as a random process and study the average behavior: what happens on average, averaged over all those instances, is what we are looking at. In any one instance something else may happen, but this is all averaged. So you can show y_k will also be a random process, and in fact also wide sense stationary, and the mean of this output process will be the mean of x times what? Times H(e^{j0}). That's one thing you can show. And you can show that the autocorrelation function of this output process is the autocorrelation function of the input process convolved with h(m), convolved with h*(-m), which is the matched filter response of the filter. In most cases h will be real, so the conjugate is not too critical. From here you can easily show S_y(e^{j omega}) is S_x(e^{j omega}) times what? Times the modulus of H(e^{j omega}) squared. If you want the z transform version, S_y(z) is S_x(z) times H(z) times H*(1/z*). Those are all different versions of saying the same thing. In the continuous time case, when you have x(t) being input to a continuous time filter with impulse response h(t), which has Fourier transform H(f), you can write similar relationships: m_y will be m_x times H(0), and r_yy(tau) will be a similar expression, r_xx(tau) convolved with h(tau), convolved with h*(-tau).
And you can write S_y(f) equals S_x(f) times the absolute value of H(f) squared. All these things are also true in continuous time; you can show all of these properties. So you see why S_x(f) can be thought of as the spectrum of the random process itself: the autocorrelation function at zero delay represents the power in the signal, and the spectral relationship works just like the deterministic one. In the deterministic case, the spectrum gets multiplied by the transfer function; in the random case, the Fourier transform of the autocorrelation function gets multiplied by the absolute square of the transfer function. That's the correspondence. The reason the absolute square comes up is that the power spectral density has to be non-negative: you cannot have negative values. The autocorrelation function will always be such that its Fourier transform is non-negative everywhere. So you can't have simply S_x(f) multiplying H(f); it wouldn't make any sense. The mod square comes naturally there for that reason. And it will also be real: S_y(f) will also be real. Those things I didn't write down: the PSD is real and non-negative, which is quite important. All right, so that's something I wanted to talk about. The next thing we'll deal with is Gaussian random processes, and those will dominate our study once again. The definition of a Gaussian random process is simple, both in discrete time and continuous time. What's the definition? All finite distributions are jointly Gaussian, jointly normal. That's the definition. And it turns out wide sense stationarity and strict sense stationarity coincide in this case; in the Gaussian case there's no problem.
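The filtering relations above can be sanity-checked by simulation. The sketch below uses an assumed white Gaussian input, for which r_xx(m) is delta(m), so the output autocorrelation should reduce to the deterministic autocorrelation of the (assumed, real) example filter h = [1, 0.5, 0.25]; the time-average estimate relies on ergodicity, which holds for this example:

```python
import random

random.seed(3)
h = [1.0, 0.5, 0.25]    # a short example impulse response (an assumption)

def convolve(x, h):
    """Direct-form FIR filtering of x by h."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        for k, hk in enumerate(h):
            if n >= k:
                y[n] += hk * x[n - k]
    return y

# White Gaussian input: r_xx(m) = delta(m), so theory predicts
# r_yy(m) = sum_k h[k] * h[k + m], the deterministic autocorrelation of h
x = [random.gauss(0.0, 1.0) for _ in range(200000)]
y = convolve(x, h)

def time_autocorr(y, m, skip=3):
    """Time-average estimate of r_yy(m), skipping the filter's start-up samples."""
    n = len(y) - m
    return sum(y[i + m] * y[i] for i in range(skip, n)) / (n - skip)

def predicted(m):
    return sum(h[k] * h[k + m] for k in range(len(h) - m))

for m in range(3):
    print(m, round(time_autocorr(y, m), 2), round(predicted(m), 4))
# the two columns should agree closely
```

Taking the DTFT of either side of this agreement is exactly the S_y = S_x |H|^2 relation.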
Since we're assuming wide sense stationarity, everything will simplify tremendously. So how do you specify a wide sense stationary Gaussian process? You simply specify the mean and the autocorrelation. Now, not just any function can be the autocorrelation: it has to satisfy several properties, the critical one being that its Fourier transform should be real and non-negative. So there are some restrictions on this r_xx(m), but once you specify a valid mean and a valid autocorrelation, that's enough: you've specified the Gaussian random process completely. It's much simpler than specifying the finite distributions, and it's also much simpler than specifying sample functions. So typically you don't specify sample functions for a Gaussian process; you simply give the mean and the autocorrelation, and that's good enough. I'm going to claim that from here you can calculate all the finite distributions: any finite distribution you want can be calculated using just the mean and the autocorrelation function. That's the discrete time case; in the continuous time case you'll have a similar thing, a mean and r_xx(tau), and those two completely specify the Gaussian random process. Why is that true? Given these two, why can I find any joint PDF? That's it: all jointly Gaussian PDFs are completely determined by the mean vector and the covariance matrix. And can I find the covariance matrix from the autocorrelation function? Yes. What does the covariance matrix consist of? Basically the expected value of x_i times x_j minus the product of the means, and that expectation is the autocorrelation evaluated at j minus i. So once you specify the autocorrelation, everything is specified, and the joint PDFs can be found from it. No problem. A very special case of Gaussian random process which we will study often is what's called white: a white Gaussian process.
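The covariance-from-autocorrelation step above is mechanical enough to write down. A sketch, using an assumed zero mean and the assumed autocorrelation r(m) = 0.5^|m|: the covariance matrix of any n consecutive samples is filled in directly from the lag, and comes out symmetric and Toeplitz, as wide sense stationarity guarantees:

```python
def covariance_matrix(r, mean, n):
    """Covariance of (x_0, ..., x_{n-1}) for a WSS process:
    C[i][j] = r(j - i) - mean**2, with r a function of the lag."""
    return [[r(j - i) - mean ** 2 for j in range(n)] for i in range(n)]

# Example: zero mean, r(m) = 0.5**|m| (assumed values, not from the lecture)
C = covariance_matrix(lambda m: 0.5 ** abs(m), 0.0, 4)
for row in C:
    print(row)
print(C[0][1] == C[1][2] == C[2][3])   # constant along diagonals (Toeplitz): True
```

Plugging this matrix and the mean vector into the jointly Gaussian density formula gives the joint PDF of those n samples, which is the claim being made.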
Once you say white Gaussian, it's usually noise, but I'll simply keep calling it a white Gaussian process. A white Gaussian process usually has zero mean; we will always take the mean to be zero for white Gaussian processes. And the autocorrelation function is special: in the discrete time case it's going to be N_0 by 2 times delta(m). What's the discrete time delta? When m is zero it's one; everywhere else it's zero. In the continuous time case it's much more severe, and it makes the process pretty much non-physical: the autocorrelation function is N_0 by 2 times delta(tau), a Dirac delta. The first thing you should ask me is: what is this N_0 by 2? Why N_0 by 2? Well, it can be any constant; you can call it a or b, anything you like, but N_0 by 2 is there for historical reasons. So whenever we assume a white process, we take the autocorrelation to have value N_0 by 2; there's some reason it's there, we might as well accept it. If you compute the power spectrum, what will you get? It'll be N_0 by 2 for all omega here, and N_0 by 2 for all f there: a constant. That's what white means. The discrete time case is at least reasonable: you can expect that, since you're sampling in discrete time, consecutive samples may be uncorrelated. But in the continuous time case, what does it mean? This N_0 by 2 times delta(tau) means that however closely you sample, you get uncorrelated (and in the Gaussian case, independent) samples. That's a little impractical. But it turns out the noise processes we typically model as white Gaussian are generated with such a huge bandwidth, so S_x(f) is flat over such a huge bandwidth, and you normally work in such a small bandwidth, that you can assume the noise spectrum is pretty much flat forever.
And since you anyway do a low pass filter in the beginning, you'll only see a flat spectrum for the noise, and that's good enough. So it turns out to be a reasonable assumption, and that's why it's very useful. Like I said, noise is usually modeled as a white Gaussian process. So this n(t) that we had in our definition y(t) = x(t) + n(t), which I said is a random signal, is usually modeled as a white Gaussian random process. All right, so there are a couple of last things we have to do. The first is sampling a continuous time random process. You get a similar folding of the spectrum; I just want to remind you how that works. What happens when you sample a continuous time random process? You have a continuous time random process x(t); from it, you take samples every T seconds to get a discrete time random process y_k. What is y_k? y_k is x sampled at k times T. So from a continuous time random process you've gone to a discrete time random process. If the continuous time process is specified by a mean and an autocorrelation function, with power spectral density S_x(f), one can expect, similar to the sampling theorem, that the mean, the autocorrelation, and the power spectral density of the sampled discrete time process can be expressed in terms of the corresponding quantities for the continuous time case. That's what I'm going to describe now, and it's very easy to do. Look at y_k = x(kT). What should the mean of y be? The expected value of y_k is the expected value of x at some time, and since x is wide sense stationary, that has to equal m_x. No problem.
So the mean is not changed in any way. What about r_yy(m)? It's the expected value of y_{k plus m} times y_k. Convert that into the corresponding x's and you'll see it works out to r_xx(mT). No problem there either. But this r_xx(mT) is now a sampling of a deterministic signal, and you can directly apply your sampling theorem there. If you take the discrete time Fourier transform of r_yy, you're going to get 1 by T times the summation, m from minus infinity to infinity, of S_x(f minus m by T). So once you come to the discrete time case, it becomes very simple. What is omega here? If you want to go back to the physical sampling process, omega equals 2 pi f T. So you do get a folded spectrum in the random case as well, but you get it for the power spectral density. And when will you not have any aliasing, so that the same statistics are reflected even in the discrete time case? When your sampling frequency is at least two times the bandwidth of the original process. Once again, this doesn't mean you can reconstruct the random process itself or anything; that doesn't make too much sense. You can only reconstruct the statistics, and that's possible as long as there's no aliasing. All right, so the last bit of preliminary we need is a special random process which requires careful study, which I'll call the PAM random process. What is PAM? Pulse amplitude modulation; looks like all of you know what it is. The way it's defined is the following: it's a continuous time random process defined using a discrete time random process, y(t) equals the summation over k of x_k h(t minus kT).
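The r_yy(m) = r_xx(mT) result above is worth seeing once with numbers. This sketch assumes an ideal band-limited process, flat spectrum on |f| <= B, whose autocorrelation is the sinc function; sampling exactly at rate 2B lands on the sinc's zeros, so the samples come out uncorrelated, consistent with the no-aliasing folding of the flat PSD:

```python
import math

def r_x(tau, B):
    """Autocorrelation of an (assumed) ideal band-limited process,
    flat spectrum on |f| <= B: r_x(tau) = sinc(2*B*tau)."""
    u = 2.0 * B * tau
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

B = 2.0
T = 1.0 / (2.0 * B)                        # sample at exactly 2B: no aliasing
r_y = [r_x(m * T, B) for m in range(5)]    # r_yy(m) = r_xx(m*T)
print(r_y[0], all(abs(v) < 1e-9 for v in r_y[1:]))   # 1.0 True
```

Sampling slower than 2B would instead fold shifted copies of S_x on top of each other, and the discrete-time statistics would no longer match the continuous-time ones.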
Okay, so there are several things here. This xk is a discrete time random process. Maybe it has its own mean and autocorrelation function and power spectral density. Okay, so all those things will exist. So this T is some kind of a pulse duration. Okay, or symbol time if you will. Okay, so I will say symbol time or pulse duration. Okay, what is h(t)? h(t) is some deterministic signal. Okay, all right. So first question, what will y(t) be? y(t) will be a continuous time random process. That's clear. Will it be wide-sense stationary? Only for some very special choices of h(t) will it be wide-sense stationary. Right? If h(t) varies at all, you can immediately see there's no chance of this being wide-sense stationary in most cases. So y(t) in most cases will not be wide-sense stationary, and that's not a good situation for us. Why? Then you can't compute a spectrum for y(t). You can't design systems. You won't know what bandwidth to allot. You won't know anything. Okay, right? So that's something we don't want. We want to have a good handle on bandwidth and all that. So what we do is we introduce another random variable to make this wide-sense stationary. Okay, so this is a very standard trick. You might have seen this before. You introduce a timing offset, or phase offset, as a new random variable. So I'll define z(t) as y(t minus theta), where this theta is uniform in, notice my notation, [0, T]. What does it mean? Theta is a continuous random variable which is uniform in the interval 0 to T. Okay? And I'll say theta is independent of xk and it's independent of everything else. Okay, so once you do that, you'll see you can write this in terms of the xk: z(t) equals the sum over k of xk h(t minus kT minus theta). So once you do this, it turns out z(t) becomes wide-sense stationary as long as h(t) is BIBO stable. Okay, so you always need h(t) to be stable. Even in the previous case, whenever you filter, you need h(t) to be stable.
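As a quick sanity check on the definition, here is a sketch that builds a PAM waveform on a discrete time grid. The rectangular pulse, the symbol time and the ±1 symbol alphabet are my assumed choices for illustration, not anything fixed by the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1.0                       # symbol time (assumed)
fs = 20                       # grid points per symbol interval
K = 8                         # number of symbols
xk = rng.choice([-1.0, 1.0], size=K)   # iid +/-1 symbol sequence (assumed)

t = np.arange(0, K * T, T / fs)
h = lambda tt: ((tt >= 0) & (tt < T)).astype(float)  # rectangular pulse (assumed)

# y(t) = sum_k x_k * h(t - kT)
y = sum(xk[k] * h(t - k * T) for k in range(K))

# With this pulse, on [kT, (k+1)T) the waveform just equals the k-th symbol
assert np.allclose(y[:fs], xk[0])
```

Plotting y against t shows the staircase waveform; replacing h with a pulse longer than T makes neighboring symbols overlap, which is why the spectrum calculation below has to account for the pulse shape.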
Otherwise the mean itself won't be defined. Okay, so you can't do anything with that. Okay? So z(t) happens to be wide-sense stationary. And you can define a mean and autocorrelation for it. I've forgotten what the mean is, but I think you can work it out. It'll depend on H(0) and the mean of x. Okay, so all that you can write down. The important definition is for the power spectral density. Okay? So remember this. Remember this by heart. It's very, very important: Sz(f) equals 1 by T times mod H(f) squared times Sx(e power j 2 pi f T). Okay? So what is this S now? Sx is the power spectral density of the input random process. Remember it's a discrete time process. So you have a DTFT, and you convert back and forth between continuous and discrete frequency with omega equals 2 pi f T. Okay? Then mod H(f) squared, that's the standard Fourier transform of the pulse. Then you get an extra 1 by T. Okay? So all these things are crucial. Okay? Usually this Sx will be something very simple. Okay? So you don't have to worry too much about it. Say for instance it'll be a constant. Okay? The power spectral density for x will be a constant. Okay? The xk will typically be assumed to be independent, identically distributed. So that'll be a constant. So pretty much the spectrum of z is going to go like mod H(f) squared divided by T. Okay? So that kind of makes sense. If T decreases, then you expect a larger spectrum. Right? So 1 by T, and then mod H(f) squared shows up because that expression for z looks so much like a convolution between x and h. Okay? So if you write it out and simplify, it should work out in a way similar to how convolution worked out. Okay? So that's how the spectrum works out for these guys. Okay? So this expression you should know by heart. So maybe this last lecture was very crucial. As we go along, we'll take most of what I did in this lecture into the rest of the course.
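For the iid zero-mean case just described, Sx is the constant sigma squared, so the formula specializes to Sz(f) = sigma^2 |H(f)|^2 / T. A small sketch with a rectangular pulse of duration T (my assumed choice, so H(f) = T sinc(fT)), checking that integrating the PSD over frequency recovers the process power sigma^2:

```python
import numpy as np

T = 1e-3          # symbol time (assumed)
sigma2 = 1.0      # symbol variance; iid zero-mean +/-1 symbols give S_x = sigma2

def Sz(f):
    # PSD of the stationarized PAM process with a rectangular pulse of
    # duration T: S_z(f) = (1/T) |H(f)|^2 * sigma2, with H(f) = T sinc(fT)
    H = T * np.sinc(f * T)          # np.sinc(x) = sin(pi x) / (pi x)
    return (np.abs(H) ** 2 / T) * sigma2

# Sanity check: the integral of the PSD is the power of z, which is sigma2 here
f = np.linspace(-50 / T, 50 / T, 1_000_001)
power = np.sum(Sz(f)) * (f[1] - f[0])   # simple Riemann sum over +/- 50/T
assert abs(power - sigma2) < 0.01
```

The residual error comes from truncating the sinc-squared tails at ±50/T; the shape also shows the 1/T scaling mentioned above, since shrinking T spreads |H(f)|²/T over a wider band.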
It doesn't mean that you should neglect all the other things I talked about. Okay? So particularly the Gaussian random process stuff, make sure you revise it. Go back and read any of your favorite books on probability, go through the Gaussian random process, go through transformation of random variables; all those things will help you out a lot. Okay? So as far as random signals are concerned, it's enough if you know formulas like this. Okay? But it's good to know the discrete time Fourier transform, Fourier transforms, Z transforms; go and revise a little bit. It'll make you comfortable. Okay? So if you're looking for specific books, okay, let me summarize briefly once again before I give you these book chapters and we'll close for today. So the summary is we began by looking at the model for a communication system, which is going to be y(t) equals x(t) convolved with h(t) plus n(t), and the transmitter's job is to take in a sequence of bits b and put out x(t). The receiver's job is to estimate b by observing y(t). Okay? So we want to design this Tx and Rx. What are we concerned about as far as x(t) is concerned? I'm concerned about its power and bandwidth. As far as n(t) is concerned, I'm concerned about its power. Okay? And finally, what are the design parameters that I want to optimize? Bit rate and what? Probability of error. Okay? Bit error rate. Maybe that's a nicer way of putting it. Okay? Alright. So after this we saw a whole bunch of review of signals and systems, we saw DSP a little bit, and then we saw probability and random processes. Okay? So if you want a good reference in terms of books, I would suggest, okay, maybe the next page. Almost all of what I've done is in these book chapters. If you take, for instance, Barry, Lee and Messerschmitt, it's in chapters 1, 2, 3. Okay? And if you take, for instance, Proakis, fifth edition. Okay? So I have the fifth edition with me.
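The trade-off summarized above, noise power versus bit error rate, can be seen in a toy version of the model. This is a minimal sketch under my own assumptions (an ideal channel where h is a delta, ±1 amplitudes, and an assumed noise level), comparing the measured bit error rate of a threshold receiver against the Gaussian tail prediction:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
n_bits = 200_000
bits = rng.integers(0, 2, n_bits)
x = 2.0 * bits - 1.0                 # transmitter: map bits to +/-1 amplitudes
sigma = 0.5                          # noise standard deviation (assumed)
y = x + sigma * rng.standard_normal(n_bits)   # ideal channel plus Gaussian noise

bits_hat = (y > 0).astype(int)       # receiver: simple threshold at zero
ber = np.mean(bits_hat != bits)

# Prediction: P(error) = Q(1/sigma) = 0.5 * erfc(1 / (sigma * sqrt(2)))
ber_theory = 0.5 * erfc(1.0 / (sigma * sqrt(2.0)))
assert abs(ber - ber_theory) < 0.003
```

Raising sigma (more noise power) or shrinking the amplitudes (less transmit power) pushes the bit error rate up, which is exactly the tension between the quantities listed in the summary.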
So if you're using any other edition, it might be the first and second chapters. Okay? So these two are good reads. Okay? So if you're looking for something to read, go back and pick up these books and try to read them. Both books should be available, and most of what I did is condensed into these chapters. Okay? So it might be good for you. All right. So I think we'll stop with this. From next class onwards, it's real digital communication. Okay? So you better come prepared with all these things and otherwise you'll miss a whole bunch of