This is lecture 25, and what we are going to see beginning with this class is loosely called equalization. Maximum likelihood sequence detection is also a form of equalization; equalization is a general term for receiver signal processing that tackles ISI. That is the loose, broad way of putting it. There are also possibilities of doing something at the transmitter to equalize, but it is the receiver side I want to talk about closely. So far, the only way we saw for tackling ISI was to build an optimal front end using the whitened matched filter: you do a matched filter, followed by a symbol-rate sampler, followed by the whitening filter. That gives you an optimal front end which converts your received signal into a set of symbol-rate samples. Then you run those z_k through the Viterbi algorithm, and that is the optimal way of doing things; you cannot do any better than that. The problem we saw with the Viterbi algorithm is basically that it is not implementable if the number of taps becomes very large. In some cases your channel M(z) might actually be IIR, infinite impulse response, and then there is no question of implementing the Viterbi algorithm. So there are several cases where you want alternatives, and those alternatives are what we will briefly see here. For most of this part I will be closely following Chapter 8 of Barry, Lee and Messerschmitt. Let me once again remind you of where we are. You have a set of symbols coming into a transmit filter g(t), and then you have c(t). Let me see how well people remember what c(t) is. What is c(t)? The channel response.
But actually it is the complex baseband equivalent of the channel response, so c(t) can in fact be complex; keep that in mind. Then you add n(t). Once again, n(t) is the complex baseband equivalent of the passband noise, so n(t) can also be complex. When I talk of complex processes, the autocorrelation and power spectral density are defined slightly differently. What is the autocorrelation for n(t)? It is E[n(t) n*(t + τ)]: you always do a conjugate, and there is some confusion about where you put the conjugate. I did not go into detail there, but remember that there are some intricacies in the actual definition of a complex random process. So you have a received signal, and we derived the optimal front end in two different ways. First I defined the metric, which was the minimum distance metric in the continuous signal domain itself, and then we derived the optimal front end. It is also possible to derive it using the orthogonal projection, which is the optimal thing to do. Either way, you get this front end, which is h*(−t). And what is h(t)? It is g(t) convolved with c(t). Then you sample at the symbol rate to get a sequence y_k, and we saw this y_k has ISI from both the causal and the anti-causal part. In fact, y_k can be written as s_k convolved with what? With ρ_h(k), where ρ_h is the autocorrelation of h, plus n′_k. Since this is an autocorrelation function, it is symmetric about k = 0, so we will have both causal and anti-causal ISI: y_k will have contributions from s_k, s_{k±1}, s_{k±2}, and so on. And what more do we know about n′_k?
n′_k is a complex Gaussian random process, but it is not white. What will be its power spectral density? It will be proportional to S_h: if I actually write down the power spectral density, it is proportional to S_h(e^{j2πfT}). That is something to keep in mind. We also know that the autocorrelation ρ_h(k) can be factored as γ² (m_k ∗ m*_{−k}), where m_k is monic, causal, minimum phase and all that. Once you do that, you go ahead and filter with 1/(γ² M*(1/z*)) to get z_k, which has only causal ISI terms, and that ISI can be written simply as s_k filtered by m_k alone. Those are all things we saw. This entire thing is the front end, so to speak; the reason it is called a front end is that it processes the received signal first, at the very front, and produces symbol-rate samples (or samples in general). This one is the whitened matched filter, WMF for short, and we saw it is an optimal front end because it does an orthogonal projection onto the signal space, so it has to work. Now, a couple of words about the spectral factorization and what γ² is, because that is important and I went through it rather quickly. If you view it in the z domain, S_h(z) = γ² M(z) M*(1/z*). It is very typical to talk of rational transfer functions in the z domain; whenever you think of z transforms, you tend to think of rational transfer functions. But in reality, the Fourier spectrum lives only on the unit circle: in terms of f and the Fourier transform itself, you do not think of it as a rational form or anything like that.
You do not have to have the Fourier transform in rational form; you can think of it as a general function. The z-domain form is useful when S_h(z) is rational; when it is not, it is good to have the Fourier transform form, which is γ² M(e^{j2πfT}) M*(e^{j2πfT}). The reason I am writing it with fT, as opposed to just using a θ or φ normalized frequency variable, is to emphasize what? The symbol rate: capital T is crucial here, the symbol rate enters the picture. Sometimes I will also write it as e^{jθ}, where θ is the normalized frequency variable you usually use for the DTFT. Here again M is monic, causal, minimum phase and all that. What is γ²? We have a formula for γ², which I hope I gave you before:

γ² = exp( (1/2π) ∫ from −π to π of log S_h(e^{jθ}) dθ ).

Do not forget the log inside the integral. Clearly, once you give a formula like this, for the spectral factorization to exist, what should happen? This integral should exist. If this integral does not exist, there is no question of the spectral factorization. That condition is called the Paley–Wiener condition for the existence of the spectral factorization. Most spectra will satisfy this constraint. The time when this integral will not exist is when S_h(e^{jθ}) goes to 0 over a non-zero interval. If it goes to 0 only at one point, or at a finite number of points, you can still show this will exist; but if it goes to 0 on an interval of non-zero length, then clearly this integral will diverge.
So, what does it mean in practice? Do not use such frequencies. If you know this folded spectrum is going to go to 0 over a band, do not use that band for transmission; otherwise very bad things can happen, and obviously you do not want to do such things. Now, this quantity γ² you can also see will be positive: even though you are doing a log and integrating, finally you are taking an exponential, so everything becomes positive. This quantity is actually called the geometric mean of S_h, and I will denote it that way because I want to distinguish it from the arithmetic mean. How will I define the arithmetic mean of S_h, or of any other function? Simply

(1/2π) ∫ from −π to π of S_h(e^{jθ}) dθ.

Now you can use the standard inequalities. Which standard inequality might I want to use here? Jensen's inequality can be used to show that the arithmetic mean is at least as large as the geometric mean. And when will there be equality? When you have a flat function. So the result we will use is: the arithmetic mean is at least as large as the geometric mean, with equality if and only if S_h is flat. I can define these geometric and arithmetic means for any other spectrum defined from −π to π as well, and this is the result we will use later. This is something I maybe did not mention before, so I wanted to emphasize it quickly.
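The pieces above can be tied together numerically. The sketch below uses a toy pulse h = [2, 1], which is my own assumption (not from the lecture): it factors ρ_h via polynomial roots, and then checks that γ² equals the geometric mean of S_h and that the arithmetic mean is at least as large.

```python
import numpy as np

# Toy pulse h = [2, 1] (an illustrative assumption)
h = np.array([2.0, 1.0])
rho = np.convolve(h, h[::-1])        # [2, 5, 2] = rho_h(-1), rho_h(0), rho_h(1)

# z * S_h(z) = 2z^2 + 5z + 2; its roots pair up as (r, 1/r).
# The minimum-phase factor keeps the roots inside the unit circle.
roots = np.roots(rho)
M = np.poly(roots[np.abs(roots) < 1])            # monic M(z) = 1 + 0.5 z^{-1}
gamma_sq = rho[len(h) - 1] / np.sum(np.abs(M)**2)  # rho_h(0) = gamma^2 * sum |m_k|^2

# Geometric vs. arithmetic mean of S_h(e^{j theta}) on a dense grid
theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
S = np.abs(2.0 + np.exp(-1j * theta)) ** 2
gm = np.exp(np.mean(np.log(S)))      # geometric mean = gamma^2 = 4
am = np.mean(S)                      # arithmetic mean = energy of h = 5
```

Here the arithmetic mean (5) strictly exceeds the geometric mean (4) because this spectrum is not flat, exactly as Jensen's inequality predicts.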
So, at the end of the day, we saw that instead of battling with a very complicated channel model, the only model of interest to us is s_k going through a monic, causal, minimum-phase M(z), with white Gaussian noise added to produce z_k. This model is fairly important: z_k = (s ∗ m)_k + n_k, and not only does this m_k have all these nice properties, in addition n_k is white and Gaussian. Remember it is complex Gaussian, zero mean, and what is the variance of the real or imaginary part of n_k? You can show it will be N0/(2γ²) for the real part, and the same for the imaginary part. So what will be the level of the PSD of n_k? It will not be N0/(2γ²); it will be N0/γ², because of the complex part. If n_k were a real random process, it would have been just N0/(2γ²); since n_k has both real and imaginary parts, you will see it has to be N0/γ². If you compute E[|n_k|²] = E[n_k n*_k], the real and imaginary contributions add. That is one thing to watch out for when people talk about the power spectral density of a complex process; it is a little bit different from the real case. So this is the model we are going to work with. Remember the various assumptions needed for this model to hold. First of all, the receiver has to know h(t). Then you have to be able to spectrally factorize the folded spectrum coming from h(t). And what is this h(t)? It is g(t) convolved with c(t). Maybe g(t) you already know, since it is the transmit filter; c(t) you definitely have to know.
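The variance bookkeeping for the complex noise above is worth checking numerically once. The values of N0 and γ² below are arbitrary assumed numbers for illustration: each part of n_k gets variance N0/(2γ²), and the total power E[|n_k|²] comes out as N0/γ².

```python
import numpy as np

# Assumed values for illustration only
N0, gamma_sq = 0.2, 4.0
var_part = N0 / (2 * gamma_sq)     # variance of real (or imaginary) part = 0.025

rng = np.random.default_rng(2)
n = (rng.normal(0.0, np.sqrt(var_part), 200000)
     + 1j * rng.normal(0.0, np.sqrt(var_part), 200000))

# Real and imaginary contributions add: E[|n_k|^2] = N0 / gamma_sq
total_power = np.mean(np.abs(n) ** 2)   # ~ 0.05
```

This is the small trap mentioned in the lecture: the PSD level of the white complex process is N0/γ², twice the per-part variance.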
And then you have to hope that S_h(e^{jθ}) is spectrally factorizable and all these filters are nice and implementable, nothing terribly bad. That is a serious limitation, but in spite of it, this is a useful model, at least in theory, to know what is possible. The next thing we will need, to compare all these different equalization methods, is a quick way to compare them in terms of probability of error. We saw for MLSD that evaluating the probability of error was a bit of a pain, but we were still able to evaluate it. What was the key idea used to simplify all those calculations? You go to pairwise error probabilities, as opposed to exact error probabilities. With the pairwise idea, invariably everything simplifies to one Q function. Roughly, whenever you do signal processing like this and finally make decisions on your symbols, you can always evaluate the probability of error approximately using the pairwise idea, and you will get one Q function multiplied by a constant outside. For many of these equalization methods, a very accurate analysis is pretty much impossible, it is very tough; so we will not do that. We will only do pairwise analysis and expect one Q function for the probability of error. We will compare these methods based on the pairwise error probability, or rather on what is inside the Q function in the pairwise error probability, and that is usually called the figure of merit. It is loosely like an SNR, but there might be components other than SNR in there. So what is the idea here? Finally, the probability of, say, symbol error in this case, or some other error in general, will roughly work out to some constant times Q of an argument, and I want to say that argument is related to the figure of merit.
It is standard to call that argument √γ / 2; you could call it anything else, it does not really matter, this is just a convenient definition. Usually the argument works out to d/(2σ), so you want that d²/σ² quantity to be γ; that is why this artificial definition is made. This γ is called the figure of merit. For any system, what would you like the figure of merit to be, small or big? Big: it has to be as big as possible, the bigger the better. If you have a larger figure of merit, your system is doing better than another system with a smaller figure of merit. But remember, there is an approximation here, and there is a constant outside; all these things matter. In the non-ISI case, when you have, for instance, a two-dimensional constellation X with no ISI, what does the figure of merit work out to? It works out to d²_min(X)/σ²: the minimum squared Euclidean distance in the alphabet X, divided by σ², where σ² is the variance of the real or imaginary part of the noise. Your symbol s_k comes from X, and that is what the figure of merit works out to; in the non-ISI case it is very, very clear. What happens in the ISI case is what we are going to look at. At best, we can expect the ISI case to be as good as the non-ISI case, and if it happens to be as good we can be happy about it; but maybe it will be less, maybe slightly weaker, we do not know.
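The definition above can be made concrete with assumed toy numbers (BPSK at ±1 and a chosen noise variance, neither from the lecture): the pairwise error probability is Q(d/(2σ)) = Q(√γ / 2) with γ = d²/σ².

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail function: Q(x) = P(N(0,1) > x)
    return 0.5 * erfc(x / sqrt(2))

# Assumed toy case: BPSK symbols at -1 and +1, per-part noise variance 0.25
d_min_sq = 4.0                     # |(+1) - (-1)|^2
sigma_sq = 0.25
gamma = d_min_sq / sigma_sq        # figure of merit = 16
p_pairwise = Q(sqrt(gamma) / 2)    # Q(d_min / (2 sigma)) = Q(2)
```

A bigger γ pushes the Q-function argument up and the pairwise error probability down, which is exactly why "bigger is better" for the figure of merit.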
So, we will have to look at that carefully, and the comparison will be in terms of the figure of merit. For the ISI case, one can quickly find bounds on the figure of merit. In the non-ISI case the expression itself came out very easily; for the ISI case, bounds are easy, while the exact evaluation can be more difficult. The first bound I am going to talk about is what is called the matched filter bound. Basically, you assume that only one symbol is transmitted. I just said ISI case, but if only one symbol is transmitted there cannot be any ISI, right? Obviously there cannot be, it is only one symbol. But it is definitely a bound for the ISI case. The reason you do this is that you want to account for the channel. In the plain non-ISI setting I was not even thinking of any h(t), so the energy of the channel kind of gets skipped; but that is important to take care of, so you have to bring in the channel a little bit. For that I use the matched filter bound, where only one symbol is transmitted but the channel is still there. If that is the case, what is your z_k? Simply z_k = s_0 m_k + n_k for k = 0, 1, 2, and so on. Your symbol sequence s_k is simply s_0 δ_k, so convolving with m_k is just a multiplication, and it becomes this. Now, this s_0 belongs to X; this is my transmitted symbol. I want to measure the minimum distance between any two received symbols; remember, I am doing only the pairwise probability.
So, all my symbols now live in some very large dimensional space, but I have two symbols between which I want to find the distance. What are the received symbols now? Not just s_0: it is actually the sequence s_0 m_0, s_0 m_1, and so on. The received symbols are (a m_0, a m_1, a m_2, …) for a ∈ X. If you had to plot the received constellation, you would need a really, really large dimension; if the channel memory μ is finite, it is of course finite dimensional, but the dimension is still fairly large. Now I have to find the distance between any two received symbols, where the two received symbols correspond to two distinct transmitted symbols. Once I find the minimum distance between any two received symbols, I can use the pairwise probability very easily, and my figure of merit calculation comes automatically without any problem. So how do you find that?

d²_min = min over a ≠ a′ in X of Σ_{k=0}^{∞} |a m_k − a′ m_k|².

Now m_k factors out nicely. Let me write that down carefully, since I did it a little fast: |a − a′|² comes out of the summation because it does not depend on k, and you are left with

min over a ≠ a′ of |a − a′|², multiplied by Σ_{k=0}^{∞} |m_k|².

And what is that first factor? It is d²_min(X). So this we can show.
Therefore, we see that in this single-symbol, effectively non-ISI case, d²_min becomes d²_min(X) multiplied by the quantity Σ_{k=0}^{∞} |m_k|². The first factor I can quickly evaluate, but the second one seems a little complicated: I might have to do a spectral factorization to evaluate it. Maybe I want to avoid the spectral factorization. How can I avoid it and come up with an expression for this quantity in terms of γ² and just h? Exactly: it can be quickly computed by integrating S_h(e^{jθ}). Basically, it is the energy in h, with a γ² entering the picture: one can show that Σ |m_k|² = E_H/γ². This is because γ² |M|² equals the folded |H|² spectrum by Parseval: integrate over |M|² and you get the same as integrating over |H|², up to the γ². So the sum becomes the energy in the received pulse, E_H, divided by γ². Why does the γ² come in the denominator? That is how we defined the spectral factorization: γ² is the positive constant we pulled out. So now this gives us an easy way of computing the figure of merit for the matched filter bound, or the non-ISI case, which is d²_min/σ². For d²_min I use this formula, so it is (1/σ²) · d²_min(X) · E_H/γ². But what is σ² now? It is the variance of the real or imaginary part of the final n_k, which is N0/(2γ²). You see the γ² also cancels, and the matched filter bound nicely evaluates to

γ_MF = d²_min(X) · 2 E_H / N0.

This is nice to see.
So, remember E_H is the energy in the received pulse h(t). Whatever constants you need to make the summation work out, this E_H has to be suitably defined: if you want to evaluate it from |H|² via the DTFT integral, there has to be a 1/2π factor, and it is also possible to find it from various other formulas, so be careful. Basically this is the quantity, and I am going to compute it as the integral of |H|². So this is a nice figure of merit to have. What does it mean? The figure of merit for the ISI case can never be larger than γ_MF: this is the largest possible figure of merit you can expect in the ISI case. And see what factors are involved: you have d²_min(X), of course, you expect that; then the energy in the received pulse E_H; then N0/2 dividing. This is nice in that it can be very easily evaluated; nothing stops you from evaluating it. From here, the probability of error for the ISI case, symbol error or any other error with ISI, is going to be greater than or equal to some constant times Q(√γ_MF / 2). That is the probability of error with pretty much no ISI. This is the first bound, and in one or two cases we will definitely compare with this matched filter bound to see whether things are working. It is a useful bound in practice: any other equalization scheme you come up with, you can always compare with this and see how close you are, to decide whether or not you are doing very well. The other bound we will look at is the MLSD bound.
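Before moving to the MLSD bound, the matched filter bound is simple enough to evaluate directly. The alphabet, pulse samples and N0 below are assumed toy values, not numbers from the lecture.

```python
import numpy as np
from itertools import combinations

# Assumed toy values: unit-energy QPSK alphabet, a short received pulse, N0
X = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
d_min_sq = min(abs(a - b) ** 2 for a, b in combinations(X, 2))   # = 2

h = np.array([1.0, 0.5, 0.25])       # samples standing in for h(t)
E_H = np.sum(np.abs(h) ** 2)         # energy in the received pulse = 1.3125
N0 = 0.1

# Matched filter bound: largest figure of merit any ISI receiver can reach
gamma_MF = d_min_sq * 2 * E_H / N0   # = 52.5
```

Note that the channel enters only through its energy E_H: the shape of the pulse does not matter for this bound, only how much energy survives the channel.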
So, this one is more difficult to evaluate, and I will not make a big show of evaluating it because we know it is tough, but we know how to do it. We know MLSD is the best receiver for the ISI case itself, not just the non-ISI case. Its figure of merit we also did before: it is going to be d²_min/σ², but what is d²_min now? Do you remember how we defined the minimum distance between two points in the received constellation for MLSD? There was a term I introduced in terms of paths on the trellis: the error event. Basically you can write this as a minimum over error events E of a distance metric defined for the error event. One can evaluate it for a trellis, but it is not very easy; it is possible, and you can do it. In general this will satisfy d²_min ≥ d²_min(X); we saw that, and one can show this result in a very nice way, and it works in your favor. It is difficult to make a much more precise statement about the MLSD figure of merit than this. So using these bounds, you can see that γ_MLSD on the one side is at most the matched filter figure of merit, and on the other side it is at least d²_min(X)/(N0/(2γ²)); remember the σ² here is N0/(2γ²). That is the best you can say for MLSD, unless you know the actual M(z), it has finitely many taps, and you can run some algorithm to find what the minimum-distance error event is.
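Such a search can be sketched by brute force for a tiny channel. The channel M(z) = 1 + 0.5 z^{-1} and the BPSK alphabet below are my own assumptions; for BPSK at ±1 the error symbols live in {0, ±2}, and the squared distance of an event e is the energy of e convolved with m.

```python
import numpy as np
from itertools import product

# Assumed toy channel and alphabet: M(z) = 1 + 0.5 z^{-1}, BPSK at ±1
m = np.array([1.0, 0.5])
best = np.inf
for L in range(1, 6):                          # error events up to length 5
    for e in product([-2.0, 0.0, 2.0], repeat=L):
        if e[0] == 0.0:
            continue                           # an event starts with a nonzero error
        d2 = np.sum(np.convolve(e, m) ** 2)    # squared distance of this event
        best = min(best, d2)

# best = 5.0 = d_min^2(X) * sum |m_k|^2: the single-symbol error event wins,
# so for this particular channel MLSD actually meets the matched filter bound.
```

This exhaustive search is exponential in the event length, which is exactly why the lecture says the exact MLSD figure of merit is only computable for small trellises.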
So, if it is a small enough trellis then you can do it; otherwise it is tough. That is roughly the story on figures of merit, and for the approximate methods we will see, we will also calculate figures of merit. Hopefully you are convinced that the figure of merit is a much, much easier calculation than an accurate probability of error: you just find the closest received signal constellation points and then do Q(d/(2σ)), much easier than anything else. Alright, that is all the background we need. We can directly jump into one of the simplest constructions out there for equalization, called the zero-forcing linear equalizer. The construction is actually very easy, fairly simple to describe, and even the analysis is fairly easy. Here is the channel we have: M(z), noise gets added, we have z_k. The zero-forcing linear equalizer basically puts a linear filter after z_k, so that the output x_k has zero ISI: x_k depends only on s_k, not on any other signal component. What should that filter be? 1/M(z). At this point this is pretty much the unique filter, though if you change the earlier blocks there can be more possibilities: remember, before this, r(t) goes through h*(−t), you sample at the symbol rate, and then you apply 1/(γ² M*(1/z*)). If you change many of those, you can equivalently achieve the same overall effect. But this is also a good way of doing it; this is the filter which achieves zero-forcing linear equalization. Once you do the equalization, what can I do now? Symbol-by-symbol detection; I am not going to talk about that in detail now.
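The 1/M(z) filtering can be sketched as a one-line recursion. The channel M(z) = 1 + 0.5 z^{-1} and the BPSK symbols are assumed toy choices, and I leave the noise out so the cancellation is visible.

```python
import numpy as np

# Assumed toy channel M(z) = 1 + 0.5 z^{-1}; BPSK symbols; noiseless for clarity
rng = np.random.default_rng(1)
m = np.array([1.0, 0.5])
s = rng.choice([-1.0, 1.0], size=100)
z = np.convolve(s, m)[:len(s)]             # channel output z_k = s_k + 0.5 s_{k-1}

# Zero-forcing linear equalizer: x_k = z_k - 0.5 x_{k-1} implements X(z) = Z(z)/M(z)
x = np.zeros_like(z)
for k in range(len(z)):
    x[k] = z[k] - m[1] * (x[k - 1] if k >= 1 else 0.0)

# With no noise, x recovers s exactly; with noise, this same recursion would
# shape the noise PSD into N0 / S_h, which is the noise enhancement issue below.
```

The recursion form, rather than an explicit inverse filter, already hints at the feedback picture used later for the decision feedback equalizer.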
So, whenever people say zero-forcing linear equalization, they mean you filter by 1/M(z) and then do symbol-by-symbol detection; that is already implied when you say linear equalization. So you do symbol-by-symbol detection to get ŝ_k. What is x_k now? x_k is going to be s_k plus some noise, but what are the statistics of that noise? Will it be Gaussian? Yes: it is a Gaussian random process filtered by a linear time-invariant filter, so it is once again Gaussian, no problem there. But it will not be white. What will its PSD be? That is something you can determine: it will be of the form N0/S_h(e^{jθ}). Do you agree or not? There is an N0 on top, and the γ²-type factors cancel along the way, so the PSD works out to N0/S_h(e^{jθ}). Right now it may not seem like a bad thing; what is wrong with a weird PSD like N0/S_h(e^{jθ})? An interesting question is: what is the variance of the real or imaginary part of n′_k? Suppose I say σ_v² is the variance of the real part of n′_k. It will also equal the variance of the imaginary part; that property is preserved by linear filtering, it will not be lost. But how do you calculate σ_v²? You integrate out the area under the PSD and divide by 2. So let me do that.
It will become σ_v² = (N0/2) · (1/2π) ∫ from −π to π of dθ/S_h(e^{jθ}); there has to be a 1/2π in there, otherwise it does not work. So what do you think happens to σ_v²? Before this, what was it? N0/(2γ²). Now it has become (N0/2) times the arithmetic mean of 1/S_h(e^{jθ}): the final expression I am writing is σ_v² = (N0/2) · AM(1/S_h). So what can you expect σ_v² to be in general? By the arithmetic-geometric mean inequality, it is definitely larger than N0/(2γ²), the value we had at the whitened matched filter output. That is why it is said that linear equalization causes what is called noise enhancement; people typically call this noise enhancement. Compared to the whitened-matched-filter case, the variance is going to be larger. When will it be much, much larger? If there is a spectral null somewhere in your channel, then it is going to blow up to a very large number; in fact, in that case even the linear equalizer itself is very questionable. Your variance is going to be a huge number you cannot handle, and even a dip in the channel pushes the linear equalizer out of the picture. And why would there be dips in the channel? Well, that is exactly what you want to do: you want to use bandwidth where the channel goes to low amplitudes, so you are expecting some very low amplitude part of the channel. That is why you are doing equalization in the first place; if the channel were reasonably flat, you would not have to do much equalization. Since you are expecting it to go down, you may not be able to do linear equalization.
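The noise enhancement can be quantified numerically. The folded spectrum below (coming from an assumed toy pulse h = [1, 0.5]) and the value of N0 are illustrative assumptions.

```python
import numpy as np

# Assumed toy folded spectrum S_h from h = [1, 0.5], and an assumed N0
theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
S = np.abs(1 + 0.5 * np.exp(-1j * theta)) ** 2
N0 = 0.1

sigma_v_sq = (N0 / 2) * np.mean(1.0 / S)   # ZF-LE: (N0/2) * AM(1/S_h) ~ 0.0667
gamma_sq = np.exp(np.mean(np.log(S)))      # geometric mean of S_h = 1 here
sigma_wmf_sq = (N0 / 2) / gamma_sq         # WMF output variance N0/(2 gamma^2) = 0.05

# Noise enhancement: sigma_v_sq > sigma_wmf_sq unless S_h is flat
```

Even for this mild channel the variance grows by a third; if S had a deep dip, the 1/S term inside the mean would dominate and sigma_v_sq would blow up, which is the spectral-null problem described above.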
So, the linear equalizer seems to put a limit on the bandwidth you can use: if the channel is reasonably flat and not going down to 0, you can do linear equalization, but you cannot really go to parts of the channel where the response gets really low. Once you know the variance, the figure of merit is very easy: the figure of merit for the zero-forcing linear equalizer is simply d²_min(X)/σ_v². I am doing just symbol-by-symbol detection; my received value x_k is basically the symbol plus Gaussian noise of a certain variance. Of course, there is some dependence in the noise from symbol to symbol, but I am completely ignoring that dependence; that is why I am doing a linear equalizer. So this is my figure of merit, and if you substitute everything you know, you get

γ_ZF-LE = d²_min(X) · 2 / (N0 · AM(1/S_h)).

Was that a question? What is the question? No, I am not going to do that: in the zero-forcing linear equalizer you just do symbol-by-symbol detection. If you want to do something fancier, you go back to MLSD; you would pretty much have to do MLSD at that point if you want anything optimal. Maybe there are some suboptimal things; we will see more versions as we go along. But in the zero-forcing linear equalizer, symbol-by-symbol detection is kind of implicit; that is why you are doing this. Otherwise, why would you want to force the ISI to zero? If you could deal with dependence between symbols, you would not force it to zero in the first place. You have a receiver which cannot deal with ISI at all; the only thing it can do is linear filtering, so this is what you have to do. Well, there is a better option.
We will get to that soon, but for now this is the first thing you can do. Of course, if there is a spectral null you are badly hit: the linear equalizer will not even exist. A version that is much, much better for implementation is what is called the zero-forcing decision feedback equalizer. In practice people rarely implement the linear equalizer, except in special cases where the channel is very well behaved or has some other peculiar properties we may talk about later; usually you use a decision feedback equalizer. It is far more common, and you will see that the figure of merit changes nicely because of one idea. So let us see what the idea in decision feedback equalization is. Let me first redraw the linear equalizer. What is happening in the linear equalizer? I am filtering z_k by 1/M(z), but I am going to write it as a feedback loop. Because M(z) is monic, I can do this — so what should go in the feedback path to get back the same x_k as before? This is very basic systems theory; write it down. We want X(z) = Z(z) plus some filter times X(z) itself, and we already know X(z) = (1/M(z)) Z(z). Using the fact that M(z) is monic, the feedback filter must be M(z) - 1 — and the question is whether it is fed back with a plus or a minus sign. It should be a minus.
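A quick numerical check of this feedback rewriting (a sketch with a made-up monic M(z)): realizing 1/M(z) as a loop that subtracts (M(z) - 1) acting on the output is exactly the standard IIR recursion, and we can verify it by convolving the output back through M(z), which must reproduce the input.

```python
import numpy as np

rng = np.random.default_rng(0)
m = np.array([1.0, -0.4, 0.2])   # monic M(z): m[0] = 1
z = rng.standard_normal(50)       # arbitrary input sequence

# feedback realization of 1/M(z): x_k = z_k - (M(z) - 1) acting on past x
x = np.zeros_like(z)
for k in range(len(z)):
    fb = sum(m[i] * x[k - i] for i in range(1, len(m)) if k - i >= 0)
    x[k] = z[k] - fb

# sanity check: running x back through M(z) must reproduce z
z_rec = np.convolve(m, x)[:len(z)]
print(np.allclose(z_rec, z))  # True
```

Note that M(z) - 1 contains only delayed terms precisely because M(z) is monic, so the loop is realizable: x_k depends only on z_k and past outputs.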
Yes, there should be a minus: X(z) = Z(z) - (M(z) - 1) X(z) rearranges to M(z) X(z) = Z(z), that is, X(z) = Z(z)/M(z). So this is the same filter as before — nothing but 1/M(z), just written in a different way. And if 1/M(z) is unstable, then even though I write it as a loop with M(z) - 1, the overall filter is still unstable; rewriting does not change stability, so do not worry about stability yet. What do you do next? After x_k you put a symbol-by-symbol detector to produce s_hat_k, as before. Now comes a really smart, slick idea — due to a person called Austin, quite a long time back: move the detector inside the loop, before the feedback is taken. If you move the detector inside, you are not doing anything drastically different: you expect x_k to be close to s_k, so whether you feed back x_k or the detected version of x_k (which, when the decision is correct, is the more accurate value you want) normally makes little difference. But once you introduce that change, things become much, much nicer. So that is the decision feedback equalizer: instead of feeding back the filtered value x_k, you feed back decisions on x_k. Let me draw the whole picture once again: the symbols pass through M(z), noise is added, and you get z_k; z_k goes into an adder whose output I will now call x'_k, to distinguish it from before since it now involves decisions; the detector turns x'_k into s_hat_k; and s_hat_k is fed back through M(z) - 1 and subtracted. This is the zero-forcing decision feedback equalizer. A few comments about this structure.
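Here is a minimal simulation sketch of the ZF-DFE (my own toy example — BPSK symbols, made-up channel taps and noise level, none of it from the lecture), with the slicer inside the loop feeding decisions back through M(z) - 1:

```python
import numpy as np

rng = np.random.default_rng(1)
m = np.array([1.0, -0.6, 0.3])          # monic channel M(z), illustrative taps
s = rng.choice([-1.0, 1.0], size=2000)  # BPSK symbols
# received sequence: channel output plus white Gaussian noise
z = np.convolve(m, s)[:len(s)] + 0.1 * rng.standard_normal(len(s))

# zero-forcing DFE: feed back *decisions* through M(z) - 1
s_hat = np.zeros_like(s)
for k in range(len(z)):
    fb = sum(m[i] * s_hat[k - i] for i in range(1, len(m)) if k - i >= 0)
    x_prime = z[k] - fb                       # x'_k = z_k minus ISI of past decisions
    s_hat[k] = 1.0 if x_prime >= 0 else -1.0  # slicer inside the loop

print(np.mean(s_hat != s))  # symbol error rate; essentially zero at this noise level
```

Note the loop is realizable for the same reason as before: M(z) - 1 is strictly causal, so x'_k uses only past decisions.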
So, first of all, is this structure linear? No. Once you move the decision device inside the loop, you are doing something non-linear, so the whole structure immediately becomes non-linear. Non-linearity is one thing; another is that the structure is now definitely stable, whatever your M(z) is. Why? The detector outputs only values from the alphabet, which is finite, so its output can never become unbounded. And if s_hat is bounded and z is bounded, filtering bounded sequences through the FIR filter M(z) - 1 gives a bounded result, so x'_k can never go unbounded either. So the structure is BIBO stable whatever you do — and that is because of the non-linearity. With this one sleek idea of pushing the slicer inside the loop, we have achieved stability irrespective of zeros of M(z) on the unit circle. That is one nice thing. Another thing that changes is the figure of merit. At first sight it looks a little difficult to compute: you would have to account for all the possibilities for x'_k, which depends on past decisions. But if you assume all the previous decisions are correct, you can find the figure of merit quickly. I will pick this up next week: we will compute the figure of merit and compare with the linear equalizer. Is there any question? This, I think, is the essence of feedback — it is the vital idea here. Let me stop now; we can discuss further offline.
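As a small preview of the correct-decision assumption, here is a genie-aided sketch (again with made-up parameters): if the true past symbols are fed back instead of decisions, x'_k reduces to s_k plus the raw channel noise, so the residual variance shows no noise enhancement at all.

```python
import numpy as np

rng = np.random.default_rng(2)
m = np.array([1.0, -0.6, 0.3])             # illustrative monic channel
s = rng.choice([-1.0, 1.0], size=5000)     # BPSK symbols
sigma2 = 0.05                              # noise variance at the DFE input
z = np.convolve(m, s)[:len(s)] + np.sqrt(sigma2) * rng.standard_normal(len(s))

# genie-aided DFE: feed back the *true* past symbols
x_prime = np.array([z[k] - sum(m[i] * s[k - i]
                               for i in range(1, len(m)) if k - i >= 0)
                    for k in range(len(z))])
resid = x_prime - s                        # x'_k - s_k: should be pure channel noise
print(np.var(resid))                       # close to sigma2: no noise enhancement
```

This is exactly the simplification the correct-decision assumption buys: the ISI is cancelled perfectly and the figure of merit involves the un-enhanced noise variance, which is where the DFE's advantage over the linear equalizer will come from.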