So, this is lecture 28 and we are looking at equalizers; we have been looking at various types of equalizers. I am going to do a brief summary before we proceed. There are several angles from which you can approach equalizers, in fact many more than what I am going to cover; I can only hint at some of the possibilities. The basic signal processing problem is this: the signals corresponding to different symbols get linearly combined, noise is added, and that is what you receive. What you do with that is the question. There is one optimal thing to do, but that optimal thing turns out not to be implementable if the channel has a pole, for instance, and that is a real issue because in several cases you will have exactly that problem. Since the optimal receiver is not implementable, we deal with a lot of suboptimal structures, and there are lots of ways of building suboptimal structures; that should be clear from whatever you have learned so far. The optimal solution is usually unique, but once you go suboptimal there are millions of possibilities. If you ever implement equalization after this course, you will find yourself doing something suboptimal, built on a lot of approximations; ultimately you just make an implementation, try it, and see how well it works, and that is how the final performance is judged. One crucial thing to keep in mind is to look at the slicer input; that is common to many of these equalizer designs. After whatever processing you do to the signal, you are finally going to slice, so you look at the slicer input, separate it into a signal component plus a noise component, and base all your design on that. That is the common thread in most of these linear equalizers, DFEs and so on: you focus your attention on the slicer input and say, I want a certain characteristic at the input to my slicer. The first observation was that the slicer input has three components: a contribution from s k, your symbol of interest; contributions from s k plus or minus 1, s k plus or minus 2 and so on, which is the ISI; and noise. You can condition the signal in several different ways to try to reduce or increase these three parts relative to each other. Before all that, the first thing is to decide what kind of processing you want to do, and one processing that immediately suggests itself is filtering, since you are receiving symbol rate samples. Our general model was: s k goes through some H of z and noise is added, with noise PSD some S n, and the question is what to do with z k. The first model is to put a filter there; that seems the most obvious thing to do. How do you decide that filter? You go to the slicer again: whatever you do in between, finally you have a slicer, and the slicer input captures every property you want to use in designing the signal processing. The simplest thing is to worry only about the ISI at the slicer input and drive it to zero; that is called zero forcing. So you design a linear equalizer that drives the ISI to zero, but in the process you have also increased the noise.
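As a small sketch of this model (a hypothetical three-tap channel h and white Gaussian noise, all made up for illustration, not the lecture's example), the received samples z k are the symbol stream convolved with the channel, plus noise:

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=100)      # symbol stream s_k
h = np.array([1.0, 0.5, 0.2])              # toy channel impulse response for H(z)
n = 0.1 * rng.standard_normal(s.size)      # white noise n_k
z = np.convolve(s, h)[:s.size] + n         # z_k = (h * s)_k + n_k
```

Each z_k mixes the current symbol with past symbols (the ISI) and noise, which is exactly the sum the equalizer has to work with.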
So you might say, why not do the filtering only for the signal? That is not possible, and that is the fundamental problem: noise and signal have been added together and you have to work with the sum; there is no way of separating them accurately at the input to your processing. Whatever processing you do, you do to both ISI and noise. You thought you were removing the ISI, but you are acting on the noise as well, and the noise gets enhanced. So how do you deal with that? If you want to get rid of the noise enhancement, you have to somehow discard the noise and keep the signal. Look carefully at the receiver: which part is doing that for you, throwing away the noise and retaining the signal? The slicer. While you were looking at the slicer input, the slicer itself is doing a good job for you; it gets rid of the noise and gives you just the signal. So if you assume the slicer output is accurate, you can process the signal separately from the noisy signal in your receiver. That is where feedback comes in. The reason feedback plays such a central role in almost any signal processing or electrical engineering application is that it removes the error and gives you the signal to play with, and then you see what else you can do with it. That is the philosophy of all these suboptimal equalizers: signal plus noise comes in, you can filter it, but that alone is not good enough because you are constrained to filter both signal and noise together; the slicer, however, does a good job of removing the noise, so you take the output of the slicer and work with that, and you can do better. We will play this game in various ways as we go along; the structures will change, but almost all of them are guided by the slicer input and the slicer output. Whenever we want to process the signal alone, we go to the slicer output; whenever we want a constraint for designing a filter, we look at the slicer input. Hopefully you see where all this is coming from.

The next thing to keep track of is the mean squared error. Where did that come from? Of course, the ultimate goal is to minimise probability of error, but probability of error is a difficult quantity: number one, it is a non-linear expression; number two, it cannot be computed nicely in a decision-directed fashion at the receiver. In training mode you can compute it nicely, but in a completely decision-directed mode things can quickly go bad: once you go wrong, you can continue to go wrong. There are ways to overcome those problems, but probability of error remains a tough thing to deal with. Mean squared error, on the other hand, is a clean way of characterising the problem. Look at the slicer input: it has the valid symbol, it has ISI, it has noise. Without ISI you can deal with probability of error; with ISI, probability of error becomes much more complicated, you have to worry about propagation and all that, so you set it aside and look at mean squared error instead. Maybe that seems like a suboptimal criterion, but strangely enough, people have shown that if you work with mean squared error you can reach capacity, and capacity is the best you can do. So mean squared error is a very definite, valid criterion; in this course I will not show how that works, but it can be capacity achieving, and in that sense it is optimal. It is not immediately clear how it can be optimal, but it is, in the sense that you can achieve capacity. All of this motivates why mean squared error is a good criterion, and we will look at the mean squared error at the slicer input. Theoretically, how do you define it? You define the error as the slicer input minus s k; that is the error symbol, and the mean squared error is the mean of its square. All these terms are fairly new to you, but hopefully you see we are not doing anything terribly deep here; it is just a sensible way of doing the signal processing, mostly motivated by intuition, and the deeper justification is that mean squared error is defensible at a fundamental level from a capacity point of view. Take that on faith; if you are interested I can give you references to read more.

So let us proceed with what we were doing: the MMSE linear equalizer. We were trying to design the linear equalizer which minimizes the mean squared error at the input of the slicer. Let me remind you of the structure: s k goes through H of z; we are not assuming this is monic, minimum phase or anything anymore, it is just a general filter H of z. Then n k is again a general Gaussian noise process with some arbitrary PSD. I want to design a filter here, which I call D of z, followed by a slicer. I called the channel output z k, the filter output x k, and the slicer output s hat k, so my error e k is x k minus s k. Carefully writing out all the filtering, I was able to show that the power spectral density of the error is S e equals E s times mod H D minus 1 squared, plus S n times mod D squared. You can view this as a function of e power j theta on the unit circle; it is non-negative real. And the MSE? The mean square error is the arithmetic mean of S e over the unit circle. So if I minimize S e at each point theta, I will have minimized the mean square error as well; that is the first step. The next step is to recognize that S e is written as a sum of two squares, and whenever you want to minimize a sum of squares, the useful trick is to complete the square so that the variable you control sits inside the square; then you drive that square term to zero and whatever remains is the minimum. If you do all that, you get the expression I gave you last class: S e equals S z times mod D minus E s S z inverse H star squared, plus E s S n S z inverse. That is the completed-square form, where S z is the power spectral density of z, which is easy to compute: S z equals E s mod H squared plus S n. A nice exercise is to show that these two expressions for S e are exactly the same; it is simple algebra, but it might teach you some lessons about playing with these complex-variable expressions, completing squares in the complex domain. From this form it is very clear how to minimize the error PSD at every point: simply choose D equals E s H star divided by S z, or written in full, E s H star over E s mod H squared plus S n. And what will be the MMSE, the minimum mean square error? It is the arithmetic mean of E s S n divided by S z, that is, of E s S n over E s mod H squared plus S n. As long as you are restricted to a linear equalizer in this structure, this is the best D for minimizing mean square error; nothing better is possible if the only thing you are allowed to do is apply a linear filter to z k. Remember, this mean square error has two components, and previously, when we did zero forcing, the choice was D equals 1 over H: the first term became zero and the second term contributed all of the mean square error.
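These formulas can be sanity-checked numerically on a grid of theta (a toy two-tap H and assumed values of E s and S n, none of them from the lecture): with D equal to E s H star over S z, the error PSD collapses to E s S n over S z, and the zero-forcing choice D equal to 1 over H gives a strictly larger arithmetic mean:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 512, endpoint=False)
h = np.array([1.0, 0.5])                      # toy channel taps
H = h[0] + h[1] * np.exp(-1j * theta)         # H(e^{j theta}) on the unit circle
Es, Sn = 1.0, 0.1                             # assumed symbol energy and noise PSD
Sz = Es * np.abs(H) ** 2 + Sn                 # PSD of z_k

D_mmse = Es * np.conj(H) / Sz                 # MMSE linear equalizer
Se_mmse = Es * np.abs(H * D_mmse - 1) ** 2 + Sn * np.abs(D_mmse) ** 2

D_zf = 1 / H                                  # zero-forcing linear equalizer
Se_zf = Es * np.abs(H * D_zf - 1) ** 2 + Sn * np.abs(D_zf) ** 2

mmse = Se_mmse.mean()                         # MSE = arithmetic mean of S_e
zf_mse = Se_zf.mean()
```

The first assertion below is exactly the completed-square identity: at the minimizing D, the square term vanishes and only E s S n over S z remains.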
As the analysis makes clear, that may not be the optimal choice of D for minimizing the total mean square error. If you think ISI is more evil than noise, you might want to pick zero forcing; but if, in the way you are signaling, it does not matter whether an error term comes from ISI or from noise, and the only thing that matters is the magnitude of the error at the input to the slicer, then you want to minimize the total sum. Why would you treat ISI any differently from noise? And if the total sum is what needs to be minimized, zero forcing is not the optimal choice. Notice the filter is slightly strange: E s H star divided by E s mod H squared plus S n. But it is a filter; it defines a frequency response, and if H and the PSDs are rational, it is again a rational z-transform, so it is all rational and nice, no problem. So that is the choice. Let me now look at these things closely and make a few more comments about stability and related issues; the remarks are quite important, so let me copy this expression to the next page. There you go: that is my MMSE-LE, if you will. The moment you do something like this there will be a sea of acronyms, so you need to be comfortable with them: MMSE-LE, the minimum mean squared error linear equalizer, is this expression. Remember what your H was: it had a gain h 0, an H min, and an H max; basically you had zeros and poles inside the unit circle, zeros on the unit circle, and zeros outside the unit circle. Those were the assumptions, and my H max, I said, has to be FIR.
For the practical case it cannot be IIR, and similarly a practical channel is a stable filter, so it cannot have poles on the unit circle. Those were reasonable assumptions to make on H of z. Now notice that while the zero-forcing linear equalizer depended only on H, the MMSE-LE depends on several other things: it depends on the energy of your signal and on the noise PSD. If you cannot, or do not want to, estimate those, then the MMSE-LE is clearly not implementable; to implement it you must have a clear idea of what E s is, what S n is, and so on. That is some added complexity in the MMSE-LE, but in exchange it gives you a better mean square error, so maybe that is good for you. A few more comments about implementation. If you want to implement this D, you can do it as a cascade: take z k, first filter with H star of 1 by z star, which is a filter matched to H of z, a matched filter, and then apply the remaining filter to get x k. Is the matched filter a problem or not? Suppose you have poles inside the unit circle in H; it can happen. What happens to them in H star of 1 by z star? Poles and zeros inside the unit circle are reflected outside the unit circle, and in general you cannot handle poles outside the unit circle, so it becomes a problem. What about the notation? What is H star of z? H star of z makes no sense here; whenever I write H star, it has to be H star of 1 by z star. If you are restricting to the unit circle, z and 1 by z star make no difference, but off the unit circle you must write H star of 1 by z star. Here is the reason: if you convolve a signal h k with h star of minus k, you get mod h squared, and the z-transform of h star of minus k is H star of 1 by z star. That is the matched filter, and only with that convention does the expression have a meaning consistent with the signal I have. So what happens because of the matched filtering is that poles inside the unit circle may go off to poles outside, and that can make your filter IIR anti-causal; that is the problem. The matched filter can be a problem if you have poles inside the unit circle; maybe you can approximate it and still implement it. The other factor in the cascade is okay: on the unit circle S z is non-negative real, in fact strictly positive, so there is no trouble there. If there are poles in the denominator, or zeros outside the unit circle, there can again be problems, but in general it is not too bad; this part will definitely be stable, because on the unit circle S z cannot go to zero. Why is that?
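The pole reflection under matched filtering can be seen with a single hypothetical pole: if H(z) has a pole at p inside the unit circle, then H star of 1 by z star has one at 1 over p star, which lies outside it:

```python
import numpy as np

p = 0.6 + 0.3j                 # assumed pole of H(z), inside the unit circle
p_matched = 1 / np.conj(p)     # corresponding pole of the matched filter H*(1/z*)
# |1/p*| = 1/|p| > 1: the pole is reflected outside the unit circle,
# which is what can make the matched filter IIR anti-causal
```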
Because S z is E s mod H squared plus S n, which is strictly positive on the unit circle, so it cannot go to zero there. So on the unit circle this factor is going to work. That is a quick word about implementation: the matched filtering is the problem; in case you have poles, things can become anti-causal, but in general it is okay, not as bad as 1 over H, for instance. 1 over H becomes really bad, whereas this can be done; for instance, 1 over H cannot be done if you have zeros outside the unit circle, while here zeros outside the unit circle will not trouble you too much. The next thing to ask is what happens when S n tends to 0. When S n tends to 0, the MMSE equalizer tends to the zero-forcing equalizer: the filter tends to 1 over H. Of course, when you do not have any noise, you do not have to worry about noise enhancement; the noise part of the error goes to 0 and you only have to drive the ISI part to 0. So that is the moral of the story. This is the ideal MMSE linear equalizer; it still has trouble with implementation, things can become IIR anti-causal if there are poles, so it is not a fully solved issue, but it is better than the zero-forcing linear equalizer in many cases. One special case to look at is the minimum phase channel: H of z is the M of z we had before, monic and minimum phase. For this case we saw two equalizers before: the zero-forcing linear equalizer and the zero-forcing DFE. Now we are going to see the MMSE linear equalizer for this case.
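As a quick numeric check of the S n tends to 0 remark (toy channel and assumed E s, purely for illustration), the MMSE filter approaches the zero-forcing filter 1 over H as the noise PSD shrinks:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
H = 1 + 0.5 * np.exp(-1j * theta)     # toy channel response on the unit circle
Es = 1.0                               # assumed symbol energy

def D(Sn):
    # MMSE linear equalizer for noise PSD Sn
    return Es * np.conj(H) / (Es * np.abs(H) ** 2 + Sn)

def gap(Sn):
    # worst-case distance from the zero-forcing filter 1/H
    return np.max(np.abs(D(Sn) - 1 / H))
```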
The noise spectral density is just N 0 over gamma squared, so basically white. So we are back to the earlier picture: you have M of z here, and then noise gets added which is white. What happens to the MMSE linear equalizer in this case? D is E s times H star, which here is M star, divided by E s times mod M squared plus N 0 over gamma squared. Another way to write this is M star over mod M squared plus N 0 over E s gamma squared. There is really no change; I have just rewritten the same thing with S n replaced by N 0 over gamma squared. This is the D you put in, and then you have the slicer. The important thing is that this is not 1 over M. In the zero-forcing linear case the filter was 1 over M; now it is M star over mod M squared plus N 0 over E s gamma squared. Maybe that is a bit of a problem, in the sense that it is not as simple as implementing 1 over M: you have M star, which is a matched filtering and might have poles outside the unit circle, so you might have to worry about the implementation. But maybe it gives you a smaller mean square error. So what is the MMSE? It is the arithmetic mean of E s times S n, which is N 0 over gamma squared, divided by E s times mod M squared plus N 0 over gamma squared; I am just substituting into the formula, nothing new. If you play around a little, this expression becomes the arithmetic mean of N 0 divided by, open bracket, gamma squared mod M squared plus N 0 over E s, close bracket. That is our mean square error, exactly the same expression as before, nothing different. I want to compare this with the zero-forcing linear equalizer. For the zero-forcing linear equalizer, what is the mean square error? We calculated something called the figure of merit, which involved a similar computation; if you go through the calculation carefully, you will see it works out to the arithmetic mean of N 0 over gamma squared mod M squared. So which of these two is larger? The zero-forcing one has to be larger: in the MMSE expression the denominator has an additional positive term, so its arithmetic mean is definitely lower than that of the zero-forcing linear equalizer. But what did we pay? The penalty is that your filter becomes fancier; even in the minimum phase case the filter becomes a little more difficult to implement, because you have M star. In the zero-forcing case it was simply 1 over M, and if M is minimum phase you are not scared of 1 over M at all, whereas M star is a little more painful to deal with. But if you can afford that filtering, you get a lower mean square error, so the MMSE equalizer turns out to be better, though with a pinch of salt: the implementation is not that easy. That is the MMSE linear equalizer. Now we are going to move towards the DFE, both the zero-forcing and the MMSE versions. Recall the way I introduced the DFE last time: I implemented 1 over M of z differently and simply pushed the slicer inside the loop; I did not really go into detail about what is actually happening. Now that we are trying to minimize the mean square error, and that has become our metric, one can give a better intuitive explanation of the DFE from an MSE point of view.
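Before moving on to the DFE, the comparison just made can be verified numerically (an assumed monic minimum-phase M and made-up values of N 0, gamma and E s): the extra N 0 over E s term in the denominator makes the MMSE average strictly smaller:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 512, endpoint=False)
M = 1 + 0.4 * np.exp(-1j * theta)          # toy monic minimum-phase M(e^{j theta})
Es, N0, gamma = 1.0, 0.2, 1.5              # assumed parameters

mmse = np.mean(N0 / (gamma**2 * np.abs(M)**2 + N0 / Es))   # MMSE LE
zf_mse = np.mean(N0 / (gamma**2 * np.abs(M)**2))           # zero-forcing LE
```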
That is what I am going to do now: motivate the DFE from a more fundamental, MSE point of view, as opposed to just moving the slicer here and there. Let us look once again at what the linear equalizer was doing. The linear equalizer takes s k, the same model as before, through H of z to get z k, applies some D of z to get x k, and sends that directly to the slicer to produce s hat k. The linear equalizer does not attempt to process the signal separately by using the output of the slicer; the philosophy of the DFE is that you want to use the output of the slicer and try to process the signal separately. To motivate this from the MSE point of view, I will do it slightly differently. Look at e k, which is x k minus s k. One of the problems with e k is that it is non-white; actually, that is both a problem and an opportunity. Previously we just dealt with e k as it is: we found its power spectral density and worked with it. But maybe you want to make the error white. If you can make the error white, then maybe there is a potential decrease in your mean square error: instead of averaging a non-flat spectrum, if you flatten it, the average may go down. That is one philosophy. But for that you need access to the error signal, and nobody gives you access to the error signal directly; how do you generate it? Let things go through the slicer and subtract, and so on. So, assuming you have access to e k in some form in the receiver, can you whiten e k, and will doing so decrease the MSE? Those are the two questions: first we whiten e k, then we ask whether it decreases the MSE. So how do you whiten e k? Let me write down the model: s k goes through H D minus 1, n k goes through D, and you add these two to get e k. Whitening is actually very easy: you do a spectral factorization on the power spectrum of e k. Doing that, you get S e equals epsilon squared times M e times M e star, where epsilon squared is the geometric mean of S e; once again I am not writing the argument, and when I write M e star it means M e star of 1 by z star, since nothing else makes sense in these factorizations. Once you know that the power spectral density of e k factors like this, whitening is obvious: just filter with 1 over M e. I will come to the question of constants later; simply do 1 over M e, which is the most implementable whitening filter, if you will. Previously we were doing other things, like 1 over gamma squared M star and so on; we will see why this 1 over M e comes in. Call this filter W of z equals 1 over M e; its output e prime k will definitely be white, and its PSD will be epsilon squared. So it seems like maybe one can whiten the error.
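The payoff of whitening can be previewed numerically: the whitened error has the flat PSD epsilon squared, the geometric mean of S e, and by the AM-GM inequality that sits below the arithmetic mean that the linear equalizer is stuck with (toy channel and made-up E s, S n, for illustration only):

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
H = 1 + 0.5 * np.exp(-1j * theta)           # toy channel response
Es, Sn = 1.0, 0.1                            # assumed parameters
Se = Es * Sn / (Es * np.abs(H) ** 2 + Sn)    # error PSD of the MMSE LE
am = Se.mean()                               # MSE before whitening: arithmetic mean
eps2 = np.exp(np.mean(np.log(Se)))           # epsilon^2: geometric mean, the flat
                                             # PSD after the 1/M_e whitening filter
```

Since S e is not flat for this channel, the inequality is strict, which is the hint that the DFE can beat the linear equalizer in mean square error.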
It does not seem like a difficult proposition: if you have access to e k separately, you can whiten it, no trouble there. But will we have access to e k separately, and how do you integrate this with the receiver? I am going to do that next, and then we will come back to the mean square error and see that we absolutely do get an advantage in MSE from this whitening; hold on for a while, first I want to make sure I can do this within my receiver. There is a little trick here, and it works in a very interesting way, so I want to show you how. Look at s k going through H of z, noise coming in, giving z k. At the level of z k, the signal and noise are mixed; whatever I try to whiten here happens to both, and I cannot separate anything. But I am going to go ahead anyway: first apply D of z, and following it apply W of z; instead of writing two separate filters, write one filter, D of z times W of z. What has happened? In y k, the output of this combined filter, the error component has been whitened; you can think of D of z times W of z as one filter if you want, so that is a perfectly valid way of looking at it, and we will come back and look at other ways of seeing this. So I have somehow managed to whiten the error, but what else has happened? My signal has gone through an extra W of z. Go back and look at the structure before: I want my signal to go through only D of z; I want only the error to go through W of z, but both have gone through it. So now I want to apply 1 over W of z to the signal alone. I know I cannot get access to the signal alone directly, so what should I do? Put a slicer. I am going to put a slicer here and get an approximate version of the signal; I will come back and modify this, there is a modification to be done. Now, a nice way of implementing the 1 over W of z on the signal alone is feedback: feed W of z minus 1, driven by s hat k, from the output of the slicer back to the input. That gives you the 1 over W of z you wanted on the signal alone, without the noise. Let me rewrite this with some room: this is y k, and the slicer input is x k. Whatever noise x k has is going to be knocked off by the slicer, and then the signal alone goes through my feedback filter. As long as my decisions are correct, all the extra signal terms that would appear in y k keep getting knocked off; 1 over W of z is implemented properly, and x k continues to be the proper signal that I want, without too much of a problem. That is the structure of the DFE, and this is exactly another way of deriving it. Previously I did not do much, I just moved the slicer inside the feedback loop; now we have filters before and after, so it is a more complicated derivation, but this is the DFE structure. The heart of the structure is two ideas: you whiten the error to try to reduce the mean square error, and to whiten it you need access to the symbol and the noise separately; one way of getting access to the symbol is to bring the slicer into your processing and do suitable feedback. There is a lot of terminology here: the overall feedforward filter is called the precursor equalizer, and the feedback filter here is called the post-cursor equalizer. This structure is usually generalized: you pick some arbitrary filter in the feedforward path and some arbitrary filter in the feedback path, and optimize them to minimize something, say your probability of error or MSE. Those are generalizations of the DFE for other criteria; the way we derived it, we wanted a linear filter first, followed by whitening of the error, and that gave us this way of deriving the DFE. Hopefully it is reasonably clear. Note that this achieves the whitening only approximately, because the decisions can be wrong: it achieves exact whitening whenever the decisions s hat k are correct, and that is the nice thing to see. I think this is a good point to stop; I will pick up from here and proceed in the next class.
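A minimal sketch of this structure (a noiseless toy channel 1 plus 0.5 z to the minus 1, monic minimum phase, with the feedforward filter taken as 1 so the feedback W of z minus 1 reduces to a single post-cursor tap of 0.5; all of these choices are illustrative assumptions, not the lecture's example): as long as the decisions are right, the feedback knocks the post-cursor ISI off the slicer input:

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=200)        # BPSK symbols
z = np.convolve(s, [1.0, 0.5])[:s.size]      # z_k = s_k + 0.5 s_{k-1}, no noise

b = 0.5                                      # post-cursor (feedback) tap
s_hat = np.empty_like(s)
prev = 0.0                                   # previous decision
for k in range(s.size):
    x = z[k] - b * prev                      # slicer input: feedback removes the ISI
    s_hat[k] = 1.0 if x >= 0 else -1.0       # slicer
    prev = s_hat[k]
```

With no noise and correct past decisions, the slicer input x equals s k exactly, so every symbol is recovered; with noise, a wrong decision would propagate through the feedback, which is the error propagation issue mentioned earlier.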