So, this is lecture 29, and we have been looking at the DFE structure for the general case. The way I motivated and introduced it was slightly different, because the structure here requires some additions to the minimum-phase case that we saw before. The final structure we derived looks something like this: s_k goes through a general channel H(z), and then noise gets added to it, with noise PSD S_n(z), which presumably one can spectrally factorize. The feedforward filter I wrote as D(z) times W(z): think of D(z) as a filter which removes the ISI, and W(z) as the filter which whitens the noise that results. The noise becomes non-white because you are filtering it, and W(z) whitens it. But what happens, unfortunately, is that when you whiten the noise, the signal from which D(z) removed the ISI also has to go through W(z). So the signal becomes filtered by W(z), and you want to run 1/W(z) on the signal alone. A nice way of doing that is to run the output of this filter through a slicer, which gives you, presumably, just the signal component, with some errors, yes, but at least you get the signal component alone; then you put that nicely in a feedback loop to implement 1/W(z). So that is the way in which I motivated and did the design for the DFE. The overall feedforward filter is called the precursor equalizer, and the feedback filter is called the post-cursor equalizer. So this is the general DFE structure. Now, a few more remarks and things to think about with these kinds of suboptimal designs. Fundamentally, the ideal way of designing things is to minimize the probability of error.
That is the ideal way of doing it, the optimal way, no questions asked. But unfortunately, if your channel has poles and an infinite impulse response, the ideal case goes out of the window; you are not going to be able to implement it. So you want to do something suboptimal, and when things are suboptimal, the suboptimality enters in a particular way. You have the ideal signal, plus ISI, plus noise. The noise is Gaussian, so you know its PDF; if you had the signal alone buried in noise, you would know exactly what to do for detection, no problem. But here you have the signal, plus some other signal component, the ISI, which is clearly non-Gaussian, plus Gaussian noise. The way the MSE criterion deals with this is to roughly treat the ISI as Gaussian as well and say: I will minimize the total variance of ISI plus noise. You see, if it were just signal plus Gaussian noise, you would want to minimize the noise variance, because the probability of error behaves like Q(1/sigma): reduce sigma and the probability of error improves, you know that clearly. But with signal plus a non-Gaussian component plus a Gaussian component, it is not clear that reducing the overall variance is the optimal thing to do in terms of probability of error. Still, it is a good approximation: limiting the variance of everything other than your signal seems reasonable, and if you have a lot of ISI terms adding up, you might invoke the central limit theorem and argue that the total disturbance is also roughly Gaussian, so that the probability of error is roughly minimized. But it turns out to be a dangerous argument, because the ISI contribution is clearly not Gaussian, and just minimizing the variance may not always be a good idea.
So that is something to keep in mind, but minimizing the mean square error is a good practical fix even in the non-Gaussian case. In the Gaussian case it is optimal; in the non-Gaussian case it is not really optimal, but we will live with it. That is one more thing to think about in these designs. So, that is the DFE structure. Now, what is the next question? Given H(z) and S_n(z), you have to find D(z) and W(z). In fact, the only question is how to find D(z). Why? Because W(z) follows from D(z): you do a spectral factorization of S_n times |D|², and W(z) is 1 over the minimum-phase component of that factorization. So W(z) is directly derived once you know D(z), and the only remaining question is how to find D(z). For that we will take inspiration from the linear equalizer: we can take D(z) to be the zero-forcing linear equalizer or the MMSE linear equalizer. So you use those two criteria, pick D(z) accordingly, see what W(z) results, see what mean square error results, and compare to see if it is indeed better. That is what we will do predominantly in this lecture. It is clear, right? If you pick D(z) to be the zero-forcing linear equalizer, what is that? 1/H. If you pick D(z) = 1/H you get what is called the zero-forcing DFE. Then, of course, you have to find the corresponding W(z). It is more work than the linear equalizer: there we never found any W(z), we just applied 1/H and then did what? Slicing. Here you do not slice immediately; you have to find the noise-whitening filter from the spectral factorization. So those are the two designs we will quickly go through.
The first choice is the zero-forcing decision feedback equalizer. Here you pick D = 1/H, and you have to calculate the error after D and then whiten it; that is the important step. So what is happening to the noise? n_k flows through D. Well, let me be careful here; I think I had this picture before. There are two branches: s_k goes through H·D − 1, n_k goes through D, and the two add to give the error e_k. In this case H·D − 1 = 0, so there is no s_k component, but in general you will have that term as well; keep the complete picture in mind.

So now, what is the PSD of e? Since the signal part is zero, it is simply S_e = S_n·D·D* = S_n·|D|². With D = 1/H, this is S_e = S_n/|H|². So all you have to do is a spectral factorization of this. Given that H(z) is rational and S_n is rational, one can always find the spectral factorization; you will know exactly what the minimum-phase components are. Suppose we do that and write S_e = S_n/|H|² = ε² · M_e(z) · M_e*(1/z*). That is the spectral factorization. So remember: what will ε² be? The geometric mean of S_e, that is, the geometric mean of S_n/|H|². In general this involves an ugly integral, but in the rational case you can avoid the integral by doing the spectral factorization with M_e and M_e* forced to be monic; whatever constant comes out in front is automatically your ε². So in the rational case you can clearly avoid integrals.
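The geometric-mean computation above can also be checked numerically. A minimal sketch, with made-up example values (a monic minimum-phase channel H(z) = 1 + 0.5 z⁻¹ and white noise of level N0, neither of which is from the lecture): the geometric mean is the exponential of the frequency average of log S_e.

```python
import numpy as np

# Hypothetical example values (not from the lecture): monic minimum-phase
# channel H(z) = 1 + 0.5 z^-1 (zero at -0.5, inside the unit circle),
# white noise with PSD S_n = N0.
N0 = 0.1
w = 2 * np.pi * np.arange(4096) / 4096        # dense frequency grid on [0, 2*pi)
H = 1 + 0.5 * np.exp(-1j * w)                 # H(e^jw)
Se = N0 / np.abs(H) ** 2                      # error PSD S_e = S_n / |H|^2

# epsilon^2 = geometric mean of S_e = exp( (1/2pi) * integral of log S_e )
eps2 = np.exp(np.mean(np.log(Se)))
print(eps2)   # ≈ N0: for a monic minimum-phase H, the geometric mean of |H|^2 is 1
```

This matches the shortcut in the lecture: with H monic and minimum phase, the constant that falls out of the monic spectral factorization is just N0, with no integral needed.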
You do not need huge integrals to compute ε²; it can be computed directly. So, if you whiten this noise, what is the choice of whitening filter? W = 1/M_e. Inverting the minimum-phase part, which is eminently implementable, you do that. Once you whiten, you get e′_k, and the variance of e′_k is your mean square error. Essentially your mean square error becomes this ε²: you do the spectral factorization of the noise PSD, you whiten, everything else goes away, and the error power is simply ε², the geometric mean of S_n/|H|². So that is the story; it is quite simple, nothing really major going on. You just pick W = 1/M_e and you get the answer.

What I am going to do now is take a standard form for H and show how these two filters work out, how the overall structure looks, and make some comments on how implementable these structures are. So let us see. Suppose I take H = h0 · Hmin · Hmax: if there is a constant, I pull it out into h0 and adjust things so that Hmin and Hmax are monic. I will also assume S_n spectrally factorizes as γn² · M_n · M_n*. Then S_e becomes, just by substituting, γn² M_n M_n* divided by |h0|² Hmin Hmax Hmin* Hmax*. Which factors go to the minimum-phase part and which to the maximum-phase part? Hmin and Hmax* are minimum phase. Do you agree? Hmax had its zeros outside the unit circle, so after the 1/z* conjugation the zeros come inside.
Likewise Hmin* and Hmax go to the maximum-phase component. So once you know how H splits and you have spectrally factorized S_n, the factorization of S_e becomes very obvious. In fact, from here you find that M_e = M_n/(Hmin·Hmax*), assuming all these factors are monic; you have to pull the constants into h0 so that Hmin and Hmax are monic. Once you do all that, your ε², which is the MSE, becomes simply γn²/|h0|². So this is in fact even easier: you look at H and make sure you pull out h0, making Hmin and Hmax monic; you find the geometric mean of S_n, which is γn²; you divide the two, and you get the mean square error of the zero-forcing DFE. That is the story. All right.

So, once I choose M_e like this, I know my W, and D is also known: D is simply 1/H. So I can now write the complete structure: multiply D(z) by W(z) for the precursor, write W(z) − 1 for the post-cursor, and see how it works. D·W becomes what? 1/H, which is 1/(h0 Hmin Hmax), times W = 1/M_e = Hmin·Hmax*/M_n. Things cancel and you get an interesting result: 1/h0, which is a constant, it is okay, it does not make any difference; then Hmax*/Hmax; and then M_n inverse. So that is your D·W, and of course W is what I wrote. If you put all these things together, the final structure I get looks like this: s_k goes through my symbol-rate channel model H(z), which factors as h0·Hmin·Hmax (remember these factors are monic; only then can I make the simple substitutions); then n_k gets added to it, with PSD S_n, which factors as γn²·M_n·M_n*.
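The claim that ε² = γn²/|h0|² can be sanity-checked against the geometric-mean integral. A sketch with made-up values (a mixed-phase channel with one zero inside and one outside the unit circle; the specific numbers are assumptions, not from the lecture):

```python
import numpy as np

# Hypothetical mixed-phase channel: zeros at -0.5 (inside the unit circle)
# and -2 (outside), so H(z) = (1 + 0.5 z^-1)(1 + 2 z^-1).
# In the lecture's monic convention: Hmin = 1 + 0.5 z^-1,
# Hmax = 0.5 + z^-1, h0 = 2. White noise: S_n = N0, so gamma_n^2 = N0.
N0, h0 = 0.1, 2.0
w = 2 * np.pi * np.arange(8192) / 8192
H = (1 + 0.5 * np.exp(-1j * w)) * (1 + 2.0 * np.exp(-1j * w))

# ZF-DFE MSE two ways: geometric mean of S_n/|H|^2, and the closed form.
mse_gm = np.exp(np.mean(np.log(N0 / np.abs(H) ** 2)))
mse_closed = N0 / h0 ** 2
print(mse_gm, mse_closed)   # both ≈ 0.025
```

The two numbers agree because the geometric mean of |Hmin|² and of |Hmax|² (in the monic convention, with constants pulled into h0) is 1, leaving only γn²/|h0|².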
And then my precursor equalizer is (1/h0)·(Hmax*/Hmax)·M_n⁻¹; then I have a slicer; and my post-cursor equalizer is Hmin·Hmax*/M_n − 1, producing ŝ_k. This is my zero-forcing DFE, in short. And what is the mean square error? After all this it works out, in simple terms, to γn²/|h0|². You can also think of it as the geometric mean of S_n/|H|²; let me also write that, because it is useful in theoretical comparisons. So this is the precursor and this is the post-cursor.

Anything interesting that strikes you about the precursor equalizer? There are lots of interesting properties; we will come back to them slowly and make a few remarks as we go along. But the first thing I want you to observe is that, assuming the decisions ŝ_k are accurate, the slicer error is white; in fact, white Gaussian. That is a nice property of the DFE: we of course designed it like that, and it is good to see that it works out that way. The next comment is about the precursor equalizer itself. It has a constant part, 1/h0, which is quite irrelevant; you might just ignore it. Then it has Hmax*/Hmax. What kind of frequency response is that? The magnitude response is flat, so it is an all-pass response: |Hmax*/Hmax| = 1 on the unit circle, so it is definitely pure all-pass filtering. And then it is followed up with M_n⁻¹, which does what? It whitens the noise. So you can view the precursor as consisting of two parts.
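The all-pass property of the Hmax*/Hmax factor is easy to confirm numerically. A small sketch, with an assumed maximum-phase factor Hmax(z) = 0.5 + z⁻¹ (a made-up example, zero at −2, outside the unit circle):

```python
import numpy as np

# On the unit circle z = e^jw, the factor Hmax*(1/z*) evaluates to the
# complex conjugate of Hmax(e^jw), so the ratio should have magnitude 1.
w = 2 * np.pi * np.arange(1024) / 1024
z = np.exp(1j * w)
Hmax = 0.5 + z ** -1          # Hmax(e^jw), hypothetical maximum-phase factor
Hmax_star = np.conj(Hmax)     # Hmax*(1/z*) on the unit circle
ratio = Hmax_star / Hmax      # the all-pass part of the precursor
print(np.max(np.abs(np.abs(ratio) - 1)))   # ≈ 0: magnitude is identically 1
```

So this part of the precursor only shapes phase; the noise PSD passes through untouched, which is exactly the point made next.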
First, phase filtering, which plays around with the phase without causing any trouble to the PSD of the noise: if you do all-pass filtering, the noise PSD comes through unchanged, because at the PSD level the phase makes no difference. Second, M_n⁻¹, which whitens, by the spectral factorization of S_n. So the all-pass part, which does nothing in magnitude, pretty much works on the signal: its effect on the signal is to remove part of the ISI, the dominant anti-causal part, which is why you call it the precursor. It is interesting that it is all-pass and does nothing else to the signal. And then, of course, you have the post-cursor equalizer with its Hmin and Hmax* factors; there you cannot attribute things so cleanly, since it removes a mix of contributions, so you cannot be very sure about exactly what each part removes. One can see certain similarities to the previous minimum-phase structure we had; that is the only thing I am saying, and nothing beyond that can be assumed.

So that is the zero-forcing DFE; it is interesting how the precursor works out. Now, if you have a non-minimum-phase channel, where Hmax is not 1 and has zeros outside the unit circle, the 1/Hmax in the precursor is still going to cause trouble for you: 1/Hmax will be IIR and anti-causal if you want a stable implementation, and that causes trouble in the implementation. That is the only problem; otherwise it works out quite well. Any questions or comments on how this is working out? One thing you will notice is that several times we get filters which are clearly not implementable. So in practice you will have filters which are not implementable, which means these are, in a sense, optimal suboptimal structures.
The criterion is suboptimal, but once the criterion is suboptimal you place no practical constraints on the kind of filter you can implement; you just say, give me whatever filter comes out and I will see if I can implement it. In that way you are making some optimistic assumptions. Later on we will relax that: we will say the filter is finite, with only so many taps, and ask for the best design under that constraint. In that case the question becomes really practical. We will do that also, but for now we should at least know how the general structure should look. Then, if you are doing a zero-forcing DFE, you know the structure to look for is roughly all-pass filtering plus noise whitening; those kinds of intuitions carry over from the theory when you go to the practical side. In practice things are done slightly differently, with finite-tap filters, but these idealized designs give you nice comparisons: in the finite-tap case you cannot do clean analysis and you will not get any comparison.

So, the next criterion is the MMSE DFE. Here the choice for D is the MMSE choice for the ISI after the filter. What is that? D = E_s·H*/(E_s|H|² + S_n). That is the minimum mean square choice for D; we saw this before as E_s·S_z⁻¹·H*, where S_z = E_s|H|² + S_n. All right. Now, for the error term you cannot ignore the two parts, but you know I have minimized my MSE. From the earlier expressions, the error PSD is also very easy to derive here.
You remember what S_e is in this case: you completed squares and set one term to zero, and whatever remained is your S_e, which was E_s·S_z⁻¹·S_n; that was the previous result. This is the power spectral density of the error that remains after the MMSE filter. So now I have to whiten this noise, which means I have to do a spectral factorization of this PSD and figure out its minimum-phase component; that is what I am going to do. Clearly S_z is a part of it, so I might as well spectrally factorize S_z itself as γz²·M_z·M_z*, and of course S_n also has a spectral factorization, γn²·M_n·M_n*. Eventually this works out to S_e = E_s·(γn²/γz²)·(M_n/M_z)·(M_n*/M_z*). That is the spectral factorization. So M_e works out to M_n/M_z, and your mean square error after whitening works out to E_s·γn²/γz².

Another way of writing the MSE is as the geometric mean of S_e, which is the geometric mean of E_s·S_n/(E_s|H|² + S_n); for comparison with the previous case you might want to push the E_s down into the denominator, which gives the geometric mean of S_n/(|H|² + S_n/E_s). If you go ahead and compare with the previous case: for the zero-forcing DFE the MSE was the geometric mean of S_n/|H|², while for the MMSE DFE the denominator grows to |H|² + S_n/E_s, so the mean square error goes down. But remember, the mean square error going down may not mean anything in many cases. Maybe this is the minimum mean square error you can get, fine, it is great, but it may not mean much. Why? Because the error distribution is not Gaussian: the DFE is going to allow some residual ISI to come through as well. So it is not Gaussian.
So it does not directly translate into probability of error; there might be some adjustments you can make to improve the probability of error, and those are interesting things to study. But in general this is the minimum mean square error possible; you cannot possibly go below it. So that is the MMSE criterion, and that gives M_e.

Let me once again draw the final picture with W and everything together. D is as above, and maybe I should write W here: W = 1/M_e = M_z/M_n; this becomes the whitening filter. So now, for the entire structure I will need D·W. D·W is E_s·H*/S_z, where S_z we factored as γz²·M_z·M_z*, times W = M_z/M_n. Things cancel, and finally you get a filter which looks like this: a constant, E_s/γz², times H*/M_z*, times M_n⁻¹. And W, we knew, is M_z·M_n⁻¹. So if you draw this whole picture you get a nice picture of the complete MMSE DFE: the noise PSD is S_n, which factors as γn²·M_n·M_n*; H I will keep as H itself, I am not going to split it into Hmin·Hmax, it is not too important here. D·W becomes this interesting filter, (E_s/γz²)·(H*/M_z*)·M_n⁻¹. Where did all this γz come from? From the power spectral density of the filter input: S_z = E_s|H|² + S_n, clearly, and that factors as γz²·M_z·M_z*. So that gave me all these pieces. Then comes the post-cursor: the slicer gives me the symbols back, and then you filter through M_z·M_n⁻¹ − 1, which implements the 1/W(z) on the signal path. So that is your MMSE DFE.
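The comparison between the two geometric-mean MSE expressions can be illustrated numerically. A minimal sketch with made-up values (channel 1 + 0.5 z⁻¹, white noise N0, symbol energy E_s; none of these numbers are from the lecture):

```python
import numpy as np

# ZF-DFE MSE:   geometric mean of S_n / |H|^2
# MMSE-DFE MSE: geometric mean of S_n / (|H|^2 + S_n/E_s)
N0, Es = 0.1, 1.0
w = 2 * np.pi * np.arange(4096) / 4096
H2 = np.abs(1 + 0.5 * np.exp(-1j * w)) ** 2    # |H(e^jw)|^2 for a toy channel

gm = lambda psd: np.exp(np.mean(np.log(psd)))  # geometric mean over the grid
mse_zf = gm(N0 / H2)
mse_mmse = gm(N0 / (H2 + N0 / Es))
print(mse_zf, mse_mmse)
assert mse_mmse < mse_zf   # the MMSE denominator is pointwise larger
```

The inequality holds for any channel and any noise level, since the MMSE denominator exceeds the zero-forcing one at every frequency; the lecture's caveat that a smaller MSE need not mean a smaller error probability still applies.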
So, one can show that within the DFE structure, the mean square error obtained using the MMSE DFE is the minimum; you can never go below that. And what is that mean square error? We had two different expressions for it: E_s·γn²/γz², and the geometric mean of S_n/(|H|² + S_n/E_s). So that is the structure. You see there are lots of striking similarities with the previous, zero-forcing DFE case. But here you have H*/M_z*, which is not all-pass: that is the first interesting difference; it is going to do something nontrivial to the signal. And then you do an M_n⁻¹, which again ends up whitening anyway. So you visualize it slightly differently to see the whitening, and the post-cursor uses M_z·M_n⁻¹ − 1; in fact, the W(z) here is M_z·M_n⁻¹, a slightly different thing. W(z) looks eminently implementable, it is a nice filter, but here once again the H* might cause problems for you. It is a matched filter, and matched filtering runs into trouble when the channel has poles: if your channel response has poles inside the unit circle, then H* is going to have poles outside, and that is going to cause IIR anti-causal problems. So with poles inside, matched filtering is tough; that becomes a problem, something to watch out for. And again M_z* is showing up in a denominator, which is one more thing to watch out for. All these things might cause trouble in implementation.
So, one thing I want to do before we proceed is to take the zero-forcing DFE and the MMSE DFE and specialize them to the case where H(z) = M(z) is minimum phase. Let us write down those two structures and see what we get. This is a special case where we make two assumptions: first, H(z) = M(z), minimum phase and monic; second, the noise is white, S_n = N0. If you do that, what happens to the zero-forcing DFE? What is the precursor? You can go back to the expressions and derive it: h0 = 1, the minimum-phase part is M(z), Hmax = 1, M_n = 1. So the precursor finally becomes 1; it does not do anything. And what is W(z)? It equals M(z); it has to work out that way. So you see the zero-forcing DFE, in this special case, becomes the same zero-forcing DFE we had before for the minimum-phase channel: the noise was already white, so you do not have to whiten anything, you slice, and the feedback does just 1/M(z) for the signal, that is, a post-cursor of M(z) − 1.

What is the MMSE DFE for this case? What is the precursor? The filter D has an M* in it: D = E_s·M*/(E_s|M|² + N0). So it is a nasty filter, not a very easy one. And what is W(z)? This one is a little bit more difficult to figure out: it is the reciprocal of the minimum-phase component of the PSD after D. If you can write E_s|M|² + N0 = γz²·M_z·M_z*, then W(z) = M_z(z). So be careful: it is M_z(z), not M(z). Now hold on, there is a problem with what I wrote for the precursor.
You are right: I wrote the expression for D where I wanted D·W. So what does D·W work out to? D·W = (E_s/γz²)·M*/M_z*. Is that fine? And W is fine: W = M_z, so the post-cursor is M_z − 1. So the precursor is (E_s/γz²)·M*/M_z*, with M_n = 1 here since the noise is already white. Okay. And the MSE in this case is what? For the zero-forcing DFE it is the geometric mean of N0/|M|², which is just N0, since M is monic minimum phase. And the MSE for the MMSE DFE is the geometric mean of N0/(|M|² + N0/E_s). You can also write it in so many other ways; they all work out to the same thing. So, I know we have some time left, but I think this is a good place to stop. In the next class, we will take a simple example, work it through, and study it.
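The minimum-phase special case can be worked through concretely. A sketch with assumed values (M(z) = 1 + 0.5 z⁻¹, E_s = 1, N0 = 0.1, all made up): spectrally factorize S_z(z) = E_s·M(z)·M*(1/z*) + N0 by rooting the Laurent polynomial, then read off γz², M_z, and the MMSE-DFE MSE = E_s·γn²/γz².

```python
import numpy as np

# Hypothetical minimum-phase channel M(z) = 1 + m1 z^-1, white noise N0.
Es, N0, m1 = 1.0, 0.1, 0.5

# S_z(z) = Es*(1 + m1 z^-1)(1 + m1 z) + N0, a Laurent polynomial.
# As a polynomial in z (multiplied through by z): c1 z^2 + c0 z + c1.
c1 = Es * m1                      # coefficient of z and of z^-1
c0 = Es * (1 + m1 ** 2) + N0      # coefficient of z^0
roots = np.roots([c1, c0, c1])
a = roots[np.abs(roots) < 1][0]   # the root inside the unit circle

# Monic minimum-phase factor M_z(z) = 1 - a z^-1; matching the z^-1
# coefficient gives gamma_z^2 * (-a) = c1.
gz2 = (c1 / (-a)).real
mse = Es * N0 / gz2               # MMSE-DFE MSE = Es * gamma_n^2 / gamma_z^2
print(gz2, mse)                   # ≈ 1.1285 and ≈ 0.0886, below N0 = 0.1
```

So even in this simple assumed example the MMSE DFE beats the zero-forcing MSE of N0, and the factorization also hands you M_z for the precursor (E_s/γz²)·M*/M_z* and the post-cursor M_z − 1.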