What we are going to do in this class is summarize all the non-optimal equalizers we have been looking at, and then take one simple example and actually derive every one of those equalizers for it. So let me summarize first by making a table. Here is the setup: we transmit s_k, with Es the expected value of |s_k|^2. As I said, there is also a version of these results where s_k has a power spectral density; I am assuming the PSD is flat, that is, IID s_k. The non-IID case is not significantly different: wherever you have Es, you replace it with the power spectral density and you get the same answer, so I have not done it explicitly in this class. Then there is the noise n_k; we did look at the case where the noise has a power spectral density and is not necessarily white, but in most evaluations we will restrict ourselves to white noise. H(z) I am going to assume splits as H0, Hmin, Hmax. All of this results in a z_k, and the suboptimal equalizer comes after z_k: it processes z_k to try and estimate s_k. We will also need Sz, which shows up naturally at least in the MSE criteria; that we take to be Es |H|^2 + Sn. I apologize for the bad notation here: the subscript and the argument of these functions look alike, but hopefully it is not too bad and you can distinguish which is which. So let us make the table: the linear equalizer and the DFE, each under the zero-forcing criterion and then the MMSE criterion. Let me draw some lines, and under the linear equalizer we will have two different entries.
For the linear equalizer you only have to worry about one filter, D, and the MSE corresponding to it; both entries are compact. For the DFE, unlike the linear equalizer, you have two filters to worry about: first the precursor equalizer, which I have denoted DW (D times W) and which needs some space, then the postcursor equalizer, which needs some more space (hopefully I am not running out), and finally the MSE corresponding to each. It is good to have such a table and fill it out. For the zero-forcing linear equalizer, D = 1/H: you put one filter with 1/H and slice after it. The MSE is also easy: it works out to the arithmetic mean of Sn/|H|^2. You can see why. The filtering removes the ISI completely, but the noise gets filtered by D = 1/H, so Sn gets multiplied by 1/|H|^2, and the arithmetic mean of that is the noise variance at the slicer, which is the MSE. The MMSE linear equalizer worked out to D = Es H*/Sz, and in this case the mean square error works out to the arithmetic mean of Sn/(|H|^2 + Sn/Es). Comparing the two, it is quite easy to conclude that the MMSE one is smaller: the denominator has increased by a positive quantity, certainly non-zero at some frequencies, which pulls the arithmetic mean down. So the MMSE linear equalizer is better, and in terms of implementation you have to worry about the poles introduced by 1/H or by Es H*/Sz.
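As a sanity check on the two linear-equalizer rows of the table, here is a minimal numerical sketch. The channel H(z) = 1 + 0.6 z^{-1}, the noise level N0 = 0.1 and Es = 1 are illustrative values I am assuming; both MSEs are arithmetic means on the unit circle, evaluated here on a dense frequency grid.

```python
# Compare the ZF-LE MSE (arithmetic mean of N0/|H|^2) with the MMSE-LE MSE
# (arithmetic mean of Es*N0/(Es*|H|^2 + N0)) for an example channel.
import numpy as np

N0, Es, c = 0.1, 1.0, 0.6
w = 2 * np.pi * np.arange(4096) / 4096           # grid on the unit circle
H = 1 + c * np.exp(-1j * w)                      # H(e^{jw})

mse_zf   = np.mean(N0 / np.abs(H)**2)
mse_mmse = np.mean(Es * N0 / (Es * np.abs(H)**2 + N0))

print(mse_zf, mse_mmse)                          # MMSE-LE is strictly smaller
assert mse_mmse < mse_zf
```

For this channel the ZF value agrees with the closed form N0/(1 - |c|^2) derived later in the lecture.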
Depending on where those poles are, the filters may or may not be implementable and stable; all kinds of problems can arise. Now move to the decision-feedback case. Things are slightly more complicated: you have more filters. For the zero-forcing DFE the precursor filter happens to be (1/H0) times Hmax*/Hmax, which is an all-pass filter that only shapes the phase, followed by Mn^{-1}, which is the whitening part of the filter. Since the all-pass does not change the noise PSD, filtering with Mn^{-1} afterwards makes the noise white. You know that in a DFE the slicer error is expected to be white, so you are whitening; that is what works out. W worked out a bit more complicated in this case: Hmin Hmax* Mn^{-1}. And is there a minus 1? The postcursor filter is actually not W, it is W - 1, so I will write W - 1 here to be consistent. Now, if you compute the MSE, what assumption do you have to make to compute it easily? That there is no error propagation at the slicer; the slicer is making accurate decisions. If it does, the MSE works out nicely to the geometric mean of Sn/|H|^2. Since we know the geometric mean is smaller than the arithmetic mean, compared with the zero-forcing linear equalizer the zero-forcing DFE MSE will be slightly smaller. But what is the catch in quoting an MSE here? For the zero-forcing DFE it is not a big deal, but for the MMSE DFE we will see.
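The GM-versus-AM comparison above is easy to check numerically. This sketch again assumes the illustrative channel H(z) = 1 + 0.6 z^{-1} with white noise N0 = 0.1, and evaluates the geometric mean as the exponential of the mean of the log over a frequency grid.

```python
# ZF-DFE MSE (geometric mean of Sn/|H|^2) versus ZF-LE MSE (arithmetic mean
# of the same quantity): AM-GM guarantees gm <= am.
import numpy as np

N0, c = 0.1, 0.6
w = 2 * np.pi * np.arange(4096) / 4096
q = N0 / np.abs(1 + c * np.exp(-1j * w))**2      # Sn/|H|^2 on the unit circle

am = np.mean(q)                                   # ZF-LE MSE
gm = np.exp(np.mean(np.log(q)))                   # ZF-DFE MSE (no error propagation)
print(am, gm)
assert gm <= am
```

Since this H is minimum-phase monic, the geometric mean lands on N0 itself, as derived in the example later.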
The next thing is the MMSE DFE. The precursor here worked out to (1/γ_z^2) times H*/Mz* — not just Hmax* but the full H*, divided by Mz*, which is not all-pass — followed by Mn^{-1}. The postcursor worked out to Mz Mn^{-1} - 1, and in this case the MSE works out to the geometric mean of Es Sn/Sz. So you see: compared to the MMSE linear equalizer the MMSE DFE is better, and compared to the zero-forcing DFE the MMSE DFE is still better. The comparison you cannot make easily is between the zero-forcing DFE and the MMSE linear equalizer: unless you do the actual computation you will not know which mean square error is smaller. Also, for the MMSE DFE the mean square error can be a little misleading. The reason is that the residual at the slicer is not purely Gaussian — it has a signal component — so you cannot treat it purely as Gaussian, and your probability of error does not necessarily go down just because the mean square error went down. Later, hopefully in this class or the next, I will show you a nice case where you actually get a larger MSE but a better probability of error than the MMSE DFE. That is a very interesting idea, but this is the picture. So what do you have to learn? The first thing: while relaxing the MLSD assumption probably brings the complexity down, it does not mean all these filters are implementable. Several times we ended up with IIR anti-causal filters, and an IIR anti-causal filter can only be approximated. In that sense these filters are not constrained in complexity. In practice, anybody who builds a filter will say: I can only afford so many taps — so many causal, so many anti-causal. Everything is finite; you add a delay and implement it; that is the only thing you can do.
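The comparisons just described can be seen numerically in one shot. This sketch evaluates all four MSEs for the assumed example channel H(z) = 1 + 0.6 z^{-1}, white noise N0 = 0.1, Es = 1; the arithmetic and geometric means are taken on a frequency grid.

```python
# The four table entries for one example channel. Guaranteed orderings:
#   ZF-LE >= MMSE-LE >= MMSE-DFE   and   ZF-LE >= ZF-DFE >= MMSE-DFE;
# ZF-DFE vs MMSE-LE has no universal ordering and must be computed.
import numpy as np

N0, Es, c = 0.1, 1.0, 0.6
w = 2 * np.pi * np.arange(4096) / 4096
H2 = np.abs(1 + c * np.exp(-1j * w))**2           # |H|^2 on the unit circle
Sz = Es * H2 + N0

am = lambda x: np.mean(x)                          # arithmetic mean
gm = lambda x: np.exp(np.mean(np.log(x)))          # geometric mean

mse = {
    "ZF-LE":    am(N0 / H2),
    "ZF-DFE":   gm(N0 / H2),
    "MMSE-LE":  am(Es * N0 / Sz),
    "MMSE-DFE": gm(Es * N0 / Sz),
}
print(mse)
assert mse["MMSE-DFE"] <= min(mse.values()) + 1e-12
```

Changing c and N0 lets you watch the ZF-DFE/MMSE-LE comparison flip, which is exactly the pair the lecture says cannot be ordered without computing.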
You cannot say "I will implement an IIR anti-causal filter"; these things do not work easily, and that is one thing to keep in mind. These derivations are there to give you a good idea of how the optimal structure looks, what you can expect in it, and what kinds of problems can arise during equalization even in the optimal structure. What happens if a zero of your channel moves outside the unit circle? You might think a zero moving outside the unit circle should not cause any problem in the equalizer, but we will see in one of the calculations that it can cause trouble in many of your equalizers. You should know where you are paying the penalty. Those kinds of intuitive calculations can be done in this framework even though the filters are slightly suboptimal, and this also feeds into a lot of ideas used in suboptimal constructions. What you have to learn — and this is very important — is: given an H(z), an Sn and a constellation, you should know how to compute each of these quantities for all rational cases; I will not expect you to do it for non-rational cases. This involves a certain level of DSP skill. If you are not sure about your z-transforms — what it means to write things in terms of z versus z^{-1}, what is maximum phase, what is minimum phase, where the zeros come, where the poles come — I strongly suggest you go and read up a little; it is not too late. And remember, you can avoid integrals in all the MSE computations. How do you avoid integrals in the arithmetic mean? The arithmetic mean is the coefficient of z^0. How do you avoid integrations in the geometric mean? You do a spectral factorization.
Assuming you can do the spectral factorization — you have identified the minimum-phase and maximum-phase components — you can avoid integration. Why? Because the geometric mean comes out as the constant in the spectral factorization: any spectrum you factorize is its geometric mean times a minimum-phase monic factor times a maximum-phase monic factor. Pull out the coefficients and you can read the value off. Maybe I will show this for some simple examples; I am sure you will do more of it in the tutorial problems. Any questions, anything that disturbs you, any comments? One thing I am leaving out is the argument: any time I write H*, for the z-transform it means H*(1/z*). Remember that; it is very important. All right — if this is reasonable, we will proceed with the example, or rather the one example, for which we will try to build all these equalizers. Here it is: I am going to take H(z) = 1 + c z^{-1}, and I will let the noise be simply N0, white. So it is a simple one-zero system, and here c is a complex number — I will not say it is real or anything else. My transmit constellation is BPSK, ±1. Those are all my assumptions. We will start with the zero-forcing linear equalizer. Its structure has only one filter, and what will that filter be in this case? 1/H = 1/(1 + c z^{-1}). And the MSE? Again, it is the arithmetic mean of N0/|H|^2. How do you write |H|^2? H times H* — and which H* are you taking? H*(1/z*).
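The "geometric mean is the constant of the spectral factorization" claim is worth checking once. In this sketch I assume, purely for illustration, a minimum-phase B(z) with gain b0 = 1.3 and three zeros inside the unit circle, and compare GM(|B|^2) against |b0|^2; a minimum-phase monic factor contributes nothing to the geometric mean.

```python
# GM of a spectrum S = |B|^2 with B minimum phase equals |b0|^2:
# no integration needed once the factorization is known.
import numpy as np

b0 = 1.3
roots = [0.4, -0.25 + 0.3j, -0.25 - 0.3j]         # all zeros inside |z| < 1
B = np.poly(roots)                                # monic polynomial with these zeros

w = 2 * np.pi * np.arange(8192) / 8192
# |B(e^{jw})| computed via prod |e^{jw} - r_k| = prod |1 - r_k e^{-jw}|
S = np.abs(b0 * np.polyval(B, np.exp(1j * w)))**2

gm = np.exp(np.mean(np.log(S)))                   # numerical geometric mean
print(gm, b0**2)
assert abs(gm - b0**2) < 1e-6
```

The same identity is what lets the lecture read off N0, N0/|c|^2 and the MMSE-DFE constant without computing a single integral.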
Remember, that is very important: I am computing the arithmetic mean on the unit circle, so when I work with the z-transform the conjugate automatically becomes H*(1/z*). So I want the arithmetic mean of N0 / [(1 + c z^{-1})(1 + c* z)]. Be careful about what this means: I have written a z-transform, but the arithmetic mean is on the unit circle. The form is useful because now I can do a partial fraction expansion and quickly figure out the coefficient of z^0 — something to keep in mind. If you do that (I am not going to do it right now), you have to be careful about several assumptions. Which ones, in the partial fraction expansion? You have to make sure your ROC includes the unit circle, otherwise you have no business computing any of these things. So things expand differently depending on |c|: if |c| < 1 you expand one way, if |c| > 1 you might have to expand a different way. Eventually the computation works out. I did this for the other example, so I will not repeat it; the only difference here is the c*, which does not change anything. It should work out to N0/(1 - |c|^2), and for |c| > 1 I think a modulus appears and you get N0/(|c|^2 - 1), but I have not really checked it. So I want you to check both cases: for |c| < 1 I am quite sure this is correct; for |c| > 1 please check the calculation — I think it is correct, but check it. That is your mean square error with the zero-forcing linear equalizer.
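The check the lecture asks for can be done numerically before doing it by partial fractions. This sketch evaluates the arithmetic mean of N0/|H|^2 on a grid for a few assumed values of c, both inside and outside the unit circle, against the combined closed form N0/|1 - |c|^2|.

```python
# ZF-LE MSE for H(z) = 1 + c z^{-1}: claimed N0/(1-|c|^2) for |c| < 1
# and N0/(|c|^2 - 1) for |c| > 1, i.e. N0/|1 - |c|^2| in both cases.
import numpy as np

N0 = 0.1
w = 2 * np.pi * np.arange(4096) / 4096
for c in (0.6, 0.6 + 0.3j, 1.8, 2.0 - 0.5j):      # both |c| < 1 and |c| > 1
    am = np.mean(N0 / np.abs(1 + c * np.exp(-1j * w))**2)
    closed = N0 / abs(1 - abs(c)**2)
    print(c, am, closed)
    assert abs(am - closed) < 1e-6
```

This is only a numerical confirmation; the quiz-level skill is still the partial fraction expansion with the correct ROC.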
Now look at the possibilities for implementing this D. For |c| < 1 it is fine in general: the mean square error is blowing up as |c| approaches 1, but the filter itself is not too bad. For |c| > 1, D immediately becomes non-realizable. What is the reason? You have a pole outside the unit circle, and the ROC must include the unit circle, so the filter is IIR anti-causal: you cannot really realize it; it has to be approximated if you want to realize it at all. That is the zero-forcing linear equalizer and how it works out. The zero-forcing DFE is next. If you go back and look at the table, for the zero-forcing DFE I have to distinguish between |c| < 1 and |c| > 1, because the structure changes completely depending on which holds. Why? I have to do a minimum-phase factorization of H(z), and if |c| < 1 the factor 1 + c z^{-1} shows up in Hmin, while if |c| > 1 it shows up in Hmax. So things change completely depending on whether |c| < 1 — this will happen for the zero-forcing DFE; for the other equalizers it will not. So for the zero-forcing DFE you have to pay some attention: it is not straightforward, and you have to know where the poles and zeros of the channel are before you can determine it. The first case I will do is |c| < 1, which we did before, so it goes quickly. For |c| < 1 the channel H(z) is minimum phase, so the same thing we did before applies: for a minimum-phase channel the precursor becomes 1.
So there is really no precursor, and the postcursor becomes simply M(z) - 1 = c z^{-1}. What is the mean square error? Same as before — a geometric mean, and it will be just N0; compare that with N0/(1 - |c|^2). Take Sn/|H|^2: since H(z) is already minimum-phase monic and the other factor is maximum-phase monic, the constant outside the geometric mean is just N0. Do you see that? Let me do it carefully, because I think it is important: I want the geometric mean of Sn/|H|^2, where Sn is simply N0, H(z) = 1 + c z^{-1}, and H*(1/z*) = 1 + c* z. The geometric mean is very easy to evaluate because this is already in spectrally factorized form with both factors monic, so you simply read it out, and it comes out as N0. Once again, as in the previous example, this is what we expected: compare the MSE of the zero-forcing linear equalizer with that of the zero-forcing DFE — the DFE is better. It works out to N0, which is smaller than N0/(1 - |c|^2) when |c| < 1. Let us draw the structure; I think it is useful, and I will show once again how it works: s_k goes through 1 + c z^{-1}, noise with PSD N0 gets added, and then there is no prefiltering — simply the feedback c z^{-1} — and you get your s-hat. This structure achieves an MSE of N0, which is very good. The second case we will consider is |c| > 1, and in this case you have to worry a lot about what your Hmin and Hmax are.
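The structure just drawn can be simulated in a few lines. This sketch (c = 0.6 and the symbol count are assumed values) runs noiseless BPSK through H(z) = 1 + c z^{-1} and applies the |c| < 1 ZF-DFE loop: no precursor, postcursor c z^{-1}, and correct past decisions by construction.

```python
# Noiseless |c| < 1 ZF-DFE: feeding back the previous decision through
# c z^{-1} cancels the ISI term exactly at the slicer input.
import numpy as np

rng = np.random.default_rng(0)
c = 0.6
s = rng.choice([-1.0, 1.0], size=200)            # BPSK symbols
z = s + c * np.concatenate(([0.0], s[:-1]))      # z_k = s_k + c s_{k-1}

s_hat = np.empty_like(s)
prev = 0.0
for k in range(len(s)):
    u = z[k] - c * prev                          # subtract fed-back post-cursor ISI
    s_hat[k] = 1.0 if u >= 0 else -1.0           # BPSK slicer
    prev = s_hat[k]

print(np.array_equal(s_hat, s))                  # True: ISI fully cancelled
```

With noise added, the same loop attains an MSE of N0 at the slicer input, under the no-error-propagation assumption made above.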
So look at H(z) = 1 + c z^{-1}. You usually write Hmax in terms of z, not z^{-1}, so you will have to play around with this; it is important and easy, but if you have not seen it before you will be surprised at how it looks. Write H(z) in H0, Hmin, Hmax format: 1 + c z^{-1} = c · z^{-1} · (1 + z/c). Here c becomes your H0; the z^{-1} is a delay, which you can ignore — remember in my general formulation of H(z) I had H0 times z^r and said we ignore any z^r — and 1 + z/c will be my Hmax, written in a way analogous to Hmin. So watch out: it is a subtle change, but confusing if you are not used to it. That is how you identify Hmax and Hmin. You pull the constant out, and it is useful to write everything in monic form, because then further spectral factorizations become easy and obvious — that is what we assumed when we derived those structures. Once you do this, what does the precursor become? Go through and apply the formula: (1/H0) · (Hmax*/Hmax) · Mn^{-1}. Here Mn^{-1} = 1 and H0 = c, so you have (1/c) times Hmax*/Hmax. What is Hmax*? 1 + (1/c*) z^{-1}, divided by Hmax = 1 + z/c. What type of filter is it? IIR — and where is the pole? At z = -c, which is outside the unit circle. So it is IIR anti-causal, definitely not implementable; maybe you can approximate it in practice. And the postcursor? Go through the formula: it works out to (1/c*) z^{-1}, and that is fine.
One can implement the postcursor (1/c*) z^{-1} without any problem. And what about the MSE? It is the geometric mean of Sn/|H|^2. Remember how the minimum-phase and maximum-phase parts of |H|^2 will be different here — and there is an H0 as well, so you have to be very careful. Sn is simply N0, and |H|^2 gives you |c|^2 times Hmax · Hmax*. Which is the minimum-phase part and which is the maximum-phase part? Hmax*(1/z*) = 1 + (1/c*) z^{-1} is the minimum-phase monic part, and Hmax is the corresponding maximum-phase part. From here you can quickly read out the geometric mean: N0/|c|^2. Now I want to make an important point. Remember that for the previous structure the MSE was N0. For |c| > 1, can I still cancel ISI with the same simple structure as before — the one for |c| < 1, with no precursor? All I want is to cancel ISI. Look at the structure: it will work. Go through it and look at the input to the slicer — you will have no ISI, only noise. So even if |c| > 1, nothing stops me from doing this: assuming my s-hat(k) is accurate, the feedback definitely produces c times s(k-1), which is exactly the signal component arriving there, so it cancels — just write the equation and you will see. But look at the filter we derived for |c| > 1: this filtering is much more complicated than the precursor for |c| < 1. So what is the point of all this filtering for |c| > 1, if all you want is to cancel ISI and ISI gets cancelled either way? The answer is very simple, and it is right there on the same slide.
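Two of the |c| > 1 claims above lend themselves to a quick numerical check: that Hmax*/Hmax is all-pass on the unit circle, and that the geometric mean of N0/|H|^2 is N0/|c|^2. The sketch assumes an illustrative c = 1.8 + 0.4j with |c| > 1 and N0 = 0.1.

```python
# |c| > 1: H(z) = 1 + c z^{-1} = c z^{-1} (1 + z/c), so H0 = c,
# Hmax(z) = 1 + z/c. Check the all-pass factor and the geometric mean.
import numpy as np

N0, c = 0.1, 1.8 + 0.4j                          # |c| > 1
w = 2 * np.pi * np.arange(4096) / 4096
z = np.exp(1j * w)

Hmax = 1 + z / c
allpass = (1 + np.conj(1 / c) / z) / Hmax        # Hmax*(1/z*) / Hmax(z)
assert np.allclose(np.abs(allpass), 1.0)         # phase-only: magnitude 1

gm = np.exp(np.mean(np.log(N0 / np.abs(1 + c / z)**2)))
print(gm, N0 / abs(c)**2)
assert abs(gm - N0 / abs(c)**2) < 1e-6
```

So the whole precursor only rotates phase and scales by 1/c; the MSE reduction from N0 down to N0/|c|^2 is what pays for its anti-causal complexity.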
So why do all this crazy anti-causal filtering when you can cancel ISI with no precursor at all? What is the point? The mean square error is going down. For |c| > 1, N0/|c|^2 is definitely less than N0. If you are fine with an MSE of N0, you might as well cancel ISI the simple way; if you are not — if you want a much lower mean square error — you have to go through all this pain. And within the zero-forcing family this is the minimum mean square error possible: if you want to force the ISI to zero, this is the price you pay. These are all important signal-processing ideas at the receiver that you should think about. If you really want the MSE to go below N0, you have to do this; that is one point. The other point: when |c| > 1, look at what is happening to z_k. Let me make a few remarks. First, |c| > 1 results in a more complicated precursor for lower MSE, but you can just as well use the same filter as before: the cancellation works regardless of what |c| is, so if you are willing to pay the penalty of higher MSE you can stick with the old structure. It is interesting to look at. The other remark: look at z_k = s_k + c s_{k-1} + n_k. You typically expect |c| < 1 — you adjust the delays in the receiver so that s_k has the maximum strength in z_k, and if it does not, maybe you adjust. But in many cases that may not be possible.
Imagine a mobile wireless situation: maybe the user is coming closer to the transmitter, or there are reflections in all kinds of crazy ways, so at some point the second tap of the channel your equalizer is designed for is the stronger one. That is exactly the point of maximum phase: maximum phase means the later taps are stronger and the earliest taps are not. If you are moving toward the maximum-phase situation, you actually have to give more weight to the second tap than to the first, which is what this IIR anti-causal receiver is doing. In the |c| < 1 case the first tap is the strong one, so you cancel the second tap and decide based on the first. In the |c| > 1 case you have to do much more complicated things to get to the minimum mean square error. These are intuitive things to look at. One general comment you can make: as c moves toward the unit circle, your zero-forcing linear equalizer goes for a toss — you do not even have to think about it. And as c crosses the unit circle, from minimum phase to maximum phase, your MMSE equalizer changes drastically: if you want to hold on to the minimum mean square error, the structure changes completely, in a drastic fashion. If you do not pay attention to this, you will be doing something quite detrimental in your receiver: there will be huge error propagation, and it will go all over the place. So, something to pay attention to — and this kind of intuition you cannot build if you do only practical things.
You need to do the ideal case first to understand the intuition behind the problem with zeros in the channel: what is the problem if the zero is inside the unit circle, if it is on the unit circle, if it goes outside the unit circle — what are the various problems in equalizing? That is the kind of thing you can expect. Any other questions or comments, anything that disturbs you about how this works? All right. That is the zero-forcing criterion. So far I have done only the zero-forcing linear equalizer and the zero-forcing DFE; now we move toward the MMSE criterion. The next thing we will see is the MMSE linear equalizer. Here you do not have to worry so much about minimum phase versus maximum phase, because you do not have to split H: the filter is the same irrespective of the split, which is a nice thing. There is only one filter, and it has a ready-made standard formula: D = Es H* / (Es |H|^2 + Sn). Es = 1 for you — it is just BPSK, equally likely — so there is no problem. What is H*? 1 + c* z. |H|^2 works out to (1 + c* z)(1 + c z^{-1}), and Sn is N0. That is the formula for D. Now, how do you proceed with any further computation? It is a little bit tricky: whether you want to say anything about stability, or analyze it, or compute the mean square error, the first thing you have to do is factor the denominator. How will it factor? Note that the denominator is a valid spectrum: it is real and non-negative on the unit circle.
So if z0 is a root, then 1/z0* is also a root — a useful trick for factoring the denominator. In fact, since this denominator shows up in several places, let me write it separately: Sz = (1 + c* z)(1 + c z^{-1}) + N0. I am going to assume it factors as A · (1 - α z^{-1})(1 - α* z). Can I do this? Yes: the two roots pair up as α and 1/α*, and I will assume without any problem that |α| < 1 — it is the root inside the unit circle (or |α| ≤ 1 if you insist on allowing roots on the unit circle). What should A be so that the constant terms agree? Be very careful: the constant term on the left-hand side is 1 + |c|^2 + N0 — a slightly complicated expression — and is the constant term of the product just 1? No, it is 1 + |α|^2, so you have to divide by 1 + |α|^2: A = (1 + |c|^2 + N0)/(1 + |α|^2). Am I right? Now, why did I go to all these pains? Because I want to do a spectral factorization of Sz. You see the way I pulled out the constant term to leave a minimum-phase monic part times a maximum-phase monic part. It is a simple algebraic step, nothing deep, but you should be used to it; if you are not, it will come as a big surprise. For instance, what is the geometric mean of Sz? It is that constant A outside. We will use these things later. So that is the filter D.
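The factorization just set up can be solved explicitly. Matching the z^{-1} coefficient gives -Aα = c, and matching constant terms gives A(1 + |α|^2) = 1 + |c|^2 + N0, so A satisfies a quadratic; the larger root makes |α| < 1. This sketch (c = 0.6 + 0.3j and N0 = 0.1 are assumed example values) solves for A and α and verifies the factorization on the unit circle.

```python
# Spectral factorization Sz = A (1 - a z^{-1})(1 - a* z) for
# Sz(z) = (1 + c z^{-1})(1 + c* z) + N0, with the root a inside |z| = 1.
import numpy as np

N0, c = 0.1, 0.6 + 0.3j
K = 1 + abs(c)**2 + N0                            # constant term of Sz
A = (K + np.sqrt(K**2 - 4 * abs(c)**2)) / 2       # larger root of A^2 - K A + |c|^2 = 0
a = -c / A                                        # this is the alpha above
assert abs(a) < 1
assert abs(A * (1 + abs(a)**2) - K) < 1e-12       # constant terms agree

w = 2 * np.pi * np.arange(512) / 512
z = np.exp(1j * w)
Sz   = np.abs(1 + c / z)**2 + N0                  # Sz on the unit circle
fact = A * np.abs(1 - a / z)**2                   # A |1 - a e^{-jw}|^2
assert np.allclose(Sz, fact)
print(A, a)
```

Note the geometric mean of Sz is exactly A, as stated above.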
Now look at the filter D: it has two poles, one at α and the other at 1/α*. So there is no way it is implementable exactly — that is a bit of a problem with nice real non-negative denominators: whatever roots you have get reflected to the other side of the unit circle, and that kills the implementation. The filter is two-sided; while it is ideal and nice, it is definitely not implementable, and you can only approximate it. That is the first lesson to keep in mind. But supposing you do implement this D, what is your mean square error, and how do you compute it? Is it the geometric mean of N0/Sz? No — for the MMSE linear equalizer it is the arithmetic mean; the geometric mean belongs to the DFE. So you have to do the arithmetic mean of N0/Sz, and the arithmetic mean is a little more tricky: you have to do a partial fraction expansion. If you do it, you can see what the key piece works out to: the arithmetic mean of 1/[(1 - α z^{-1})(1 - α* z)] is 1/(1 - |α|^2).
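That "ready-made formula" is worth verifying numerically before you memorize it; this sketch checks the coefficient of z^0 (the arithmetic mean on the unit circle) for a few assumed values of α.

```python
# Arithmetic mean of 1 / [(1 - a z^{-1})(1 - a* z)] over the unit circle
# equals 1/(1 - |a|^2) for |a| < 1.
import numpy as np

w = 2 * np.pi * np.arange(4096) / 4096
for a in (0.3, 0.577 - 0.2j, -0.8):               # example values, |a| < 1
    am = np.mean(1 / np.abs(1 - a * np.exp(-1j * w))**2)
    print(a, am, 1 / (1 - abs(a)**2))
    assert abs(am - 1 / (1 - abs(a)**2)) < 1e-6
```

Deriving this from scratch in a quiz is exactly the long calculation the lecture warns against; the partial fraction route gives the same constant.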
So what is the moral of the story? Make sure you know the coefficient of z^0 of 1/[(1 - α z^{-1})(1 - α* z)] as a ready-made formula — if you try to derive it in the quiz you are not going to succeed. I believe the answer is 1/(1 - |α|^2); please check it. With that, the answer I have for the MMSE linear equalizer — the coefficient of z^0 in N0/Sz — is N0 (1 + |α|^2) / [(1 + |c|^2 + N0)(1 - |α|^2)]. Please check this; do not just take it as the right answer, though I believe it is correct and it should be easy to verify. The next thing is the MMSE DFE. This is where you get the least error possible. What will the precursor be? It works out to (1/γ_z^2) · H*/Mz*, and we have already done the work: Mz* and γ_z^2 come from the same spectral factorization of Sz that we just did. H* is 1 + c* z and Mz* is 1 - α* z, so the precursor is (1 + c* z) / (γ_z^2 (1 - α* z)). And the postcursor? Simply Mz - 1, which is -α z^{-1}. That is the postcursor equalizer.
And the mean square error now will be the geometric mean of N0/Sz, which is easy to read off: it works out to N0/gamma^2 = N0 (1 + |alpha|^2) / (1 + |c|^2 + N0), and you see easily that this is the least of all four MSEs. (Of course, for |c| < 1 there is one more case to look at separately, but otherwise this works out to be the smallest.) So look at that very carefully; make sure you agree and that you can check all those computations. Once again, this precursor DW will be difficult to implement: it is IIR and anti-causal. But one observation you can make: will it have any causal part? No, this filter is purely anti-causal. So most people who implement the MMSE DFE in practice take the precursor to be anti-causal; you do not take a two-sided precursor when you do the DFE. The post-cursor will be strictly causal, as in it will not even have a z^0 term, and the precursor is taken to be anti-causal for this reason. Maybe it is not valid in the most general case, but in most cases an anti-causal filter will work handsomely as your precursor. The only case to worry about is when the noise is coloured and a 1/M(z)-type whitening factor appears; that might contribute a causal part. If the noise is white, you do not have to worry: the precursor will definitely always be anti-causal, and that is why it is taken as anti-causal. Any other questions?
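To see the ordering of the mean square errors concretely, here is a numerical comparison of my own (same assumed example: ES = 1, H(z) = 1 + c z^-1 with |c| < 1, white noise N0). The ZF linear equalizer MSE is the arithmetic mean of N0/|H|^2, the MMSE linear equalizer MSE is the arithmetic mean of N0/Sz, and the MMSE-DFE MSE is the geometric mean of N0/Sz; the DFE should come out smallest.

```python
import numpy as np

# Assumed example values: channel 1 + c z^-1 (|c| < 1), white noise N0, ES = 1.
c, N0 = 0.5, 0.1
w = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

H2 = np.abs(1 + c * np.exp(-1j * w)) ** 2    # |H(e^jw)|^2
Sz = H2 + N0                                 # folded spectrum ES*|H|^2 + N0

mse_zf_le = np.mean(N0 / H2)                 # ZF linear: AM of N0/|H|^2
mse_mmse_le = np.mean(N0 / Sz)               # MMSE linear: AM of N0/Sz
mse_mmse_dfe = np.exp(np.mean(np.log(N0 / Sz)))  # MMSE DFE: GM of N0/Sz

print(mse_zf_le, mse_mmse_le, mse_mmse_dfe)  # strictly decreasing
```

With these numbers the geometric-mean value should also match the closed form N0 (1 + |alpha|^2) / (1 + |c|^2 + N0) from the table, which is a useful cross-check on the factorization.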
So in your tutorial I will definitely give you problems like this, where you have to repeat the same calculation for a different H(z), maybe with some power spectral density for the noise; it is also a very useful question in quizzes and finals. So make sure you know this by heart. At one level it is just formula substitution, but at another level there is some careful DSP involved: you should know how to do the spectral factorization very quickly, otherwise you will spend a lot of time on long calculations that might take you nowhere. We have about 5 minutes left, but I think this is a good point to stop, and I will pick up from here next time. Maybe I should spend some time talking about the rough road map from here. The next thing we will see is what are generally called constrained-complexity equalizers. Here we will say my equalizer filter has only so many taps, and ask: what is the best equalizer I can have? That is the next thing we will do. Another assumption we have always made so far is that the receiver knows H(z); all the receiver processing is based on knowledge of H(z). So first we will keep that assumption and do the constrained-complexity equalizer. After that, finally, we will relax it, through training the receiver and by building adaptive equalizers, equalizers that can adapt to different channels. This is very, very useful and very practical: in the lab, for instance, people looking to implement receivers will be implementing adaptive constrained-complexity equalizers; that is really the only thing you can implement. So that is the next thing we will see.
After this we will see more elaborate and complete practical receiver structures. So far we have always made two assumptions: that the carrier phase and carrier frequency are exactly known, and that the timing is exactly known. We will relax those two assumptions and see a receiver structure that is fairly large. Between these two, we may or may not see one more thing, I have not decided if I should do it, which is called fractionally spaced equalizers. In everything so far we have assumed only symbol-rate sampling: H(z) has always been a symbol-rate model. But it turns out the precursor equalizer is a major problem to implement with simple symbol-rate sampling; the ideal precursor equalizer seems to be unimplementable. So several people in practice sample at twice the symbol rate just for the precursor; after that everything runs at the symbol rate, but for the precursor you sample at twice the symbol rate to get better implementations, and there is also theory suggesting that is better. We may or may not do it; I might just refer to it quickly and then move on to the next structures. After that, depending on time, we will do the remaining topics.