Okay, so this is lecture 26. The last thing we've been doing is working with the discrete-time model to see how to equalize a channel with ISI. I took the model where M(z) is monic, causal, and loosely minimum phase, with white Gaussian noise adding to produce z_k, and we saw some equalizer structures. The first structure was the maximum-likelihood sequence detector (MLSD), which works when the channel memory μ is not only finite but quite small; we want other equalizers that work for longer μ as well.

So the first option we studied was what I called the zero-forcing linear equalizer (ZF-LE). The structure is very simple: take z_k and pass it through 1/M(z). What you obtain is x_k = s_k + n'_k, signal plus noise, and you run it through a slicer. "Slicer" is the standard name for any thresholding device; the plot I drew is the threshold for BPSK, but it represents a general slicer for any modulation, whatever decision regions your decoder enforces. The slicer output is ŝ_k. That is the zero-forcing linear equalizer.

We also saw an expression for the figure of merit, which matters because it directly gives you the probability of error. Here that probability is easy to evaluate: x_k is simply s_k plus noise, all discrete, so a Q-function estimate will be fairly accurate. We computed the variance of n'_k and found that the variance of its real (or imaginary) part is (N₀/2) times the arithmetic mean of 1/S_h(e^{jθ}).
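As a quick sanity check on that variance formula, here is a small numerical sketch (my own illustration, not part of the lecture; the one-tap channel, α = 0.5, and N₀ = 0.1 are assumptions chosen for concreteness):

```python
import numpy as np

# Assumed toy channel: M(z) = 1 + a z^-1 with gamma^2 = 1, so that
# S_h(e^{j theta}) = |1 + a e^{-j theta}|^2.
a, N0 = 0.5, 0.1
theta = np.linspace(-np.pi, np.pi, 100_001)
Sh = np.abs(1 + a * np.exp(-1j * theta)) ** 2

# Variance of n'_k predicted by the formula: (N0/2) * AM(1/S_h).
am = np.mean(1.0 / Sh)            # uniform grid over one period
var_formula = (N0 / 2) * am

# Cross-check: pass white noise of variance N0/2 through 1/M(z),
# implemented as the recursion x[k] = n[k] - a * x[k-1].
rng = np.random.default_rng(0)
n = rng.normal(0.0, np.sqrt(N0 / 2), 500_000)
x = np.zeros_like(n)
x[0] = n[0]
for k in range(1, len(n)):
    x[k] = n[k] - a * x[k - 1]
var_sim = x[1000:].var()          # drop the start-up transient

print(var_formula, var_sim)       # both should be close to (N0/2)/(1 - a^2)
```

With these numbers both values come out near (N₀/2)/(1 − α²) ≈ 0.067, matching the arithmetic-mean formula.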
So that's very crucial. If you're worried about why this S_h(e^{jθ}) suddenly shows up, look at the M(z) model closely. What's actually happening is that before z_k there is the front end: the filter h*(−t), then a sampler, then the discrete filter 1/(γ² M*(1/z*)). If you now apply 1/M(z), what is the overall discrete-time filter? It is 1/S_h(z), since S_h(z) = γ² M(z) M*(1/z*). And how can you model the noise at that point? The sampled noise entering this chain has PSD N₀ S_h(e^{jθ}). Run it through a filter with response 1/S_h(e^{jθ}): the PSD gets multiplied by the magnitude squared of the filter, so it becomes N₀/S_h(e^{jθ}). That's why it's very natural that the variance of the real or imaginary part of n'_k depends directly on the reciprocal of S_h.

So we found the figure of merit for the zero-forcing linear equalizer, which has this very simple form:

FOM_LE = d_min²(x) · (2/N₀) / AM(1/S_h),

where d_min²(x) is the squared minimum distance between two points in the original signal constellation from which s_k was drawn (very easy to compute), and the arithmetic mean is

AM(1/S_h) = (1/2π) ∫_{−π}^{π} dθ / S_h(e^{jθ}).

The quantity (N₀/2) · AM(1/S_h) is the variance of the real (and of the imaginary) part of n'_k: the noise PSD integrates to N₀ · AM(1/S_h), and the factor 1/2 comes from splitting between the real and imaginary components.

Any questions on this, anything disturbing about how this is done? If you want to implement something like this, the first question is: when can you really implement the linear equalizer? Beyond performance, there is the question of the existence of 1/M(z). Since M(z) is loosely minimum phase, all its zeros and poles are on or inside the unit circle, and zeros can be on the unit circle. When you form 1/M(z), those zeros become poles on the unit circle, and the filter can become unstable. Is 1/M(z) FIR or IIR? If M(z) is FIR, then 1/M(z) is IIR; in general M(z) need not be purely FIR or purely IIR, it can have both zeros and poles, so the reciprocal can be a mix of both. But since M(z) is minimum phase, the inverse is always implementable as a causal recursion; poles on the unit circle might cause some instability, but it's not too crazy to imagine implementing 1/M(z).

The variance, however, will blow up if 1/S_h(e^{jθ}) is not integrable. If S_h(e^{jθ}) has only isolated zeros, that may or may not happen; in most cases the integral will still exist and won't blow up too badly. But if S_h(e^{jθ}) is zero over an entire interval of θ, the integral definitely does not exist. So there can be cases where the variance blows up even with just zeros on the unit circle.
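To make the integrability worry concrete, here is a small numerical sketch (my own, with an assumed one-tap channel): a zero of M(z) strictly inside the unit circle keeps the arithmetic mean of 1/S_h finite, while a zero on the unit circle makes it blow up.

```python
import numpy as np

# Sketch with an assumed one-tap channel: the ZF-LE noise variance is
# finite only when 1/S_h(e^{j theta}) is integrable.
def am_inv_Sh(a, n=200_001):
    """Arithmetic mean of 1/S_h for S_h(e^{j theta}) = |1 + a e^{-j theta}|^2."""
    theta = np.linspace(-np.pi, np.pi, n)
    Sh = np.abs(1 + a * np.exp(-1j * theta)) ** 2
    return np.mean(1.0 / Sh)

# Zero of M(z) strictly inside the unit circle: finite, equals 1/(1 - a^2).
print(am_inv_Sh(0.9))   # close to 5.26

# Zero ON the unit circle (M(z) = 1 + z^-1): S_h vanishes at theta = pi,
# the true integral diverges, and the discretized mean is astronomically large.
print(am_inv_Sh(1.0))
```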
All right, so that's the story; this is the zero-forcing linear equalizer. Toward the end of this class we'll take one definite example and do all three equalizers together, so you can compare everything nicely; I don't want to do an example right now.

The next structure we saw is what I called the zero-forcing decision-feedback equalizer (ZF-DFE). This is a nonlinear equalizer, which is why we call the previous one linear: it has a thresholding device inside the loop, so it clearly becomes nonlinear. What's the structure? You implement 1/M(z) in feedback form, the way you would do an IIR filter implementation: put M(z) − 1 in the feedback path, subtract its output from z_k at a summing junction, and then move the slicer inside the loop. I'll call the summing-junction output x_k. Once the slicer goes inside, everything becomes nonlinear, but it also becomes stable: there's no question of instability, because as long as the inputs are bounded the outputs have to be bounded; the fed-back signal is a slicer output, so nothing can blow up anywhere in the middle. That's the zero-forcing decision-feedback equalizer structure.

So you would like a nice expression for x_k, and then the figure of merit. The problem is that you have to worry about slicer errors: the slicer is the nonlinear part, and it's not easy to deal with it as a convolution, so you can't happily write an expression for x_k; it depends on the past decisions made about the symbols, those could be erroneous, and it's all discrete, so it gets a little confusing. One way of avoiding all that is to assume the past decisions were error-free. At those instants where the past decisions are error-free, can you compute x_k and a figure of merit? Over a long sequence, if you have a reasonable probability of error and M(z) doesn't have too many taps, there will be lots of instants where the past decisions are error-free; this is not too difficult to imagine.

So we make that assumption. Technically, what we compute is valid only for those k where ŝ_{k−1}, ŝ_{k−2}, and so on are all correct. Under that assumption, x_k is easy to compute:

x_k = z_k − (ŝ * m_{−1})_k,

where m_{−1,k} is the inverse transform of M(z) − 1: if M(z) = 1 + m_1 z^-1 + m_2 z^-2 + …, then M(z) − 1 = m_1 z^-1 + m_2 z^-2 + …, and that sequence is m_{−1,k}. Now z_k = (s * m)_k + n_k, and you can write m_k = δ_k + m_{−1,k}. Do the convolution: (s * m)_k = s_k + (s * m_{−1})_k, and since all past decisions are assumed error-free, the subtracted term (ŝ * m_{−1})_k cancels (s * m_{−1})_k exactly. The only thing left is s_k convolved with δ_k plus the noise, so under this assumption

x_k = s_k + n_k.

Now the figure of merit is trivial to compute: d_min²(x) divided by the usual noise variance, which is N₀/(2γ²), so

FOM_DFE = d_min²(x) · 2γ²/N₀,

and Q(√FOM_DFE / 2) is a reasonably accurate measure of the probability of error. The problem is that we've assumed past decisions are error-free; when past decisions do have errors this is not a good measure, but at those instants with no errors in the past decisions it is a good measure.

Once again, N₀/(2γ²) is the variance σ² of the real or imaginary part of the noise. How do you compute γ²? It's related to S_h(e^{jθ}):

γ² = exp( (1/2π) ∫_{−π}^{π} log S_h(e^{jθ}) dθ ).

Let me do a little manipulation to relate this figure of merit to the one for the linear equalizer, which involved an integral of 1/S_h(e^{jθ}). Put the reciprocal inside the log: it picks up a minus sign, and e to the minus is 1 over e to the plus, so

γ² = 1 / exp( (1/2π) ∫_{−π}^{π} log( 1/S_h(e^{jθ}) ) dθ ),

which is 1 over the geometric mean of 1/S_h(e^{jθ}).
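Here is a quick numerical check of that identity (my own sketch, with an assumed one-tap S_h and an assumed γ² = 2): the geometric mean of S_h recovers γ² exactly, because the log-integral of a monic minimum-phase |M|² is zero.

```python
import numpy as np

# Assumed example PSD: S_h(e^{j theta}) = gamma^2 |1 + a e^{-j theta}|^2.
a, gamma2 = 0.6, 2.0
theta = np.linspace(-np.pi, np.pi, 200_001)
Sh = gamma2 * np.abs(1 + a * np.exp(-1j * theta)) ** 2

# Geometric mean of S_h: exp of the mean of log S_h. For a monic
# minimum-phase M, the log-integral of |M|^2 is zero, so this is gamma^2.
gm = np.exp(np.mean(np.log(Sh)))

# Reciprocal of the arithmetic mean of 1/S_h (what the ZF-LE sees).
inv_am = 1.0 / np.mean(1.0 / Sh)

print(gm, inv_am)   # gm should be close to 2.0, and inv_am <= gm
```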
So you can imagine this integral is likely to be smaller in value than the previous one. That's something we showed: the geometric mean is always less than or equal to the arithmetic mean. You can also see it directly: you're taking the log of 1/S_h, so even if 1/S_h blows up fast, the log slows it down. So the advantage is visible, and in theory too you can show what will happen. Let's compare the two figures of merit. For the LE,

FOM_LE = d_min²(x) · (2/N₀) / AM(1/S_h).

For the zero-forcing decision-feedback equalizer,

FOM_DFE = d_min²(x) · (2/N₀) · GM(S_h) = d_min²(x) · (2/N₀) / GM(1/S_h).

Since the geometric mean is less than or equal to the arithmetic mean, the figure of merit for the ZF-DFE is always greater than or equal to that of the ZF-LE, which is what you want: the probability of error is going to be smaller in most cases.

To take care of the past-error problem you can go through some work and show that it's not really a problem, so this is a meaningful comparison. There are several ways of doing it. One is to take several cases, simulate, and see that it works. But in theory too it's possible to show that past errors are manageable: it depends on the number of taps. If you make a few correct decisions, the probability that the next decision is correct is high, so the question is how often you make mistakes, roughly once every many symbols, and that probability is quite small even for a reasonable overall probability of error. So you can show some properties along these lines.

So what's the final comparison? Remember the figure of merit for the MLSD. We didn't have a simple expression, but it was d_min,e²/σ², where σ² is the same N₀/(2γ²), and d_min,e² is the minimum possible weight of an error event. It's not very easy to compare this directly with the others, but we have some bounds, and using them we can say some things. If you compare the three, the MLSD has the maximum figure of merit: if you assume d_min,e² equals d_min²(x), the MLSD and DFE figures of merit become equal, and in general d_min,e² can only help the MLSD. There is also the matched-filter bound, which is a different thing: it is d_min²(x) · 2E_h/N₀, a somewhat stronger statement, so don't think the MLSD expression automatically equals the matched-filter bound. In general, then, the MLSD has the largest figure of merit, then the zero-forcing DFE, then the zero-forcing linear equalizer. Of course these are approximate computations, not exact, but they are good computations to do, and in practice things work out very similarly.

What we're going to do next is take a very simple example, a very simple M(z), come up with these equalizers, see what they look like and what the actual figures of merit are, and see whether some comparison can be made.
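The ordering can also be seen end to end in a simulation. The following sketch (my own, not from the lecture; the channel tap, noise level, and sequence length are assumptions) runs BPSK through an assumed channel M(z) = 1 + αz⁻¹ with γ² = 1 and compares the error rates of the two zero-forcing equalizers:

```python
import numpy as np

# Assumed setup: BPSK over M(z) = 1 + a z^-1, gamma^2 = 1, AWGN of
# variance N0/2, then either zero-forcing equalizer.
rng = np.random.default_rng(1)
a, N0, N = 0.8, 0.4, 200_000
s = rng.choice([-1.0, 1.0], N)
z = s + a * np.r_[0.0, s[:-1]] + rng.normal(0.0, np.sqrt(N0 / 2), N)

# ZF-LE: x[k] = z[k] - a x[k-1] (the soft output is fed back), then slice.
x = np.zeros(N)
for k in range(N):
    x[k] = z[k] - a * (x[k - 1] if k else 0.0)
ber_le = np.mean(np.sign(x) != s)

# ZF-DFE: feed back the sliced decision instead of the soft output.
shat = np.zeros(N)
for k in range(N):
    xk = z[k] - a * (shat[k - 1] if k else 0.0)
    shat[k] = 1.0 if xk >= 0 else -1.0
ber_dfe = np.mean(shat != s)

print(ber_le, ber_dfe)   # the DFE error rate should be the lower one
```

Even with error propagation included (no error-free assumption here), the DFE comes out ahead of the linear equalizer at this noise level.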
So that's what we'll do next. For the example I'll pick a very simple M(z):

M(z) = 1 + α z^-1, with α real, 0 < α < 1.

(You could allow negative α too; it doesn't matter.) You can see M(z) is monic, causal, and minimum phase, because I picked α inside the unit circle. The first question is how the MLSD behaves: can you compute its figure of merit? We'll start with the MLSD, the Viterbi algorithm, compute its figure of merit, and then go to the linear equalizer and the other things.

To compute the figure of merit for the MLSD you need the minimum distance, and for that you need the trellis. I'll assume BPSK, x ∈ {+1, −1}. We'll draw two stages, and the minimum-distance argument will be reasonably clear from two stages. The trellis has two states, the previous symbol. From state −1: input −1 gives output −1 − α, input +1 gives output +1 − α. From state +1: input +1 gives output +1 + α, input −1 gives output −1 + α. So that's the trellis.

The next thing is to find the minimum weight of an error event. There are several error events, some eight different short ones, and you can go through and compute the distance for each. Take the following error event, which I'll draw in red. How do you compute its distance? Look at the difference of the outputs between the two branches at each stage: the first difference is 2, which you square, and the next difference is 2α, which you square as well, so you get

d² = 2² + (2α)² = 4(1 + α²).

Just for fun we'll take one more error event, the blue one, and compute its metric: the branch differences are again 2 and 2α, so once more 4(1 + α²). From this we make an intelligent guess that

d_min,e² = 4(1 + α²).

That's just a guess, and may or may not be exactly right without checking every error event, but it's a reasonable computation to do. With noise variance σ² = N₀/(2γ²), the figure of merit works out to d_min,e²/σ².

Now γ is not readily computable until I tell you what S_h is. We have S_h(z) = γ² M(z) M*(1/z*). How do you compute M*(1/z*) from M(z) = 1 + α z^-1? You substitute 1/z* and conjugate the whole thing, so the conjugation on z cancels and only the coefficients get conjugated: you get 1 + α* z, which is 1 + α z since I'm taking α real. Most power spectral densities in rational form look like this: a certain mostly real term in z^-1 and the same term mirrored in z. This is something you can use when you do spectral factorization: if somebody gives you S_h(z), this form minimizes your work.

So take γ² = 1, just for simplicity, and

S_h(z) = (1 + α z^-1)(1 + α z).

Now if you have to compute the matched-filter bound, what will it be? I'm sending only one signal, so it is d_min²(x) times the energy E_h over σ², and E_h = γ² · (energy in m); since γ² = 1 this is just the energy in m, which is 1 + α². The numerator: d_min(x) = 2 because the constellation is ±1, so d_min²(x) = 4. Therefore

FOM_MF = 4(1 + α²)/σ², with σ² = N₀/2 since γ² = 1.

And what was FOM_MLSD? The exact same thing. So in this case the MLSD achieves the matched-filter bound. What does that mean? We derived the matched-filter bound assuming no ISI. So in this case, with just one tap of ISI, the MLSD can completely overcome the ISI and give you performance equal to the ISI-free case.
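For completeness, here is a minimal Viterbi (MLSD) sketch for this two-state trellis (my own illustration; the helper name, tap value, and noise level are assumptions). The state is the previous symbol and each branch metric is the squared distance to the branch output s_k + α s_{k−1}:

```python
import numpy as np

# State = previous symbol; branch output = s_now + a * s_prev.
def viterbi_bpsk(z, a):
    cost = {-1.0: 0.0, 1.0: 0.0}            # path metric per state
    paths = {-1.0: [], 1.0: []}             # survivor sequence per state
    for zk in z:
        new_cost, new_paths = {}, {}
        for s_now in (-1.0, 1.0):
            best = None
            for s_prev in (-1.0, 1.0):
                c = cost[s_prev] + (zk - (s_now + a * s_prev)) ** 2
                if best is None or c < best[0]:
                    best = (c, s_prev)
            new_cost[s_now] = best[0]
            new_paths[s_now] = paths[best[1]] + [s_now]
        cost, paths = new_cost, new_paths
    return np.array(paths[min(cost, key=cost.get)])

rng = np.random.default_rng(2)
a, N0, N = 0.8, 0.4, 2000
s = rng.choice([-1.0, 1.0], N)
z = s + a * np.r_[0.0, s[:-1]] + rng.normal(0.0, np.sqrt(N0 / 2), N)
shat = viterbi_bpsk(z, a)                   # note: the start state is a guess
print(np.mean(shat != s))                   # should beat both ZF equalizers
```

This stores full survivor paths for clarity; a practical implementation would use traceback with a finite decision delay.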
That may not happen in general; it happens for this 1 + α z^-1. If you have some other channel M(z), it might happen that γ_MLSD can never equal γ_MF, in which case you know you will always suffer a loss compared to the ISI-free case. But in some cases it works out that γ_MLSD = γ_MF, which means you can get to ISI-free performance, provided you do MLSD; if you don't do MLSD, who knows what happens.

So that's the first figure of merit established for this example. Now, for the same setup, I want you to derive the structure of the zero-forcing linear equalizer, meaning write down that filter, and then derive the figure of merit. Try to use some DSP knowledge to avoid integration. How do you avoid it? From the DTFT, (1/2π) ∫_{−π}^{π} S(e^{jθ}) dθ is the coefficient of z^0 in the corresponding z-transform. So if I give you S_h(z), you don't have to integrate to find the arithmetic mean: form 1/S_h(z) and find its z^0 coefficient first; that equals the integral. For the rational case this can be done very easily. Of course, if your S_h(e^{jθ}) is not convertible to a rational z-transform, then you have to do the integral, but otherwise avoid that work.

The structure is really simple: z_k comes in, and you simply filter with 1/(1 + α z^-1) to get x_k. How will you write this filtering in the time domain? I just want a difference equation; that's the most useful thing for implementation. What's the difference equation relating x_k and z_k?

x_k = z_k − α x_{k−1}.

Note, minus α x_{k−1}, not z_{k−1}: it's an IIR filter, so the output terms end up on the left-hand side. This is how you might implement it in practice; you could convolve with the impulse response instead, but the recursion is the better way of doing it.

When will you have stability problems with this filter, for what values of α? I picked α on the real line between 0 and 1. When α becomes close to 1, like 0.9 or 0.99, you start having stability problems from the theoretical point of view, and from the practical point of view really bad effects because of quantization: ultimately you can't implement multiplication by 0.99 accurately, you represent it in some fixed point, and as the recursion keeps multiplying and you track so many previous values, your accuracy takes a severe beating; in fact the poles can shift. The way in which you implement it causes a lot of problems in this kind of recursion.

Then you do the slicing to find ŝ_k. Fine, that seems simple enough. Now for the figure of merit: on top you have d_min²(x), which is just 4, times 2/N₀; N₀/2 is your σ², so you can keep it written as σ². Then you need the arithmetic mean of 1/S_h(e^{jθ}). The nice way to compute it is to look at

1/S_h(z) = 1 / ( (1 + α z^-1)(1 + α z) ).
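A tiny check of the recursion (my own sketch; α = 0.9 is an assumed value): feeding a unit impulse through x_k = z_k − αx_{k−1} produces the impulse response (−α)^k, confirming that the inverse of this one-tap FIR channel is IIR with a pole at −α.

```python
import numpy as np

# Impulse response of 1/(1 + alpha z^-1) via the difference equation.
alpha, K = 0.9, 60
z = np.zeros(K)
z[0] = 1.0                                 # unit impulse in
x = np.zeros(K)
for k in range(K):
    x[k] = z[k] - alpha * (x[k - 1] if k else 0.0)

print(x[:4])                               # close to 1, -0.9, 0.81, -0.729
assert np.allclose(x, (-alpha) ** np.arange(K))
```

The slow geometric decay for α near 1 is exactly the long-memory, quantization-sensitive behavior described above.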
Then you have to compute the coefficient of z^0 in the expansion of this. For that you do a partial-fraction expansion, and it's useful to write everything in terms of z^-1. Take the αz common out of the second factor, 1 + αz = αz(1 + (1/α) z^-1), so

1/S_h(z) = z^-1 / ( α (1 + α z^-1)(1 + (1/α) z^-1) ).

Now expand in partial fractions over the two denominator terms, 1 + α z^-1 and 1 + (1/α) z^-1. What goes on top of the 1 + α z^-1 term? Cover that factor and evaluate the rest at the corresponding zero, z = −α (not z = +α, careful), and you get 1/(1 − α²). Such an expression has to come: I know that as α approaches 1 various things should go wrong, so something like 1/(1 − α²) must appear; otherwise nothing would fail to converge, and at α = 1 the answer should make no sense. The other coefficient works out to −1/(1 − α²), so

1/S_h(z) = (1/(1 − α²)) / (1 + α z^-1) − (1/(1 − α²)) / (1 + (1/α) z^-1).

So what's the coefficient of z^0? Be careful in the expansion, because the two terms expand differently: the region of convergence is α < |z| < 1/α, so the 1/(1 + α z^-1) term expands causally while the 1/(1 + (1/α) z^-1) term has to expand anti-causally; you can't expand both the same way. Go ahead and do this carefully, because it requires a decision about which way you expand each term. One term contributes to z^0 and the other does not: the anti-causal expansion starts at z^1, so the coefficient of z^0 is 1/(1 − α²). And that has to equal the arithmetic mean of 1/S_h(e^{jθ}); that's the simple DTFT formula again, the z^0 term is the integral. Go back and look at it.

So in the figure of merit I put AM(1/S_h) = 1/(1 − α²), and if you want you can write it as

FOM_LE = 4(1 − α²)/σ², with σ² = N₀/2,

the same σ² I had for the MLSD and the matched-filter bound. So you see: while the MLSD and the matched-filter bound give me 4(1 + α²)/σ², the zero-forcing linear equalizer gives 4(1 − α²)/σ², and as α tends closer and closer to 1 the ratio of these two figures of merit really blows up: the zero-forcing linear equalizer goes for a toss. At α = 1, what's the figure of merit? Zero: finished, error probability one half. You can't do anything; you might as well toss a coin and decide what s_k was, no point in doing all these things. So while this analysis might sound very approximate to you, in practice it's very useful: you can do it quickly, and it's no big deal to compute the mean of 1/S_h.
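You can check the partial-fraction answer two ways, by the integral and by summing the two one-sided series (my own sketch; α = 0.7 is an assumed value):

```python
import numpy as np

alpha = 0.7

# (1) The integral directly: AM(1/S_h) over one period.
theta = np.linspace(-np.pi, np.pi, 400_001)
Sh = np.abs(1 + alpha * np.exp(-1j * theta)) ** 2
am = np.mean(1.0 / Sh)

# (2) The z^0 coefficient: 1/(1 + alpha z^-1) expands causally as
# sum_k (-alpha)^k z^-k, and 1/(1 + alpha z) anti-causally as
# sum_k (-alpha)^k z^k, so [z^0] of the product is sum_k alpha^(2k).
coef_z0 = np.sum(alpha ** (2.0 * np.arange(200)))

print(am, coef_z0, 1 / (1 - alpha ** 2))   # all close to 1.9608
```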
Once you have a reasonable model you can find this quickly, and you'll know pretty much how well your linear equalizer will do; it's a good model, and it tells you the way things will actually work. So it's pretty good that way.

The next step is the decision-feedback equalizer, and with that we'll close this class: the ZF-DFE. Once again I want you to do the structure and the figure of merit. The structure works out as: z_k into a summing junction with a plus and a minus input, then the slicer, and in the feedback path the filter M(z) − 1, which here is just α z^-1, acting on the decisions. Call the summing-junction output x_k and the slicer output ŝ_k. In the time domain,

x_k = z_k − α ŝ_{k−1}.

Notice the subtle difference between this and the linear case: there you had −α x_{k−1}, here you have the decision ŝ_{k−1}, not x_{k−1}, which is a different thing, and that will make all the difference in the world, assuming ŝ is accurate. If you work it out, the figure of merit here is simply

FOM_DFE = 4/σ².

Notice this figure of merit; I want you to pay attention, there's a lot of message here. What did the MLSD get? 4(1 + α²)/σ², and the matched-filter bound was the same 4(1 + α²)/σ². The DFE gets just 4/σ². So the DFE is losing a little compared to the MLSD. It seems to be doing the same kind of thing, so why is it losing? That's where you have to look closely at the ISI and the distance the ISI provides. The MLSD actually uses the ISI to the best possible effect, while the DFE gets rid of the ISI and then decides based only on the individual symbols; that's the loss of optimality in the DFE. The zero-forcing DFE is not as optimal as the MLSD: the MLSD takes care of the entire sequence.
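The three closed-form figures of merit from this example can be tabulated in a couple of lines (my own summary of the results above, in units of 1/σ² with γ² = 1):

```python
# Figures of merit for BPSK over M(z) = 1 + alpha z^-1, gamma^2 = 1,
# in units of 1/sigma^2 where sigma^2 = N0/2.
for alpha in (0.1, 0.5, 0.9):
    fom_mlsd = 4 * (1 + alpha ** 2)   # equals the matched-filter bound here
    fom_dfe = 4.0                     # cancels the ISI, ignores its energy
    fom_le = 4 * (1 - alpha ** 2)     # pays the noise-enhancement penalty
    assert fom_mlsd >= fom_dfe >= fom_le
    print(f"alpha={alpha}: MLSD={fom_mlsd:.2f}  DFE={fom_dfe:.2f}  LE={fom_le:.2f}")
```

The gap between the three grows with α: at α = 0.9 the MLSD sits at 7.24, the DFE at 4, and the linear equalizer at only 0.76.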
The MLSD extracts everything that the sequence can give you, while the DFE throws part of it away: the α ŝ_{k−1} term it subtracts actually holds some information, and in the trellis you might be able to use it, but to keep the complexity down the DFE simply throws that information away and makes a decision on the current symbol alone. Because of that there's a small hit, but it's not as bad as the linear equalizer: the DFE doesn't pick up the 1/(1 − α²) noise enhancement or the instability in some cases; it just gets 4/σ², losing only the extra benefit to be gained from using that extra ISI information.

So this is a nice example in which the three figures of merit nicely tell you the story of what's actually happening. The MLSD is as good as the matched-filter bound. The zero-forcing linear equalizer pays a penalty because of the inversion: it inverts and gets rid of the ISI, but it filters the noise with the inverse too, and because of that noise enhancement you get a problem. The DFE doesn't have the linear equalizer's noise-enhancement problem, but it doesn't use the ISI the way it could be used; it just gets rid of it and makes a symbol-by-symbol decision, which is suboptimal, and you get a slightly lower figure of merit.

A nice assignment to do next is the following: repeat the same thing for the simplest all-pole M(z), first order. Pick γ² = 1, that's not a problem, and take

M(z) = 1/(1 + α z^-1)

(the sign of α doesn't matter), once again with α between 0 and 1, and once again x ∈ {+1, −1}. Go through and derive the three structures.
But don't start with the MLSD. What will happen with the MLSD? You pretty much can't do it: this is not a finite-memory channel, the impulse response goes on forever, so the MLSD is ruled out for this case. But you can compute the matched-filter bound, that's easy, and you can do the linear equalizer and the decision-feedback equalizer, and figure out whether there will be problems in any of them and which one is better. It'll give you a nice idea of what happens when you have a pole in your channel response versus a zero in the channel response, and you can slowly build up some intuition for how each equalizer suboptimally equalizes. So we'll stop here; we'll pick up again Monday morning.