Welcome to this lecture on digital communication using GNU Radio. My name is Kumar Appaiah, and I belong to the Department of Electrical Engineering, IIT Bombay. In the previous lecture we saw the minimum mean square error (MMSE) equalizer and the potential advantages it has over the zero-forcing equalizer: while still being suboptimal, it takes into account the effect of noise while performing the equalization. In this lecture we are going to implement the MMSE equalizer in GNU Radio, compare its performance with the zero-forcing equalizer, and find out how each behaves in the lower and higher SNR regions. Let us first perform a simple MMSE equalization example without a special channel, where we just have a scaling-based channel. So let us begin. I will add a Random Source that goes from 0 to 4, with byte output. We will grab a Constellation Encoder, and along with it a Constellation Object; we will also grab a Throttle, because we are performing a simulation, and connect these up. We will use the default QPSK constellation, call the object my_const, and have the encoder point to the my_const object. Now we will introduce a simple gain factor: I press Ctrl+F (Cmd+F on a Mac), search for "range", and add a Range widget for the amplitude A, going from 0.01, in steps of 0.01, up to 10. Since we start at high SNR, let us keep the default value at 1. Next I search for "multiply", add a Multiply block, and add a Constant Source whose value is A; that settles the channel. Next we add unit-energy noise: Ctrl+F (Cmd+F) to grab a Noise Source, then Ctrl+F (Cmd+F) again for an Add block. Now we are going to do a simple analysis of two equalizers.
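Before going further in GRC, the scaling-only channel just described can be sketched offline. This is a minimal NumPy sketch, not part of the lecture's flow graph: the seed, sample count and gain value are my own choices, and the two equalizer gains (1/A for zero forcing, A/(A^2 + 1) for MMSE) anticipate the discussion that follows.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen here for reproducibility
N = 1000

# Unit-energy QPSK symbols over a scaling-only channel: y = A*x + n,
# with unit-variance complex noise, so the SNR is A**2.
x = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
n = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
A = 5.0
y = A * x + n

x_zf = y / A                  # zero forcing: just undo the channel
x_mmse = y * A / (A**2 + 1)   # MMSE: scale down according to the SNR

mse_zf = np.mean(np.abs(x_zf - x) ** 2)
mse_mmse = np.mean(np.abs(x_mmse - x) ** 2)
```

At high SNR the two gains coincide, since 1/A - A/(A^2 + 1) = 1/(A (A^2 + 1)) goes to zero, while as A goes to zero the MMSE gain itself goes to zero; both behaviours show up in the flow graph below.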
Of course, it is very simple. The zero-forcing equalizer just says: undo the channel. In this case our channel is just the value A, so we will do 1/A. For the MMSE, however, we have to be careful: we apply not 1/A but a factor scaled according to the SNR. How do we get it in this case? You can verify, using the discussion in class, that the scaling factor will be A/(A^2 + 1); if you do not believe me, just substitute into the formulas and you will get A/(A^2 + 1). Let us now see how this works out. First I am going to add a constellation sink, and this constellation sink will take two inputs: the first goes through 1/A, the second through A/(A^2 + 1). So let us add two multipliers: we copy this with Ctrl+C, Ctrl+V, and add a constant source (Ctrl+C, Ctrl+V) over here, and this particular constant source is 1/A. For the second path we add another multiplier (Ctrl+C, Ctrl+V) and another constant source, and this one is not 1/A but A/(A^2 + 1). Notice that when A becomes very large, that is, in the highest-SNR regime, this is approximately 1/A, because when A is very large you can ignore the 1. Alternately, you can also write this factor as 1/(A + 1/A), which gives the same result, but for convenience I am going to write it as A/(A^2 + 1). Let us now execute this flow graph. Of course, the value of A and the corresponding noise give what you see; as I start increasing A, you can see that the constellations start becoming more discernible. In fact, let us allow A to go to a higher value, say about 100, and make the step 0.1; that makes it better. So now let us set A somewhere here: this is the very high SNR regime, and in the very high SNR regime you can see that the zero-forcing and MMSE equalizers are very close,
which satisfies our intuition. But as you make the SNR lower and lower by reducing A, say to somewhere over here, you can slowly see that the blue points, the parts of the constellation that correspond to zero forcing, start shaking a little more and going further out. Let us actually do one thing: let us increase the number of points the constellation sink shows, increase the number of points this random source generates, and also increase the sampling rate to get somewhat faster results. If I execute this flow graph and set the SNR to a smaller and smaller value, you can see clearly that the blue points lie outside the red points, which means they end up spreading more, and as my SNR becomes lower and lower it is very evident that the blue points go further and further out. It is almost as if the MMSE takes into account the fact that there is noise and, because of that, moderates the 1/A division: when A is small, it says do not divide by A, divide by something like A + 1/A, which results in slightly better performance. The zero forcing, in contrast, says that even if A is 0.01, you still just divide by A. In the limit when A is close to zero, the MMSE equalizer says: I am giving you nothing, I am just going to output zero; while the zero forcing says: I am going to divide by A anyway, and give you some random results that may or may not make sense. As the SNR starts increasing, say if I make A something like 0.1, you can see that the red blob is still not that discernible; as you go to higher and higher SNR, things start to come back, and you can still see that the MMSE does not really allow the red points to go very far out while the zero forcing takes them out. This is an intuitive way to understand that the zero-forcing
approach of simply saying "I am going to cancel the channel, even if it is a single-tap channel, irrespective of noise" is not the best approach at low SNRs, while at very high SNRs, as SNR keeps increasing, you will see that zero forcing and MMSE essentially become one and the same. As you can see, the blue points are slightly outside, but not by much; as my SNR increases, say to a value close to 10, which corresponds to a high SNR, you can see that there is almost no difference, or only a very small one. This confirms the intuition that the zero-forcing and MMSE equalizers are one and the same at high SNRs, but at really low SNRs they take very different approaches to the problem. This was a simplistic picture. Let us now move on to the running example that we have been discussing, where the channel had the coefficients 1, 1/2, -1/2, and see how the zero-forcing and MMSE equalizers compare in that scenario. We will use the same running example, consisting of symbol transmission at the rate of half a symbol per second, where, because of the channel, the effective received samples are the symbols convolved with a p(t) that lasts between samples one and four and whose values are 1, 1/2 and -1/2. You may recollect that we wrote this in the form of a matrix equation which, made more compact, defines a matrix U whose columns are shifted copies of the channel taps, so that the received vector can be written as r = U b + w, where the symbol being detected, b_k, corresponds to the middle column of this U. This is something that I would like you to keep in mind. Let us quickly open a Python prompt and try to put this together, and compare the actual MMSE equalizer and the zero-forcing equalizer that we get. To now perform a comparison of the MMSE and zero-forcing equalizers, we will use a Python prompt and write out the
equalizers in Python and compare them. Let us begin. We first import numpy. Next we write our U matrix. Remember that the columns of U are shifted copies of the channel taps: the first column has the taps at the top followed by zeros, the second column shifts them down by one, and the last column has zeros followed by the taps. We will write the columns in row form and take the transpose, so U = np.array([...]).T; the .T ensures that we get the transpose. If you now look at your U, you can verify from the slides that this is indeed the U that you want. Let us revisit the zero-forcing equalizer that you evaluated earlier. If you remember, the zero-forcing equalizer c_zf can be evaluated using the formula U @ np.linalg.inv(U.T @ U) @ np.array([0, 1, 0]); in recent versions of Python, at least, you can use the @ operator for matrix multiplication, I can take the plain transpose because the Hermitian and the transpose are the same for a real matrix, and [0, 1, 0] is the column vector written compactly. If I evaluate c_zf, you will see that I get 5/8, 5/8, 5/8, -1/8 and 2/8, which was exactly the solution that you got earlier. Now let us evaluate the MMSE equalizer using a similar approach. Remember that to evaluate the MMSE equalizer you also need Es, which corresponds to the signal-to-noise ratio. Let us write out the formula for the MMSE equalizer and assume Es = 10, corresponding to 10 dB. Our R is U @ U.T + (1/Es) * C_w, and since our noise is i.i.d. across samples, we will just take the noise covariance C_w to be the identity. Then we just need to compute R inverse
times p, where p in this case is the middle column of U, so we compute inv(R) @ U[:, 1]. Remember we are interested in b_k: the first column of U multiplies b_{k-1}, the middle column multiplies b_k, and the last column multiplies b_{k+1}, if you remember the way we formulated this U. Therefore we have to multiply by the middle column of U, and that can be extracted by writing U[:, 1]: the colon says take all rows, and the 1 says take the second column. Now if we evaluate this — sorry, we have to evaluate the inverse as np.linalg.inv — we get something; let us store it as c_mmse and display it as well. Now let us increase the SNR to, say, 100, which corresponds to 20 dB. You can see that the coefficients undergo some changes: they start looking a lot like the zero-forcing equalizer. If you do not believe me, let us display the zero-forcing solution just below; you can see that they start becoming closer. If I increase the SNR to, say, 1000, you will start seeing that the coefficients look really close, and if I take one more step and make the SNR 10000, that is, 40 dB or so, you will see that you are really, really close to the zero-forcing equalizer. Why does this happen? The reason is that this expression, inv(R) @ p with p = U[:, 1], boils down to the same thing as the U @ np.linalg.inv(U.T @ U) @ np.array([0, 1, 0]) that you saw in the case of zero forcing, as the noise term's contribution becomes closer and closer to zero. You can verify this by working it out, but intuitively as well: when there is no noise, the optimal strategy is to just cancel the interference altogether. Therefore you can use this numerical approach to verify that the zero-forcing and MMSE equalizers are the same at very high signal-to-noise ratios. For one more step, let us just add another zero, so this
is 20 dB, then 30, 40, 50 dB of SNR; at 50 dB you are really, really close, and if you do not believe me, just subtract and compare the coefficients: they differ only in the fifth or sixth decimal place. Our next task will be to implement this in the flow graph to compare the zero-forcing and MMSE equalizers. To compare the performance of the zero-forcing and MMSE equalizers, we will build upon the flow graph that we used for the zero-forcing equalizer; we built this a couple of lectures ago. If you do not have this flow graph, I urge you to follow along in the zero-forcing GNU Radio lecture and build it, so that we can now take it forward and build the comparison between the zero-forcing and MMSE equalizers. Our first endeavour here will be to add a variable that corresponds to the matrix U. But before that, let us actually import numpy, because it will come in very handy: we grab an Import block and write import numpy as np. Now that we have numpy, let us create the variable: Ctrl+F (Cmd+F), type "variable", call it u, and set its value to np.array with the same columns that we wrote earlier, followed by the transpose. We now have the matrix ready. Our next endeavour will be to actually construct the filter. To do that we need the SNR: the noise standard deviation is our proxy for SNR, since it gives the noise power, and the signal power is 1 because we use a normalized constellation; therefore our Es is just 1/noise_std**2. Now, to actually compare the zero-forcing and MMSE equalizers, let us first remove the extra constellation points over here, and make this sink take only two inputs, the first being the zero-forcing
based constellation; we will then add the MMSE-based constellation as well. For the MMSE-based constellation we will create another pair of interpolating FIR filters that take in coefficients corresponding to the MMSE equalizer. We just create a copy of this with Ctrl+C, Ctrl+V. These coefficients are going to be determined at runtime, so we double-click: this is no longer going to be 2/8, 5/8, 5/8; we are first going to compute inv(R) @ p and then take the particular coefficients that we need. So in this case we write np.linalg.inv(u @ u.T + noise_std**2 * np.eye(5)) — noise_std squared, because Es is 1/noise_std**2 — and multiply by u[:, 1]. Let us make the block wider so that you can see this. We now have the inverse times u[:, 1]. Let us see what this does — there is a slight error: we cannot evaluate it, because the matrix is singular when noise_std is zero. No problem: let us just set noise_std to start at 0.01 so that we do not have the zero-related issue; now there should be no problem. This is done, but we do not need all the coefficients for this particular filter; we need the first, third and fifth coefficients. So what I am going to do is index them by first reversing them and then taking every other coefficient: [::-1] reverses the sequence, and then [::2] takes every other element. Let us check. As a sanity check — the noise is very low — you will see that the coefficients are close to 0.25, and the last coefficient is close to 0.625. Similarly, I am going to copy this interpolating filter, Ctrl+C, paste it, change it by double-clicking on it, and I am
going to take, instead of the first, third and fifth, the second and fourth coefficients. So I just put a 1 in front: [1::2] ensures that I get the second and fourth coefficients, which gives me -0.125 and -0.625. This essentially makes my MMSE equalizer very easy to build. Let me make everything visible for you. Now that we have our MMSE equalizer as well, let us connect the corresponding filters; again I need an adder, Ctrl+C and Ctrl+V, we add these two up, and then we can compare the constellations. Let us make noise_std a variable: we delete this, press Ctrl+F (Cmd+F), type "variable", name it noise_std, make it 0.1, and see what happens. With 0.1 you can see that the constellations look somewhat similar, but the interesting things happen only when our SNR starts becoming worse. Let us make it 0.3, and also increase the number of samples to 10000. Now you can see that both constellations have some spread, but if you look carefully, the spread of the red constellation is slightly lower. We can verify this by increasing the number of constellation points shown over here and then viewing it: the spread of the red constellation points is slightly smaller. This is because the MMSE equalizer accounts for the impact of noise. Let us increase the noise standard deviation to 0.4; we are entering lower-SNR territory now, and you can see that the blue points go much farther out than the red ones, indicating that the MMSE equalizer performs somewhat better — of course, not much better, just somewhat better. If you make the noise really high, say 0.8, now everything becomes bad, but still, because you are taking into account the impact of noise, you can see that the red points do not go much further out. Let us just
increase the sample rate a little bit so that you can see a little more. You can barely make out four blobs over here, indicating the four QPSK constellation points, and they are somewhat concentrated, but the blue one goes all over the place because the noise is amplified significantly. Of course, at really low SNRs the performance of the MMSE equalizer is also poor, primarily because it no longer removes the intersymbol interference that cleanly, but it is still marginally better than the zero-forcing equalizer. One remark: the MMSE equalizer minimizes the mean squared error, which is not the same as minimizing the symbol error rate; minimizing the symbol error rate optimally is achieved only by maximum likelihood sequence estimation, for which you need to perform something like the Viterbi algorithm. But as a rule of thumb: zero forcing works at very high SNRs, MMSE is acceptable at medium SNRs as well, the two are the same at high SNRs, and at very low SNRs both perform poorly; when the SNR becomes much, much lower, you really have to settle for a lower data rate or poorer performance. In this lecture we have seen the minimum mean square error equalizer, both for a single-tap channel and for a more complicated channel response. As you have seen, when the signal-to-noise ratio is very small, the minimum mean square error equalizer has several advantages over zero forcing, primarily because it limits the noise enhancement significantly. The minimum mean square error equalizer, while being suboptimal, still takes the noise variance into account and can therefore give better performance at low SNRs, while at very high SNRs it is roughly equal to the zero-forcing equalizer. As we have seen, the use of suboptimal equalizers is a key idea that simplifies receiver implementation, and
you can explore more on this in the various GNU Radio blocks that are available. Thank you.
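To close, here is a consolidated sketch of the Python-prompt session from this lecture. The exact taps and layout of U are my reconstruction from the spoken description (columns are shifted copies of 1, 1/2, -1/2, with the middle column multiplying b_k), so the specific numbers it produces may differ from those quoted above; the property it verifies — that the MMSE taps approach the zero-forcing taps as Es grows — holds for any such full-column-rank U.

```python
import numpy as np

# Channel matrix U, written column by column and transposed, as at the
# Python prompt (layout reconstructed from the lecture; an assumption).
U = np.array([[1, 0.5, -0.5, 0, 0],
              [0, 1, 0.5, -0.5, 0],
              [0, 0, 1, 0.5, -0.5]]).T
e = np.array([0.0, 1.0, 0.0])   # picks out the middle column, U[:, 1]

# Zero-forcing equalizer: U (U^T U)^(-1) e
c_zf = U @ np.linalg.inv(U.T @ U) @ e

def c_mmse(es):
    # MMSE equalizer: inv(R) @ p, with R = U U^T + (1/Es) I and
    # p = U[:, 1]; the noise is i.i.d., so its covariance is identity.
    R = U @ U.T + (1.0 / es) * np.eye(U.shape[0])
    return np.linalg.inv(R) @ U[:, 1]

# As Es grows (10 dB, 20 dB, ..., 50 dB) the MMSE taps drift toward
# the zero-forcing taps: the gap shrinks monotonically.
gaps = [np.max(np.abs(c_mmse(10.0**k) - c_zf)) for k in range(1, 6)]

# Polyphase split used in the flow graph: reverse the taps, then take
# every other tap for each of the two interpolating FIR filters.
taps = c_mmse(100.0)[::-1]
phase_even, phase_odd = taps[::2], taps[1::2]
```

The push-through identity inv(U U^T + eps I) @ U = U @ inv(U^T U + eps I) is what makes the eps -> 0 limit equal the zero-forcing solution, as argued in the lecture.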