Welcome to this lecture on digital communication using GNU Radio. My name is Kumar Appaiah, and I belong to the Department of Electrical Engineering, IIT Bombay. In the previous lecture we looked in some detail at aspects of scalar quantization. In particular, we saw how you can quantize a uniform random variable or a Gaussian random variable, what the impact was on the mean squared error, and how that connected to your choice of boundaries and quantization levels. In this lecture we are going to implement these ideas in a GNU Radio simulation. We are going to write a small Python block that not only calculates the quantized values but also gives you the quantization error, from which the mean squared error follows, once you specify the quantization levels. Let us go about this in the coming minutes. Before we evaluate our quantization performance on GNU Radio, let us first see how we can implement quantization in a very simple way in Python, so that we can embed it into our Python block on GNU Radio. We first import numpy by saying import numpy as np, and then we perform the quantization in a very simple manner using Python's and numpy's built-in array features. First, let us set our quantization points, that is, the quantization levels: q_levels = np.array([-0.75, -0.25, 0.25, 0.75]). If you remember, these are the optimal quantization levels for a uniform random variable between minus 1 and 1. For starters, let us generate 10 uniformly distributed random values between minus 1 and 1. We say x = np.random.rand(10); as you can see, x then holds 10 random values between 0 and 1, so to make them lie between minus 1 and 1 instead, you just say x = 2 * x - 1. If you print x, you get 10 uniformly distributed random values between minus 1 and 1. Now our aim is to take each of these values and quantize it to the closest quantization level.
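As a minimal sketch of the setup just described (the variable names are my own choice):

```python
import numpy as np

# Optimal 2-bit quantization levels for a uniform RV on [-1, 1]
q_levels = np.array([-0.75, -0.25, 0.25, 0.75])

# 10 uniform samples on [0, 1), mapped to [-1, 1)
x = 2 * np.random.rand(10) - 1
```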
For example, 0.35 will be closest to 0.25; similarly, minus 0.98 will obviously be closest to minus 0.75. Now, one obvious way is to loop over every one of these values, compare it with every one of the levels, and choose the minimum. But a better way is to use the array features of numpy to do this in one shot. What we will do is essentially repeat the array x 4 times, repeat the q_levels 10 times, perform a subtraction, and take the minimum for each row, all in one shot. Let us see how this can be done. If you look at x, it is an array of shape (10,). Now let us repeat it into 4 columns. How do I do that? If I say x.repeat(4), it repeats in a linear fashion, which is not what we want: each value simply appears 4 times in a row, one after another. Instead (you could also reshape, but this is cleaner), we wrap x in a list to make it two-dimensional and say np.repeat([x], 4, axis=0). If you do this, you get an array with 4 rows and 10 columns, in which the 10 values of x are repeated 4 times. If you now take the transpose, you get the same 10 values of x, each repeated across the columns. So, for example, if you print it, you see 0.359, 0.359, ..., 0.344, 0.344, and so on. Now what I am going to do is subtract minus 0.75 from the first column, minus 0.25 from the second column, plus 0.25 from the third column and plus 0.75 from the fourth column, take the absolute value, and take the minimum across each row, because that will give me the closest quantization point.
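The repetition step above can be sketched like this (a small illustration with three values; the names are mine):

```python
import numpy as np

x = np.array([0.35, -0.98, 0.44])      # three sample values for illustration

# x.repeat(4) flattens: each entry appears 4 times in a row -- not what we want
flat = x.repeat(4)                      # shape (12,)

# Wrapping x in a list makes it 2-D (1 x 3); repeating along axis 0 gives 4 x 3
stacked = np.repeat([x], 4, axis=0)     # shape (4, 3)

# The transpose is 3 x 4: each row holds one sample repeated across the columns
x_reps = stacked.transpose()            # shape (3, 4)
```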
Let us begin doing that. Our q_levels are minus 0.75, minus 0.25, 0.25 and 0.75. Let us do the same thing with them: np.repeat([q_levels], 10, axis=0). Now we have the repeated levels and the repeated values, so all I need to do is subtract and take the absolute value. Let us give these some names: I will call the repeated levels ql_reps and the repeated values x_reps. If I now say np.abs(ql_reps - x_reps), I get an array of size 10 by 4, and I just need to find the index of the minimum value on each row. To do that I say np.argmin; for np.argmin you just have to specify the axis, that is, whether it works along rows or columns. If you say axis=1, you get exactly what you need: the index of the minimum element in each row. And if you now index with it, q_levels[np.argmin(...)], you get the quantized values. To confirm that this is indeed correct, we can compare with the original array: 0.35 maps to 0.25, which is correct; minus 0.98 maps to minus 0.75; 0.44 maps to 0.25, which is closer than 0.75 of course; minus 0.24 maps to minus 0.25; and so on. So you can see that in just one line of code you are able to perform the quantization very effectively. We will be implementing this idea when we build the quantization block on GNU Radio as well. Let us now move to GNU Radio to implement this code; let us just revise the lines first: we have our random values x, our q_levels, our x_reps, our ql_reps, and the quantized output.
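The lines just revised can be collected into one helper; this is a sketch with my own function name, and the same logic goes into the GNU Radio block's work function later, where x plays the role of input_items[0]:

```python
import numpy as np

def quantize(x, q_levels):
    """Map each sample of x to its nearest level; return (quantized, error)."""
    q_levels = np.asarray(q_levels)     # guard against a plain Python list
    x = np.asarray(x)
    ql_reps = np.repeat([q_levels], len(x), axis=0)             # shape (N, L)
    x_reps = np.repeat([x], len(q_levels), axis=0).transpose()  # shape (N, L)
    xq = q_levels[np.argmin(np.abs(ql_reps - x_reps), axis=1)]  # nearest level
    return xq, x - xq                   # error is x minus x-hat
```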
Let us now begin our implementation of the quantization we just discussed on GNU Radio. To review the quantization, we will look at both the quantized values and the errors. First, let us grab a noise source: press Ctrl+F (Cmd+F on a Mac), type "noise source", and we will perform the quantization first with a uniform random variable. Double-click the noise source, set the distribution to uniform, the amplitude to 1, and the output type to float. Let us actually see the kinds of values this takes by adding a histogram. So, Ctrl+F (Cmd+F) again: first a throttle, which we make a float throttle, and then a histogram sink. Executing this flow graph gives us values between minus one and one, uniformly distributed. If you want to make sure it is indeed uniform, you can take more points; you will see that the histogram is reasonably flat, because roughly the same number of samples falls into each bin.
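The flatness check can be mimicked offline as well (a sketch; the seed and bin count are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
samples = 2 * rng.random(10_000) - 1                       # uniform on [-1, 1)
counts, _ = np.histogram(samples, bins=10, range=(-1, 1))
# with 10 bins, each should catch roughly 1000 of the 10000 samples
```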
Okay, let us now go about performing our quantization. Let us also add a time sink: Ctrl+F (Cmd+F), type "time sink", and set it to float as well. We can view the uniform values to check that they make sense; this looks uniformly distributed if you just look at the amplitudes, and we can now move ahead. We will build a quantizer that takes one argument holding the quantization levels. Let us use a Python block: Ctrl+F (Cmd+F), type "python", and grab the Embedded Python Block. We will change its example_param into a list that contains the quantization values. Say "open in editor", and now we perform our usual corrections: remove the comments, call the block our quantizer, give it a float input, and let us say it gives two float outputs, one being the quantized value and the other the error. We give out the error as well because we can histogram it to see how the quantizer behaves. Instead of example_param we will call the parameter q_levels, and change it to q_levels throughout. Now we are ready to implement the quantizer, using the same approach as in the code we wrote. We first say q_levels = self.q_levels, just for convenience, so that I do not have to type self.q_levels each time. Then ql_reps = np.repeat([q_levels], ...), repeated as many times as there are inputs, so len(input_items[0]), because that is the number of input values, with axis=0. Similarly, x_reps = np.repeat([input_items[0]], len(q_levels), axis=0), again with axis=0, followed by a transpose. Our quantized values, output_items[0], are then just obtained by indexing q_levels, and we will make sure q_levels is a numpy array so that we do not have any issues with the
indexing; if it is a standard Python list, there could be some issues. We say np.argmin, and this np.argmin needs axis=1 because we are performing it row-wise, applied to np.abs of ql_reps minus x_reps. The error is the second output, so output_items[1][:] is very simple: we can just say input_items[0] minus output_items[0], which is just x minus x-hat. A couple of remaining changes are needed to make the code work. First, rename the stored parameter as self.q_levels, because in the work function we are accessing self.q_levels, and remove the unnecessary comment about the example parameter. Finally, input_items[0] must be placed within square brackets when we replicate it, because np.repeat needs it as a row of a two-dimensional array. With this we are ready. Say OK, set the quantization levels to minus half and half, and execute the flow graph; you will see the quantizer working. To check that it is correct, switch to the stem plot and stop the flow graph temporarily. If you zoom in, you can clearly see that whenever the blue value is negative, that is, between minus 1 and 0, the quantized value is minus half, which is correct; whenever the blue value is positive, between 0 and 1, the quantized value goes to plus 0.5, which makes complete sense. The error lies between minus half and half: say your random value is a positive number between 0 and 1, so it gets quantized to 0.5; a value close to 1 quantized to 0.5 gives an error of plus half, and a value close to 0 quantized to 0.5 gives an error of minus half.
Let us now see how this performance changes when you have more bits. We can just take a copy of this quantizer with Ctrl+C, Ctrl+V and connect the same input. We will make another time sink, give it three inputs, and make another histogram with two inputs. This new quantizer we will set to minus 0.75, minus 0.25, 0.25 and 0.75, which is the optimal two-bit quantizer for a uniform random variable between minus one and one. We connect the original output to the time sink along with the quantized value and the error. Now if you execute the flow graph, you will see that the error becomes narrower; it is naturally between minus 0.25 and 0.25. If you zoom in on the waveforms and hide the red one, you can see that the green curve is a more faithful reproduction of the original waveform, and the error is definitely lower. Let us add one more bit. The levels are easy to write down: minus seven upon eight, minus five upon eight, minus three upon eight, minus one upon eight, and then one upon eight, three upon eight, five upon eight and seven upon eight. With this, let us inspect what we get. As you can see, the error becomes even narrower, because it now lies between minus 0.125 and 0.125, which you can verify by changing the number of bins. If you look at the green curve, it is now quite close to the blue one; if you want to plot the difference as well, you can just subtract and plot it, and I am leaving that to you as an exercise. Let us also do one more thing: let us see how the error looks for a Gaussian. For that I am going to add a second noise source in parallel and perform the same steps.
We copy the blocks with Ctrl+C, Ctrl+V and change the new noise source to Gaussian. We then add the same instrumentation, again with Ctrl+C and Ctrl+V, and connect the Gaussian source to these blocks. However, let us make this quantizer the optimal one-bit quantizer for a Gaussian: from the lecture you may recall that the optimal level is the square root of 2 upon pi, so we can try setting that value and see how the performance is. We choose minus 0.798 and plus 0.798, because this value is close to root of 2 upon pi, and we will leave the second quantizer as is and see how the error looks. At the time sink we connect the original source, and we are set. Let us also add some labels: I will copy a label and call the first pair "uniform", then copy again and call the other pair "Gaussian". Executing the flow graph, this is how the waveform looks for the Gaussian and this is how the histogram looks. One interesting thing is that the error starts looking close to uniform. Let us now set the level to 0.1, just for fun, and see what happens. It is a poor quantizer, but we will see what the impact is. You can see that the error is now quite bad, because the quantized value is very far from the real value: the blue error histogram is much fatter, and the error amplitude reaches quite high values. The quantizer for the uniform source, of course, still seems to do well; if we hide its traces, you can see what the Gaussian error looks like on its own. Now let us increase this value to, say, minus one and one.
So this quantizer is at minus one and one, and for the other one let us take the optimal one-bit quantizer, minus 0.79 and 0.79, and compare the two. Obviously the second quantizer should work better; let us see what happens. You can see that the second quantizer indeed works better, because its error footprint is much lower: it incurs a much smaller error than the first one. Even in the waveform view, the green and red curves look very close, but the green one tracks the blue curve better, because the signal amplitudes are not high very often, while the red one assumes that the amplitudes are higher. To get an even clearer picture, set the levels to minus 2 and 2. This is a very bad quantizer: the blue error histogram becomes very wide, there is a lot of error, and you can clearly see a lot of overshoot in the red curve, while the green and the blue remain comparatively close. In this manner you can verify the performance of quantizers by writing a simple piece of quantization code, and by adding more and more bits you can improve the performance. If you want to confirm this for a Gaussian as well, add those extra levels for the Gaussian too, and you will see that the blue error curve becomes very narrow. So a uniform quantizer, even for a Gaussian source, performs reasonably well, although you may be wasting a few bits because you are not taking the distribution into account. In this manner you can easily verify the performance of various quantization algorithms. In this lecture we saw how you can use GNU Radio to good effect to perform quantization. We built a simple quantization block in which you specify the quantization levels, and GNU Radio then uses this block to perform minimum-distance
mapping to these levels, and also lets you observe the error. We saw, for both uniform and Gaussian sources, that if you deviate from the optimal quantization points, your error performance degrades significantly. Therefore, choosing the correct quantization levels definitely improves your quantization performance by minimizing the quantization error. Thank you.
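As a closing sketch (my own, not from the flow graph), the effect of deviating from the optimal one-bit level for a Gaussian can be checked numerically: with levels at minus a and plus a on a standard Gaussian, the mean squared error is E[x^2] - 2a E[|x|] + a^2 = 1 - 2a sqrt(2/pi) + a^2, which is minimized exactly at a = sqrt(2/pi), about 0.798, the value used in the lecture.

```python
import numpy as np

def one_bit_mse_exact(a):
    """Closed-form MSE of the quantizer {-a, +a} on a standard Gaussian."""
    return 1 - 2 * a * np.sqrt(2 / np.pi) + a ** 2

def one_bit_mse_mc(a, n=200_000, seed=7):
    """Monte Carlo estimate of the same MSE; each sample maps to a*sign(x)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    return np.mean((x - a * np.sign(x)) ** 2)

a_opt = np.sqrt(2 / np.pi)   # the optimal level from the lecture, about 0.798
```

Evaluating one_bit_mse_exact at 0.798, 1 and 2 reproduces the ordering seen in the histograms: the further the levels drift from 0.798, the wider the error.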