So far in our GNU Radio related exercises, we have restricted our attention to symbol error rates. However, as we have seen, symbol error rates and bit error rates can differ significantly. In this lecture, we are going to build an approach to measure the bit error rates for various constellations and confirm that these rates are consistent with the theoretical results we derived earlier in class. We will generate random symbols, check which bit errors the symbol errors translate to, and confirm that these bit errors match the theoretical evaluation we did in class. Before we commence our discussion on bit error rates, let us make an observation about the Gaussian noise source that is provided with GNU Radio. Let us first take the Gaussian noise source by doing Ctrl+F or Cmd+F and typing "noise", and, for reasons that will become clear shortly, let me also grab a second noise source by copy-pasting the first with Ctrl+C or Cmd+C and Ctrl+V or Cmd+V. Let us make the first noise source a float; both of them have amplitude 1, except that the first noise source is float while the second is complex. Now we want to view the distributions of the two, preferably on the same histogram, except that I am only going to take the real part of the complex noise source. So let me do Ctrl+F or Cmd+F and grab a Complex to Real block; that is the easiest choice, since it just takes the real part.
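Before wiring anything up, the comparison we are about to make can be previewed in plain numpy. This is only a sketch under an assumption we will then verify with the histogram: that a complex noise source with amplitude A produces total variance A², split equally between the real and imaginary parts. The variable names here are my own, not GNU Radio's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
amp = 1.0  # the "Amplitude" parameter of the noise sources

# float noise source: a real Gaussian with variance amp**2
real_noise = rng.normal(0.0, amp, n)

# complex noise source (assumed behaviour, to be confirmed by the
# histogram): total variance amp**2, split equally between the real
# and imaginary components, i.e. amp**2 / 2 in each
cplx_noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * amp / np.sqrt(2)

print(np.var(real_noise))       # close to 1.0
print(np.var(cplx_noise.real))  # close to 0.5: a narrower, taller histogram
```

If the assumption holds, the real part of the complex source should show roughly half the variance of the float source, which is exactly the narrower, taller bell we are about to observe.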
Now that I have the Complex to Real, I will add a throttle: Ctrl+F or Cmd+F, grab a Throttle, place it here, make it float, and connect it. Next let us grab a QT GUI Histogram Sink: Ctrl+F or Cmd+F, type "hist". We would like to view the histograms of two signals here, so double-click the block, increase the number of points as always, leave the number of bins at 100, make the x-axis go from -4 through 4, because we know that a Gaussian has most of its mass between -3 and 3, and set the number of inputs to 2. Now we connect the two sources and we are ready to visualize the flow graph. As you can see, Data 0 is the sequence that comes from the real noise source, while Data 1, in red, is the real part of the complex noise source. It is evident that the second one, from the complex noise source, is narrower and taller, while the first one is fatter and shorter; if you hide one of the traces, you can see the difference clearly. Let us make things a little more accurate: I will add one more point of resolution here and also increase the sampling rate to 192000. Visualizing again, it is very evident that the red one is taller while the blue one is shorter and fatter. What does this mean? Recall the expression for a zero-mean Gaussian density: (1/(σ√(2π))) e^(−x²/(2σ²)). Whenever σ is smaller, the Gaussian is narrower and taller; whenever σ is larger, the Gaussian is fatter and shorter. This indicates that the real part of the complex source is Gaussian noise with a lower variance. Why is this? The reason is that the complex Gaussian noise source yields complex Gaussian noise whose real and imaginary parts each carry half the variance. In other
words, this particular red part has a variance of one half, and the imaginary part likewise has a variance of one half. If you do not believe me, we can use the fact that this complex Gaussian noise source consists of two independent Gaussians: if I take the independent Gaussian from the real part and the one from the imaginary part and add them, I should get a variance similar to the float source. Let me show you. Let us first remove this and grab a Complex to Float block so that we get the real and imaginary parts separately. The real and imaginary parts are independent, so if I now get an adder (Ctrl+F, type "add"), double-click it to make it float, and connect the two parts into it and the sum into the histogram, you will see that the two traces roughly overlap, indicating that both have zero mean and the same variance. This confirms the hypothesis that the real and imaginary parts making up the complex Gaussian noise source have variance one half each. In this case the variance is half because the amplitude I have chosen is 1. If I had chosen the amplitude to be 4, the total variance would be 16, so the real and imaginary parts would have variance 8 each; if I choose the amplitude to be √2, the total variance is 2 and each part has variance 1. This is something you need to be aware of: whenever you do evaluations with real-valued constellations such as BPSK, PAM4 or PAM16, the noise you add should be real, because the imaginary part of complex noise does not contribute, and you may otherwise make slight errors in your bit error rate or symbol error rate computations. In order to compute bit errors we need to evaluate XORs, so we will take a small detour and check how XORs can be done in Python, which will let us build a nice Python block for our XORs. So let us
first import numpy. After importing numpy, let me create a small array of integers; these integers can be thought of as message values of an M-PAM system. For example, with PAM4 our messages would be one of 0, 1, 2 or 3. So the transmitted messages are 1, 0, 1, 2, 1, 3, which correspond to the bit sequence 01, 00, 01, 10, 01, 11. Now let us introduce some bit errors: assume the first 1 is flipped to 0, so 01 becomes 00, and in the final 3 both bits are flipped, so 11 also becomes 00. Let us check: the 01 bit pattern here became 00 and the 11 bit pattern became 00, meaning that in our 12 bits we have introduced 3 bit errors. To find the bit errors, we perform an XOR of the two arrays, and numpy allows a bitwise XOR via the bitwise_xor function. If we inspect the error pattern, we can clearly see where the errors are: most positions show no bit error, there is exactly 1 bit error where 01 became 00, and there are 2 bit errors where 11 became 00, which is why you see a 3 (binary 11) in that position. Now we are just going to count the number of bit errors from this particular sequence.
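The counting procedure just described can be written out as a short numpy session. The array values match the example above; the variable names tx, rx, err are my own:

```python
import numpy as np

# transmitted PAM4/QPSK messages and a corrupted copy: the first 1 has
# been flipped to 0 (01 -> 00) and the final 3 to 0 (11 -> 00)
tx = np.array([1, 0, 1, 2, 1, 3], dtype=np.uint8)
rx = np.array([0, 0, 1, 2, 1, 0], dtype=np.uint8)

err = np.bitwise_xor(tx, rx)   # [1 0 0 0 0 3]: each set bit marks a bit error
bits = np.unpackbits(err)      # 6 bytes -> 48 bits
print(bits.sum())              # 3 bit errors in total

# keep only the 2 meaningful bits of each byte: index 6 (second-least
# significant) and index 7 (least significant), giving 12 indicators
per_bit = np.concatenate((bits[6::8], bits[7::8]))
print(len(per_bit), per_bit.sum())   # 12 indicators, 3 of them errors
```

The last two lines anticipate the reduction we carry out next: from 48 unpacked bits down to one error indicator per transmitted bit.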
Next, let us calculate the total number of bit errors, which, by the way, is not 1 + 3 = 4: the 1 corresponds to 1 bit error while the 3 corresponds to only 2 bit errors. What we will do is use a built-in numpy function called unpackbits. unpackbits requires unsigned bytes, so we first convert our array to that type. If you run this code, each of our 6 values is converted to its 8-bit pattern, so you get an array of length 48, which you can verify. What are these 48 values? They are the bitwise representations of the bytes: the 1 that was sitting here becomes a single set bit, and the 3 becomes two set bits. Now you can just sum this array and get 3. You can practice this on other combinations and verify that it counts the number of changed bits; this is the approach we will employ to compute the number of bit errors in GNU Radio as well. One thing we wish to do is output bit errors aligned with the symbols that generate them. In other words, this particular error pattern came about from 6 PAM4 or QPSK symbols, and those 6 symbols correspond to 12 bits, so our goal is to output exactly 12 zeros or ones indicating whether each of those 12 bits is in error or not. The next task for us is therefore to reduce the 48-element array to a 12-bit pattern. That is very simple, because we only need the last two bits of each 8-bit group: we take the seventh bit of each of these and then the eighth bit of
each of these and put them together. Of course, the precise ordering of the bit error locations will not be preserved, but that is okay because we are just interested in histogramming them, and the order is not significant. One way to do this: let us call the unpacked error pattern UEP. We want the seventh bit and every eighth bit from there on, so we write UEP[6::8]; that gives us this error pattern, where the single 1 corresponds to the MSB of the 3 that got flipped. If we then do UEP[7::8], we get the LSBs: one 1 because 01 was flipped to 00, and another because the LSB of the 3 was flipped. Now we concatenate these using np.concatenate, and we have our bit pattern with 12 elements, so the number of error indicators corresponds precisely to the number of bits we are computing them for. The 7::8 is Python slicing notation: it gives every eighth element starting from index 7, just as 6::8 gives every eighth element starting from index 6; this is something you can check in the Python documentation. We will be rewriting this code in our Python block in order to compute the bit errors. Let us now rebuild our PAM based communication system, but on this occasion we will be looking at bit errors as opposed to symbol errors. So let us build our PAM4 bit error rate calculator. We will first begin, as always, with a Random Source: Ctrl+F or Cmd+F, get a Random Source; it will go from 0 through 4, and we will now make this a
byte. We will then grab a Constellation Encoder and place it here; it needs a constellation object, which we will call my_const. So Ctrl+F, type "const", and we get the Constellation Object block. By default the constellation object already holds a QPSK-like constellation, so we will leave it as is for now and just give it the ID my_const. Next we will add a Throttle: Ctrl+F or Cmd+F. We will add noise, but we always want control over the noise, so we first add a QT GUI Range: Ctrl+F, type "range", grab it, and call it noise_std as always; it starts at 0, goes to about 10, with a step of 0.01. Then Ctrl+F or Cmd+F, "noise source", grab our Noise Source and set its amplitude to noise_std. Now we add an Add block (Ctrl+F or Cmd+F, "add"), connect everything, and finally we view our constellation: Ctrl+F or Cmd+F gets us the Constellation Sink, and we will increase the number of samples to get averaging. The constellation sink shows us the constellation, and if we increase the noise you can see the constellation points becoming fatter because of the effect of noise. Our next task is to compute the bit error rates. Remember that in this case there are two bits for every symbol, so we cannot have a sync block; we need a block that outputs twice the number of samples that we input. So let us begin by adding our Python block: Ctrl+F or Cmd+F, type "block", grab the Python Block, double-click it and open it in the editor. Now we have to make some changes here. This is unlike the symbol error rate case, where one symbol gave you one symbol error indicator, a yes or a no, a zero or a one; in this application we have a QPSK symbol, which
means one symbol leads to two bit error indicators. Therefore we cannot use a sync block; we will have to use the so-called interp block. This ensures that the timing is matched: for every input symbol, or input symbol pair in our case, that you give, the block will output two values, zeros and ones, which we can use to histogram bit errors. We will get rid of the docstring, get rid of the example parameter we do not need, and change the base class from sync_block to interp_block. We could call this a QPSK BER counter, but we will take the rate as a parameter to make it work for other QAMs as well, so let us call it the general BER counter; it will work for PAM4 and several other constellations too. We will add a parameter interp_rate and set it equal to 2 so that it works for QPSK, but we can extend it to others. The in_sig can be two bytes, so we will say np.int8, np.int8, and the out_sig will also be an int8 stream consisting of zeros and ones. Note that this is not a decimation rate; it is an interpolation rate, I apologize. In the constructor we pass interp=interp_rate, then we say self.interp_rate = interp_rate, and we also call self.set_relative_rate(interp_rate); this is what GNU Radio will use to determine how many samples you are going to output. Now, in the work function, we need to write out the sequence of commands corresponding to computing the XOR, performing the unpacking, and then selecting only the right bits. So let us do that: np.bitwise_xor(input_items[0], input_items[1]) gives me the error patterns. The next task for us is to convert this to unpacked form, because right now a value such as 3 corresponds to two errors, since it is binary 11, and so on. So we are going to take the bit error
patterns and convert them with .astype('uint8'), with the type name in quotation marks; we will then take the seventh and eighth bits of each byte, concatenate them, and place them in the output sequence. Let us do that, and then spend a bit of time understanding what this code does. We have calculated the bitwise XOR between input_items[0] and input_items[1], and this marks where the bit errors are. This BER counter will work for QPSK, QAM16 and so on. If you use it for something like QAM16, the total number of messages is 16, which corresponds to four bits per symbol, and therefore the rate will be four. So, depending on the interpolation rate, which corresponds to the constellation's bits per symbol (for QAM16, for example, the interpolation rate will be four), we compute the error bits from the least significant bit towards the left and concatenate all of them. In fact, to get the correct number of output samples you have to loop exactly interp_rate times: for interp_rate 2, when i is 0 you take index 7 - 0, the last bit of every byte; when i is 1 you take index 7 - 1, the second-to-last bit of every byte; and you concatenate those error patterns and output them. This approach is a nice way to calculate the bit errors very effectively. Because we are writing our own bit error counter in GNU Radio, there are possibly more efficient and effective ways to do this. Let us now check that this bit error rate counter works correctly. When we save and exit, we will have a little block called the BER counter with interp_rate 2. Let us first play with this BER
counter by giving it some fixed values and seeing whether it does the right job. I am going to add a couple of Constant Sources: Ctrl+F or Cmd+F, "constant source". This constant source I need as a byte, and let us keep it at zero; then another Constant Source, also a byte, but let us make its constant a variable, say ber_pat. I connect both into the BER counter. This ber_pat should come from a range, so Ctrl+F or Cmd+F, "range"; we will double-click it, call it ber_pat, make it an int with default value 0, start 0 and stop 100 (the exact stop does not matter, and the int will be converted to a byte, so I do not need to worry). Finally I need to visualize this, so I will get a Time Sink: Ctrl+F or Cmd+F, "time sink"; place it here and change it to float. I also need a UChar to Float, which will take the unsigned character output and convert it to float; connect it, and we are on our way. Now if we execute this flow graph, you see that 0 and 0, when XORed, give zero errors everywhere. If instead we XOR 0 with 1, then 01 against 00 has one bit in error, so the output is up half of the time; if you do 2, that is 10, against 00, you again get a pattern that is up half the time and down the other half; and with 3, which is 11, both bits are flipped, so you get two errors, and the output is up all the time. So we have tested that this bit error rate counter functions correctly. Again, let us get rid of these constant sources; we'll
keep the UChar to Float and just put in a histogram. Now the next task for us is to actually get back the constellation by adding a decoder and then checking the amount of bit errors. So let us add a Constellation Decoder: Ctrl+F or Cmd+F, "decoder"; we get a Constellation Decoder which uses my_const. We connect the decoder here, its output here, and the original source here, and we are set; we now just need a histogram, so Ctrl+F or Cmd+F, "histogram". This histogram sink will connect here; we will increase the number of points, and the number of bins can be small, say 10, because we are only bothered about the values zero and one, so we will make the axis go from about -0.2 to 1.2. We are just counting the zeros and ones: ones are errors, zeros are no errors. Now let us execute this flow graph. We have a nice triangle standing at zero, indicating that there is no error, and if I increase the noise you can see that there is going to be a large number of errors. If you increase the noise too much, both bars reach the same height, indicating that the bit error rate is one half. Half is interesting: when it is half, a coin toss will also give you an equally good guess, so you have no information going through. Now let us increase the noise by a slight amount; as you can see, the error count for the QPSK constellation keeps going up and up. Let us also increase the number of points in the constellation display so that we get a better kind of blob. Now if we increase the noise, the constellation cloud grows, and as the noise standard deviation gets close to 0.2 or 0.3, you will start to see some of these bits crossing over the decision boundaries, and, as discussed in the lectures, you can see that you are starting to have some bit
errors. In this particular case, because we are using the built-in constellation, Eb here is Es/2: Es is essentially 1, so Eb is one half, and N0 is 1 if you choose your noise to have unit variance, because that is what you chose your noise to be. If you do the calculation, you can find the theoretical bit error rate without much difficulty. It is now evident that increasing the amount of noise lets you clearly observe more bit errors, while reducing the noise makes the bit errors fall. The bar at one indicates the number of bit errors, and by finding the fraction of bit errors that you encounter here, dividing by the total number of bits that are sent, you can get a good idea of the bit error rate of this system as well. In fact, this should be consistent with the formula we used in class to compute bit error rates: for example, if you set the noise close to 0.4 or 0.5, you can see this bar start rising, count the number of bit errors, and make a claim about the consistency of the BER formula. As a final step, let us also change this constellation explicitly to QPSK and see whether it has any impact; it is the same constellation, and you can see similar effects here: if you increase the noise, you get more errors. Now, before we close this particular session, I want to give you a small piece of advice, or a warning, in case we change this to something else like 16-QAM. In the case of 16-QAM our BER counter will still work; the only changes you need to make are that the random source should go from 0 through 16 and the BER counter's interpolation rate should be 4, because 16-QAM has 4 bits per symbol. Now if you run this, as you can see, you have a normalized constellation, and if you start increasing the noise by even a little bit,
even a small amount, you are going to start seeing bit errors, and for the same amount of noise as in the 4-QAM case you will see a much, much higher impact of noise in the 16-QAM case. So 16-QAM is definitely much less resistant to noise than 4-QAM, as expected. A similar exercise can be done for 8-QAM as well, but make sure you change the random source limit to 8 and the interpolation rate to 3. In this lecture we have seen how we can build a simple bit error rate testing mechanism in GNU Radio. As you have seen, with a small extension we are able to conveniently characterize bit errors without much issue, and we can visualize the bit error rate by comparing the fraction of correctly and incorrectly decoded bits using a histogram sink. In the next lecture we will put all these things together to come up with a simulation of a practical communication system. Thank you.
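As a final, optional sanity check of the consistency claim, here is a small offline numpy simulation, not part of the flow graph. It is a sketch under two assumptions: that the constellation is Gray-coded, unit-energy QPSK (as the built-in constellation object provides), and that a complex noise amplitude A yields total variance A², i.e. A²/2 per component; under those assumptions the theoretical bit error rate is Q(1/A):

```python
import numpy as np
from math import erfc, sqrt

def qfunc(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2.0))

rng = np.random.default_rng(0)
n = 200_000   # number of QPSK symbols
amp = 0.5     # noise-source amplitude: total complex variance amp**2

# Gray-coded QPSK with Es = 1: one bit on I, one bit on Q
bits = rng.integers(0, 2, size=(n, 2))
syms = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# complex AWGN: each component carries half of the total variance
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * amp / np.sqrt(2)
rx = syms + noise

# per-axis decisions; the fraction of differing bits is the BER
dec = np.stack(((rx.real > 0).astype(np.int64),
                (rx.imag > 0).astype(np.int64)), axis=1)
ber_sim = np.mean(dec != bits)
ber_theory = qfunc(1.0 / amp)   # Q(1/amp) under this normalization

print(ber_sim, ber_theory)      # both close to 0.0228 for amp = 0.5
```

For amp = 0.5 both numbers land near Q(2) ≈ 0.0228, which is the kind of agreement between histogram counts and the classroom formula that this lecture's experiment demonstrates.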