This is lecture 16. This week we will have two lectures, today and tomorrow; I am out on Wednesday and Thursday, so there will be no class on those days. The last thing we saw in the previous class was the square-root raised cosine. I will write it down again in a moment, but before that I want to draw the whole system once more; it is good to keep reminding ourselves where everything fits in. This is the block diagram you should keep in mind when thinking about ISI and all these things. You start with a vector b of bits, b_0 through b_{nl-1}, that is, n times l bits, and a constellation script-X of size 2^n; that is how I am imagining the constellation. Mapping the bits gives you a sequence of, in general, complex numbers a_k + j b_k; imagine l symbols here. Next comes what I called the transmit filter g(t), also sometimes called a pulse-shaping filter, since it shapes your pulse. There are various ways of picking it: first, it has to fit into the bandwidth of the channel under consideration, and it is also good to have g(t) and g(t - kT) orthogonal. If you do all that, you will have no ISI; it is almost as if you are dealing with the same picture as before. I will qualify that more shortly, but that is the transmit filter. In general you can take g(t) to be a square-root raised cosine filter: its frequency response is the square root of the raised-cosine spectrum.
The only situation this does not need to cover is the very first setup we saw. What was that setup? The signaling time T was extremely large compared to 1/W; I was choosing a very, very small bandwidth. In that case, what was g(t)? It was a constant between 0 and T, so we could choose a very simple g(t). Now, when we want to push the signaling rate close to the bandwidth, we cannot do that anymore; we have to deal with ISI, and if we also want to get rid of ISI, we need the square-root raised cosine. That is the setup. What do you do next? An up-conversion and then a down-conversion; I am simply going to omit those from this picture for the sake of space. Some noise gets added along the way. I should also put a channel here; since I did not draw the up- and down-conversion, I will write the channel in baseband, assumed to have bandwidth from -W/2 to W/2. This is a very ideal channel: you can assume zero phase, or linear phase after adjusting for the delay. Then noise gets added. Assuming orthonormality is satisfied for g(t) and g(t - kT), all I have to do at the receiver is matched filtering: filter by g*(-t) and sample every T seconds. If it were not true that g(t) and its T-shifts are orthogonal, this would no longer be optimal and you could not do these things.
You would have to do something else; but since I chose g(t) to be a square-root raised cosine and chose my bandwidth suitably, I know I can filter by g*(-t) and sample every T. What will I get? I can be sure that the k-th sample is s_k plus noise. Without those choices, I would not know what I was getting. A few more things to point out. When I matched-filter by g*(-t) and sample, I am doing correlation with an orthonormal basis. Which basis? g(t), g(t - T), and so on. Can I choose any other orthonormal basis for the same set of signals and correlate with that instead? Will I lose optimality in any way? As long as the set is orthonormal you should not lose optimality, but what will you lose? If I choose some other orthonormal basis instead of {g(t - kT)}, I am no longer sure the ISI will go away, so I might have to do a more complicated detection. The sample may not be just s_k + n_k; it might be s_k plus some constant times s_{k-1} plus some other constant times s_{k+1}. Then you need a complicated detector, and we do not yet know how to do detection for that; we might see it later on. For now we only know how to detect when only noise gets added to the symbol. Since I chose my orthonormal basis to be {g(t - kT)}, I only have to do this very simple detection, because there is no ISI; you can run the simple detector independently for each k. With ISI I could not do this, and it would not be optimal anymore.
So you should know which part of the receiver is optimal for what reason. You do not have to take each block as given: you could choose another orthonormal basis, as long as you know how to detect the resulting symbol values. That is the main thing to keep in mind. Ultimately this gives you b-hat. To complete the picture, let me write down the square-root raised cosine expression, since one could pick that over anything else:

g(t) = (4β / (π √T)) · [ cos((1+β)πt/T) + (T / (4βt)) · sin((1−β)πt/T) ] / [ 1 − (4βt/T)² ].

This is a fairly complicated-looking expression compared to what we had before. What is T? That is important: we choose 1/T = W/(1+β), and this holds for any β between 0 and 1. Do not ignore this; it is an important design choice you are making, and you cannot happily throw it out. The signaling rate (symbol rate) 1/T is chosen to be W/(1+β). You could choose 1/T = W, i.e., β = 0, in which case this g(t) becomes a sinc. There was some question as to whether that works out; I think in the end people agreed that it does, and hopefully you see it. Like I said before, when T is very large, when you choose your signaling rate to be extremely small, you can get away with a g(t) that is flat between 0 and the symbol time T. That is good in one way: it simplifies your correlator, and even your transmitter. The square-root raised cosine is more complicated in the sense that every T seconds you are producing a signal that lasts for a long time.
This pulse will not be causal, so you have to shift it to the right by a suitable amount to make it causal. Maybe it lasts for 4 or 5 symbol intervals; maybe that is too few and it is 10. That is fairly complicated when you think in terms of implementing the transmitter: the signal you produce for each symbol lasts for, say, some 10 symbol times. You have to choose that number. Why did I say 10 and not 100? You have to plot the pulse and see after how many symbol times it becomes small enough to ignore; 10 seems like a good number to pick. So it complicates your transmitter: whenever you transmit a symbol, you have to add not only that symbol's contribution but also the tails of the 10 previous symbols. But since I chose my pulse shape to be specifically a square-root raised cosine and I correlate at the receiver with g*(-t), the sample s_k that I get at kT contains only the symbol corresponding to that particular instant. It is a somewhat complicated picture, but I also drew a simple picture to show why the overall pulse goes to 0 at every multiple of T and therefore picks up no contribution from neighboring symbols; that is a simple way of seeing why it works out. I also asked another question at that point: why do we need the low-pass filter at the receiver at all? If you do not have a low-pass filter, the noise will go through the roof: when your noise has a very large bandwidth compared to your signal and you do not filter, the noise power overwhelms you. Those are things to keep in mind.
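To make the "plot it and see" suggestion concrete, here is a minimal Python sketch of the square-root raised cosine pulse written above (unit-energy normalization; the function name and the specific T, β values are my choices, not from the lecture). The two special points where the closed form is 0/0 are replaced by their limits.

```python
import math

def srrc(t, T, beta):
    """Square-root raised cosine pulse, unit energy, symbol time T,
    roll-off 0 < beta <= 1.  Limit values are used at the two points
    where the closed-form expression is 0/0."""
    if t == 0.0:
        return (1.0 / math.sqrt(T)) * (1.0 - beta + 4.0 * beta / math.pi)
    if abs(abs(t) - T / (4.0 * beta)) < 1e-9:
        return (beta / math.sqrt(2.0 * T)) * (
            (1.0 + 2.0 / math.pi) * math.sin(math.pi / (4.0 * beta))
            + (1.0 - 2.0 / math.pi) * math.cos(math.pi / (4.0 * beta)))
    num = (math.cos((1.0 + beta) * math.pi * t / T)
           + (T / (4.0 * beta * t)) * math.sin((1.0 - beta) * math.pi * t / T))
    den = 1.0 - (4.0 * beta * t / T) ** 2
    return (4.0 * beta / (math.pi * math.sqrt(T))) * num / den

# How many symbol intervals before the tail is negligible?
T, beta = 1.0, 0.25
peak = srrc(0.0, T, beta)
for k in (2, 5, 10):
    tail = max(abs(srrc(k * T + 0.01 * i * T, T, beta)) for i in range(100))
    print(k, tail / peak)
```

Running this shows the tail beyond 10 symbol intervals is already well under 1% of the peak for this roll-off, which is why a truncation length of around 10 symbols is a reasonable choice.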
An intelligent question to ask, which you might face in a quiz or an exam: suppose I pick g(t) as a square-root raised cosine with β not equal to 0, and then at the receiver I use only an ideal low-pass filter. What do you think will happen? How do you quantify that? Do you have all the tools to quantify it? That is a question to ask yourself; did it make sense to everybody, or are enough people staring at me as if I asked something very complicated? It is an interesting question because in the receiver you may not want to implement g*(-t), which is fairly complex; you might want to know at what point you can get away with a simple low-pass filter even though the transmitter uses a square-root raised cosine. That seems like a reasonable design question. You have to answer it in terms of what you are paying and what you get, and whether you can still detect easily. Remember, a low-pass filter followed by sampling is also a correlation against an orthonormal set, so technically it is optimal and you should not lose anything; but your detector will become more complicated, because you will not get rid of the ISI. The Nyquist criterion will no longer be satisfied if you transmit with g(t) but do not receive with g*(-t). Remember what the Nyquist criterion is satisfied for: |G(f)|². At the point where you sample, the criterion holds only if the end-to-end response is |G(f)|²; if you have something else there, it may not satisfy the Nyquist criterion. Any other comments? Anything disturbing you? Yes: the basis should span the signal space. That is an important point; I am sorry I glossed over it.
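As a numeric sanity check on this quiz question (my own sketch, not from the lecture): the end-to-end response g convolved with g*(-t) should satisfy the Nyquist criterion at T-spaced samples, while g(t) alone should not. The integral below is a plain Riemann sum; T = 1 and β = 0.5 are illustrative choices.

```python
import math

def srrc(t, T=1.0, beta=0.5):
    # square-root raised cosine (unit energy); limit values at the 0/0 points
    if t == 0.0:
        return (1.0 - beta + 4.0 * beta / math.pi) / math.sqrt(T)
    if abs(abs(t) - T / (4.0 * beta)) < 1e-9:
        return (beta / math.sqrt(2.0 * T)) * (
            (1 + 2 / math.pi) * math.sin(math.pi / (4 * beta))
            + (1 - 2 / math.pi) * math.cos(math.pi / (4 * beta)))
    num = (math.cos((1 + beta) * math.pi * t / T)
           + (T / (4 * beta * t)) * math.sin((1 - beta) * math.pi * t / T))
    return (4 * beta / (math.pi * math.sqrt(T))) * num / (1 - (4 * beta * t / T) ** 2)

def r(k, dt=1e-3, span=20.0):
    # matched-filter output r(kT) = integral of g(t) g(t - kT) dt (Riemann sum)
    n = int(2 * span / dt)
    return sum(srrc(-span + i * dt) * srrc(-span + i * dt - k)
               for i in range(n)) * dt

print(r(0), r(1), r(2))      # ~1, ~0, ~0 : Nyquist holds after g*(-t)
print(srrc(1.0), srrc(2.0))  # clearly nonzero : g alone leaves residual ISI
```

The T-spaced samples of the matched-filter output vanish for k ≠ 0, but the T-spaced samples of g itself do not; that residual is exactly the ISI you would have to handle with an LPF-only receiver.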
Correlation with respect to any other orthonormal basis is optimal only if that basis spans the entire signal space; otherwise, as in the raised-cosine versus sinc case, you might lose information. Another thing to keep in mind: if you just want to get rid of ISI, the most important time-domain criterion is that the pulse corresponding to each symbol should go to zero every T seconds. That pretty much seems to be the important thing: make sure your zero crossings occur every T seconds and your pulse falls off fast enough. That is a rough way of stating the no-ISI condition at the transmitter end; at the receiver end, if nothing else gets added and you filter properly, it works out. One more thing to keep in mind. The next question is very simple, but let us do it quickly: suppose instead of -W/2 to W/2 I say the band is -W to W. Then wherever you have W/2 you should put W, and wherever you have W you should put 2W. Notice that nothing changes in the pulse shape, because it is written in terms of T. If you look at the definitions of the raised cosine and the square-root raised cosine, everything is in terms of T; T is related to the bandwidth, but the expressions involve only T. That is a good lesson: any algorithm you have for the transmitter and receiver should be a function of the symbol time T, not of the bandwidth. Otherwise, any time you change your bandwidth you have to change every step of your algorithm. If you make it a function of the symbol time, there is a big advantage: all you do is make one change at the beginning, set T according to your bandwidth, and everything else adapts on its own.
Such subtle things can be crucial, particularly when you solve problems or, in the lab, when you write your code. Remember, all these blocks are implemented in software on some processor; if tomorrow you want to change your bandwidth, it is easy if you have parameterized things this way. So the only thing that changes here is that W becomes 2W; when you go to the band from -W to W, everything else pretty much remains the same. Another assumption I have made here is a complex constellation. That is something one can relax later, but we will stick with complex constellations for now. Is everybody happy so far? This picture is very important, and so is the corresponding picture for the earlier case, when we were signaling over a very long symbol time. Both pictures should be clear in your mind, because they are the first-cut implementation of any digital communication system; you should know them very clearly. That is pretty much all I wanted to say about it. The next few things we will see are the notions of capacity, SNR, and a quantity called Eb over N0. These are important because they are ultimately the figures of merit we will be shooting for. Whenever you communicate, you want to communicate close to capacity, because capacity tells you the maximum you can do; SNR is an important parameter we have already seen; and this Eb/N0 also turns out to be a very crucial parameter. I will define it, and then we will fix the whole thing up.
First, capacity. The capacity of the ideal band-limited AWGN channel, as I said, was given by Shannon in his 1949 paper, so it has been known for a long, long time. Ideal band-limited means H(f) is band-limited to -W to W, because that is the standard assumption most books make. I have been using -W/2 to W/2; I do not know why; maybe I should also have used -W to W, but it makes no difference, it is the same thing. So H(f) is fixed, which means the transmitted x(t) has bandwidth at most W. What is the maximum signaling rate then? It can be 2W, so one imagines using a signaling rate of 2W. Once you are after capacity, you use the maximum signaling rate; there is no point in using anything less. A few definitions before we state capacity. Let P be the power of x(t), that is, the power in the signal component of the received signal y(t). What about the noise power? We have been taking the power spectral density of the noise to be flat at N0/2. Now I have a bandwidth from -W to W. PSD is power per hertz, per unit frequency, so the total noise power is N0/2 multiplied by the total frequency extent, N0/2 times 2W, which gives N0·W. So N0·W is the noise power; this is crucial. And if you do all this, you can choose your symbol rate to be 2W symbols per second.
It turns out you can show that the maximum rate at which you can transmit with arbitrarily low error rates, which is called the capacity, has a very simple, nice formula: C = W log2(1 + P/(N0 W)) bits per second (log base 2). It is difficult to do the proof in this class; if you ever take a course in information theory, it is a standard proof that will definitely be done there. It is a theorem with a precise statement and proof, not one of those hand-waving formulas; under this channel model it is an exact result, and you cannot do better. It works both ways, in that there is also a converse. What do I mean by that? If you transmit at a rate higher than C, you cannot achieve arbitrarily low error rates; the error rate you can achieve is bounded away from 0 by some finite number, and you cannot keep decreasing it. So it is a strong statement: this is the best you can do, and you can do no better, both ways. Keep that in mind; that is the operational meaning of capacity. One nice thing to notice is that capacity is a function of two quantities. You might say three, P, N0, and W, but the ratio P/(N0 W) itself has an operational meaning: it is the SNR. The SNR is defined as P/(N0 W) for this waveform channel, the continuous-time channel. So one can say capacity is a function of two things, bandwidth and SNR; but remember how the SNR is defined: the SNR also has the bandwidth in it, and you cannot just throw that away. So just by increasing the bandwidth, what happens to your SNR? It goes down. Does that make sense?
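The formula above is easy to play with numerically. Here is a small sketch (my own illustrative numbers, not from the lecture) showing the point made next: doubling the bandwidth does not double the capacity, because it halves the SNR.

```python
import math

def awgn_capacity_bps(P, N0, W):
    """Shannon capacity of the ideal band-limited AWGN channel:
    C = W * log2(1 + P / (N0 * W)) bits per second."""
    return W * math.log2(1.0 + P / (N0 * W))

# Illustrative (made-up) numbers: P = 1 mW, N0 = 1e-9 W/Hz
P, N0 = 1e-3, 1e-9
c1 = awgn_capacity_bps(P, N0, 1e6)   # W = 1 MHz: SNR = 1, C = 1 Mbit/s
c2 = awgn_capacity_bps(P, N0, 2e6)   # doubling W halves the SNR ...
print(c1, c2)                        # ... so capacity grows by less than 2x
```

With these numbers c2 ≈ 1.17 Mbit/s, not 2 Mbit/s: the extra bandwidth lets in extra noise, exactly the trade-off discussed below.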
Why does my SNR go down if I increase the bandwidth? I am only increasing my signaling rate, signaling faster; why should my SNR get worse? Because you are letting in more noise. That is the way to think about it: when you signal faster, you use a larger bandwidth, which means you let in more noise for the same signal power. So it makes sense that the SNR goes down if you increase the bandwidth or decrease the power. Conversely, what are the ways to improve your SNR? One is to increase your signal power. Another is to decrease the bandwidth, i.e., slow down your transmit rate, reduce your symbol rate. And there is one more way: control N0. What is N0? It is ambient thermal noise. How do you control it? By reducing the temperature. You will see that many receivers, particularly for satellites, are cooled by liquid nitrogen and kept at very low temperatures; that is how N0 is controlled. Of course, somewhere something has to give; you cannot go all the way down to 0 kelvin or so, and you can do such a calculation and come up with the minimum possible N0. But that is one more way of improving your SNR: cool your receiver (not the transmitter!) to some very low temperature, N0 goes down, and your SNR improves. All these tricks are used in practice; most sensitive receivers are cooled, never kept hot. All right. So this is capacity in bits per second. We now have to translate it into our vector model, into our constellation.
That is important: once we do that, we do not have to worry about x(t) and n(t); we can work with our discrete-time model and be happy with it. So let us see how to do it. What was our discrete-time no-ISI model? My assumption is y(t) = x(t) + n(t), and I have chosen my signaling rate and transmit filter suitably so that ISI is avoided. In the discrete-time model, Y = X + N; that is my model. One thing I know carries over is energy: the signal energy is Es = E[|X|²]. This energy is put out every T seconds; well, it is an average energy, but everything here is an average, so the average energy is put out every T seconds. What will the power be? P = Es/T; that is a good way of relating Es to power, so the same P we had before can be written as Es/T. And what do I know about T? T = 1/(2W). If I use this relationship, my waveform SNR, P/(N0 W), becomes Es/(N0 T W); and since T·W = 1/2, this is Es/(N0/2). In my discrete-time model, what is N0/2? It is the noise variance per dimension; remember, it is not the total variance in 2D or anything like that; in each dimension I have a variance of N0/2. That is how I showed it: when n(t) is filtered by a bank of correlators, the output in each dimension has variance N0/2. Es is my energy every T seconds; depending on whether I do PAM, QAM, or QPSK, this quantity is evaluated differently. If I do M-PAM, what is Es? It is (M²−1)d²/12. If I do M²-QAM, what is Es?
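The Es formula for M-PAM can be checked directly against the constellation points. This is a small sketch of that check (helper names are mine); it also applies the SNR = Es/(N0/2) conversion just derived.

```python
import math

def mpam_levels(M, d):
    # M-PAM amplitudes with spacing d, symmetric about 0
    return [d * (2 * m - M + 1) / 2.0 for m in range(M)]

def mpam_es(M, d):
    # average symbol energy of M-PAM: (M^2 - 1) d^2 / 12
    return (M * M - 1) * d * d / 12.0

def snr_from_es(es, N0):
    # waveform SNR = P/(N0 W) = Es/(N0/2), using P = Es/T and T = 1/(2W)
    return es / (N0 / 2.0)

# check the closed form against the explicit average for 4-PAM, d = 2
levels = mpam_levels(4, 2.0)                     # [-3, -1, 1, 3]
es_direct = sum(a * a for a in levels) / 4.0     # (9+1+1+9)/4 = 5
print(levels, es_direct, mpam_es(4, 2.0), snr_from_es(es_direct, 1.0))
```

The explicit average over the four levels agrees with (M²−1)d²/12; for M²-QAM the energy is just twice the corresponding PAM value, as the lecture notes next.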
It is the same quantity multiplied by 2; you just calculate it differently. But what about the denominator N0/2? It remains the same, but it is per dimension: N0/2 is the noise variance, the noise energy, per signal-space dimension. This is a conversion one needs to be aware of. Sometimes people use Es/N0 instead of SNR; what is the difference between the two measures? A factor of 2: SNR is 2 times Es/N0. Some people use one, some the other; we will quickly see that these conventions do not matter, they just shift the curve, and I will tell you how to keep track of them, so it is very easy not to worry about it. The next thing is to look at capacity. Bits per second can be confusing here; I want to convert capacity into bits per signal-space dimension. Once I do that, everything works out perfectly fine: I can look at my constellation, figure out my probability of error per dimension, and compare. It turns out you have to divide the capacity by 2W. Why? Maybe we will see later in this course, or it will be done rigorously if you take an information theory course. So the capacity in bits per dimension is the previous expression divided by 2W, which gives C = ½ log2(1 + SNR) bits per dimension, dimension of the signal space. There are easy ways to motivate this, but maybe later, if we have time. That is the way you define capacity per dimension: you have divided by the bandwidth.
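The division by 2W is a one-liner worth seeing once (a sketch with my own illustrative numbers): the bits-per-second and bits-per-dimension forms of capacity are the same quantity in different units.

```python
import math

def spectral_efficiency(snr):
    # capacity per signal-space dimension: C/(2W) = 0.5 * log2(1 + SNR)
    return 0.5 * math.log2(1.0 + snr)

# per-second and per-dimension forms agree after dividing by 2W
P, N0, W = 1e-3, 1e-9, 1e6          # illustrative numbers -> SNR = 1
snr = P / (N0 * W)
c_bps = W * math.log2(1.0 + snr)
print(spectral_efficiency(snr), c_bps / (2.0 * W))  # both 0.5 bit/dim here
```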
This unit can also be thought of as bits per second per hertz: I divided by bandwidth and called it a dimension, but you can equally retain "bits per second per hertz". Remember, second times hertz is dimensionless, so it is fine to simply call it bits, but it is also common to say bits per second per hertz. Anything in these units is typically called spectral efficiency: how efficiently you are using the bandwidth. And this is a maximum: if I give you W hertz of bandwidth, you can never do better than ½ log2(1 + SNR) bits per dimension at whatever SNR you have. Remember, W also plays a role inside the SNR, so be careful with the computation; do not get carried away by the bare formula ½ log2(1 + SNR). If you reduce your signaling rate, your SNR actually improves. All right, that is the capacity, and we will come back to it soon enough. Before that, we want to do a few more things. What are all these things useful for? As I said, whenever you design, develop, or evaluate a communication system, you are interested in probability of error versus some SNR-like quantity, and then you want to compare against capacity: for the same SNR, what is the best I can do? That is where these notions are useful. One more quantity, very closely related to SNR and used much more often, is Eb over N0. What is Eb? Eb is defined to be the energy per information bit. Notice that the number of bits in the constellation now becomes important. What exactly an "information bit" is, we will come to later.
For now, just take it as a definition. Suppose I have a constellation X, and let R be the rate in bits per symbol. If you do uncoded transmission, what is R in terms of the size of X? It is log2|X|; this is for uncoded transmission, so remember that. It is a very simple calculation. Later on, perhaps through simple examples, I will point out that one often does coded transmission: even though log2|X| bits per symbol are possible, you would deliberately use something less than that. It turns out to have advantages, and it turns out to be powerful enough to take you to capacity; that is the idea of coding. But for now, whenever I say constellation X, the rate is log2|X|. For instance, for M-PAM the rate is log2 M; for M²-QAM it is 2 log2 M; for M-PSK it is log2 M. That is the way you do this computation, and it is very simple. Once you have that, the energy per information bit is very easy to define in terms of the symbol energy: every symbol has energy Es and carries R information bits, so the energy per information bit is Eb = Es/R. That is the simple definition of Eb. Now one can relate SNR and Eb/N0 in a very easy fashion. What is SNR? We saw it is Es/σ²; Es, we just saw, is Eb·R; and σ² in terms of N0 is N0/2. So Eb/N0 = SNR/(2R). This is an important formula; it seems like a trivial thing to derive, but it is important enough that I should box it. That is why Eb/N0 is also called the rate-normalized SNR. Eb/N0 is important for various reasons.
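The boxed formula is worth wiring up once. A minimal sketch (function names are mine) that applies Eb/N0 = SNR/(2R) to the uncoded rates just listed:

```python
import math

def rate_uncoded(M_points):
    # uncoded rate in bits per symbol for a constellation of M_points points
    return math.log2(M_points)

def ebn0_from_snr(snr, R):
    # Eb/N0 = SNR / (2R), since Es = Eb*R and SNR = Es/(N0/2)
    return snr / (2.0 * R)

def to_db(x):
    return 10.0 * math.log10(x)

snr = 100.0                          # 20 dB, an illustrative operating point
for M in (2, 4, 16):                 # BPSK, 4-QAM/QPSK, 16-QAM, all uncoded
    R = rate_uncoded(M)
    print(M, R, to_db(ebn0_from_snr(snr, R)))
```

At a fixed SNR, a higher-rate constellation has a lower Eb/N0: the same received power is carrying more information bits, which is exactly the normalization the next paragraph motivates.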
For instance, SNR by itself carries no information about the number of bits per symbol, which is crucial, because capacity is defined in bits per dimension or bits per symbol. You should know what rate you are achieving, and that should be closely tied to the SNR. Suppose somebody says: I am achieving a 10⁻⁶ symbol error rate at an SNR of 3 dB. If two systems both achieve 10⁻⁶ at an SNR of 3 dB, you should not say both systems are doing equally well; you should also ask what rate each system is achieving. One might be doing two bits per dimension while the other is doing one bit per dimension, in which case one system is better than the other. To make a fair comparison, you have to normalize by rate. Once you normalize by rate, it is enough to look at Eb/N0; if you take Eb/N0 into consideration, you do not have to worry about the rate separately. That is the advantage of Eb/N0. One very illustrative calculation is Eb/N0 for M-PAM and M²-QAM, all uncoded. How do you compute it? Look at Es: it is (M²−1)d²/12 for M-PAM. Divide by the rate log2 M to get Eb, then by N0: Eb/N0 = (M²−1)d²/(12 N0 log2 M). You see the division by log2 M showing up in Eb/N0. Now, for M-PAM, what is the probability of error? Did we have a 2 in front? It is about Pe ≈ 2 Q(√(3·SNR/(M²−1))). Is that the formula we had?
If you do the simplification and write it in terms of Eb/N0, what do you get? It is not difficult: remember Eb/N0 = SNR/(2 log2 M), so you substitute SNR = 2 log2 M · Eb/N0 and get Pe ≈ 2 Q(√(6 log2 M/(M²−1) · Eb/N0)). So the uncoded M-PAM plot against Eb/N0 will be slightly different from Pe versus SNR. If you plot Pe versus SNR in dB, the 10⁻⁶ point will be roughly around, say, 14 dB. If you do the same plot against Eb/N0 (in dB), what happens? The curve shifts to the left, by 3 dB, a factor of 2, so you get the same picture except that the point is around 11 dB. (I believe this is for M = 2.) It is very standard to do this in dB, and this plot is much more standard than Pe versus SNR. Particularly when you do coding, you have to take the rate R into account, and for those cases Pe versus Eb/N0 is much more standard than Pe versus SNR. You do not have to worry too much; for this case it is just a 3 dB shift. But when you do coding and more advanced schemes, Eb/N0 turns out to be the more natural and important measure, so it is always used, and you know how to go from one to the other: simply divide by 2R. You can also do this for M²-QAM and you will get interesting, similar results. The last piece is capacity in terms of Eb/N0, which is interesting. How did we write capacity in terms of SNR? C = ½ log2(1 + SNR) bits per dimension (all logs base 2); that was my capacity.
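The 14 dB versus 11 dB numbers quoted above can be reproduced numerically. A sketch using the lecture's approximation Pe ≈ 2Q(√(3·SNR/(M²−1))) (the bisection helper and the specific target 10⁻⁶ are mine):

```python
import math

def qfunc(x):
    # Gaussian tail Q(x) via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_mpam(snr, M):
    # the lecture's approximation: Pe ~ 2 Q(sqrt(3 SNR / (M^2 - 1)))
    return 2.0 * qfunc(math.sqrt(3.0 * snr / (M * M - 1)))

def snr_for_pe(target, M, lo=0.1, hi=1e4):
    # bisection: Pe is strictly decreasing in SNR
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pe_mpam(mid, M) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

to_db = lambda x: 10.0 * math.log10(x)
snr6 = snr_for_pe(1e-6, 2)        # 2-PAM (BPSK) at Pe = 1e-6
R = math.log2(2)                  # 1 bit per symbol, uncoded
print(to_db(snr6), to_db(snr6 / (2 * R)))
```

This prints roughly 13.8 dB on the SNR axis and roughly 10.8 dB on the Eb/N0 axis: the same curve, shifted left by exactly 10·log10(2) ≈ 3 dB, as described.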
So, now, Eb/N0 = SNR / (2R), where R is the number of information bits per symbol, which here is per dimension. Any rate R that I can achieve reliably must be less than or equal to capacity. So, I can say R <= (1/2) log2(1 + SNR). The fact that I am able to achieve this rate R with very low probability of error means the SNR has to be greater than or equal to what? Do the simple conversion: SNR >= 2^(2R) - 1. So, this is another way of stating capacity: any rate I can achieve has to satisfy R <= (1/2) log2(1 + SNR) bits per dimension, or equivalently, the SNR I am using has to be at least 2^(2R) - 1. If I convert this to Eb/N0, what will happen? Eb/N0 >= (2^(2R) - 1) / (2R). So, to achieve any rate R, I need an Eb/N0 of at least (2^(2R) - 1) / (2R). A typical point to take is R = 1: you get 3/2, and 3/2 in dB works out to about 1.76 dB. So, for R = 1, Eb/N0 should be greater than or equal to about 1.76 dB. So, you can plot Pe versus Eb/N0 for the system you have, figure out at what point you hit 10^-6, like the plot I had before, and compare that with 1.76 dB if you are trying to achieve R = 1; then you will know how far you are from capacity. For instance, if you are doing uncoded BPSK, what is R? It is 1. Its Pe versus Eb/N0 plot hits 10^-6 at around 10 or 11 dB. So, you will see the gap is about 9 dB; it is a big gap. That is the picture to keep in mind, so maybe I should draw it.
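The bound Eb/N0 >= (2^(2R) - 1) / (2R) is easy to tabulate; a minimal sketch (my own helper, not from the lecture):

```python
import math

def min_ebn0_db(R):
    """Minimum Eb/N0 (in dB) for reliable communication at rate R bits per
    dimension on the AWGN channel: Eb/N0 >= (2^(2R) - 1) / (2R)."""
    ebn0 = (2 ** (2 * R) - 1) / (2 * R)
    return 10 * math.log10(ebn0)
```

min_ebn0_db(1.0) comes out at about 1.76 dB, and the requirement grows with R: min_ebn0_db(2.0) is already about 5.74 dB.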
So, if you take uncoded BPSK, I think I showed this picture before, and if you plot Pe versus Eb/N0, you are going to get a waterfall curve which reaches 10^-6 at around 10 dB. So, that is uncoded BPSK, and what capacity tells you is that you can achieve R = 1 at around 1.76 dB; that is the capacity limit. The difference between the two is the possible coding gain that you can have; this is for R = 1. These plots are very standard: whenever people describe digital communication systems, they put up a plot like this and say, "this is my coding gain, this is how far I am away from capacity." So, an interesting exercise is to look at this formula, Eb/N0 >= (2^(2R) - 1) / (2R), and ask: suppose I keep decreasing my rate R; what is the lowest possible Eb/N0 at which I can still have reliable communication? That is an interesting measure to think of. Suppose I am willing to decrease my rate to as low a point as possible, but I still want error-free communication. What is the lowest possible Eb/N0? To answer that question, take this expression and let R tend to 0. What do you think it converges to? As x tends to 0, (2^x - 1)/x converges to ln 2, so the right-hand side converges to ln 2, which is roughly 0.693, and that works out to -1.59 dB. So, there you go, that is an interesting result: reliable communication is possible only if Eb/N0 is at least -1.59 dB. In some ways that is a fundamental limit, but you have to be willing to let R tend to very small values to achieve something like this; for larger R, the required Eb/N0 is larger. So, I think that is pretty much most of the general things I wanted to say.
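The limit as R tends to 0 can be checked numerically against ln 2; a quick sketch (my own code):

```python
import math

def min_ebn0(R):
    """Minimum linear Eb/N0 at rate R bits per dimension: (2^(2R) - 1) / (2R).
    As R -> 0 this converges to ln 2 ~ 0.693, i.e. about -1.59 dB."""
    return (2 ** (2 * R) - 1) / (2 * R)

# Watch the value approach ln 2 as the rate shrinks:
for R in (1.0, 0.1, 0.01, 0.001):
    print(R, min_ebn0(R), 10 * math.log10(min_ebn0(R)))
```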
So, yes, this kind of plot, I have to emphasize once again, is very important. It might seem like a very dumb thing right now, but as you see more and more complicated systems, it will become impossible to analyze them in closed form. So, the only thing you can do is what I call a Monte Carlo simulation: you repeat the same experiment over and over again, measure your bit error rate as a function of Eb/N0, and plot it on top of this. Then you get some good comparisons. So, that is where we will stop today; we will pick up from here tomorrow.
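As an illustration of what such a Monte Carlo simulation looks like, here is a minimal sketch for uncoded BPSK over AWGN (my own code, with Eb normalized to 1 so the per-dimension noise standard deviation is sqrt(N0/2) = sqrt(1 / (2 Eb/N0))):

```python
import math
import random

def bpsk_monte_carlo_ber(ebn0_db, n_bits=200_000, seed=1):
    """Estimate the bit error rate of uncoded BPSK over AWGN by simulation.
    Symbols are +/-1 (Eb = 1); noise is Gaussian with variance N0/2."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # sqrt(N0/2) with Eb = 1
    errors = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        x = 1.0 if bit == 1 else -1.0       # BPSK mapping
        y = x + rng.gauss(0.0, sigma)       # AWGN channel
        bit_hat = 1 if y > 0 else 0         # threshold detector
        errors += bit_hat != bit
    return errors / n_bits
```

Sweeping ebn0_db and plotting the estimate against the closed-form Q(sqrt(2 Eb/N0)) reproduces the waterfall curve above; the estimate is only trustworthy where the expected number of errors in the run is large enough.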