Lecture 41. The last thing we were doing was looking at OFDM in the presence of ISI, using a discrete-time model, and this is how the model looked. The overall system has symbols coming in; call them s(k). The first thing you do is a serial-to-parallel conversion, so you have s(0) through s(N−1). What's the next step? An N-point IFFT, and this gives you a set of intermediate values, which I called x̃(0) through x̃(N−1). Now, to deal with ISI we came up with this suboptimal idea of converting linear convolution into circular convolution on a block; like I said, it's a suboptimal idea, but it works. So you repeat x̃(N−L+1) through x̃(N−1) as a prefix, and after that the transmission is the same as before: you do a parallel-to-serial conversion and transmit. The total number of complex numbers you send out is N + L − 1; I don't want to call them symbols, so I'll say complex numbers. This goes through a D/A converter with sinc interpolation at rate T/N, then you upconvert and send it; at the receiver you downconvert, apply an anti-aliasing filter, and sample, and you get one received value for each transmitted value. All of that we model as a discrete-time filter h(n), with noise added. The point of adding the cyclic prefix was to make sure that the linear convolution with h(n) becomes a circular convolution on a subset of the numbers you are sending out.
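Since we're in a discrete-time model, this claim — prepend the last L−1 samples and the channel's linear convolution becomes circular on the kept samples — is easy to check numerically. A minimal sketch in NumPy (the block length N = 8 and the L = 3 random taps are my own arbitrary choices):

```python
import numpy as np

N, L = 8, 3
rng = np.random.default_rng(0)
x = rng.normal(size=N) + 1j * rng.normal(size=N)   # one block x~(0..N-1)
h = rng.normal(size=L) + 1j * rng.normal(size=L)   # L-tap channel h(0..L-1)

tx = np.concatenate([x[-(L - 1):], x])   # prepend last L-1 samples: the cyclic prefix
rx = np.convolve(tx, h)                  # what the channel does: linear convolution

kept = rx[L - 1 : L - 1 + N]             # discard outputs hit by the prefix, keep N values
circ = np.fft.ifft(np.fft.fft(h, N) * np.fft.fft(x))   # circular convolution of h and x

assert np.allclose(kept, circ)           # linear convolution became circular
```

Note that tx has exactly N + L − 1 entries, matching the count of complex numbers sent per block.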
On the received values you do a serial-to-parallel conversion. You get received values corresponding to the cyclic prefix, which you ignore, and then the received values corresponding to the actual transmitted numbers, which I'll call ỹ(0) through ỹ(N−1). I said I am going to write the vector of ỹ(0) through ỹ(N−1) as a matrix times the vector of x̃(0) through x̃(N−1), where the channel has L taps, h(0) through h(L−1). If you write it out carefully: [ỹ(0), ỹ(1), …, ỹ(N−1)]ᵀ equals an N×N matrix times [x̃(0), x̃(1), …, x̃(N−1)]ᵀ. What is the first row? It is the convolution output at that point: h(0) multiplies x̃(0), h(1) multiplies x̃(N−1), h(2) multiplies x̃(N−2), and so on until h(L−1), which multiplies x̃(N−L+1); the remaining entries are 0. What about the next row? It is a cyclically shifted version — think about it and carefully make sure you can derive it. It comes from the simple linear convolution formula, but since we repeated the values, even the first output becomes a circular convolution; if we had not repeated them, the first row would simply be h(0) followed by all zeros, whereas because of the repetition the other taps also enter the picture. To complete the matrix, you just repeat this shift N times. And to figure out the last row, you can take the first row and rotate it left by one position.
That's an easy way of figuring out what the last row has to be: it works out as zeros followed by h(L−1) down through h(1), h(0) in the last L positions. You can check all of this. So this is the matrix representation, and the matrix is a circulant matrix whose first column is the channel taps, zero-padded. What's the next column? A cyclic downshift of the first column, and likewise for every column after it — that's another way of viewing this matrix. Circulant matrices are a special kind of Toeplitz matrix, and they have very nice properties which get used quite often. Now, the FFT matrix and a circulant matrix are very closely related. What's the relationship? Every column of the FFT matrix is an eigenvector of the circulant matrix, and the corresponding eigenvalue is the value of the DFT of the first column at that point. I'm not going to prove that relationship here; I'm assuming you know it, and I'll simply write down the answer. It turns out the circulant matrix can be written as FFT_N⁻¹ × (diagonal matrix) × FFT_N, where the diagonal carries H(0), H(1), …, H(N−1), with

H(k) = Σ_{n=0}^{N−1} h(n) e^(−j2πnk/N)

(possibly with a 1/√N factor, depending on your FFT normalization). This is a very standard result which you've used all the time with FFTs: circular convolution of two sequences becomes the point-wise product of their FFTs. It's the same result read another way.
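As a sanity check on this diagonalization, the construction can be verified directly (a sketch; I assume NumPy's unnormalized DFT convention, so no 1/√N factor appears, and the small N and L are arbitrary):

```python
import numpy as np

N, L = 8, 3
rng = np.random.default_rng(0)
h = rng.normal(size=L) + 1j * rng.normal(size=L)           # channel taps

col = np.concatenate([h, np.zeros(N - L)])                  # first column: zero-padded taps
C = np.stack([np.roll(col, k) for k in range(N)], axis=1)   # each column: cyclic downshift

F = np.fft.fft(np.eye(N))        # the N-point DFT matrix (unnormalized convention)
H = np.fft.fft(col)              # eigenvalues H(0..N-1) = DFT of the taps

# circulant = FFT^{-1} * diag(H) * FFT
assert np.allclose(C, np.linalg.inv(F) @ np.diag(H) @ F)
```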
Going back to the previous equation: the vector [ỹ(0), …, ỹ(N−1)]ᵀ becomes FFT_N⁻¹ times this diagonal matrix times FFT_N times x̃. I'm going to pull the FFT_N⁻¹ to the other side, so FFT_N times ỹ equals the diagonal matrix diag(H(0), H(1), …, H(N−1)), zeros off the diagonal, times FFT_N times x̃. And what is FFT_N times x̃? It's the vector of symbols s. Remember how we got x̃: it was the inverse FFT of s, so if you multiply it out you simply get [s(0), …, s(N−1)]ᵀ. So this equation suggests an obvious way of processing the received vector ỹ: take its FFT. If I take the FFT at the output, I can expect each entry to be the symbol scaled by some complex number. So it's like a one-tap channel for each symbol: each symbol went through a one-tap channel, you do a simple one-tap equalizer for each subcarrier separately, followed by a slicer, and you can decode the symbols independently. That's the story of the receiver: ỹ(0) through ỹ(N−1) go through an N-point FFT, then a one-tap equalizer per subcarrier, then you slice to get ŝ(0) through ŝ(N−1). That complex scaling is the one-tap equalizer — that's the definition of a one-tap equalizer: it finds the scaling if you train it. Even the AGC, the automatic gain control you might want to do, can be pulled into the one-tap equalizer; as long as you have a training phase you can determine the scaling, and it's really easy to do. So this is a very simple equalizer. Now, I wrote it down as if this were an optimal way of processing and dealing with ISI; clearly it's not. It's far from optimal, and it's a major overkill, but look at the huge advantage for the receiver.
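Putting the whole chain together, here is a noise-free end-to-end sketch — IFFT, cyclic prefix, linear channel, prefix removal, FFT, one-tap equalizer. QPSK on the subcarriers and the values N = 64, L = 5 are my own choices for illustration, not from the lecture:

```python
import numpy as np

N, L = 64, 5
rng = np.random.default_rng(1)
h = rng.normal(size=L) + 1j * rng.normal(size=L)   # channel taps h(0..L-1)

# QPSK symbols s(0..N-1), one per subcarrier
s = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

x = np.fft.ifft(s)                        # N-point IFFT -> x~(0..N-1)
tx = np.concatenate([x[-(L - 1):], x])    # add cyclic prefix (last L-1 samples)

rx = np.convolve(tx, h)                   # channel: linear convolution (no noise here)
y = rx[L - 1 : L - 1 + N]                 # discard prefix outputs -> y~(0..N-1)

Y = np.fft.fft(y)                         # FFT at the receiver
H = np.fft.fft(h, N)                      # per-subcarrier gains H(0..N-1)
s_hat = Y / H                             # one-tap equalizer on each subcarrier

assert np.allclose(s_hat, s)              # every symbol recovered exactly
```

With noise added, each s_hat(k) would simply be sliced independently, one subcarrier at a time.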
The receiver doesn't change at all; the only thing you have to do is an FFT, which has been researched so deeply that people have very, very efficient digital implementations. It's easy to get nice IP cores for these FFTs, and very low-power implementations are available. Beyond that it's a simple one-tap equalizer, nothing more. In fact, usually you don't even train the tap: there's some pilot-assisted scheme through which the tap is pretty much given to you. So you know what it is, and all you have to do is slice on each subcarrier. On top of that, it gives you a lot of flexibility at the transmitter: you can put different alphabets on different subcarriers depending on the strength of the tap. If you see that the SNR on one subcarrier is higher, you can use a larger constellation there. You get all that flexibility at the transmitter — the flexibility of working with a set of parallel channels. That's a huge advantage, and because of it OFDM is really, really popular today, both in wireless applications and even in wireline applications like DSL; it gives very good performance. People have studied its capacity closely, and you can even show that it approaches capacity when N is very large: the penalty you pay for the cyclic prefix becomes negligible. If N is small, you're paying two penalties. One is the decrease in data rate: the overall data rate decreases by the fraction N/(N+L−1). Not every symbol you transmit carries data; in fact the cyclic prefix is not used at all at the receiver — you just throw it out — so only a fraction of the transmission carries data. That's one penalty. The other penalty is that you have to actually transmit those prefix symbols.
Transmitting them again consumes power at the transmitter. So you pay a double penalty, but in spite of that OFDM is very popular because of the simplicity of the receiver. This kind of equalization is also called frequency-domain equalization: instead of dealing with ISI entirely in the time domain by building filters, maybe to minimize mean-square error and so on, you move to the frequency domain, where the equalizer becomes one tap per subcarrier if you choose the numbers carefully. One more thing I have to point out: the cyclic prefix, while totally wasted from a data-recovery point of view, can be used for symbol-timing recovery and the like, because the same repetitive pattern shows up at a certain interval. You know the same pattern will appear again, so you can look for it and extract timing information from it; that is done sometimes. So the cyclic prefix has some added benefit in an implementation sense. However, there's one practical problem with OFDM symbols. The final transmitted signal, after the D/A and in complex baseband, is actually a sum of several sinusoids, and when you take a sum of several sinusoids, what's called the peak-to-average ratio becomes very large. One way of thinking about it: if you add a whole bunch of sinusoids with different amplitudes and phases, the sampled values will typically tend towards a Gaussian distribution — you can show these things. And if you take a whole bunch of values from a Gaussian distribution and look at the maximum value as opposed to the average, the maximum can be fairly large.
So the peak-to-average ratio is going to be large for this x(t). And what's the problem? The problem comes at the power amplifier. The point is, whenever you computed SNR, did you use peak power or average power? Average power — it was always the expected value of x², the average power, and all the theory you do is for the average power. But the power amplifier has to deal with a signal which goes all the way up to the peak amplitude and, hopefully, doesn't distort: you want a linear response up to the peak amplitude. If the peak-to-average ratio is very high, that puts extra strain on the power amplifier, and such amplifiers become more expensive. So you'll see a lot of research on techniques trying to reduce what's called the PAPR, the peak-to-average power ratio, of OFDM signals. That's one penalty you pay at a very practical implementation level, but obviously it can be overcome, and in several ways; in fact, sometimes people tolerate non-linear distortion in the power amplifier and compensate for it with a similar non-linear processing at the receiver. So that's OFDM. In short, OFDM is a block modulation scheme: instead of doing serial PAM, you do block modulation, and it's easiest to understand in the frequency domain. Even though it's not FDM — the subcarriers and the signals used per subcarrier overlap in frequency — it gives you a rough separation in the frequency domain; the signals still overlap, but by clever baseband processing you can completely separate them, even in the presence of ISI, by adding a suitable cyclic prefix.
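To see the PAPR effect concretely, here's a quick sketch (N = 1024 QPSK subcarriers is my own arbitrary choice): the time-domain samples are roughly Gaussian, and the peak sample power sits well above the unit average — typically on the order of 10 dB at this block size.

```python
import numpy as np

N = 1024
rng = np.random.default_rng(2)
# random QPSK on every subcarrier
s = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

x = np.fft.ifft(s) * np.sqrt(N)     # time-domain block, unit average power

papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
papr_db = 10 * np.log10(papr)       # typically around 8-11 dB for this N
```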
Because of all these things, OFDM is very, very popular. Any questions or comments about this? If you take any of today's wireless standards, or the future wireless standards, they will all for sure use OFDM — no doubt about it: OFDM in the downlink and a version of it in the uplink as well. It gives a lot of flexibility. I've been talking about this flexibility of putting different kinds of alphabets on different subcarriers. Usually, when you actually design a system, you expect some feedback from the receiver telling you which subcarriers are good and which are bad. You can show, even from an information-theoretic capacity-maximization point of view, that in the bad subcarriers you should not put anything; so no data goes there, and the power you have is best used in the subcarriers that are good. Of course there are other constraints to look at, but usually that's how the power is allotted; in fact, if the channel has a notch, you allot zero power to that subcarrier. That kind of flexibility you get with OFDM. If you work completely in the time domain, adjusting your signal spectrum to look like that is not so easy — there are clever ways of doing it, but they require too much processing, whereas here the receiver does nothing but an FFT. That's the main advantage. Any other questions? I think that brings us to the end of OFDM. The only thing left is to talk briefly about coding: what we'll see in the next few lectures is a brief introduction to error control coding as it's used in digital communication systems.
We'll see some very simple examples, and then convolutional codes as the main coding technique; I won't introduce anything else, only convolutional codes. I think we have roughly four or five lectures left — maybe I'm over-counting a little — and after those I'm going to do tutorial sessions. Yesterday there was an exam which a few people wrote, so maybe I should put out solutions for problem set 7. In problem set 7 there are two things you have to think about a little: I asked you to formulate the equations for the zero-forcing linear equalizer and for the MMSE DFE. I think most people who wrote the exam yesterday didn't formulate the equations for the zero-forcing linear equalizer, though some did. The zero-forcing linear case is very easy: it's enough to make sure certain quantities go to zero. I'll put out solutions for that sometime tomorrow or the day after. Next week we'll also have an exam on problem set 8, but 8 has very few problems, so maybe I'll add a few more. Then there'll be one problem set on the OFDM material and one more on coding, and with that it'll be over — a total of 10 problem sets. So let's move towards coding. But first, some terminology. We always talk about error control codes, and the usual thinking is: you design a digital communication system, and if there are errors, the errors can be corrected by the error control code. That's a perfectly good way of viewing error control codes, not wrong. But today — and even traditionally — the best way of understanding the role of error control codes in digital communication systems is through what's called the notion of coding gain.
So what is coding gain? So far, every signal-processing technique we studied worked like this: we did some processing — maybe even an optimal way of processing — and then we computed the probability of error as a function of SNR for that processing, and we took whatever the analysis gave us. None of those techniques gives you a method to improve on that trade-off itself. How can I get a better probability of error at the same SNR? Or, the other question: how can I get the same probability of error at a much lower SNR? Those are the important trade-offs; remember, in the very first class we talked about them. In any communication system there is ultimately one plot: some error rate — to be specific, say bit error rate — versus what's called Eb/N0, the energy per information bit divided by the noise power spectral density. You make that plot, and if you take QAM or QPSK or BPSK as an example, the required Eb/N0 is around 10 dB when the bit error rate is 10⁻⁶. If you have ISI, this plot typically shifts to the right, or at best stays where it is. I somehow went into ISI; maybe I'll come back and address the issue of ISI later, but for now, imagine there is no ISI.
The most important and interesting question to ask about this curve is: how do I move it to the left? So far we have never seen a method to do that, and error control codes are pretty much the only method available today to move this curve to the left. Viewed that way, coding should not just be thought of as correcting errors; it is about the fundamental trade-off between Eb/N0 and bit error rate. In fact, in Shannon's first paper on the theory of communication, he had codes, and he showed what the best possible trade-off is — that's what the whole notion of capacity is about. So codes are part and parcel of communication systems. The side effect is that coding requires more complicated receiver processing. We saw that even equalization, which gives no such gain, was fairly tough; if you're getting coding gain, the processing will obviously become harder. That's why coding was not so popular for quite some time; with today's advanced implementation possibilities, of course, people implement very complicated error control codes in communication systems. And the study of digital communication is not really complete without looking at error control codes, so we'll quickly go through them. If somebody asks what the main purpose of error control codes is: they provide a method for moving this curve to the left. It turns out, for instance for BPSK, that this curve can move very close to 0 dB, so gains of 8 or 9 dB are theoretically possible with error control codes. Eight or nine dB are huge numbers — worth billions of dollars, so to speak.
Anyway, that's a number to keep in mind. We'll study this for very simple situations; we won't study it in the most general way. The model I will take for looking at error control codes is BPSK over AWGN. I might briefly hint at other constellations, but mostly I'll talk only about BPSK over AWGN. How does this model look? You have bits, and no ISI. What do you do if there is ISI? You equalize — and once you equalize, the path from your symbols to the input of the slicer is again roughly BPSK over AWGN, even if you were doing BPSK with ISI. That's the logic behind sticking to BPSK over AWGN: all the other signal-processing machinery used to get to the slicer we have already seen, and you know how to do it, so we'll only look at this part for the coding gain. So: bits go through a mapper, 0 goes to +1, 1 goes to −1, and you produce a symbol sequence of ±1 values. Then noise gets added to the symbol sequence — remember, it's all a discrete-time model — and the noise is normally distributed with zero mean and some variance σ². You then get a received value which you have to process. So far we have just been slicing the received value, either hard or soft. Hard slicing means the output of the slicer decides whether the bit is 0 or 1; soft slicing means you provide the LLR. The LLR in this case is very simple: for a received value r, it works out to 2r/σ², so it carries the same information as r itself. If your receiver works with r itself, it is said to do soft processing, and such receivers are called soft receivers. If it works with the quantized version,
the 0/1 bit-quantized version obtained by slicing, it's called a hard receiver. We'll look at both situations: when we do coding, we might sometimes want to process the soft values directly — either the LLR or the received value r itself — and sometimes the hard-decision values; we'll do both. This is the model we'll be dealing with. Like I said, if you have other complicated parts, you deal with them however you want to get to this model, and then we do coding with this model. Now, when you don't do any coding, each symbol is sent independently, one at a time, and you decode each received value separately; there's nothing better you can do, because one symbol is independent of the others. What you do in coding is introduce some dependence between the symbols you transmit: you add controlled redundancy. I say redundancy because you are repeating something, though you repeat it very smartly — it's controlled in a certain way, and not necessarily a literal repetition. Before coding, though, let me first show the uncoded system. From the same picture as before: a bit b(k) maps to a symbol s(k), noise n(k) gets added, and you get r(k). At r(k), how do we define SNR and Eb/N0? The symbol energy in BPSK is 1 and the noise energy is σ², so SNR = 1/σ². For Eb/N0 a factor of 2 enters, because N0 = 2σ², so Eb/N0 = 1/(2σ²). Those are the definitions; in this model you know exactly how to compute SNR and Eb/N0. And remember, this SNR and Eb/N0 have practical relevance in a real system also.
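The model above, with its soft (LLR) and hard outputs and the SNR and Eb/N0 definitions, fits in a few lines. A sketch, where σ = 0.5 and the bit count are my own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.5
n_bits = 10_000

b = rng.integers(0, 2, n_bits)             # information bits
s = 1 - 2 * b                              # BPSK mapper: 0 -> +1, 1 -> -1
r = s + sigma * rng.normal(size=n_bits)    # AWGN with variance sigma^2

snr = 1 / sigma**2                         # symbol energy is 1
eb_n0 = 1 / (2 * sigma**2)                 # N0 = 2 sigma^2, one bit per symbol

llr = 2 * r / sigma**2                     # soft value for each received sample
hard = (r < 0).astype(int)                 # hard slicer: sign decides the bit

# the hard decision is just the sign of the LLR
assert np.array_equal(hard, (llr < 0).astype(int))
```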
Maybe not in absolute terms, but in relative terms they definitely have a real meaning: when we convert from symbols to waveforms we keep track of the energy and make sure it is preserved, and σ² can be obtained from the noise spectrum at the receiver, so these are very realistic numbers. Usually this is converted to dB — 10 log₁₀ of the ratio, the standard conversion. Notice that here Es, the energy per symbol, equals Eb, the energy per information bit, because every symbol carries one information bit. So this is the no-coding picture, and in it you know exactly what the trade-off between probability of error and Eb/N0 is: the probability of error is Q(1/σ) = Q(√(2Eb/N0)). You can plot this and you get the curve I showed you earlier; there's nothing better you can do. The idea of coding is to not send one bit at a time independently, but to collect a bunch of bits, add some controlled redundancy, send all the symbols together, and then look at all the received values together and do collective decoding — you can't just slice independently anymore. So here is how the picture changes. I take a vector b of, say, k bits, and first do something called encoding to get a vector c of, say, n bits. You pick a rate k/n, which will be less than 1, because you are adding redundancy: k bits get converted into n bits. One way to think of it is that c is composed of two parts, b followed by a vector p: b is the same k bits, and p is n − k additional parity bits that you added.
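The uncoded trade-off Pe = Q(1/σ) = Q(√(2Eb/N0)) quoted above can be checked against a quick Monte-Carlo run. A sketch — σ = 0.6 and the sample size are arbitrary choices, and Q is written in terms of the standard-library erfc:

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

sigma = 0.6
eb_n0 = 1 / (2 * sigma**2)

# Monte-Carlo estimate of the uncoded BPSK bit error rate
rng = np.random.default_rng(4)
n = 1_000_000
s = rng.choice([-1.0, 1.0], n)
r = s + sigma * rng.normal(size=n)
ber_sim = np.mean(np.sign(r) != s)

ber_theory = Q(sqrt(2 * eb_n0))      # = Q(1/sigma)
assert abs(ber_sim - ber_theory) < 1e-3
```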
This is how coding is done in most cases in practice, though there are cases where you don't do this: in fact there's no need for c to repeat b at all — c can be any set of n bits. This c is called a codeword, and b is called the message. The message b belongs to {0,1}^k. I haven't used this notation before: it is the set of all binary k-tuples, so any k-bit vector. The codeword c belongs to {0,1}^n. Notice the message can be any one of 2^k possibilities, but since the encoder is a definite mapping, the list of all codewords is only a subset of {0,1}^n; the code is the set of all codewords, a subset of {0,1}^n. So what does the encoder do? It is a one-to-one mapping from the message space to the code, and typically the mapping is done in the systematic way above, with k ≤ n; that's a reasonable choice in most cases. Once you do encoding, the rest of the model does not change. Next you do the same BPSK mapping as before: if the codeword bit is 0 you send +1, if it is 1 you send −1 — no change there. Noise gets added, which you can't control: normal, mean 0, variance σ². And you get a received vector r = (r(1), r(2), …, r(n)), an n-tuple of real numbers. The first question I want to ask, before we look at how to do any decoding, is how to define a
suitable Eb/N0 for the coded situation. We have to do this carefully. Why? Because now one bit of information is not carried by each symbol — less than one bit is carried per symbol. How many bits of information per symbol? k/n. So Eb becomes n/k. Notice that in the uncoded case Eb was equal to 1; here it has gone up to n/k, artificially, because each symbol carries less information. Therefore Eb/N0 = (n/k)/(2σ²). The rate is typically denoted R, so Eb/N0 = 1/(2Rσ²) is the typical computation. If you are coding at a certain rate R — say 1/2 or 1/3 or 2/3 or 0.9 — your Eb/N0 definition changes compared to the uncoded situation. So if I did an uncoded computation at a certain Eb/N0, then at the same Eb/N0 the coded simulation model will in general have a larger σ. That is the right way of comparing coded and uncoded systems; it is a fair comparison. In the uncoded case each symbol carries more information, so I use a certain σ; if I now do rate-1/2 coding at the same Eb/N0, I have to use a larger σ to make the comparison fair. So when coding gives you coding gain, it has to work against a larger noise being added — and in spite of that, coding gives you a huge gain. That's the way to think about it. In practice, there's another way of seeing how this noise enhancement comes about.
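This bookkeeping — at a fixed Eb/N0, a rate-R code must fight a noise standard deviation larger by a factor √(1/R) — can be captured in a small helper. A sketch; the function name and the 6 dB operating point are my own choices:

```python
import numpy as np

def sigma_for(eb_n0_db, rate):
    """Noise std-dev for BPSK at a given Eb/N0 (in dB) and code rate R = k/n.
    From Eb/N0 = 1 / (2 * R * sigma^2)  =>  sigma = sqrt(1 / (2 * R * Eb/N0))."""
    eb_n0 = 10 ** (eb_n0_db / 10)
    return np.sqrt(1 / (2 * rate * eb_n0))

# at the same Eb/N0, a rate-1/2 code faces larger noise than uncoded BPSK
s_uncoded = sigma_for(6.0, rate=1.0)
s_half    = sigma_for(6.0, rate=0.5)

assert s_half > s_uncoded
assert np.isclose(s_half, np.sqrt(2) * s_uncoded)   # larger by sqrt(n/k)
```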
Suppose you have to send k bits within one second, using a certain clock rate. Now, within that same second, you have to send n bits instead, which means your clock rate must increase by a factor of n/k. So why did the noise increase? You're using a larger bandwidth: obviously, if your clock rate increases by n/k, your bandwidth increases, so at the receiver you have to filter a larger bandwidth, which means you let in more noise, and the noise variance increases. That's one way of thinking about it; whichever way you look at it, to make a fair comparison between coded and uncoded you have to pay this penalty in Eb/N0 — you cannot avoid it, and it makes total sense. The clock-rate picture I described is a very practical way of doing it, and that's how most systems do it: they keep a certain uncoded rate, introduce a coding block, increase the clock rate, use more bandwidth, let in more noise, and deal with it. In spite of that, with this type of Eb/N0 calculation, coding can give you enormous gain — that's the way to understand it in a digital communication system. Beginning next week we'll start with some very simple coding schemes which don't really give you coding gain, just to understand the decoding better; we'll do that first, and then slowly move on to convolutional codes, which give you a lot of coding gain.