Hello, welcome to this lecture on digital communication using GNU Radio. My name is Kumar Appaya and I belong to the Department of Electrical Engineering at IIT Bombay. In this lecture, we are going to take a look at channel equalization. In particular, we are going to see the impact of having a medium or channel that affects the propagation. So, in addition to noise and receiver impairments, what happens if the medium of communication actually changes your signal? As we just mentioned, the medium of propagation affects performance. Your medium may be a wire where, at very high frequencies, there is some distortion. It may be an air or wireless medium where there may be reflections, noise and phase changes; it could be underwater; it could be a fiber optic cable. Different media produce different modifications of the transmitted signal, and these also affect your ability to detect symbols correctly. So, media effectively produce linear or non-linear changes in the signal. The question is, can we measure these changes and undo them? Of course, the answer is yes, because you have been using your telephones or cell phones every day, and many of your media devices work because they essentially calibrate to the medium and are able to correct for its effects. In this context, we are going to look at some simple channel models and then look at linear equalization, that is, where you know what the channel distortion is and try to undo it, especially if the channel is an LTI system. That is what we are going to see in the next few lectures. We will begin with the same transmission model that we have been seeing throughout this course. The signal that you are transmitting is x(t), also known as s(t) in some contexts: x(t) = sum over k from minus infinity to infinity of b_k g_tx(t − kT).
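The transmission model above can be sketched numerically. This is a minimal illustration, not from the lecture: the sampling rate, the BPSK symbol values and the function names are assumptions chosen for clarity, using the rectangular pulse and the half-symbol-per-second rate discussed in this lecture.

```python
import numpy as np

# Sketch of x(t) = sum_k b_k * g_tx(t - kT) on a discrete time grid.
# fs (sampling rate) and the symbol values are assumed for illustration.
fs = 100           # samples per second (assumed)
T = 2.0            # symbol period: half a symbol per second, as in the lecture
sps = int(fs * T)  # samples per symbol

def g_tx(num_samples):
    """Rectangular pulse of unit height lasting one symbol period."""
    return np.ones(num_samples)

def transmit(symbols):
    """Superpose shifted, scaled pulses: x = sum_k b_k g_tx(t - kT)."""
    x = np.zeros(len(symbols) * sps)
    for k, b in enumerate(symbols):
        x[k * sps:(k + 1) * sps] += b * g_tx(sps)
    return x

b = np.array([1.0, -1.0, 1.0, 1.0])  # example BPSK symbols (assumed)
x = transmit(b)
```

With a rectangular pulse the shifted copies do not overlap, so each 2-second segment of x simply carries one symbol value; the interesting behaviour appears once the channel spreads the pulses.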
You can think of your b_k as being the data points or constellation points, and g_tx(t − kT) is the pulse that you use. It is something like a sinc pulse, a root raised cosine pulse or a rectangular pulse, whatever pulse honors your power and bandwidth constraints, and this is transmitted through the medium that we are interested in. For this lecture we will assume that the channel is LTI, that is, a linear and time invariant system with an impulse response g_c(t). Now, in general you cannot assume that the channel is a linear and time invariant medium. An example of a non-linear medium is a fiber optic cable: if you spend a lot of power, it affects your signal in a non-linear way. And what about time invariance? Consider a wireless medium where you are moving in a car or a train; the actual medium you are experiencing, in terms of the reflections the signal undergoes before reaching your device, keeps changing. So the assumption that channels or media are linear and time invariant is not strictly correct, but over snapshots of small durations it holds, and this assumption comes in handy when we want to understand what the medium does and try to correct for it. That is the setting in this particular lecture series, but eventually you may encounter some media that do not satisfy these conditions. We will be using the following channel example because it is instructive; it is a simple example adapted from one of the reference books for this course. You are going to send symbols at the rate of half a symbol per second, and the pulse that you are using, this g_tx(t), is a rectangular pulse that goes from 0 to 2 seconds.
The channel we are assuming is an impulsive channel, that is, it has an impulse at 1 of weight 1 and an impulse at 2 of weight negative half. Now you may ask why this is an impulsive channel, and whether that is realistic. Actually, an impulsive channel does not necessarily mean that the physical channel is impulsive; in the bandwidth of interest, treating it as impulses probably does a good job, because essentially your channel is just shifting the signal. For example, if you look at this particular channel from an impulse response perspective, you can write it as delta(t − 1) − (1/2) delta(t − 2). So in a sense your channel is just delaying the signal and then adding another delayed copy with a negative sign and half the weight. This is probably good enough for modelling a delay-and-combine channel, and therefore the impulses are justified. Now, this p(t) is what you end up getting if you convolve g_tx and g_c. The reason this makes sense is that earlier you had only g_tx, but now the effective pulse after propagating through the channel is g_tx convolved with g_c. So that is something you have to keep in mind: earlier the matched filter was matched to g_tx(t), but now the new effective pulse is p(t) = (g_tx * g_c)(t). The optimal matched filter is matched to p*(−t), and that can be proved, but before that let us go back and verify that this is indeed the p(t) you will get. Our g_tx(t) has the rectangular form described above, and g_c(t), written in terms of impulses because it is convenient, is delta(t − 1) − (1/2) delta(t − 2). By the way, remember that the rate of signaling is half a symbol per second.
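The delay-and-combine view of this impulsive channel can be sketched as follows. This is an illustrative sketch, not from the lecture: the sampling rate and function name are assumptions; the channel taps come from the lecture's example g_c(t) = δ(t − 1) − (1/2) δ(t − 2).

```python
import numpy as np

# With g_c(t) = delta(t-1) - 0.5*delta(t-2), the channel output is just
# y(t) = x(t-1) - 0.5*x(t-2): a delayed copy plus a second, negated,
# half-weight copy delayed further. fs is an assumed sampling rate.
fs = 100

def channel(x, fs=fs):
    """Apply y(t) = x(t-1) - 0.5 * x(t-2) via integer-sample delays."""
    d1, d2 = int(1 * fs), int(2 * fs)
    y = np.zeros(len(x) + d2)
    y[d1:d1 + len(x)] += x            # copy delayed by 1 second
    y[d2:d2 + len(x)] -= 0.5 * x      # copy delayed by 2 seconds, weight -1/2
    return y
```

Feeding a single unit sample through this function reproduces the two impulse weights at delays of 1 and 2 seconds, which is exactly the impulse response we wrote down.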
So one symbol is from 0 to 2 seconds, the next is from 2 to 4, and so on. If you now convolve these, it is very simple: all you need to do is take g_tx(t), delay it by 1 (the effect of the first impulse), and add a copy delayed by 2 and scaled by minus half (the effect of the second impulse). The first copy has amplitude 1 on the interval from 1 to 3, and the second copy has amplitude minus half on the interval from 2 to 4. Combining these: from 1 to 2, only the first copy contributes, so p(t) = 1; from 2 to 3, the first copy gives 1 and the second gives minus half, so p(t) = 1/2; and from 3 to 4, only the second copy contributes, so p(t) = −1/2. This is precisely the p(t) that we have shown here. So p(t) is now the effective pulse received at the receiver. The major change you see is that your symbol was between 0 and 2 seconds at the transmitter, but at the receiver it now spreads between 1 and 4 seconds. A pure delay is something we can adjust for; in fact, in the previous lectures we have seen how to estimate the delay. Unfortunately, what has happened now is that the signal is not only delayed but also spread. Such channels are generally known as dispersive, because the symbols spread out in time. The problem with this spreading is that each symbol now stretches into the next symbol's duration: from 0 to 2 seconds you send the first symbol, from 2 to 4 the second, and so on. The first symbol, sent between 0 and 2 seconds, occupies the interval from 1 to 4 at the receiver because of the delay and spreading. The second symbol, sent between 2 and 4 seconds, will occupy the interval from 3 to 6.
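The piecewise values of p(t) worked out above can be checked numerically. This is a sketch under assumptions (sampling rate, delta approximated as a tall single sample of area 1), not part of the lecture; it convolves the rectangular pulse with the impulsive channel and reads off p(t) on each interval.

```python
import numpy as np

# Numerically form p = g_tx * g_c for the lecture's example and check
# that p(t) = 1 on [1,2], 1/2 on [2,3], -1/2 on [3,4].
fs = 100            # samples per second (assumed)
dt = 1.0 / fs

g_tx = np.ones(2 * fs)        # rectangular pulse, 0 to 2 seconds
g_c = np.zeros(2 * fs + 1)    # impulses at t = 1 s and t = 2 s
g_c[1 * fs] = 1.0 / dt        # delta approximated as area-1 single sample
g_c[2 * fs] = -0.5 / dt

p = np.convolve(g_tx, g_c) * dt   # Riemann-sum approximation of the integral
```

Sampling p at t = 1.5, 2.5 and 3.5 seconds gives approximately 1, 1/2 and −1/2 respectively, matching the hand calculation interval by interval.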
That means this particular region is a place where there is inter-symbol interference, that is, a mixing of the first and second symbols. Similarly, the second and third symbols will also mix a bit. You therefore have what is called inter-symbol interference, which means mixing of symbols, and this is something you have to handle. Just to re-emphasize: we saw the new effective pulse, and now the optimal matched filter at the receiver is something you have to reconsider. It turns out that the optimal matched filter is matched to the new effective pulse p(t), which is obtained by convolving g_tx and g_c. The proof outline is similar: you just use hypothesis testing in AWGN. Intuitively, you are minimizing the squared norm of the received signal minus the candidate transmitted signal, and if you expand that norm square, you end up getting g_tx convolved with g_c terms. This is skipped for brevity, but the intuition is that the pulses now spread according to p(t), so p*(−t) is the natural matched filter. Now, this is fine, but what is the visual that you can see? Whenever you have a channel which is dispersive, it spreads the pulses. Let us say that this is a multi-level constellation, say PAM-4 with four levels. If you use something like a root raised cosine pulse for PAM-4, you have these four levels, 0, 1, 2 and 3, which is why you see 0, 1, 2, 3 over here. A single pulse is sent, then the next pulse is received and overlaps with it, the next pulse is received and also overlaps, and this kind of constant overlap can be used to produce what is called an eye diagram.
An eye diagram basically says: take a symbol interval and keep drawing the waveform over that interval on top of itself, and that gives you the eye diagram. In this eye diagram you can see that 0 is one of the levels, 1 is a level, 2 is a level and 3 is a level, so it is PAM-4. If you look at the vertical gap between any two levels, that gives you an indication of the robustness to noise, because noise adds and essentially changes the vertical level. Similarly, in the horizontal direction you are able to see how much timing offsets are going to affect you. So whenever you have a channel which smears the pulses, these eye diagrams are useful to check how reliable your detection is: an open eye means the channel is good and you do not need to do much in terms of correction; a closed eye means you have to really struggle, because you have to do something substantial to recover your data, and you may not be able to recover it at all. The eye diagram thus gives you an intuition about how good or bad the channel is, and we can generate eye diagrams easily in GNU Radio as well; that is something we will see in a subsequent lecture. The intuition to take away is that the channel effect can be easily observed through an eye diagram, and the closing and opening of this eye indicates the effect of the channel on your ability to detect the signal. Now, the question we are going to ask is: what is the optimal strategy? For example, when we were detecting a single symbol under additive white Gaussian noise or some other impairment, what we did was write down the hypotheses, maximize the likelihood, arrive at minimum distance, and write it out. What is the answer in this situation, when you have a sequence of symbols?
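The overlay idea behind the eye diagram, drawing successive symbol intervals on top of each other, can be sketched directly in NumPy. This is an illustrative sketch, not the GNU Radio flowgraph approach mentioned in the lecture; the function name, sampling rate and two-symbol window span are assumptions.

```python
import numpy as np

# Slice a received waveform into segments of (about) two symbol periods,
# one segment starting at every symbol boundary, and stack them.
# Plotting all rows on top of each other gives the eye diagram.
fs = 100      # samples per second (assumed)
sps = 200     # samples per symbol (T = 2 s at fs = 100)

def eye_traces(y, sps, span=2):
    """Return an array of shape (num_traces, span*sps); each row is one
    eye-diagram trace covering `span` symbol periods."""
    win = span * sps
    num = (len(y) - win) // sps + 1
    return np.array([y[k * sps:k * sps + win] for k in range(num)])

# Usage: traces = eye_traces(y, sps)
# then e.g. matplotlib: plt.plot(traces.T, color='b', alpha=0.3)
```

Each row of the returned array is one sweep across the eye; the vertical spread of the stacked rows at the sampling instant is exactly the eye opening discussed above.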
For this, the strategy that we use is maximum likelihood sequence estimation, also called maximum likelihood sequence detection: the optimal strategy is to estimate the sequence that is the most likely one to have been sent, given that you observe a sequence of noisy samples. In other words, let b = (b_k) be a vector of possible symbols. In the earlier situation you could make symbol-by-symbol decisions, because one symbol did not overlap into the time region of the next symbol. Even in the case of a sinc or a root raised cosine, you may argue the pulse was longer than the duration of one symbol; that is true. But the way you designed the root raised cosine and the sinc was such that, after matched filtering, the Nyquist ISI criterion was satisfied, and by sampling at the right locations the effect of all the other sincs or raised cosines goes away. Unfortunately, that is not going to happen here, because this symbol impacts the next symbol as well. You now have a convolution effect which persists: like we mentioned, 1 to 4 seconds was the first symbol and 3 to 6 seconds was the second symbol, so between 3 and 4 you have a combination of the first and second symbols. We now have to maximize a metric, called capital Lambda of b, where b is a column vector: Lambda(b) is the real part of the inner product of y and s_b (together with an energy correction term, minus half the squared norm of s_b, which comes from expanding the squared distance). Now what is s_b? s_b is the candidate signal built from the sequence, that is, s_b(t) = sum over k of b_k p(t − kT); this is what you would essentially receive if the sequence b had been sent.
This follows if you write out your Gaussian noise hypothesis: the exponential of the negative squared distance appears, you use the fact that the noise affecting each observation is independent, multiply it out, and you arrive at this metric; it is not very difficult. Now, how do you maximize this? You have the observations y, and you have to maximize this quantity over all candidate sequences. The problem is the following. Let us say you take a group of 1000 symbols, say 1000 QPSK symbols, to detect. What is the number of possible s_b's to consider? The s_b's are the candidate signals, with k running from 0 to 999. The question we have to ask is: what are the possible sequences b_k that can occur? It turns out there are 4 to the power 1000, because the first symbol can take 4 possible values, the second symbol 4 possible values, the third 4, and so on, which means the total number of possibilities is 4^1000. A brute force search over 4^1000 candidates is just not feasible. Even if you reduce the block length, or if you go to a constellation like 16-QAM, or, as people these days even use, 1024-QAM or 2048-QAM, it is prohibitively expensive, and this kind of approach is not going to be possible.
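The exponential blow-up can be made concrete with a brute-force MLSE sketch on a toy problem. This is an illustration under assumptions, not the lecture's method: it uses BPSK (so 2^K rather than 4^K candidates), a sampled 2-tap channel [1, −0.5] standing in for the lecture's example, and the metric Re⟨y, s_b⟩ − (1/2)‖s_b‖² obtained by expanding the squared distance.

```python
import numpy as np
from itertools import product

# Discrete 2-tap channel standing in for the lecture's example.
h = np.array([1.0, -0.5])

def mlse_bruteforce(y, K):
    """Enumerate all 2^K BPSK sequences and return the one maximizing
    Lambda(b) = Re<y, s_b> - 0.5*||s_b||^2. Infeasible beyond toy K."""
    best_b, best_metric = None, -np.inf
    for b in product([-1.0, 1.0], repeat=K):     # 2^K candidates
        s = np.convolve(np.array(b), h)          # s_b: candidate at receiver
        metric = np.vdot(s, y).real - 0.5 * np.vdot(s, s).real
        if metric > best_metric:
            best_metric, best_b = metric, b
    return np.array(best_b)

# Noiseless sanity check: the true sequence should maximize the metric.
b_true = np.array([1.0, -1.0, -1.0, 1.0])
y = np.convolve(b_true, h)
b_hat = mlse_bruteforce(y, len(b_true))
```

Here K = 4 already costs 16 convolutions; K = 1000 with QPSK would cost 4^1000, which is why the next lecture replaces this enumeration with an additive metric and the Viterbi algorithm.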
So now we have some questions to ask. One question is: do we just say we cannot do maximum likelihood sequence estimation or detection, give up, and do something suboptimal? The answer to that is a little complicated. You do have to settle for something suboptimal on some occasions, but there is a better approach: can we actually do something to reduce the number of computations? It turns out that we can transform this quantity into an additive metric and start making decisions as we go along. In fact, this approach, which uses the so-called Viterbi algorithm, allows you to perform maximum likelihood sequence estimation without having to do the brute force search over 4^1000 or similar. That is something we will see in the next class, where we will derive the Viterbi algorithm branch metrics and use them to compute optimal decisions on the b_k's as we go along. So this is something we will cover in the subsequent lecture. Thank you.