So our next speaker is Josh Morman, who is filling in for someone who couldn't make it, so we had to swap things around. He is going to talk to us about channel equalization using GNU Radio.

Alright, good morning everyone. My name is Josh Morman. I'm from the United States, I work for an applied research organization, Peraton Labs, and I'm also one of the maintainers of the GNU Radio project. The motivation for this work: I found myself on a few projects where we needed equalizers. GNU Radio has some existing equalizers; if you dig through the block tree, you'll find them. But they're missing some features that I needed, so I needed to go in and add some functionality: namely, the ability to equalize on training sequences (the existing ones are all blind equalizers), an expanded set of adaptive algorithms, and a bit of restructuring. So we'll talk through (I think I swapped this slide) why you need equalization, a little of the theory of equalization, and then the different types of equalizers: what are the structures, why are they different, why would we need different structures? And then we'll get into the GNU Radio implementation. All the code is posted up on GitHub. There's one dependency; if you run CMake, it'll tell you it doesn't work unless you have it, so grab that if you're interested in running these examples locally.

So why do we need equalization? The main reason your signal gets messed up, and a good equalizer lets you recover it, is the wireless channel. If you're trying to transmit a signal from point A to point B, you get multiple copies of that signal bouncing off things, all arriving at the receiver. So if these are the data symbols I'm transmitting, I'm going to receive a copy, then a time-shifted copy, then another time-shifted copy, and in reality something much more complex. This is a notional example.
What that leads to is inter-symbol interference. Our nicely packed constellation gets smeared and mixed with the adjacent symbols, and we get this jumbled mess. That's what we need an equalizer for: to get from here back to here, or even cleaner depending on the SNR at the receiver. All of this has some time-domain dispersion; this is the time-domain representation of a notional channel we've set up. It also has a frequency-domain representation: if your signal is wideband enough, you're going to see peaks and nulls at different frequencies, which is frequency-selective fading. This is all in normalized frequency; we won't go into exactly how it relates to symbol rate and bandwidth. And in the time domain, if we transmitted perfectly square symbols, on the other side of the channel we'd get everything time-dispersed; in practice we'd have some kind of symbol shaping to begin with to help with ISI. So channel effects are the main reason we need an equalizer. Another reason is hardware filters. Even a slight roll-off will smear one symbol into the next; just a couple of dB at the band edge will do it. Maybe you have a receive filter, maybe an amplifier, maybe even nonlinear effects like amplifier distortion. All of it causes smearing and spreads out your constellation. One concept to keep in mind when we're talking about the need for an equalizer is coherence bandwidth. Say this is the response of the channel over frequency, and these are two different channels. The top one is a highly dispersive channel, with a very long tail on its tap response; the bottom one is less dispersive, so it has a wider coherence bandwidth, and we can operate over a wider band without needing an equalizer.
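As a quick illustration of that frequency-domain view, here's a sketch with made-up tap values (not the channel from the slides): take the FFT of a notional multipath channel and the peaks and nulls show up directly.

```python
import numpy as np

# Notional multipath channel: a direct path plus two delayed echoes
# (hypothetical tap values, chosen just to show the effect)
h = np.array([1.0, 0.0, 0.7, 0.0, 0.4j])

# Frequency response over normalized frequency
H = np.fft.fftshift(np.fft.fft(h, 512))
mag_db = 20 * np.log10(np.abs(H))

# Frequency-selective fading: the magnitude varies across the band,
# with deep nulls where the echoes add destructively
ripple_db = mag_db.max() - mag_db.min()
print(f"peak-to-null ripple: {ripple_db:.1f} dB")
```

A narrowband signal sitting between the nulls sees a roughly flat channel and needs no equalizer; a wideband signal spanning a null does not. That's the coherence bandwidth idea in one number.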
Up top, we can barely operate a very narrowband signal without needing some equalization. So this is really the problem we're looking at in this presentation: single-carrier wideband signals, and how we're going to equalize them. We want to be able to operate over this whole band and equalize away all this mess. The basic signal model: we transmit a signal, our channel model is the combination of our filter effects and our channel effects, there's additive white Gaussian noise at the receiver, and this u[n] is what we receive. So it's just a simple convolutional channel model. We're also going to assume linear time invariance at this point: we want to observe the channel over a period of time and counteract its effects, and not worry just yet about tracking its time-varying aspects. In theory, what we should be able to do is receive the signal that's been modified by the channel, with some noise, and just invert that channel. Done. It's not quite that easy. This is what we call a zero-forcing equalizer, and the problem is that first we have to come up with an estimate of the channel, and that estimate is a finite-length estimate of what is really an infinite channel response. We truncate it, and then we have to invert it, so we're taking something finite and inverting it, which is yet another approximation. It's just not going to do a good job of minimizing the error we actually care about, because we also have additive noise: inverting the channel this way does nothing to minimize the noise. So zero-forcing filters don't work great in practice on single-carrier signals, and we need another criterion.
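To make the signal model and the zero-forcing idea concrete, here's a small numpy sketch (the channel taps, noise level, and tap counts are all made up): form u[n] = (h ∗ s)[n] + w[n], then build a zero-forcing equalizer by truncating the inverse channel response, which is exactly the finite-approximation-of-an-infinite-inverse step described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# QPSK data symbols s[n]
s = ((2 * rng.integers(0, 2, 2000) - 1)
     + 1j * (2 * rng.integers(0, 2, 2000) - 1)) / np.sqrt(2)

# Notional (minimum-phase) channel h[n]: hypothetical tap values
h = np.array([1.0, 0.4 + 0.2j, 0.1])

# u[n] = (h * s)[n] + w[n]: convolutional channel plus AWGN
nlen = len(s) + len(h) - 1
w = 0.02 * (rng.standard_normal(nlen) + 1j * rng.standard_normal(nlen))
u = np.convolve(h, s) + w

# Zero-forcing: the true inverse 1/H(f) has an infinite impulse
# response, so we truncate it to a finite number of taps
H = np.fft.fft(h, 512)
w_zf = np.fft.ifft(1.0 / H)[:64]

y = np.convolve(w_zf, u)[: len(s)]
mse = np.mean(np.abs(y - s) ** 2)
print("MSE after zero forcing:", mse)
```

With a mild channel and low noise this looks fine; put a deep null in h and the 1/H(f) term blows up, amplifying noise at exactly those frequencies, which is why we move on to the MMSE criterion next.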
The optimal way to back out your channel response is maximum likelihood sequence estimation: you trace what you receive through the trellis of possible states and use the Viterbi algorithm to find the maximum likelihood sequence. That's very computationally intensive, and probably not something you would do in practice. So instead we want to look at the minimum mean squared error (MMSE) criterion for this problem. Say we want to create a filter W, which is basically the inverse of our channel: we want a W that does the best job of recovering the symbols. If W holds the filter taps we're after, then the error is the original symbols we sent minus that filter convolved with our received signal, and that's the error we're trying to minimize. So we set up a cost function, find its minimum, and, dot dot dot, we get down to the answer: R is the covariance matrix of our received signal, p is the cross-correlation of our received signal with our data symbols, and the inverse of the covariance matrix multiplied by p, w = R⁻¹p, is the optimum MMSE filter. That's obtained by taking the gradient of the cost function, setting it to zero, and finding the minimum. This is the static solution to the MMSE problem; we'll get into adaptive algorithms that can track it as your channel moves. As people move around in your environment, or your transmitter and receiver move, we need to track that and keep up with the optimal filter at every point in time. But right now we're just looking at one point in time, where we can find the best filter taps to undo the channel effects. So, structures. This is the structure that currently exists in the GNU Radio tree: the linear equalizer.
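That closed-form answer is easy to check numerically. A sketch (channel taps, noise level, and equalizer length are all made-up example values): build the tap-delay-line matrix of received samples, form the sample covariance R and cross-correlation p, and solve w = R⁻¹p.

```python
import numpy as np

rng = np.random.default_rng(1)

# Transmit QPSK through a notional channel with AWGN
s = ((2 * rng.integers(0, 2, 5000) - 1)
     + 1j * (2 * rng.integers(0, 2, 5000) - 1)) / np.sqrt(2)
h = np.array([1.0, 0.5 - 0.3j, 0.2])
u = np.convolve(h, s)[: len(s)]
u += 0.05 * (rng.standard_normal(len(s)) + 1j * rng.standard_normal(len(s)))

L = 11  # equalizer taps
# Row n of U holds the tap-delay line [u[n], u[n-1], ..., u[n-L+1]]
U = np.array([u[n - L + 1 : n + 1][::-1] for n in range(L - 1, len(s))])
d = s[L - 1 :]  # desired symbols (zero equalizer delay, since h[0] dominates)

R = U.conj().T @ U / len(U)  # sample covariance of the received signal
p = U.conj().T @ d / len(U)  # cross-correlation with the data symbols
w = np.linalg.solve(R, p)    # the static MMSE (Wiener) solution w = R^-1 p

mse = np.mean(np.abs(U @ w - d) ** 2)
print("residual MSE:", mse)
```

Unlike zero forcing, the noise shows up inside R, so this solution automatically trades off residual ISI against noise enhancement.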
The linear equalizer: our received signal is passed through a filter that gets updated in some way. It's just an FIR filter; we get an estimate of the transmitted signal after the filter and then calculate the error, and the error and the received signal are used together to update the filter. We'll talk more about the adaptive algorithms in a minute, but for right now the general structure is just an FIR filter whose taps we update based on the calculated error. There's one more variation we haven't talked about: decision-directed operation. All the current equalizers in GNU Radio are decision directed; they don't know anything about a training sequence. They make an estimate: "okay, I was expecting a QPSK constellation, and the symbol I just received was closest to this constellation point, so I'll assume that's what was sent." That's pretty good in high-SNR environments. When you get into lower SNR, it's not: the decision-directed equalizer is adding tons of noise back into the error signal, and it just won't work. Another variation on the equalizer structure is the decision feedback equalizer (DFE). The first part of it looks just like the linear equalizer: a feedforward FIR filter with an adaptive algorithm setting the taps. But there's a feedback step. What the DFE assumes is that not all of the inter-symbol interference is going to be canceled by the feedforward filter; some of it bleeds into past and future symbols. So we go ahead and make symbol decisions, as you normally would in your receiver, but then run those decisions through another filter whose output feeds back into the symbol slicer. Because the slicer's decisions are inside the loop, this is a nonlinear structure, so a DFE can handle a highly nonlinear channel.
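Here's a minimal sketch of that structure (made-up tap counts and step size; LMS, which we get to shortly, is used for the tap updates): a feedforward FIR on the received samples, a feedback FIR on past decisions, and a QPSK slicer in the loop.

```python
import numpy as np

def dfe(u, training, n_ff=8, n_fb=4, mu=0.01):
    """Decision feedback equalizer sketch (hypothetical parameters).
    Feedforward taps filter the received samples; feedback taps filter
    past symbol decisions and subtract the ISI that bleeds forward."""
    ff = np.zeros(n_ff, complex); ff[0] = 1.0
    fb = np.zeros(n_fb, complex)
    past = np.zeros(n_fb, complex)  # most recent decisions, newest first
    out = np.empty(len(u), complex)
    for n in range(len(u)):
        x = u[max(0, n - n_ff + 1) : n + 1][::-1]
        x = np.pad(x, (0, n_ff - len(x)))
        y = ff @ x - fb @ past              # feedforward minus feedback
        # Slicer: use the training symbol if we have one, otherwise the
        # nearest QPSK point (decision directed)
        if n < len(training):
            d = training[n]
        else:
            d = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)
        e = d - y
        ff += mu * e * x.conj()             # LMS update, feedforward taps
        fb -= mu * e * past.conj()          # LMS update, feedback taps
        past = np.roll(past, 1); past[0] = d
        out[n] = y
    return out
```

Note that the feedback path only subtracts ISI based on decisions; it never inverts a spectral null, which is why the DFE avoids the noise-enhancement problem mentioned next.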
So that's one situation where you need a DFE. Another is that with a linear equalizer, if you have very strong nulls in your frequency response, the equalizer can enhance the noise at those frequencies, and you get a very noisy equalized signal. We'll see some examples of that later on. All right, the current GNU Radio blocks: they work. The basis of this development was the CMA block and the LMS block, which are structurally very similar; each is just an FIR filter with an adaptive algorithm to update the taps. But, and I just thought this was funny, there's actually a comment in the GNU Radio code that says, more or less, "I don't know if this works." So it was obviously a part of the code that needed some love. Another issue with the existing blocks is that the adaptive algorithm is baked into the block itself. One of the themes we talked about at the hackfest, and that you've probably heard Marcus talk about, is modularity in GNU Radio, and one of the things we want here is a more modular equalizer structure: the adaptive algorithm is really separate from the equalizer structure, so we want to pull those apart. If you look at these two blocks, they're 90% the same code, so we want to factor out the parts that are the same and separate out the parts that are different. What we have up on GitHub is two new blocks. There's the linear equalizer block, which is essentially the existing blocks with the adaptive algorithm part stripped out, and there is now a decision feedback equalizer. Both blocks take in an adaptive algorithm object, modeled on the digital constellation objects: it just holds a few methods, and you can use it in either of the equalizer structures.
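The shape of that separation is easy to sketch. This is a hypothetical Python mock-up of the pattern being described, not the actual GNU Radio class names or signatures: the equalizer structure owns the FIR filter, and the algorithm object only knows how to initialize weights, compute an error, and update taps.

```python
import numpy as np

class AdaptiveAlgorithm:
    """Hypothetical interface: everything the equalizer structure needs
    from an algorithm, and nothing about the structure itself."""
    def initialize_taps(self, num_taps):
        taps = np.zeros(num_taps, complex)
        taps[num_taps // 2] = 1.0  # start near a pass-through filter
        return taps

    def error(self, decision, output):
        return decision - output

    def update_taps(self, taps, x, error):
        raise NotImplementedError

class LMS(AdaptiveAlgorithm):
    def __init__(self, step_size=0.01):
        self.mu = step_size

    def update_taps(self, taps, x, error):
        # Push the taps along the conjugated input, scaled by the error
        return taps + self.mu * error * x.conj()

# Either equalizer structure (linear or DFE) can then drive any
# algorithm object through the same three methods:
alg = LMS(step_size=0.02)
taps = alg.initialize_taps(7)
```

This mirrors the design in the talk, where the real blocks take an adaptive algorithm object (which in turn can take a constellation object for decision-directed mode).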
The algorithm object also takes in a constellation object if you want to do decision-directed equalization. All right. We just talked about the linear and decision feedback structures; both are filters. They're fractionally spaced equalizers, which means they take in a signal upsampled by some number of samples per symbol and decimate the output down to the symbol rate. Each adaptive algorithm is going to initialize the weights in some way, update the taps, and also provide an error estimate. So now we get to adaptive equalization: how can we track the channel state as the channel changes? There are a lot of different algorithms; this is a very small list. One thing we could do is the MMSE direct solution we talked about: directly invert the matrix every time we see the training sequence. That doesn't track very well; we'd have to keep recomputing it. LMS converges more slowly, but computationally it's very simple, essentially just one dot product. Then there's normalized LMS, and recursive least squares (RLS), which adapts very quickly; we'll see the difference between RLS and LMS. RLS is more computationally intensive, though not crazy, just a bit of matrix math. And then CMA, the constant modulus algorithm, is a blind method: rather than using a training sequence or the decisions we make, we use the property that the signal has a constant modulus, and the error is how far we are from the unit circle. Rather than doing a whole derivation of these, we'll get right down to the algorithms. For LMS: we have some initial weights, a starting point, and the minimum-error cost function whose minimum we're trying to find adaptively. We start at some weights, somewhere on this surface.
And we're going to descend in a way that gets us to the bottom: we push our weight estimate in the steepest direction toward the minimum of the cost function. The way we do that: the next weights are the previous weights plus some step size times the received signal, pushed in a direction set by the error we calculated. Normalized LMS is a very slight modification: we just normalize that step size by the power of the received signal. CMA, as I mentioned, uses the same weight update as LMS; we just calculate the error differently. Assume we're receiving a QPSK signal, an APSK signal, any kind of constant modulus signal: we calculate the error as how far our symbols are from the circle where we expect them to be. That's all CMA is. We don't have to be phase aligned at this point; we just have to be AGC'd so the unit circle is where we think it is. And then RLS is a recursive solution to the MMSE problem: we're recursively building on all of the tap updates we've already made, and there's a forgetting factor that says how much of the previous calculation to include in the next step. The math isn't terrible, just a few matrix multiplies, which makes it more computationally intensive than LMS, but it converges very quickly. And then, looking forward with this: now that we've separated the algorithms from the equalizer structure, it's much easier to add more algorithms. If you want to try out some new adaptive algorithm, it's a very small amount of code that has to be added, just a little object.
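Those four update rules fit in a few lines each. A sketch (the step sizes and forgetting factor are made-up example values; the convention here is output y = w·x, with the conjugate on the input in the updates):

```python
import numpy as np

def lms(w, x, d, mu=0.01):
    """LMS: push the weights along x.conj(), scaled by the error."""
    e = d - w @ x
    return w + mu * e * x.conj(), e

def nlms(w, x, d, mu=0.5):
    """Normalized LMS: same push, step normalized by input power."""
    e = d - w @ x
    return w + mu * e * x.conj() / (np.vdot(x, x).real + 1e-12), e

def cma(w, x, mu=0.01, modulus=1.0):
    """CMA (blind): the 'error' measures how far |y| is from the
    expected modulus -- no training symbols or decisions needed."""
    y = w @ x
    e = y * (modulus - np.abs(y) ** 2)
    return w + mu * e * x.conj(), e

def rls(w, P, x, d, lam=0.99):
    """RLS: recursive MMSE with forgetting factor lam. P tracks the
    inverse covariance, so it converges in far fewer samples than LMS."""
    Px = P @ x.conj()
    k = Px / (lam + (x @ Px).real)   # gain vector
    e = d - w @ x                    # a priori error
    return w + k * e, (P - np.outer(k, x @ P)) / lam, e
```

Per iteration, LMS and CMA are one dot product and a scaled vector add; RLS carries the L×L matrix P, which is the extra matrix math mentioned above.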
You could even add neural-network-based equalizers if you wanted to; that's a thing. And then there's OFDM, which we haven't even touched, and there are folks who are much better experts on it than me. But if you look at, say, LTE, you have training symbols spaced across frequency and time. That gives you a surface you can back the channel out of, so you can actually do a zero-forcing equalizer in LTE, and there are other things you can do. But it's a different problem than the single-carrier case. One note, which I mentioned briefly: one of the key drivers of this work was that we wanted the equalizers to handle bursty data. A lot of the blocks in GNU Radio, including these equalizers, have all the functionality baked into the work function. So, just an implementation note: I really tried in this implementation to pull the signal processing out into a function that can be called from outside GNU Radio, so you could use it as a signal processing library. In my own work I'm trying to do that more, and hopefully it's something we address in the future of GNU Radio. So let's look at some performance comparisons. I think I have these as videos; let's see if they work. Nope. Oh, there we go. Actually, you know what, rather than doing this, I'm just going to pull up GNU Radio. Let's look first at RLS versus LMS. They both get you to the same point, but at the top there's a rough measurement of the EVM of the signal: how far, in magnitude, each of these clusters is from the ideal constellation points. If you look, RLS converged down toward zero a bit quicker than LMS. It's all relative; who knows what the SNR is set to in here. But you can see the RLS converged more quickly. Now let's take a look at this one: this is a comparison of just an LMS.
LMS converged very quickly when it was given a training sequence; that was one of the other drivers of this work, being able to equalize against an actual training sequence. The decision-directed version, though, took a while to converge: it didn't know what to do, then finally latched on to what it should be doing and converged down toward zero. It was good in the end. The original GNU Radio block never quite converged in this case, under these scenarios; there were some slight implementation differences I won't get into. Okay, I think I'm starting to run out of time here, so just a quick pointer to other resources. There's another out-of-tree module, gr-adapt; I don't know this person, I don't know if Karell is here, but it's an excellent implementation of other applications of adaptive algorithms, and it's worth taking a look at. And there are some very good YouTube videos, university lectures, on MMSE, LMS, RLS, and all the derivations; I highly recommend those, they do a much better job than me. And then books and papers and things. All right, questions, anybody?

A quick production note from the moderator: if we could do a little bit of shuffling so we can get people in, I'll figure out the room changeover. And questions. Yes.

The question on the training sequences: do the algorithms require very specific training sequences, or can they be user-defined? For example, if I want to design my own customized protocol, I need to have some training sequence or preamble pilot, right, and can I use that in the code? Yes. For this particular implementation, you can give it anything for the training sequence; you just give it a series of symbols. It'll pop up real quick; I'll just show you. Don't worry. Okay. All right. This just threw in, these are just based on the stock examples in GNU Radio, in gr-digital.
There's a preamble plus data: the data is random symbols, and for the preamble I think I made up some Gold sequence. But you can give it anything.

(Inaudible audience question.) Yeah, the real constraint on your training sequence is that you have to be able to correlate against it, so the autocorrelation properties of the sequence are what's going to be the limiting factor. Oh, I see. Okay. Yeah. Thank you. Cool. Yes.

A quick question on that: the training is fine when you have a static point-to-point link, but what if one side or the other, or both, are moving through space? How can it train then? Good question. The question was: if your transmitter and receiver are moving, how can it train? If you're constantly sending new training sequences, the time between those training sequences needs to be less than the coherence time of the channel, right? And your LMS or RLS tap updates, whatever algorithm you're using, need to be able to keep up with the changes in your channel. As long as you constantly retrain, the LMS is going to track that and keep finding the minimum. How quick is that? It all depends on the channel. An indoor channel might have a very long coherence time because things aren't moving around; if you're on a high-speed train, it'll be a very short coherence time. All right. Yes?

Maybe a stupid question, but how much does this benefit if you oversample at far higher than the symbol rate? Not a stupid question. The question is: what's the benefit of oversampling at a much higher rate? I know that equalizers generally work better if you're oversampled by at least two or four; if you go beyond that, I'm not sure. Because the equalizer, it comes down to the taps. I didn't show one here that outputs the taps; that's one of the outputs of the equalizer block. I thought I had one.
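That correlation point is easy to check for any candidate preamble. A sketch using a 13-chip Barker code (my choice for illustration, not the sequence from the example): its autocorrelation has a sharp peak with sidelobes of at most 1, which is exactly the property you want.

```python
import numpy as np

# 13-chip Barker code: a classic preamble with near-ideal aperiodic
# autocorrelation (peak of 13, sidelobes no larger than 1)
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

acorr = np.correlate(barker13, barker13, mode="full")
peak = acorr.max()                                    # at zero lag
sidelobes = np.abs(np.delete(acorr, len(barker13) - 1))
print("peak:", peak, "max sidelobe:", sidelobes.max())
```

A run of random symbols with flat, low autocorrelation sidelobes works too; what hurts is a preamble whose shifted copies look like itself, since the receiver then can't tell where the training sequence starts.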
This one, does this one output the taps? Well, you know what, you're limited by the signal bandwidth. The taps it finds will form some time-domain shape that just needs to fit within the coherence bandwidth of what you're trying to do. Yes? Is anyone going to leave? You can go ahead and leave. Yeah. I don't have those numbers now. Compared to the original GNU Radio blocks, there were a couple of things I was able to VOLK-ify in here, so that definitely improved the performance, but I don't have quantitative numbers. All right. Thank you.