Well, hello again, everyone. When I spoke an hour ago I didn't actually introduce myself, so: my name is Martin, I work for a company called Ettus Research, and I'm also involved in the GNU Radio project. Today I'll be talking about gr-fec. As a German living abroad, I'm still trying to get to grips with this concept of humor, so I tried to come up with a funny title. If I ever give this talk again, though, I'll call it "Shocking Tales of Redundancies", which is an even better title. If you don't understand why, you might by the end of my talk. I also just love this picture.

So, I'm going to talk about gr-fec, which is about forward error correction. Forward error correction is part of every communication link, but it's a very dense topic: you would typically get two semesters' worth of classes on the theory alone, and I want to show you how to use gr-fec in twenty minutes. So I'm going to gloss over things very quickly. This talk is aimed at people who don't know much about forward error correction, or who don't know how to use it in gr-fec. I want to tease the topic for you, not give you a proper introduction, because I simply don't have the time.

Okay. Let's start at the beginning. One of my personal heroes, Claude Shannon, came up with most of information theory in the 1940s. As a scientist, I look up to him for various reasons. He was very smart, obviously, and he was not fixated on either theory or practice; he could do both. He built computers, and one thing people might have seen is his machine-learning mouse that finds its way through a labyrinth. It's crazy. But I think his most interesting contribution is the whole concept of information theory. Before his work, if I had asked you how much information fits on this whiteboard, you would have said: what are you talking about, the question makes no sense. Not only did Shannon come up with the framework to understand and answer that question, he also proved fundamental theorems that are still valid today and will never go away. The most important one is the so-called Shannon limit, Shannon's channel coding theorem, which states that given a signal-to-noise ratio and a bandwidth, there is a certain maximum amount of data you can carry across that channel. The remarkable thing is that he did this first; this was basically the starting point of digital communication. He laid the entire groundwork and then said: now everyone else can go figure out how to actually achieve that. I find that even more amazing than working it out the other way around.

So here is the single equation; I left all the other equations out of this presentation, and it's very simple to understand. You have (P + N) / N, which you can rewrite as 1 + P/N, with P/N being your SNR. If your SNR is zero in linear terms, that is minus infinity dB, meaning nothing gets across, the term inside the logarithm becomes one and the capacity becomes zero, so obviously you can't transmit anything. If your SNR is 0 dB, which is 1 in linear terms, you have log2(1 + 1) = 1, which means your channel rate equals your bandwidth: one megahertz gets you one megabit per second. Very simple. And it's true, and it's never going to change. Amazing.
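Written out, the equation on the slide is the Shannon-Hartley capacity theorem, and the 0 dB example follows directly from it:

```latex
C = B \log_2\!\left(\frac{P + N}{N}\right) = B \log_2\!\left(1 + \frac{P}{N}\right),
\qquad
C\big|_{P/N = 1} = B \log_2 2 = B
```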
Oh yes, a quick interlude: I have some GRC examples. At the end, I have exactly one GRC example that is not already in GNU Radio, and I will upload it to the FOSDEM website afterwards. One disclaimer: I used the maint-3.7 branch for all of this. We are currently working on GNU Radio 3.8, but it still has too many bugs for me to run this stuff; in particular, a feature called bus ports is currently broken. I'm sorry. And I'm not sure GRC files have a version detection field, so if you try to run these with the master branch, they won't work.

All right. I'm not good at computer graphics, so this is how I make pictures. Here is a very hypothetical setup: a transmit antenna and a receive antenna. I use the number 10 a lot, because in dB that works out nicely: a transmit gain of 10 dB, a receive gain of 10 dB, a noise figure of 10 dB, 10 kilometers of distance, all very even numbers. I use the free-space path loss equation to estimate my receive power; the precise numbers don't really matter. The point is that I exclude all effects other than thermal noise, which is a very, very strong assumption, since you typically do worse than that. Then I put in the basic first-year electrical engineering equations and find a thermal noise floor of about minus 103 dBm, with my noise power roughly 10 dB above that because of the noise figure. I also make some other simplifications that aren't quite accurate, but hey, I've already graduated, so nobody can take that away from me. It turns out that the equation from earlier says we should be able to achieve about three megabits per second with an arbitrarily low error rate. Arbitrarily low? Okay, then I want one bit error in ten billion years. Can we do that?

So here is uncoded.grc. This is super, super simplified: everything has been stripped out. There is no synchronization; there isn't even a channel. I'm just creating BPSK symbols, which I do because it lets me equate SNR and Eb/N0, adding some noise, turning the symbols back into bits, and measuring the bit error rate. The number you care about, which for some reason is failing me right now, is the bit error rate, and it's showing zero. That's not supposed to happen; it should be somewhere around 10^-4 to 10^-6. This is what I get for doing demos. Ah, I was playing around with the amplitude. I'm just going to crank up the noise for now. Where's my slider? I'll change this to some other random number. Whoops, that's a bit much. Whatever, it doesn't matter; the point is that I'm adding noise. Let me stop this for a second and zoom in: I have plus-one and minus-one symbols, but I've added so much noise that it looks like this, and then a whole bunch of these bits come out wrong. Now, I fudged the noise amplitude just now, so I don't actually have the SNR I mentioned earlier; but if I had that SNR, it would still look like this.
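As a rough sketch, here is essentially what uncoded.grc does, written in plain NumPy instead of a flow graph, together with the textbook BPSK error-rate prediction that comes up in a moment. The Eb/N0 operating point is chosen for illustration and is not a number from the talk:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

n_bits = 1_000_000
ebn0_db = 8.4                         # illustrative operating point
ebn0 = 10.0 ** (ebn0_db / 10.0)

# BPSK: bits {0,1} -> symbols {-1,+1}. With unit symbol energy,
# the SNR per symbol equals Eb/N0, which is what makes this setup handy.
bits = rng.integers(0, 2, n_bits)
symbols = 2.0 * bits - 1.0

# AWGN with variance N0/2 per real dimension.
noise_std = math.sqrt(1.0 / (2.0 * ebn0))
received = symbols + noise_std * rng.standard_normal(n_bits)

# Hard decisions and measured bit error rate.
ber = np.mean((received > 0).astype(int) != bits)

# Textbook prediction for BPSK: Pb = 0.5 * erfc(sqrt(Eb/N0)), about 1e-4 here.
theory = 0.5 * math.erfc(math.sqrt(ebn0))
print(f"measured BER {ber:.2e}, predicted {theory:.2e}")
```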
So if Shannon says I can transmit a whole bunch of bits, three megabits per second, without errors, and here I am transmitting one megabit per second with errors, what am I doing wrong? And the answer is: well, I guess my transceiver is not sufficiently complicated. That's the beauty of Shannon's original paper. He says a sufficiently complicated encoding system will achieve this; now you go find that encoding system. And that's my cue.

Look at the setup I used: I transmit plus-one and minus-one symbols, but I add noise, so I actually get this probability distribution of received symbols. As soon as a symbol drifts across the decision boundary, I interpret a plus-one symbol as a minus-one symbol, and that is a bit error. For a simple scheme like mine, I can trivially calculate this using the Gaussian error function, so I can accurately predict my bit error rate. But that doesn't help me, because I wanted to transmit without any bit errors. And that's where we need forward error correction. What forward error correction does, in a nutshell, is add redundancy.

Now, the next two slides are where I skip over two semesters of classes in one minute, so there's no way I can convey the full picture. But consider this case: I want to transmit four bits, 1, 1, 0, 1. You might say, okay, I'll just put them onto plus/minus-one symbols; this could be on a wire, or over the air through some other kind of modulation. So here are my bits: 1, 1, 0, 1, four of them. But who says I have to do it that way? No one. I can do whatever I want, as long as my receiver knows how I map bits onto a physical representation. I could transmit 1, 0, 1, 0, 1, 0 instead. And you'd say: wait a minute, that makes no sense, the original bit sequence doesn't even appear in that one. And I say: so what? I'm making up my encoder here. As long as my receiver knows that this six-bit code word means 1, 1, 0, 1, I'm done.

So what do we have? First, like I said, the original bits are not contained in the code word. That's allowed; many forward error correction codes do keep the original bits, but it's not a requirement. What's more interesting is that I now have more bits, and I have to transmit them in the same amount of time if I don't want to change the rest of my setup. That has interesting implications, because if I were allowed to transmit slower instead, I could just keep the original bits and send each one with a little more energy, which would also lower my bit error rate. This is where it gets really difficult, and where I have to jump ahead: you have to design the encoding such that sending more bits in the same amount of time, which worsens the error rate of each individual symbol, still comes out better overall than the uncoded transmission. Many PhD theses have been written on exactly that. A toy version of such a made-up code is sketched below.
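To make the "my receiver just needs to know the mapping" point concrete, here is a toy table-based code in the same spirit. The codebook is invented for illustration; real codes are constructed systematically rather than listed by hand:

```python
# Toy code: every 2-bit message gets a made-up 5-bit code word.
# The minimum Hamming distance between code words is 3, so the
# receiver can correct any single bit error by picking the
# closest code word.
CODEBOOK = {
    (0, 0): (0, 0, 0, 0, 0),
    (0, 1): (0, 1, 1, 0, 1),
    (1, 0): (1, 0, 1, 1, 0),
    (1, 1): (1, 1, 0, 1, 1),
}

def encode(msg):
    return CODEBOOK[msg]

def decode(received):
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(CODEBOOK, key=lambda msg: hamming(CODEBOOK[msg], received))

sent = encode((1, 0))          # -> (1, 0, 1, 1, 0)
corrupted = (1, 0, 0, 1, 0)    # the channel flips one bit
print(decode(corrupted))       # -> (1, 0): the error is corrected
```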
There are a couple of terms that I just want to say out loud so you've heard them once. If you know forward error correction, they'll make sense; otherwise, just tell yourself: I heard it in Martin's talk, I don't know what it means, and that's fine. We call a code systematic if the uncoded data is included in the encoded output. Encoding increases latency; that's a problem we have to deal with somehow. We can have multiple codes and combine them, which we typically call concatenation. Like I said earlier, the code has to do better, with its additional bits, than the uncoded transmission, and if it does, we say we have a coding gain. That's another concept I'd need two or three slides for, and I think you'd get it, but I'm going to skip it. Puncturing is an interesting feature we can employ: when we add redundancy we send more bits, but we can also leave some of those bits out again. So we first add bits and then take other bits out. Does that make sense? It does, because it lets us change the rate at which we encode; there's a little sketch of it after this section. Again, this is mostly a word I want you to have heard.

Let me give you a couple of examples of why this matters. You'd think someone could just figure out the one right way to do forward error correction, but that's not how it works, because all applications are different. Consider a satellite talking to a ground station. There is nothing between the satellite and the ground station; it's a point-to-point link. Sure, maybe a plane flies through the beam briefly, but that's not a big deal. What matters is that the satellite moves relative to the ground station: a geostationary satellite moves around a little, any other satellite moves around a lot, so the distance, and with it the SNR, changes, but usually in a predictable way. A code for that kind of link will be different from what a CD player needs. With a scratch on a CD, all of these bits are fine, then, poof, a whole bunch of bits are gone, and then the good bits continue. That is a different kind of error, and it obviously needs a different kind of code. And mobile phones have the worst constraints of all; literally everything matters in a mobile phone. You typically have bad SNR, you have Doppler shifts, you have all kinds of ugly channels, and at the same time people want to watch YouTube immediately while being on a call, so everything has to happen fast. This is where it gets really interesting.

And here are a few more names of codes that you should have heard once. If you go to school and learn about coding, the first codes they usually talk about are Hamming codes, which aren't really that relevant in mobile communications. Convolutional codes are used a lot, in Wi-Fi for example. Turbo codes and polar codes are further names of codes used in wireless communications, and then there's a whole bunch of other codes. You can see that people have thought about this a lot.
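Before moving on to the GNU Radio side, here is the toy puncturing sketch promised above. It is pure Python with made-up bit values, independent of any GNU Radio API:

```python
from itertools import cycle

def puncture(coded_bits, pattern):
    """Delete coded bits wherever the repeating pattern has a 0."""
    return [b for b, keep in zip(coded_bits, cycle(pattern)) if keep]

# A rate-1/2 code turns 3 information bits into 6 coded bits.
coded = [1, 0, 1, 1, 0, 0]

# Puncturing with pattern (1, 1, 0) keeps 4 out of every 6 coded bits,
# raising the overall rate from 1/2 to 3/4 without designing a new code.
sent = puncture(coded, (1, 1, 0))
print(sent)  # [1, 0, 1, 0]
```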
All right, what's my time? Okay, so that was basically the theory, except that I left out all the theory. But we have most of this in GNU Radio, in a modular fashion, which is useful because you don't actually have to understand all of the theory. And trust me, most people who work in wireless communications don't understand all of the theory. Not because they're lazy; it's just a lot of stuff to fit inside your head, and if you're building a point-to-point wireless communication link, you have so many things to worry about that understanding every little nuance of the forward error correction equations is a lot to ask. The same is true for equalizers, synchronizers, et cetera, which we also have in GNU Radio, so gr-fec fits well into that category.

Okay, so you build GNU Radio, you make sure gr-fec is enabled (there's usually no reason to disable it), and then you have FEC blocks available, along with a bunch of examples, some of which I will show. The first example I want to show you is called fecapi_decoders.grc, and it is actually part of the source tree.

All right, this is squashed up a little, unfortunately; even on the newer HD screens we don't have that much space available. I'll try to untangle it a bit. So what do we do here? This is effectively the same example I showed you earlier, except there is no noise anymore. We generate bits in this block and send them to the receiver along four different paths, and like I said, this is just an example, not a useful communication link. We take bits, encode them, turn them into BPSK symbols (the plus-one/minus-one representation of our bits), then decode them and look at the result. We have three different codes here, and you will notice that the block is always the same even though the codes differ.

Let me run this real quick. It's not the most enlightening visualization, but it is nice to see that things are working. The random, oops, sorry about that, the random bits I generate come out identically in all four cases. I have my uncoded bits, a thing called a dummy encoder, a repetition encoder, and a convolutional encoder. It's maybe not obvious, but this is just the representation of the bits, and they're all the same; that's why you can only see one line here, the black one, which means our encoding and decoding are doing something correctly.

But the key concept I wanted to show with this example is not the visualization; it's how the encoders are set up. Like I said, bits are generated here and go into this block called the FEC Extended Encoder. Among its parameters is a thing called an encoder object, and if I scroll down a little, I have different encoder definitions: a CCSDS encoder, a dummy encoder (I'll talk about that in a second), and a repetition encoder. All the code-specific details are hidden away in that definition block. I'll open this for a second, and you'll see there's a frame size, a streaming behavior; these are parameters that other codes don't necessarily have. And there's an equivalent decoder object with a bunch of other settings. The thing to keep in mind is: these here are the actual encoders, and those are the blocks that take the encoders and actually run them. So the distinction we make is blocks versus kernels. I need to hurry up a little. So this guy is the block, and then you pull in the encoder, and that thing we call a kernel.
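In Python, the kernel-versus-block split looks roughly like the code GRC generates from the 3.7-era fecapi examples. The signatures below are from memory and may differ slightly between versions, so treat this as a sketch rather than a reference:

```python
from gnuradio import fec

frame_size = 2048  # bits per frame, chosen here for illustration

# Kernels: the code-specific objects.
dummy_enc = fec.dummy_encoder_make(frame_size)
rep_enc = fec.repetition_encoder_make(frame_size, 3)  # send every bit 3 times
# Rate-1/2, K=7 convolutional code (CCSDS-style polynomials).
cc_enc = fec.cc_encoder_make(frame_size, 7, 2, [109, 79],
                             0, fec.CC_STREAMING, False)

# Block: the generic wrapper that runs whichever kernel you hand it.
encoder_block = fec.extended_encoder(
    encoder_obj_list=cc_enc,  # swap in dummy_enc or rep_enc; block stays the same
    threading='capillary',
    puncpat='11',             # '11' means no puncturing
)
```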
And these are exchangeable, and you can write your own very quickly and load them without having to worry about GNU Radio internals as much, which makes it easy to integrate them with other libraries and our SIMD extensions. So there are three types of blocks. Actually, there are six, because every block has an extended and a non-extended version. The extended version is, for everyone who's starting out, the only one you need, because it adds a whole bunch of sugar and some Python to make it easier to use. And then you really just have to ask yourself: do you want the continuous streaming model? Then you use this guy. Are you using async messages? Then you use this guy. Or are you using tagged stream blocks, which are streams with boundaries? Then you use this block. But you can pull the same kernel into all of them.

As you can tell by the signatures here, the purple versus the orange: the encoder takes actual bits and outputs actual bits, which you then modulate, whereas the decoder takes soft bits, a floating-point representation of your bit that includes uncertainty. We differentiate between soft-decision and hard-decision decoding, but essentially no one does hard-decision decoding. With soft-decision decoding you don't just give the decoder a plus one or a minus one (or a one or a zero); you give it a spectrum of values where the absolute value indicates your certainty. You can even give it a zero, which is neither plus one nor minus one, to say you have no idea what the bit should have been, and the decoder can handle that. The other options that are interesting are the threading model (if you have lots of stuff going on and plenty of cores, there are some processing optimizations to be had) and puncturing, which is handled by the blocks and not by the kernels, because it's an identical process for every code.

As for the available kernels: besides the convolutional codes I mentioned, we have polar codes, turbo codes, and LDPC codes. These are the codes that matter for wireless communications: polar codes will be in 5G New Radio, turbo codes are used in LTE, and convolutional codes in Wi-Fi and GSM. The dummy encoder is not actually an encoder; it's more of a debugging block, and it doesn't do anything. And the repetition encoder is the encoder that often comes up in classes: you simply send each bit multiple times. Turns out that's actually not very useful; I'll show you why in a second.
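The decoder side mirrors the encoder sketch above; the main difference is the float input for soft bits. Again, these signatures follow my recollection of the 3.7-era examples and are an assumption, not a reference:

```python
from gnuradio import fec

frame_size = 2048  # must match the encoder

# Decoder kernel for the same rate-1/2, K=7 convolutional code as above;
# 0 and -1 are the start and end states used in the shipped examples.
cc_dec = fec.cc_decoder.make(frame_size, 7, 2, [109, 79],
                             0, -1, fec.CC_STREAMING, False)

# The extended decoder wraps the kernel. Its stream input is floats:
# the sign carries the bit decision, the magnitude the confidence,
# and 0.0 means "no idea".
decoder_block = fec.extended_decoder(
    decoder_obj_list=cc_dec,
    threading='capillary',
    ann=None,
    puncpat='11',
)
```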
My first example was very brief, so here's another one that I picked as a more useful example: the polar encoder. Once again it compares what happens when you encode with what happens when you don't. The right-hand side is exactly what I showed earlier: noise and bits, nothing smart, and I end up with a bit error rate of 10^-3, so every thousandth bit is wrong. Whereas if I add polar encoding and decoding, it's fine. Why is that? Because Shannon says it's fine within the parameters I stated, and polar codes actually achieve the Shannon rate at sufficiently long block lengths. That means long latency, but it also means no bit errors, yes. Okay, eventually even the polar codes give out. There we go: eventually they also start having bit errors, and that's because my block lengths aren't actually that long, and maybe I'm also leaving the Shannon limit, but I have much more margin to work with. The flow graph is pretty much the same as before, so I'm going to hurry up, because I'm running out of time.

The other thing you can do with GNU Radio is BER simulations, but it's questionable whether that is actually the right tool for the job. The way you do it is a little different than in your typical scripted application, because the flow graph runs all of these at once and continuously updates the plot. Why is this useful? Because GNU Radio is a continuous streaming model, and you would really only use this to test the various encoders against each other. Now, I also want to caution people to interpret this correctly. We have this one in our tree, and I find it interesting because it's a little bit of a dangerous graph. Why? The lowest bit error rate here belongs to the convolutional code, so LDPC and turbo codes look worse. And then there's this red line versus this blue line, which kind of doesn't make sense: how on earth can I do better by simply repeating bits? The reason is that I'm not comparing apples to apples. In the red case, I'm actually transmitting less data, one third the amount per second, and the same is true for the rest. I haven't corrected for the different code rates: at the same channel SNR, a rate-1/3 code spends three times the energy per information bit, so its Eb/N0 is higher by a factor of 1/R, here about 4.8 dB, and its curve gets an unfair head start. So, like I said, I'm running out of time. This is more of a debugging tool, and if you want to do theoretical BER research, this is one of the rare cases where I would probably not recommend GNU Radio. But the fact that our kernels are separate from our block model means you can still share the code very easily.

Okay, that was the last example I wanted to show you. Before I conclude, I just want to name a couple of people who worked very hard on this. Nick McCarthy, I think, came up with the original FEC API, and Tim was probably involved. Then there are a couple of people who contributed the really relevant codes: Johannes, Manu and Tracy were GSoC students, and Johannes spoke about his polar code implementation at GRCon16; this is a link to the actual talk. There was another talk about polar codes at the same conference, but that work was not upstreamed into gr-fec.

So: some redundancy is good, some is bad, and what you see here is definitely of the good variety. I like the modular approach; it fits very well into GNU Radio. We do need help maintaining these codes. We have a lot of really good codes in there, but we can always increase their speed, and that is something that is genuinely difficult. And as new wireless protocols appear in the wild, we will also need more and different types of encoders. So I hope people consider taking a look at gr-fec. Thank you very much.