Hi, everyone. I'm Mike Calabro. I work at Booz Allen out in Los Angeles, where we run a digital communications lab focused a lot on communications and also on navigation and timing signals. We use software-defined radios as rapid prototyping platforms. There are a lot of great development frameworks out there, and I've used most of them; I shouldn't say all, because I'm sure there's one I haven't heard of. In general, software-defined radio has done a really good job of commoditizing digital communications. It lets a really smart software engineer or a ham, with very little formal education, plug a system into a laptop and download a new radio for free. One of the many pre-canned applications, for example, is a really cool ADS-B demo: hook an antenna up to your laptop and you get an air picture of everything around you. That's great. But for a lot of what we do, building deployable systems and systems that interact with real commercial systems, if you're just doing that, you're leaving a lot of performance on the table. The goal of this talk is to hopefully streamline some of your development, answer some questions, point you in the right direction so you can start immediately debugging some common issues, and take some of that performance back off the table and put it back into your system, so that maybe you can use these software-defined radios, integrated with GNU Radio or MATLAB and Simulink, for more interesting and more demanding applications. We'll come back to this picture towards the end of the briefing, but it's a really good example of a problem I was working on a couple of weeks ago: we had a signal of interest we wanted to process, we saw this off to the side and, spoiler, it shouldn't be there, or at least it shouldn't look like that. By the end, hopefully you'll have some tools to help solve this kind of problem yourself.
Just to level-set everybody, let's talk a little bit about what exactly a software-defined radio system is. It's much more than just the thing you plug into your computer and then into a development framework. Fundamentally, there's an RF signal propagating through some medium. It can be free space, but it doesn't have to be: it could be water, it could be a building, it could be propagating as a ground wave. All of those come with unique bonus characteristics that make processing the signal interesting. Common platforms like the USRP and the RTL-SDR all have different architectures for how they implement the RF front end. Fundamentally, all the front end is doing is mixing down to baseband into an ADC that digitizes your signal. The ADC has a dynamic range, something that will come back later and is very important for optimizing the performance of your application. There may be some filtering and decimation that happens afterwards, and then, of course, the baseband processing. The baseband processing is what you're doing in GNU Radio when you're building your flow graph: modulation, demodulation, encoding, decoding, eventually maybe generating messages that you pass up to Wireshark or some other waveform-specific processing. The older architecture for implementing these front ends, on the first generation of software-defined radios that really made their way out into academia, was RF daughterboards with relatively flat performance over some frequency range. If you wanted to process an HF signal after processing Wi-Fi, you had to swap out the daughterboard for one specifically tuned for that frequency range. Nowadays, what you're getting with the latest generations of the Ettus radios are RF integrated circuits.
The bladeRF also uses one, so there are two main manufacturers, Analog Devices and Lime Microsystems, both of which have done a really cool job of consolidating a board that was maybe about this big into an integrated circuit that does gain and does filtering. The Analog Devices chip actually has the ADC integrated into the RFIC, and that RFIC can feed directly into an FPGA or DSP and then, if you want, the host, which is typically your laptop or a desktop PC. Then you have this third class of devices like the RTL-SDR. These are really the entry level of software-defined radio hardware platforms; they weren't designed to be software-defined radios. The RTL, for example, was a DVB-T receiver, digital video broadcast for terrestrial TV signals, and someone figured out, hey, I can enable this developer mode, get the raw IQ out of it, feed that IQ into a host, and see what else I can see. While the others were designed for signal processing and have relatively flat performance characteristics over all the bands they're designed to operate on, this one does not. So you'll see all kinds of interesting things, just weird artifacts, if you try to tune outside the TV bands. It does, however, have an AGC, an automatic gain control. The two USRPs here do not have AGCs integrated onto the boards, and that has an important implication for maximizing the performance of the ADC, which we'll talk about in a minute. All right, so my favorite thing to do whenever I get a new software-defined radio hardware platform is to break open the box and figure out what all the chips on the circuit board are. Yes, sometimes there are schematics published, but even if you read the schematic, sometimes they'll just give you the family of chips; the USRPs are a good example of that.
If you look at the schematic and Google the ADC on it, a TI datasheet pops up with seven families of specific chips it might be, and you have to know a little bit more about the system to narrow it down. It's much easier just to read the chip number off the board. The takeaway here is that each one of these components that processes the RF signal has some kind of performance curve versus frequency. Ideally it's flat; typically it's not. As we'll show later for the amplifiers the USRP uses, the gain curve is pretty monotonically decreasing, so you can set your gain to 31, but you are probably not getting 31 dB of gain. All right, so now to hardware. The talk will be broken up into three main parts: hardware, the development platforms, and then some specific case studies where we walk through examples of work I've done and seen and how we solved some of the challenges. So with hardware, how do you zoom in on your signal of interest? You have an idea of something you want to collect. What do you need to know about it? Do you need to know where it is? How do you find out about it? Then the RF front end and the important parameters that drive its performance: its gain, its noise figure, whether it's doing any filtering for you, and whether there's a way to optimize that filtering; sometimes there is. And then the baseband processor, which can be implemented on some combination of the FPGA and DSP that is typically part of the software-defined radio hardware platform, or it might be implemented completely on the laptop. Typically, on boxes like the USRP, there is room on the FPGA for your own code, but most people don't know how to write VHDL, right? You might be a comm systems engineer or a software person, and you may not have the time or the inclination to go learn how to program that FPGA.
There is a really cool effort out of Ettus Research right now, RFNoC, RF Network on Chip, where they've developed an interface back to the FPGA and some pre-canned logic packages; for example, they have an FFT and a tuner that let you send samples down to the FPGA for really fast processing and then take the results back. Right now, most of the interface in GNU Radio is just one way: you have an output port, and whatever you can process, and how fast you can process it, determines what you're capable of achieving. So, if you have a background in digital communications, you may recognize this as a generic form of the link equation. The performance of a digital comm system is typically specced in terms of signal-to-noise ratio or energy per bit to noise density, Eb/N0. It's great from a theoretical perspective, and I highly recommend that when you're trying to attack a specific signal, you work this out in Excel and figure out what you think you should be seeing, but you very rarely know all the parameters, especially when you're collecting against a third-party target where you don't control the transmitter. The other important thing: the thermal noise floor is super low, right? Maybe you're in a band with atmospheric noise, so the noise floor is close to minus 120 dBm, but none of that really matters. What matters is your ADC sensitivity. If you're familiar with the USRP N210, they advertise, depending on where you look, minus 10 dBm to plus 10 dBm max input power and a minus 85 dBm sensitivity level. What that means is that if your signal is below that level, you won't see it. An example of a signal that probably everyone processes almost every single day and that falls below that level is the GPS signal, which comes in at around, I believe, minus 137 dBm.
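To make that concrete, here is a minimal sketch of the kind of back-of-the-envelope budget I'm describing, in Python. All the numbers here (transmit power, antenna gains, range) are illustrative placeholders I've made up, not real GPS system parameters; the minus 85 dBm figure is the advertised N210 sensitivity mentioned above.

```python
import math

# Rough received-power budget (dBm/dB throughout; illustrative numbers only)
tx_power_dbm = 44.0          # assumed transmit power
tx_ant_gain_db = 3.0         # assumed transmit antenna gain
rx_ant_gain_db = 0.0         # assumed receive antenna gain
freq_hz = 1.575e9            # e.g. GPS L1 carrier
range_m = 2.0e7              # assumed slant range

# Free-space path loss: 20*log10(4*pi*d/lambda)
wavelength = 3e8 / freq_hz
fspl_db = 20 * math.log10(4 * math.pi * range_m / wavelength)

rx_power_dbm = tx_power_dbm + tx_ant_gain_db + rx_ant_gain_db - fspl_db
adc_sensitivity_dbm = -85.0  # advertised N210 sensitivity from the talk

# How much external gain is needed just to clear the ADC's floor?
needed_gain_db = max(0.0, adc_sensitivity_dbm - rx_power_dbm)
print(f"Received power ~{rx_power_dbm:.1f} dBm, "
      f"need ~{needed_gain_db:.1f} dB of external gain")
```

The point isn't the exact answer; it's that even a crude budget tells you immediately whether your signal clears the ADC's floor or needs an external LNA first.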
So immediately you need some kind of external amplification if you want to process it with a USRP. And there are other consequences. For example, if your signal level is hovering around minus 80 dBm, your small signal, and there's a signal in an adjacent band that is also being processed by the ADC, your signal is going to get squished and won't be properly quantized. So it's important to amplify the band of interest to get into that sweet blue spot of the ADC. This is going to be different for every hardware platform, and it's also going to be different for different sampling rates of the ADC. What you'll find if you look up the USRP N210's ADC is that it's a family of parts: there's a 150-megasample ADC, a 100-megasample ADC, and a 60-megasample ADC, and they all have different performance curves, which can impact that. So, things you can do when you're building a receiver to fix this: the easiest and best thing is external amplification. Depending on the frequency range you're working in, you can usually get a nice amplifier for maybe 50 bucks to boost your signal. Here's the same breakdown for the N210 that I did for the RTL, calling out the key components. The ones you probably care most about are the two amplifiers on the daughterboard. This is the SBX daughterboard, and here's an example of why this matters: this is the noise figure over frequency, and this is the gain over frequency, across the frequency range the SBX is specced to operate in. What you can see is that you can have a 4 dB deviation on up to two amplifiers in your processing chain if you're using them expecting flat performance. So what does it mean when you set the gain value in the properties of the USRP source, right?
You're trying to tune this gain, and it may not correspond one-to-one with what you think it is. The good thing is that they do have a really low noise figure, which is great. But you might consider adding additional amplification in front of it. The other thing to point out is the FPGA. There is some room on the FPGA for your custom code, and with RFNoC, and I personally use MATLAB and Simulink as a development platform and have had some great success with their code synthesis for putting code onto that FPGA, you can really save your host processing, which is the next thing I'll talk about. So, the pipe is always big enough, right? Whatever you plug into your computer, whether it's USB 3.0, 2.0, or Gigabit Ethernet, there are very few signal processing applications that won't fit over those interfaces. You're processing-limited: how many samples per second can you process? Or, if you're not doing real-time processing, how many samples per second can you store before you overflow your memory buffer? And it's never as many as you'd like. From my experience, requirements that are very challenging to meet with these plug-and-play SDR-to-host interface architectures include latency requirements, say if I need to transmit in a time slot and meet some guard interval; frequency-hopping applications; wideband spread spectrum, where the signal of interest is actually only maybe a megahertz wide but is spread over 20 megahertz; and TDD and FDD timing, where the band changes from uplink to downlink. Some of those are very challenging to meet. And how do you schedule a transmit time? It's not really possible to do reliably without getting into the FPGA. So, I already talked a little bit about RFNoC; this was the old way and this is the new.
To my knowledge it's specific to Ettus and their USRPs. It has just started beta, and it is open source, so you can go participate in it right now, but what you can do with it is still limited. I really hope it takes off and becomes adopted by the community, because I think it would be really powerful to have an accessible library that you can deploy to these FPGAs, and to make that part of software-defined radio accessible to everybody. All right, development platforms and some application-specific stuff. There are three main development platforms that I'm aware of: GNU Radio, MATLAB, and this thing called REDHAWK. I think most people here are familiar with GNU Radio. It's free and open source and supports a ton of hardware, and RFNoC is coming along and integrates directly into it. My one bone to pick with it is other hardware support and some of the inconsistency I've seen in the blocks. For example, in one block decimation might be specified as a rate and in another as a factor. Consistent nomenclature is important when you're trying to develop things you might deploy. By other hardware I mean integrating into TI DSP chips or various sensor interfaces or APIs. Right now, if someone has the interest and the time to write that interface, it gets written; if not, you kind of have to do it yourself. MATLAB is known for its expensive commercial license, but what many people don't know is that for home use, for I think about $130 plus $30 per toolbox, you can have the full professional capability of MATLAB and its various hardware support packages. I am a fan of its code synthesis; I know not everyone is. I'll show an example later where we were able to do something very simple on the FPGA of the N210 with that code synthesis capability, and it solved our problem.
Before that, we would have handed it to a digital designer; maybe he's on staff, but he's super busy and just doesn't have time to take care of our problem. REDHAWK is something that's starting to come out. It was released about two years ago. It is open source, but it is maintained by the government, and, for any of you familiar with SCA, if you develop a waveform in REDHAWK, it's SCA compliant. It has very limited hardware support right now. I've heard rumors they may be adding more in the near future, but for now I would classify it more as a signal processing framework than a true hardware-in-the-loop development framework like MATLAB and GNU Radio are. Okay, so this is what you see when you double-click on your source block; it looks about the same in MATLAB and GNU Radio. Something to point out here: when you use these blocks, the UHD and the USRP have a lot of functionality that isn't necessarily exposed by them. One example is changing bits per sample: the UHD does support an 8-bit sampling mode, but good luck enabling it in MATLAB, because it's not exposed. The TX metadata is a way to timestamp the samples, so if you have a time-aware application, it's super useful for scheduling transmit times. But again, it's not exposed in Simulink, and I don't believe it's exposed in GNU Radio; someone can correct me if I'm mistaken. So let's focus on what you can affect and change. DC offset, for example: if you tune to a frequency, digitally down-convert, look at the spectrum analyzer, and see a giant spike at 0 Hz where there should be no signal, that's some kind of DC noise. One way to cheat is to use the LO offset to shift your signal over maybe 20 or 40 kilohertz and then manually down-convert it to avoid that noise.
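Here is a minimal numpy sketch of that LO-offset trick, independent of any particular SDR. The 40 kHz offset and 1 MS/s rate are arbitrary illustrative values, and the DC spike is simulated as a constant offset added to the samples.

```python
import numpy as np

fs = 1.0e6            # sample rate delivered by the SDR (assumed)
lo_offset = 40e3      # tune 40 kHz away so the DC spike misses the signal

# Simulated received block: the tone sits at +lo_offset because we tuned
# below it, plus a DC spike like the one LO leakage produces at 0 Hz.
n = np.arange(8192)
rx = np.exp(2j * np.pi * lo_offset * n / fs) + 0.5

# Manual digital down-conversion: mix by -lo_offset to bring the
# signal of interest to 0 Hz; the DC spike moves to -lo_offset.
ddc = rx * np.exp(-2j * np.pi * lo_offset * n / fs)

spec = np.abs(np.fft.fftshift(np.fft.fft(ddc)))
freqs = np.fft.fftshift(np.fft.fftfreq(n.size, d=1 / fs))
peak_freq = freqs[np.argmax(spec)]
print(f"strongest bin after DDC: {peak_freq:.0f} Hz")
```

After the mix, the strongest component is the signal at 0 Hz, and the leakage spike is parked 40 kHz away where you can filter it out.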
Decimation factor: the USRPs are sensitive to the decimation factor. On the N210 specifically, because of the way they cascade the CIC filters, choose your decimation factor wisely. Powers of two will result in, and I don't know whether to classify this as a gain or less of a loss, but you will see an SNR difference between a power-of-two decimation factor and a non-power-of-two one. The reason is the roll-off: where your signal falls on the roll-off of the filters. Even though the filter is decimating and filtering out noise, its roll-off is also taking out a piece of your signal. So use powers of two on the USRP. Then this parameter is interesting, because the latest USRPs actually have ADCs with configurable sample rates, and combined with decimation, that can solve a lot of problems. I'd rather sample at an inherently lower rate than be stuck with the fixed 100 megasamples per second and have to manually decimate down later. Okay, so that was the theory level-set, a shotgun blast of parameters and trade spaces. Now we'll focus on specific examples and applications. One common thing is: let's build a GSM network, right? It's cellular, there's lots of commercial equipment available, there's OpenBTS that can run this part of it, and all this hardware is narrowband, so it should be super easy, right? Well, you do have to do some work on the MS, and depending on what you're trying to do, you still have to do some work on the BTS. And if you actually want a COTS MS to interoperate with your network in a reliable way, before you start thinking about doing other stuff to it, this is the diagram you should be drawing. That one is the only one you might show to a manager.
But this is what you as a developer need to be aware of. OpenBTS, for those who aren't aware, is a really cool open source project that implements a GSM BTS running on USRPs. They used to make their own hardware; they've since stopped. It combines their own interface to the UHD with a couple of other open source projects to give you a fully functioning GSM network that you can camp your own phone on. You can actually share your internet connection through the Ethernet on your computer and get data running over GPRS, which is really cool. But when you want to build the MS side of it, it gets pretty challenging, because all of a sudden you have to think about all these things you may not be used to thinking about. This is the GSM frame and the GSM time slot. The blue is your data, you have guard intervals on either side, and this yellow thing in the middle of the time slot is a synchronization pattern, so your data is actually split. When you're transmitting as the MS, you basically have to start and stop inside that little red area. So this is what your interfaces look like. On the COTS side, the RF goes into probably a nice fancy Qualcomm chip, where GSM is occupying some small percentage of it, and it's also running 3G modems, 4G modems, maybe even Bluetooth, Wi-Fi, and GPS. But those delays are deterministic and known. On that chip, when you're implementing a mobile stack, you can start a counter, count for 10 cycles, stop the counter, and know exactly how much time has passed. How do you do that here? The answer is you really can't without deploying code here. So the key is either to reduce the amount of processing that happens here, or to reduce the amount of data flowing over this interface.
Either way, it will buy you closer to real-time capability. Now, it's significantly easier to build just a collector. For those who aren't familiar, this is Simulink, the MathWorks product, where you can kind of say go and the model will run; if you start falling behind, it buffers in memory on a variable-sized buffer you can configure, and it has displays similar to what GNU Radio has. Much easier to build receive-only; much harder to build transmit-and-receive capability in a single model or flow graph. Post-processing: many times people just want to record signals. Another interesting application I was working on recently was radar. Radar is interesting because you actually care about propagation time and distance. So when you're trying to test these systems, unless you're on a radar range, you have to find a really long hallway, set up an antenna, hopefully directional, point it down the hallway, bounce the signal off something reflective, and wait for it to come back. And then: how many nanoseconds was that? What was I sampling at? Am I even able to capture the reflection? In the time domain, here, we're talking roughly 30 nanoseconds per sample, and we were able to measure a nice little reflection at about 230 feet, at roughly one nanosecond per foot. In the frequency domain you can measure velocities: we had an intern run down the hall really fast, he got up to 8 miles an hour, good for him, and we measured the Doppler shift. So it's an example of an application area where software-defined radios, even though they're marketed as pieces of communications equipment, can handle any kind of signal application, so why not try radar, right?
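The hallway arithmetic is easy to sanity-check in a few lines. The 25 MS/s sample rate is the one from our collection; the 2.4 GHz carrier is an assumption for illustration only, since I haven't given the actual radar frequency here.

```python
C = 299_792_458.0        # speed of light, m/s
FT = 0.3048              # metres per foot

fs = 25e6                # sample rate from the collection
fc = 2.4e9               # assumed carrier frequency (illustrative)

# Round-trip delay to a reflector 230 ft down the hallway
d = 230 * FT
delay_s = 2 * d / C
delay_samples = delay_s * fs
print(f"round trip: {delay_s * 1e9:.0f} ns, about {delay_samples:.1f} samples")

# Doppler shift from a runner moving at 8 mph toward the antenna
v = 8 * 0.44704          # mph -> m/s
doppler_hz = 2 * v * fc / C
print(f"Doppler: {doppler_hz:.0f} Hz")
```

A round trip of a few hundred nanoseconds lands on only a handful of samples at 25 MS/s, which is exactly why you ask up front whether you can even capture the reflection.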
To enable this kind of processing, radar is an exception: here we're actually sampling at 25 megasamples per second for about one second. Otherwise, realistically, for real-time processing you can get maybe one, two, three megasamples per second if you're using GNU Radio and the pre-existing blocks in Companion. But oversample where you can, because that will give you some digital gain in your signal processing. And the most important thing you can do, I should have put this as number one, is use the full dynamic range of the ADC. Pay attention to the values of the samples coming out of the source block. If they're floating point, is the magnitude approaching one? If they're integers, what's the dynamic range of the ADC? Is it 14 bits? Is it 12 bits? And what are the values of the integers coming out of the source block? If they're in the tens, you're not even close. If they're in the hundreds, you're still not close. You ideally want to see values around plus or minus 15,000. It's also important to note, for the USRP specifically, that you have a 14-bit ADC but it's outputting 16-bit integers. How does that work? It turns out they use the most significant bits, and the two least significant bits are zero-padded. All right, so now I'm going to talk about optimizing and improving the performance of the system. I have one really specific example, and then a little bit of a case study of a very specific problem we were working through that had a bunch of unique challenges; it's related to the picture at the very beginning of the briefing. First, maximizing the baseband-to-host interface. We had a project where we wanted to do a live demo for a client, and we needed the demo to run for about five minutes, because that's about how long anyone will pay attention, and we couldn't get it there. So we had built up our receiver architecture and the deadline was approaching.
For that demo, we didn't really want to move significant parts of the receiver processing into the FPGA or DSP; that was just too high a risk. So we got the idea: at some sample rate, the SDR is delivering us 16-bit samples, I can process a million samples before I overflow, and continuous processing was important here. So what if we go into the FPGA and have it truncate the sample that comes out of the ADC and send four 4-bit samples packed into a 16-bit word? That sounds pretty easy. We were actually able to use the HDL synthesis capability of Simulink to implement it, and this is what it looks like. The blocks there are just custom packing functions, maybe one line of code each. Very simple, and it was a great example of: well, we have this product, let's see if it actually works as advertised, and it did. When comparing it to GNU Radio, I think it's important to understand what you're getting in both development platforms. I am not a VHDL developer, but I was still able to deploy code to the FPGA to solve a problem, and it did solve the problem and quadrupled our runtime. Why did this work? We took 12-bit samples and made them 4-bit samples, so we lost a lot of fidelity in what we were sampling. But because we had done our homework on the link budget and really found that blue sweet spot on the performance curve, taking full advantage of the dynamic range of the ADC, 4 bits was enough to accurately represent our signal. Okay. So this is the signal I was trying to process at the beginning. For those who aren't familiar, eLoran is a terrestrial navigation system. It was active in the United States and then it was shut down, I forget exactly when, maybe five to ten years ago, because, hey, we have GPS, so why do we need a terrestrial navigation system? Well, now it's coming back, because people are realizing that GPS isn't always super reliable.
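Going back to the truncate-and-pack trick for a moment: here is a host-side numpy sketch of the idea. The real version ran as simple packing logic on the FPGA; this is just an illustration of the bit manipulation, with made-up sample values.

```python
import numpy as np

def pack4(samples_int16):
    """Truncate 16-bit samples to their top 4 bits, pack four per 16-bit word."""
    # Keep the sign bit and the next three MSBs: values in -8..7
    nibbles = (samples_int16.astype(np.int32) >> 12) & 0xF
    n = nibbles.reshape(-1, 4)
    return (n[:, 0] << 12 | n[:, 1] << 8 | n[:, 2] << 4 | n[:, 3]).astype(np.uint16)

def unpack4(words_uint16):
    """Host-side inverse: sign-extend the 4-bit samples, restore the scale."""
    w = words_uint16.astype(np.uint32)
    nib = np.stack([(w >> s) & 0xF for s in (12, 8, 4, 0)], axis=1).reshape(-1)
    signed = np.where(nib >= 8, nib.astype(np.int32) - 16, nib.astype(np.int32))
    return (signed << 12).astype(np.int16)   # coarse, but same scale as input

x = np.array([14000, -15000, 2000, -6000], dtype=np.int16)
packed = pack4(x)           # 4 samples -> 1 word: a 4x interface-rate saving
recovered = unpack4(packed)
print(recovered)
```

The recovered samples only match the originals to within one quantization step of 4096, which is exactly the fidelity trade: it's only acceptable because the capture already used the ADC's full dynamic range.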
If you're South Korea, for example, North Korea is jamming your GPS all day. So you're seeing eLoran being deployed internationally, and they just turned a station back on, one of the old Loran stations in Wildwood, New Jersey, maybe 20 minutes south of Atlantic City. They're really interesting signals to work with, and they capture a lot of the trade space: there's no reason you can't process this signal with an SDR, but you really have to know what you're doing to do it. They're low frequency; 99% of the power is between 90 kilohertz and 110 kilohertz, not a very standard frequency range to operate in. Because of their frequency, they propagate as ground waves and as sky waves. They're subject to re-radiation: in an urban environment, and where we collected, which I'll show in a minute, was near the Chesapeake Bay with a lot of bridges, the wires in a bridge will re-radiate the signal. It's an extremely high power signal; the advertised range is about 1,000 miles. So the challenges we were facing: well, it's such a strange band. It's hard to find amplifiers for this band; it's hard to find parts whose frequency-dependent parameters go below 100 kilohertz. Most parts are specced starting at 100 kilohertz, and then when you look at the data sheets, the performance data starts at something like 30 megahertz. So even though the part is specced that low, you don't necessarily see the full performance curve, and you have to characterize it yourself and verify that it's flat. Because not a lot of work is done down here, we actually found a software bug in the interface. In this case we were using Simulink, so we found a bug in the way they were handling it, which, to MathWorks' credit, they've since fixed; they were super responsive in working with us. And the LFRX daughterboard for the USRP does not have an LNA in front of it.
So you're dealing with an extremely noisy ADC, with about a 30 dB noise figure, and you need one of those amplifiers that's hard to find for the range you're operating in. For comparison, this is what the signal should look like. This was collected on a spectrum analyzer; I believe this is a 3 Hz resolution bandwidth. So there's clearly some difference between what we're observing in our SDR development framework and what the truth is. Here there is no amplification; here there is amplification. So that's maybe a hint as to what's going on. Okay, I talked a little earlier about why you want to build your link budget. This is what we tried to do for eLoran, and it obviously just didn't work, because we don't know any of these things. We don't know the true transmit power; we know it's typically in the hundreds of kilowatts to low megawatts, but we don't know exactly what it is. We don't know the gain of their antenna. We don't know the gain of our antenna, because, again, at this frequency range you could go buy a magnetic core antenna that would give you flat performance, but this one costs $20 at Home Depot; it's just about 100 feet of wire wound around a PVC pipe. There's no capacitor to pull down the resonant frequency. But it did the job, and it worked. We don't know what our gain was. We had a significant mismatch loss; we did characterize it on a network analyzer, and we'd basically built a giant inductor. And then the noise term, which matters very much here, because you would expect maybe 50 to 80 dB of atmospheric noise at these frequencies depending on what time of day you collect. And that's important because it does depend on the time of day: if you collect at 9 a.m., you'll see different performance than if you collect at 8 p.m.
So, before you know it, one day you work really late, you leave your model where it is and think you have it working; the next day you come in, start on it in the morning, and you might see 30 dB of variation in what you were processing. If you just plug all this stuff into Excel with swags at what the parameters should be, you end up with, well, I think I should see maybe minus 50 dBm, and truth said we were at minus 99. So it helps a lot to have some measurement equipment available to validate what you're seeing. And for various signals, for example cellular coverage, there are coverage maps available online that will give you signal measurements at various monitoring points, like what the RSSI is at one mile out or five miles out; people will just drive along the highway taking signal measurements. So this is what we ended up doing. The LFRX daughterboard, we did use it. The software bug ended up being related to... well, this is where knowing your hardware really helps, because the LFRX is really just an SMA input that goes directly into the motherboard of the N210; there's not much going on there. We knew that digital down-conversion was happening in the N210, and what we were finding was that if you set the center frequency, nothing happens. So we ended up needing to do it ourselves. Fortunately, our center frequency of interest was only 100 kHz, the ADC was easily capable of capturing that bandwidth, and we were able to down-convert manually. For atmospheric noise, we did our collection in mid-morning, once we believed we had characterized it over a couple of days, which really just meant running the spectrum analyzer and having it report back once an hour with the noise floor. The LNA had a 3 dB noise figure, so that brought down our system noise figure. We ended up just testing a family of LNAs.
see which one had the best edge-of-spec performance. We were 10 kHz out of spec, and we think that was the source of the distortion we were seeing. And we ended up with this really nice... We did the correlation: Loran is just a pulsed signal, and it's designed to be super easy to receive, and you would expect something kind of like this if you were to correlate it against the Loran pulse. So at the end of the day we were able to get it, but we were only really able to get it because we put the time into analyzing the hardware and diving into the theory a little bit about what's actually going on. Okay, so in summary, if you want to write down a checklist of things to do, from our experience, this is what we do. Whenever we have to attack a signal or we're given a piece of hardware, this is our development process. We will try and build the Excel link budget. For many systems, very strong performance data is out there. For example, for the old Loran towers, the Coast Guard used to go out and do these signal surveys and publish coverage maps of what the received field strength was at various points, and they're very useful for bounding your expectations. Even if you start where we did, where you don't know what all the parameters are, as you develop the system some of them will reveal themselves to you, and then you can start populating them and sharpening the pencil: well, okay, I know I'll need at least 20 dB of amplification, so let's just start with that, and then you'll refine the receiver to get the performance that you want. Understand the hardware, so you can take full advantage of the dynamic range. Very important. For transmitting and receiving, try to move some of that processing onto the FPGA or DSP. There's usually very low-hanging fruit in whatever system you're trying to attack. If it's a spread spectrum signal, can you do the de-spreading on the FPGA? 
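The correlation step mentioned above can be sketched as follows, using the commonly published Loran-C pulse model (a t-squared envelope peaking at 65 microseconds on a 100 kHz carrier); the sample rate, noise level, and pulse offset here are all made up for illustration:

```python
# Correlating a noisy capture against the Loran-C pulse shape.
import numpy as np

fs = 1_000_000                        # 1 MS/s, assumed
t = np.arange(0, 300e-6, 1 / fs)      # one pulse, ~300 us long
# Standard published envelope, normalized to peak at t = 65 us.
envelope = (t / 65e-6) ** 2 * np.exp(2 * (1 - t / 65e-6))
pulse = envelope * np.sin(2 * np.pi * 100e3 * t)

# Received signal: the pulse buried in noise at a known (here, chosen) offset.
rng = np.random.default_rng(0)
rx = rng.normal(scale=0.2, size=5000)
offset = 1234
rx[offset:offset + len(pulse)] += pulse

# Correlate against the template and find the peak.
corr = np.correlate(rx, pulse, mode="valid")
print(int(np.argmax(corr)))           # peak lands at the pulse offset
```

This is the "designed to be super easy to receive" property in action: the known pulse shape makes a matched-filter correlation pull the timing straight out of the noise.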
Because that dramatically reduces the sample rate you need to pass into your host processor. And I have really high hopes for RFNoC. I think it will be a good... There's really nothing... I'm not aware of any other efforts that are like it, so I think it has a lot of promise for the community and for helping to commoditize some of that, putting some of that processing on the application-specific hardware. One thing I didn't mention, and you may have noticed this in the photograph of the daughterboard: terminate your unused RF ports. Going back to the USRP as an example, the isolation between the two paths is only about 20 dB, so if you're processing a high-power signal or transmitting a high-power signal, your signal will absolutely jump paths. And if you're not terminating, it's just going to reflect right back down on the electronics, which, depending on what your signal power level is, might be damaging. Which is another point to make: the datasheet will say the max input power of the ADC is X. The max input power to the amplifiers that you might put in front of the ADC, or that may already be in front of the ADC, is likely lower. As a good example, an LNA with a 5 dBm max input power is pretty typical. Amplifiers with higher noise figures can usually go a little higher. And if you're transmitting, amplify on the transmit side, not the receive side, because if you do control both ends and you amplify on the receive side, you're also amplifying your noise. On the transmit side, you're just amplifying your signal. So that's what I've got. I'd be happy to take any questions or discussion, but thanks for listening. So I haven't used the HackRF too much. I haven't used the other platforms too much. And I guess I would answer that by saying the best thing you can do would be some kind of analysis. This doesn't take that long; it may only take two hours of your time. And I'm sure it's out there in some capacity. Sorry I can't speak to that platform. Yes? 
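The de-spreading point above, that doing it early slashes the rate the host has to handle, can be sketched like this; the spreading factor, code, and noise level are all made up for illustration:

```python
# Why de-spreading on the FPGA helps: after multiplying by the chip sequence
# and integrating over each symbol, the host sees one sample per data bit
# instead of the full chip rate.
import numpy as np

rng = np.random.default_rng(1)
sf = 1000                                   # chips per data bit (assumed)
code = rng.choice([-1.0, 1.0], size=sf)     # spreading code (assumed)

bits = np.array([1, -1, -1, 1, 1, -1])      # data symbols
chips = np.repeat(bits, sf) * np.tile(code, len(bits))   # spread signal
chips = chips + rng.normal(scale=1.0, size=chips.size)   # channel noise

# Despread: multiply by the code, integrate over each symbol period.
despread = (chips.reshape(len(bits), sf) * code).sum(axis=1) / sf
decisions = np.sign(despread)
print(decisions)                            # recovers the original bits
```

The data crossing into the host drops by the spreading factor (1000x here), which is exactly the kind of low-hanging fruit the checklist is pointing at.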
Yes, sure. Some SDR platforms have systems on chip, so they have some element of embedded logic, and you can actually just code in whatever C language is used on that chip. Any of the Zynq boards, or the new Ettus E310s, are an example of something like that. Otherwise, the only thing I'm aware of that really lowers that barrier of entry to close to zero is this RFNoC capability. There is some capability there. I know they have an FFT, and they are working on other blocks and working with the community to figure out what the other blocks should look like and how to develop them. And they do deliver. It is easy to use. The blocks just drop right into your flowgraph, and to a user like me, who is also unfamiliar with detailed digital design, it's fairly transparent. But you can run some benchmarks and see huge increases in performance, as you would expect. I know because I've used it and successfully deployed code to the FPGA that does what I wanted it to do. I started not knowing about it, and through only their documentation and examples I was able to deploy the code. It does work as advertised, and it can do some pretty... It will solve your problems. The home licenses, I think, are pretty new. Once I saw that they had those, I was a big fan, because for $30 you can add the HDL generation capability of what's otherwise a very expensive license commercially. Yes? Yeah, so that's a good question. They've had these hardware support packages available for many years, and some of it is probably just personal preference. I really like their interface and their glue and their capability. I think GNU Radio has come a really long way, especially in their data visualization, where a couple of years ago the waterfall plots and the spectrograms were not super easy to get working, at least in my experience. 
And, you know, doing the development professionally, having the number you can call and say, hey, this is a bug in your software that we've discovered, please fix this, and have it fixed within a week: there's really no comparison for that. That said, we do use GNU Radio for a lot of pretty basic stuff, so where we need to get somebody trained up on SDR, on what SDR is, walking through something like, here's GNU Radio, go, here are a couple of models and flowgraphs we've built up. I mean, it absolutely has its place in our development cycle. Yes. The way they approach it is they use Xilinx System Generator to generate the HDL, and you target a specific board, so you can target the N210, any of the other Ettus boards, or you can target your own board with an FPGA on it, which we've also done. It actually preserves all of the functionality that Ettus has in there: if the Ettus code is filling up 40% of the FPGA, you have 60% to work with. You can't take up 65% and break Ettus's code. It's completely abstracted from you. Thanks.