Hi again. My name is Manolis. I'm from Libre Space Foundation, a non-profit organization in Greece. We are a team of space enthusiasts and freaks, and we use SDRs as our main way to communicate with satellites and for the other things that we do. Our main project is SatNOGS. SatNOGS is a network of ground stations that track mainly low Earth orbit satellites. For those who are not familiar with them, low Earth orbit satellites spin at high velocity around the Earth at about 400 kilometers above sea level, so you have to track them and compensate for the Doppler shift. Another very interesting feature is the network itself: you do not have much time to track a satellite, about 15 to 20 minutes before you lose line of sight and, of course, the signal, but another ground station somewhere else on Earth can continue tracking it. That is our concept and what we are trying to do. Our tracker costs about 300 to 500 euros depending on the SDR that you want to use, and we now support about six or seven different SDR devices. The tracker is completely open source and open hardware: all the schematics are available in our repository on GitLab, many of the parts are 3D printed, and all the others are easily available on the market, so you can build it yourself. It is crowdsourced, of course: every operator who wants to join the SatNOGS project builds their own tracker and registers it in the network. Until now, most of the available software out there runs on Windows, is closed source, and targets only specific missions or satellites. Another very common problem with that software is that you cannot parameterize it: it is a black box operating for a single purpose.
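The Doppler compensation mentioned above is easy to quantify. Here is a minimal sketch, assuming typical LEO numbers (roughly 7.7 km/s orbital velocity) and a UHF downlink; this is illustrative arithmetic, not SatNOGS code:

```python
# Worst-case Doppler offset for a LEO pass (illustrative numbers).
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f0_hz: float, radial_velocity_ms: float) -> float:
    """Received-minus-transmitted frequency offset in Hz.

    Positive radial velocity means the satellite is approaching.
    """
    return f0_hz * radial_velocity_ms / C

# A 437 MHz UHF downlink with the satellite approaching at 7.7 km/s
# (roughly the geometry near the horizon) is offset by about 11 kHz,
# which is why the receiver must retune continuously during a pass.
print(doppler_shift(437e6, 7700.0))
```

The shift sweeps from positive to negative as the satellite passes overhead, so the total excursion over a pass is about twice this value.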
So what we do: we provide ready decoders for a larger family of low-orbit satellites, and we give the operator the ability to choose whatever SDR hardware they are willing to pay for. Most ground stations right now use the cheap RTL-SDRs with a simple LNA in front to receive the signal properly. We also provide web services: the data that each ground station downloads gets uploaded automatically to the web for further visualization, and we have an API, so everyone can write scripts to grab the data for a specific satellite at a specific time instance. The last year was a very, very busy year for us. We had UPSat launching to space, which was our satellite; I will talk about it in a few slides. We made some very serious hardware and software improvements to the rotator and, as we were growing, our web infrastructure was completely fried, so we had to update it as well. Also, we were involved with high-power rocketry for the CanSat project. CanSat is an educational contest in which students launch small satellites inside an amateur rocket to about one kilometer altitude; then the parachute deploys and the satellite descends back to Earth. The high-power rocketry part and the telemetry of the rocket were our project. In November, we also contributed to the organization of the open source CubeSat workshop with ESA. So what is UPSat? UPSat is the first completely open source and open hardware CubeSat. It was part of the QB50 project, coordinated by the von Karman Institute, and it was designed to measure the plasma concentration in the ionosphere of the Earth. It was developed by Libre Space Foundation together with the University of Patras. We tried to use free software as much as possible: for example, the PCBs were designed in KiCad.
The structural and mechanical designs were made in FreeCAD, and all of the designs and the code are available in our GitLab repository. In April, using an Atlas V rocket on a resupply mission to the ISS, the satellite traveled to the International Space Station, where the astronauts took it. It was placed in a launch tube with a spring behind it; the doors opened, and the first one out is actually UPSat being released into space. You can see the massive panels of the ISS here. Just 30 minutes later, we were able to receive our telemetry through a SatNOGS ground station in the USA, which is actually a very good example of the benefits of a network of ground stations around the globe. Here you can see a CW signal, and there are also some FSK frames that were decoded. As I said before, high-power rocketry was another project of LSF. We designed a board that has some sensors on it: it measures G-forces and velocity, has a GPS sensor and a temperature sensor, and uses FSK to transmit the data back to Earth. The decoder was completely based on GNU Radio. This is a waterfall of the first launch, which we actually missed: we didn't manage to decode frames during the flight. If you look here, there is a strange phenomenon. The G-forces actually damaged the telemetry board and we got a frequency drift. We had a very steep filter in the receiver, and the drift pushed the signal outside this spectrum region, so we got no data. This is the team struggling to find the problem. But using GNU Radio we had stored the whole IQ of the flight, so by plotting the waterfall it was very easy to see that there was a drift. We just relaxed the low-pass filter that we had, and after that, at the second, third and fourth launches, we were okay and we got the telemetry data.
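Because the full IQ was stored, plotting a waterfall after the fact made the drift obvious. A minimal NumPy sketch of that diagnosis, with a synthetic drifting tone standing in for the rocket's telemetry (illustrative only, not the actual GNU Radio flow graph):

```python
import numpy as np

def waterfall(iq: np.ndarray, fft_size: int = 1024) -> np.ndarray:
    """Power spectrogram in dB: one row per windowed FFT, DC-centred bins."""
    n = len(iq) // fft_size
    frames = iq[: n * fft_size].reshape(n, fft_size) * np.hanning(fft_size)
    spec = np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)
    return 10.0 * np.log10(np.abs(spec) ** 2 + 1e-12)

# A tone drifting upward (a chirp) draws a slanted trace in the waterfall,
# exactly the failure signature seen on the damaged telemetry board.
fs = 48_000
t = np.arange(2 * fs) / fs
iq = np.exp(2j * np.pi * (1_000 * t + 500 * t ** 2))  # ~1 kHz/s drift
wf = waterfall(iq)
print(wf.shape)  # rows = time slices, columns = frequency bins
```

Following the column index of the per-row peak immediately shows whether the carrier stayed put or wandered out of the receive filter.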
During this year we have added some significant improvements to our SatNOGS infrastructure, and we managed to completely parameterize all the flow graphs in order to support all the different SDR devices. We have made a database of known-good values, for example for the gain and the sampling rate, for each device: for the USRP, for the Airspy, for the RTL-SDR. When a flow graph is executed on the Raspberry Pi that runs each ground station, the flow graph automatically detects the SDR device in use and applies the proper configuration for the gain and so on. But because some operators use, for example, LNAs or amplifiers, we of course allow each of these parameters to be overridden by command-line arguments. For every observation we store the IQ, then we generate a waterfall, convert it to PNG, and upload this waterfall image to our network. If the satellite we are targeting is supported, we choose one of the available real-time decoders in order to decode data. If the decoded data successfully pass some criteria, for example a CRC if one is available, or the data are meaningful, they also get uploaded to the network. As far as the automatic decoders are concerned, right now we have: a CW decoder, which is actually a Morse code decoder; AFSK1200, which is very common in space missions; APT, for the NOAA weather image satellites; FSK9600; and DUV, which is very interesting: data under voice. It is used in AMSAT satellites. They have FM transponders to allow amateur operators to speak to each other, and a very, very slow telemetry subchannel below the FM voice; I think it occupies from 0 to 300 Hz, and it uses 8b/10b encoding.
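The defaults-plus-override behaviour described above can be sketched roughly like this. The device names, gains, and sample rates below are invented for illustration and do not come from the real SatNOGS database:

```python
# Sketch: per-device known-good defaults, overridable from the command
# line (e.g. when an operator has an LNA in front of the SDR).
import argparse

DEVICE_DEFAULTS = {
    "rtlsdr": {"gain": 32.8, "samp_rate": 1e6},
    "airspy": {"gain": 15.0, "samp_rate": 2.5e6},
    "usrp":   {"gain": 45.0, "samp_rate": 2e6},
}

def sdr_settings(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--dev", choices=DEVICE_DEFAULTS, default="rtlsdr")
    parser.add_argument("--gain", type=float, default=None,
                        help="override the stored default gain")
    args = parser.parse_args(argv)
    settings = dict(DEVICE_DEFAULTS[args.dev])
    if args.gain is not None:   # explicit operator override wins
        settings["gain"] = args.gain
    return settings

print(sdr_settings(["--dev", "airspy"]))                  # stored default
print(sdr_settings(["--dev", "airspy", "--gain", "5"]))   # overridden
```

The point of the pattern is that a flow graph started with only `--dev` gets sane values, while nothing stops an operator with unusual RF hardware from overriding any single parameter.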
Right now we have under development LRPT, for the weather satellites that use digital imaging, and some FSK31 decoders, which I believe will be ready in a couple of months. All the decoders are available under the apps directory of the gr-satnogs out-of-tree module; you can go check them out and play with them. Some words about CW. Surprisingly, I didn't find any GNU Radio based decoder, so I started to experiment with one. It was quite hard, I can say, because I was coming from digital signal processing and digital telecommunications, and I didn't have much CW expertise. But with enough experimentation I ended up with a PLL-based solution. I filter the spectrum, and CW is just a sine that goes on and off; the PLL automatically shifts this sine to DC. Then I apply a very steep low-pass filter, and after taking the magnitude squared of the signal you can decide whether you have a tone or an absence of tone. It works pretty well, I can say. The problem is that the very steep filtering you have to apply requires a lot of CPU resources, so the trick is to perform the decimation in many stages and you are okay; you can run this decoder without a problem on the Raspberry Pi. Another interesting decoder is the automatic picture transmission (APT) decoder. It's an analog transmission scheme used by the NOAA weather satellites orbiting the Earth in a polar orbit. It is AM over FM in the VHF band, and you can see the spectrum on the left and the resulting image on the right. The image contains some synchronization patterns, which we use to align it and get it properly in place. About the AX.25 AFSK: as you can see, we also automatically decode this scheme. Thankfully this packet mode has a CRC, so we can check whether the data are decoded correctly, and if yes, we upload the binary data to our network for further visualization.
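The CW chain described above (PLL pulls the tone to DC, steep low-pass filtering in decimating stages, magnitude squared, then an on/off decision) can be sketched as follows, assuming the PLL stage has already centred the tone at DC. A single boxcar average stands in for one decimating low-pass stage; this is a toy illustration, not the gr-satnogs block:

```python
import numpy as np

def tone_keying(iq_at_dc: np.ndarray, decim: int = 8,
                threshold: float = 0.5) -> np.ndarray:
    """Key-down/key-up decisions for a CW tone already shifted to DC.

    Boxcar average + decimation approximates one stage of the multi-stage
    low-pass; magnitude squared and a relative threshold then decide
    tone vs. no tone for each decimated sample.
    """
    n = len(iq_at_dc) // decim
    lp = iq_at_dc[: n * decim].reshape(n, decim).mean(axis=1)
    power = np.abs(lp) ** 2
    return power > threshold * power.max()

# Synthetic test: a tone keyed on for 125 ms, off for 125 ms, at DC.
fs = 8_000
t = np.arange(fs) / fs
keyed = ((t % 0.25) < 0.125).astype(complex)  # on/off keying at DC
decisions = tone_keying(keyed)
print(decisions[:4], decisions[130:134])
```

Doing the decimation in stages like this is what keeps the CPU cost low enough for a Raspberry Pi: each stage only needs a mild filter at a progressively lower rate.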
In the same manner there is the lower bitrate FSK. Unfortunately it has no scrambling, so it's a bit harder to decode properly, and it has a very, very poor training sequence. Our main problem here was to get rid of the false alarms: we got too many of them because of the one-byte synchronization sequence it has at the front, and these false alarms made us miss frames. The problem is that you cannot apply energy detection techniques, especially in our case, because every operator uses their own SDR device and has their own RF setup, so we needed a method that is not affected by all these parameters. The solution came with the idea of observing the quadrature-demodulated signal: when there is only noise, the quadrature demodulation block generates a very noisy output, of course, but when there is a signal, the output is more structured. So by continuously measuring the mean and the variance of that signal, you can decide whether it is signal-free or carries an actual signal. To give you an idea of how powerful this filtering is, I have a flow graph: this is the decoder for the Fox-1B and Fox-1D satellites, and first I will execute it without the filtering. These tags on the blue signal are actually the false alarms; because there is only one byte of frame synchronization word, it triggers too many false alarms on the quadrature-demodulated signal. The red one is the output of our filter: when the signal is more structured, because the satellite is actually transmitting, the signal comes out of the block and we can process it further. Now I will enable the filtering. You can see there are no false alarms, and when the red signal comes, you can see some tags indicating the start and the end of the frame. Another advantage of this filter is that when no signal is present it blocks the decoder downstream, so you spare some CPU.
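The mean/variance trick on the quadrature-demodulated stream can be sketched like this: over noise the demodulator output is wideband and high-variance, while over an FSK burst it sits near the discrete deviation levels, so its windowed variance collapses. A NumPy illustration under assumed parameters (window length and threshold are invented), not the gr-satnogs block itself:

```python
import numpy as np

def quad_demod(iq: np.ndarray) -> np.ndarray:
    """Instantaneous-frequency estimate: phase of consecutive-sample products."""
    return np.angle(iq[1:] * np.conj(iq[:-1]))

def signal_present(iq: np.ndarray, win: int = 256,
                   var_thresh: float = 0.5) -> np.ndarray:
    """Per-window True/False: structured (low-variance) demod output => signal."""
    d = quad_demod(iq)
    n = len(d) // win
    return d[: n * win].reshape(n, win).var(axis=1) < var_thresh

rng = np.random.default_rng(0)
noise = rng.normal(size=4096) + 1j * rng.normal(size=4096)         # no signal
bits = rng.integers(0, 2, size=4096)
fsk = np.exp(2j * np.pi * np.cumsum(np.where(bits, 0.05, -0.05)))  # 2-FSK
print(signal_present(noise).any(), signal_present(fsk).all())
```

Because the decision depends only on the statistics of the demodulated stream, not on absolute power, it holds up across different SDRs, gains, and antenna setups, which is exactly why energy detection failed here.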
This filter can be used with every FSK or MSK modulation, so if any of you have the same problem, just go and grab the code. So what are the real challenges that we face? We have found it very, very difficult to apply filtering, especially on small form factor boards like the Raspberry Pi, and there is still poor SIMD support. We have tried the Pine64, for example, and the Tinker Board, which have better NEON instructions, but we haven't yet managed to evaluate the results in terms of performance and CPU utilization. Also, a very big problem, especially on the Raspberry Pi, is the USB communication with the SDR: for example, we cannot get samples in real time from the Airspy, and I have seen that the CPU is the bottleneck there, with one core at 100%. Another very common problem for us is low SNR. Satellite signals have very low SNR and it is very hard to detect frames in the signal, so if you want to be more accurate, or to apply a deeper search on the signal, the CPU goes completely crazy on the Raspberry Pi, so we have to keep our detection algorithms very, very simple. Another great problem is the lack of framing information. Satellite operators, for some strange reason, do not publish their framing information, so we cannot easily decode them. In most cases this is done by hand and by eye: we take the IQ, demodulate it with the quadrature demodulation block, and try to analyze the resulting bit stream. And of course another problem is that we do not have any kind of IQ database. It would be very, very helpful to have reference IQ recordings to apply your algorithms to, but for now there is no such database; we plan to build one. Right now we store the capture of each signal as audio: we convert it down to audio and upload it to the network. Our intention is to store it in IQ format, but this has many, many problems.
For example, storage may be an issue, and many of the operators do not want to spend their bandwidth, so we are a bit skeptical about it. But the audio format loses information, so if we want to go to the next step we should store in IQ format. We have seen the SigMF work and all the discussion about it, and we are willing to provide all the necessary storage, but we are a small team, so if someone from the SigMF community wants to contribute and add some kind of SigMF integration to our network, we will be glad to cooperate. If you are interested, just talk with us. Another very interesting feature that we want to add is AX.25 forwarding of the packets, because the Linux kernel has its own module that can forward this kind of packet over IP. So, for example, an operator can have a listener on a server, and when we get AX.25 packets we automatically forward them to it; that would be very helpful for the community. Last but not least, we believe that SDR can drive the next big space thing. Using SDR we have managed to track satellites, to command satellites, and to get telemetry from the rocket; it is a very good tool and it drastically reduces the cost of a mission. Believe me, when we were debugging our satellite, if we had not had SDR expertise our budget would have been 10,000 euros higher. We spoke with teams that did not have SDR expertise, and they were buying very expensive equipment just for sending FSK or getting the spectrum of FSK, which is ridiculous. So that's all from me. If you are interested, go to the Libre Space Foundation repository on GitLab, get the code, look at it, and send us comments or whatever you want. Thank you.

We can probably squeeze in one question, considering the video switchover. Any questions?

So I have one: how many people are in the Libre Space Foundation?

Most of them are right there. Can you raise your hands?
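Assuming the metadata format discussed here is SigMF, a stored observation would pair the raw IQ file with a small JSON metadata file. A minimal sketch using only SigMF core-namespace fields; the frequency, rate, and description values are invented, not a real observation:

```python
import json

# Minimal SigMF-style metadata for one observation's IQ capture.
meta = {
    "global": {
        "core:datatype": "cf32_le",      # complex float32, little-endian
        "core:sample_rate": 48_000,
        "core:version": "1.0.0",
        "core:description": "SatNOGS observation IQ capture (example)",
    },
    "captures": [
        {"core:sample_start": 0, "core:frequency": 437.505e6}
    ],
    "annotations": [],
}

with open("observation.sigmf-meta", "w") as f:
    json.dump(meta, f, indent=2)
```

The matching raw samples would live in a sidecar file with the `.sigmf-data` extension; per-frame decoder results could later be attached as entries in the `annotations` list.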
Okay, we are about 10 to 12, and we are recruiting new people to the Libre Space Foundation right now.

You are a small group; what about funding?

The initial funding was a Hackaday Prize that we won a couple of years ago by entering a contest with the tracker and the whole SDR stack. That was the initial funding, and then the UPSat project and all of these. Nothing else.

Okay. Well, thank you again.