Okay, thank you for still being here for the last talk. This last talk is about decoding Meteor M2. What I would like to emphasize is that I assume most people in this room don't care at all about Meteor M2; the point is to use Meteor M2 as an excuse for addressing all these fascinating topics. I should emphasize that I am a physicist: I was never taught any of these things, I discovered everything by myself, and I found it fascinating to discover all the signal processing techniques that you find in the various documentation about digital communication. I want to show you how getting from the raw QPSK data all the way to a JPEG image, as you can see here, is what this talk addresses. Martin did not dare to give you two semesters of signal processing in 20 minutes; I do. I should also emphasize that if you just want the image, you can leave the room, because the meteor_decoder software does a much better job than what I am going to show you, and it works very well. I want to go step by step into what is going on here: for me the topic is understanding all these things, not using readily available software. That is the topic of the talk. So, I am lucky enough to travel twice a year to Arctic regions for glacier monitoring, and hence to be under the track of all these polar-orbiting, low-Earth-orbit weather satellites, including, among the various LEO satellites, the Russian Meteor. Why are Arctic regions most favorable for this kind of monitoring? Sun-synchronous satellites fly in near-polar orbits, at 98 degrees inclination (90 degrees would be right over the North Pole). For one of these polar-orbiting satellites I have plotted the trajectory over one day: when you are in Western Europe, for example in France, the green circle is the area within which you could see the satellite above the horizon.
This circle is at an elevation of 15 degrees, and this one at an elevation of 60 degrees, below which I don't even bother to take my antenna out. In France you will get at most one pass per day of one of these polar-orbiting satellites, while in the Arctic regions you get all these passes. It is fascinating, because when you are learning to decode a new satellite, there you have 10 or 12 passes per day, while here you have one at best. So I am investigating Meteor M2 in this context, and also using the little RTL-SDR receivers, because of course when you go there you are not supposed to take a big bag full of hardware unrelated to your research. I can just put one of these little receivers at the bottom of my backpack, find any two wires to make a dipole antenna once I arrive, and I have my setup for receiving Meteor M2. I am sure many of you in this room have already listened to NOAA. The NOAA satellites are a dying breed, because NOAA is no longer renewing its constellation of analog satellites: they started in the 70s, they are now at NOAA-19, I think it was supposed to go up to 21, and then it will stop. So now we have to think about the future, and the future is digital communication. Digital communication is what is provided by CCSDS: since the last speaker could not say it, that is the Consultative Committee for Space Data Systems, basically a body trying to standardize this communication. So that is a bit of the layout of what I want to talk about. As Bastian was showing you: I have my JPEG picture, then I have TCP, then I have IP, all your OSI layers; and of course when you want to display the JPEG image in Firefox, all these libraries are there for you. For me, the exploration was: I collect this QPSK data at the output of GNU Radio processing the RTL-SDR data stream, and then how do you go from this QPSK all the way to the JPEG image?
For me it is a little bit as if you were trying to decode a JPEG image transmitted over HTTP with just an oscilloscope connected to your Ethernet cable, and it is an adventure to recover all the layers one after the other. Of course, in this talk I don't claim to go into every detail; I would like to show you the outline, the slides will be available on the website, and I hope it will make you curious about getting into this whole story. Now you might wonder: people have been working happily with NOAA, so why even bother with such complex networking? This is from a talk by Dave Israel that I saw when I was at Marshall Space Center in Huntsville. You have the International Space Station, flying at 400 km altitude, and visible, if you remember my little circles, within a radius of about 1,500 km. This means you would need one ground station every 1,500 km along the path. That is what the Americans did for Gemini and Mercury: one ship or one station every 1,500 km. That was no problem, because they only did one or two orbits, so you just needed a few stations along the orbit. But the ISS orbits all over the Earth, and of course you cannot put a station every 1,500 km over the whole planet. So what happens now is that the ISS is completely automated: the astronauts run experiments, but everything on the ISS is operated from ground stations, and the ISS is only visible within a 1,500 km radius. The ISS therefore does not talk directly to the Earth: it talks through geostationary satellites. The ISS, at 400 km altitude and moving quite quickly, talks to TDRS, the Tracking and Data Relay Satellites; the TDRS talk with each other, and a TDRS sends the signal back to Earth. The same is true for Hubble. The Hubble Space Telescope is very expensive: you don't want to operate it only while it is flying over your head.
You want to monitor the Hubble Space Telescope measurements continuously, and of course you don't think the US Air Force cares much about the science in the Hubble Space Telescope, yet they have TDRS up there in space too. So you have multiple satellites with multiple experiments, and you need a way of packetizing your data: you need to say it is satellite number X sending data from instrument number Y. And how do you do this? That is where you go from QPSK, which would be your Ethernet cable, all the way to the JPEG image, through all the layers of the OSI stack. OK, let's try to have fun with the OSI layers. First of all we need to predict when Meteor is flying over. I still use SatTrack, despite its Y2K bug, for which there is a patch that has still not been merged. You can use WXtoImg: in the paper I explain how you can cheat WXtoImg, which is no longer maintained, into thinking that one of the NOAA satellites is actually a Meteor M2 satellite, so you can use WXtoImg. And if you have internet access you can use the Heavens-Above website. Again, the beautiful thing about being in Spitsbergen, in the Arctic at 79°N, is that you get all these passes at high elevation, which of course you don't get in Western European, lower-latitude countries, where you get one or two passes at best. OK, so we know when Meteor M2 is flying; we take our RTL-SDR and collect the data with a flowgraph entirely borrowed from the Airspy website page on receiving Meteor M2.
You have a rational resampler, then your Costas loop, which locks on the frequency offset between the carrier and the local oscillator, then bit recovery, that is, clock recovery, and at the end you get soft bits. That was already the first new word I had to learn doing this: a soft bit is an I/Q coefficient where the ones and zeros are not yet saturated but are still represented by an 8-bit value, and you still need to decide whether each is most probably a 1 or a 0. I had no idea what a soft bit was. So the first question is: are my data even worth investigating? This is a spectrum; unlike GPS, we now have strong signals, so you see there is something happening here. Is it a QPSK signal? Well, we can extend what Paul Boven taught us about GPS, where BPSK is collapsed by squaring the signal: if you take the N-th power of an N-PSK signal, you collapse the spectrum spreading due to the PSK modulation. So we take our raw signal: there is something, but we don't know if it is the right modulation. We square it: the spectrum has not collapsed, so it is not BPSK. We raise it to the fourth power: the spectral spreading collapses into a carrier line, so it is QPSK. So we have a signal worth investigating further, and it seems to be QPSK modulation. Once we know this, and since it is a packetized system, something must be repeated: if we look at the documentation, we find that all CCSDS-compliant communications start with a header. If you have packets, you need to know where each packet starts, and that packet-start header is 1ACFFC1D; try to remember it, because I will keep repeating it throughout the talk. At first I don't know what the packets are, I just want to know whether there is some repeated header in the signal. As the previous speaker showed, you just autocorrelate the signal: if there is some redundancy, this redundancy will show.
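The power-of-N trick described above is easy to reproduce on a synthetic signal. The sketch below builds a random baseband QPSK stream and compares the strongest spectral line to the mean spectral level for the raw, squared, and fourth-power signal; only the fourth power collapses to a carrier line. This is a toy (pure-Python DFT, no noise, no carrier offset), not the processing chain from the flowgraph:

```python
import cmath, math, random

random.seed(1)
N = 128
# synthetic QPSK: each symbol has a random phase pi/4 + k*pi/2
sig = [cmath.exp(1j * (math.pi / 4 + math.pi / 2 * random.randrange(4)))
       for _ in range(N)]

def peak_over_mean(x):
    """Largest DFT magnitude divided by the mean spectral level."""
    mags = [abs(sum(v * cmath.exp(-2j * math.pi * k * n / N)
                    for n, v in enumerate(x)))
            for k in range(N)]
    return max(mags) / (sum(mags) / N)

r1 = peak_over_mean(sig)                    # spread spectrum: no line
r2 = peak_over_mean([v ** 2 for v in sig])  # still spread: not BPSK
r4 = peak_over_mean([v ** 4 for v in sig])  # collapses to a line: QPSK
print(r1 < 15, r2 < 15, r4 > 20)            # True True True
```

With a real recording the collapsed line would appear at four times the residual carrier offset rather than at DC, but the peak-versus-no-peak decision is the same.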
And indeed, by autocorrelating my signal I see a peak at 16 k and another at 32 k, so there is some repetition every 16,000 samples: it is definitely worth working further on this data. The first thing that got me stuck, because again I am a physicist and was never taught digital communication, was convolutional encoding. That was the topic of Martin's talk this morning; I am going slightly further into it because I want to show you how it is decoded. I won't dwell on the encoding part, because the encoding is actually very simple: as shown in the various documentation, and by Martin this morning, it is just XORs. You take your data stream and convolve it, mixing the data to create as much apparent randomness as you can, so that if one bit is corrupted you have a good chance of recovering the information, because it has been spread over a long duration. Here it is a seven-bit shift register with taps that are XORed, and you get twice as many bits on the output as on the input: the register clocks along, and you alternately take the output of one polynomial and then the other. You can also express this as a matrix, where time evolves along the X axis and the coefficients are interleaved: first coefficient of the first polynomial, first coefficient of the second polynomial, second coefficient of the first polynomial, and so on, shifting in time; that is another way of implementing the convolutional encoding. And the last way of seeing it is as a state machine: you take the various states of your shift register, you input a new bit, and the shift register state changes. If you are at zero and you inject a zero, you stay at zero, and running the XORs on all those zeros gives a zero output; if you inject a one, your oldest zero drops out, the one comes in, and running the XORs gives the output one one.
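The shift-register-and-XOR picture above can be written out in a few lines. This sketch uses the standard CCSDS rate-1/2, constraint-length-7 generator polynomials (171 and 133 octal); the bit and tap ordering here is one self-consistent convention among several, so a real decoder may number the taps the other way around:

```python
G1, G2 = 0o171, 0o133   # standard CCSDS rate-1/2, K=7 generator polynomials

def conv_encode(bits):
    """Clock bits through a 7-bit shift register, emitting two XOR
    (parity) outputs per input bit -- twice as many bits out as in."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0x7F           # shift the new bit in
        out.append(bin(state & G1).count("1") & 1)  # taps of polynomial 1
        out.append(bin(state & G2).count("1") & 1)  # taps of polynomial 2
    return out

print(conv_encode([1, 0, 1, 1]))   # 4 bits in, 8 bits out
```

Each output bit is the parity (XOR) of the register bits selected by one polynomial, which is exactly the tapped-shift-register drawing in the documentation.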
So you can express it as a state machine, and once you have a state machine expression you can write down the transitions between the various states. I gave names a, b, c, d to my states and drew the state machine: a stays in a if it is fed a zero, and outputs zero zero; a goes to b if it is fed a one, and outputs one one; and so on. So encoding is very easy and very efficient: it is just XORs. Now, the reason I wanted to show you this is that if you take the same description, but use it to decode, you get the Viterbi decoding algorithm; here is my 30-second description of it. Imagine I have received this bit stream. I split it into pairs: zero zero, zero zero, zero zero, and so on. I start with zero zero: that is most probably state a with output zero zero; then zero zero again, state a; zero zero again, state a. These three zero-zero pairs are just looping in state a. Now we get one one: one one is a feasible output of state a, and it takes us into state b. In state b we receive one one, but that is not possible: state b can only output one zero or zero one. At this moment we don't know which option is best, so let's follow both possible paths; we know each is wrong somewhere, but we follow both. After that we receive zero one. We could be here in c, but c cannot output zero one, only one one or zero zero, so c would accumulate two errors: that is the wrong path, so we cut it, and Viterbi tells us not to follow it. We go instead down this other path, because zero one is a valid output of d, which is decoded as a zero.
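The branch-pruning walk just described is the whole algorithm. Below is a toy hard-decision Viterbi decoder over a four-state, rate-1/2 code (generators 7 and 5 octal, whose four states play the a, b, c, d roles), deliberately smaller than the real K=7 CCSDS code so the trellis is readable:

```python
def parity(x):
    return bin(x).count("1") & 1

GENS = (0b111, 0b101)   # toy generators; NOT the CCSDS 171/133 pair

def step(state, b):
    """Shift bit b into a 2-bit state; return (next_state, output pair)."""
    full = (b << 2) | state
    nxt = ((b << 1) | (state >> 1)) & 0b11
    return nxt, tuple(parity(full & g) for g in GENS)

def encode(bits):
    state, out = 0, []
    for b in bits:
        state, pair = step(state, b)
        out.extend(pair)
    return out

def viterbi(received):
    INF = 10 ** 9
    metric = {0: 0, 1: INF, 2: INF, 3: INF}   # start: register full of zeros
    paths = {s: [] for s in metric}
    for i in range(0, len(received), 2):
        r0, r1 = received[i], received[i + 1]
        new_metric = {s: INF for s in metric}
        new_paths = dict(paths)
        for s, m in metric.items():
            if m >= INF:
                continue
            for b in (0, 1):
                nxt, (o0, o1) = step(s, b)
                cost = m + (o0 != r0) + (o1 != r1)   # branch errors, in red
                if cost < new_metric[nxt]:           # keep only the survivor
                    new_metric[nxt] = cost
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)               # lowest-error end state
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]
coded = encode(msg)
coded[3] ^= 1                     # corrupt one transmitted bit
print(viterbi(coded) == msg)      # the single error is corrected
```

Cutting every branch except the cheapest arrival at each state is exactly the "two errors means we give up on this branch" rule from the slide.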
Then you go on and follow your path. We write the output bits, and in red the number of errors: two errors means we give up on that branch, and we continue with the branch with only one error; this unique error continues along a consistent path, which tells you that this one bit in the transmission was erroneous. So you see that by spreading the information over a long duration, a burst of one bit error, just noise on one bit, has been recovered by Viterbi decoding, and indeed we recover 1A, which is the first byte of our synchronization word. OK, so we have understood Viterbi decoding. Now, if you don't want to do all the math by yourself, you have libfec, by Phil Karn, and libfec will do the job for you. Here I put a very simplified chart of running libfec for Viterbi decoding. Just don't make the same mistake I did: libfec will not take 0 or 1 as input, you need to feed it 0 or 255, because it works on bytes. I struggled for a couple of weeks wondering why libfec was not decoding, just because I was giving it 0 and 1 instead of 0 and 255. And again, the encoded word here is decoded as 1A CF FC 1D; you can use libfec to encode or to decode, this is the encoded word and this is the decoded word, and this is how you do it with libfec. So we have libfec, and we can check that we can indeed decode our word. This example was given to me by the author of gr-satellites, Daniel Estévez, I hope I pronounce his name correctly: you have the libfec decoder here, as shown by Martin this morning, and if I feed my GNU Radio decoder with the encoded word, I indeed get the output 1A CF FC 1D, except that sometimes I get wrong messages.
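The 0-versus-255 pitfall is worth a two-line helper. As described above, libfec expects soft symbols as unsigned bytes, 0 for a confident 0 and 255 for a confident 1 (values in between expressing uncertainty), so a hard-bit stream has to be stretched to the byte range before being handed over; a minimal sketch, assuming that convention:

```python
def to_soft(bits, lo=0, hi=255):
    """Map hard bits to the byte-range soft symbols libfec expects."""
    return bytes(hi if b else lo for b in bits)

print(list(to_soft([0, 1, 1, 0])))   # [0, 255, 255, 0]
```

Feeding raw 0/1 values instead means every symbol looks like a near-erasure, which is exactly why nothing decodes.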
That is because my input stream is repeating, and when I repeat the input stream, the Viterbi hypothesis of starting with a shift register full of zeros is no longer correct: after the first decoding, the shift register is not full of zeros, so I get one wrong sequence and then go back to 1A CF FC 1D, the correct sequence. You can play with this; it is an opportunity to see how the CCSDS header word is decoded by the libfec decoder in GNU Radio. So what is the sequence? As we did for GPS, I should now be able to correlate the signal with the synchronization word after Viterbi encoding. And it fails miserably: you see absolutely no correlation peak, I cannot find the bits of my encoded sync word in my QPSK signal. Why is that? Well, I had assigned the usual constellation to my QPSK. QPSK has four phases, at 90, 180, 270 and 360 degrees, and I had assigned a pair of bits to each one of these symbol states. But why would I do that? Why would the transmitter not assign a different bit pair to each symbol? This is actually what you figure out when you read the source code of meteor_decoder: it starts by creating all the possible rotated copies of the constellation. Take the standard QPSK assignment of bit pairs: in BPSK you can have a phase of 0 or pi, but it doesn't matter, because the rotation just turns a 0, 1 into a 1, 0 and you still correlate, only as an anti-correlation. For QPSK, though, you have all these possible rotated positions, and you can also swap the real and imaginary parts. If you look into meteor_decoder, you indeed find, I will not go through it with you, that all the possible bit-pair swaps are generated: 1, 1 can become 0, 1; 1, 1 can become 0, 0; 1, 1 can become 1, 0. You make all the possible combinations, and because you don't care about anti-correlation, these 8 possible bit swaps reduce to 4 combinations, since pairs that are pure inversions of each other count as the same.
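The rotated-copies search can be sketched directly: derotate the symbol stream by each of the four 90-degree candidates and keep the one where a known pattern maps to the expected bits. The Gray-style axis-sign mapping in `qpsk_bits` is just one assumed convention; which mapping is right is exactly what the search decides:

```python
import cmath, math

def qpsk_bits(sym):
    """Bit pair from one symbol under an assumed sign-per-axis mapping."""
    return [1 if sym.real < 0 else 0, 1 if sym.imag < 0 else 0]

def derotate(stream, k):
    """Return the stream rotated back by k * 90 degrees."""
    rot = cmath.exp(-1j * k * math.pi / 2)
    return [s * rot for s in stream]

# a toy sync pattern; the Costas-loop ambiguity rotates it 90 degrees
tx = [cmath.exp(1j * (math.pi / 4 + math.pi / 2 * q))
      for q in (0, 1, 3, 2, 2, 0, 1, 3)]
want = [b for s in tx for b in qpsk_bits(s)]
rx = [s * 1j for s in tx]            # what the receiver actually locks on

for k in range(4):
    if [b for s in derotate(rx, k) for b in qpsk_bits(s)] == want:
        print("sync found after derotating", 90 * k, "degrees")
```

In the real decoder the "known pattern" is the Viterbi-encoded 1ACFFC1D word, and the winning branch is the one whose correlation peaks repeat every frame.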
Having done that, you can now see here all the possible correlations, and only one assignment gives correlation: I don't know if you can see it from the room, but there is no correlation peak in all these cases, while here you get correlation peaks every 16,000 bits. This means this is the right assignment of bit pairs to symbols. So now I have found how to convert my QPSK signal to the Viterbi-encoded synchronization word, and with that sync word I can start decoding my sentences. I will mostly skip Reed-Solomon. Reed-Solomon is actually a block encoder: Viterbi eliminates random bits that have flipped during the communication, while Reed-Solomon handles corrupted blocks. The reason I skip it is that I investigated more deeply BCH, the block encoder in RDS, which Bastian investigated heavily; I put a reference here where you can look at how BCH works, and Reed-Solomon is just an extension of it. The only reason I mention it: you have your data here, and if whole blocks of data have been corrupted during emission, Reed-Solomon block correction can recover them. If you want to use it, just be aware that, again because you want to spread information over time to recover as much as possible, the Reed-Solomon codewords are interleaved: you have four interleaved Reed-Solomon codewords, with bytes ordered data 1, data 2, data 3, data 4, data 1, data 2, data 3, data 4, so you need to deinterleave, run the Reed-Solomon recovery, and then reinterleave your data. I skip the details only because I don't have the time, but again I give you an example of how to run the Reed-Solomon decoder in libfec. If you want to give it a try yourself, here is my data set, in which I deliberately corrupt four bytes, either in the payload or in the correction code.
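The four-way interleaving described above is simple to sketch. The depth of 4 is the one quoted in the talk; the byte values below are stand-ins for received data, and the Reed-Solomon correction itself (done by libfec on each deinterleaved codeword) is omitted:

```python
DEPTH = 4   # four interleaved Reed-Solomon codewords

def deinterleave(stream, depth=DEPTH):
    """Split d1 d2 d3 d4 d1 d2 ... back into the individual codewords."""
    return [stream[i::depth] for i in range(depth)]

def interleave(blocks):
    """Merge the corrected codewords back into channel byte order."""
    return [b for group in zip(*blocks) for b in group]

chan = list(range(20))               # stand-in for received bytes
blocks = deinterleave(chan)
print(blocks[0])                     # [0, 4, 8, 12, 16]
print(interleave(blocks) == chan)    # True
```

Run the RS decoder on each element of `blocks`, then `interleave` the corrected codewords to restore the original byte order.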
If I run this through the Reed-Solomon decoder, libfec indeed detects four corrupted bytes, tells me which ones they are, and recovers their values: you don't only discover which bytes are corrupted, you also get the proper initial values of those bytes back. Again, this is just a demonstration; you have to run it yourself, because I can talk as much as I want, but if you don't run it yourself, you don't learn. So, try it yourself. Good: we claim to have understood how the Reed-Solomon decoder works. So here are the bits we get out of the Viterbi decoder. Can I now claim I have done the job and go away? Well, we want a picture, not a random set of bits. The first thing we can do, looking at the documentation of the Meteor M2 transmission, for which I give references on the last slide, is to see that there is telemetry data, and that this telemetry is encoded in a recognizable sentence, so that you know where your telemetry data are located. You have this magic sentence, 224 168 163 146 ... 191, which tells you: I am sending a telemetry frame. So what do we do? We take all our decoded bits and cross-correlate them with this sequence, and this is one of those awe-inspiring moments where it works: you find this sentence in your bit stream, and if you decode the following bytes as hours, minutes, seconds, you find that the information was collected at 11:48:33, which is indeed the output of meteor_decoder, used here as a reference. So we have properly decoded Viterbi (in this part I actually skip Reed-Solomon), because we can find the telemetry sentence and decode proper information. So I have found the telemetry, but we are still a bit far from pictures.
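The marker search above amounts to finding a fixed byte pattern in the decoded stream and reading the bytes that follow. The sketch below uses only the first four bytes of the sentence quoted in the talk (the middle bytes are elided there, so they are elided here too), and the offset and plain-byte encoding of the hour/minute/second field are assumptions of this toy:

```python
MARKER = bytes([224, 168, 163, 146])   # prefix of the telemetry sentence

def find_telemetry(stream):
    """Return every offset at which the marker occurs."""
    hits, i = [], stream.find(MARKER)
    while i != -1:
        hits.append(i)
        i = stream.find(MARKER, i + 1)
    return hits

# toy stream: garbage, marker, then hour/minute/second bytes (11:48:33)
stream = bytes([1, 2, 3]) + MARKER + bytes([11, 48, 33]) + bytes([9, 9])
for pos in find_telemetry(stream):
    h, m, s = stream[pos + len(MARKER): pos + len(MARKER) + 3]
    print(f"telemetry frame at offset {pos}: {h:02d}:{m:02d}:{s:02d}")
```

Matching the recovered timestamp against the reference decoder's output is the sanity check that the whole Viterbi chain upstream is correct.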
The next part, the end now, is easy, although I won't go into the details; it took me a couple of months, and this work was started a bit more than a year ago. Once you have the bits, it is just a matter of finding what the bytes are and checking that they follow the standard. You see that you have this header, which is always the same, telling you that we are doing a good job; there is an ID; then there is a counter, and indeed these three bytes increase one by one, so we are on the right path; and then there is a header field giving the address of the first payload. The difficulty is that you have data packets and payload packets, and there is no reason for the data packets to be synchronized with the payload packets, so a payload may lie across multiple data packets: this field is the address at which the first payload packet starts within the data packet, and so on. I won't go into the details, but it is just a matter of following the protocol; once you have the bytes it is really easy. Finally, the payload is supposed to contain JPEG images, and this is where I gave up: I said, OK, I am not going to rewrite the whole Huffman decoder and everything, so here I just took the port of meteor_decoder, the Meteor M2 decoder that was ported from Pascal to C++, and used its JPEG decoding. JPEG is standard bachelor-level computer science signal processing training; I didn't want to write it all again, maybe for a next training session, but I wanted to get some images. So, to conclude the talk, this is what I get at first. You are told in the standard, and again it is all detailed in the paper I uploaded on the FOSDEM website, that your JPEG images come as 8×8 pixel thumbnails, that these thumbnails repeat 14 times, and that each of these 14-thumbnail sequences repeats along one picture line.
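The data-packet/payload-packet misalignment described above is handled by that first-header address: each frame says where the first payload packet starts inside it, and anything before that offset is the tail of a packet begun in the previous frame. A toy reassembly sketch; the fixed packet length and field layout here are illustrative, not the real CCSDS framing:

```python
PKT_LEN = 4   # toy fixed packet length; real packets carry a length field

def reassemble(frames):
    """frames: list of (first_header_ptr, data_zone). The pointer is the
    offset of the first payload packet that starts inside this frame."""
    packets, buf = [], b""
    for ptr, zone in frames:
        buf += zone[:ptr]               # tail of a packet begun earlier
        if len(buf) == PKT_LEN:
            packets.append(buf)
        rest = zone[ptr:]
        while len(rest) >= PKT_LEN:     # whole packets inside this frame
            packets.append(rest[:PKT_LEN])
            rest = rest[PKT_LEN:]
        buf = rest                      # packet straddling into next frame
    return packets

frames = [(0, b"AAAABB"), (2, b"BBCCCC")]
print(reassemble(frames))   # [b'AAAA', b'BBBB', b'CCCC']
```

The `BBBB` packet straddles the two frames, which is exactly the case the pointer field exists to resolve.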
Then you jump to the next instrument: there are three wavelengths, which they call three instruments, you get one line of the next instrument, 8 pixels high and about 1,500 pixels long, then you go to the next instrument, and so on. Here you see that I had some missing frames, some missing thumbnails that I had to fill in. What I did is that, because you have a counter, you know when frames are missing, so very naively I did: if a frame is missing, copy the previous frame; this way I could fill the missing thumbnails, and here you start seeing some pattern. Then there is one parameter in JPEG called the quality coefficient, which gives you the relation to the quantization matrix. Here, with no quality correction, you see that the sharp parts of the picture don't have the same tone as the flatter areas, but you start seeing the Alps; and finally, by applying the quality coefficient, you get an image that is a bit smoother and that compares, I think, quite favorably to the reference picture that was decoded using the satellite decoder you can find on the internet. Here you see Istria, you have Lake Balaton somewhere over here, and I think you have Venice over there. So if you take the meteor_decoder output, you get an image that is quite consistent with what we got by step-by-step decoding. That was, of course, in 25 or 27 minutes, a very fast highlight of the main steps: go through the paper, which is about 50 pages long at the moment and growing, and which tries to put in every detail, from the I/Q coefficients all the way to the JPEG image. CCSDS is a protocol for space communication: you might not care about weather satellites, but as Daniel Estévez shows on his blog, and as mentioned by Paul, this is the standard for most satellite communication, and it is going to be the future, because NOAA is going to stop the analog satellites. So if you are interested in satellite decoding, this is really worth investigating.
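The naive gap-filling strategy described above (repeat the previous thumbnail whenever the counter jumps) fits in a few lines; the string thumbnails below are stand-ins for 8×8 pixel blocks:

```python
def fill_missing(thumbs, counters):
    """Keep image rows aligned: when the frame counter jumps, duplicate
    the last thumbnail once per missing frame."""
    out, last = [], None
    expected = counters[0]
    for c, t in zip(counters, thumbs):
        while expected < c:          # a gap: repeat the previous thumbnail
            out.append(last)
            expected += 1
        out.append(t)
        last = t
        expected = c + 1
    return out

print(fill_missing(["A", "B", "E"], [0, 1, 4]))   # ['A', 'B', 'B', 'B', 'E']
```

Duplicating is crude but keeps every later thumbnail at its correct position on the picture line, which is what matters for the image geometry.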
Daniel Estévez works on amateur satellites; Lucas Teske is a Brazilian who works on the GOES geosynchronous satellites, and his website was very inspiring: even though at some point our decoding paths split, his beginning was very insightful, and he helped me a lot by email. These two helped me a lot. This is the website that was hosting all the files about Meteor M2; somewhere in the middle of this investigation it disappeared, and I don't know where the site went, hopefully there is a web archive. It was a fundamental repository of all the data, some of which cannot be found anywhere else. And finally there is one article which is not very technical, but which tells you that it could be done. With that I conclude my talk, and I thank you for your attention. Just as a quick conclusion: Martin mentioned in his introductory talk that we are organizing the European GNU Radio Days. My frustration is that here we have one day full of talks and we never have time to discuss with each other, everyone is running to their sessions, and I wanted an opportunity to meet people and sit together. So the way we organize this is one day of oral presentations and one day of tutorials; everything is open at the moment, we are proposing some tutorials, and please feel free to propose new ones. Robin Getz is coming from Analog Devices to demonstrate the Pluto, or so he told me; I hope so, and I trust him. It is located in Besançon, France; Besançon is a small remote city, which means hotels are readily available; it is in the east of France, a two-and-a-half-hour train trip from Paris and a couple of hours from Karlsruhe. The call for contributions closes March 21st. Registration is free, but please register, because I need to organize and to know how many people are coming; the registration deadline is May 1st. The website is over here.
And hopefully the evening dinner will be a barbecue, so that everyone can talk to each other and have more time to discuss. I will not waste more of our time with advertisement, but please come.