All right, so we're going to get started. This talk is about localization of RF transmitters. If you've been working in the wireless communication community for some time, at some point you will work on localizing RF transmitters. There's been a huge amount of money in these particular applications all through history. And it was at a standstill for quite a while; then, when SDRs came around, there was a huge influx of interest again in the scientific community, because all of a sudden we could actually test all those algorithms that we'd been developing for decades. This talk in particular targets angle-of-arrival estimation, so determining the direction of an RF transmitter. How would you do this classically? Well, if you want to determine the direction of a transmitter, you would use a calibrated multi-antenna array. You have multiple antenna elements, and if the front ends are properly calibrated, you can look at the phase difference of the signal received at each antenna and determine the direction of the transmitter. Tons of algorithms have been published on this: MUSIC, beamforming, ML estimation, and so on. So what are the drawbacks of using multi-antenna arrays? There are actually two big ones. The first: they're quite expensive. If you have four antennas, you need four front ends, and that's costly. Even with software-defined radios, it's still a big financial threshold. The second problem is that these arrays are quite big. The antennas typically need to be separated by half a wavelength, so with four antennas around 1 gigahertz, you end up with an array about this size. Not so practical to put in your cell phone or your laptop.
So this talk presents a way of doing angle-of-arrival estimation with a single antenna, combining it with IMU signals, which every cell phone has nowadays. The idea is the following. Imagine you have a transmitter which is sending a periodic signal, or just multiple signals, and you know the header of that signal, which would be the case in any practical standard. As your receiver is moving around, it receives several of these data packets, and you can view each received data packet as a virtual antenna element. You can then combine the signals from these different received packets to determine the direction of your transmitter. The outline is as follows. I'm first going to show the different challenges and how this differs from conventional MIMO estimation, then show how we can solve these problems, in particular how we can solve the frequency offset problem and how we can determine the location of our receiver through IMU processing. And finally, I'll show the implementation and some results. If you compare a regular MIMO array with this virtual MIMO idea, there are two main differences. The first is that in a regular MIMO array, you know the position of your antennas: you know they are along, say, a linear array, and you know the distance between them. You don't know this in the virtual MIMO case; you're just moving around, so you need to determine where your receiver is each time it receives a data packet. The second problem is frequency offset. In a regular MIMO array, there is no frequency offset between the signals received at the different antennas, because they all use the same local oscillator. In our case, because you're receiving the signal at different time instants, the frequency offset between your transmitter and your receiver causes the signal to accumulate additional phase each time you receive a new packet.
So just to illustrate this frequency offset problem for those who are a bit less familiar with wireless communication: the receiver stands still here and receives multiple packets. At the first instant, it receives some baseband signal. At the next time instant, at time t1, it receives the same signal with some phase shift, which is due to the frequency offset. Now, if we have a line-of-sight channel, which we're going to take as the main assumption here (so we're not going to deal with multipath at this stage), the phase of each received packet is given by the following term: you have some initial phase, a term due to frequency offset, and then a term due to the displacement of the receiver. If you remember this from a signal processing or antennas course, this last one is basically the beam steering vector. So the differences between conventional MIMO and this virtual MIMO are the two terms in the red boxes. The first is the frequency offset, which you don't have in the conventional case. The second is that you don't know the x_n and y_n of your different virtual antennas. In the first part, we're going to see how we can deal with the frequency offset term, which is actually fairly easy. In the second part, we're going to see how we can estimate these x_n and y_n terms using inertial measurement units. For the first part, we started with a very naive approach: a stop-and-start approach. The receiver first stands still; only the frequency offset changes, so we estimate the frequency offset, and then we use that estimate when we start moving to compensate the frequency offset out of our signals. Once you've compensated the frequency offset out, you can use whatever angle-of-arrival estimation technique you want: ESPRIT, MUSIC, beamforming, ML, whatever you prefer.
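To make that phase expression concrete, here is a small numeric sketch of the model. The 667-microsecond packet spacing matches the setup described later in the talk; the symbol names (phi0, delta_f, theta) and the numeric values are my own illustrative assumptions.

```python
import numpy as np

def packet_phase(t, x, y, theta, delta_f, phi0=0.0, wavelength=0.3):
    """Phase of a received packet in a line-of-sight channel:
    initial phase + frequency-offset term + displacement (steering) term."""
    k = 2 * np.pi / wavelength                   # wavenumber
    return phi0 + 2 * np.pi * delta_f * t + k * (x * np.cos(theta) + y * np.sin(theta))

# Receiver standing still: only the frequency-offset term advances the phase,
# so consecutive packets differ by a constant increment 2*pi*delta_f*T.
t = np.arange(4) * 667e-6                        # one packet every 667 us
static = packet_phase(t, x=0.0, y=0.0, theta=np.deg2rad(90), delta_f=100.0)
```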
Now, this works provided you have a decent frequency estimate while standing still, and provided the frequency offset doesn't change too much between the moment you estimate it and the moment you use it for compensation. Basically, you should not wait too long between the estimation and the moment you move, which also means that the movement itself should be quite short. That's okay, because with angle-of-arrival estimation you can only use these kinds of algorithms within a certain spatial area anyway: you cannot move too much, otherwise your stationarity assumptions no longer hold. But it's quite impractical, because it means you first need to stop and then start moving. In a practical scenario, if you imagine this in a car or on a drone, that's not going to work very well. There is another approach which works quite well, which is simply augmenting the signal model used in your angle-of-arrival estimation algorithm to include the effect of the frequency offset. If you use this kind of signal model in your MUSIC algorithm, you can augment the steering vector with the term due to frequency offset, and then do a two-dimensional search over the angles and over the possible frequency offset values. We'll see that this works, but it's not as stable as the stop-and-start approach. The next big problem is: how do I know the position of my receiver every time I receive a packet? For this, we're going to use an IMU, an inertial measurement unit. It's something you have in every cell phone; MEMS IMUs cost just a few dollars now. And what do IMUs contain? They contain accelerometers along all three axes and gyroscopes along all three axes.
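As a sketch of that two-dimensional search, here is a plain correlation scan over (angle, offset) pairs rather than full MUSIC, run on synthetic data. The semicircular track mimics the turntable experiment described later; the 60-degree source, the 50 Hz offset, and all the geometry are invented for illustration.

```python
import numpy as np

def steering(theta, delta_f, xs, ys, ts, wavelength=0.3):
    """Virtual-array steering vector augmented with a frequency-offset
    phase term: one entry per received packet."""
    k = 2 * np.pi / wavelength
    phase = k * (xs * np.cos(theta) + ys * np.sin(theta)) + 2 * np.pi * delta_f * ts
    return np.exp(1j * phase)

# Synthetic packets from a source at 60 degrees with a 50 Hz offset,
# received along a semicircular track (a curved path breaks the
# angle/offset ambiguity a straight constant-speed path would have).
ts = np.arange(32) * 667e-6
phi = np.linspace(np.pi, 0, 32)
xs, ys = 0.25 * np.cos(phi), 0.25 * np.sin(phi)
x = steering(np.deg2rad(60), 50.0, xs, ys, ts)

# Two-dimensional scan: correlate the data against candidate pairs.
angles = np.deg2rad(np.arange(0, 181))
offsets = np.arange(-100.0, 101.0, 5.0)
power = np.array([[abs(np.vdot(steering(a, f, xs, ys, ts), x))
                   for f in offsets] for a in angles])
ia, io = np.unravel_index(np.argmax(power), power.shape)
# The peak lands at the true (angle, offset) pair.
```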
Now, you can use these to do dead-reckoning navigation. If anyone here is from the control community, or if you've looked into this before, you know that this doesn't work very well. The good thing is that in our case, we only need it to work for a short amount of time: we only need our position over a few meters, over a few seconds, so the IMU solution doesn't have much time to drift off. One question we get a lot is: why don't you use some very fancy GPS for this? Well, the reason is that we need an accuracy that's typically on the order of a fraction of a wavelength, and GPS is not going to give you that kind of accuracy. The second reason is that if your antenna radiation pattern is not isotropic, you also need to know the orientation of your receiver. Unless you're working with a linear antenna and staying in the x-y plane, if you have something like a patch antenna in your phone, then you do need to know the orientation so you can take it into account in your angle-of-arrival estimation algorithms. Now I'm going to explain the challenges of IMU processing a bit. This is a bit outside of the wireless community, but it's actually quite fun, and it's interesting to know what limits we can reach with IMUs when we work with wireless, because when you look at localization and tracking, people have all sorts of crazy ideas about what they could achieve with an IMU, and the performance is actually very disappointing. An IMU basically works as follows. You have this IMU attached to your vehicle (in our case, the vehicle is just the phone we're moving around), and it gives you the acceleration and the angular speeds along each axis in the reference frame of the IMU. If you turn your IMU around, you're turning your reference frame around, so you need to somehow convert this to a navigation frame, which is the absolute frame you're working in.
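That body-to-navigation-frame conversion is just a rotation. A minimal sketch, assuming a standard ZYX (yaw-pitch-roll) Euler convention, which the talk doesn't specify:

```python
import numpy as np

def body_to_nav(yaw, pitch, roll):
    """Rotation matrix taking body-frame (IMU) vectors to the navigation
    frame, using the ZYX / yaw-pitch-roll convention (angles in radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# Sanity check: a body-frame x measurement, with the IMU yawed 90 degrees,
# maps onto the navigation y axis.
a_body = np.array([1.0, 0.0, 0.0])
a_nav = body_to_nav(np.deg2rad(90), 0.0, 0.0) @ a_body
```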
One of the big problems of IMUs is that the accelerometers also pick up the gravitational field of the earth. One of your IMU axes is going to pick up an acceleration of 9.8 meters per second squared. If you're perfectly flat, it's going to be the z-axis, but as soon as you tilt your IMU, this gravitation vector ends up spread along multiple axes. And why is this a problem? Well, the whole IMU processing chain is as follows. You have your gyroscope signals, which measure angular speeds; you integrate them once and you get your orientation. Then you use this orientation to project your acceleration signals from the body frame (the frame of the IMU) to the navigation frame, the absolute frame, and then you need to remove the gravitation vector, so that you can measure the accelerations that are only due to the movement. Now, 9.8 meters per second squared is very large compared to typical movements: if I'm moving something like this, I'm going to see something like 0.5 meters per second squared. So removing it is a big challenge, and to remove it correctly, you need to know the orientation quite accurately. That's a huge challenge in this kind of processing, because any error you have on the orientation gets double-integrated in the end, causing huge position errors that grow over time. One problem you typically have in IMU estimation is: how do I determine my initial orientation? I could put my IMU on the table here and say, okay, I'm perfectly flat, I'm at zero, zero, zero degrees in yaw, pitch, and roll. That's not going to work very well, because my table is never perfectly flat. As an order of magnitude, a 0.1-degree error results in about one meter of position error after five seconds, and this error grows with the square of time, so it grows really rapidly. So obviously you need to go through some calibration procedure.
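The chain just described (integrate the gyro to get orientation, rotate the accelerations into the navigation frame, remove gravity, then double-integrate) can be sketched as a yaw-only toy model. The trajectory, time step, and forward acceleration here are my own illustrative values, not the talk's data.

```python
import numpy as np

G = np.array([0.0, 0.0, 9.81])                   # gravity in the navigation frame

def dead_reckon(gyro_z, acc_body, dt):
    """Minimal planar dead-reckoning chain: integrate the z gyro to get
    heading, rotate body accelerations to the nav frame, remove gravity,
    then double-integrate to position (yaw-only toy model)."""
    yaw = np.cumsum(gyro_z) * dt                 # gyro -> orientation
    pos, vel = np.zeros(3), np.zeros(3)
    track = []
    for psi, a_b in zip(yaw, acc_body):
        c, s = np.cos(psi), np.sin(psi)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        a_n = R @ a_b - G                        # rotate, then remove gravity
        vel = vel + a_n * dt                     # integrate acceleration
        pos = pos + vel * dt                     # integrate velocity
        track.append(pos)
    return np.array(track)

# Constant 1 m/s^2 forward push, no rotation, gravity showing up on body z:
# after 2 s the position should be close to 0.5 * a * t^2 = 2 m.
dt, n = 0.01, 200
acc = np.tile([1.0, 0.0, 9.81], (n, 1))
track = dead_reckon(np.zeros(n), acc, dt)
```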
If you have a very good IMU, you can do this calibration once every six months; if you have a very bad one, you need to do it pretty much every time you turn it on. There are also some tricks from the control community to increase the stability of these IMU measurements; I'm not going to go into details here. So let's go to the implementation, which is really the more interesting part. We put this on a USRP software-defined radio. The transmitter just sends the 3G primary synchronization sequence. This is a sequence which runs at about 1.8 megahertz and is very periodic: the same sequence is sent every 667 microseconds. The transmitter just sends this through UHD; we're using GNU Radio and a USRP sink, repeating the same packet over and over again. The receiver oversamples by a factor of two, and we do part of the receiver processing in the USRP FPGA: we do the correlation in the FPGA to offload some of the heavier processing. We could have just saved everything in a batch and processed it offline, but this is much more fun to do. Plus, we also need to actually read the data from an IMU. This is a rather high-end IMU, something that's used in more expensive vehicles; it's on the order of a few hundred euros, if I remember correctly. So we have a thread in the radio that reads data in parallel from the IMU and from the USRP, combines them in a meaningful way, and dumps everything to an output file, which we can process offline. The IMU delivers data at a fairly low rate compared to our radio, and that's it. So how do we test whether all this theory actually works? We go to an anechoic chamber to do some very clean, controlled experiments. First, we don't have any multipath, which is always nice when you want to try something out for the first time, and especially because we have the turntable of the anechoic chamber.
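What the correlation block offloaded to the FPGA does can be sketched in a few lines of numpy. This is an illustrative host-side model, not the actual FPGA code, and the random sequence here is just a stand-in for the 3G primary synchronization sequence.

```python
import numpy as np

def correlate_known(rx, seq):
    """Sliding (matched-filter) correlation of the received stream against
    the known synchronization sequence; the peak marks the packet start.
    numpy's correlate conjugates its second argument, as a matched filter
    requires."""
    corr = np.abs(np.correlate(rx, seq, mode="valid"))
    return corr, int(np.argmax(corr))

rng = np.random.default_rng(1)
seq = np.exp(1j * 2 * np.pi * rng.random(64))        # stand-in for the 3G PSS
rx = 0.05 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
rx[300:364] += seq * np.exp(1j * 0.7)                # bury one packet at offset 300
corr, start = correlate_known(rx, seq)
```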
You know that anechoic chambers usually have a turntable to turn antennas around in all directions, so you can measure the radiation pattern. Well, we're going to use the turntable to generate a movement that we know: we're going to turn in a semicircle here, and we know how far from the center of the turntable we are. So we have a very controlled movement, and we can at least see how our IMU processing does with respect to it. These are just a couple of pictures from the experiment, nothing exciting. You have our USRP and antenna here, and this little orange box is the IMU that I placed there. Now, I said at some point that you need to estimate the initial orientation of your IMU; here you see why this is important. I cannot guarantee that this is perfectly vertical, so you do need to estimate its initial orientation. I'm not going to speak about it, but there are quite a few papers in the literature that deal with how you can do this using the gravitation vector. So what kind of data do we get? This is just the IMU processing at first. These are the raw accelerometer signals that you measure. As you can see, along the z-axis you have something that's very close to 10 meters per second squared, and this down here is the actual acceleration that we're trying to use to determine the movement. So you can see the order-of-magnitude difference between the term you want to cancel out and what you're actually using to determine your movement. The gyroscope signals are typically a bit cleaner. Here you can clearly see that we have a rotation mainly around the z-axis, but there is something happening around the y-axis as well; this is because we're not perfectly vertical. When you run this whole processing through an extended Kalman filter, you can eventually get the orientation.
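The initial-orientation trick from those papers boils down to reading the tilt off the static gravity vector: with no motion, the accelerometer measures only gravity, and its direction in the body frame fixes roll and pitch (yaw stays unobservable). A sketch, assuming one common axis and sign convention; conventions differ between IMUs.

```python
import numpy as np

def tilt_from_gravity(acc):
    """Initial roll and pitch (radians) from a single static accelerometer
    sample, using the direction of the measured gravity vector."""
    ax, ay, az = acc
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

# Perfectly level IMU: gravity on +z only, so zero roll and pitch.
roll, pitch = tilt_from_gravity([0.0, 0.0, 9.81])
# IMU rolled by 0.1 rad: gravity spreads onto the y axis accordingly.
roll2, _ = tilt_from_gravity([0.0, 9.81 * np.sin(0.1), 9.81 * np.cos(0.1)])
```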
You can see here that we're turning from roughly 180 degrees to zero degrees, which corresponds to the movement of the turntable. These are the speeds along the three axes of the IMU; you can see that most of the speed is along the x-axis, which should be the case given how we placed our IMU. And this is what we get as the final result of one of the experiment runs. We start at zero, and you can see that towards the end we don't quite reach the 180 degrees. At the end of the movement, we typically have an error of around 10 centimeters. This is still good enough for what we're trying to do, but you cannot make the movement any longer: this movement takes about five seconds, and that is about the limit we can reach with this particular IMU. So let's go back to the radio signal. What do we have here? This is the phase of each received packet over the whole duration of the experiment. We're standing still for 30 seconds, and then we're moving around here somewhere. If I zoom in, you can clearly see the effect of the frequency offset: every packet has a certain phase shift compared to the previous one. You use the standstill period to estimate this frequency offset, then cancel it out, and then you get this figure. This is the phase of every received packet; we have a packet every 667 microseconds, which is why this looks fairly continuous. And you can see two things here. Here you see something which looks a bit noisy; this is just random-walk noise, actually phase noise. You're standing still, so the only things you're seeing are frequency offset and phase noise; there's just a little bit of drift, but not too much. And then you can see here that you have a very clear pattern, and if I were to show you all the runs of the experiment, you would see the same pattern over and over again.
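The estimate-and-cancel step on the standstill phase can be sketched as a linear fit to the unwrapped phase. The 667-microsecond packet spacing is from the talk; the 40 Hz offset and the noise level are made up for illustration.

```python
import numpy as np

def estimate_offset(phases, ts):
    """Least-squares slope of the unwrapped standstill phase gives the
    residual frequency offset in Hz."""
    slope = np.polyfit(ts, np.unwrap(phases), 1)[0]   # rad/s
    return slope / (2 * np.pi)

# Packets every 667 us with a 40 Hz offset plus mild phase noise.
rng = np.random.default_rng(2)
ts = np.arange(200) * 667e-6
phases = 2 * np.pi * 40.0 * ts + 0.02 * rng.standard_normal(200)
f_hat = estimate_offset(phases, ts)
compensated = phases - 2 * np.pi * f_hat * ts         # rotate the offset out
```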
Well, that's because you're always making the same movement, so obviously you expect the phase to always follow the same pattern. This is the IMU processing I already showed on the previous slide. You then combine the phase of the different signals with the position at each packet reception and feed this into an angle-of-arrival estimation algorithm; in this case, we use MUSIC. You get the MUSIC spectrum, and you can see a clear peak here at 94 degrees, which corresponds to the setup we have in the anechoic chamber: the real angle is at 90 degrees, so we have a few degrees of error. If you repeat this experiment a couple of times, you get the root-mean-square error, which is given here for different radii: we put the IMU and the radio at different distances from the center of the turntable. You can see that when we have a bigger movement, we have a smaller error. This is actually quite interesting, because it agrees with MIMO theory: with a larger array, as long as you're not undersampling spatially, you have better resolution, and we find the same result here. So this is for the stop-and-start approach. I also told you about the joint approach, where we estimate the angle and the frequency offset jointly. This is your MUSIC spectrum if you do the search over the two dimensions: this axis is frequency offset and this one is angle. You can see something that's fairly typical of this kind of MUSIC spectrum: you have a main lobe here, a main peak, and then a bunch of sidelobes. In our case, if I take a horizontal cut here, you can see a nice main peak at 96 degrees, so you still have some error. It's not a perfect technique, but it has the advantage that you don't have to stop before your movement.
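A minimal single-source MUSIC sketch over a virtual array (one complex sample per received packet, antenna positions taken from the estimated track, frequency offset assumed already compensated). The semicircular geometry echoes the turntable run, but the radius, wavelength, and noise level are invented; this is a sketch, not the talk's implementation.

```python
import numpy as np

def music_spectrum(x, xs, ys, angles, wavelength=0.3):
    """Single-source MUSIC pseudospectrum over arbitrary virtual-antenna
    positions (xs, ys), one packet sample per position."""
    N = len(x)
    R = np.outer(x, x.conj())                 # sample covariance (one snapshot)
    w, V = np.linalg.eigh(R)                  # eigh: ascending eigenvalues
    En = V[:, :N - 1]                         # noise subspace (smallest N-1)
    k = 2 * np.pi / wavelength
    P = []
    for th in angles:
        a = np.exp(1j * k * (xs * np.cos(th) + ys * np.sin(th)))
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(P)

# Semicircular track (like the turntable run), source at 90 degrees.
rng = np.random.default_rng(3)
phi = np.linspace(np.pi, 0, 24)
xs, ys = 0.25 * np.cos(phi), 0.25 * np.sin(phi)
k = 2 * np.pi / 0.3
x = np.exp(1j * k * (xs * np.cos(np.pi / 2) + ys * np.sin(np.pi / 2)))
x = x + 0.01 * (rng.standard_normal(24) + 1j * rng.standard_normal(24))

angles = np.deg2rad(np.linspace(0, 180, 361))
P = music_spectrum(x, xs, ys, angles)
theta_hat = np.rad2deg(angles[np.argmax(P)])  # peak near 90 degrees
```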
All right, so if you look at all the experiment runs, you can see that for smaller movements the error is quite disastrous, very large compared to conventional techniques. But for bigger movements, we get close to what we have with the stop-and-start approach. So there is a trade-off here: this is much more flexible, but it's less precise, and there is still room for improvement. Most of the error is due to the IMU estimation: the IMU drifts off, and if you think your antenna is somewhere it actually isn't, that affects your MUSIC algorithm by quite a bit. One way to deal with this would be to lower the weight of the later measurements during the movement. The final step: we are currently in the process of porting this to the E310 USRP, the one that Martin showed a bit earlier, the small one with the embedded processor. Why do we want to do this? Well, first of all, it's a fun platform, and I needed an excuse to do something on it. It has an embedded IMU, which is of much lower quality than what we used in the previous experiment, so this would let us see what we can achieve with something that's typically in your phone. Okay, it's not as bad as what you have in your phone, but it's not that far off. It also lets us use a worse-quality oscillator. And eventually, we want to put this on the quadrotor that one of my students is working on, so we have the whole SDR controlling both the quadrotor and the radio aspect. So where do we stand? The green parts are what has been done; the red part is what's missing. Part of the processing is done in the FPGA again. This took some work: I wasn't using RFNoC, so that's my fault, and it took some time to get up to speed with the third-generation USRP FPGA.
Everything works when we use it in a simulator, even when we feed it with actual measured USRP signals, but when synthesizing the whole thing, it runs out of slices. So we have to find some way to optimize this or offload some of the processing. I'm going to show some results that we have on the IMU processing, because that's probably the most interesting thing I can show here, and I'm going to skip a few slides that are really not that interesting. We did a controlled experiment where we put the E310 on an XY positioner and moved it; this is a very precise positioner, so we know exactly how much we moved it, and we can see how our IMU processing compares to the real movement. Now, one thing I need to mention, because the results are going to look a bit scary at first, is that the error of the IMU only depends on the time you spend dead reckoning; it doesn't depend on the actual movement. Keep that in mind, because the positioner is actually moving fairly slowly: only about 10 centimeters over five seconds, something like that. And this is the error we get. The black line here is the real movement that we generate, and the blue lines are the IMU-estimated trajectories. You can see that for a 10-centimeter movement, we have around five centimeters of error at the end of the run. But it's not 10 centimeters versus 5 that you should think of: it's five centimeters of error over a four-second run. That's the way you should see it, because over four seconds I can move myself much more than just 10 centimeters. When we go for longer runs, where the movement lasts, say, 10 seconds, you can see that the error increases quite dramatically; we're almost at half a meter of error here. This is why people don't use IMUs when they actually do dead-reckoning navigation. You can also see that the error is always going in the same direction.
That's because there is some leftover bias after the calibration procedure. We did the calibration the day before the experiment, and on this type of IMU, the stability is really not that good, so you need to repeat the calibration before every run. All right, so this brings me to the end of my talk. This was some high-level idea of what you can do with software-defined radio when you want to try really new things. Here we're really trying to do angle-of-arrival estimation with a single antenna, which is something quite new and quite unexpected; a lot of people just say it's not possible. Well, it's actually possible if you do some sensor fusion. There's still a bunch of things that we need to do. A little comment: I usually don't publish my code, because I'm a pretty poor software programmer, but if you send me an email, I will send it to you. I just don't want people to take it and think it's plug-and-play, because it's usually not; there is still some tinkering that you need to do. So if you're interested, please get in touch with me and I will send you the code. That's it. Thank you. So, one question: does the platform you worked on have a magnetometer? Maybe it could help with the orientation. Yeah, so the question is whether there is a magnetometer on the IMU. Yes, there is. I've talked a lot with people from the control side about these things, and they all advise against using magnetometers, because they're very sensitive to interference: if you have a computer running next to it, this is going to affect your magnetometer reading, and that might be critical in the long run. So in general, people don't use magnetometers that much. And no, this is based on a MEMS IMU; the question was whether we use laser gyros, which are the much more precise gyroscopes used in planes.
So the idea here is really to go with low-cost elements; the long-run idea is to say, okay, I can take my cell phone, do this, and have angle-of-arrival estimation. That would be really cool. That's also one of the reasons to go to the E310: it's small enough that you can do this at a conference and get away with it. One more question, and then we'll start switching. So, no, we haven't looked at GPS in this case; it's something that's on the long-term to-do list, let's say. There are certainly ways you could use GPS data, whether it's just the GPS position or the metadata from the satellites, but we haven't done that yet. All right, we should probably start switching to the next presenter. Thanks, I actually enjoyed the IMU stuff. Thank you. It's been a fun day.