Yeah, it's working. Okay, so hi, I'm Arnaud Taffanel from Bitcraze, and I will be talking about the local positioning system we've been developing for the last year and a half. Bitcraze is a company we created to support the Crazyflie quadcopter. That's a project we did as a hobby originally and that we finally started selling, so we needed a company to sell it. From the origin we did it as an open-source quadcopter, and when we finally could work full time for the company we made the Crazyflie 2.0, which was fully designed as a flying development platform. We did that by putting in a CPU that is way too powerful just to fly, and we made an expansion port, which we call the deck expansion port, since apparently you really need to give a name to your expansion port; ours are decks. This was intended to allow people to add their own software, add their own hardware, and use it as a flying development platform to do more with. And since we started making quadcopters we have always wanted to make them fly autonomously. It's quite reachable to make a quadcopter that you fly manually, but it's not that easy to make it fly automatically, because the only sensors it has are a gyroscope and accelerometer. So it will basically be able to get its orientation, and with the orientation, and the fact that gravity pulls down and the thrust pushes up, we can move around. But if you ask it to stay stable, it won't; it will just drift away eventually. So to fly autonomously we've tried many things, many hacks. Very early on we tried an up-facing camera, with OpenCV detecting the copter against the white ceiling: the size of the copter gives us the distance, and X, Y we get from its position in the picture. This didn't work really well, because we did not have good enough stabilization software to actually stay stable above the camera.
So it tended to leave the camera's field of view very quickly, which was actually very frustrating to work with. We had a couple of tries with Kinect cameras. Those are really nice because they are 3D cameras: they give you X, Y from the picture, and they also tell you how far away the object is. We had one generation with the Kinect 1 and one with the Kinect 2 that actually worked very well. We used that at Maker Faires and we could fly for a full battery charge, but we were still limited to the field of view of the camera, and our software was still not so great, so it was only fairly stable if you got some wind or that kind of thing. We also had a try with a simple webcam looking at an AR marker. That's the thing that lets you put augmented-reality objects on top of your image. The libraries that do that are really great: they give you the position X, Y, Z and the orientation of the marker. This worked straight away, until there is sun. We had that at Maker Faires: morning, no problem; afternoon, the sun comes into the picture and nothing is detected. So all of that was pretty brittle, and our main problem was that since we didn't have a proper positioning system, we didn't have any incentive to develop proper autonomous-flight algorithms, and since we didn't have proper autonomous-flight algorithms, none of these worked really great. That kind of changed one and a half years ago when we discovered this chip, the Decawave DW1000. It's an off-the-shelf ultra-wideband radio. It is advertised as a radio chip that can range: you can get the distance between two radio chips using time-of-flight measurement. And that was great for us, because this can be the base of a local positioning system. It's also implementing a standard, so even though as far as we know it's the only chip we can source for this standard, there is hope that other manufacturers will make chips, and then we can be compatible with more hardware and can choose our chips later.
So what do we call a local positioning system? It's very simple: it's basically like GPS, but local. GPS will give you an absolute position anywhere in the world, but it works really badly indoors, and that's where a system that gives you an absolute position indoors can be very interesting. What we basically want is that if we have our drone here, and our origin is there, we want to know that we are at one meter, one meter, one meter in the room. That's what we are trying to reach. As for reasons you would want that: one common use case that has been developed a lot lately is indoor navigation, but for people. Imagine you have a positioning system in a shopping mall or an airport; you could make an app that helps people navigate and reach their gate more easily. Asset tracking is a pretty big thing too: tracking assets in warehouses, whether expensive equipment or just packages. But what we are interested in at Bitcraze is robotics, so we are focusing on a system for flying robotics, or any kind of robotics actually. There are systems that exist already; there have been people using our quadcopter to do swarm research, autonomous flight and that kind of thing. The most used is motion-capture systems. That's the same system they use in movies for capturing the motion of an actor: you put reflectors on the joints and you can find out how the skeleton is moving. Instead of that, we put the reflectors, these little balls here, on the platform, on the quadcopter, and that works really, really well. You get millimeter precision even in a room as big as this one. But you need dozens of cameras for that and it costs a lot of money, so much that universities usually share the system: they buy one or two and share them between labs. So it's very impractical to do research with such a limited resource.
There are optical-flow platforms as well; the idea there is that instead of putting cameras around the room, you put the camera on the quadcopter to find out how the world is moving around you. And there are a couple of radio systems. We like radio systems because they are less dependent on the lighting environment. The most used, or the most developed, are received-signal-strength systems. That's what you have with these Bluetooth Low Energy key fobs you can buy to find your keys with your mobile phone. But basically all you can get out of that is near or far; you will not get a very accurate position. I know that Cisco has a system to locate computers if you have a lot of Wi-Fi routers, but it's the same thing: you get which room, and roughly where in the room, the computer is. That's not something very useful for flying robotics; not precise enough. Another class of systems is angle detection. Either you detect the angle of arrival, there was a talk about that yesterday in the SDR room, it's pretty nice, complicated, or you can look at where you are with regard to the antenna that transmits. That's used a lot by planes: basically you have a rotating antenna and you find out where you are relative to that antenna. So those could be used to locate yourself by knowing the angles. And finally there are time-of-flight solutions, which is what we've been using. There you just measure the time it takes for the radio signal to go from the transmitter to the receiver, and knowing the time you know the distance, because you know the speed of light, and that allows you to get your position. This requires a pretty wide bandwidth, though, to work properly with multipath. So, what is ultra-wideband? In more conventional radio, what you're sending is a narrowband signal: you're basically sending a sine wave and modulating some data onto it.
What you get is a pretty narrow band in the frequency domain, with fairly high power. What we do in ultra-wideband instead is that, rather than modulating data onto a sine wave, we send pulses, very short pulses. This results in a very wide bandwidth, and then we can keep the power low: the total amount of power might be similar, but here we have 20 dBm and here we have minus 41 dBm per megahertz. So we have really low power. These radios actually work below what a normal radio considers noise; they are legal because they work where you're allowed to transmit noise when you have a conventional, narrowband radio. That can be very interesting for timing the arrival of the packet, because you have a short pulse to detect. With narrowband you could also detect when the sine wave crosses zero, you could detect the phase, and you could get fairly accurate timing that way as well, but it behaves really badly with multipath. Multipath is this: when a radio signal travels from transmitter to receiver, there are a lot of reflective surfaces all around. Windows, walls; a lot of things are reflective to radio. So at the receiver you get what we call the direct path, which is the fastest way between the two antennas, but there are also secondary paths, the signal that bounced around. These arrive later, so they are delayed a little bit. In the case of narrowband, if we simplify by saying it's a sine wave, you get a sum of sine waves of the same frequency, which gives you a sine wave of the same frequency; the only problem is that the amplitude is modified and the phase is modified. So if you want to detect the phase to get the timing, you're out of luck. With ultra-wideband, since you send very short pulses, if you're lucky enough that your multipaths are not too close, you will basically receive the paths separately.
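To make the narrowband problem concrete, here is a small sketch, not from the talk, of the trigonometric identity behind it: a carrier plus a delayed copy of itself is still a sine at the same frequency, only with a different amplitude and phase. The carrier frequency, delay and amplitudes are invented for illustration.

```python
import math

f = 2.4e9                        # hypothetical carrier frequency, Hz
delay = 3.0e-9                   # extra travel time of the reflection
phi = 2 * math.pi * f * delay    # phase lag of the reflected path

a, b = 1.0, 0.6                  # direct and reflected amplitudes
# a*sin(wt) + b*sin(wt + phi) == R*sin(wt + theta)
R = math.sqrt(a * a + b * b + 2 * a * b * math.cos(phi))
theta = math.atan2(b * math.sin(phi), a + b * math.cos(phi))
# The receiver sees phase theta instead of 0, so its zero-crossing
# timing is off by theta / (2*pi*f) seconds -- and it cannot tell.
```

Since the sum has the same frequency, nothing in the received waveform reveals that the phase has been corrupted, which is why phase-based timing fails under multipath.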
So you will be able to see the first path, and you will also be able to see the secondary paths. There is a pretty cool side effect, actually, that's not related to timing. A normal radio is affected negatively by multipath; that's usually a problem you try to solve. Wi-Fi routers, for example, have multiple antennas to deal with it, among other things. In ultra-wideband, the radio is able to use the energy of the other paths. So ultra-wideband works better in a corridor or a closed environment for transmitting packets, because it can use the energy of all the signals that bounced around, not only the one that went directly to the antenna. The way it's transmitted is similar to a conventional radio packet: you have a preamble. When you have a radio system, you usually start with a preamble, a sequence that repeats itself, usually 101010, that allows the receiver to synchronize with the transmitter before getting the data. The preamble in ultra-wideband is a bit more complex; it's a longer, pseudo-random sequence, and it's very, very long. It allows the receiver to do a cross-correlation between what it sees in the air and the expected preamble, and out of that you get what we call the impulse response of the channel: you get the first path and the delayed paths, you get how much energy of the signal you receive over time. Then you can use that to synchronize yourself and to know how much energy to expect at which time to decode the data part. And while in a normal radio, for example the two-megabit-per-second radio we normally use to communicate with the Crazyflie, the preamble is one byte and the packet can be 32 bytes, here the preamble is like 150 microseconds when you have 10 microseconds of data. So it's really disproportionate. And just to point out, this is a normal radio. It's not a special thing; it transmits data. It's based on IEEE 802.15.4.
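The cross-correlation idea can be sketched with NumPy. The preamble sequence, path delays and amplitudes below are invented for illustration; this is not the real IEEE 802.15.4 UWB preamble, just the principle of recovering a channel impulse response with one peak per propagation path.

```python
import numpy as np

rng = np.random.default_rng(1)
preamble = rng.choice([-1.0, 1.0], size=127)   # pseudo-random preamble

rx = np.zeros(300)
rx[40:40 + 127] += 1.0 * preamble      # direct path, 40 samples in
rx[55:55 + 127] += 0.5 * preamble      # weaker reflection, 15 samples later
rx += 0.2 * rng.standard_normal(rx.size)

# Cross-correlating the received samples with the known preamble gives
# an estimate of the channel impulse response; the earliest strong peak
# marks the direct path used for timestamping.
cir = np.correlate(rx, preamble, mode="valid")
first_path = int(np.argmax(cir[:50]))
print(first_path)  # 40
```

Because the preamble is long and pseudo-random, the correlation peak at the true delay towers over the noise floor, which is what makes the packet timestamp so sharp.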
So it has a MAC header and you can just put data in. It's just that, as a side effect, you can measure very exactly when the packet leaves the antenna of the transmitter and when it arrives at the antenna of the receiver. Internally it looks a little bit like this; we took this from the debug registers of the radio. That's over time, and each line here is the amount of energy that was received for one packet, over time, for multiple packets. We captured this to see the effect of occlusion, non-line-of-sight, and here we had someone pass in the way. This is one of the drawbacks of this radio: if something passes in the way, it will actually affect the timing, and it doesn't work so well in non-line-of-sight situations. You will actually get an offset, because it kind of blurs. You can see that here there's a very sharp peak to time the arrival of the packet; we know exactly when it arrived. Here it's a bit more blurry, and the algorithm that searches for this peak tends to get confused. So to wrap it up: we have a radio that is able to give us very precise timing of when packets leave and arrive. It has a 64 GHz timer, so that's about five millimeters of precision in timing, though it's specified for plus or minus 10 centimeters of precision when you do ranging with it. That's because your timing is affected by non-line-of-sight, by temperature, by receive power; there are a lot of things that can affect your precision. And it's very robust to multipath, which is very important if you want accurate timing of the packet. So now that we have a radio capable of measuring timestamps, we need to use it in such a way that we can build a local positioning system with it. A common architecture is to have anchors and tags. What we call anchors are fixed radios that you place around the room and whose positions you measure. We have a couple here, and just this morning we measured their positions and entered them into the system.
We know where they are in space. And the tag is a radio that is going to move around and that wants to know its position, or whose position we want to know. Now that we have this architecture and this timing capability, we have a couple of ways to do the ranging. The simplest is called two-way ranging: we just ping. The tag sends a packet to the anchor, the anchor answers; we subtract the answer time and we get two times the time of flight, divide by two, and we have the time of flight. The only problem is that the tag and the anchor run different clocks. They are not synchronized; they drift with regard to each other. So if the answer time is big, or even if it's small actually, you get a very big error very, very quickly: the error induced by the answer time dominates the measurement. The way we deal with that is to just add a third packet. Intuitively, we do the two-way ranging one way, then ping the other way, and the error is symmetrical, so we can get our time-of-flight measurement in a virtual clock that drifts between the anchor and the tag. Now we have one last problem: in our application we want the location in the copter, so we want all the information in the copter to compute the location, and here the last timestamp is in the anchor. So we just add a fourth packet: the current implementation transmits all these three timestamps in the fourth packet. And so, out of the six timestamps, the copter can compute the time of flight to the anchor, and then we know the distance to that single anchor.
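One common way to combine the timestamps of such a symmetric double-sided exchange is the formula below. This is a generic textbook sketch, not necessarily the exact computation in the Crazyflie firmware; it shows how the two round-trip/reply pairs cancel the first-order error from clock drift and from the large reply delays.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def twr_time_of_flight(t_round1, t_reply1, t_round2, t_reply2):
    """Time of flight from a double-sided two-way ranging exchange.

    t_round1: tag time between sending its ping and getting the answer
    t_reply1: anchor processing delay before that answer
    t_round2, t_reply2: the same quantities for the reverse exchange
    (all durations in the same unit, e.g. radio timer ticks)."""
    return (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2)

# Example: a true time of flight of 10 ns (about 3 m), with
# millisecond-scale reply delays that would swamp a naive
# single-sided measurement.
tof = 10e-9
t_reply1, t_reply2 = 2e-3, 3e-3
est = twr_time_of_flight(t_reply1 + 2 * tof, t_reply1,
                         t_reply2 + 2 * tof, t_reply2)
print(est * SPEED_OF_LIGHT)  # about 3 metres
```

With ideal timestamps the formula recovers the time of flight exactly, even though the reply delays are five orders of magnitude larger than the quantity being measured.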
And it's bidirectional communication: the copter can choose the rate and can choose which anchors to range with, which is nice because you can imagine optimizations where you have a system in this room and a system in the room next door, and you choose which anchors to range with when you leave one room and go to the other. So when we have distances, each range places us on a sphere around an anchor. We range with multiple anchors, there are multiple spheres, and we are at their intersection. That's how we calculate our position. This behaves quite nicely. It works well even if we leave the vicinity of the anchors: even though here we have a setup in some kind of cube, we can go away from it and it's still going to work okay. There is a drawback, though: if you want to do swarm research, for example, if you want to fly a lot of copters, each tag has to communicate with the anchors, which means that the tags have to synchronize with each other to not transmit at the same time. You have to share the air. So that doesn't scale really well when you add new tags. We've been able to run four, I think; we could certainly run ten with some optimizations, but we cannot run 50 or 100 copters with this system. It's just not scaling. So one solution is to instead look at time difference of arrival. That's very similar to how GPS works, and it's why GPS scales to any number of receivers: you only listen. If we imagine that the anchors could send their signals at exactly the same time, we could just listen to when the packets arrive, and the difference in time of arrival gives us a difference of distance to the anchors: we would know that we are, say, 30 centimeters closer to anchor one than to anchor two.
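The sphere-intersection step can be written as a small linear least-squares problem: subtracting the first anchor's sphere equation from the others removes the quadratic term, leaving a linear system in the unknown position. This is a generic sketch with a made-up anchor layout, not the solver used on board.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position from ranges to known anchor positions."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    p0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - p0)                       # one row per anchor
    b = (d0 ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - p0 @ p0)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical setup: five anchors around a 4 m space.
anchors = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4), (4, 4, 2)]
true_pos = np.array([1.0, 1.0, 1.0])
ranges = [np.linalg.norm(true_pos - a) for a in np.asarray(anchors, float)]
print(trilaterate(anchors, ranges))  # close to [1, 1, 1]
```

With more anchors than strictly needed, the least-squares solve also averages out some of the ranging noise.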
Now, it is not possible to send all the packets at the same time, that's not going to work, so we just define time slots. Each anchor sends in its time slot, and we subtract the slot offset when we compute the time difference of arrival. But now we have a problem, because these anchors are independent radios with independent clocks. They need to be synchronized together, so that we can align the time slots and know how long it took between two packets being transmitted, to measure the difference in time of arrival accurately. Luckily, we can do that with two-way ranging. The idea is that these packets are broadcast; everyone receives them, so even the anchors receive them. And we recognize here the double ping we saw before, which lets us measure the time of flight between two anchors. If we have the time of flight between anchors one and two, and we know when anchor one received a packet from anchor two, we can calculate what time it was in anchor one's clock when anchor two's packet was sent. This allows us to synchronize the whole system and to measure the time difference of arrival between all the anchors. Once we have a time difference of arrival, we know how much closer we are to one anchor than to another, and that places us on a hyperbola between the two anchors, or a hyperboloid, as I think it's called in 3D. This works nicely when you are within the anchor space, what's called the convex hull formed by the anchors, the space that is enclosed by the anchor system. There you get performance similar to two-way ranging. The problem is that it degrades very, very quickly when you get away: you can see here that if you were over there, the intersection of the two hyperbolas is very shallow, so a little noise means a very big uncertainty in your position.
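The synchronization step could look roughly like this. It's a simplified sketch that assumes each clock is the true time plus a constant offset (no drift), which is far from the real firmware, but it shows how an inter-anchor time of flight turns remote timestamps into a common timebase, and how the tag then converts two receive timestamps into a distance difference.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def clock_offset(rx_at_anchor1, tx_at_anchor2, tof):
    """Offset of anchor 1's clock relative to anchor 2's.

    In anchor 1's clock the packet left anchor 2 at rx - tof; anchor 2
    stamped that same instant as tx_at_anchor2. tof comes from the
    inter-anchor two-way ranging described above."""
    return (rx_at_anchor1 - tof) - tx_at_anchor2

def tdoa_metres(rx1_at_tag, rx2_at_tag, offset_1_vs_2, slot_delay):
    """Distance difference (d2 - d1) to anchors 1 and 2, from the tag's
    two receive timestamps, the anchors' clock offset and the known
    transmit-slot spacing. The tag's own clock offset cancels in the
    subtraction, which is why the tag can stay purely passive."""
    dt = (rx2_at_tag - rx1_at_tag) - slot_delay - offset_1_vs_2
    return dt * SPEED_OF_LIGHT
```

The key point is the last one: everything the tag needs is either broadcast or local, so any number of tags can position themselves at once.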
So this scales really well: once you have managed to position one copter with it, you can position 100 copters. It doesn't matter, because the copter only listens to the information that is broadcast in the air and computes its own position. But you have more constraints on how you can use it: you really have to rig the room, you have to put anchors all around your space. So the system we built out of this radio, what we call the local positioning system, is based on the DW1000. We intend it as an open-source local positioning system for robotics, any kind of robotics. Since we're making a flying platform, we start with that; our current focus is to make it work as well as possible with the Crazyflie. But we don't forget the rest of robotics, and we intend to make it useful for anyone who wants to locate something in real time. We've made two pieces of hardware for this system. One is what we call the node; that's the anchor we have here. The reason we don't call it an anchor is that it's basically an ultra-wideband radio with a processor and open-source firmware, so it could be used as a tag, it could be used as a sniffer. It's really nice as a debug tool as well. The second piece of hardware is a deck for the Crazyflie. This is just the radio connected to the CPU of the Crazyflie; the Crazyflie calculates its position on its main CPU, in the firmware, and all the ranging and all the control is implemented by the Crazyflie. So we obviously had to write firmware for the Crazyflie and for the anchors. Right now we have two-way ranging working; we consider it stable, it's working very well. TDoA we consider very experimental. We've had it working, we have flown five copters in our lab, but it tends to do strange things every now and then, so that's still in progress. It's published, though; it's open source, so anyone who wants can dig into it.
But we also discovered, I mean, we knew about it, but we discovered it even more, that there is more to making a local positioning system than just ranging. One very important piece is the sensor-fusion algorithm. We use a Kalman filter, and the reason is that the measurements you get from the system are very noisy. We've flown on the raw measurements directly, but you get a very unstable flight; it's not very nice to look at and not very useful. Luckily, we have inertial sensors in the Crazyflie, that's how it flies, so we can use these inertial sensors to get an idea of how we're moving over short periods of time. These inertial measurements tend to drift very quickly: you can get a couple of seconds out of them, and then your estimated position has drifted way too much compared to your real position. So what the sensor-fusion algorithm does is fuse these two measurements: inertial measurements, which are very good short term but drift long term, and ultra-wideband measurements, which are very stable long term, you can integrate them and get a pretty good estimate that's not going to drift, but which short term are almost useless because they are very noisy. This has helped us get much better performance. We also discovered that a lot of work was needed in trajectory control. Apparently it's not good enough to have the copter hold here and move it by just moving the set point; you need to do it cleverly and make it follow a trajectory. That has been developed a lot recently as well, and it increases a lot the smoothness of the flight, how it behaves, the performance more generally. Software-wise, on the PC, we started implementing the system with ROS, the Robot Operating System. It's not really an operating system, it's a framework.
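The short-term/long-term fusion idea can be sketched in one dimension. This is a toy Kalman-style filter, far simpler than the real on-board estimator, with made-up variances: the accelerometer is integrated to predict motion (accurate briefly, drifts), and each noisy range-based position measurement pulls the estimate back (noisy briefly, stable forever).

```python
import random

class Fuser1D:
    def __init__(self, pos=0.0, vel=0.0):
        self.pos, self.vel = pos, vel
        self.var = 1.0                    # position variance estimate

    def predict(self, accel, dt, accel_var=0.05):
        # Dead-reckon from the accelerometer; uncertainty grows.
        self.vel += accel * dt
        self.pos += self.vel * dt
        self.var += accel_var * dt

    def correct(self, measured_pos, meas_var=0.01):
        # Blend in the noisy position with a Kalman gain.
        k = self.var / (self.var + meas_var)
        self.pos += k * (measured_pos - self.pos)
        self.var *= 1.0 - k

random.seed(0)
f = Fuser1D()
for _ in range(200):                      # hovering at x = 1 m
    f.predict(accel=0.0, dt=0.01)
    f.correct(1.0 + random.gauss(0.0, 0.1))
print(f.pos)   # close to 1.0, much less noisy than any single reading
```

The gain settles at a value that weights the two sources by their variances, which is exactly the "good short term" versus "good long term" trade the talk describes.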
It's a robotics framework that is used by a lot of researchers in robotics around the world, and it gives you a lot of basic pieces to work with, like a 3D visualizer and debug tools. That was really nice to work with, and it also fits our first target, which is researchers in robotics. But we are currently working on supporting the system in the Crazyflie lib that already exists: we already have a Python lib and a Python client that control the Crazyflie. This will make the system much easier to set up, because ROS needs a specific version of Ubuntu, it's fairly hard to get around that, and you have to learn how to use it. So we intend to make it easier to use, and also easier to set up. Here, we had to be very careful about where we put the anchors and carefully measure their positions. It would be nice if you could just push a button and everything is set up and all the anchors know where they are. After all, they are ranging radios, so they could find their own positions as well. We released the system about six months ago in what we call early access, which means that we had tested the hardware but the software was very, very much work in progress. People have been getting it anyway and starting to work with it; universities and industry have been using it to do research, tech demos and that kind of thing. We also had a lot of interest from tech artists, to do light shows and that sort of thing. We knew there was a need, but we were a little bit surprised by the number of people who would like to have a flying platform to do shows. That would actually be a lot of fun to see coming. And yes, we always have a lot of ideas and a lot of software that needs to be done. Marco Tempest, a tech illusionist, has been making a Blender plugin for making choreographies for flying platforms.
So basically, you make a choreography, you export a CSV, and then you have a playback tool, which will be the swarm-management software, that's a nice name for it, to broadcast it to your copters so they follow the choreography you have set up. And as I said, we're also working a lot on easing the setup, like automatic anchor-position measurement. And to support more general robotics, we're planning on making a small tag with an IMU. Basically a Crazyflie without the wings and without the 2.4 GHz radio, but with the ultra-wideband radio, that you could just hook up to your existing robot and get X, Y, Z, or even GPS-like serial data if you want to retrofit an existing robot. So that's some of the things we are looking at for the future. And I'm personally very interested in this: the system will allow more universities, more research labs, more people to access local positioning, because it is something like 100 times less expensive than the motion-capture systems that have been the state of the art for this kind of research so far. It's less precise, but it's good enough for a lot of use cases. It's still a bit out of reach for hobbyist hackers, though; I mean, it's still a bit on the expensive side. But all the software that has been developed, all the sensor fusion and trajectory control, will allow the other approaches to work much better too: with sensor fusion we can lose one second of images and it still flies. So all the camera-based, webcam-based approaches will work much, much better now that we have all the software in place and kind of state-of-the-art algorithms in the Crazyflie. So I'm hoping to come back to that and allow anyone to do autonomous flight with the Crazyflie without having to set up a local positioning system at home. So now, the dangerous part: the demo. Thank you. I was actually... Yeah, that's not going to work out. So yeah, these are the wanted positions of the anchors: we have six anchors, two here, two here and two at the front.
So it's kind of a prism. And this is what our normal client looks like; we are not GUI people. What I'm doing is starting the Crazyflie facing X, we have X, Y and Z. It's important, because it has to know its initial orientation. It can correct its orientation with the sensor-fusion algorithm, but you need a good initial orientation. And yes, I need to connect it. Here. I don't know if you can see, yeah, a little bit. As it was meant for development, we have this log subsystem that allows logging a lot of values from the Crazyflie, and this is the distance to the anchors. So if I move around, for example, that's a bad one, the red one here is going to... no, it's the pink one, that's going low. We also see some outliers. So that's the distance to all the anchors I have set up in my system. Since last week, we have position hold. So if I press this button... yes. Now it's trying to hold its position. It's some kind of velocity control; there is some drift due to a bug in the Kalman filter, but basically I can control the velocity of the platform, make it move at constant velocity and just brake. So that's what we can do so far with our client, still work in progress. And now with ROS, which we've been using for much longer with this system. That doesn't seem to work. Oh, yeah, I know why. There it is. So you can... I don't know if you can see, but here I have a 3D visualization of my space, and we can see the Crazyflie, which is this set of axes in the middle. So I can just move my set point, take off, and yeah, here we go. So, yeah. I'm not touching it. I'm not completely used to it yet; it's really nice. So that has been working for months now, and now I can try to show what has been working for days. We've had a lot of contact recently with ETH Zurich. Mike Hamer over there has been doing a lot of the algorithms.
He has been doing the Kalman filter, and recently he has been making a nonlinear controller, which is apparently a much better controller, and we've had a student, Marcus Greiff from Lund University, doing trajectory planning. So the idea is that before, I had a set point and I could move the set point around; but if you try, for example, to fly a circle that way, it behaves very badly, because the copter falls behind, then tries to catch up, then it's too early and then it stops, and you get a very, very bad trajectory. What we do here instead is pre-calculate position, velocity and acceleration and feed that to the platform: we tell the Crazyflie what velocity and what acceleration it should reach to follow the trajectory, and this results in much better trajectory control. Oh yeah, and I... Yes, this is what it's supposed to do. Very briefly: we have an ellipse, then we go back and forth, and then we have a spiral. So let's try. So that's the ellipse. Yes, the spiral was cool, we did it twice. Yeah, but that's all, I think. So, questions... What? Yes? Yeah, that's a problem. So the question is: since we are using radio, what about the directionality of the antennas, right? Yes, we're using the Decawave module here, which uses a cheap antenna, and yes, there is some effect of directionality. So if I would... if I make it turn, that was awesome, okay? So if I make it turn, it starts... This controller is awesome, because it manages to keep its stability even if you do very... I have not been able to crash it yet. But anyway... okay, I'm having too much fun now. But anyway, the key point is that I attribute that to the directionality of the antenna: I think we are not turning around a single point because we get a different position just from turning. So that, I guess, is something you might be able to account for if you know your system and calibrate for it. It's just a single receiver, a single antenna.
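The feed-forward trajectory idea described above, pre-computing position, velocity and acceleration instead of dragging a position set point around, can be sketched for the ellipse. This is a minimal illustration, not the actual planner; the ellipse dimensions and period are invented.

```python
import math

def ellipse_setpoint(t, a=1.0, b=0.5, period=8.0, z=1.0):
    """Analytic feed-forward setpoints for an ellipse in the XY plane.

    All three quantities come from the same parametric curve, so the
    controller knows not just where to be, but how fast and with what
    acceleration -- instead of always chasing a moving set point."""
    w = 2.0 * math.pi / period
    pos = (a * math.cos(w * t), b * math.sin(w * t), z)
    vel = (-a * w * math.sin(w * t), b * w * math.cos(w * t), 0.0)
    acc = (-a * w * w * math.cos(w * t), -b * w * w * math.sin(w * t), 0.0)
    return pos, vel, acc
```

Sampling this at the control rate and sending all three vectors together is what removes the lag-then-overshoot behaviour the talk describes for circles.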
And this antenna will have different propagation delays depending on the... you'll have small errors depending on where the signal comes from. It could, I don't know... it could have an effect when the signal has to pass through the motors, for example. Yeah, so the question is about the accuracy. We expect plus or minus 10 cm of 3D accuracy. It has been measured, but I'm not sure about the exact result; it was done against a Vicon system and it's on the order of 10 cm in X and Y. In Z we have a little bit more problems. Thank you very much. So, we have started selling this setup: everything, the radios, the copter and the system, for 1,000 euros. So that's why I say it's very affordable for universities; for hobbyists it's a little bit steep, and that's why I also have some interest in making it work with a webcam and things like that. If you want to easily source a chip, this is the one. I know there are other chips available that have been used, but they are not standards-based and they are not so widespread; they're not so easy to source. Okay, thank you.