So, at LCA last year, I talked about the effort to port an 8-bit autopilot to Linux, or rather something that was originally an 8-bit autopilot running on Arduino-style AVRs, running on boards like this: a still very popular autopilot called the APM-2, its predecessor the APM-1, and so on. In that talk, I talked about the plans to hopefully have a real Linux-based autopilot in the future, and a lot has happened in the year since. So these pictures you're seeing up there are the progression of the autopilots that ArduPilot runs on, starting from the original ArduPilot in 2009, then the APM-1, the APM-2 and 2.5, like this one, and then the current generation of autopilot called a Pixhawk, which is sitting here blinking at the slides. Now, the existing system design: a few years ago I talked about the Outback Challenge, the effort to build a search and rescue aircraft for a search and rescue competition. This is the basic system design that we had in those aircraft. We would have a dedicated autopilot, something like this Pixhawk we've built; it's got an extra microcontroller for the failsafes, shown in a separate box. And then there was also another computer on board, an Odroid, something like this. This is an Odroid U3, a wonderful little embedded Linux box, by the way. We actually had an Odroid XU, but they're very similar. And that's doing all the image recognition and the high-level processing, running lots of Python image recognition code, et cetera. That was the basic design. So I would like to combine some of those elements so that all of those bits in the red box are replaced with one system, partly because it's a fun challenge, but also because it allows us to make things smaller, lighter, less wiring, less complex for users. And did I mention it would be fun?
So at the end of last year's talk, I showed this diagram of what, at the time, we were planning to build: something called the Pixhawk Fire Cape, a cape to go on top of a BeagleBone Black, to provide all of the sensors and goodies you need to turn an embedded Linux box like the BeagleBone Black into an autopilot. What I said at the end of that talk is that I was hoping, in the next year, we would actually produce this board, write all the drivers for it, and get it flying. And today I'm going to be, hopefully, showing you a flight demonstration using this board. So this is the board. That is a PXF cape, a different colour than the one in the diagram there, and it's sitting on top of a BeagleBone Black. In fact, an awful lot has happened in the last year, far more than I originally expected. ArduPilot is now regularly flying on Linux. Multiple ports of ArduPilot have been done to a variety of different boards, so there's not just one autopilot cape for ArduPilot, there's several. In fact, in the last year, two companies have sprung up whose purpose is to sell commercialised autopilots based on Linux, and this design and other designs. So they're now commercially available, which is just wonderful, and I didn't even imagine that might happen a year ago. OK, so I thought I'd start this talk with a demonstration, but it's a little bit tricky getting a plane in my luggage and bringing it over here. I could have got a really, really small, tiny copter or something and had it hovering around on the stage, but I thought it would be more fun to fly a slightly bigger plane, though not a really big one. So this is a picture of a good friend of mine, Jack Pittar, the chief pilot for CanberraUAV, with the plane, a Skywalker. It's just an electric glider, a 2.2 kilogram glider. It's currently sitting on the runway in Canberra, waiting for the go-ahead to be launched, and we're going to have a live feed from there to watch the flight and see how it goes.
Now, it is running a BeagleBone Black with a PXF cape like this. It's running ArduPilot 3.2.1, which isn't actually released yet, so that might be next week's job. And while it's flying around, I thought a good demo for this audience would be for it to be compiling the Linux kernel while flying, on the same CPU. All right, so what we're going to do now is switch over to the demonstration. When we were setting up this demonstration, we originally just thought we'd have the telemetry system going. So here we have the telemetry from the aircraft; you can see it's coming through live. It's currently just being held a little bit off the ground. You can see its ground speed; there's lots of information I'll explain after it's in the air. And we have got to go and kick off a, let's see, LCA screen. Let's see if that works. So there I am, I'm now logged into the aircraft. So far it's working. So let's take a screen within a screen, and cd into the kernel source directory. And, yes, the clock isn't set, my apologies. What is the date today? The 16th? I forgot; we don't do automatic NTP. We do when it's got an ethernet bridge link, but it doesn't today, for various reasons. So, date: the 16th, and the hour, you know, 11:32, 2015, something like that. There you go. And make -j2. Let's start building a kernel there. So it's already going before we take off, and that'll take a little bit of time; it's on a micro SD card. All right. So while it's flying, let's also hook up a bit of a video link. We were imagining just telemetry, and then when we were out doing the test flight for this, we thought, hey, it would be cool to have some video as well. What have we got in our pockets? And Grant Morphett, who's out at the field at the moment, very kindly just pulled out his Android phone and realised, I've got Skype on this phone. So why don't we do a Skype video link?
So let's see if that works. And yes, there's the plane sitting out on the field at the moment, and the person you're seeing there is Jack Pittar, holding the transmitter. There's the plane sitting on the grass, ready to go. That's our flying field, the red runway behind it. All right, so we're just about ready, and what we'll be doing is watching it move on this map and watching it compile this kernel. And I'll just create another root terminal there, so we can run normal commands on this thing while it's doing stuff. All right. So, Grant, can you hear me? Grant, do you hear me? Great, Grant, OK. So I believe we are ready to launch when you are. Yeah, ready to go. OK. Now, I don't know if he's organised a good way with the video link; let's see how this goes with all these screens, et cetera. All right, we're in manual mode there. There's the plane. He's checking the stabilisation, making sure that it's going to stabilise and fly. These are the standard ground checks you do before takeoff. Mode auto, waypoint one. Yep, he's ready to launch it, and it'll automatically detect the acceleration from the throwing of the plane and then start the propeller based on that acceleration. So it's an automatic takeoff, but with him just throwing it. There it is, it's heading off into the sky. And we can see that the kernel is still compiling down here, and the plane is heading towards its first waypoint up here. If I zoom in on that, you'll see what it does. The video is actually fairly meaningless at this point, because the plane is just a dot in the sky; we might bring it back for the landing. It's now turning around. We may lose the video link, Skype's not that reliable. Now, there's a fair bit of wind, about 12 to 15 knots, which is a fair bit for a plane of this type, and so you will see it crabbing a fair bit as it goes around the waypoints.
You can see it's approaching that next waypoint. It'll do three circuits around this automatically, and then it'll switch to its landing pattern. So if I just make this map a little bit bigger, you'll see that it's crabbing a fair bit from the wind effects. It'll automatically switch over to waypoint seven here when it gets to this point, and then come in for what we hope is a perfect automatic landing. And there really is a fair bit of wind there, as you can see, but it's tracking quite well; you can see that the actual path between the waypoints is not bad. Linux is doing its job. We haven't made a lot of progress on this kernel compile, though. This is a very slow CPU; the BeagleBone Black is a very slow CPU. Compiling the kernel, with all the modules and everything, would normally take about five or six hours. While it's flying a plane, it takes about 11 hours to compile the kernel from scratch. But the main thing is to show it as a bit of a stress test, and it was actually quite a good stress test, as I'll show you later in the talk; things don't always work exactly right. You can see it's much faster downwind than upwind. It's really starting to accelerate. So there we've got, yeah, about 25 metres a second ground speed, 15 metres a second airspeed; we had about 10 metres a second, 20 knots, of wind at that point. And it'll struggle a bit more upwind. So the wind varies a little bit. Right, so let's watch it go around, and let's think about what it's doing while it's doing this. So this little board, just like one of these, is doing, in this particular configuration, about 1200 SPI transactions per second. Those SPI transactions are pulling data from a three-axis gyroscope and a three-axis accelerometer. It's also doing a lot of transactions with an MS5611 barometer and an HMC5883 magnetometer to get the heading, and it's pulling data from all of those.
We don't actually have the laser rangefinder on this one, so we're reliant on it being a fairly short flight, so we don't get a lot of barometric drift; on a larger plane, for a longer flight, we normally have a laser. You know, Linux-powered flying robots with lasers on them: what's not to like? So there it is rounding that corner, and it should soon do the last leg up towards the landing. It's also doing an awful lot of IO. It's helped by this board having a couple of little programmable real-time units, and they're doing the microsecond-accurate stuff in the plane, because there are some things that need to be microsecond accurate. Most things don't need to be microsecond accurate. You can see it's heading towards the landing path there now. So it might be worth popping back to the video; once it's on the final approach, the video may make more sense, and not just be a dot in the sky. So let's see. Okay, I might have to re-call him there; we may well have lost the Skype link. Let's see if we can get the landing on video. Unless, of course, it's a terrible landing, in which case I'll shut it down. But that photo I showed of the plane in the middle of the runway, nicely, that was the last test result. So can we get the landing, Grant? Oh, we lost video. There it is. Okay, we should get the plane coming in here now, so hopefully we will see the plane on the video, and let's see how it does. So, anyone spot the plane there yet? It should be coming in. Somewhere there, there's a plane. Presumably Grant can see it, because he's pointing the phone at it. Oh, there it is. All right, there it is, coming in for a landing. Yeah, quite a bit of crabbing there. Hey, so that's not bad. It's landed okay. The plane has survived; anything you can walk away from is a good landing. So thank you very much, Grant. Thank you very much, Jack. A round of applause for our helpers in Canberra. All right, so thank you very much.
And so we'll shut that link down now, and I'll get on to the other, technical part of the talk. So shut that down. We've still got the kernel compile; we've made a bit more progress now. I'll just leave that running, that's fine, and eventually we'll get a kernel we can use. So somewhere here, I've got a talk that I was going to give. Yes, okay. All right, so as I said, that was a BeagleBone Black with a PXF cape, in a little electric model. You can also fly much bigger planes; you can do ground vehicles, you can do multirotors, lots of things. I'll just turn off that audio coming from the ground station there; they might go for another flight, you never know. So, the setup in that particular demonstration, just so you know what it was composed of: there was a 3.8.13 kernel with the RT-preempt patches. Thank you, Ingo. And it was running ArduPilot 3.2.1. The particular sensors that flight was using are the MPU9250 accelerometer and gyroscope, the MS5611 barometer, a u-blox GPS, a compass, and an I2C airspeed sensor, a differential pressure sensor. The RC input, in case Jack needed it: Jack was the pilot; it's a model aircraft with a model aircraft controller, and Jack has, at any time, the ability to take complete control. That was via S-Bus, and there are some interesting twists to doing S-Bus on Linux that I'll talk about a bit later. And the PWM output was on the other PRU, with the microsecond timing, in order to control all the servos. There were two telemetry radios, one for the main MAVLink telemetry and the other for the root shell. And that's how it all worked. Okay, so autopilot boards and ports.
There have been a number of ports of ArduPilot done in the last year. The PXF BeagleBone Black cape was the one that I showed the diagram of at the end of the talk last year; that's from 3D Robotics, and 3D Robotics has really been pushing this technology hard, and I'm enormously thankful for the support they give the ArduPilot community. There's also the Erle-Brain, and this board here is actually from Erle Robotics. All of the hardware is open hardware, and all of the software is free software, open source software under the GPLv3. Erle Robotics is Victor; he was a Google Summer of Code student, and at the end of that, he started a robotics company, which is a great way to go from your GSoC. A fantastically successful GSoC we had. He started the company, Erle Robotics, started making these boards based on the open hardware design, and he's now selling them as a commercial autopilot solution based around Linux, which is brilliant. I thought that was fantastic, that somebody had actually started producing a Linux-based autopilot on the open hardware designs. Then another one cropped up: Emlid, from Russia. They did a Kickstarter, and they've started a company, they've got a whole bunch of employees, creating Raspberry Pi capes. So there is the Navio cape for the Raspberry Pi; that's a standard Raspberry Pi underneath it, and it flies planes and copters and drives rovers and boats. And again, it was a relatively easy change, because of the way we'd built the code to be portable and easy to adapt to new boards; they were able to create the Navio board, and they've just announced, last week, the Navio+. Unfortunately, the one they're sending me hasn't arrived in the post yet; it's probably in FedEx now. And that adds even more features. That's a great way to get into things. I'm delighted to see them starting, and I wish them every success.
There are other ports as well that have happened in the last year. An i.MX6 port was done within 3D Robotics by Alan Matthew; a Zynq port, which is an ARM core with an FPGA on the side, was done by John Williams in Queensland; and the newest port, I was delighted by. I was sitting in another talk here at LCA with my laptop open when a pull request arrived, adding support for a new board. That's the BBBmini port, which is also BeagleBone Black based, but a much, much cheaper, simpler sensor board on the top, more of a do-it-yourself cape, put together by a guy called Mirkix. And the thing I was really delighted about with that patch: A, it was really nicely structured; B, the entire patch to add support for a new autopilot board was 43 lines. Tiny. And basically his patch said, hey, it looked like it'd be easy to throw together another one, one that's easier and cheaper for people to throw together on top of the BeagleBone. Here's the patch set, I'll start a wiki page, document it, et cetera. Marvellous. We're really getting a community going around this stuff, so I was delighted to see that one come in. So there's now a wealth of boards available. And I should say, by the way, don't get the impression that ArduPilot is the first open autopilot to build on top of Linux. There are lots of free software, open source autopilots out there, and some really great ones: TauLabs, OpenPilot, Paparazzi, PenguPilot, AutoQuad, lots of other good ones. And several of them have done Linux ports as well. So we're not the only game in town, but we're really delighted to see the progress in our particular corner of the free autopilot world. All right, so I was going to tell you a little bit about what's going on underneath when it's flying a plane like this. So, all of the fast sensors, the ones you've got to get the data off really quickly: that's basically the accelerometers and gyroscopes in particular.
You have to get data from them very quickly in order to get a good result, and by quickly, I mean that you want to be sampling them at around a kilohertz or so; a millisecond or so between samples. And they're on the SPI bus, the Serial Peripheral Interface bus, which is an ideal bus for really short haul, a centimetre or so within a board, at high rates. They're short transactions; each transaction on the SPI bus is typically around 20 bytes or so. And they're not done with DMA, in case anyone's curious, because the DMA setup costs are too high. You're better off just hitting the registers of the peripheral to get it to do the transfer and then waiting for it. This BeagleBone Black, despite the fact that it's a very slow board (as I said, this sort of board takes five or six hours to build the Linux kernel; it's not a fast processor, and it's single core), can happily handle, through the user-space spidev interface, around 4,000 SPI transactions per second, with a 25% CPU load and no lost packets, which is pretty good. And that was one of the big open questions during my talk last year: would we need to do kernel drivers for all these devices to make it work, or could we use user-space drivers? And you may ask, why do we want user-space drivers? We've got this nice GPL kernel; we could just shove drivers down in there. The reason is that by using user-space drivers, the same driver, via our hardware abstraction layer in ArduPilot, runs on lots of different operating systems, not just different flavours of Linux. The same driver that drives the MPU9250, the MPU6000, these various sensors, is used on this APM-2, with eight kilobytes of RAM and an 8-bit microprocessor running at 16 megahertz, and the same driver is used on the Linux box at a gigahertz, by just writing a tiny wrapper that exposes spidev. It means we only have to write the driver once.
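To give a concrete idea of what a user-space SPI driver looks like, here's a minimal sketch of a sensor burst read through Linux spidev. The device path, register address and clock speed are assumptions for illustration (the 0x80 read flag matches InvenSense parts like the MPU9250); this is not ArduPilot's actual driver code.

```c
/* Sketch: user-space SPI burst read via /dev/spidevX.Y (Linux spidev API). */
#include <fcntl.h>
#include <linux/spi/spidev.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Fill tx for a burst read of `len` data bytes starting at register `reg`.
 * tx must have room for len + 1 bytes. Returns the transfer length. */
size_t build_burst_read(uint8_t reg, uint8_t *tx, size_t len)
{
    memset(tx, 0, len + 1);
    tx[0] = reg | 0x80;      /* bit 7 set = "read" on these InvenSense parts */
    return len + 1;          /* one address byte plus `len` clocked-in bytes */
}

/* Perform the full-duplex transfer; returns <0 if the bus isn't present. */
int spi_burst_read(const char *dev, uint8_t reg, uint8_t *rx, size_t len)
{
    uint8_t tx[64] = {0};
    size_t n = build_burst_read(reg, tx, len);
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;           /* no such SPI bus on this machine */
    struct spi_ioc_transfer tr = {
        .tx_buf = (unsigned long)tx,
        .rx_buf = (unsigned long)rx,
        .len = (uint32_t)n,
        .speed_hz = 1000000, /* assumed bus speed for this sketch */
        .bits_per_word = 8,
    };
    int ret = ioctl(fd, SPI_IOC_MESSAGE(1), &tr);
    close(fd);
    return ret;
}
```

On a machine without that SPI bus, the open() simply fails, which is fine for a sketch; the interesting part is that the whole transaction is one ioctl from user space, no kernel driver required.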
We can make sure the driver's reliable, and we just have this nice hardware abstraction layer that I talked about last year. Okay, so it was actually quite a surprise that spidev worked so well on these boards. On each board we've tried it on, it works really well, which I didn't really expect. So that made things an awful lot simpler, and that's partly why we've made so much progress this year. The slower sensors tend to be on I2C. I2C tends to go across cable lengths of maybe 10 to 20 centimetres, so that's for going off to your compass or your airspeed sensor. It's particularly used where you don't need really high data rates, and where the sensor prefers to be further away from the rest of the action in the plane; it wants to be further away for interference purposes. There's this rule in drone design that every component of the drone wants to be as far away as it can be from every other component, just because they all interfere with each other in horrible ways. So we put some things out on I2C buses. In the future, the UAVCAN bus is going to be replacing a lot of that, and so we have our first CAN bus electronic speed controller, relying on UAVCAN and the CAN bus support in Linux, which is great, and on other platforms, and there's a CAN bus GPS and barometer and compass. A bit big, but yeah, CAN bus is great stuff; you'll be seeing a lot more of it, I think. (All right, you don't like CAN bus? A bit tortured by it at the time? Okay. Well, I love it.) UAVCAN, if you look at uavcan.org and Pavel's work on that, is absolutely brilliant. He's done a lovely abstraction, and it's portable to lots of different operating systems, et cetera. We just take advantage of Pavel's work. All right, so, scheduling. The first question I tend to get from people who've worked a bit in the Linux space, when I talk about running an autopilot directly on Linux, is: so what about scheduling?
Aren't you going to get long delays sometimes, and isn't that going to cause the plane to crash? And I hoped the answer would be no, and so we tried to design things appropriately. That particular plane that was flying just then has six real-time, as in FIFO-scheduled, threads, with all the usual real-time stuff: pre-faulted stack, locking all the pages, all the usual things that you do in a FIFO-scheduled real-time task to stop surprises. These are the threads it's got, and they're in priority order, from highest priority to lowest. There's a timer thread, which lets any device driver or any component of the autopilot say, I'd like something to happen regularly, please, as a multiple of one millisecond. The timer thread basically does all those things and calls them; you can register a callback that happens at one kilohertz, and that happens on the timer thread. There's a UART thread for all the UART serial operations: talking out to telemetry, and also talking to GPSs, if the GPS is a UART-based GPS. Not all GPSs are UART-based; the one on this board is actually SPI-based. There's an RC input thread for processing all of the RC radio input pulses from transmitters. There's a main thread, which is the bulk of the autopilot logic, the navigation and that sort of thing; that's actually in one thread, which suits our architecture, but we use our own little mini-scheduler system within that thread to manage the timing within it. Then we have a tone alarm thread, on this particular board, that handles the buzzer. If the sound had been good enough, you would have heard it singing a little bit to us when it was starting up, making various buzzing sounds to say things are good or things are bad; hopefully all the buzzes were good. Then there's an IO thread, the lowest priority, for all of the file system IO: logging, parameters, terrain data. That's something that's happened in the last year.
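The timer-thread idea, where any component registers a callback to run at some multiple of one millisecond, can be sketched like this. The names, the fixed-size table and the demo callbacks are all illustrative, not ArduPilot's real API:

```c
/* Sketch of a 1 ms timer-thread dispatch table: components register a
 * callback and a period in milliseconds; one high-priority thread calls
 * timer_tick() once per millisecond and fires whatever is due. */
#include <stddef.h>
#include <stdint.h>

#define MAX_TIMERS 8

typedef void (*timer_cb)(void *arg);

struct timer_entry { timer_cb cb; void *arg; uint32_t period_ms; };

static struct timer_entry timers[MAX_TIMERS];
static size_t ntimers;

/* Register cb to run every period_ms milliseconds (period_ms >= 1). */
int register_timer(timer_cb cb, void *arg, uint32_t period_ms)
{
    if (ntimers >= MAX_TIMERS || period_ms == 0)
        return -1;
    timers[ntimers++] = (struct timer_entry){ cb, arg, period_ms };
    return 0;
}

/* Called once per 1 ms tick by the (SCHED_FIFO) timer thread. */
void timer_tick(uint32_t now_ms)
{
    for (size_t i = 0; i < ntimers; i++)
        if (now_ms % timers[i].period_ms == 0)
            timers[i].cb(timers[i].arg);
}

/* Tiny demo callbacks: count how often each rate fires. */
static int fast_calls, slow_calls;
static void fast_cb(void *arg) { (void)arg; fast_calls++; } /* 1 kHz */
static void slow_cb(void *arg) { (void)arg; slow_calls++; } /* 50 Hz  */
```

In the real system the tick would come from a clock, and the thread would also do the pre-faulted stack and mlockall() setup mentioned above before entering its loop.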
ArduPilot can now automatically follow terrain, and we used that in the Outback Challenge very successfully. So it's got to build the terrain database and store it somewhere; that all happens in the IO thread. So that's the basic scheduling. And I thought that was all good; it seemed to be working really, really well, until I was preparing for this talk. As so often happens, when you're putting together a talk, sometimes you actually run across bugs, and this talk was no exception. So I did an 11 hour test. I thought, well, let's see how long it does take to build the kernel while running the autopilot. So I ran the autopilot just in a bench test, still doing all the same work it does in the air: an 11 hour test, building the kernel off a micro SD while running ArduPilot on a BeagleBone Black. It's a 50 hertz main loop, so we expect a 20 millisecond loop time. And you can actually see here, if I scroll back a little bit, hopefully you'll see little numbers pop up. Maybe I've scrolled too far. There it is. Can you see all those timing numbers? Maximum, minimum. You saw that sort of maximum 20 milliseconds, minimum 18. That's reporting, every 10 seconds, what the worst case and best case loop timing for that 10 second period was. And you want it to be around 20 milliseconds; a millisecond or two either side doesn't matter, but around 20 milliseconds. During that 11 hour test, about two million loops were executed. 19 of them were over 30 milliseconds. That's no problem for an autopilot; all the algorithms in the autopilot scale with time, to some degree. 20 milliseconds, 30 milliseconds, 50 milliseconds: yeah, if it's 50 milliseconds a lot of the time, it'll look a little bit jittery in the air, but it'll still fly, and most users would never notice. One loop in that entire 11 hours was over 40 milliseconds, and that one was 1.7 seconds.
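Those min/max numbers scrolling past can be produced by a very small piece of accounting, something like this sketch. The struct, names and the 30 ms "overrun" threshold are illustrative (the threshold just echoes the numbers quoted above); ArduPilot's real reporting differs in detail:

```c
/* Sketch: track best/worst main-loop time over a reporting window,
 * like the "maximum 20 ms, minimum 18 ms" lines printed every 10 s. */
#include <stdint.h>

#define LOOP_OVERRUN_MS 30   /* illustrative "worth counting" threshold */

struct loop_stats {
    uint32_t min_ms;
    uint32_t max_ms;
    uint32_t overruns;       /* loops that exceeded LOOP_OVERRUN_MS */
};

/* Start a fresh reporting window. */
void stats_reset(struct loop_stats *s)
{
    s->min_ms = UINT32_MAX;
    s->max_ms = 0;
    s->overruns = 0;
}

/* Record one main-loop iteration's elapsed time. */
void stats_sample(struct loop_stats *s, uint32_t loop_ms)
{
    if (loop_ms < s->min_ms)
        s->min_ms = loop_ms;
    if (loop_ms > s->max_ms)
        s->max_ms = loop_ms;
    if (loop_ms > LOOP_OVERRUN_MS)
        s->overruns++;
}
```

Every 10 seconds the main loop would print min and max, then call stats_reset(); the overrun counter is what lets you say "19 loops out of two million were over 30 ms" after an 11 hour run.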
So once, there was a 1.7 second miss. I still need to find that, and the difficulty is reproducing it. I've been chatting to some people here at this conference about how to debug that sort of thing, how to find the cause; it's almost certainly some driver. I have my eye on the MMC storage driver for the SD card. Maybe it was waiting for some response from the micro SD card, and it waited 1.7 seconds with interrupts disabled, is my current guess, but I haven't yet delved into it enough to really know, and I may be maligning the MMC driver in a completely unwarranted fashion. So the challenge now is to find the cause and fix it. The suggestion from Paul McKenney, actually, was to concentrate on the 30 millisecond times, because they happen a lot more often, and if we can find the cause of those, through the latency analyzer or some other toolkit, then perhaps that will lead the way to the longer timeout. So, apart from that, in the 11 hours, you can see it does actually fly. It actually flies very nicely; it flies just as well on Linux as it does on the dedicated autopilot boards. But just occasionally you could get a long delay. Now, if that happened during a real flight, in straight and level flight, you wouldn't even notice it unless you looked at the log carefully (because we log all these things). But if it was taking off or landing at the time, that could be tricky, because then it's going to go straight ahead for 1.7 seconds, which at, you know, 15 or 20 metres a second, means you might have gone 30 metres in the wrong direction. And that's hit a fence, hit a tree, something like that; hopefully not hit a person. All right, so that's the bug I found while preparing for this talk. Okay, so I thought I'd talk a little bit about the PRU code. Basically, for each autopilot board, you need to solve a few different problems, and most of the problems, just by using Linux, are solved for you.
You've got spidev and i2c-dev; it's really just a matter of wiring. But there are two little areas that you really have to think about and solve for each particular board, each autopilot board I've got here, and those are the two things that have to have microsecond timing. That's the RC input, decoding the pulses that carry the commands from the pilot; they need to be decoded with microsecond precision. And the PWM output going to the servos or motors, to basically make a motor spin at the right rate; those pulse widths need to be microsecond accurate. And Linux is not good at that sort of microsecond stuff if you're doing something else as well, like flying a plane. So the solution used on this particular board is to use these two little PRUs, programmable real-time units. What they are is two extra little CPUs on the BeagleBone Black that share 8K of RAM with the ARM. They run at 200 megahertz, so they're a bit slower than the main ARM, but they run very, very simple code. There's no kernel or anything on them; they're just running extraordinarily simple code. And they have direct access to the IO pins, so they can read pins and toggle pins at 200 megahertz type rates. And there is a C compiler available. Unfortunately, the source code for the C compiler isn't available, but there is at least a binary of a C compiler available. It's not a complete C compiler, but it is usable. You can also program them directly in assembler if you want to, because the programs you write for the PRU are very, very simple. So the first high-rate timing task is radio control input. You've got yourself an RC transmitter; if any of you fly radio control planes, you'll be familiar with this. The receiver in the aircraft tends to put out one of three things. Either what's called PPM-sum, which is just a multi-channel, pulse-width-based RC input.
The width of each pulse, in microseconds, gives a number for what you want the plane to do: 1500 is neutral, 2000 turn right, 1000 turn left, that sort of thing. Then there's S-Bus, which is a 100 kilobaud inverted serial input. The fact that it's exactly 100,000 baud is a bit awkward, because typical UART chips don't do 100,000 baud. And it's inverted, which is annoying; again, you would need extra components to invert it if you wanted to use it with a standard UART, unless your UART has the ability to handle inverted serial. Then there's DSM, also known as Spektrum, which is 115,200 baud, not inverted, and it's got nice framing, as does S-Bus. Okay, so we wanted a solution that was as generic as possible, so that the person who makes each Linux-based board doesn't have to solve these three problems each time, themselves. So what we did was build, on the ARM chip, which is running at a gigahertz and has oodles of CPU relative to our past autopilots, a software bit-banged serial port implementation that can decode all three protocol streams, in parallel, from one pin of input. So you attach your receiver, any of the types of receivers, to this one pin, and that one pin on the PRU, the 200 megahertz PRU, is just polling that pin, producing a pulse train, and that feeds a function, called something like processRCPulse, which puts it into three parallel bit-banged serial decoders. Whichever one says it's valid, the checksums are sensible, the number of channels is sensible, that's the protocol it is, and for that frame, you decode it that way. It works beautifully. And it meant that it was extremely simple, on each autopilot board, to support all of the different RC input systems. It also avoids the need for multiple UARTs. One of the things that is scarce in microcontrollers tends to be UARTs; you run out of UARTs, and they're really valuable.
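For the simplest of the three, PPM-sum, a decoder is little more than "a long gap ends a frame; everything between gaps is a channel width". Here's a minimal sketch in that spirit; the 2700 microsecond sync threshold, the minimum channel count and all the names are assumptions for illustration, not ArduPilot's actual decoder:

```c
/* Sketch: PPM-sum decoding from a stream of pulse widths (microseconds).
 * A gap longer than PPM_SYNC_US marks a frame boundary; pulses between
 * boundaries are the channel values (roughly 1000..2000 us each). */
#include <stddef.h>
#include <stdint.h>

#define PPM_MAX_CHANNELS 16
#define PPM_SYNC_US      2700  /* assumed frame-sync threshold */
#define PPM_MIN_CHANNELS 4     /* reject implausibly short frames */

struct ppm_decoder {
    uint16_t ch[PPM_MAX_CHANNELS];    /* frame being assembled */
    size_t count;                     /* channels collected so far */
    uint16_t frame[PPM_MAX_CHANNELS]; /* last complete frame */
    size_t frame_len;                 /* channels in that frame */
};

/* Feed one pulse width; returns 1 when a complete frame becomes available. */
int ppm_input(struct ppm_decoder *d, uint16_t width_us)
{
    if (width_us > PPM_SYNC_US) {     /* sync gap: frame boundary */
        int done = 0;
        if (d->count >= PPM_MIN_CHANNELS) {
            for (size_t i = 0; i < d->count; i++)
                d->frame[i] = d->ch[i];
            d->frame_len = d->count;
            done = 1;
        }
        d->count = 0;                 /* start assembling the next frame */
        return done;
    }
    if (d->count < PPM_MAX_CHANNELS)
        d->ch[d->count++] = width_us;
    return 0;
}
```

The S-Bus and DSM decoders are more involved (they reconstruct serial bytes from the same pulse train), but they sit behind the same feed-one-pulse interface, which is what makes running all three in parallel on one pin cheap.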
All right, so on the first PRU, we do the RC input; it's 70 lines of C code on that PRU, that's it. Tiny, just manipulating a ring buffer. The second PRU has 235 lines of C code to handle all the PWM output, going off to the motors, generating the pulse train for the motors. Again, it's just a piece of shared memory with the main ARM, no locks, it can all be done atomically; very, very simple. And there's your microsecond stuff. All right, so I've only got a few minutes left, and I did promise in my abstract that I'd talk a bit about the Outback Challenge. How many of you were around for my talk in 2011, where I talked about the Outback Challenge? At that time I was talking about us planning to do this thing called the Outback Challenge, which really inspired all the work I've been doing for the past few years. We entered the Outback Challenge in 2012, and we won the challenge, but we didn't get the grand prize, because we didn't drop the bottle on Joe. The plane has an autopilot plus an Odroid, like I showed you before. This is the plane we entered in October 2014; that's it, flying at our local flying field. A 50 cc petrol motor, 14 kilograms, 2.7 metre wingspan, stuffed full of electronics; you can see the antenna stalk sticking up the top. So that's what it looks like on the ground station, a big LCD monitor, and that's the CanberraUAV team sitting there monitoring the flight. The plane is about five or six kilometres away at that point in the competition. And you can see all our GPL software running on the screen, doing the automatic image recognition within the aircraft, recognising Joe and displaying it to the operator, on the map, et cetera. A lot of image recognition type stuff. This is the path we flew. You can see we flew away from the airport, at the top there, down to do a search pattern.
We flew about 60 or 65 kilometers in 40 minutes or so to do the search, with a lot of circling around at the end, above where Joe was. This is the big moment for us, when it dropped the bottle on its parachute. You can see the yellow water bottle there going down onto Joe. And you can see that blue square about a third of the way up from the bottom: that's Joe on the ground, and the bottle going down towards him. The plane had estimated the wind, automatically compensated for it, and worked out the right release point and timing, all of that. And you'll notice that the image recognition system has automatically put a box around Joe, because it keeps looking for Joe all the time, even though it's now doing the drop. That was it automatically saying, that looks a bit like what we're looking for. So that's the end result. That's Joe in one box, and the image recognition system getting confused and thinking the two people next to Joe look like they could be Joe too. So maybe there's two Joes now. And that's our bottle on the ground, 2.6 meters away from Joe, with the two organizers. We circled back over to make sure that we'd got reasonably close. We were circling above them at the time, monitoring what they were doing. And this is a photo taken from our plane while they were measuring the water from the water bottle to make sure that we'd met the requirements. And there's the team afterwards, where we got the prize and finally got to meet Joe. So that competition's now done, and they've got to come up with a new competition next time. Four teams completed the competition in October, so it was a fantastic effort, and a couple of teams went very, very close. It's marvellous to finally complete that. The next thing I'd like to talk about before we take some questions is dronecode.org, because the other big news in the last year is that we've started a new organization, dronecode.org, which is an umbrella organization. It's a 501(c)(6).
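The wind compensation for the release point can be illustrated with a deliberately simplified model. This is not ArduPilot's actual drop code: it assumes a constant parachute descent rate, ignores the forward throw from the aircraft's own speed, and the names are made up for the example.

```c
#include <assert.h>
#include <math.h>

struct vec2 { double x, y; };   /* meters (north/east) or m/s */

/* Horizontal offset from the target at which to release the bottle.
   During the fall the payload drifts with the wind, so release upwind
   by the expected drift distance. Constant-sink-rate model only. */
struct vec2 release_offset(double altitude_m,       /* height above target */
                           double descent_rate_ms,  /* parachute sink rate */
                           struct vec2 wind_ms)     /* estimated wind */
{
    double fall_time = altitude_m / descent_rate_ms;
    struct vec2 off = { -wind_ms.x * fall_time,
                        -wind_ms.y * fall_time };
    return off;
}
```

For example, dropping from 100 m with a 5 m/s sink rate gives a 20-second fall, so a 3 m/s wind calls for releasing 60 m upwind of the target.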
If you've been to any of Karen's talks or Bradley's, you'll know what that means. And it's part of the Linux Foundation Collaborative Projects. It's a forum for collaboration between projects, users and companies using all of this technology we've developed: free and open source autopilots and ground stations and image recognition, the whole ecosystem around this. So if you want to get involved, now is a great time to get in on the ground floor. The first Dronecode conference is at the Embedded Linux Conference in San Jose in March, where I'm going to be giving a talk along with a lot of other people from the community. So if you're in that part of the world in March, please come along to ELC. There should be some fantastic presentations by people that know an awful lot more about autopilot design than I do. I just write bits of code; they do all the maths and things to actually make it work. So that's really about it for the main presentation. We've got just a few minutes for some questions, I think. We can see whether the plane is still there. So, first question. Yes. What are the permissions you need for this, with the air traffic control and so on? Right, very good question, yes. So before doing this flight, I did actually ring up CASA and discussed it with them, and spoke to a very nice person at CASA, who thought about it and spoke to his colleagues and got back to me and said that, yes, we can do it as a model aircraft. So Jack was there with the transmitter; he was the pilot. It was being flown there. I was not actually controlling the aircraft. Technologically I could have, but Jack was actually flying it as a model aircraft in Canberra. And there is a lot going on in the regulation space, so I thought the best thing to do was just to ring the Civil Aviation Safety Authority, ring them up directly, straight in the front door, and say, is this OK? Right. The board's starting to look a lot like a flying tricorder. Right.
Are you planning on actually adding any more input sensors? Oh, yes, sensors get added all the time. The flavour of the month sensor at the moment, and the one a lot of our effort is going into, is an optical flow sensor. An optical flow sensor is basically an optical mouse for a plane, where the Earth is the mouse mat. The plane has a mouse, and it literally uses the same types of sensors as you would find in an optical mouse. So it's flying along; the only difference is that, like a wobbly hand, it gets moved around, so it's got a gyroscope as well so it can compensate for the rotations. And what that gives you is basically the ratio of your height above the ground to your speed. So if you've got your speed from GPS, you can get your terrain height. You can actually automatically track what the terrain height is, which is great for landings. If you lose GPS, you can do great position hold, great for indoor flying, all sorts of things. There are also things like scanning lidars. We've got a couple of lidars like this; that's a laser rangefinder. They're great, and this one goes about 40 or 50 meters or so. There are ones that go up to 100 meters, and scanning ones that scan around. Then there are camera systems, imaging. I mean, one of the points of going to Linux is you move to gigahertz-class, gigabyte-class systems. So I'm planning to put this cape on an Odroid C1 that I've ordered. This cape is compatible because it uses the HAT standard for Raspberry Pis. And the Odroid C1 is a quad-core CPU, really quite fast, a lot faster than a Raspberry Pi. This hat should fit on it, and that would then give me a quad-core one for doing a whole lot of imaging stuff. So SLAM, that sort of thing. And recognizing other aircraft in the sky and heading away from them or towards them, depending on your predisposition. So there are lots of fun things you can do with lots and lots of different types of sensors, which we're adding all the time.
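The optical-flow geometry described above reduces to a one-line relationship: after gyro compensation, the angular flow rate of the ground satisfies flow = ground_speed / height, so GPS speed gives you height above terrain. A minimal sketch of that arithmetic (function name and error convention are illustrative):

```c
#include <assert.h>

/* Height above terrain from a gyro-compensated optical flow rate
   (radians/second) and ground speed (m/s). Returns -1.0 when the
   flow measurement is unusable, e.g. hovering over featureless ground. */
double height_from_flow(double flow_rad_s, double ground_speed_ms)
{
    if (flow_rad_s <= 0.0)
        return -1.0;
    return ground_speed_ms / flow_rad_s;
}
```

So a plane doing 10 m/s over the ground while the sensor reports 0.5 rad/s of flow is 20 m above the terrain; conversely, if you already know the height (say, just after takeoff), the same relation gives you speed without GPS.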
Any experiments with radio transponders for positioning? Radio transponders for positioning. There's a lot of interest in radio transponders for avoiding other aircraft. Unfortunately, some manned full-sized aircraft don't carry transponders, so you can't completely rely on that, which is one of the difficulties of integrating into national airspaces. There are also efforts on using cameras to recognize other aircraft, that sort of thing. But radio positioning: there are some efforts to try to do radio positioning, mostly for indoors, say in a gymnasium or a room like this. And there are various technologies for local positioning systems; you can do things like radio direction finding, stuff like that. But outdoors, most people use GPS, or fancier GPSes like RTK GPS, or the differential GPS I talked about in Perth. There are lots of different positioning systems, far too many for me to really cover in a one-minute answer. Next question? Yep. We should really fly our quadcopter over with the microphone on it. What could go wrong? Says Darryl, the quadcopter pilot. Yep. OK, good. So in the last Outback Challenge, how many of the other competing teams were running your code as well? So the question was, how many of the other competitors were running our code? And I think all but one team was using at least some code that we had written. Partly that's because we wrote the firmware for the radios that most teams adopted, the RFD900 radio, which is a long-range telemetry radio. But also some teams used our image recognition system, and some used our ground station. There was a lot of cross-pollination between teams, which we encouraged, because we thought the aim of the competition was to advance the state of the art, not to win some money. So it was great. To varying degrees, teams collaborated.
And it was a great demonstration that you can be open in a competition, throwing all your code out there, with everyone knowing exactly what we were doing, able to see every commit as we made it, holding nothing back on the day, and still do well. We demonstrated that in 2012, and then again in 2014. It was a fantastic collaboration between teams. Given the complexity of a modern Linux system, and that you've got this thing flying around physically in the air, what happens if something goes wrong? What thought have you given to safety interlocks? Or somebody runs shutdown -h? Or the whole thing glitches? Right, you can get glitches, but a two kilogram foam plane actually isn't a greatly dangerous thing. But there are safety features: on some of our autopilots we've got an automatic bypass to give the pilot absolute manual control should the code die. You can also have fail-safe systems to automatically shut down the motor. And in the Outback Challenge, we had a system where it was required by the competition that, if any one of a very complex set of conditions happened, the plane had to automatically put itself into a spin and deliberately dive into the ground. And that had to be a separate microprocessor, a separate CPU with separate power. To do that, it monitored the main CPU, and if it didn't get the right little updates, at a couple of hertz, saying that everything was fine, or if anything else on a long list of things went wrong, the fail-safe system had to automatically cut in and dive the plane into the ground. So there are lots of things you can do. And it was a great idea the organizers had, to have this termination system. That code is now a standard part of ArduPilot; anyone can enable automatic termination. Make sure that you know what you're doing before you enable it. Please read the documentation first.
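The heartbeat-monitoring side of that termination system can be sketched as a simple latching watchdog. The timeout, names, and structure here are illustrative, not the real AFS code: the point is that the fail-safe CPU expects "all OK" messages at a couple of hertz, and once they stop, the decision to terminate is one-way.

```c
#include <assert.h>
#include <stdint.h>

#define HEARTBEAT_TIMEOUT_MS 1500   /* roughly 3 missed beats at 2 Hz */

struct watchdog {
    uint32_t last_heartbeat_ms;
    int      terminated;
};

/* Called whenever a valid "everything is fine" message arrives
   from the main autopilot CPU. */
void watchdog_heartbeat(struct watchdog *w, uint32_t now_ms)
{
    w->last_heartbeat_ms = now_ms;
}

/* Polled regularly on the separate fail-safe CPU; returns 1 once the
   plane should be terminated. Termination latches: a late heartbeat
   cannot un-terminate. */
int watchdog_check(struct watchdog *w, uint32_t now_ms)
{
    if (!w->terminated &&
        now_ms - w->last_heartbeat_ms > HEARTBEAT_TIMEOUT_MS)
        w->terminated = 1;
    return w->terminated;
}
```

In the real system the "long list of things" (geofence breach, loss of link, and so on) would each feed the same latched decision; the heartbeat is just the simplest of those conditions.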
I have seen people's log files where they've been flying with the AFS terminate flag turned on. They must have thought, oh, that sounds like fun, and not actually configured the rest of it to say under what conditions it should terminate. Exactly, exactly. All right. OK, I think we're about done. Thank you very much.