We're here today to talk about our new product, which is called LSAP: LiDAR Synchronous Acquisition and Processing. In shorter words, this is just in-flight processing of the LAS data. So today we're going to talk about the main features of what LSAP does, a description of the LiDAR sensors, a description of the vehicles used, some demo projects, and the accuracy assessment. That's the first question everybody usually asks me: compared to post-processed kinematic, how does this measure up? I'll show you. OK, one of the basic features is that we use L-band satellite service for differential correction. There are several vendors that do that, and they're all equally good as far as we can see. Basically, you get centimeter-level positioning in X, Y, and Z almost anywhere on the globe, except for the extreme latitudes. What LSAP does is create a trajectory using these L-band corrections, and from that we generate a point cloud that's RGB-attributed at that time. The data is captured in flight lines as they're flown, so you don't get data in the turns, the takeoffs, or the landings. We do an adjustment to the flight lines at the end of the mission, before you land, and that takes care of small elevation disparities and also makes some slight trajectory adjustments. Then, if you care to, we let you edge-match your data so that it's not so noisy in the overlap areas; a lot of projects want that. We can also convert or transform to any coordinate system that's either in EPSG or a local system that you've generated the seven parameters for. And as a safety precaution, we gather all the data you would need if you wanted to run post-processed kinematic (PPK). So if there's some disaster for any reason, you can always post-process it like you've always done, no worries.
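The "seven parameters" mentioned for local coordinate systems refers to the standard Helmert transformation (three translations, three rotations, one scale). A minimal sketch of applying it is below; the function name, the arc-second/ppm units, and the small-angle coordinate-frame rotation convention are my assumptions for illustration, not LSAP's actual implementation (conventions differ between vendors, so signs must be checked against the parameter source).

```python
import numpy as np

def helmert_7param(xyz, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Apply a seven-parameter (Helmert) transformation to ECEF coordinates.

    xyz        : (N, 3) array of coordinates in meters
    tx, ty, tz : translations in meters
    rx, ry, rz : small rotation angles in arc-seconds
    scale_ppm  : scale difference in parts per million
    """
    arcsec = np.pi / (180.0 * 3600.0)          # arc-seconds -> radians
    rx, ry, rz = rx * arcsec, ry * arcsec, rz * arcsec
    s = 1.0 + scale_ppm * 1e-6
    # Small-angle rotation matrix (coordinate-frame convention)
    R = np.array([[1.0,  rz, -ry],
                  [-rz, 1.0,  rx],
                  [ ry, -rx, 1.0]])
    t = np.array([tx, ty, tz])
    return s * xyz @ R.T + t
```

With all seven parameters zero this is the identity, which is a quick sanity check when wiring in real datum parameters.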
So this first project we're going to look at is in... oh, I should tell you, we're a small company, but we have offices in Switzerland, of course, and also China, Thailand, and the US. In this particular session I'm going to talk about projects that we captured in China, because there we're able to fly higher and beyond line of sight, those kinds of things. We'll be giving a webinar in a few months about some projects we're capturing in the US. So, quickly: we're getting 400 kilohertz laser data. Forward overlap is 80%, side lap is 60%. We're at 200 meters and 8 meters a second. We're using a typical VUX-1 long-range sensor, with specs that you will likely know. We're using a multi-copter here that's actually quite nice: it's got an empty weight of about six kilos and a payload of 12 kilos, and with our payload it can fly about an hour. OK, so here are some basic shots. Here we are taking off; to the right is the LSAP interface, which gives you GPS and INS information. You can set range gate, angle gate, those kinds of things. We also have an onboard, calibrated 42-megapixel camera, and we use that to attribute the RGB data; that's what's on the right side. And then here's a guy capturing some RTK control before the flight from the customer. On the laptop on the left you can see the output that's available at landing; I'll show you more later. Basically, when you land, you plug in a cable, connect to the onboard computer, and this is what you have. As you can see, there are several flight strips; this one is five flight strips that go in this direction. So there's the RGB data that's already done, as I mentioned, and you also get a digital surface model that's already corrected between strips, as I said. Quickly, on this project there were some control points that the customer measured, as you saw a minute ago, and we did an analysis of that after we were done.
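As a back-of-the-envelope check on flight parameters like those (400 kHz, 200 m, 8 m/s), single-pass point density follows from pulse rate, speed, and swath width, boosted by the strip overlap. A rough sketch is below; the scanner field of view is an assumed value for illustration, since it isn't quoted in the talk.

```python
import math

def point_density(pulse_rate_hz, altitude_m, speed_mps, fov_deg, side_lap):
    """Rough single-sensor LiDAR point density in points per square meter.

    Swath width follows from flying height and scanner field of view;
    density is pulses emitted per second divided by ground area covered
    per second, then boosted by the side lap between adjacent strips.
    """
    swath_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))
    single_pass = pulse_rate_hz / (swath_m * speed_mps)
    # A side lap of L means each spot is covered by about 1 / (1 - L) strips.
    return single_pass / (1.0 - side_lap)
```

With an assumed 60-degree field of view, 400 kHz at 200 m and 8 m/s with 60% side lap works out to a few hundred points per square meter, which is the regime these UAV surveys operate in.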
And basically we have about a 3-centimeter RMS, as you can see. We can also take that data and process it further: we can make a digital orthophoto, and we can classify with other software, TerraScan or whatever you might have or want. And then here's what that flight looks like. I was wrong, there are four flight lines on this one, and you can see the IMU initialization at the start in that figure eight. And we keep wanting to say: the final data from LSAP is just as good as post-processed kinematic. We got another demo in a different part of China. The laser is going a lot faster, about 600 kilohertz here. We're at 150 meters and eight meters a second; it's about 17 meters for this guy. We used the same UAV. This is just a park, and that's why it's all fancy and stuff. But this is six flight lines, same process as before: edge-matched, adjusted, and everything. Here's our colorized point cloud and our digital surface model. We don't have any control on this one, so we can only compare the LSAP solution to the post-processed kinematic that we did, and it's at centimeter level as well. Here's what the flight path looks like on that one, same IMU initialization. Also, if you see the photo centers on here: we're capturing all the data, as I said, all the data on the entire flight path, but that is not used in the LSAP solution. We only need the flight-line data. OK, here's another one that's interesting. This is a fixed-wing, vertical-takeoff aircraft. It's really fast; it's supposed to go about 30 meters a second. On this particular day it was really windy, so we were going from 25 to 50 meters per second: 25 in one direction, 50 in the other. I just wanted to see if our algorithm would still work. Same sensor, by the way. And it does, no trouble. It doesn't matter about speed; it doesn't matter about elevation. And I want to spend a little time on this next one. This is our original calibration site.
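The control-point analysis behind a number like "3 centimeter RMS" is a standard check: sample the point cloud or DSM at each surveyed location and compute the root mean square of the differences. A minimal sketch, with function and array names of my own choosing:

```python
import numpy as np

def checkpoint_rmse(lidar_z, control_z):
    """RMS error of LiDAR-derived elevations against surveyed control.

    lidar_z   : elevations sampled from the point cloud / DSM at each
                control-point location
    control_z : the RTK-surveyed elevations at the same points
    """
    diff = np.asarray(lidar_z, dtype=float) - np.asarray(control_z, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```

The same differencing, done per axis, also gives the standard deviations quoted later for the calibration-site comparison.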
And the idea is that we build everything and mount everything in a robust manner; nothing ever moves. In order for us to do this in real time, we have to know the laser calibration, we have to know the camera calibration, and also the boresights for those two sensors. If we don't, this is impossible. So to do that, we set up a calibration site on a little island and gathered all these control points. Basically, you can see two flight lines go this way and two flight lines go that way. What we did is fly all four of those in both directions, so we get eight flight lines. That means for the data in the middle, in this box here, every area, every object, shows up in eight images and also in eight laser flight lines. That allows us to do triangulation, so to speak, and make all these things as tight as possible. This works really well. And here's the proof of the pudding, so to speak. If you look on the left, these are 30 control points in post-processed kinematic; here they are in LSAP. You can compare point to point if you like, but the end result is that the standard deviation in post-processed kinematic is 2.7 centimeters, and in LSAP it's 2.9 centimeters. The root mean square is similar to that. Now, in laser data it's very difficult to determine the XY precision of the data. So what we do is take the trajectories and put them into photogrammetry software. If your system is properly calibrated, most of the errors that you will see are trajectory-related. So basically what we're doing here is comparing the trajectory positions from LSAP to what the photogrammetry software, Agisoft for example, calculated. And here is the result. Maybe you can't see it, but on the bottom row is LSAP raw.
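That trajectory comparison amounts to differencing the two position series and summarizing the residuals per axis. A minimal sketch, under the assumption that both trajectories have already been matched to common epochs (the function and variable names are mine):

```python
import numpy as np

def trajectory_residuals(lsap_xyz, photo_xyz):
    """Compare two trajectory solutions sampled at the same epochs.

    lsap_xyz  : (N, 3) camera positions from the real-time LSAP trajectory
    photo_xyz : (N, 3) positions estimated by the photogrammetric bundle
                adjustment (e.g. exported from Agisoft)

    Returns per-axis standard deviation and RMS of the differences,
    in the units of the inputs (meters).
    """
    d = np.asarray(lsap_xyz, dtype=float) - np.asarray(photo_xyz, dtype=float)
    std = d.std(axis=0)
    rms = np.sqrt((d ** 2).mean(axis=0))
    return std, rms
```

Note that a constant boresight or datum offset shows up in the RMS but not in the standard deviation, which is why both statistics are worth reporting.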
And the difference between post-processing and LSAP is in the third decimal position, about 5 millimeters, so it's really quite small. If you look, even as we go down here, we're still sub-centimeter: millimeter level in position, and in the third decimal position in the rotations. So, I want to say, the statistical difference between those two is not significant whatsoever. This is as good as it gets. So in summary, what you get at landing, as I said, is RGB-attributed data that's flight-line adjusted. This can be used with any type of platform, as I've shown. There's little or no training needed to do this. I mean, there's some technical information you need up front, like what coordinate system you need, those kinds of things, but all the rest is automated. Anybody can do this; a child could do this. We can output any coordinate system that's in EPSG, or any coordinate system that you can define, if you have some kind of seven-parameter transform or some kind of offset to something we already know, anything like that. And last but not least, the accuracy difference between LSAP and post-processed kinematic is insignificant.