My name is Tristan Goulden. I'm a remote sensing scientist with the Airborne Observation Platform, and I'm going to give a talk this afternoon introducing LiDAR, primarily discrete LiDAR, because Keith Krause is going to follow me talking about waveform LiDAR. So LiDAR is an active remote sensing system, which means that it has its own energy source, and the main subsystem of a LiDAR is the laser. We use the laser to generate a pulse of energy that comes out of the sensor, which points out of the bottom of the aircraft, travels down to the ground, reflects off targets on the ground, and then returns to the aircraft. We're able to measure the time it takes for that laser pulse to go down, reflect, and come back, and based on that two-way travel time we can calculate a range from the aircraft down to the ground. We also have a GPS sensor on the roof of the aircraft to get the aircraft's position, an inertial measurement unit inside the aircraft to get the orientation (the roll, pitch, and yaw of the aircraft), and then a scanning mirror, which directs the laser pulses within a swath beneath the aircraft. When you combine all of these subsystems together, you can actually coordinate points on the ground based on all of the observations from these subsystems. What makes LiDAR really unique is that it's able to achieve a really accurate and dense raw sample of the terrain. It's able to do that mostly thanks to the laser ranger, and today's rangers can operate at about 500 kilohertz, which means the system is capable of sending out 500,000 pulses per second. Each pulse is capable of getting multiple returns: it's possible that some of the energy is going to reflect off the top of the vegetation.
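The two-way travel time relation described above can be sketched in a couple of lines; the travel time used here is just an illustrative number, not a value from the talk.

```python
# Range from two-way (round-trip) laser travel time.
C = 299_792_458.0  # speed of light in vacuum, m/s

def pulse_range(two_way_time_s: float) -> float:
    """Sensor-to-target range: half the round-trip distance."""
    return C * two_way_time_s / 2.0

# A return arriving ~6.67 microseconds after emission is ~1000 m away,
# roughly the flying height mentioned later in the talk.
print(pulse_range(6.67e-6))
```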
Then the energy will continue down through the vegetation and might reflect off the understory, and then hopefully some will make it down to the ground and reflect off the ground, so we can get multiple returns from each pulse. That means that, for sending out 500,000 pulses per second, we're able to get a multiple of that many points every second. So it's an incredible amount of data. I just want to briefly introduce the difference between discrete and full-waveform LiDAR. I won't get into a lot of detail because Keith is going to talk about that in a minute, but basically there are two flavors of observations we get from the LiDAR. Discrete gives us points only: when we get that reflection off of an object, we get the return range and calculate just a single coordinate, so from the multiple returns we could get three or four individual three-dimensional coordinates. With the Optech Gemini, when that signal comes back it's split, and part of the signal goes to a waveform digitizer, which records the entire return energy signature. This energy signature includes the outgoing pulse here, then some time passes, and then as we're going through vegetation we get these bumps here, where we're getting additional energy returns from the objects. In the discrete LiDAR we're going to cut off the timing at each one of these bumps, potentially here, here, and here, giving us three individual points. But with the waveform LiDAR we get this full return signal, and you're able to do more advanced analysis of the structure of the vegetation with that signal. As for the NEON LiDAR, currently we're operating two Optech Gemini systems, which are slightly older technology; we purchased these in 2011 and 2012. In the future we'll also be doing surveys with the Riegl Q780, which is a more contemporary instrument. Right now we run our Optechs at a pulse repetition frequency of 100 kilohertz.
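The relationship between the full waveform and the discrete returns can be illustrated with a toy peak detector over a synthetic return signature; the real digitizer and triggering electronics are far more sophisticated, and the waveform below is entirely made up.

```python
import numpy as np

def discrete_returns(waveform, threshold):
    """Indices of local maxima above a threshold: a crude stand-in for
    how discrete ranges are triggered from the return energy signature."""
    idx = []
    for i in range(1, len(waveform) - 1):
        if (waveform[i] >= threshold
                and waveform[i] > waveform[i - 1]
                and waveform[i] >= waveform[i + 1]):
            idx.append(i)
    return idx

# Synthetic signature with three energy bumps: canopy top, understory, ground.
t = np.arange(200)
wf = (np.exp(-(t - 50) ** 2 / 18)
      + 0.6 * np.exp(-(t - 90) ** 2 / 18)
      + 0.9 * np.exp(-(t - 150) ** 2 / 18))

print(discrete_returns(wf, 0.3))  # three "cut-off" times -> three points
```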
We chose this frequency because it's the highest we can go and still maintain the accuracy that we want. The system is capable of going up to 167 kilohertz, but any higher than 100 and there's a large degradation in accuracy. We fly at 1,000 meters with 30% overlap in our flight lines, and that gives us two to four pulses per square meter: in the overlapped areas we get four pulses per square meter, and in the non-overlapped areas we get two. The system is capable of recording up to four returns per pulse on the discrete LiDAR, so theoretically we could achieve 400,000 points per second, but generally you'll never get four returns on every pulse. In order to position all of the LiDAR data, we also have to determine our trajectory; this is the information that the GPS and IMU collect. As part of that, we set up GPS base stations in the local vicinity of our survey areas. I think somebody asked about this yesterday. Generally we try to exploit the CORS network as much as we can; these are GPS base stations set out across the United States and run by state or federal governments. They're stationary GPS sites with really accurate coordinates, and we use them to differentially correct the GPS trajectory. If we go to a site and the distribution of CORS stations doesn't give us less than a 20 kilometer baseline between the reference station and the aircraft, then we go ahead and set up our own GPS base station to correct the airborne trajectory. So for the most part, unless we're transiting from the airport to the site, we'll never have base stations that are more than 20 kilometers from the aircraft. We do this because we're aiming for errors in the GPS trajectory of between five and eight centimeters.
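The pulse-density numbers above follow from the pulse rate, the aircraft speed, and the swath width. The talk gives the 100 kHz rate and (later) a 50 m/s ground speed; the swath width below is an assumed value chosen to reproduce the quoted 2 pulses per square meter, not a figure from the talk.

```python
def pulse_density(prf_hz, ground_speed_mps, swath_width_m):
    """Average pulses per square metre for a single flight line:
    pulses emitted per second divided by the ground area swept per second."""
    return prf_hz / (ground_speed_mps * swath_width_m)

# Assumed: 100 kHz PRF, 50 m/s ground speed, ~1000 m swath from 1000 m AGL.
single_pass = pulse_density(100_000, 50, 1000)  # pulses per m^2, one line
overlap_pass = 2 * single_pass                  # where two lines overlap

print(single_pass, overlap_pass)  # 2.0 and 4.0, matching the talk's figures
```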
In order to do that, we really need those local GPS base stations close to the airborne trajectory. We then try to achieve errors of 0.005 degrees in pitch and roll and 0.008 degrees in yaw. This is just a picture of the IMU that's located inside the aircraft, which gets the roll, pitch, and yaw, as well as the GPS antenna. One of the reasons we're able to get these really accurate trajectories is that GPS and IMU are really complementary technologies. The IMU is able to achieve really fast positioning, but it's prone to drift over time, whereas GPS gives us a really good position every second or so, but we can't get a position in between those GPS observations. The GPS operates at 1 hertz and the plane travels at 50 meters per second, which means we're only getting one GPS observation every 50 meters, and a lot can happen to the plane in 50 meters. That's where the IMU takes over. It operates at 200 hertz, so it takes care of the positioning in between two GPS observations, and we get a good position 200 times per second. I should mention that as you go between two GPS observations the IMU is prone to drift, but it gets corrected every time we reach a new GPS observation, so really it only needs to do its positioning for one second. And this is just an example of some results for a trajectory, done at the Smithsonian Environmental Research Center. In the upper left-hand side you can see the software that we use to process the trajectory, and then the resulting trajectory in Google Earth, where you can really see each of the flight lines we flew going up and down across the site. We also worked out statistics for all of our trajectories from our 2016 flights, and we'll probably do the same for the 2017 flights, just so we can get an impression of the quality we're getting on those trajectories.
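The 1 Hz versus 200 Hz point can be made concrete with a toy one-dimensional sketch. Real trajectory processing integrates IMU accelerations and angular rates (typically in a Kalman filter); this only shows the 200 epochs the IMU is responsible for between two GPS fixes 50 m apart.

```python
import numpy as np

def densify(gps_fix_a, gps_fix_b, imu_rate_hz=200):
    """Positions at the IMU rate between two consecutive 1 Hz GPS fixes.
    A stand-in for IMU dead reckoning: drift is irrelevant here because
    the solution is pulled back to truth at the next fix anyway."""
    return np.linspace(gps_fix_a, gps_fix_b, imu_rate_hz, endpoint=False)

pos = densify(0.0, 50.0)          # plane covers 50 m between fixes
print(len(pos), pos[1] - pos[0])  # 200 epochs, one every 0.25 m
```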
So you can see that generally we try to keep our roll below 20 degrees so that we maintain lock on all the satellites, and for the most part our roll was always between 20 and negative 20. Generally we always had above six satellites, but more like eight or nine. The PDOP was generally below four, which is quite good. And then this is the distance to the nearest base station: you can see that most of the time we're below 20 kilometers. There are some times where we get up a little bit higher, but that's generally during transits between the site and the local airport. Once we have that trajectory, we're able to mix it with the range and scanning information collected by the LiDAR sensor to produce a point cloud. This is an example of our L1 product, which is point clouds produced in LAS format; LAS is a standard binary format for the exchange of LiDAR point clouds. This example is from the San Joaquin Experimental Range. You can see all the individual points that were collected by the LiDAR here, and you can even make out the structure of the vegetation from those individual points. So that's just our L1 product. All the L3 products that we produce are rasters as opposed to point clouds: instead of all those individual coordinates, we have a grid of points, so we actually have to convert those points into a raster product. You can imagine that if we observe this area of land with the LiDAR, we might get a sampling like this of all the LiDAR points, but what we really want in order to create our raster product is, say, the elevation at each one of these grid nodes. Basically we have all the points overlaying where we want those grid nodes, so what we do is look in the area surrounding a particular grid node and use an interpolation method to calculate what the elevation of that grid node might be. Of course we can create that at any size, and here at NEON we create these rasters at one meter resolution.
There are lots of different interpolation methods that you could use, and I encourage you to go out and research the different ones that are available, but at NEON we use what's called a triangulated irregular network (TIN), which basically means we're creating linear connections between all the points and forming triangles between them. If you think about it, you can lay that grid underneath this triangulated irregular network that's connecting all those points, then interpolate the elevation from the plane of the triangle that overlaps each raster cell and assign that elevation to the cell. That's how we get the elevation from the raw point cloud: all the points here are our observations, we interpolate in between them, pull the elevation from the triangular plane, and assign that elevation to the raster grid. One of the advantages of this is that it honors the location of the true data points; you're never interpolating down or filtering a lot of observations and creating a new elevation from what you observed. And it's computationally efficient, which is the main reason we use it: when we're producing a lot of data in an automated fashion, we want a really computationally efficient algorithm. The main downfall of the triangulated irregular network is that it doesn't exploit redundancy in the LiDAR data to improve your accuracy. You can imagine that if you had multiple points within a single grid cell, it's not averaging those to reduce the error; you're pretty much just getting the elevation from the point that's closest to the center. We'll talk more about that later in the week during the lessons. So, as I mentioned, these are the main products that come from the LiDAR. You have the unclassified point cloud, which is an L1 product, as well as the classified point cloud, also an L1 product, and then the L3 products, which are the rasters.
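The TIN-to-raster step can be sketched with SciPy, whose `LinearNDInterpolator` builds a Delaunay triangulation of the scattered points and evaluates the plane of the enclosing triangle at each query location. The point coordinates and elevations below are made up; this is not NEON's production code.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Scattered LiDAR-style ground points: (x, y) locations and elevations z.
pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], dtype=float)
z = np.array([100.0, 102.0, 101.0, 103.0, 105.0])

# Build the TIN and evaluate the triangle planes at the grid nodes.
tin = LinearNDInterpolator(pts, z)

# 1 m grid nodes laid "underneath" the triangulation.
gx, gy = np.meshgrid(np.arange(0, 11), np.arange(0, 11))
dem = tin(gx, gy)

print(dem.shape)  # an 11 x 11 elevation raster
```

Note that a grid node falling exactly on a data point gets that point's elevation back, which is the "honors the true data point" property mentioned above.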
So you get a digital terrain model, digital surface model, slope and aspect, canopy height model, and then the slant-range waveform, which we'll keep to the end of the list. I'm just going to briefly go through each one of these products. You've seen this picture already: it's the L1 product, in LAS 1.3 format, available by flight line. Then we also have a classified point cloud. We get this out in tiles, one kilometer by one kilometer, and we perform the classification with a commercial software package called LAStools, which basically goes through, looks at the geometry of the points, and determines whether the points are ground or vegetation based on their structure and how they're oriented to one another. We then further classify those into building, noise, and unclassified points. We also colorize the point cloud: we take the high-resolution camera imagery and apply the RGB colors to the point that overlaps that image, so you can get a full-color 3D model of each of our sites with these. So this is part of the classified point cloud. Next we have the digital surface model, which is one of our L3 raster products. Basically we just use that triangulated irregular network interpolation algorithm with all of the points, so this is vegetation, buildings, all the points included; we're interpolating between all of those and getting that raster elevation, again created at one meter spatial resolution. And then we create the digital terrain model. As you saw in the classification a couple of slides ago, we remove all of the points that are classified as vegetation and then interpolate just the ground points, and that gives us just the ground surface.
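With both the surface model (all points) and the terrain model (ground points only) in hand, a canopy height model is commonly derived as their difference. This is just the standard baseline idea, not NEON's exact pipeline, which handles data pits with a dedicated algorithm; the raster values are made up.

```python
import numpy as np

# Toy 1 m rasters: DSM (tops of canopy/buildings) and DTM (bare earth).
dsm = np.array([[105.0, 112.0],
                [103.0, 108.0]])
dtm = np.array([[100.0, 101.0],
                [100.0, 100.5]])

# Canopy height above ground, clipped at zero so interpolation noise
# can't produce negative heights.
chm = np.clip(dsm - dtm, 0.0, None)

print(chm)  # heights above ground in metres
```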
And I think I mentioned on the first day that LiDAR is really one of the only technologies that's able to do this: to classify those ground versus vegetation points and then keep just the ground, so you get an idea of what the surface looks like. This is just a little animation that shows the difference from the DSM when you remove those vegetation points. We're also creating slope and aspect rasters from the digital terrain model; these are also L3 products. The slope is measured in degrees and is basically just the slope of the terrain, while the aspect is the direction of the steepest slope, also measured in degrees, between 0 and 360. Both of these come from the Horn algorithm, which is the same algorithm used in a lot of popular remote sensing packages, like ESRI's and QGIS, to calculate slope and aspect. These are also produced at one meter resolution and delivered in one kilometer by one kilometer tiles. And then finally we have the canopy height model, also an L3 product. A common issue in creating canopy height models from LiDAR is that you get data pits: areas that go all the way down to the ground in the center of the canopy, which biases the estimates you might be gathering from the canopy height model. So we use an algorithm from this paper here that takes care of those data pits, and if you want more information on that you can go here. And then the final product is the full-waveform LiDAR product. You can see that these were points taken over a canopy height model, and if you look at one of these individual points, you have this outgoing waveform here, then a whole bunch of time passes, a whole bunch of time in LiDAR, which is like 300 nanoseconds (actually it's 6,000; I was thinking of the one-way time), and then you get this return pulse here. And so Keith is going to give us a presentation on waveform LiDAR.
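The Horn slope calculation mentioned above uses a 3x3 window of elevations around each cell. Here is a minimal sketch for a single window, using Horn's finite-difference stencil; the example window is an invented inclined plane, and a production version would of course sweep this over the whole raster and also compute aspect from the same two gradients.

```python
import numpy as np

def horn_slope(z, cell=1.0):
    """Slope in degrees at the centre of a 3x3 elevation window,
    using Horn's weighted finite differences."""
    a, b, c = z[0]
    d, _, f = z[1]
    g, h, i = z[2]
    dzdx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8.0 * cell)
    dzdy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8.0 * cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# A plane rising 0.5 m per metre in x: slope should be atan(0.5) ~ 26.57 deg.
window = np.array([[0.0, 0.5, 1.0]] * 3)
print(round(horn_slope(window), 2))
```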