Okay, hello everybody. I'm Keith Kraus, also with the Airborne Observation Platform team, and I'm going to take what Tristan did and go a little more advanced. I'm going to keep this presentation a little shorter than the number of slides would suggest; fortunately, Tristan already covered several of my slides. I'm going to introduce a little more lidar theory, because that context will hopefully make sense of why our waveform data looks the way it does, and I'll also talk a little more about the questions you asked earlier on target detection and that sort of thing, since some of that comes up again with waveform.

Tristan already showed you that with discrete lidar you're essentially finding returns, or objects, and getting geolocations: X, Y, Z on a map, intensity, and other attributes. And that's great. But the hope with full waveform lidar is that, because you're actually measuring the entire signal as a function of time, you can do more with the data, and we'll talk more about that. One of the challenges in the past is that full waveform lidar data just hasn't been available to people. There's a handful of groups working with it right now, but you don't typically see lots of papers or presentations on the subject, and we're hoping to change that. At the moment we have waveform lidar products from some of our 2013 and 2014 flights, and those are available by request. Unfortunately, we weren't able to collect any waveform data last year due to some instrument hardware issues, but we've been collecting it this year, in 2016, and we're currently processing that data, so it should become available hopefully in the next couple of weeks. As I mentioned, we hope more people get involved with waveform lidar.

These are just more graphical representations of what you've already seen from Tristan, but the big thing to keep in mind for waveform lidar, and for what I'm going to talk about, is that you have this outgoing laser pulse, some time goes by, and then you record some reflection of light as a function of time. We're going to keep coming back to this change in time.
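To make the discrete-versus-waveform contrast concrete, here's a minimal sketch of what the two kinds of records might carry. These dataclasses and field names are hypothetical, purely for illustration; they are not NEON's actual product format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DiscreteReturn:
    # One detected target: onboard processing has already reduced the
    # signal to a geolocated point plus a few attributes.
    x: float
    y: float
    z: float
    intensity: float

@dataclass
class WaveformRecord:
    # The full time history for one laser shot: amplitude samples on a
    # fixed time grid, from which targets can be found in post-processing.
    t0_ns: float          # time of first sample, relative to the outgoing pulse
    dt_ns: float          # sample spacing (e.g., 1 ns)
    samples: np.ndarray   # digitized return amplitudes
```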
This next figure is just another form of one Tristan showed, from Texas A&M and Dr. Sorin Popescu. We're going to zoom into this plot a little more in a second, but once again, remember it's a beam interacting with objects as a function of time, and keep in mind that with lidar, time is distance and distance is time. So you have discrete return and full waveform. With discrete return there's usually onboard processing that looks at the signal in real time, tries to do target detection, and then does the ranging. In Sorin's figure he gets at the idea that, depending on what algorithm or hardware is used, there can be a period of time after the system detects an object where it has to reset itself, so it can actually miss things. The nice thing about full waveform is that you capture the entire signal as a function of time, so hopefully with post-processing you can go in and get more detail, though as you'll see in a minute there are some complications too. The hope is that, looking at these waveforms, just like with discrete data, you could start to imagine, based on the tree structure, that you might have overstory, some understory, and maybe the ground, and you can start thinking about stratification of vegetation or other objects.

I'm not going to spend too much time here, but the general process of lidar is: you fire your laser, you record your signal, and you do some sort of target detection. Once you've identified a target, you look at the change in time between the outgoing pulse and the received pulse and do some calculations that convert that time of flight into a range. Then, from the range, with your GPS and IMU and the direction the scan mirror was pointed, you get your coordinates. So just like discrete return points, full waveforms can have geolocation too, and you'll see that the product we deliver includes geolocation information.

In general, ranging follows the basic speed-of-light calculation that's been around for a couple hundred years. In this case we know the speed of light, and we measure the change in time between the outgoing and return pulse. Remember the light has to travel out and then come back, so the distance corresponds to half of that time, and then of course there's the index of refraction of air, because the laser light travels a little slower in air than it would in a vacuum. That gives you the absolute range.

You might also hear the term range resolution, and people define it differently, but as Tristan mentioned, when objects get too close to each other you can't resolve them anymore, and I'll show a figure of that. Essentially it's driven by the outgoing pulse shape. These laser pulses don't jump up to a peak signal infinitesimally fast; it takes time to ramp up, fire the laser, and ramp back down, and that shape causes blurring, which is why you can't detect objects that are too close together.

There are several different algorithms for doing the ranging, and different manufacturers use their own proprietary ones; I'm just going to show one of the really simple approaches. Imagine you have your outgoing laser pulse, then some time goes by and it reflects off, in this case, probably the ground, since you just get a single peak. We find the peak, and then we figure out where the 50% energy point is on the rising edge; this is called leading-edge detection.
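Here's a minimal sketch of that in Python, assuming 1-ns sample bins, a 50%-of-peak crossing, and rounded physical constants; real systems use their own proprietary detectors, so treat the function names and parameters as illustrative only.

```python
import numpy as np

C = 299_792_458.0    # speed of light in a vacuum (m/s)
N_AIR = 1.000293     # assumed refractive index of air

def leading_edge_ns(waveform: np.ndarray, dt_ns: float = 1.0,
                    fraction: float = 0.5) -> float:
    """Time (ns) where the signal first crosses `fraction` of the peak
    amplitude on the rising edge -- simple leading-edge detection."""
    peak = int(np.argmax(waveform))
    threshold = fraction * waveform[peak]
    i = peak
    while i > 0 and waveform[i - 1] >= threshold:
        i -= 1                # walk left from the peak to the crossing
    if i == 0:
        return 0.0
    # linear interpolation between the two bracketing samples
    frac = (threshold - waveform[i - 1]) / (waveform[i] - waveform[i - 1])
    return (i - 1 + frac) * dt_ns

def range_m(outgoing: np.ndarray, received: np.ndarray) -> float:
    """One-way range from the leading edges of the two pulses; the factor
    of 2 accounts for the light traveling out and back."""
    dt_s = (leading_edge_ns(received) - leading_edge_ns(outgoing)) * 1e-9
    return C * dt_s / (2.0 * N_AIR)
```

With these rounded constants, a delay of about 6,500 ns works out to roughly 975 m, in the same ballpark as the 983 m quoted next.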
Leading-edge detection is used here mostly because, if you look at the shape of this outgoing pulse, it's actually pushed a little toward the right side, so it's not perfectly Gaussian; you have a sharper rise than fall. The other piece is that this is the ground, so it's pretty simple, but if the pulse is interacting with the canopy, you can imagine that left edge is the top of the canopy, and that might be exactly where you want to range to. One other thing to note: the time between the outgoing pulse and the return pulse ends up being about 6,500 nanoseconds, and when you do all the conversion, that comes out to about 983 meters in this case. So you can imagine, if we're trying to fly at about a thousand meters above the ground with some terrain variation, that's where the 983 comes from.

This may address your earlier question a little: within a single waveform you might get multiple peaks. In this case you could identify three objects, each with a leading edge, so in the discrete return you could identify three targets. And if you just looked at the relative time differences between them, maybe you could say this one is the ground and this one is the canopy top, and in this case the canopy would be 14 meters tall. So you can start to see one way you might analyze waveform data: rather than building a canopy height model on a raster grid, you might identify canopies and ground within a single laser pulse and start making distance measurements that way.

A little more on range resolution and target separation; this hopefully illustrates what Tristan talked about. Here I've done a simulation using a 10-nanosecond outgoing pulse, which is typical of the Optech system, I think, at 70 kilohertz; at 100 kilohertz the pulse might be a little wider, so it would actually blur more. You can see that if you have a 10-nanosecond-wide Gaussian and you take two ideal targets and put them 40 nanoseconds apart, you clearly see two peaks, and that's easy. If you move them closer, the signal starts to blend in the middle, but you can still identify them, and even here, no problem. But if you separate them by exactly one full width at half maximum (FWHM), you and I still see a sort of double peak, but a lot of algorithms will have a hard time determining exactly where those two peaks are, and might still say there's just one. Below the FWHM, you still had two targets in the original, but the signal sums into a single shape, so at that point you've effectively lost the ability to say there are definitely two objects there; it could just be one object that was brighter. As you go even further, it's the same kind of thing. And if we put some actual Gaussians on this, at least in this case, a really sensitive algorithm might say, "I only have one object, but it's not a perfect Gaussian, so maybe there's something else there."
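That blending is easy to reproduce by summing two Gaussian returns at decreasing separations; a small sketch, assuming a 10-ns-FWHM pulse on a 1-ns grid (the separations and amplitudes are just illustrative):

```python
import numpy as np

FWHM_NS = 10.0
SIGMA = FWHM_NS / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # ~4.25 ns

def pulse(t, center_ns, amplitude=1.0):
    """Gaussian return from one ideal point target."""
    return amplitude * np.exp(-0.5 * ((t - center_ns) / SIGMA) ** 2)

t = np.arange(0.0, 120.0, 1.0)          # 1-ns bins, like the digitizer
for sep in (40.0, 20.0, 10.0, 5.0):     # target separations in ns
    combined = pulse(t, 50.0) + pulse(t, 50.0 + sep)
    # count local maxima of the summed signal
    n_peaks = int(np.sum((combined[1:-1] >= combined[:-2]) &
                         (combined[1:-1] > combined[2:])))
    print(f"separation {sep:4.1f} ns -> {n_peaks} resolvable peak(s)")
```

At 40 and 20 ns the two peaks are obvious; at one FWHM (10 ns) only a shallow double peak survives; at half the FWHM (5 ns) the sum is a single, brighter shape.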
But at this point, at half the full width at half maximum, you'd probably have no way of knowing there are two objects. That's the idea of range resolution. You can imagine different branches in a tree: if they're too close together, their signals just sum up and it looks like one big branch.

I'm not going to talk too much about this, beyond a figure to explain it, but one of the challenges with all these systems is being able to write the data to disk fast enough to keep up. As a comparison, the hyperspectral data is a 640-by-480 array running at a hundred lines per second, and that's effectively equivalent to the data rate the lidar runs at at a hundred kilohertz if we saved out 310 time bins per pulse. The difference is that the spectrometer has a fancy computer that I think writes to four hard drives simultaneously, whereas the lidar, I think, has a single hard drive. So there are games you have to play to make sure you're saving the data out fast enough, or else the laser keeps firing and you just miss everything.

As an example, you might love to save the entire data space, from when you fire the outgoing laser all the way through the air, down to the ground, and back, but unfortunately that would be over 6,000 bins of data. At a hundred kilohertz, which is our nominal PRF, and with 8-bit data, say (most of the newer systems are running higher than that, like 16 bits), you'd need to write out at about 5 gigabits per second. The other day I copied some data from a hard drive and it ran at something like 30 megabits per second, so you can see it's orders of magnitude off; you just can't save everything.

There are solutions to that, which we're going to talk about: multiple segments, where we don't save all the data. The challenges are that you have to set a threshold; set it too high and you'll miss stuff, set it too low and things like just haze in the atmosphere can trigger the lidar. And a lot of times the system limits how many bins you can save, so you might burn up that entire budget and never even get close to the ground.

This is an example of how the multiple-segment recording works, with a simulated waveform over here. Say we set a threshold at 50 DNs: anything above it would definitely trigger as a target, and we also buffer it a little bit, which is where these green lines are. But you can see we've totally missed this low-signal peak; when we save out the waveform data, it's just gone and we never knew it existed. And this is one more example, where I've moved that center feature over a little bit. You'll see this a lot in our waveforms, where you'd say, well, there must still be something here, but unfortunately that data is lost, and you can't recover it unless we were to go back, drop that threshold, and recreate the data.
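To make the trade-off concrete, here's a small sketch of how segmented recording might behave, assuming 1-ns bins; the threshold, buffer, and bin budget are hypothetical parameters, not the instrument's actual settings.

```python
import numpy as np

# Back-of-the-envelope: saving every bin at the nominal PRF is too fast to write.
PRF_HZ, FULL_BINS, BITS = 100_000, 6000, 8
print(f"full record: {PRF_HZ * FULL_BINS * BITS / 1e9:.1f} Gbit/s")   # ~4.8

def extract_segments(waveform: np.ndarray, threshold: float,
                     buffer_bins: int = 3, max_total_bins: int = 310):
    """Keep only (start, stop) windows where the signal exceeds `threshold`,
    padded by `buffer_bins` on each side. Anything that never crosses the
    threshold is simply not written to disk, so low peaks are lost for good."""
    segments, total = [], 0
    i, n = 0, len(waveform)
    while i < n:
        if waveform[i] > threshold:
            start = i
            while i < n and waveform[i] > threshold:
                i += 1
            lo, hi = max(0, start - buffer_bins), min(n, i + buffer_bins)
            if total + (hi - lo) > max_total_bins:
                break          # bin budget exhausted; later targets are dropped
            segments.append((lo, hi))
            total += hi - lo
        else:
            i += 1
    return segments
```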
Another thing you'll see is that this is digital data: we have to sample it onto some time grid, and we digitize to one nanosecond. Those range-resolution numbers still apply here: one nanosecond is about 15 centimeters, so we're saving the data in 15-centimeter range bins. Some of the ranging and target algorithms can do better than that, though, so you can get higher absolute precision. What happens is, if you have a wide outgoing pulse, here with the raw simulation and the digitized version, the digitized signal still looks pretty much like the original. But the newer systems are making those pulse widths really short, because that gives more 3D structure resolution, and when you put a short pulse onto a one-nanosecond grid you start to get funny triangles, flat tops, and other weird artifacts. So if you worked with just the raw samples you might run into errors, and a lot of processing algorithms will fit this with some sort of shape, maybe at higher resolution, that can recreate where the actual peaks are.

Then you'll see there's also noise. Some of it could just be electronic noise in the system, but in some cases, remember, even though the laser is at 1064 nanometers and the receiver is filtered to see only that wavelength, the sun is also reflecting off of trees at 1064 nanometers. That causes an overall bias that raises the signal, so it's something you might want to treat as an offset and do some relative scaling on.

Just really quickly on our product: right now it's a series of binary files. We have the outgoing pulses, we have the return waveforms, and we provide geolocation information: essentially I've geolocated what I think the first return is, but also provided the other information needed to transfer that to any other bin in the waveform. We also provide some observation information, which is viewing geometry and distances, and finally we provide some ephemeris information for the GPS and IMU. The hope is that even though we're doing the geolocation ourselves, if somebody really wanted to go back to the beginning and recreate it all, they'd have all the information they need to do the rigorous calculations. And then there are also some QC files, which are essentially point clouds derived from the waveforms, so we know whether the processing worked; if there were some big bug in my code, we'd see it where things don't map onto the ground correctly.

You can see here what the waveform product looks like: laser pulse number on the vertical, the outgoing pulses as their own data array, and the return waveforms as their own data array. If we grab one horizontal slice out of here, one laser pulse, you get a waveform with multiple peaks. And this is a good example where, if you were just looking at peaks, you'd say there are four, but my algorithm actually can't do the leading edge on this one, so it says, I know there's something here, but I don't know how to geolocate it, sorry. The other thing I want to use this slide for is to point out some of the power of the waveform: you can see these bumps sometimes showing up on the right side. That's light interacting with the canopy, photons doing multiple scattering, so they come back delayed a little bit.
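Coming back to the digitization point above: a common way around the 1-ns grid artifacts is to fit a continuous shape to the samples and read the peak position off the fit. A minimal sketch with SciPy, assuming a Gaussian pulse model plus a constant offset to absorb the solar-background bias; the model and starting guesses are illustrative, not our production algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, mu, sigma, offset):
    """Gaussian pulse sitting on a constant background level."""
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2) + offset

def refine_peak_ns(samples: np.ndarray, dt_ns: float = 1.0) -> float:
    """Fit the model to the digitized samples and return the peak time
    with sub-bin precision, i.e., finer than the 15 cm range bins."""
    t = np.arange(len(samples)) * dt_ns
    p0 = [samples.max() - samples.min(),      # amplitude guess
          float(np.argmax(samples)) * dt_ns,  # peak-location guess
          2.0,                                # width guess (ns)
          float(samples.min())]               # background-offset guess
    popt, _ = curve_fit(gaussian, t, samples.astype(float), p0=p0)
    return popt[1]                            # refined peak time (ns)
```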
So with waveform you might actually be able to take advantage of that multiple scattering, whereas all of that information is just thrown away in discrete lidar.

I'm going to skip over the geolocation details. You can see that different targets make different shapes, but it's not terribly straightforward; it's not as if a conifer tree will always look like this, so even though this is a nice example, the real world is never as tidy. Still, bare soil is pretty much a hard target, so its return waveform looks very much like the outgoing pulse shape. Deciduous trees might have more reflection off the top, whereas a conifer, with its cone shape, returns more photons later, from lower levels in the tree. In some cases, like a pine plantation where several of the trees have been cut down, a beam will hit part of a tree but also reach the ground, so you see a strong ground return along with some vegetation. Once again, on the power of waveform: here are four different plots of what might come out as only a single return in the discrete data, but you can see the shapes are very different from each other. The hope is that with waveform lidar this information can be extracted.

I'll show one quick example of what you might do with waveform data. I won't explain this too much, other than to say here's a raw waveform, which I've smoothed. There are algorithms called watershed segmentation; effectively they look for the peaks and separate them into different objects. If you imagine flipping the waveform upside down and filling it with water, that explains the watershed concept: the different sections would be different watersheds. Then, taking the peak of the first return, I calculated the rise time, from the left edge to the peak, and the fall time, from the peak to maybe some fraction of the energy where it ends. I've done that for several laser pulses and colorized a point cloud based on that fall time. Blue and purple are very short fall times, so the bare earth and ground come out blue, while some of the pine plantation tends to have more structure as a function of time, so it comes out orange and red. One of the challenges is that on this map you might say, oh, I can see the pine plantation, this is an easy land cover classification; but there's red speckled throughout what are probably oak trees, and over here where it's deciduous you maybe get more yellows and reds, so it might not be as straightforward to just make a land cover map. But it's something to think about in terms of what you might be able to do with waveform.

And then finally, a lot of the universities working with waveform do this thing called deconvolution. The idea is, once again, that the outgoing pulse blurs the data, and deconvolution tries to sharpen that back up to see what the underlying structure might really look like. This is a basic example of one algorithm, called Richardson-Lucy. The raw waveform looks like this; as you start to deconvolve, it turns more into a Gaussian shape, and here you now see three features, and as you keep going it says, well, there are really two objects here and then two over there. Now, one of the challenges with this is: is any of it real? A lot of times people end up applying an intensity threshold.
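For reference, here's a minimal 1-D Richardson-Lucy iteration, using the outgoing pulse shape as the blur kernel. This is a textbook sketch, not the exact processing those groups run, and, as discussed next, pushing the iteration count too far can turn noise into convincing-looking peaks.

```python
import numpy as np

def richardson_lucy_1d(observed: np.ndarray, psf: np.ndarray,
                       iterations: int = 20, eps: float = 1e-12) -> np.ndarray:
    """Iteratively estimate the sharp waveform that, when blurred by `psf`
    (the outgoing pulse shape), best reproduces the `observed` waveform."""
    psf = psf / psf.sum()                   # normalize the blur kernel
    psf_mirror = psf[::-1]
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)  # where the data exceeds the model
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate                         # more iterations: sharper, but noisier
```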
I think I ran this much like the fall-time plot you just saw, and sometimes, like with these points, you end up getting noise above and below the ground. When you look at that you say, well, this isn't even realistic; it's just an artifact of the algorithm. So it's buyer beware with some of these algorithms: if you don't fully understand what they're doing and you over-process, the result might not be realistic. There's a lot of research going on with simulations and ray tracing in a 3D CAD world to try to understand whether these recovered features are real or not.

And then finally, this is an example from Tan Zhou at Texas A&M, who ran several different processing levels and presented this nice figure. You have the discrete returns up top. If you were just to take the full waveforms and put them into a 3D scene, they'd be very blurry and confusing. But he's analyzed them by fitting Gaussians to the raw waveform, or by running two different deconvolution approaches. And really, the hope with deconvolution is that if the objects we saw in the previous slide are real, you might get a more densified point cloud than you could from the discrete data, because the waveform is hopefully picking up some of the objects that were otherwise lost.

And with that, I'll leave it to questions.