This is Jarlath O'Neil-Dunne with the University of Vermont Spatial Analysis Lab, and in this video I'll be covering some basics of high resolution feature extraction using imagery and LiDAR. I'll make use of a few software packages in this video tutorial, including eCognition, Quick Terrain Modeler, and ArcGIS. There are two datasets we'll be using in this tutorial: an orthophoto in ERDAS IMAGINE format, and a LiDAR point cloud dataset in LAS format. Let's look at the image dataset within ArcGIS. If we go into the properties, you'll notice that this is a digital orthophoto that has four bands associated with it. We're symbolizing it in ArcGIS as a 4-3-2 RGB composite, or a color infrared composite. In Quick Terrain Modeler, we're going to open up that LAS LiDAR point cloud dataset. This particular LiDAR dataset also has intensity data, so we can toggle the vertex colors on and off. In addition, this LAS file has been classified by the contractor. Of greatest interest to us for feature extraction is the fact that ground points, shown here in green, have been separated from non-ground points. If we go into the properties of the LiDAR dataset, two values are of keen interest to us: spacing, which is the approximate distance between points, and density, which is the number of points per square unit, in this case square meters. Now we're going to remove the LiDAR point cloud file, and we're going to reload the data as a gridded surface model. We're going to set the grid sampling to a value of 1 for 1 meter, and we're going to open the gridding options. Here we're going to click on the help. First we're going to set our gridding options to create a DEM, or digital elevation model. In order to do this, we'll want to ensure that we're only importing the ground points. By clicking on the classification button, we can go into the LAS filter selection. With the "filter using classification" checkbox checked, we confirm that LAS class 2 (ground) is selected.
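For readers following along outside the GUI, the ground-point filtering behind this step can be sketched in code. This is a minimal illustration using made-up NumPy arrays; in practice a LAS reader library would supply the coordinate and classification arrays, and the class codes follow the ASPRS LAS specification (2 = ground):

```python
import numpy as np

# Hypothetical point cloud attributes; a LAS reader would supply these.
z = np.array([101.2, 130.5, 100.9, 125.1, 101.0])          # elevations
classification = np.array([2, 6, 2, 5, 2])                  # 2 = ground, 6 = building, 5 = high vegetation

ground_mask = classification == 2   # keep only ground returns
ground_z = z[ground_mask]           # elevations that would feed the DEM gridding

print(ground_z)                     # ground-only elevations
```

Only the ground elevations survive the filter, which is exactly what the "filter using classification" option accomplishes before the DEM is gridded.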
We want to make sure that we use all returns, and then we can click Go to import the point cloud into a gridded raster surface model. Now we'll right click on the model and export it as a GeoTIFF raster DEM. Now we're going to clear that model, and we're going to repeat that same process, once again loading the LAS file as a gridded surface model. In this case, instead of generating a DEM, we're going to generate a DSM, or digital surface model. We'll want to set the grid sampling to 1 again, but in the gridding options we're going to go to the help, and we're going to follow the recommended profile settings for the DSM. For the DSM we don't want to filter by classification, but we do want to ensure that we only bring in the first returns. Now we have a true 3D surface representing all features. This is the DSM, so we'll export it to a raster TIFF format as a DSM. Now we'll start up eCognition, ensuring that we're activating it in rule set mode. Within eCognition, please make sure you're in the Develop Rulesets view, or view number 4. We'll click on the create new project button, and choose to import the image data. We're going to give our project a name, and then we're going to double click on each one of the layers and give them an alias. In this case band 1 is red, band 2 is green, band 3 is blue, and band 4 corresponds to near infrared. Giving them aliases will make it easier to work with those layers, and also make our rule set more transferable. We're also going to load in the raster DEM that we created and give it the alias DEM, and load in the DSM we created, the digital surface model, and call it DSM. Once again, the DEM represents the bare earth surface, and the DSM represents the 3D surface of all features. Now we're going to edit our layer mixing. We're going to choose to display first the DEM, just to make sure the data loaded correctly, and then the DSM.
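Quick Terrain Modeler handles the gridding internally, but the idea behind turning first-return points into a DSM raster can be sketched as simple binning: each point falls into a grid cell, and the highest return in each cell wins. This is an assumed illustration with toy numbers, not the software's actual algorithm:

```python
import numpy as np

# Hypothetical first-return points (x, y, z) over a tiny 2 m x 2 m extent.
pts = np.array([
    [0.2, 0.3, 10.0],
    [0.7, 0.4, 12.0],   # same cell as the point above: highest return kept
    [1.5, 0.5, 11.0],
    [0.4, 1.6, 15.0],
])
cell = 1.0  # 1 meter grid sampling, matching the tutorial setting

cols = (pts[:, 0] // cell).astype(int)
rows = (pts[:, 1] // cell).astype(int)

dsm = np.full((2, 2), np.nan)       # NaN marks cells with no returns
for r, c, z in zip(rows, cols, pts[:, 2]):
    if np.isnan(dsm[r, c]) or z > dsm[r, c]:
        dsm[r, c] = z               # highest first return per cell
```

Cells with no returns stay empty (NaN); real software would interpolate those gaps.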
Finally we'll assign the near infrared band to the red color gun, the red band to the green color gun, and the green band to the blue color gun to create a color infrared composite. Now we'll begin rule set building. We'll want to right click in the process tree and click on append new. The first thing we're going to do is just create a parent process. This is nothing more than a container for all our other rules. We can call this whatever we want, so let's just call it rule set. Now we're going to append another rule. One of the first steps in object-based image analysis is to perform an image segmentation, that is, to group pixels together into objects. We're going to use the multiresolution segmentation algorithm. We're going to create a new level called level 1, and under the segmentation settings for image layer weights, we're going to choose not to use the DEM and the DSM, and we're going to give the NIR band a value of 2, or a higher weight. We'll set the scale parameter to 30, and then finally weight the shape to 0.3. You can play around with these settings to get the optimal segmentation, but it's important to remember that no segmentation is perfect. Once we've segmented those image objects, we can click on them and see that they have attributes associated with them that appear in the image object information window. We can choose to display additional image object features. In this case we're displaying both the DEM and the DSM values, and we also chose to display the NIR values. So for each object we click on now, we see its mean digital elevation model, mean digital surface model, and mean NIR band values. Let's right click on that rule and slide it under our rule set to fit it under the parent process. Now we're going to add another rule. We're going to do some layer arithmetic. We're going to create a new layer called the normalized digital surface model, or NDSM. Here we're going to take the DSM and subtract the DEM from it.
The normalized digital surface model is height above ground. And because we're doing math on floating point rasters, we're going to want to make sure the output is also floating point. After we execute that function, you'll see that we have a new layer called the NDSM, or normalized digital surface model. Once again, this represents the height above ground, and it's a bit more useful than the DEM or the DSM, both of which are relative to sea level. Now we can choose to add the NDSM to our image object information. When we click on each object, we see the NDSM information populated in the image object information window. We can see that features such as buildings are higher than ground features. Similarly, we see the same pattern with trees. Thus, the NDSM is a really great way to distinguish tall features from ground features. However, we'd also like to add the normalized difference vegetation index, or NDVI. We can do that by entering a customized feature in which we take the near infrared band, subtract the red band, and divide the result by the near infrared band plus the red band. Once we've created NDVI, it automatically appears in the image object information window. When we click on vegetation objects, we see that they have very high NDVI values, whereas impervious surfaces generally have low NDVI values. Thus, you can see we have some very useful information for feature extraction: a combination of height and spectral information. But let's add some other procedures. Here we're going to add a procedure to delete our image object level. This is simply going to allow us to run the rule set from the start and clear out any existing image object levels prior to running the segmentation. And finally, we're going to put the layer arithmetic up top. Now these are preparatory steps, so let's insert another parent process called prep here. And we're going to right click and slide our existing processes under that prep step.
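The two derived layers described above, the NDSM and NDVI, are both simple raster arithmetic, and can be sketched with NumPy. The numbers here are made up for illustration; real values would come from the exported GeoTIFFs:

```python
import numpy as np

# Toy 2x2 rasters standing in for the real layers.
dsm = np.array([[110.0, 125.0], [112.0, 111.0]])  # surface elevation (all features)
dem = np.array([[110.0, 110.0], [111.0, 111.0]])  # bare-earth elevation
nir = np.array([[200.0, 40.0], [180.0, 60.0]])    # near infrared band
red = np.array([[50.0, 60.0], [40.0, 70.0]])      # red band

ndsm = dsm - dem                  # normalized DSM: height above ground, meters
ndvi = (nir - red) / (nir + red)  # (NIR - red) / (NIR + red)
```

In the toy data, the 15 m NDSM cell behaves like a building or tree, and the cells with high NIR relative to red get high positive NDVI, like vegetation, while the others go negative, like impervious surfaces.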
The parent process just allows us to collect our rules all into a single place. Now let's add a second parent process called classify. This is where we'll put our rules to classify the image objects. So let's insert a child under this parent process, and we'll use the assign class algorithm. We're going to want to run this only on all those unclassified objects, and we're going to use a very simple condition. Under the layer values, we're going to look at the mean normalized digital surface model and say anything that has a value of greater than 2 should be assigned to a new class called tall. And we'll make the color for this tall class yellow. Once we click OK, we see that tall now appears in our class hierarchy. And when we execute that rule, we see that all those image objects with a normalized digital surface model value of greater than 2, that is, more than 2 meters above ground, are classified to the tall class. So here we've used LiDAR information. Let's append another rule. Once again, we'll use the assign class algorithm. You can access that easily by clicking on the algorithm dropdown and pressing A. And we're going to run this only on those existing image objects that are assigned to the class tall. And we're going to use an NDVI threshold of less than 0. Anything that's tall and has an NDVI of less than 0, we're going to assign to a new class we're calling buildings. So once again, these are things that are tall already, but now they have a low NDVI. Let's right click and execute that rule. We see now that all tall features that have a low NDVI are classified as buildings. However, we've got a little bit of a problem here. We also see that some water features, which were interpolated incorrectly in the LiDAR datasets we created, have been classified as buildings. So we might want to slightly rethink our workflow. Our workflow would be much improved if we actually classified water features first.
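The two assign class rules above amount to a simple threshold cascade on per-object means, which can be sketched outside eCognition like this (hypothetical per-object values; one array entry per image object):

```python
import numpy as np

# Hypothetical mean values per image object.
ndsm = np.array([0.3, 12.0, 8.5, 0.1])   # height above ground, meters
ndvi = np.array([0.5, -0.1, 0.6, -0.2])

labels = np.full(ndsm.shape, "unclassified", dtype=object)
labels[ndsm > 2] = "tall"                               # rule 1: > 2 m above ground
labels[(labels == "tall") & (ndvi < 0)] = "buildings"   # rule 2: tall AND low NDVI
```

Object 1 (12 m tall, NDVI below 0) becomes a building; object 2 (8.5 m tall, NDVI 0.6) stays tall, since it is vegetation; the short objects stay unclassified for now.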
So let's first insert a temporary parent process. And under that, we're going to insert a child process, and we're simply going to use the remove classification algorithm. The remove classification algorithm preserves the image object level but deletes the classification. Now we're going to insert another child process. This is going to be the assign class algorithm again. We're putting it before our other classification algorithms. We're focusing on those unclassified image objects, and we're using a simple threshold here: mean near infrared values lower than 45. And we're going to assign those to the water class. So all image objects with a near infrared value of less than 45 are going to be assigned to water. This is being run prior to our algorithm which assigns those objects taller than 2 meters to the tall class. As a result, we don't have any image objects erroneously classified as buildings in water areas. Now all of our remaining tall features tend to be tree canopy, so we'll just put a simple assign class algorithm here that says anything classified as tall is now going to be assigned to the class tree canopy. There's no threshold associated with this assign class algorithm; it's really just a reclassification step. We can continue in this workflow by now focusing on the ground features, since we've classified all of the tall objects. We can use NDVI once again, with a threshold of 0, to assign all the remaining unclassified objects that have lower NDVI values to a new class called other impervious. These will be roads, parking lots, and driveways. For all the remaining objects, those that have yet to be assigned to a class and are still called unclassified, we can assign them to a new class called other vegetation. That is to say, all the remaining objects that are vegetation but not tree canopy are now in the other vegetation class. This is a pretty decent classification.
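Putting the revised workflow together, the full rule cascade (water first, then tall features split into buildings and tree canopy, then the remaining ground objects split by NDVI) can be sketched as one function. The thresholds are the ones from the tutorial; the input values are made up:

```python
import numpy as np

def classify(nir, ndsm, ndvi):
    """Rule cascade from the tutorial: water first, then tall, then ground."""
    labels = np.full(nir.shape, "unclassified", dtype=object)
    labels[nir < 45] = "water"                    # low NIR -> water, run FIRST
    tall = (labels == "unclassified") & (ndsm > 2)
    labels[tall & (ndvi < 0)] = "buildings"       # tall and low NDVI
    labels[tall & (ndvi >= 0)] = "tree canopy"    # remaining tall objects
    ground = labels == "unclassified"
    labels[ground & (ndvi < 0)] = "other impervious"   # roads, parking, driveways
    labels[ground & (ndvi >= 0)] = "other vegetation"  # grass, shrubs, etc.
    return labels

# Hypothetical per-object means: a water object with a bad (tall) NDSM value,
# a building, a lawn, and a tree.
result = classify(nir=np.array([30.0, 200.0, 180.0, 90.0]),
                  ndsm=np.array([5.0, 10.0, 0.5, 9.0]),
                  ndvi=np.array([-0.3, -0.2, 0.4, 0.5]))
```

Note the first object: even though its interpolated NDSM is erroneously tall, it is captured by the water rule before the tall rule ever sees it, which is exactly why the reordering fixes the false buildings over water.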
But if we zoom in, we'll notice that we have problems in some areas, specifically with these buildings adjacent to water. If we turn off the classification, we can see that these aren't buildings at all. Rather, we have some objects with erroneously high normalized digital surface model values that also have low NDVI. But these have some very distinct properties associated with them. First, they have a very high length to width ratio. So let's display length to width in our image object information window. You're going to notice that the length to width ratio of these false buildings is much, much higher than that of our actual buildings. There's another key characteristic, and that's context. The false building that we're looking at here has a high relative border to water features. So we're going to create a new relative border to water feature here and display that. You can see that more than 50% of the border of this false building is shared with an existing object classified as water. So now we can use an assign class algorithm to go in and focus only on those objects classified as buildings. We're going to use two thresholds. The first says: your relative border to existing water objects is greater than or equal to 0.4, meaning 40% or more of your border is shared with a water feature. The second is geometry related: your length to width ratio is greater than 6. So two criteria here, length to width ratio and relative border to water. If both of those criteria are met, the object is assigned to the water class. So we can see here that using a variety of steps, segmentation, simple classification, and complex classification, we've been able to classify these features. Finally, we're inserting a new process here to export the classification as a raster file. We'll choose the classes that we want to export, give it an export name, and ensure that we select the option for classification.
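The refinement rule above combines a contextual feature with a geometric one, and requires both. A sketch with hypothetical attribute values for three objects currently labeled buildings shows why the AND matters:

```python
import numpy as np

# Hypothetical attributes for three objects currently classified as buildings.
rel_border_to_water = np.array([0.05, 0.60, 0.45])  # share of border touching water
length_width_ratio = np.array([1.8, 9.5, 3.0])      # object elongation

# Both criteria must be met for reassignment to water.
false_building = (rel_border_to_water >= 0.4) & (length_width_ratio > 6)
```

Only the second object, which is both highly elongated and mostly bordered by water, gets reassigned; the third object touches water on 45% of its border but is compact, so it stays a building. Requiring both criteria keeps real waterfront buildings from being thrown away.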
Now that we've built our rule set, we can execute the entire rule set, running through the segmentation, classification, and export functions, all by clicking on that top parent process, rule set, and then right clicking and choosing execute. With our new classification, we can now view it in GIS software packages such as ArcGIS. Finally, we'll want to save our project. It's also a good idea to right click and choose Save Rule Set so you can reuse your rule set with another project using similar data. That concludes this tutorial. Thanks for watching.