This video will take a look at change detection using Landsat data, with the Yosemite Rim Fire as an example. We will employ object-based post-classification change detection to map the extent of vegetation lost to the fire. The Yosemite Rim Fire was a major fire in the Sierra Nevada in California. It burned over 250,000 acres and was started by a hunter's illegal campfire. The fire was clearly visible in satellite imagery. We'll use August Landsat imagery acquired prior to the fire, and September Landsat imagery acquired after most of the damage had occurred, to map vegetation loss from the Yosemite Rim Fire.

I've created an eCognition project containing the pre- and post-fire Landsat scenes. If we go in and view the project properties, we see a very simple project setup. I've used the red and near-infrared bands from the August scene, which is pre-fire, and the red and near-infrared bands from the September scene, which was acquired after most of the damage had occurred. One could develop a more robust approach using all Landsat bands, but we'll keep this example simple.

I'm displaying the data in two different ways. In the top view, I'm looking at only the near-infrared and red bands from the August, or pre-fire, image. In the bottom view, I'm displaying a combination of the pre- and post-fire data. The bottom image helps to highlight the burned area; the top image shows conditions prior to the onset of the Yosemite Rim Fire.

Now we'll move step by step through the rule set. The first algorithm in the rule set is a segmentation algorithm. The pre- and post-fire Landsat scenes have been weighted equally, and we're using a scale parameter of 100 and weighting the spectral properties of the data over the shape properties. I'll now execute the segmentation algorithm, and we'll zoom in to explore some of the resulting object properties. Selecting an object reveals its properties in the Image Object Information window.
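To make the segmentation parameters a bit more concrete: a highly simplified sketch of the spectral heterogeneity term that multiresolution segmentation works with is shown below. The function name, the toy band arrays, and the use of standard deviation as the heterogeneity measure are my simplifications, not eCognition's exact formulation; the point is just that all four layers (red and near-infrared, August and September) contribute with equal weights.

```python
import numpy as np

def spectral_heterogeneity(bands, weights):
    """Weighted sum of per-band standard deviations over a candidate object:
    a simplified stand-in for the spectral heterogeneity term that
    multiresolution segmentation tries to keep low."""
    return sum(w * np.std(b) for b, w in zip(bands, weights))

# Toy candidate object: four equally weighted layers
# (red/NIR from August, red/NIR from September), as in the project.
rng = np.random.default_rng(0)
bands = [rng.uniform(0, 255, size=(10, 10)) for _ in range(4)]
weights = [1.0, 1.0, 1.0, 1.0]

h = spectral_heterogeneity(bands, weights)
```

In the real algorithm, neighboring objects are merged as long as the increase in this kind of weighted heterogeneity stays below the scale parameter (100 here); the shape and compactness terms, which we de-weighted in favor of the spectral properties, would be combined with this spectral term.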
You can see that we've got three different NDVI calculations: NDVI for August, NDVI for September, and the difference between them. NDVI values are not automatically calculated in eCognition; they're customized features, which is to say that I entered these calculations myself. Let's explore the NDVI calculation for August. A similar NDVI calculation was developed using values from the September imagery. Finally, I used both NDVI values in a customized feature called NDVI Diff, which subtracts the September NDVI values from the August NDVI values.

Customized features are computed at the image object level, so every object in our project has NDVI values for August and September, along with the NDVI difference. We can use these NDVI values in our classification algorithms to identify areas that have experienced vegetation loss due to the fire. To arrive at suitable threshold values, we'll need to explore our data both by clicking on image objects and by using other tools such as Feature View. Using Feature View, we can symbolize our objects based on their attributes; in this case I'm looking at the NDVI difference values displayed as grayscale for the entire area.

Now I'll move through the rest of the rule set. After segmentation, I have a rule called remove classification. This simply deletes the classification and allows us to start with a clean slate after segmentation. Following that, we have a simple assign class algorithm. It says: if you're an unclassified image object and your NDVI value from August is greater than 0.2, you're assigned to the veg August class. Thus, in this rule set, identifying vegetation in the pre-fire image is the starting point for the whole change detection classification. We see that this simple rule and NDVI threshold does an excellent job of identifying areas that were vegetated in August. Our next algorithm, which is also an assign class algorithm, uses the August vegetation classification as its starting point.
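The customized features and the two NDVI thresholds from the rule set can be sketched in plain numpy. This is a hedged illustration, assuming `labels` is a label image produced by the segmentation and the band arrays are the August/September red and near-infrared layers; the array names and toy values are mine, not from the project.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero."""
    nir, red = nir.astype(float), red.astype(float)
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

def object_means(values, labels):
    """Mean pixel value per image object, mimicking how eCognition evaluates
    a customized feature at the image-object level."""
    return {lab: values[labels == lab].mean() for lab in np.unique(labels)}

# Toy scene: two image objects (labels 1 and 2) over a 2x2 grid.
labels = np.array([[1, 1], [2, 2]])
red_aug = np.array([[10.0, 10.0], [50.0, 50.0]])
nir_aug = np.array([[90.0, 90.0], [60.0, 60.0]])
red_sep = np.array([[40.0, 40.0], [50.0, 50.0]])
nir_sep = np.array([[60.0, 60.0], [60.0, 60.0]])

ndvi_aug = object_means(ndvi(nir_aug, red_aug), labels)
ndvi_sep = object_means(ndvi(nir_sep, red_sep), labels)
ndvi_diff = {lab: ndvi_aug[lab] - ndvi_sep[lab] for lab in ndvi_aug}  # Aug - Sep

# The two assign class rules, applied to the toy objects:
veg_aug = {lab for lab, v in ndvi_aug.items() if v > 0.2}    # veg August class
veg_loss = {lab for lab in veg_aug if ndvi_diff[lab] > 0.1}  # vegetation loss
```

Object 1 is strongly vegetated in August (high NDVI) and its NDVI drops sharply by September, so it ends up in the vegetation loss class; object 2 never clears the 0.2 vegetation threshold, so it stays unclassified.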
That algorithm says: if you're classified as vegetation in August but your NDVI difference is greater than 0.1, you're assigned to the vegetation loss class. When we execute this algorithm, we see that the area of vegetation loss corresponds very well with the change detection composite image in the lower pane. Now that I've identified areas of vegetation loss, I can use the assign class algorithm to remove the August vegetation classification and merge both the vegetation loss and unclassified objects.

Although the classification appears largely successful, it's not the most cartographically pleasing representation of the extent of the Yosemite Rim Fire. We can now make use of the spatial properties contained within our objects to clean up the classification. One of the great advantages of object-based approaches is that we can incorporate information such as geometric characteristics and spatial relationships. To remove some of the isolated areas classified as vegetation loss that are clearly outside the Rim Fire, we'll employ a simple rule that says: if a vegetation loss object contains fewer than 3,000 pixels, reassign it to unclassified. When we execute this algorithm, small patches of isolated vegetation loss are removed. Given that we just assigned those small isolated vegetation loss objects to the unclassified category, we'll follow this up with a merge region algorithm to break down their boundaries.

Zooming into the area affected by the Rim Fire, we see a lot of small patches of unclassified objects. These objects were likely affected by the Rim Fire, but probably didn't experience the severe vegetation loss that other areas did. We'll employ a simple rule that says: all unclassified objects that have a relative border to vegetation loss of 1, meaning they're completely surrounded by vegetation loss, get reassigned to the vegetation loss category.
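The two cleanup rules can also be sketched outside eCognition with numpy and scipy. The function names, the toy label map, and the 4-connectivity choice are my assumptions; the logic mirrors the size rule and the relative-border rule described above.

```python
import numpy as np
from scipy import ndimage

def remove_small_patches(class_map, min_pixels):
    """Reassign connected vegetation-loss patches smaller than min_pixels
    to unclassified (False), echoing the 3,000-pixel size rule."""
    labels, n = ndimage.label(class_map)
    sizes = ndimage.sum(class_map, labels, index=np.arange(1, n + 1))
    small = np.isin(labels, np.arange(1, n + 1)[sizes < min_pixels])
    out = class_map.copy()
    out[small] = False
    return out

def fill_surrounded(class_map):
    """Reassign unclassified objects whose relative border to vegetation
    loss is 1 (i.e. completely surrounded) to vegetation loss."""
    holes, n = ndimage.label(~class_map)
    out = class_map.copy()
    for lab in range(1, n + 1):
        comp = holes == lab
        # Border pixels of the component: its dilation minus itself.
        border = ndimage.binary_dilation(comp) & ~comp
        touches_edge = (comp[0, :].any() or comp[-1, :].any()
                        or comp[:, 0].any() or comp[:, -1].any())
        # Surrounded = border entirely vegetation loss, not on the image edge.
        if border.any() and class_map[border].all() and not touches_edge:
            out[comp] = True
    return out

# Toy map: a block of vegetation loss with one unclassified pixel enclosed,
# plus an isolated single-pixel loss patch in the corner.
loss = np.zeros((5, 5), dtype=bool)
loss[0:3, 0:3] = True
loss[1, 1] = False   # enclosed unclassified pixel
loss[4, 4] = True    # isolated small patch

cleaned = remove_small_patches(loss, min_pixels=3)  # drops the corner pixel
final = fill_surrounded(cleaned)                    # fills the enclosed pixel
```

For the fully surrounded case, scipy's `ndimage.binary_fill_holes` would do the same job in one call; the explicit loop is only there to make the relative-border logic visible.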
Finally, we'll clean things up by merging all the objects in the vegetation loss category. Zooming out, you can see that we have an excellent representation of the extent of the Rim Fire that's both accurate and cartographically pleasing.

In this video, we looked at an example of post-classification change detection using object-based techniques. We started by identifying areas in the pre-fire image that were healthy vegetation, and then used that classification as the baseline for identifying areas of change using differences in NDVI between the pre- and post-fire images. We then employed geometric characteristics and spatial properties to clean up the classification and make it more cartographically appealing.