This video provides an overview of object-based threshold classification techniques using eCognition. For this example, I have a 4-band, 1-meter aerial image dataset. If I go into File and Modify Open Project in eCognition, you can see that I've assigned alias names to each of my four image layers. We'll start over in the Process Tree, where I'll right-click and choose Append New to insert a new process. This process will simply serve as a parent process for all my algorithms to fall underneath. This is handy because I can execute this single process and all the other algorithms will execute as well. Object-based classification requires that we have image objects, so I'll insert a child process and use the multiresolution segmentation algorithm to segment my pixels into objects. I'm going to call my new level, that is, the level that the image objects are stored on, Level 1. I'm then going to adjust my image layer weight settings to emphasize the near-infrared band, given that I have only a single near-infrared band and three visible bands. Finally, I'll increase my scale parameter to make my image objects a bit larger, and then adjust the shape and compactness settings to give shape a little more emphasis and to produce more compact objects. To execute the multiresolution segmentation algorithm, I simply right-click on it and choose Execute. This runs the algorithm, grouping the pixels into objects. I can then toggle the image object outlines on and off using the Show/Hide Outlines button. The attributes of image objects, known as features within eCognition, are displayed in the Image Object Information window. Not all features are displayed by default, so by right-clicking and choosing Select Features to Display, you can choose from the available list of features. Some of the most popular are under Object Features > Layer Values > Mean. So I'm going to add the mean values for my four image layer bands.
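To see how the layer weight, shape, and compactness settings interact, here is a rough sketch of the fusion criterion that multiresolution segmentation minimizes when merging objects. The weighting scheme follows the published Baatz-Schäpe criterion that eCognition's algorithm is based on; the function names and default values are my own, for illustration only:

```python
def color_heterogeneity(layer_weights, layer_stddevs):
    """Color term: weighted sum of per-layer standard deviations.
    Raising a layer's weight (e.g. the NIR band) makes merges more
    sensitive to spectral variation in that layer."""
    return sum(w * s for w, s in zip(layer_weights, layer_stddevs))

def combined_heterogeneity(h_color, h_compact, h_smooth,
                           shape=0.3, compactness=0.5):
    """Overall criterion: 'shape' trades the color term against the
    shape term, and 'compactness' splits the shape term between
    compactness and smoothness. shape=0 means purely spectral merging;
    raising 'compactness' favors more compact objects."""
    h_shape = compactness * h_compact + (1 - compactness) * h_smooth
    return (1 - shape) * h_color + shape * h_shape
```

With shape set to 0 the criterion reduces to the color term alone, which is why increasing the shape setting, as done in the video, shifts emphasis away from spectral values and toward object geometry.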
Now I'll go down to Geometry > Extent and double-click on Area to add the area feature. This is the area of each image object in pixels. Now when I select an object, you'll notice that the corresponding feature information is displayed in the Image Object Information window. The goal in this example is to classify vegetation. Clicking around on our image objects, we see that the mean band values are somewhat useful, but not perfect, for separating vegetation from non-vegetation features. So we're going to create a customized feature. A customized feature allows us to create indices such as NDVI, the Normalized Difference Vegetation Index. We simply give the feature a name and plug in the formula for NDVI: (NIR - Red) / (NIR + Red). Now when we select an object, NDVI appears in the Image Object Information window. It's clear that NDVI provides a very robust means of separating vegetated from impervious surfaces. We can use the values in the Image Object Information window to assist with classification, but we can also display the actual values by going over to Feature View. Double-clicking on NDVI, for example, assigns a grayscale color ramp to each of my objects based on their NDVI values. By going into the lower left-hand corner of the Feature View window and selecting the checkbox, I can play around with the actual value ranges. Prior to doing this, you'll probably want to right-click on the feature and choose Update Range to get the full range of values. You can then use the arrows to select the lower and upper ends of the range. This isn't actually performing a classification; it's just previewing what would happen if you used these threshold values for classification. Splitting your view will allow you to display your data side by side with Feature View. You can then also go in and choose side-by-side view. This will sync the two views, allowing simultaneous panning in both. Now let's add another customized feature.
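The NDVI customized feature is just arithmetic on each object's mean band values. A minimal sketch of the same calculation (the function name and the zero-denominator guard are my own; eCognition evaluates the formula internally):

```python
def ndvi(mean_nir, mean_red):
    """Normalized Difference Vegetation Index for one image object,
    computed from the object's mean NIR and mean red band values
    (the same Mean layer features shown in the Image Object
    Information window). Ranges from -1 to 1; healthy vegetation
    tends strongly positive."""
    denom = mean_nir + mean_red
    if denom == 0:  # guard against division by zero on empty bands
        return 0.0
    return (mean_nir - mean_red) / denom
```

For example, an object with a mean NIR of 200 and a mean red of 50 yields an NDVI of 0.6, well above the zero threshold used later in the video.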
This time we'll create one for visible brightness. This is simply the mean of the visible bands: the red, green, and blue bands added together and divided by three. Just as with NDVI, once we've created the visible brightness customized feature, it appears in the Image Object Information window whenever we select an image object. Now that we have a good idea of the features we can use for classification, let's right-click on our multiresolution segmentation algorithm, choose Append New, and insert a new algorithm. For classification, we'll use the assign class algorithm, which is a very simple algorithm for threshold-based classification. For the class filter, we'll check the box for unclassified, so we're only focusing on objects that are unclassified. Then we'll go to Condition and click the ellipsis button. The first condition will be based on NDVI. Under Value 1, we'll choose From Feature, go to our customized features, and select NDVI. We're going to choose anything with a value greater than zero. So our threshold condition says: any object that is unclassified (which at this point is all of our image objects) and has an NDVI value greater than zero will be assigned to a new class we're calling veg. Because veg is a new class we just created, the Class Description dialog pops up, and we can choose a color to represent the class. Once we right-click and execute this algorithm, any objects that meet the criteria, that is, an NDVI greater than zero, will be assigned to the veg class. The View Classification button will display the classification, and we can choose to display it as either outlines or a solid fill. Overall, NDVI was very effective for classifying vegetated objects. But in this example, let's say we're unhappy with some of these small, bright patches of bare soil. How could we separate them out? We could use visible brightness.
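The visible brightness feature and the first assign class rule can both be sketched in a few lines. Here each image object is a plain dict, a hypothetical stand-in for eCognition's object model, used only to make the threshold logic explicit:

```python
def visible_brightness(mean_red, mean_green, mean_blue):
    """Customized feature: mean of the three visible bands."""
    return (mean_red + mean_green + mean_blue) / 3.0

def assign_veg(objects):
    """First assign class rule: any object in the unclassified
    class filter with NDVI > 0 is assigned to the 'veg' class.
    Objects are dicts with 'class' and 'ndvi' keys (hypothetical)."""
    for obj in objects:
        if obj["class"] == "unclassified" and obj["ndvi"] > 0:
            obj["class"] = "veg"
```

The class filter corresponds to the `obj["class"] == "unclassified"` check: the rule never touches objects that already carry a class.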
So let's go back into our assign class algorithm and add an additional condition. We'll use the visible brightness feature this time; it looked like those patches of bare soil had values greater than 150. So we'll add the condition visible brightness less than 150, so that those bright objects won't be assigned to the veg class. If I go ahead and execute the assign class algorithm, I receive an error: the domain is empty. The reason for this error is that I've already classified the objects whose NDVI was greater than zero. As a result, there are no image objects left in the unclassified class whose NDVI is greater than zero and whose visible brightness is less than 150. How do we resolve this? One way is to go up to the top, that is, my threshold classification demo parent process, and re-execute the entire rule set. Running the rule set from the top creates brand-new image objects, which are then classified based on my new rule, NDVI greater than zero and visible brightness less than 150, which does not classify those bare soil patches. Now let's look at an alternative approach in which I separate the classification into separate steps, rather than using both conditional statements within a single assign class algorithm. I'm going back into my assign class algorithm and removing the visible brightness condition; I'll insert it later in a separate assign class algorithm. Re-running the segmentation algorithm every time I change my threshold parameters isn't very efficient, so I'm going to insert a new algorithm here: remove classification. This goes right after the segmentation algorithm and simply clears the classification from the image objects. It's a very quick algorithm and an efficient one to run if I want to play around with my classification parameters. Now we're back to our original rule, with all objects with an NDVI greater than zero assigned to the veg class.
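The empty-domain error is easy to reproduce with a small sketch. Using hypothetical dict-based image objects again, the stacked rule filters its domain first; once every NDVI-positive object has already been moved out of the unclassified class, that filtered list is empty:

```python
def assign_veg_stacked(objects):
    """Single assign class step with both conditions stacked:
    unclassified AND ndvi > 0 AND visible brightness < 150.
    Returns the number of objects in the domain (the candidates
    that matched the class filter and conditions)."""
    matched = [o for o in objects
               if o["class"] == "unclassified"
               and o["ndvi"] > 0
               and o["brightness"] < 150]
    for o in matched:
        o["class"] = "veg"
    return len(matched)
```

If the NDVI-only rule has already run, every object with `ndvi > 0` is already labeled veg, so `assign_veg_stacked` finds zero candidates, which mirrors eCognition's empty-domain complaint. Re-segmenting resets all objects to unclassified, which is why re-running the whole rule set clears the error.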
Now I'm going to add another assign class algorithm, and this time I'm going to change the class filter in the domain from unclassified to veg. I'm going to say: if you're assigned to the veg class and you have a visible brightness greater than 150, you'll be assigned to a new class called false veg. Now when we run this classification, any veg objects with a visible brightness greater than 150 will be assigned to the false veg class. By adding one more assign class algorithm, we can reclassify these false veg objects back to the unclassified category, effectively removing them from the classification. The second approach requires more algorithms, but it lets us clearly see what's happening with our classification, as opposed to stacking multiple conditions into a single assign class algorithm. This video introduced you to threshold-based classification in eCognition. We looked at two approaches. In the first, we stacked our threshold conditions into a single algorithm. In the second, we broke the classification out into a series of steps, which gave us visual confirmation of what our classification was doing.
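The full step-by-step rule set can be sketched as one small pipeline over hypothetical dict-based image objects, one loop per algorithm in the process tree:

```python
def classify_sequential(objects):
    """Step-by-step rule set from the video, sketched in Python:
      1. remove classification
      2. unclassified with NDVI > 0            -> 'veg'
      3. veg with visible brightness > 150     -> 'false veg'
      4. 'false veg'                           -> 'unclassified'
    Objects are dicts with 'class', 'ndvi', 'brightness' keys
    (a hypothetical structure, for illustration)."""
    for o in objects:                      # remove classification
        o["class"] = "unclassified"
    for o in objects:                      # assign class: veg
        if o["class"] == "unclassified" and o["ndvi"] > 0:
            o["class"] = "veg"
    for o in objects:                      # assign class: false veg
        if o["class"] == "veg" and o["brightness"] > 150:
            o["class"] = "false veg"
    for o in objects:                      # assign class: back to unclassified
        if o["class"] == "false veg":
            o["class"] = "unclassified"
```

Pausing after the third loop is where the visual confirmation comes in: the false veg objects are visible on screen as their own class before the final step folds them back into unclassified.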