This video will show you how to develop rule sets using eCognition. You should already have some experience setting up workspaces and projects within the eCognition environment. When you start eCognition, make sure you select Rule Set Mode, then click OK to launch eCognition. For the example, we're going to extract water features from 4-band aerial imagery. This is high resolution, 1 meter aerial imagery. As you can see here, the water features are fairly distinct. Before building the rule set, we want to think about what makes these water features stand out. First of all, their spectral characteristics: they're very dark. Second, their geometric characteristics: they're quite large compared to other features in the scene. So when we build our rule set, which is essentially a knowledge-based expert system, we want to transfer this information into a process within eCognition to automatically extract these features using a combination of segmentation and classification algorithms. Then we want to export our results to a vector feature class that we can bring into our GIS software package. After you've set up your eCognition project, you'll want to switch to the Develop Rule Sets view. This has all the toolbars and windows configured correctly for rule set development. Now I'm going to move over and adjust my image layer mixing. Because I have aliases assigned to this project, I can create a color infrared composite by assigning the near infrared, red, and green bands to the red, green, and blue color guns respectively. You can also play around with the equalizing settings to optimize the display. Now we're ready to start developing our rule sets. Over in the process tree, I'm going to right click and choose Append New. I'm first going to insert a parent process. A parent process is the main process under which all child processes will fall. Think of the rule set as being more like a program.
Having a parent process thus allows me to execute all subsequent algorithms with one click. So I'll just call this extract water, and then I'll click OK to add the rule to my process tree. Typically the first step in an object based image analysis workflow is to create image objects through a segmentation algorithm. I'll insert a segmentation algorithm by right clicking on the extract water process and choosing Insert Child. This will insert the segmentation algorithm below the parent extract water process. The first step in the Edit Process window is to specify the algorithm. I'm going to choose the multiresolution segmentation algorithm under the segmentation section. The next step is to specify the image object domain. We don't have any image objects yet, so we need to create image objects from the pixel level; we'll leave pixel level specified under image object domain. We'll leave the other image object domain settings as is and move over to the algorithm parameter settings. We'll enter a name for our new level and call it Level 1. Then we can move down to the segmentation settings. I'll expand the image layer weights. Because the visible bands (red, green, and blue) are highly correlated, I'll give a higher weight to the near infrared band. This means that when image objects are created, the near infrared band will be twice as important as the visible bands. I'm after larger objects, the water features, so I'll increase the scale parameter to 40. Finally, moving down to the composition of homogeneity criterion section, I'll adjust the shape and compactness settings. The shape setting specifies how much weight shape carries relative to spectral information; you generally want the spectral contribution to stay as high as possible, so we'll keep shape low, at 0.2. Last is the compactness setting. Water features tend to be rather compact, so we'll raise this to a value of 0.8. It's important not to get too wrapped up in the segmentation settings. Tuning them is a trial and error process, and no segmentation is ever going to be perfect.
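To make the shape and compactness weights less abstract, here is a rough sketch of how they combine in the Baatz–Schäpe style homogeneity criterion that underlies multiresolution segmentation. This is an illustration only; eCognition's actual implementation is proprietary, and the example heterogeneity values are made up.

```python
def overall_heterogeneity(h_color, h_smooth, h_compact,
                          shape_weight=0.2, compactness_weight=0.8):
    """Combine color and shape heterogeneity as suggested by the
    segmentation settings (a sketch; the product's internals may differ).

    shape_weight       -- 0.2 here: 80% of the decision stays spectral
    compactness_weight -- 0.8 here: within the shape term, favor
                          compact objects over smooth-bordered ones
    """
    h_shape = compactness_weight * h_compact + (1 - compactness_weight) * h_smooth
    return shape_weight * h_shape + (1 - shape_weight) * h_color

# A candidate merge of two objects is accepted while the combined
# heterogeneity stays below a threshold derived from the scale
# parameter (40 in this example).
scale = 40
f = overall_heterogeneity(h_color=900.0, h_smooth=12.0, h_compact=8.0)
print(f, f < scale ** 2)
```

Raising the scale parameter raises the threshold, so merging continues longer and the resulting objects are larger, which is exactly why we set it to 40 to capture whole water bodies.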
Once I've finished adjusting my settings, I can click OK on the Edit Process window. To run the algorithm, I'm just going to right click on it and choose Execute. The time it took to execute the algorithm will appear in the process tree once it's done running. You can turn the image object outlines on and off by clicking the Show/Hide Image Object Outlines button. Clicking the Pixel View button toggles between displaying the image pixel values and the mean band values for each image object. Within eCognition, image object attributes are known as features. To display these attributes, go down to the Image Object Information window, right click, and choose Select Features to Display. eCognition has literally hundreds of features for image objects, but we'll start with some basic ones: the mean band values for the four bands of our aerial image, that is, the blue, green, red, and near infrared values. Double clicking on them will add them to the selected side. Once we click OK, we'll see them displayed in the Image Object Information window. Clicking on an image object will display all the features we just added in the Image Object Information window. You can use the Image Object Information window to help determine the appropriate threshold value to separate water from other features. We see that water objects typically have much lower near infrared values than the other image objects in the scene. Another useful way of determining appropriate thresholds is Feature View. Feature View provides a mechanism for displaying the values of each feature across all objects. You can toggle between Feature View and your regular image view using the appropriate buttons in the toolbar. It's a good idea to right click and choose Update Range to gather the most recent information about the image objects in your scene. You can then use the threshold bars down below to threshold out specific values.
You may want to start by thresholding out lower values and then experiment with thresholding out upper values. By using Feature View in combination with the Image Object Information window, we can determine the appropriate near infrared threshold for separating water features from other features in the scene. It looks like a value of 50 will be a good starting point. Now we can insert a rule to classify image objects based on this value. We'll right click on our segmentation algorithm and click Append New to insert a new rule in our process tree. Under the algorithm dropdown menu, we'll select assign class. This is a very simple classification algorithm that's used for threshold classification based on image object features. Under the image object domain, we'll leave it as the image object level, because we want to classify our image objects. Those image objects are stored on Level 1, the level we created through segmentation, and we'll click on the class filter and choose to classify only those objects that have no classification. For our threshold, we'll go in and select the mean near infrared property of each image object. For the threshold condition, we'll say the value is less than 50. This means that all unclassified image objects with a mean near infrared value of less than 50 will meet our criteria. Next, we'll move over to the algorithm parameters. We have to specify the class our image objects should be assigned to if they meet this criterion. We'll enter the term water and press Enter. Because we don't currently have a class named water, we'll have to define it. All we need to do here is select the color, we'll choose blue, and then click OK to create the class. Once we've created the class, you'll see it appear on the right in the Class Description window. Finally, we can click OK to add our new algorithm to the process tree.
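Conceptually, the assign class step above is just a per-object threshold test. The sketch below mimics it in plain Python; the object IDs and mean NIR values are invented for illustration.

```python
# Sketch of the assign class step: each image object carries a mean
# near infrared value, and unclassified objects below the threshold
# are assigned to "water". All values here are made up.
objects = {
    1: {"mean_nir": 32.5, "class": None},
    2: {"mean_nir": 141.0, "class": None},
    3: {"mean_nir": 48.9, "class": None},
    4: {"mean_nir": 75.2, "class": None},
}

NIR_THRESHOLD = 50  # chosen by inspecting Feature View / object info

for obj in objects.values():
    if obj["class"] is None and obj["mean_nir"] < NIR_THRESHOLD:
        obj["class"] = "water"

water_ids = sorted(oid for oid, o in objects.items() if o["class"] == "water")
print(water_ids)  # objects 1 and 3 fall below the threshold
```

Restricting the rule to unclassified objects matters: it is what makes the rule set behave like a sequence of sieves, with each later rule only seeing what earlier rules left behind.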
When the assign class algorithm is executed, all image objects that have a mean near infrared value of less than 50 will be assigned to the water class. To see the objects that were classified, we'll click on the View Classification button, and then we can toggle the image object outlines on and off, or choose to display the classification as transparent or solid color, to better understand the results of our classification. As mentioned before, the process tree is a linear workflow, and thus executing an algorithm a second time will often result in errors. For example, if we choose to run the assign class algorithm again, we receive a "domain is empty" error. This is because the assign class algorithm states that anything unclassified with a mean near infrared value of less than 50 gets assigned to the class water. Well, we've run that algorithm already, and we've classified those image objects as water, so there are no more unclassified image objects that meet this criterion. One of the best practices for generating rule sets is to have a reset function at the beginning of your rule set. By using the delete levels algorithm, you can remove all existing image object levels. This has the effect of essentially giving you a blank slate, resetting your project so that your rule set can execute from start to finish without incident. The delete image object level algorithm simply deletes the image object level, including all objects and the classifications associated with those objects. Now when we execute our extract water parent process, it will execute every algorithm beneath it. It will start by deleting the image object levels, it will then perform the multiresolution segmentation, and finally it will classify image objects to the water class based on their mean near infrared values.
Although our classification has been largely successful, if we zoom into some of the residential areas, we notice that shadows, which have very low near infrared values, have been mistakenly classified as water. These shadowed areas don't have any spectral characteristics that allow us to distinguish them from water, but their geometric characteristics are very different: specifically, they're much smaller. So let's go over to the Image Object Information window, right click, choose Select Features to Display, and add some geometric attributes. We can find the geometric features under the object features section. There we'll scroll down to expand geometry, and within geometry we'll expand extent to get the area feature. We'll double click it to add it to the selected features, and click OK to add it to our Image Object Information window. Now when we click on an image object, we see that its area is displayed. Our water polygons are currently comprised of numerous objects. Thus, in order to use an area threshold most effectively, we'll need to merge our water objects. In our process tree we'll click Append New to add a new algorithm. Under the algorithms dropdown menu we'll scroll down to the basic object reshaping section and choose merge region. We want to merge image objects on the image object level Level 1, provided they belong to the water class, so we'll select the water class in the class filter. We'll leave all other settings as they are and click OK to add this process to our process tree. When we right click the merge region algorithm and click Execute, you'll see that the boundaries between our water objects break down. However, all other image objects remain unchanged. This is because water was the only class used in the merge region image object domain.
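The merge region behavior can be sketched as a union over an adjacency graph: an edge between two objects is collapsed only when both ends carry the class in the domain. The object IDs, classes, and adjacency below are invented for illustration.

```python
# Sketch of merge region: neighbouring objects are fused only when
# BOTH sides belong to the "water" class; everything else is untouched.
classes = {1: "water", 2: "water", 3: None, 4: "water", 5: None}
neighbours = {(1, 2), (2, 3), (3, 4), (4, 5)}  # which objects touch

# Simple union-find over object IDs.
parent = {oid: oid for oid in classes}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for a, b in neighbours:
    if classes[a] == "water" and classes[b] == "water":
        union(a, b)

merged = {oid: find(oid) for oid in classes}
print(merged)  # objects 1 and 2 now share one root; 3, 4, 5 are unchanged
```

Note that objects 1 and 2 merge because both are water, while object 4, although water, stays separate: its only neighbours are unclassified, mirroring how merge region respects the class filter.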
Now that we've merged our water objects, we can append a new rule to assign all those image objects that belong to the water class and fall below our area threshold to a new class. So we'll use the assign class algorithm and specify water as the only class in our image object domain. For our threshold we'll use an area threshold based on the number of pixels. We'll say any water objects that have fewer than 10,000 pixels should be assigned to this new class. We'll call our new class shadow, give that class a color, and then click OK to add it to our class hierarchy. By executing this assign class algorithm, all water objects with an area of less than 10,000 pixels are assigned to the shadow class. Moving over to the residential section where we had the water confusion before, we see that those objects that were previously classified as water now appear yellow. And we've done this without accidentally classifying any of the lakes and ponds in our scene as shadows, so this rule executed successfully. Zooming into some of our water objects shows that we still have one lingering problem: we have image objects on the edges of these water objects that don't have very low near infrared values. As a result, they were not classified as water. The only way to successfully classify these image objects is based on their context. So in our Image Object Information window we'll go to Select Features to Display and add some context features. These appear under class related features. We'll go to relations to neighbor objects and choose relative border to, and create a new relative border to feature for water. This feature specifies, for each image object, what percent of its border is adjacent to a water object. When we examine the properties of the image objects we failed to classify as water, we see that their mean near infrared value is generally less than 100.
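The area-based shadow rule described in this step is another simple per-object filter, and it only works because the merge was done first: one large lake made of many small fragments would otherwise be wiped out fragment by fragment. A sketch, with invented object names and pixel areas:

```python
# Sketch of the shadow rule: merged water objects smaller than
# 10,000 pixels are reassigned to "shadow". Areas are illustrative.
objects = {
    "lake":     {"class": "water", "area_px": 412_000},
    "pond":     {"class": "water", "area_px": 38_500},
    "shadow_a": {"class": "water", "area_px": 2_100},
    "shadow_b": {"class": "water", "area_px": 640},
}

MIN_WATER_AREA_PX = 10_000

for obj in objects.values():
    if obj["class"] == "water" and obj["area_px"] < MIN_WATER_AREA_PX:
        obj["class"] = "shadow"

print({name: o["class"] for name, o in objects.items()})
```

Both the lake and the pond survive because their merged areas sit well above the threshold, while the small shadow fragments are reassigned.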
We also see that they typically share more than 50% of their border with an existing classified water object. We also see that there are lots of other dark objects out there, shadows from trees, but these don't share nearly as much of their border with the water objects. Our first step in remedying this situation is going to be to use the assign class algorithm. We want to focus only on those image objects that have not yet been assigned a classification, so we'll specify unclassified for the image object domain. Our first threshold is going to be based on near infrared values. We want to make sure that we're not assigning just any image objects to this class; we want to focus on those that are relatively dark, so we'll use a mean near infrared value of less than 100. Our second threshold is going to be based on context. We want to say that only those objects that have a mean near infrared value of less than 100 and share more than 50% of their border with a water object can be assigned to the new class. We'll call the new class missed water and we'll make it a very bright green color. When we execute the assign class algorithm, we see that it has the intended effect: those image objects that we failed to classify as water before are now assigned to the missed water class. As a final step, we'll insert a grow region algorithm. The grow region algorithm will grow the existing water objects into these missed water objects. In the image object domain we'll specify the class filter as water, and over in the algorithm parameters we'll specify the candidate class as missed water. What the grow region algorithm does is grow all image objects from the image object domain into the candidate classes, so this will grow water objects into missed water objects. We see when we execute it that the missed water objects have been consumed by the water objects.
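The "relative border to" feature itself can be sketched on a tiny label grid: walk every pixel of an object, count the 4-neighbour edges that face a different object (or the scene edge), and divide each neighbour's count by the object's total border length. The grid and class assignments below are made up for illustration.

```python
# Sketch of the "relative border to water" feature on a label grid.
grid = [
    [1, 1, 1, 1],
    [1, 2, 2, 1],
    [3, 2, 2, 1],
    [3, 3, 1, 1],
]
classes = {1: "water", 2: None, 3: None}  # object 2 is the dark edge object

def relative_border(grid, obj_id):
    """Fraction of obj_id's border shared with each neighbouring object."""
    shared, total = {}, 0
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != obj_id:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    neighbour = grid[nr][nc]
                else:
                    neighbour = None  # scene edge
                if neighbour != obj_id:
                    total += 1
                    shared[neighbour] = shared.get(neighbour, 0) + 1
    return {k: v / total for k, v in shared.items()}

borders = relative_border(grid, 2)
water_share = sum(v for k, v in borders.items() if classes.get(k) == "water")
print(borders, water_share)  # object 2 shares 75% of its border with water
```

Here object 2 shares 75% of its border with the water object, so it passes the greater-than-50% context condition, whereas an isolated tree shadow with little or no water frontage would not.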
The final step in our rule set will be to export the image objects classified as water to a vector layer that we can then use in our GIS software packages. So we'll append a new algorithm. We'll scroll all the way down and choose the export vector layer algorithm. We don't want to export all image objects, only those for our class of interest, which is water, so we'll specify water in the image object domain. We'll give an output name for our water shapefile; we'll call it water polys. We'll make sure that we change the shape type to polygons, and we'll choose to smooth the image object outlines. You'll want to periodically save your project, which will save the state of the project and the associated rule set. But even more important, especially when you're done developing your rule set, is to right click in the process tree and choose Save Rule Set. Save your rule set independently of the project for use later on. You can use parent processes as sub-containers, a way to organize and document your rule set. Right clicking and dragging an algorithm onto another process will insert it beneath that process. To put an algorithm in line with another algorithm, simply left click and drag it on top of that algorithm. As you can see, the parent processes give structure and organization to our rule set. Now that our rule set is complete, we can execute it from start to finish by running the parent process, and this will include exporting the final results to a shapefile. The shapefile will appear in a results subfolder located in the same location where we saved our eCognition project. In this video we showed you how to create rule sets within the eCognition environment. We used a fairly simple set of rules to extract water features from high resolution imagery in a way that couldn't be done using traditional pixel based classifiers.