This is the Lesson 5 lecture, which focuses on rule-based GEOBIA image classification. In particular, we are going to look at expert-system GEOBIA rule sets, which are based on expert domain knowledge. Remote sensing expert systems depend on experience-based, firsthand knowledge of the study area and the phenomena being examined, and interpretation keys inform the development of these expert systems. Effective interpretation keys integrate knowledge of the sensor, the reflectance characteristics of the study area, and domain expertise from fields such as forestry, ecology, disaster response, security, urban studies, engineering, and agriculture. Well-designed expert systems can be adapted for different data sets and different study areas; this is known as rule set transferability, and it is an area of active research.

The elements of image interpretation are very important in constructing an interpretation key that supports GEOBIA classification. These elements are the shapes of the objects; their sizes; their tone, spectra, or color; the patterns the objects display in the scene; texture, which refers to the apparent roughness or smoothness of a surface; site, that is, where an object is located; and association, that is, how an object relates to the neighboring objects around it.

In this slide, we consider the relative complexity of the elements of image interpretation as related to rule set construction. Image interpretation can be qualitative, but in this lesson we will focus on quantitative interpretation that will help us develop our rule sets. The triangle on the slide represents the relative complexity of these elements as related to authoring rule sets, and the low-hanging fruit is spectra, or color.
So that is the first element we focus on while developing a rule set, but we can also include attributes such as texture and height, as we will in our rule sets. As you start getting into elements like pattern, shadow, site, and association (how a particular object is related to the other objects around it), you have to draw upon more involved and complex algorithms within eCognition. So our focus will be mainly on tone or color (that is, spectra), the shape of the objects, their texture, and their height.

Segmentation and object attribute exploration is a very important precursor activity before you start developing your interpretation key and your rule sets. You first have to consider the characteristics of the sensors and the data they produce. Spectral rasters such as NAIP imagery and older Landsat imagery are 8-bit, so the digital numbers range from 0 to 255. However, Landsat 8 and many UAS sensors produce 16-bit imagery, and some UAS cameras now deliver 24-bit imagery, so the range of the digital numbers changes with the sensor. The OBIA rule set development we will do for the Lab 5 activity uses 8-bit NAIP imagery.

We will also be looking at spatial rasters, that is, surfaces derived from the LiDAR data: specifically the digital surface model (DSM), the digital elevation model (DEM), and the normalized digital surface model (nDSM). As you know, the nDSM gives you heights above the ground. A particularly important metric is the standard deviation of the nDSM, also known as the Z deviation. This is a measure of surface roughness: if you have a flat, horizontal surface, the standard deviation of the elevation is very low.
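To make this concrete: the nDSM is simply the DSM minus the DEM, and an object's Z deviation is the standard deviation of the nDSM values inside it. The elevation values in this short numpy sketch are made up for illustration; they are not taken from the lab data:

```python
import numpy as np

# Hypothetical 3x3 elevation surfaces in meters (not the Lab 5 rasters).
dsm = np.array([[105.0, 106.2, 104.8],
                [112.0, 113.5, 111.0],
                [100.1, 100.0, 100.2]])
dem = np.full((3, 3), 100.0)  # bare-earth surface

ndsm = dsm - dem  # normalized DSM: height above the ground

# Z deviation = standard deviation of the nDSM values within an object.
flat_roof = np.full(9, 4.0)  # flat roof: every pixel about 4 m up
canopy = np.array([8.0, 12.5, 6.0, 11.0, 9.5, 13.0, 7.5, 10.0, 12.0])

z_dev_roof = np.std(flat_roof)  # 0.0 -> flat surface
z_dev_canopy = np.std(canopy)   # large -> jagged canopy surface
```

The same arithmetic is what eCognition reports per object; you will read these values off the software rather than compute them yourself.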
That means the scatter of the LiDAR points forms a very tight peak about the surface, and the Z deviation will be relatively low; on a jagged surface like the top of a forest canopy, where the surface is certainly not flat, the Z deviation will be high. In this way you can distinguish a flat surface, such as pavement or a flat roof, from the top of a forest canopy, which is more jagged and has a high Z deviation. Also bear in mind that a flat but sloped surface can produce a high Z deviation within an object as well, because the elevation of a sloped surface changes across the object.

There are more object attributes that we will consider in the Lab 5 classification activity. We will use texture measures in eCognition, in particular the metrics known as Texture after Haralick, named after the researcher who developed these ideas. GLCM stands for Gray Level Co-occurrence Matrix, and we will use two of these texture measures. The first is GLCM homogeneity in all directions. This metric looks at how homogeneous a particular object is in all directions: an object defining a pasture or a water surface should be fairly homogeneous in all directions, because the digital numbers in the image remain nearly the same. A forest surface, however, will not be homogeneous in all directions: it is a more textured surface in which the digital numbers change rapidly in every direction, with bright regions, shadowed regions, and brighter regions again across the top of the canopy.
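To build intuition for these two Haralick measures before you explore them in eCognition, here is a small self-contained numpy sketch of GLCM homogeneity and contrast. eCognition computes its Haralick textures internally; this outside re-implementation, with toy 8-bit patches, is only illustrative:

```python
import numpy as np

def glcm(patch, drow, dcol, levels=256):
    """Gray-level co-occurrence matrix for one pixel offset, symmetric, normalized."""
    p = np.zeros((levels, levels))
    rows, cols = patch.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + drow, c + dcol
            if 0 <= r2 < rows and 0 <= c2 < cols:
                p[patch[r, c], patch[r2, c2]] += 1
                p[patch[r2, c2], patch[r, c]] += 1  # count both orders
    return p / p.sum()

def texture_all_directions(patch):
    """Mean GLCM homogeneity and contrast over 0/45/90/135-degree offsets."""
    i, j = np.indices((256, 256))
    homs, cons = [], []
    for drow, dcol in [(0, 1), (-1, 1), (-1, 0), (-1, -1)]:
        p = glcm(patch, drow, dcol)
        homs.append((p / (1.0 + (i - j) ** 2)).sum())  # GLCM homogeneity
        cons.append((p * (i - j) ** 2).sum())          # GLCM contrast
    return np.mean(homs), np.mean(cons)

# Hypothetical 8-bit patches, not Lab 5 objects.
smooth = np.full((8, 8), 120, dtype=int)            # pasture- or water-like
rough = (np.indices((8, 8)).sum(axis=0) % 2) * 255  # checkerboard: textured extreme

h_smooth, c_smooth = texture_all_directions(smooth)  # homogeneity 1, contrast 0
h_rough, c_rough = texture_all_directions(rough)     # lower homogeneity, high contrast
```

The uniform patch scores maximal homogeneity and zero contrast, while the checkerboard does the opposite, which is exactly the behavior described above for pasture or water versus forest canopy.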
Similarly, we will use a second texture measure, GLCM contrast in all directions: if the contrast is high as you look in all directions, this parameter will have a higher value, and if things appear more homogeneous in all directions, it will have a lower value. You will discover the values of these two texture metrics through object exploration, to get a feel for the range of values for different types of objects.

We will also look at the LiDAR intensity data layer, which represents the reflected intensity of the near-infrared laser used in LiDAR acquisition. This layer is somewhat similar to the optical near-infrared band, but it is not exactly the same, and it is useful for feature discrimination. If you display the LiDAR intensity layer in a viewer, you can see very clearly that it discriminates between the different features, and it becomes very useful in the segmentation process for teasing out the differences between ground features.

Another useful metric, which we will not necessarily use here but which you can experiment with, is the slope of the nDSM raster; this can be very useful for feature delineation and building outlines. Object exploration is a very important precursor activity for developing rule sets, and you will perform such an activity in Lab 5.

Now, some comments on the Lab 5 activity, in which you will perform a complete GEOBIA image classification, starting from the data and ending with a map product: a land cover classification.
In Lab 4 you explored the multiresolution segmentation algorithm by varying the scale parameter and observing its impact on object size, on how well the objects represented the landscape you were segmenting, and on how well they pulled out particular features. You also ran the rule set for basic water extraction in Lab 4 and exported the resulting classification as a shapefile, which gave you an idea of the entire GEOBIA workflow: segmentation, classification, and then export.

In Lab 5 you will use the same data as in Lab 4 to complete the classification of the six major land cover classes. This is somewhat ambitious: I think you will be able to map at least four or five of these classes very clearly, but there will most likely be one land cover class that is not mapped very satisfactorily. Do your best and see how far you can get with this classification activity. The resulting land cover map will be exported as a shapefile, you will need to find the area of each land cover class in ArcGIS Pro, and then you will compose a map of your land cover classification. This is a foundational hands-on activity demonstrating the entire GEOBIA workflow from imagery to map product, going from data to information, and it will give you a clear idea of how to develop your own GEOBIA rule sets and applications using both orthoimagery and LiDAR.

Now let's take a look at a few land cover class attributes for an interpretation key, as they are reflected in the remotely sensed data that you have. This is not a complete list, and you will flesh out these ideas based on direct observation during the object exploration activity in Lab 5. For forest and trees, for example, a height greater than roughly 2.5 meters (about 8 feet) is a good threshold for a tree.
Anything lower than that we could call a shrub, though it may possibly be a small tree; this height threshold will catch most of the trees in the scene. Tree objects will certainly have a high NDVI, particularly in leaf-on condition, and the objects representing trees will have a high Z deviation on the nDSM: the nDSM values vary considerably because the canopy surface is quite jagged. For water, you will have low near-infrared reflectance, and that is a good way to nail down the classification of the water class and the water objects. Buildings will have a height greater than about eight feet or so, which is reasonable, and a low Z deviation for flat roofs; I want you to explore the other building objects to see whether you can tease out additional metrics that help you identify buildings uniquely. For pasture and grass, you will have a low height, so the nDSM value will be very low (say, less than 0.1 or 0.2 meters), a low Z deviation because the surface is quite flat, and a high NDVI for a grassy field; the texture measures also come into play for all of these, and I want you to learn their values by exploration. Bare soil typically has high reflectance in red, green, and blue, and if it is fairly uniform you can bring in the texture measures here as well. Roads and pavements typically have a low height and a low Z deviation, and they can also have high reflectance in red, green, and blue, but I want you to find out by direct exploration. You will be doing a formal exploration activity that is built into Lab 5, and this is the kind of thing that is best learned by doing.
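Thresholds like these can be prototyped as a simple decision cascade over per-object attributes, which is essentially what an eCognition rule set does. The attribute names and cutoffs in this Python sketch are hypothetical placeholders standing in for the values you will determine through object exploration; it is not the Lab 5 rule set:

```python
def classify(obj):
    """Toy expert-system rule cascade over per-object attributes.

    obj: dict with mean_nir (reflectance), mean_ndvi, height (mean nDSM, m),
    and z_dev (m). All thresholds are illustrative guesses to be tuned
    by object exploration, not the official lab values.
    """
    if obj["mean_nir"] < 0.05:                          # water: very low NIR
        return "water"
    if obj["height"] > 2.5 and obj["mean_ndvi"] > 0.4:  # tall and green
        return "forest"
    if obj["height"] > 2.5 and obj["z_dev"] < 0.3:      # tall with a flat top
        return "building"
    if obj["height"] < 0.2 and obj["mean_ndvi"] > 0.4:  # low and green
        return "pasture"
    if obj["height"] < 0.2:                             # low, not green
        return "road_or_soil"
    return "unclassified"

label = classify({"mean_nir": 0.02, "mean_ndvi": 0.0,
                  "height": 0.0, "z_dev": 0.0})  # -> "water"
```

Note how the rule order matters: water is tested first on its low near-infrared response, and the vegetation test precedes the building test so a jagged, green canopy taller than 2.5 m is never mistaken for a roof.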
I want you, again, to play with the texture measures and see what values and ranges you get for the homogeneity and contrast textures for the different objects; you will get a feel for them as you fill out the table of observations in the Lab 5 object exploration activity. This particular lesson is more hands-on oriented, with the emphasis on working through the lab activities and getting a good feel for working with eCognition, its interface, and the syntax used for the rule sets. This greater emphasis on hands-on activities will also get you ready for your final project. If you have any questions or comments, please post them in the Lesson 5 general questions and comments discussion.