Hi, this is Jonathan Yildon with the University of Vermont, and this webinar is part of the AmericaView series on object-based image analysis. In today's presentation, I'll be giving an overview of some of the image segmentation techniques available within the eCognition software package. I'd like to point out that the true power of object-based image analysis is not the segmentation algorithm itself, but rather the ability to build context through iterations. As humans, we've evolved to make use of extensive contextual information when we identify features, and we can replicate that within the object-based image analysis environment. So with segmentation applied to an image, we can generate image objects. But once we have those objects, we may need to apply additional segmentation routines, and we may also need to apply fusion, classification, and morphology routines to refine those image objects. It's not uncommon, within a single expert system applied in an object-based image analysis workflow, to need to segment five, ten, or even fifty times, and thus it becomes important to understand some of the performance issues associated with the various segmentation algorithms within eCognition. I'd like to give you a quick overview of the data that we'll be working with today. First, I have a vector layer, in this case hydrology polygons. I have a four-band imagery data set, in this case aerial imagery. I also have two LiDAR data sets: one is a LiDAR intensity image, and the other is a LiDAR normalized digital surface model, or nDSM, in which the pixels represent the height of features relative to the ground. Moving over to eCognition, you can see over on the right here in my process tree that I have all the different segmentation examples I'm going to demonstrate today already preloaded. I'd like to point out that I have a little routine here called reset. You're going to see me execute this routine over and over again.
Essentially, what I'm doing here is deleting any existing segmentation levels or maps so that I have a clean slate on which to apply the new segmentations. So for segmentation types, let's start by going over the chessboard segmentation. Chessboard segmentation is the most basic type of segmentation you can do within eCognition. It's also the fastest segmentation from a processing standpoint. All it's doing is cutting up your data into squares. So in this case, I have one meter data. I'm applying the image object domain to the pixel level. I'm creating image objects with a size of 100, so essentially squares that are 100 meters on each side. And I'm creating a new level called level one. Let's go ahead and execute that. And we see here that we simply have 100 meter squares created over our image. These, of course, don't represent features very well, and you're typically not going to use the chessboard segmentation algorithm in this fashion. It is, however, very useful for incorporating existing thematic data sets. So let's clear our slate here. Now we're going to apply the chessboard segmentation algorithm, once again to the pixel level because we don't have any existing image objects. But here I'm setting the object size to a very high value. I'm calling my new level level one. What's different from last time is that I'm constraining my operations based on an existing thematic layer, this hydrology polygon layer. So while I'm creating these very large chessboard objects, the operation is going to be constrained by the boundaries of my existing vector data set. This is essentially the mechanism you use to incorporate vector data into eCognition: by turning the vectors into image objects. Executing that algorithm, we can see that we now have those hydrology polygons existing within eCognition as image objects.
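For intuition, the chessboard idea is simple enough to sketch in a few lines of NumPy. This is an illustrative stand-in, not eCognition's implementation, and the array dimensions are made up:

```python
import numpy as np

def chessboard_segmentation(shape, object_size):
    """Label every pixel with the ID of the square tile it falls in.

    A minimal sketch of the idea behind chessboard segmentation: the
    image is simply cut into object_size x object_size squares, with
    no regard for the underlying pixel values.
    """
    rows, cols = shape
    row_idx = np.arange(rows) // object_size   # tile row for each pixel row
    col_idx = np.arange(cols) // object_size   # tile column for each pixel column
    n_tile_cols = -(-cols // object_size)      # ceiling division
    # Combine tile row and tile column into a unique object ID per square.
    return row_idx[:, None] * n_tile_cols + col_idx[None, :]

labels = chessboard_segmentation((400, 400), 100)
print(np.unique(labels).size)   # 16 objects: a 4 x 4 grid of 100 m squares
```

Because every pixel in the same square gets the same object ID regardless of its value, the result ignores image content entirely, which is exactly why the algorithm is so fast.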
And of course, at this point I can work with them like any other image object: classifying them, applying additional segmentation algorithms to them, and accessing all of the wonderful image object attributes that eCognition provides. Now the most popular, or I should say probably the most widely used, segmentation algorithm within eCognition, and really what gained the software prominence early on, is the multi-resolution segmentation algorithm. So once again, let's clean our slate, and I'm going to go ahead and open the multi-resolution segmentation algorithm I have here. You can see once again I'm applying it to the pixel level, so I'm using strictly the pixels, not existing image objects, to create the new objects. I'm creating a new level called level one. And here I'm weighting the data sets that I use. I can weight any raster data sets that I have. So while I have some LiDAR data sets, in this case for multi-resolution segmentation I'm only using my four image bands. And I could choose, if I wanted to, to weight something like my red band or my near-infrared band more than the other bands. I also have a scale parameter here. This is what controls the size of the image objects. Now, the scale parameter relates to the type of data you have: for one meter data, such as I'm working with here, a scale parameter of 20 is going to produce very different sized objects than it would if I were using 30 meter Landsat satellite imagery. I can also weight the shape and compactness of these image objects. Generally, you don't want your shape weight to be too high, the reason being that it then ignores the actual spectral information contained within your data. And the compactness setting is up to you. Now, it's easy to spend hours changing the scale parameter and the shape and compactness parameters, trying to get the resulting image objects to accurately represent your features of interest.
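To give a feel for how the scale parameter caps object growth, here is a sketch of the spectral part of the multi-resolution merge criterion as described in the published literature: the cost of a merge is the weighted increase in size-scaled standard deviation, and a merge is accepted only while that cost stays below the square of the scale parameter. The function names are mine, the shape term is left out for brevity, and none of this is eCognition code:

```python
import numpy as np

def color_fusion_cost(pixels_a, pixels_b, band_weights):
    """Spectral part of the multi-resolution merge cost (sketch).

    pixels_a / pixels_b are (n_pixels, n_bands) arrays for the two
    candidate objects.  The cost is the weighted increase in
    size-scaled standard deviation caused by merging them.
    """
    merged = np.vstack([pixels_a, pixels_b])
    cost = 0.0
    for c, w in enumerate(band_weights):
        cost += w * (len(merged) * merged[:, c].std()
                     - len(pixels_a) * pixels_a[:, c].std()
                     - len(pixels_b) * pixels_b[:, c].std())
    return cost

def should_merge(pixels_a, pixels_b, band_weights, scale):
    # The merge is accepted while the fusion cost stays below scale squared,
    # which is why a larger scale parameter yields larger objects.
    return color_fusion_cost(pixels_a, pixels_b, band_weights) < scale ** 2
```

Two spectrally identical neighbors merge at any reasonable scale, while two very different ones only merge once the scale parameter is cranked up, which matches the behavior described above.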
And I'd caution you to keep in mind that segmentation is something you're going to have to do over and over again as part of this iterative process to build context. So my recommendation is to spend a few minutes playing around with these parameters; generally, you just want to ensure that, for whatever feature you're interested in, you don't have image objects resulting from your segmentation that contain more than one feature. So I wouldn't want image objects, for example, that include both buildings and trees if I'm concerned with differentiating those two features. So let's go ahead and execute the multi-resolution segmentation algorithm. The first thing you'll notice, when it comes to processing time, is that the multi-resolution segmentation on this image took 40 seconds, compared to that initial chessboard segmentation we did, which took less than a second. So once again, multi-resolution segmentation is very processor intensive. Of course, the nice thing about it is that we get image objects that are much more suitable for classification. While I'm not going to go into the full process of the iterative loops needed to build context and create more meaningful objects, I do want to point out a segmentation algorithm that's very useful in this process, and that's the spectral difference segmentation algorithm. In order to apply the spectral difference segmentation algorithm, you have to have existing image objects. In this case, we created image objects with the multi-resolution segmentation algorithm. That's why you can see here, under the image object domain, that I'm applying this algorithm at the image object level; that means to existing image objects. I'm applying it on level one, because that was the name of the level I created from the multi-resolution segmentation. And here I'm applying a maximum spectral difference of five.
Once again, this maximum spectral difference setting is totally dependent on the type of data that you use, and it may differ across different areas and different types of data. Much like multi-resolution segmentation, I can also choose to weight my layers. So I can change the weights, for example, and perhaps up the influence of a particular band. I can also choose to include or exclude thematic layers when I'm doing this. So let's go ahead and execute this spectral difference algorithm and take a look at the result. What happened here is that we merged image objects that were spectrally similar. As you can see, we have a much better representation of this field now, because rather than being comprised of all these tiny little image objects, it's now a much larger object, allowing us to perhaps use shape and size to classify it rather than simply the spectral information. Let's reset, going back to a clean slate. Now we're going to use another segmentation algorithm that breaks things into squares, similar to the chessboard segmentation algorithm, only in this one we're going to have variable-sized squares based on the heterogeneity of the data. This is the quadtree-based segmentation algorithm. Once again, because I have a clean slate, I'm applying it to the pixel level. There are different modes here, but generally you're always going to use the color mode. The scale parameter is similar to multi-resolution segmentation: it will create larger image objects as you increase it. And you have the opportunity to weight your layers here, but this is strictly a Boolean operation, so yes or no as to whether to use those layers. And we're going to click execute. So, as I mentioned earlier, you get larger squares in areas that are more homogeneous, and in areas with more heterogeneity, from one feature to the next, you're going to get smaller squares.
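The variable squares can be sketched with a small recursive function. The homogeneity test here (max minus min within a tile, compared against the scale value) is a stand-in for eCognition's color-mode criterion, and the sketch assumes a square, power-of-two image for brevity:

```python
import numpy as np

def quadtree_split(img, scale, r0=0, c0=0, size=None):
    """Recursively split a square image into quadrants until each
    tile is homogeneous enough.  Returns (row, col, size) tiles.
    """
    if size is None:
        size = img.shape[0]
    tile = img[r0:r0 + size, c0:c0 + size]
    # Stop splitting when the tile is uniform enough or a single pixel.
    if size == 1 or tile.max() - tile.min() <= scale:
        return [(r0, c0, size)]
    half = size // 2
    tiles = []
    for dr in (0, half):
        for dc in (0, half):
            tiles += quadtree_split(img, scale, r0 + dr, c0 + dc, half)
    return tiles

img = np.zeros((8, 8))
img[0, 0] = 10          # one bright pixel forces splitting in that quadrant
print(len(quadtree_split(img, scale=5)))   # 10 tiles
```

The quadrant containing the bright pixel keeps splitting down to single pixels, while the three uniform quadrants come back as one large square each, which is exactly the large-squares-where-homogeneous behavior described above.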
You're going to notice that the quadtree segmentation, especially compared to the multi-resolution segmentation, which took 40 seconds, took a little over two seconds. So it's very, very fast. It doesn't produce particularly meaningful objects based on their shape, but I do find that the quadtree segmentation algorithm sometimes works very well for features that you want to classify based strictly on their spectral information. An example is a first pass at extracting impervious surfaces. Because we have image objects, we can do just what we did before with the multi-resolution segmentation algorithm and go ahead and apply the spectral difference segmentation algorithm. So, very similar to before, we're applying it at the image object level, that is, to the existing image objects we created as part of the quadtree segmentation; those objects are on level one. This time I set the maximum spectral difference to eight. I can weight my layers here, because it's the spectral difference segmentation algorithm, and I can choose to use or not use my thematic layers. And let's click execute. Now, I don't think these objects look quite as good as what we achieved using the multi-resolution segmentation algorithm followed by the spectral difference segmentation algorithm. But in some cases, particularly for these manmade features here, I think they do quite a good job of representing our features of interest. And if we look at the time, it took two seconds for the quadtree followed by 11.5 seconds for the spectral difference. That's much more efficient than the 40 seconds it took for the multi-resolution segmentation followed by the nearly two seconds it took for the spectral difference algorithm. So when you're working with extremely large projects and processing time becomes a concern, you can achieve nearly identical results, or results, I should say, that are perhaps good enough.
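The core of the spectral difference idea, fusing neighboring objects whose mean values are within the maximum spectral difference, can be sketched like this. Adjacency is reduced to one dimension purely for illustration; real image objects have two-dimensional neighborhoods:

```python
def spectral_difference_merge(segments, max_diff):
    """Merge neighbouring segments whose mean values differ by no
    more than max_diff -- the idea behind spectral difference
    segmentation.  Each segment is a list of pixel values.
    """
    merged = [list(segments[0])]
    for seg in segments[1:]:
        prev = merged[-1]
        mean_prev = sum(prev) / len(prev)
        mean_cur = sum(seg) / len(seg)
        if abs(mean_prev - mean_cur) <= max_diff:
            prev.extend(seg)        # spectrally similar: fuse the objects
        else:
            merged.append(list(seg))
    return merged

# Three tiny objects (means 100, 103, 150) with a max difference of 5:
result = spectral_difference_merge([[100, 100], [103, 103], [150]], 5)
print(len(result))   # 2 -- the first two merge, the bright one stays separate
```

This is why the algorithm turns a field made of many tiny quadtree or multi-resolution objects into one large object while leaving spectrally distinct neighbors alone.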
Sometimes using these different segmentation algorithms can be a big time saver. Now I'm going to introduce you to some of the threshold approaches to segmentation. These are particularly useful when you're working with something like a LiDAR normalized digital surface model. So once again, what we're seeing here is the height of features relative to the ground. As I scroll around, you can see that my buildings, this one in particular, are about five to six meters in height, and I have forest canopy that's much taller, up to 20 meters in some cases. So it's often nice, as part of my workflow, to do an initial segmentation where I just, for example, segment out all those tall features and then work with them separately, and then in subsequent steps do additional segmentations and so forth to separate out, perhaps, the trees and the buildings. So we reset here. The first algorithm I'm going to introduce you to is the contrast split algorithm. And in order to compare it to another algorithm, known as the multi-threshold segmentation algorithm, I'm going to use a map operation. Map operations are very powerful; they were introduced in eCognition 8. They allow you to create essentially a duplicate copy of your data set and apply different segmentation algorithms to each of these maps. Automatically, when you start eCognition, you're working in a map called main, and you always have to have this main map. But you can run an algorithm known as copy map to create a brand new map that's a duplicate of either all of your data or a subset of your data. So in this case, you can see that I'm running the copy map algorithm. My target map is going to be called Map2, and I'm copying over strictly that nDSM layer. And I'm going to choose to execute that. Now, if I split my windows horizontally, in one I can show Map2, and in the other, you'll notice, I'm displaying my main map.
These are also indicated in the lower right corner of each one of your views here. So the first thing is that on Map2, I'm going to use the contrast split segmentation algorithm. As with all the segmentation algorithms, I encourage you to read the reference materials so that you understand all the parameters, but I'm going to go over them quickly here. I'm applying this contrast split algorithm at the pixel level, and here I've specified that I want to use Map2 for that. I have some other settings here; the main things to keep in mind are that I have a new level, level one, that I'm creating as part of this, and that I'm going to be splitting off all objects that are two meters or taller. These are really the crucial settings in the contrast split algorithm. What's also very important is the layer that I'm using, in this case the normalized digital surface model. And what's unique about contrast split compared to the other segmentation algorithms we've been using is that it segments and classifies in one fell swoop. So my class for bright objects here is going to be tall: bright objects, those that exceed the minimum threshold, get that class, and those that are below the threshold are going to be unclassified. Essentially, what this is saying is that everything taller than two meters is going to be assigned to this class called tall. So let's go ahead and execute that. That's complete; you can see it took a little over 30 seconds to run the contrast split algorithm. And if we flicker back and forth here between our classification and the image objects, you can see that the contrast split algorithm essentially split off all of our tall features based on the normalized digital surface model. Now, in our main map, we're going to apply an algorithm that generates very similar results, only much more quickly: the multi-threshold segmentation algorithm. Once again, this is an algorithm that's new in eCognition version 8.
It's a bit simpler, and, like I said, the main benefit is that it's a lot faster than the contrast split segmentation algorithm. So I'm applying this at the pixel level, I'm using my main map here, and I'm using my normalized digital surface model. Down below here, I have my thresholds. My first threshold is a cutoff at two: anything below two goes to the unclassified category, and things between two and 255, which is the maximum value of my eight bit LiDAR normalized digital surface model, are going to be assigned to the class tall. Let's execute multi-threshold. Let's first compare the times: over 30 seconds for contrast split versus a little over one second for the multi-threshold segmentation algorithm. Putting these two side by side, we can zoom in, and we can see it looks like we've achieved identical results, segmenting out buildings, tree canopy, and tall features such as power lines from this data set. The main difference here is that the multi-threshold segmentation algorithm is clearly much more efficient. So while you might find certain situations where contrast split works well, if you're looking to simply partition features based on the layer values in a single data set, I recommend using the multi-threshold segmentation algorithm. Now let's look at a case where we want to take existing image objects and perhaps refine them, creating image objects within those image objects. First of all, let's reset here to start with a clean slate. I'm going to go ahead and remove the split from my view so that we have a single view, and we'll move back to a three layer true color display. So I'm going to do what we did earlier, in that I'm going to apply a chessboard segmentation algorithm with a very high object size, and I'm going to use my thematic layer. So once again, we're essentially incorporating the thematic layer as image objects. You can see those here.
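The multi-threshold partitioning is easy to sketch with NumPy. The two meter cutoff matches the example above, while the tiny height array is made up for illustration:

```python
import numpy as np

def multi_threshold(ndsm, thresholds, classes):
    """Partition a raster into classes by value ranges, as a
    multi-threshold segmentation does.  Here a single cutoff splits
    an 8-bit nDSM into 'unclassified' (below) and 'tall' (at or above).
    """
    # np.digitize maps each pixel to the index of its value range.
    bins = np.digitize(ndsm, thresholds)
    return np.array(classes, dtype=object)[bins]

ndsm = np.array([[0, 1, 6],
                 [20, 2, 0]], dtype=np.uint8)    # heights in meters
labels = multi_threshold(ndsm, [2], ["unclassified", "tall"])
print(labels[0, 2], labels[1, 0])   # tall tall
```

Because the whole operation is a single vectorized pass over the raster, a sketch like this also hints at why the real algorithm is so much faster than contrast split, which searches for an optimal split rather than applying a fixed one.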
In the next step, I'm going to apply a segmentation algorithm using the image object domain so that I create new image objects, but only for these hydrology features. The first thing I need to do is classify these image objects as such. So here I'm using the assign class algorithm, and I'm simply going in and accessing the attribute table of my hydrology features to say that if they have a value greater than or equal to zero for the field COMID, I'm going to assign them to the class water. We can see now that we have features classified as water, simply by incorporating the classification from our thematic layer. Now, I may want to cut up those water features, because I noticed that not all of them appear to be actual water features; some are digitizing errors. So I might want to reevaluate them by creating image objects inside of them. So here I'm going to apply a multi-resolution segmentation algorithm. Previously, we used the pixel level as the image object domain; in this case, we're going to use the image object level. That means we're going to be working off existing image objects. We're going to constrain the operation so that we only run it on objects classified as water. I'm using level one, with similar layer weights and shape and compactness criteria as before. And let's click execute to take a look at how this works. I think the first thing that'll jump out at you, now that this segmentation is complete, is the fact that it took nearly a minute. If we take a look at our image objects, that's probably surprising, because we only generated image objects within our existing image objects, in this case within our water objects. Once again, that's because we constrained the operation to those water objects by focusing on the image object level and the class water.
So this took over a minute, despite the fact that when we ran the multi-resolution segmentation on the pixel level before, creating far more image objects, it only took about 40 seconds. One interesting fact about eCognition is that when you use the image object domain for multi-resolution segmentation, it actually decreases performance. So I'm going to give you a couple of ways to get around that. The first is using sub levels. So let's reset here to get a clean slate. The first thing that I'm going to do, once again, is create image objects based on my thematic layer. We can see those image objects here, and once again I'm assigning them to that water class. And now I'm going to do a multi-resolution segmentation, but I'm going to do it on the image object level, only I'm going to create a new level, called sub, below my main level here, level one. Excellent, now that's complete. You're going to notice that it ran a little bit faster: 51 seconds, as opposed to a minute, for the multi-resolution segmentation. The challenge, of course, is that our image objects are on a level called sub, below our main level. So now we need to incorporate those image objects into our water polygons. So I'm going to execute an algorithm called convert to sub objects. I'm going to do this on level one and focus the image object domain on those water features. When I execute that, you can see that what I've done is incorporate those image objects that were created on the sub level into my water polygons. I can now go ahead and run a delete image object level algorithm to remove that sub level. So that gives me a bit of a performance boost, saving close to 10 seconds overall. Now, there's another way of doing this, and that's using the map concept that I first showed you earlier. So once again, let's start with a clean slate, walking through steps very similar to what we did before. There's the chessboard segmentation.
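The convert to sub objects step can be pictured as copying the finer label raster into the parent label raster wherever the parent object carries the water class. This is a conceptual sketch with made-up label arrays, not eCognition's internals; the ID offset just keeps the two label spaces from colliding:

```python
import numpy as np

def convert_to_sub_objects(parent, sub, water_ids):
    """Inside parent objects classified as water, take over the finer
    sub-level object IDs; everywhere else keep the parent objects.
    """
    out = parent.copy()
    water_mask = np.isin(parent, water_ids)
    # Offset sub IDs past the parent IDs so they stay distinct.
    out[water_mask] = sub[water_mask] + parent.max() + 1
    return out

parent = np.array([[1, 1, 2, 2]])     # object 1 is classified as water
sub    = np.array([[7, 8, 9, 9]])     # finer objects from the sub level
new = convert_to_sub_objects(parent, sub, water_ids=[1])
print(new)   # the two water pixels get new IDs; object 2 is untouched
```

Once the sub-level labels live on the main level like this, the sub level itself carries no extra information, which is why it can simply be deleted afterwards.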
Now let's classify those features as water. Only now, I'm going to copy this existing map over to map two. And on map two, I'm going to perform a multi-resolution segmentation. Right away, you'll notice that the time it took to do the segmentation is nearly identical to what we achieved before, when we first ran the multi-resolution segmentation on the pixel level. So let's go over to map two. On map two, you're going to notice that we have image objects not constrained by existing thematic layers. Going over to the main map, what we're going to do now is create a sub level on the main map; that's this level here. And now we're going to synchronize those two maps, essentially passing the image objects over from map two to this level sub, only for those features in the class water. There we go, we can see that we have our image objects. Now we're going to use convert to sub objects to pass those image objects up to the main level, and then finally delete the sub level. What we end up with is the exact same result we had in the first two examples: we have image objects created from a multi-resolution segmentation within our water features. The difference is that we've saved roughly 20 seconds by using this map approach. So within eCognition, you may often find that routines that use many more algorithms, such as this map-based approach to multi-resolution segmentation, while on the surface they may look more processor intensive because you're using several, perhaps even tens, of algorithms, are actually big time savers. Now let's look at a way you could really save time: by not using the multi-resolution segmentation algorithm at all at first, and instead combining the quadtree segmentation algorithm with multi-resolution segmentation region grow. So here we're going to reset again.
First of all, we do the same exact procedures we did before, creating those image objects for our water polygons and classifying them based on their thematic object attributes. Now, instead of using a multi-resolution segmentation, I'm going to use a quadtree-based segmentation. As we did with the multi-resolution segmentation, I'm using the image object domain to constrain the segmentation operations: I'm focusing on the image object level, on class water, and you can see here I have my scale parameter and image layer weights set. I'm going to go ahead and execute, and in less than two seconds I have quadtree objects. Now, the problem, of course, as we know with quadtree objects, is that they often don't represent features of interest very well. But I can apply an algorithm called multi-resolution segmentation region grow, and this works very similarly to the multi-resolution segmentation algorithm, except that it grows existing objects into one another using the multi-resolution segmentation criteria. So I'm focusing here, in the image object domain, on level one, and my class filter is water. You can see that I have image layer weights, just as I would if I were using the multi-resolution segmentation algorithm; I've got a scale parameter, and I also have my shape and compactness criteria. Very important if you're using this: generally, you want to set the loops and cycles to loop while something changes. This will continue to merge and grow those objects together, using the multi-resolution segmentation criteria, as long as something changes. Also very important: you want to constrain your class filters. I want class water, and I only want to grow it into other water features. This will constrain my multi-resolution segmentation region grow within the existing water features. There we go, we have a very fast operation, and now I might want to go ahead and execute a spectral difference segmentation to perhaps clean things up a bit.
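The "loop while something changes" behavior can be sketched as repeated pairwise merging until no remaining merge is cheap enough. The mean-difference cost and the one-dimensional neighborhood here are stand-ins for the real multi-resolution criteria, kept deliberately simple:

```python
def grow_while_changes(segments, max_cost):
    """Keep fusing neighbouring segments while any merge is cheap
    enough, then stop -- the loop-while-something-changes pattern
    behind a region-grow pass.  Cost is just the difference of means.
    """
    changed = True
    while changed:                      # 'loop while something changes'
        changed = False
        i = 0
        while i < len(segments) - 1:
            a, b = segments[i], segments[i + 1]
            cost = abs(sum(a) / len(a) - sum(b) / len(b))
            if cost <= max_cost:
                segments[i] = a + b     # grow object a into neighbour b
                del segments[i + 1]
                changed = True
            else:
                i += 1
    return segments

print(len(grow_while_changes([[10], [12], [14], [100]], 3)))   # 2
```

Note how the three similar segments chain together across passes: the first merge shifts the mean, making the next merge possible, which is exactly why the looping condition matters instead of a single pass.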
The end result is that we have image objects that I think do an excellent job of representing our features of interest. The advantage, of course, of taking this quadtree plus multi-resolution segmentation region grow approach is that, if you look at the times on all of these algorithms, they're all less than two seconds. And here's a little graph that I put together that illustrates some of the time savings we can get by using the quadtree plus multi-resolution segmentation region grow approach on the current level. You can see that overall it took less than five seconds. If we look at the map approach, where we split off our data, ran multi-resolution segmentation on a separate map, and then incorporated that, it was more than five times longer, at just over 25 seconds. If we look at doing the multi-resolution segmentation on the sub level, that took 35 seconds. And finally, multi-resolution segmentation on the current level took close to 45 seconds. So I'd like to stress once again that although the multi-resolution segmentation algorithm generally produces the most meaningful objects, if you're doing very, very large data set processing and you become concerned with time, particularly if you're using lots of different segmentation algorithms in the course of your rule set, you might want to consider some of these alternate methods of segmentation, particularly when you want to work within an existing image object domain. Hope you enjoyed this webinar. If you're interested in working with the data.