This video will show you how to perform an accuracy assessment of a classified land cover dataset in ArcGIS Pro 2.2. Using some automated feature extraction protocols, I've generated this 7-class, high-resolution land cover dataset. As you can see here, it's in the form of a raster dataset stored inside a file geodatabase. Each pixel in this land cover raster has a value of 1 through 7. What I'd like to do now is figure out how accurate this land cover dataset is by comparing it to the source image dataset. Using the ArcGIS accuracy assessment tools, I'll take a stratified random sampling approach to create a set of points. ArcGIS will automatically identify the land cover class from my raster dataset for each point, and then I'll manually assign each point a class based on my interpretation of the imagery.

The first step in an accuracy assessment is to run the Create Accuracy Assessment Points geoprocessing tool. The input dataset is my land cover raster. The output accuracy assessment points I'm going to call Accuracy Points, and those will go into my project file geodatabase. The target field is Classified, the number of random points is set at 500, and I'm using a stratified random sampling strategy, which distributes points randomly within each class in proportion to the class's relative area. You can read more about the different sampling strategies by clicking the information button next to the sampling strategy parameter.

Once the geoprocessing tool finishes executing, you'll see a new layer called Accuracy Points loaded in my map. To make the points more visible, I'll adjust their symbology and turn off the land cover data so I can see them overlaid on top of the image dataset. Opening the attribute table for the Accuracy Points layer, we see that it includes the standard ObjectID and Shape fields, plus two others: Classified and Ground Truth.
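To make the sampling strategy concrete, here's a small plain-Python sketch of stratified random sampling over a raster of class codes: points are allocated to each class in proportion to its pixel count, then drawn at random within that class. This illustrates the idea only; it is not the internals of the Create Accuracy Assessment Points tool, and the raster-as-nested-lists representation is an assumption made for the example.

```python
import random
from collections import defaultdict

def stratified_sample(raster, total_points, seed=0):
    """Stratified random sampling sketch: allocate points to each
    class proportionally to its area (pixel count), then pick cells
    at random within that class."""
    rng = random.Random(seed)
    # Group pixel coordinates by their class value.
    cells = defaultdict(list)
    for r, row in enumerate(raster):
        for c, value in enumerate(row):
            cells[value].append((r, c))
    n_pixels = sum(len(v) for v in cells.values())
    samples = []
    for value, coords in sorted(cells.items()):
        # Proportional allocation, with at least one point per class.
        n = max(1, round(total_points * len(coords) / n_pixels))
        for r, c in rng.sample(coords, min(n, len(coords))):
            samples.append({"row": r, "col": c, "Classified": value})
    return samples
```

Because the allocation is proportional, a class covering half the map receives roughly half the points, which is why rare classes get the `max(1, ...)` floor in this sketch.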
Classified is the class assignment pulled from the raster land cover dataset. Ground Truth is the field I need to populate based on my interpretation of the source data, in this case the imagery. Of course, the term "ground truth" isn't exactly accurate here, since I'm not going out and collecting data on the ground; I'm using my imagery as a reference dataset.

Now, making sure I'm in the Edit tab, I can go point by point: zoom in to each point, edit the Ground Truth field, and populate it with the class code that matches my interpretation of the reference imagery. I don't want to bias my interpretation, so I'm going to hide the Classified field to temporarily remove it from view. Then I continue populating the Ground Truth field using only my visual interpretation of the reference image dataset. Once I've populated all the records, I can unhide the Classified field and do a quick review of the attribute table to ensure that every point has both a Classified and a Ground Truth attribute populated.

Now I can compute my error matrix, or confusion matrix, by running the Compute Confusion Matrix geoprocessing tool. This computes the producer's, user's, and overall accuracy, along with the kappa statistic, for the accuracy assessment. My input for the Compute Confusion Matrix tool is my accuracy points; the output is a confusion matrix table stored in my project geodatabase. After the tool finishes executing, that table appears in my Contents pane. Opening the confusion matrix table, we see that the reference (ground truth) classes run in columns across the top and the mapped (classified) classes run in rows along the side. The diagonal represents the number of points classified correctly for each class. For each class, we can also find the user's accuracy and producer's accuracy summarized for us.
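It helps to know exactly what those figures mean. The sketch below recomputes the error matrix, user's and producer's accuracy, overall accuracy, and kappa from a list of point records in plain Python, following the standard definitions. The `Classified`/`GrndTruth` keys mirror the field names the points layer uses; this is a hand-rolled check of the math, not the arcpy tool itself.

```python
def confusion_matrix(points, classes):
    """Error matrix: rows = classified (map), columns = ground truth
    (reference), matching the layout of the tool's output table."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for p in points:
        m[idx[p["Classified"]]][idx[p["GrndTruth"]]] += 1
    return m

def accuracies(m):
    """Overall, user's, and producer's accuracy plus kappa."""
    n = sum(sum(row) for row in m)
    diag = sum(m[i][i] for i in range(len(m)))
    overall = diag / n
    # User's accuracy: correct / row total (errors of commission).
    users = [m[i][i] / sum(m[i]) for i in range(len(m))]
    # Producer's accuracy: correct / column total (errors of omission).
    cols = [sum(m[i][j] for i in range(len(m))) for j in range(len(m))]
    producers = [m[j][j] / cols[j] for j in range(len(m))]
    # Kappa: agreement beyond what chance alone would produce.
    chance = sum(sum(m[i]) * cols[i] for i in range(len(m))) / (n * n)
    kappa = (overall - chance) / (1 - chance)
    return overall, users, producers, kappa
```

For example, with ten points where one water pixel was mapped as forest, the matrix has a single off-diagonal count, overall accuracy is 0.9, and kappa is lower than overall accuracy because it discounts chance agreement.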
Finally, there's the measure of overall accuracy, which in this case is approximately 93%. While the Compute Confusion Matrix tool makes it easy to generate an error matrix from your accuracy assessment, the output isn't the easiest table to read. So now I'll show you how to use the Table To Excel tool to export the table to Microsoft Excel. Within Excel, I'll make some formatting changes: replacing the class codes with class names and formatting the data to make it more easily interpretable by the end user.

An accuracy assessment is an important part of any remote sensing project, and its quality depends heavily on how you carry it out. Make sure you consult references, such as the Congalton and Green accuracy assessment book, for more information to help guide your accuracy assessment methodology.