This is the Lesson 7 lecture, and it focuses on machine learning and classification of remotely sensed data. Machine learning is a subset of the larger field of artificial intelligence, and the intent of artificial intelligence, which includes machine learning and deep learning, is to emulate the cognitive faculties of a human being, particularly in interpreting an image. The rule set that you wrote in Lab 5 is really a computer vision application: you figured out the numerical attributes of the different layers of imagery (the values of the NDVI, the red, green, blue, and near-infrared bands, the nDSM, and the Z deviation) and used the ranges of these numbers to establish particular land cover classes, encoding them in a rule set that then automatically extracted the desired classes of features from the map. So you are already proficient in basic artificial intelligence expert system development, which you did in Lab 5, and you will develop rule sets similarly in Lab 6 as well. You have also already done some machine learning: in Lab 4 you used the support vector machine classifier to do an object-based classification of the imagery provided to you, and in Lab 6 you will use both the support vector machine classifier and the random forest classifier. Both of these classifiers learn from sample data given to them; the machine learns the characteristics of the imagery and relates them to particular land cover classes. You will be using these machine learning algorithms in Lab 6 as well, and they are typically used for land cover classification. Deep learning is a further subset of machine learning, implemented with convolutional neural networks, or CNNs, and in a deep learning application you are most often looking for a particular object.
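The Lab 5 rule-set idea can be sketched in a few lines of Python. This is a minimal illustration, not the actual lab rule set; the tiny band arrays and the threshold values are hypothetical stand-ins for the ranges you derived in the lab.

```python
import numpy as np

# Hypothetical per-pixel layers (tiny 2x2 example rasters).
ndvi = np.array([[0.70, 0.10], [0.65, 0.05]])   # vegetation index
ndsm = np.array([[12.0, 0.2], [0.5, 8.0]])      # height above ground, meters

# Rule set: each class is defined by value ranges, as in Lab 5.
land_cover = np.full(ndvi.shape, "other", dtype=object)
land_cover[(ndvi > 0.4) & (ndsm > 2.0)] = "tree"        # tall vegetation
land_cover[(ndvi > 0.4) & (ndsm <= 2.0)] = "grass"      # low vegetation
land_cover[(ndvi <= 0.4) & (ndsm > 2.0)] = "building"   # tall, non-vegetated

print(land_cover)
```

The point is the same one the rule set makes: combining layers (spectral index plus height) lets simple numeric ranges separate classes that no single layer could.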
You can use deep learning to classify an entire image into a land cover map as well, but it is particularly effective for locating particular types of objects, for example swimming pools or airplanes. However, you have to train the classifier with a great deal of training data: many, many samples of the type of object you want to extract from the imagery. That is a deep learning application. All of these techniques, machine learning (the support vector machine and random forest in particular) and especially deep learning, have become well developed in the software available for GIS and remote sensing work over the past five years. These are very contemporary remote sensing techniques for classifying imagery, and they work very well in the object-oriented domain.

In Lab 7 you will be dealing with very high spatial resolution data collected from a UAS. Remote sensing applications based on UAS platforms have really matured in the past four or five years, to the point that we now have miniaturized sensors that fit onto unmanned aerial systems. In these systems, a controller and a computer connected to the drone guide it through a flight plan designed to produce a two-dimensional orthophoto mosaic or a three-dimensional model. These are very powerful technologies that perform remote sensing over much smaller study areas, on the order of tens to hundreds of acres and no more, but in extreme detail. What this does is make remote sensing personal: we need this kind of remote sensing distributed nationwide, with communities, cities, and counties solving problems involving natural resource management, urban management, and so forth.
This is a book published in 2019 for which I was the editor. It is a compendium of case studies from many different areas where this technology can be applied and where unmanned aerial systems can create mapping products that are very valuable in the public policy domain.

This slide is a reminder that remote sensing, and geography in general, is all about scale. At different scales and different spatial resolutions you see different things, and different types of data are amenable to different types of applications. This graphic captures the range of data you will have dealt with in this course. We have looked at Landsat moderate resolution data at about 30-meter resolution, then worked with NAIP imagery at one-meter resolution, and in Lab 7 you will work with UAS imagery at six-centimeter resolution. That gives you very intricate detail, but the increased spatial detail brings with it other challenges and issues in classification, and it turns out that the object-based approach is the only viable approach for the high and ultra-high spatial resolution imagery a drone produces.

Let us now summarize the automated feature extraction paradigms we have looked at in this course. We have covered pixel-based spectral pattern recognition techniques: basic pixel-based unsupervised and supervised classification. Then we moved on to object-oriented segmentation preprocessing, or OBIA (object-based image analysis), applications, which use statistical and machine learning techniques in supervised, unsupervised, and rule-based classification. Finally, we have deep learning with what are known as convolutional neural networks, or CNNs.
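To make the scale comparison concrete, here is a quick calculation of how many pixels cover a single hectare (100 m x 100 m) at each of the three resolutions mentioned above.

```python
# Pixels covering one hectare (100 m x 100 m) at each ground sample distance.
pixels_per_hectare = {}
for name, gsd_m in [("Landsat", 30.0), ("NAIP", 1.0), ("UAS", 0.06)]:
    pixels_per_hectare[name] = (100.0 / gsd_m) ** 2
    print(f"{name}: {gsd_m} m/pixel -> {pixels_per_hectare[name]:,.0f} pixels per hectare")
```

A hectare that Landsat sees as roughly 11 pixels becomes 10,000 pixels in NAIP and about 2.8 million pixels at 6 cm, which is why per-pixel methods become impractical and object-based methods are needed at drone resolutions.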
These deep learning techniques are very useful for object detection, identifying particular types of objects in an image, be they vehicles, buildings, and so forth. They are also used for pixel classification, that is, producing a land cover map, and for image-level (scene) classification, for example determining whether damaged buildings exist in parcels, which could be very useful for disaster response.

Here are some properties of the feature extraction technologies we have looked at. Basic spectral pattern recognition is based on spectral signatures. When it operates on pixel-level spectral signatures, it is limited by the purity of a pixel's signature: in Landsat imagery, most likely you do not have a pure pixel, but rather more than one class represented within the pixel. Spectral pattern recognition is also the basis for object-oriented classification, which performs segmentation and then looks at objects, using the mean spectral and other morphological properties of each object instead of individual pixels. In the past five years or so, these artificial intelligence and machine learning techniques have become very popular, powerful, and available in commercial and some free software, and machine learning improves the training and classification logic beyond just using spectra. Deep learning, in turn, uses neural networks loosely inspired by the anatomy of the brain and nervous system of a human being.

This graphic summarizes the automated feature extraction paradigms we have encountered so far: pixel-level supervised and unsupervised classification, both of which we have done in our lab activities, and object-based supervised classification, which we have also done (we have not done object-based unsupervised classification).
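The "convolutional" part of a CNN is the repeated application of small filters across the image. The sketch below, in plain NumPy, applies a single hand-chosen 3x3 edge filter followed by a ReLU, the basic building block a CNN stacks many times; in a real network the filter weights are learned from labeled samples rather than fixed by hand, and the image here is a made-up toy raster.

```python
import numpy as np

def conv2d_relu(image, kernel):
    """Valid-mode 2D convolution followed by ReLU, as in one CNN layer."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU keeps only positive responses

# A tiny "image" with a vertical bright edge on the right half.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)

# Vertical-edge filter: responds where brightness increases left to right.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

feature_map = conv2d_relu(img, kernel)
print(feature_map)
```

The output feature map responds strongly where the edge is, which is exactly the kind of spatial cue, beyond pure spectra, that gives CNNs their power for object detection.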
We have also done rule-based object-based image analysis classification using eCognition, and in Lesson 7 you will have the option of doing a deep learning activity in your lab.

Automated feature extraction requires training samples, and the process is known as labeling, training, and learning. These examples are referred to as training regions, training samples, or labeled samples. Depending on the automated feature extraction method, you either train the system, which is what we have done in this course, or you let the system learn on its own; we will not get into those techniques in this course.

Let us consider some pattern recognition approaches for image classification in the following slides. What is spectral pattern recognition? You are familiar with the idea, because you have made training samples to classify remotely sensed imagery in the activities you have done. Spectral pattern recognition is a method of feature extraction based on the assumption that different features in an image have different spectral reflectance signatures. If your eyes can discern differences in the imagery, that tells you that different objects in the image have different spectral properties across the image bands. Statistical or machine learning methods are then used to separate these signatures so that they can be assigned to particular classes. Each natural feature category can often be defined by a single signature, while human-made features often show wide variation in their spectral signatures. Ideally, spectral signature techniques should use surface reflectance; to be effective, therefore, they require good spectral and radiometric calibration of the sensors, so that you start with well-calibrated surface reflectance data, should it be available.
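The simplest statistical way to separate signatures is minimum distance to the class mean: average the band values of the labeled samples for each class, then assign every pixel to the class whose mean signature is closest. A small NumPy sketch, with made-up two-band reflectance values standing in for real training samples:

```python
import numpy as np

# Hypothetical training samples: rows are pixels, columns are bands (red, NIR).
water_samples = np.array([[0.05, 0.02], [0.06, 0.03], [0.04, 0.02]])
veg_samples   = np.array([[0.08, 0.45], [0.10, 0.50], [0.09, 0.42]])

# Each class's spectral signature is the mean of its training samples.
signatures = {
    "water":      water_samples.mean(axis=0),
    "vegetation": veg_samples.mean(axis=0),
}

def nearest_signature(pixel):
    """Assign the class whose mean signature is nearest (Euclidean distance)."""
    return min(signatures, key=lambda c: np.linalg.norm(pixel - signatures[c]))

print(nearest_signature(np.array([0.05, 0.03])))   # close to the water signature
print(nearest_signature(np.array([0.09, 0.48])))   # close to the vegetation signature
```

This works because the two classes have clearly separated signatures in the NIR band; classes with overlapping signatures are exactly where this simple approach breaks down.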
Here is the workflow for basic supervised classification. You start with imagery you have prepared: the initial geometric corrections done, atmospheric corrections applied if required, the data in the proper coordinate system and clipped to the study area. You then design your schema, agreeing on the types of land cover classes you want to extract from the image before the project begins, create labeled samples for the different land cover classes, and build the training samples file. After training, a classifier is applied; in pixel-based classification you typically use the maximum likelihood classifier, though other classifiers are available as well, and the result is a thematic raster with labeled land cover classes.

There are pros and cons to supervised spectral pattern recognition approaches. On the pro side, since the training samples are used directly to train the classifier, the statistics are naturally linked to the land cover classes you are looking for, and you have considerable control over the inputs to the classification logic: how you choose your training samples, their spectral purity, and so forth are in your control to an extent. The cons are that, applied at the pixel level, the output can be noisy; you start getting the salt-and-pepper effect in purely pixel-based classifications. If care is not taken in developing the schema and choosing representative training regions, the results may be disappointing, with lots of mixed classes in your training samples. Good supervised classification requires many training samples, and what you see as separable classes in the image may not be what the computer sees after the classifier has operated on the data.
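The maximum likelihood classifier mentioned above models each class's training samples as a multivariate Gaussian and assigns a pixel to the class under which it is most probable. A compact NumPy sketch, with hypothetical two-band training data rather than real imagery:

```python
import numpy as np

# Hypothetical two-band training samples per class (rows = sampled pixels).
training = {
    "soil":  np.array([[0.30, 0.25], [0.32, 0.28], [0.28, 0.24], [0.31, 0.26]]),
    "water": np.array([[0.05, 0.02], [0.06, 0.04], [0.04, 0.03], [0.05, 0.03]]),
}

# Fit a Gaussian (mean vector, covariance matrix) to each class's samples.
params = {cls: (x.mean(axis=0), np.cov(x, rowvar=False))
          for cls, x in training.items()}

def log_likelihood(pixel, mean, cov):
    """Log of the multivariate normal density for this pixel."""
    d = pixel - mean
    return -0.5 * (d @ np.linalg.inv(cov) @ d
                   + np.log(np.linalg.det(cov))
                   + len(pixel) * np.log(2 * np.pi))

def most_likely_class(pixel):
    """Assign the class with the highest likelihood of producing this pixel."""
    return max(params, key=lambda c: log_likelihood(pixel, *params[c]))

print(most_likely_class(np.array([0.29, 0.27])))
```

Unlike minimum distance to the mean, this uses each class's covariance, so a class with widely scattered training samples claims a larger region of spectral space.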
Now let us summarize object-oriented approaches to remotely sensed image classification. As you know very well by now, object-oriented approaches process objects rather than pixels. You have to segment the image first, a preprocessing step in which pixels with similar characteristics are grouped into larger objects, and you know by now that segmentation is an art form: you have to get your spatial and spectral parameters just right so that you have unique and unambiguous objects related to particular land cover classes, and it is very important to get your segmentation to that point before you start choosing training samples. Classes are then assigned to entire objects, not to individual pixels. You can use objects in conjunction with supervised or unsupervised classification, even though in this course we have used objects only with supervised classification, and you can also use them with machine learning approaches (support vector machine and random forest, for example) that include contextual data such as shape characteristics, nearness, values, texture measures, and so forth.

Here is a graphic showing the workflow of object-based supervised classification: you take your prepared imagery, segment it before creating the training samples, and then whichever classifier you are using develops the thematic raster with labeled land cover classes. By now you are very aware of the iterative nature of the image classification process; you may need to refine such a workflow through several iterations until you get a land cover map with the accuracy you need.
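The object-based step can be sketched as follows: given a segment-ID raster from segmentation, compute each object's mean band values and classify the object as a whole. The segment raster, the NDVI values, and the 0.4 threshold below are all made up for illustration.

```python
import numpy as np

# A 3x4 segment-ID raster (the output of segmentation) and one NDVI-like band.
segments = np.array([[1, 1, 2, 2],
                     [1, 1, 2, 2],
                     [3, 3, 3, 3]])
ndvi = np.array([[0.8, 0.7, 0.1, 0.2],
                 [0.9, 0.8, 0.1, 0.1],
                 [0.5, 0.6, 0.5, 0.4]])

# Mean NDVI per object, then a per-object rule instead of a per-pixel one.
object_class = {}
for seg_id in np.unique(segments):
    mean_ndvi = ndvi[segments == seg_id].mean()
    object_class[seg_id] = "vegetation" if mean_ndvi > 0.4 else "non-vegetation"

print(object_class)
```

Because every pixel inherits its object's label, a stray bright or dark pixel inside an object cannot produce the salt-and-pepper noise seen in per-pixel classification.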
Once again, some pros and cons, this time for object-oriented supervised classification. On the pro side, since the training samples are used directly to train the classifier, the statistics are naturally linked to your particular land cover class scheme, the operator has considerable control over the inputs to the classification logic, and the outputs are generally very clean, cartographically speaking, in the object-oriented paradigm. The cons are, once again, as for pixel-based supervised classification: if care is not taken in developing the schema for the land cover classes and choosing representative training samples, the results may be disappointing, which is where the iterative nature of remote sensing classification comes in, and what you see as separable classes in the image may not be what the computer sees after the classification algorithm has operated on the data.

As mentioned earlier, deep learning approaches are a subset of machine learning approaches for classifying remotely sensed imagery. Deep learning is a method based on neural networks that originated from research in artificial intelligence; the structure of neural networks is loosely based on the architecture of neurons in animal nervous systems. Convolutional neural networks, or CNNs, are the primary technique for applying deep learning to imagery. Labeled samples, typically thousands, are used to train the neural network. Labeled samples are essentially training samples: they can be point-based, where you put a point on the object you want to detect; they can be polygons, so that you have groups of pixels; or you can select objects after segmentation. All of these can serve as labeled samples. Learning is achieved by exposing the convolutional neural network to the samples and letting it predict their labels: the CNN's internal connections that enabled correct predictions are reinforced over time, and the ones that enabled incorrect predictions are weakened.

Once the convolutional neural network has achieved a successful level of learning, the result is loaded onto an inference engine that can be used in production of the land cover map, and the inference engine can then process new imagery as it arrives. This is a process that takes effort to set up: you typically need lots of training samples to inform a deep learning network so that it learns well enough to classify at the level of accuracy you desire, and it also needs a lot of computing. But it is becoming more and more developed and useful in contemporary remote sensing.

Here is a visual example of deep learning for object detection, where you are looking for a particular object: perhaps cars, or, say, incidents of dead fish or dead birds on the water. As you can see, you can extract objects such as vehicles, buildings, invasive trees, building damage, animals, and so on, and the output is a feature layer of polygons showing where the objects are located. Bear in mind that you have to train the convolutional neural network model with lots of training samples, and eventually it learns to identify these objects in imagery taken from the same sensor.

Here is a graphic from Esri that summarizes the deep learning workflow: you begin with the imagery, develop the training samples, go through a process in which the model is trained, and then land cover mapping or detection of the particular objects is performed on the imagery you started with.

Here is a deep learning pixel classification example. It is based on convolutional neural networks, and the output is a raster with a class for each pixel. This means you can develop a land cover map using the CNN deep learning approach, but you are going to have to train the classifier on each of the different land cover classes before you run the model to get the entire land cover map.

Deep learning approaches have their pros and cons as well. The pros are that this approach incorporates spatial as well as spectral cues, the results for object identification and pixel classification are very promising (this remains an area of active contemporary research and development), and the inference logic holds up for new imagery from a given sensor: as long as the data comes from the same sensor, the trained CNN model can classify it, or pick out objects of interest from it, very well. The cons are that it requires very large numbers of samples for training, and that choosing the optimal parameters and evaluating the training quality and inference effectiveness requires an experienced data scientist. You are getting a feel for these issues working with OBIA-based rule-set classification: you need that experience to iterate through parameters and refine your model. The experienced data scientist component is very important for machine learning and deep learning applications, and deep learning in particular is very vulnerable to being overtrained or undertrained; once again, it takes an experienced data scientist to do it just right.

Here is a deep learning classification processing model from a project done by the Chesapeake Conservancy. They start with NAIP imagery at half-meter resolution and a digital surface model, develop a vegetation index, look at the NIR band reflectance, and use the DSM as well to do the segmentation; the model is then trained with training samples and so forth, and in the end you get the classification. And here is a graphic of the ArcGIS model that implements the workflow shown here, just to give you an idea of the implementation.

If you have any questions or comments, please post them in the Lesson 7 General Questions and Comments discussion forum. Thank you.