This is the Lesson 2 instructor lecture, and we will focus on pre-processing of remotely sensed imagery. So remotely sensed data is captured by a satellite sensor, and then it has to be processed into a spectrally and spatially accurate photomap before it can be used for mapping and GIS modeling applications. So this pre-processing entails a series of steps before the remotely sensed data is ready to be converted into actionable information. So it turns out that Landsat data is available in two levels: level one and level two. Level one data is data that can be converted to the top of atmosphere reflectance. It is radiometrically calibrated and it is geometrically corrected. In other words, it is an orthorectified product and serves as a photomap. For level two data, the surface reflectance is recovered from the top of atmosphere reflectance by atmospheric correction, and this generates the surface reflectance product, which, if available, is the desired product for mapping applications. So let's elaborate a little bit more on radiometric calibration or correction and geometric correction. So remote sensing systems do not function perfectly. Also, the Earth's atmosphere, land and water are complex and do not lend themselves well to being recorded by remote sensing devices that have constraints such as spatial, spectral, temporal and radiometric resolution. Consequently, error creeps into the data acquisition process, and this can degrade the quality of the remote sensor data collected. The two most common types of error encountered in remotely sensed data are radiometric and geometric. So radiometric correction, once again, attempts to improve the accuracy of spectral reflectance, emittance or backscattered measurements obtained using a remote sensing system. And I will elaborate on this a little bit more in the following slides. 
And geometric correction is concerned with placing the reflected, emitted or backscattered measurements or derivative products in their proper planimetric location, which means inside a geographic coordinate system, such that they can be associated, or be in registration, or line up exactly with other spatial information in a geographic information system or a spatial decision support system. In other words, the geometric correction makes sure that the satellite image is in direct registration with other remotely sensed data or with GIS information. So let's take a closer look at the idea of radiometric calibration. So once again, the sun is putting out irradiance, and the solar irradiance reaches the top of the atmosphere, comes all the way down to the study area, and then reflects from the study area and also from the neighboring areas of the study area. And as the surface radiance is making its way back up to the sensor, we are just interested in the information coming from rays one, three and five, if you look at the diagram. However, due to diffuse sky irradiance which occurs due to scattering processes, we can see that ray number two is scattering into the sensor, and light from the neighboring areas, represented by ray four, is also going to the sensor. So therefore, this extraneous information arriving at the sensor has to be parsed out such that you are getting light from the study area only, and the process of this correction is known as radiometric calibration or radiometric correction. And this improves the accuracy of the image in depicting the terrain of interest. So once again, focusing on radiometric calibration, the remotely sensed data is captured on a flat imaging surface in the camera, in the satellite, or it could be an aerial sensor as well. And this imaging surface has light-sensitive, that is, photon-sensitive detectors; photons are particles of light. 
So these photon-sensitive sensors capture and record the energy incident on each detector. So each detector is receiving some energy coming to it, and by accounting for the energy incident from stray and scattered light, and for the internal camera configuration, design parameters and components, the energy incident just from the surface reflectance is obtained upon radiometric correction or calibration. And this corrected energy is then converted into a digital number, which in the case of eight-bit Landsat imagery has two to the eighth, or 256, possible values ranging from zero to 255. Zero means total darkness, no signal coming in, and 255 means extremely bright, where the light is pegging out or saturating the sensor. So the top of atmosphere reflectance can be calculated from this image, which has been radiometrically calibrated or corrected, using a formula provided by the USGS, and we will not get into these details right now. And once the image is radiometrically calibrated or corrected, it is ready for geometric correction. So it is necessary to pre-process remotely sensed data and remove geometric distortion so that the individual picture elements, or pixels, are in their proper planimetric, or X and Y, map locations on the surface of the earth. And this allows remote sensing derived information to be related to other thematic geospatial information in geographic information systems, and geometrically corrected imagery can be used to extract accurate distance, polygon area and direction information, meaning to say that a geometrically corrected image becomes like a photomap. The second processing step for a level one Landsat product is the geometric correction: the satellite captures the sensor radiance, the light arriving at the sensor, which can be directly converted to the top of atmosphere reflectance once the radiometric correction or calibration has taken place. 
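Although the lecture defers the details of the USGS conversion, the general shape of the Level-1 formula can be sketched in a few lines. This is an illustration only: the function name is mine, the DN values are invented, and the rescaling coefficients here are placeholders; the real per-band multiplier and offset (REFLECTANCE_MULT_BAND_x, REFLECTANCE_ADD_BAND_x) and the sun elevation angle come from the scene's MTL metadata file.

```python
import numpy as np

def dn_to_toa_reflectance(dn, mult, add, sun_elev_deg):
    """Convert Level-1 digital numbers to top-of-atmosphere reflectance.

    Applies the linear rescaling rho' = mult * DN + add, then corrects
    for the solar elevation angle. mult and add come from the scene's
    MTL metadata file.
    """
    rho_prime = mult * dn.astype(np.float64) + add
    return rho_prime / np.sin(np.radians(sun_elev_deg))

# Illustrative 16-bit DNs and placeholder coefficients only.
dn = np.array([[10000, 20000]], dtype=np.uint16)
toa = dn_to_toa_reflectance(dn, mult=2.0e-5, add=-0.1, sun_elev_deg=45.0)
```

The sun-angle division compensates for the fact that a low sun illuminates the scene less strongly, so the same surface returns a smaller signal.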
Now the image is captured on a flat imaging surface in a camera and needs to be distorted like a rubber sheet to fit the spherical earth correctly in a geographic coordinate system. So this is known as geometric correction or image rectification. Furthermore, if the rectified image is draped over a digital elevation model, or DEM, of the terrain, the perspective effects can be removed to produce an orthorectified image. And an orthorectified image is then a photomap which can be utilized for land cover mapping applications and as a base map for GIS analysis. This is a graphic that will help you visualize the process of geometric correction using the nearest neighbor resampling method. We have the original input image in coordinates X prime, Y prime that is radiometrically corrected and is now going to be stretched and rubber sheeted such that it can be placed on the surface of the earth, with each pixel having an accurate location representing a position on the earth. So a coordinate within the original input image, X prime, Y prime, is going to be mapped to a point X, Y in the rectified output image, and the digital number, the brightness value, that we carry from the original image to the rectified image will be the brightness value of the input pixel that was closest to the point X prime, Y prime; that digital number will be taken over and placed in the rectified output image at the location X, Y. So in summary, the Landsat level one product is both radiometrically calibrated and geometrically corrected as well. And here is some information about Landsat level one processing details from the Landsat science team. And you can see that this product comes in a GeoTIFF format, and the resampling method used in this case is the cubic convolution method. It's got a 30 meter resolution on the ground for Thematic Mapper and Enhanced Thematic Mapper Plus products, and it's in the UTM map projection and in the WGS84 datum. 
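The nearest-neighbor logic just described can be sketched as follows; the function name and the toy inverse mapping are invented for illustration. For each pixel X, Y in the rectified output grid, an inverse transform gives the matching location X prime, Y prime in the unrectified input, and the DN of the nearest input pixel is copied over unchanged.

```python
import numpy as np

def nearest_neighbor_resample(src, inverse_map, out_shape):
    """Fill a rectified output grid by nearest-neighbor resampling.

    For each output pixel (y, x), inverse_map returns the matching
    location (y', x') in the unrectified input image; the digital
    number of the nearest input pixel is copied over unchanged.
    """
    out = np.zeros(out_shape, dtype=src.dtype)
    rows, cols = src.shape
    for y in range(out_shape[0]):
        for x in range(out_shape[1]):
            yp, xp = inverse_map(y, x)
            r, c = int(round(yp)), int(round(xp))
            if 0 <= r < rows and 0 <= c < cols:
                out[y, x] = src[r, c]
    return out

# Toy example: a small shift stands in for the real inverse transform.
src = np.arange(16, dtype=np.uint8).reshape(4, 4)
out = nearest_neighbor_resample(src, lambda y, x: (y - 0.4, x + 0.3), (4, 4))
```

Because DNs are copied rather than averaged, nearest neighbor preserves the original brightness values exactly, which is why it is often preferred before classification.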
And we can also see that the level one collections are further divided up into different categories, where the L1TP product is the best radiometrically calibrated and orthorectified product, the most preferable one for land cover mapping. So Landsat level one involves radiometric calibration and geometric correction such that you have an orthorectified image. The Landsat level two products add value to the level one products by generating the surface reflectance and temperature, and then level three products are further value-added products. We can see from this graphic that the level two products include the surface reflectance. Remember, level one was the top of atmosphere reflectance that was radiometrically calibrated and geometrically corrected to give you an orthorectified image. Now atmospheric corrections have been done on the top of atmosphere reflectance to recover the surface reflectance, which is the preferred product for land cover mapping, and similarly the surface temperature rasters can be generated and are available as level two products as well. And then we have Landsat level three products like dynamic surface water extent, fractional snow covered area and burned area. Furthermore, you need to be aware of the Landsat data collections. So the Landsat data collections are Landsat data archives that have been processed to provide accurately calibrated imagery that ensures data continuity. And what data continuity means is that the data has been consistently calibrated and processed, and this makes the data more reliable for quantitative comparison over time. So please watch the following short video explaining the Landsat data collections, and also watch the following short video on how positional accuracy of Landsat imagery is ascertained using the root mean square error, or the RMSE. 
So there is a collection one for Landsat data, and there is a collection two as well. Now even within collection one there are tiers of data, which relate to the quality of a particular image, and please watch the following video explaining Landsat data collection one and the tiers of data available in this data collection. And also please review the following USGS Landsat collection one website; just look through it and see what the different parameters for Landsat data collection one are. So the Landsat data collection two archive was just released a couple of months ago, in late 2020. And please peruse the link that is given in here that has the details about the Landsat collection two data archive. The collection two archive has both level one orthorectified products, which involve geometric correction using a DEM and radiometric calibration, which gives the sensor radiance, and the sensor radiance can then be converted to the top of atmosphere reflectance, and these can be downloaded from several portals as we will come to see. And then level two is the orthorectified surface reflectance, which means it is geometrically corrected and has been atmospherically corrected as well, such that we have the radiance or the reflectance of the light reflecting right off of the surface. And this level two surface reflectance product is the preferred product for land cover mapping. So all of remote sensing is about going from data to information. So the image that has been collected and pre-processed is data. That data comes into a geographic information system and is converted into actionable information. So this graphic gives you the big picture of the process of remote sensing. So you have a satellite that takes the image, and then there's the onboard analog to digital conversion and calibration. 
What that means is that the energy being received is converted to a digital number, and then the digital number is calibrated and corrected, and that can be converted into the top of atmosphere reflectance product. Then, after the radiometric and geometric processing, and using ancillary data and information from the ground, this data can be processed into land cover maps, and then can go to work in GIS platforms. So these physical geospatial measurements fulfill a socio-economic need and are applied to many, many different fields of human endeavor like agriculture, natural resources, urban planning, designing smart 21st-century communities and habitats, security applications, and disaster response applications, to name a few. So this is a graph in which we have the atmospheric transmission of electromagnetic radiation as a function of wavelength, such that the graph is showing you the atmospheric windows that are available to electromagnetic radiation. And this is also to remind you that the satellite imaging bands are placed in such a manner that they are within these atmospheric windows, such that you can visualize the Landsat 7 bands from the Enhanced Thematic Mapper Plus, or ETM+, instrument. And you can see bands one, two, three, which are blue, green, red; near infrared is four; five is middle infrared; seven is middle infrared; and number six is the thermal infrared. Similarly, you can see the Landsat 8 bands, which are slightly different from the Landsat 7 bands. The OLI is the Operational Land Imager sensor that has these bands, and please look up the wavelength intervals of both Landsat 7 and Landsat 8. They are in your textbook. And then you have the thermal infrared sensors in Landsat 8, which are bands 10 and 11, and they sense in the thermal infrared region. 
Thermal infrared is what we human beings literally sense as heat. So here we have an outline of a generalized remote sensing and image processing workflow. So thus far we have focused on the pre-processing, the radiometric and the geometric pre-processing, that happens to the imagery such that in the end we get the preferred surface reflectance product, or in the absence of surface reflectance we just have to work with the top of atmosphere reflectance product. Then we will focus on how this image that has been pre-processed can be displayed on a computer monitor and how it can be enhanced for purposes of image interpretation. And then towards the end of this lesson we will get into the idea of information extraction, in which you will perform an unsupervised classification on a Landsat image. And then later on in the course, once you have a land cover map that you have developed from a Landsat image using the process of image classification, you can extract knowledge and photogrammetric information; that is, you can make measurements on the map by measuring distances or areas of features of interest and so forth. It is also very important to compile the metadata and the image map lineage documentation. It is very important to know the image or data provenance. Who took the data? Where did it come from? What kind of processing has happened to it before it came to you? These are very important questions for a remote sensing analyst. And then image and map cartographic composition is very important as well. And I take it that you folks have lots of experience in making cartographic maps with a scale bar and a legend and so forth, such that the import of the map is very clear. And these maps can then be brought into geographic information systems so that geospatial modeling can be done with them. 
So once again, a reminder of the basics: a multispectral image captures an image in different wavelength bands, different colors; each image band is a two-dimensional matrix of numbers; and for an 8-bit image, these numbers range from zero to 255. So here is a visual example of the seven bands of the Landsat Thematic Mapper, which was deployed on Landsat 5, over Charleston, South Carolina. So you can see the seven bands here: band one being blue, band two being green, band three being red, band four being near infrared, band five middle infrared, band six thermal infrared, which has a lower spatial resolution, the pixel size being bigger here, I believe 120 meters for the Thematic Mapper, and then we have band seven, which is once again middle infrared. So therefore, if the image is a matrix of digital numbers, as we have discussed up to this point, it means that every image band is amenable to statistical analysis. So if there's statistical analysis involved, that means there are histograms involved. So once again, you have these digital numbers in this two-dimensional image matrix for each band, with values from zero to 255, okay? And the digital numbers in an image band can therefore be represented as a histogram. So if you look at the image over here on the right, you can see the histograms of the first four bands. Each histogram has digital numbers along the horizontal axis and count, or the number of pixels, along the vertical axis, okay? And once you have digital numbers in a two-dimensional matrix, then you can calculate statistics for it: what's the minimum value of the digital number in the image? What's the maximum value? What's the mean or the average? What is the median, and what is the standard deviation? And the shape of the histogram is indicative of the types of land cover represented in that image band, and the area of the histogram is proportional to the total area of the image. 
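To make this concrete, here is a small sketch with a synthetic 3x3 band, the values invented for illustration, showing that a band is just a matrix you can summarize with ordinary statistics and a histogram:

```python
import numpy as np

# A single 8-bit image band is just a 2-D matrix of digital numbers.
band = np.array([[10, 10, 200],
                 [50, 50, 200],
                 [10, 50, 255]], dtype=np.uint8)

# The basic band statistics discussed in the lecture.
stats = {
    "min": int(band.min()),
    "max": int(band.max()),
    "mean": float(band.mean()),
    "median": float(np.median(band)),
    "std": float(band.std()),
}

# The histogram counts pixels per DN; its total equals the image area
# in pixels, which is why the histogram area is proportional to the
# image area.
counts, _ = np.histogram(band, bins=256, range=(0, 256))
```

Here the histogram would show a cluster of dark pixels and a few very bright ones, the kind of shape that hints at distinct land cover types in the band.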
So this is just a graphic that is a review of basic statistics ideas. And you can see that for a normal distribution, the mean, median and mode all coincide, but you can have negatively skewed distributions and positively skewed distributions. And then you can have a uniform distribution where no mode exists. And all of these patterns will inform the interpretation of histograms of multispectral imagery acquired by Landsat or by any multispectral sensor. And this is also a reminder from statistics of the basic idea that if you look at a histogram of a normal distribution, the area within one standard deviation of the mean covers 68% of the area underneath the histogram. If you go out to two standard deviations, you're capturing 95.4% of the area underneath the histogram. And if you go out to three standard deviations, you're getting about 99.7% of the area underneath the histogram. So here is an example of Landsat image band statistics. And we can see that, for example, the Thematic Mapper has seven bands of imagery, and for each band you have the minimum digital number, the maximum digital number, the average digital number, and the standard deviation of all of the digital numbers in that particular band. And bear in mind that usually, if the standard deviation is large, that means you've got broad peaks that are spread out in your histogram, and if you have small standard deviations, then you tend to have sharper peaks in the histogram. And usually in remote sensing, the lighting conditions are not always optimal, and the image that is captured by the sensor has a narrow range of minimum and maximum digital numbers, as shown in the upper diagram below, okay? 
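The 68-95-99.7 rule is easy to verify numerically. The sketch below draws synthetic normal samples (the mean of 100 and standard deviation of 15 are arbitrary choices, not tied to any real band) and checks the fraction of samples falling within one, two and three standard deviations:

```python
import numpy as np

# Empirical check of the 68-95-99.7 rule on synthetic normal samples.
rng = np.random.default_rng(0)
x = rng.normal(loc=100.0, scale=15.0, size=100_000)

mu, sigma = x.mean(), x.std()
# Fraction of samples within 1, 2 and 3 standard deviations of the mean.
within = [float(np.mean(np.abs(x - mu) <= k * sigma)) for k in (1, 2, 3)]
# For a normal distribution, within is close to [0.683, 0.954, 0.997].
```

This is exactly the property the standard deviation contrast stretch discussed shortly relies on: a one-sigma window around the mean captures roughly two-thirds of the pixels.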
So if you stretch the histogram, that is, if you extend the minimum and maximum all the way to zero, which is the absolute minimum, and 255, which is the absolute maximum, you populate the darker and the brighter digital numbers, and the image ends up becoming brighter and clearer, and the contrast increases between different features such that it becomes easier for the human eye to interpret the image. So here is an example of a black and white image, the original image on the left and the contrast stretched image on the right, and you can see that the contrast has increased and you can more clearly discern the difference between the different features present in the image. So here are some more examples of stretching the histogram for contrast enhancement. So here is an example of a min-max contrast stretch, which is also known as a simple linear stretch, where in the original image you can see the minimum digital number is four and the maximum digital number is 104, and you stretch this image such that in the stretched image all the digital numbers from zero to 255 are populated, and now you will have an enhanced, brighter image in which there will be greater contrast between the different features. Similarly, another type of stretch is known as a standard deviation contrast stretch, in which you have, let's say, a sharply peaked histogram in the original image, and we stretch it just to the extent of plus or minus one standard deviation on each side, and that's a one sigma, or one standard deviation, contrast stretch. Similarly, you can have a two sigma stretch to capture more of the histogram, or a three sigma stretch, and so forth. So here's another graphic showing you non-linear contrast stretching as opposed to linear contrast stretching. 
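The two stretches just described can be sketched in a few lines; the function names are mine, and the 4-to-104 band below is a synthetic stand-in for the slide's example:

```python
import numpy as np

def minmax_stretch(band):
    """Linear (min-max) stretch: map [min, max] of the band to [0, 255]."""
    lo, hi = band.min(), band.max()
    return ((band.astype(np.float64) - lo) / (hi - lo) * 255).astype(np.uint8)

def stddev_stretch(band, n_sigma=1.0):
    """Standard deviation stretch: map [mean - n*sigma, mean + n*sigma]
    to [0, 255], clipping pixels that fall outside that window."""
    mu, sigma = band.mean(), band.std()
    lo, hi = mu - n_sigma * sigma, mu + n_sigma * sigma
    scaled = (band.astype(np.float64) - lo) / (hi - lo) * 255
    return np.clip(scaled, 0, 255).astype(np.uint8)

# A low-contrast band with DNs running from 4 to 104, as in the slide.
band = np.linspace(4, 104, 9).reshape(3, 3)
stretched = minmax_stretch(band)
```

The one sigma version trades some saturation in the tails for much stronger contrast around the peak of the histogram, which is why you pick the sigma multiplier to suit how peaked the original histogram is.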
So if you look at this diagram, the first diagram of the histogram stretch, in a linear stretch the entire histogram is stretched out in a proportional manner throughout, whereas in a non-linear stretch, parts of the histogram are stretched more than others depending on the particular application. So here's some imagery by which you can visually compare an original image and an image that has been linearly stretched, and see how the contrast has increased such that the human eye can discern the difference between the different features on the ground. So here's yet another example of a multispectral image that has undergone a linear stretch, and once again you can see how that makes this multispectral image more amenable to the human eye. So there are other ways that images can be enhanced as well, and image filters are utilized to enhance imagery by either sharpening or blurring an image. Sometimes you may want to sharpen an image such that you can discern the line work, the roads, the boundaries more clearly, and there may be certain applications in which you may need to smooth or blur an image that has too much noise in it. 
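As an illustration of those two filter families, here is a minimal sketch; the kernel values are standard textbook choices, not anything specific to this course's software. A low-pass mean kernel smooths noise, while a high-pass sharpening kernel exaggerates edges and line work:

```python
import numpy as np

def convolve2d(img, kernel):
    """Minimal 2-D filtering; edge pixels are left unfiltered."""
    k = kernel.shape[0] // 2
    out = img.astype(np.float64).copy()
    for r in range(k, img.shape[0] - k):
        for c in range(k, img.shape[1] - k):
            window = img[r - k:r + k + 1, c - k:c + k + 1]
            out[r, c] = np.sum(window * kernel)
    return out

# A 3x3 mean (low-pass) kernel blurs/smooths noisy imagery.
blur_kernel = np.full((3, 3), 1.0 / 9.0)
# A Laplacian-style (high-pass) kernel sharpens edges and boundaries.
sharpen_kernel = np.array([[0, -1, 0],
                           [-1, 5, -1],
                           [0, -1, 0]], dtype=np.float64)

img = np.zeros((5, 5))
img[:, 2:] = 100.0  # a vertical edge, like a road or field boundary
blurred = convolve2d(img, blur_kernel)
sharpened = convolve2d(img, sharpen_kernel)
```

On the synthetic edge, the blur softens the transition while the sharpen kernel overshoots on either side of it, which is exactly what makes line work pop out visually.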
Then you can also enhance an image by pan-sharpening multispectral imagery, and that means that multispectral imagery can be spatially enhanced by fusing a higher spatial resolution panchromatic image, which is just in black and white, with a lower spatial resolution multispectral image. So for example, a Landsat 8 multispectral image, which has a ground resolution, a pixel resolution, of 30 meters, can be pan-sharpened with the 15 meter panchromatic image that Landsat 8 takes. What that means is that in the new 15 meter multispectral image obtained after pan-sharpening the 30 meter image with the 15 meter panchromatic image, the spectral information remains the same, while the geometric information that the 15 meter panchromatic band brought with it, that is, line work and roads, is enhanced, such that you have more geometric detail but the spectral information remains the same in the pan-sharpened product. And it turns out that since an image is a raster, you can do raster algebra to enhance certain types of features on the ground, and so please click on the following hyperlink to quickly take a survey of Landsat surface reflectance derived indices; these are treated very well in your textbook and in your readings as well. A few important indices are the normalized difference vegetation index, the NDVI, which is very important; for example, this is obtained by doing the following raster algebra on a Landsat image or any multispectral image: the near infrared band minus the red band, all of it divided by the sum of the near infrared and the red bands. Similarly, there are other indices like the soil adjusted vegetation index, the normalized difference moisture index and the normalized burn ratio, and it turns out that there are dozens of indices in the literature, and it just depends on which problem you're working on as to which index you will select for your purpose. So we have some other important satellite constellations, and most of these are in the readings in your 
textbook. The first environmental satellite program was the Landsat program, which began in 1972. The SPOT program was begun by the French in 1978, and several SPOT satellites have been launched. Then we have the MODIS sensor, which is the Moderate Resolution Imaging Spectroradiometer, and that has 36 bands, but its spatial resolution is very low, with its pixels being on the order of 250 meters to a kilometer. Then there are the Indian remote sensing satellites, the IRS satellites 1A, 1B, 1C and 1D, and Cartosat. Then in around 2000 or so we started to have a proliferation of commercial high spatial resolution satellites. So high spatial resolution means on the order of about two and a half meters or less, and that includes the IKONOS, QuickBird, WorldView, GeoEye and RapidEye constellations. All of the satellites discussed up to this point in this slide are in your readings in your textbook, but from 2015 onwards there has been a proliferation of small satellite startups, and please view the following link to see a listing of satellite startups worldwide. And we can see that with all of these sensors being deployed, there's a data deluge that is coming our way; in fact, it's already here, and we need to develop a larger workforce to take all of this data and convert it to value-added information. So we have had a particular focus on Landsat systems and understanding the pre-processing of Landsat multispectral data. Now, all multispectral sensors produce data based on the spectral characteristics of that particular sensor, which means the wavelength bands of different sensors are usually different. In other words, if you look at the wavelength bands of SPOT in the blue region versus Landsat in the blue region, the wavelength ranges will be slightly different. So therefore it is important to review the sensor characteristics, like the band wavelength intervals and the spectral, spatial, radiometric and temporal resolutions that a sensor and satellite system afford, before you start 
working with the data from that sensor; this is very important. Now, all multispectral imagery can be classified with techniques that are very similar: the techniques that we bring to bear on Landsat imagery will work on SPOT imagery as well, and lessons learned in working with Landsat multispectral imagery apply to multispectral imagery from other satellite sensors, and actually will also apply to aerial sensors and even sensors that are deployed on drones. And it turns out that the characteristic spectral reflectance curve for each land cover class remains the same across sensors and is an essential aid for multispectral image display and interpretation. So this slide is about the relationship between the level of detail required in your study and the spatial resolution of representative remote sensing systems, as applied to vegetation inventories. If you're working at the global scale, you may want to work with a sensor like MODIS or AVHRR, which has a very high temporal resolution; it covers the earth very rapidly, such that you can get a map for the entire earth every few days. If you're working at the continental level, once again MODIS may be useful, but this is where Landsat starts getting to be very useful. Once again, if you're just looking at a particular forest, or if you're looking at a particular study area at the level, let's say, of a state or within a state, Landsat is extremely suitable for these types of studies. And if you start getting down to a very particular region, let's say a very particular state park, this is where aerial photography or aerial imaging starts to be extremely useful, where you can now have resolution with aerial imagery on the order of less than a meter, compared to 30 meters for Landsat. And if you are at a plot level, that is, you're looking at an area of a few acres or a few hundred acres, then you need high spatial resolution imagery; if you're working with satellites, then IKONOS and QuickBird will work, or 
aerial photography sensors flown on an airplane would be extremely useful, and this is where NAIP imagery, the National Agriculture Imagery Program data, would be extremely important. And if you have to make in situ measurements, to go and do ground truthing, why, then you may have to do some field work. And there is yet another domain opening up if you're working right at the plot level, and that is using drones, such that if you want to image up to a hundred acres or so in great spatial detail, the new emergent field is unmanned aerial systems. So once we have a multispectral image that has gone through the process of pre-processing, that means that you have at a minimum an orthorectified image, and preferably an atmospherically corrected image such that you have surface reflectance, it is ready for conversion to a thematic land cover map. And the conversion of imagery to a land cover map with different land cover types is known as image classification. So in lesson two, towards the end, we will start focusing in on image classification, and this will be a theme for the rest of the lessons and the rest of the course: how is it, and what are the techniques that we use, to classify an image? A remotely sensed image can be classified into land cover maps using two fundamental approaches: unsupervised classification and supervised classification. And in this lesson you will be performing an unsupervised classification lab activity. Please post any questions or comments that you may have on the lesson two discussion forum. Thank you.
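As a brief addendum to the mention of unsupervised classification above, here is a minimal sketch of the idea the lab will exercise. This is a generic k-means illustration with invented function names and a synthetic two-band scene, not the specific algorithm your lab software uses (remote sensing packages typically use ISODATA or a tuned variant): pixels are grouped purely by their spectral similarity, and the analyst assigns land cover labels to the resulting clusters afterwards.

```python
import numpy as np

def unsupervised_classify(bands, k, n_iter=20, seed=0):
    """Minimal k-means clustering of multispectral pixels.

    bands has shape (rows, cols, n_bands). Each pixel's vector of
    values is grouped into one of k spectral clusters; no training
    data is used, which is what makes this "unsupervised".
    """
    rows, cols, nb = bands.shape
    X = bands.reshape(-1, nb).astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to the nearest cluster center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its member pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels.reshape(rows, cols)

# Toy scene with two spectrally distinct regions: a dark "water"-like
# half and a bright, high near-infrared "vegetation"-like half.
scene = np.zeros((4, 4, 2))
scene[:, :2] = [10.0, 5.0]
scene[:, 2:] = [40.0, 90.0]
classmap = unsupervised_classify(scene, k=2)
```

On this toy scene the two halves fall cleanly into two clusters; on real imagery you would run many more clusters and then merge and label them by inspecting where they fall on the ground.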