Welcome to today's lecture on converting the radiance recorded at the sensor to reflectance. In the last lecture, we noted the several ways in which the atmosphere affects the radiance reaching the sensor. The atmosphere can act additively, contributing extra radiance as path radiance; it can reduce both the incoming and the upwelling radiance through the transmissivity terms; and it can add extra irradiance at the surface in the form of diffuse skylight. So the three components through which the atmosphere acts are the transmissivity terms (the multiplicative terms), E-down (the incoming irradiance due to diffuse skylight) and L-path (the path radiance). These are the ways in which the atmosphere affects the radiance reaching the sensor. We also came across the different ways in which we can do atmospheric correction: a radiative transfer model together with observed atmospheric profile data at the time of satellite overpass, a radiative transfer model combined with data from atmospheric models, image-based atmospheric correction, vicarious calibration procedures and so on. I also mentioned that if we have no atmospheric information and no access to radiative transfer models, we can go for some sort of image-based atmospheric correction. And what is the simplest image-based atmospheric correction? The most commonly used procedure is what is known as dark object subtraction. Before we see the dark object subtraction method, we will quickly glance through a few examples of how the variation in look angle affects the atmospheric effect and also the surface reflectance. The image in this particular slide was taken by a sensor known as MISR, which has the capability to look at the ground from several different angles.
Say this is the point where the sensor is; the ground point exactly below the sensor is what we call nadir. This particular sensor can tilt its cameras over a wide range of angles away from nadir and observe the same ground points from different directions. The main aim of launching this sensor was to study how objects look when viewed from different directions, essentially to study the directional component of surface reflectance. This image was taken on 29 February 2000 over the Canary Islands. One image is nadir-looking, with the sensor directly above the point. Another was taken with a 70.5-degree forward-looking camera; that is, the sensor looks ahead of its direction of motion at the surface being imaged. A third shows the same area imaged in the backward direction: the sensor has already crossed that area, but tilts its camera backwards and observes the same area again. Just due to the difference in the direction in which the sensor looks, the amount of atmospheric effect changes dramatically. In the nadir view the atmosphere looks pretty clear: we can see the ocean surface and the land surface of the Canary Islands. Whereas in both the 70.5-degree forward-looking and 70.5-degree backward-looking camera images there is so much haze that the image is not really clear. This haze is due to atmospheric scattering: as the sensor tilts to observe a particular land surface, the path length increases. It is the same principle we use for the sun; when the sun is at zenith the path length is shortest, and as the sun moves towards the horizon the path the light travels becomes longer.
When the sensor looks exactly at nadir, the path length between the land surface and the sensor is quite short. When the sensor tilts and looks at a farther surface, the path length between the land surface and the sensor is longer. The longer the path length, the greater the effect of the atmosphere, so scattering happens to a larger extent; that is clearly visible in these pictures. This is one example of how the difference in look angle affects the influence of the atmosphere on the image we get. Here is another example of how the look angle, or viewing angle, changes the surface reflectance. This image is taken over parts of the United States of America. The large image is nadir-looking, with the sensor directly overhead; the smaller images were acquired at roughly 45 degrees and 70.5 degrees in the forward direction and 45 degrees and 70.5 degrees in the backward direction, as the sensor moves along its track. We can clearly see that the same terrain, when imaged at different viewing angles, appears completely different. Each is a combination of three different bands to produce a colour image, but the colours differ markedly between views: here the ocean looks a little different, in this image the ocean looks very different; here we have some purple patches, whereas in this image they are absent. So the viewing angle makes remote sensing observations a little more complex.
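The path-length argument above can be made quantitative with a simple plane-parallel atmosphere assumption, where the relative path length through the atmosphere scales as one over the cosine of the view angle. This is only an illustrative approximation (not MISR's actual geometry model), but it shows why the 70.5-degree views look so hazy:

```python
import math

def relative_path_length(view_angle_deg):
    """Relative atmospheric path length for a plane-parallel atmosphere:
    1 at nadir, growing as 1/cos(view angle) as the sensor tilts."""
    return 1.0 / math.cos(math.radians(view_angle_deg))

print(round(relative_path_length(0.0), 2))    # 1.0 at nadir
print(round(relative_path_length(70.5), 2))   # 3.0 -- roughly triple the path
```

So at the 70.5-degree camera angles the radiation traverses roughly three times as much atmosphere as at nadir, which is why scattering, and hence haze, is so much stronger in those images.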
That is, the effect of the atmosphere changes with viewing angle: as the view moves farther and farther from nadir, the atmospheric effect increases. Similarly, a land surface feature will present a completely different picture, since the surface reflectance changes with the direction in which the sensor looks. This was just to show how the sensor viewing geometry affects the radiance recorded at the sensor. Now we go back to image-based atmospheric correction. I told you that when we do not have any atmospheric information we can go for image-based atmospheric correction, and dark object subtraction is the most commonly used such method. What is dark object subtraction? As the name suggests, in each band of the image, whether red, blue, green, NIR or any other, we assume that within a given image we will be able to find some pixels that are completely dark, that is, pixels with essentially zero reflectance. For example, in the NIR band, clear deep water bodies have almost zero reflectance, maybe one percent or so; water bodies appear completely dark in the NIR band. We have to identify such pixels in each band, pixels that should have zero reflectance, through image analysis and our knowledge of how different objects look in different bands. Then we calculate the radiance for those pixels using the image data. Say, for example, this is one particular pixel of a water body, and let us take the NIR band. In the NIR band the water body should be completely dark, with almost zero reflectance. If that is the case, the DN recorded in that pixel should be the minimum DN that the sensor is capable of recording.
While discussing the conversion of DN to radiance values, we noted that each sensor has characteristic maximum and minimum DN values based on the quantization level used. For an 8-bit sensor, the DN values can range from 0 to 255 or from 1 to 255 and so on, depending on the calibration of the sensor. So effectively, in the NIR band a water body should have the minimum DN value, because that is simply how water reflects. But some other DN will definitely be recorded, because the atmosphere contributes a path radiance term among other things. So what we assume is this: take the DN recorded in that particular pixel, convert it into radiance, and attribute whatever radiance is recorded for that pixel purely to path radiance. That is the assumption we make: the water body in the NIR band should appear completely dark, with zero reflectance. Zero reflectance means the incoming energy is not reflected back, so the radiance from the water body towards the sensor should theoretically be zero; yet some radiance will be recorded there. We calculate that radiance from the image data and assume that whatever radiance was recorded for that particular pixel is purely due to atmospheric path radiance. Then we take this estimated path radiance from the water body pixel and subtract it uniformly from the radiance recorded in all other pixels. For example, say the radiance calculated for the water body pixel is 10 units (some arbitrary units, assigned just for the sake of the example), and let the other pixels have radiances of 15, 18, 22 and so on.
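The DN-to-radiance step mentioned above is typically a linear calibration. As a minimal sketch, assuming a hypothetical gain and offset (real values come from the sensor's calibration metadata, and the numbers below are purely illustrative):

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Linear DN-to-radiance conversion: L = gain * DN + offset.
    Units are typically W m^-2 sr^-1 um^-1, set by the calibration."""
    return gain * np.asarray(dn, dtype=float) + offset

# An 8-bit band (DN 0-255) with assumed, made-up calibration coefficients.
dn = np.array([0, 10, 128, 255])
radiance = dn_to_radiance(dn, gain=0.5, offset=1.0)
print(radiance)  # [  1.    6.   65.  128.5]
```

Note that even DN = 0 maps to a non-zero radiance here because of the offset; the actual gain and offset differ per sensor and per band.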
From the radiance recorded in each pixel we subtract this 10: 15 minus 10, 18 minus 10, 22 minus 10, assuming that those 10 units of radiance are purely due to atmospheric path radiance. So the additive effect of the atmosphere, which adds a constant amount of radiation in the form of path radiance, is now removed uniformly. This is one of the simplest methods and is still used in some situations; effectively, it removes the path radiance effect. But while using this method we should realize that we are only correcting for the effect of path radiance. We are not correcting for the effect of atmospheric transmissivity, both tau-s and tau-v; that effect is still there. Nor have we corrected for the incoming diffuse skylight component. We correct the image only for path radiance. Our assumption is that path radiance is the major factor introducing error into the recorded radiance; that is a major assumption behind this method. One more thing to remember: in each band we must be able to find at least one dark pixel. Suppose our image is acquired over a large swath of land without a single water body pixel. Then in the NIR band we will not be in a position to find a pixel with zero reflectance, because apart from water bodies, all land surface features have high reflectance in the NIR band. So if our image contains only land surface, without even a single water body pixel, we cannot pick any pixel with zero reflectance. And the features that appear dark vary naturally from band to band; we must be able to find suitable pixels in each band to carry this out.
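The subtraction described above can be sketched in a few lines. This is a minimal dark object subtraction, assuming the darkest pixel in the band really is a zero-reflectance dark object (such as deep clear water in NIR); the band values reuse the 10/15/18/22 example from the lecture:

```python
import numpy as np

def dark_object_subtraction(band_radiance):
    """Estimate path radiance as the band minimum (the darkest pixel,
    assumed to be a zero-reflectance dark object) and subtract it
    uniformly from every pixel."""
    path_radiance = band_radiance.min()
    corrected = band_radiance - path_radiance
    return corrected, path_radiance

band = np.array([[10.0, 15.0],      # 10 = the water body (dark object) pixel
                 [18.0, 22.0]])
corrected, lp = dark_object_subtraction(band)
print(lp)         # 10.0
print(corrected)  # [[ 0.  5.] [ 8. 12.]]
```

Taking the band minimum automates the "find the darkest pixel" step; in practice an analyst may instead select the dark object pixels manually, which is exactly the subjectivity discussed later.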
Normally we take water bodies as the features with near-zero reflectance across most bands, because of the way water reflects. So, essentially, unless we find a dark pixel in each band of an image, we will not be able to carry out this procedure. That is one thing. The second thing is that the image may cover a very large region. Take one Landsat image: each image has an areal coverage of roughly 185 kilometres by 185 or 190 kilometres, a pretty large coverage. What do we do in such an atmospheric correction? We pick some two or three pixels in the image, assume their radiance to be path radiance, and subtract it uniformly. What are we doing here? We are assuming that the path radiance, the effect of the atmosphere, is uniform across the entire area over which the image was taken. That may not be the case; the atmosphere can vary very widely. It may be raining here today, while 5 kilometres to the east it is not raining and there is bright sunshine, and 10 kilometres further on it is totally cloudy. We have all seen such situations. So just imagine an image of 185 kilometres by 185 kilometres: we are assuming the atmospheric conditions are uniform within the entire image, which is often not the case. Now suppose we use this dark object subtraction (DOS) method and there is only a small water body in the scene. For example, take a large Landsat image, roughly 185 kilometres north-south and some 190 kilometres east-west, with a very large number of pixels, and assume the water body is located only in one spot, with no other water body pixel.
What we do is calculate the path radiance from that particular water body, assume it to be a constant, and subtract it from all the pixels within the image. But here we are assuming the effect of the atmosphere is uniform across the image, which may not be the case; the atmosphere may not be uniform. So the dark object subtraction method carries a lot of assumptions; those assumptions may not be satisfied, and even after correction the image may retain some errors. This method is also subjective. Subjective means it is the user who has to analyze the image, interpret it visually and pick out the pixels that appear dark, the ones with almost zero reflectance. If I do it, I may select one or two pixels; if someone else does it, they may select a different set of pixels where the recorded radiance differs. So essentially this method is subjective. Even though this method has many such disadvantages, it is still useful as a first-order solution. First-order solution means, as the popular saying goes, something is better than nothing: rather than leaving the image uncorrected for atmospheric effects, it is better to do some atmospheric correction. That is the principle behind the dark object subtraction method. But please remember, even after atmospheric correction using this method, the image will still contain some errors, because we have not corrected it for transmissivity and incoming diffuse skylight. These are some of the points we should keep in mind; I have listed the cautions in this particular slide. These are simplified models; each model has its own assumptions, and the performance of a model will vary spatially and temporally. For instance, this model does not correct for the transmissivity effect.
Sometimes the image may be taken over extremely humid areas where the water vapour content of the atmosphere is very high. Under such circumstances the transmissivity will be very low, because water vapour is a very good absorber as well as scatterer. So the atmospheric transmissivity will be quite low when the water vapour content is high. If we do only this DOS, neglecting the effect of transmissivity, the errors left in the image will be much higher. So such simple image-based models may work over certain regions and may not provide good results over others. We should always keep in mind that this method comes with certain limitations. So far we have seen atmospheric correction, the removal of the effect of the atmosphere. Apart from this, the surface topography also plays a major role in changing the reflectance. When I discussed the radiance reaching the sensor, in the slides that walked step by step through what the radiance reaching the sensor would be, I showed an example of a surface reflectance curve of vegetation and how the reflectance reaching the sensor would look, and I wrote that the radiance reaching the sensor is affected by three major conditions: solar illumination, the atmosphere and the surface topography. Solar illumination means the variation in incoming solar radiation; that we correct as soon as we calculate reflectance from radiance, since we divide the radiance by the incoming solar radiation, effectively removing the effect of solar radiation. Assume we have also done a perfect atmospheric correction, so the effect of the atmosphere is removed as well. The effect that might still be left is the topographic effect: due to the variation in topography, the reflectance or radiance recorded at the sensor may differ. Let us see an example.
Let us assume a small hilly terrain with the sun shining from this particular direction, and assume the sensor is looking at the ground from this other direction. Solar radiation will come in like this and fall on this side of the hill, while the other side will be under shadow. If the sensor is looking from this angle, it is effectively seeing the shadowed portion, where direct sunlight does not fall; only the diffuse skylight component is present, because the topography blocks the direct beam. Under such circumstances the radiance recorded for that side will be totally different, and it may appear really dark. On the other hand, take the case where the sun is here and the sensor is in the same direction as the sun: the solar radiation falls on the surface, and the sensor looks along the same angle at which the radiation comes in. Then the sensor records an extremely bright spot over that area; we call these reflectance hot spots, where the reflectance value may be very high. So, just because of the presence of a hill, the amount of solar radiation reaching a particular patch of ground changes, and due to the sensor's viewing geometry the radiation reaching the sensor differs: it may be too bright or too dark. The presence of topographic features, or any variation in surface elevation, may therefore introduce certain errors into the image. One example I will show in this slide is a pair of images acquired over the same region of the United States, the Grand Canyon: one acquired in the summer season and the other in the fall, or autumn, season.
For the image on the left, the summer one, the sun was almost at zenith: if this is the land surface and this vertical is the zenith, the sun was close to it when the image was taken. For the fall image the sun's zenith angle is very large, that is, the sun is low in the sky. If you look at the same area, not much on the ground has changed between these two images. In the summer image, when the sun is close to zenith, there is not much shadow effect; it appears pixelated, but you can still see that almost all pixels are relatively bright. On the other hand, when the sun is at a very low angle, we can see a large amount of shadow cast over the scene. For these two images the sensor geometry was exactly the same, so the sensor had the same viewing angle, and the land surface also remained fairly constant. The only thing that changed is the solar zenith angle, and just because of this the topography looks completely different in the two images: here there is not much shadow, and here there is a high amount of shadow. This is because of the presence of ridges and valleys; this area has a lot of ridges and valleys. So just because of the topography we see this particular effect. This is how surface topography changes the reflectance or radiance recorded in a pixel. How do we correct for it? Before going into the correction, we should first learn a few concepts and terminologies. The first thing is that to correct for topography we need information about the surface elevation. Say our image is collected at 30-metre pixel resolution.
At the same pixel size we should have what is known as a digital elevation model, which effectively gives, for any ground point (x, y), the elevation z. So if this is the terrain: for this (x, y), what is the z value; for that (x, y), what is the z value; and so on. If we create a matrix like this, it is what we call a DEM, a digital elevation model. If we have such a DEM, then we have the elevation at every point, and from it we can calculate certain properties of the terrain: the slope and the aspect. What is slope? Slope is the angle the terrain makes with respect to the horizontal. Say a mountain is shaped like this: this side has a slope of theta-1 from the horizontal, and the other side has a slope of theta-2 from the horizontal. Aspect is the direction in which the slope is oriented, measured with respect to some reference direction. Say this face is facing east; then the aspect of this slope is towards the east, and for the other side the aspect is towards the west. By convention we measure aspect with respect to north, clockwise. So if this side faces exactly east, the aspect is 90 degrees from north, and for the other side, going clockwise, it is 270 degrees. Aspect, then, is the clockwise angle from north giving the direction in which the slope is oriented. Say a hill stands like this, with north in front of me, pointing into the screen: this face is facing east.
So the aspect is 90 degrees clockwise from north. For the other face, going clockwise from north almost a full circle and stopping here, the angle is 270 degrees. Aspect is the clockwise angle measured from north. From the DEM we can calculate both the slope and the aspect of the terrain. Once we have those, we also need to know the solar zenith angle, measured with respect to the vertical, and the solar azimuth angle. The azimuth angle is a concept very similar to aspect: it is the direction of the sun measured from north. Say north points into the screen; if the sun is oriented here, the azimuth angle is 90 degrees from north, whereas if the sun is over there, the azimuth is 270 degrees, and so on. The azimuth is the horizontal angle of the sun's direction measured from north, just like the aspect. So if we know the solar zenith angle, the solar azimuth angle, the slope of the terrain and the aspect of the terrain, we can carry out the calculations for topographic correction. Essentially, consider a sloped terrain: the dark black line is the sloped surface. This is the true vertical, but since the terrain is sloped, the surface normal now points in a different direction. The surface normal for this terrain is not vertical; it is tilted away from the vertical.
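Computing slope and aspect from a DEM can be sketched with finite differences. This is a minimal version, assuming the usual raster orientation (rows run north to south, columns west to east) and reporting aspect clockwise from north, matching the convention above; `cellsize` is the DEM ground spacing, for example 30 metres:

```python
import numpy as np

def slope_aspect(dem, cellsize):
    """Slope (degrees) and aspect (degrees clockwise from north) from a DEM.
    Assumes rows run north-to-south and columns west-to-east."""
    d_row, d_col = np.gradient(dem.astype(float), cellsize)
    dz_dx = d_col    # elevation gradient toward the east
    dz_dy = -d_row   # toward the north (rows increase southward, hence the sign)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    # Aspect = direction of steepest descent, measured clockwise from north.
    aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy)) % 360.0
    return slope, aspect

# A plane dropping 1 m per metre toward the east faces east: aspect = 90 deg.
x = np.arange(5) * 30.0
dem = np.tile(100.0 - x, (5, 1))
slope, aspect = slope_aspect(dem, 30.0)
print(round(float(slope[2, 2]), 1), round(float(aspect[2, 2]), 1))  # 45.0 90.0
```

Real GIS packages use slightly different neighbourhood stencils and handle flat cells specially, but the geometry is the same: slope from the gradient magnitude, aspect from its direction.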
So here we need to know the solar zenith angle (from the vertical), the solar azimuth angle, the site slope and the site aspect. If we know all these, we are in a position to calculate the incidence angle: due to the slope, the incoming solar radiation now falls at a different angle, and we can calculate what that angle is. There is a simple formula for the cosine of this angle of incidence: cos i = cos(theta_s) cos(eta) + sin(theta_s) sin(eta) cos(phi_s - phi_a), where theta_s is the solar zenith angle, eta is the slope, phi_s is the solar azimuth angle and phi_a is the aspect. The incidence angle tells us that, because of the slope, the solar radiation does not fall at the angle it otherwise would; there is a difference. Using the solar zenith and slope angles together with the azimuth and aspect angles, we can calculate the angle at which the solar radiation actually falls on the surface. Once we have all this information, there are certain topographic correction models that help us correct for the effect of terrain. We are not going to go deeper into such models, for want of time. But remember one thing: many different models exist for topographic correction, with varying levels of complexity. Some models assume the surface is Lambertian; some do not, that is, they consider that the surface looks different from different angles. Based on the assumptions within the models and the inputs required, the final output may look quite different. Just as an example, I will show two models, but we will not go into their details.
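The incidence-angle formula, and the simple Lambertian cosine correction built on it, can be sketched as follows. The function names are ours, and all angles are in degrees, matching the theta_s, eta, phi_s and phi_a of the lecture:

```python
import numpy as np

def cos_incidence(solar_zenith, slope, solar_azimuth, aspect):
    """cos(i) = cos(theta_s)cos(eta) + sin(theta_s)sin(eta)cos(phi_s - phi_a)."""
    tz, sl = np.radians(solar_zenith), np.radians(slope)
    daz = np.radians(solar_azimuth - aspect)
    return np.cos(tz) * np.cos(sl) + np.sin(tz) * np.sin(sl) * np.cos(daz)

def cosine_correction(radiance, cos_i, solar_zenith):
    """Lambertian cosine correction: scale radiance by cos(theta_s)/cos(i),
    normalizing each pixel toward what a flat surface would record."""
    return radiance * np.cos(np.radians(solar_zenith)) / cos_i

# Flat terrain (slope = 0): cos(i) reduces to cos(solar zenith).
ci_flat = cos_incidence(30.0, 0.0, 135.0, 0.0)
# A 20-degree slope facing directly toward the sun: i = 30 - 20 = 10 degrees.
ci_sunny = cos_incidence(30.0, 20.0, 135.0, 135.0)
print(round(float(ci_flat), 4), round(float(ci_sunny), 4))  # 0.866 0.9848
# The sun-facing slope is over-illuminated, so the correction darkens it.
print(round(float(cosine_correction(50.0, ci_sunny, 30.0)), 3))  # 43.969
```

Note how, for the sun-facing slope, cos(i) exceeds cos(theta_s), so the correction factor is less than one and the pixel is darkened; a slope tilted away from the sun would be brightened instead.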
These are two different models currently in use to correct for the topographic effect: one is called the cosine correction model, which assumes the surface is Lambertian, and another is called the C-correction model, and so on. As I said before, some models assume a Lambertian surface and some assume a non-Lambertian surface; the complexity varies, but all these models correct to the maximum possible extent, not perfectly. One example is given here: this image clearly shows topographic effects, with patches appearing completely dark due to shadow, bright here and dark there; after topographic correction the reflectance looks much more uniform. So topographic correction provides some improvement over the raw image. As a summary, in this lecture we have seen a simple image-based atmospheric correction method called dark object subtraction, and also a basic introduction to topographic correction: how topography changes the radiance recorded at the sensor, and some simple models to correct for it. Thank you very much.