Hello everyone, welcome to the next lecture in the course Remote Sensing: Principles and Applications. In the last class, we discussed the important concepts of spatial resolution and pixel size, how the two can differ, and what effect that has on the image. In this lecture, we will go a little deeper into how certain properties of the object space help us identify objects more clearly. In the last class, I told you that our ability to resolve features depends on the GIFOV, the ground-projected instantaneous field of view. As the GIFOV increases, more and more spatial averaging occurs, and hence our ability to resolve two adjacent features decreases. Now, if we ask a layperson what the term spatial resolution means, the usual intuitive answer is: the smallest object that can be detected in an image. That is the most common definition one can expect; it is intuition. But this is not quite the case, because we have seen the complex nature of a remote sensing image acquisition system and how many different elements play a role in our ability to identify objects. So does the smallest detectable object determine the spatial resolution? No. Most of the time we cannot identify objects much smaller than the pixel size; if the pixel size is, say, 30 metres, we will most likely not be able to identify an object of 10 metres by 10 metres. But under certain circumstances, we can clearly identify objects that are much smaller than both the pixel size and the GIFOV of the system. An example of when this can happen is given in this slide. The first image is taken from the Thematic Mapper sensor on Landsat 5, where both the GIFOV and the pixel size are 30 metres.
So here, the GIFOV is 30 metres and the GSI, the ground sample interval, is also 30 metres: one pixel corresponds to a 30 metre by 30 metre patch of ground. The second image is MODIS band 2, where each pixel corresponds to 250 metres, so one pixel covers a 250 metre by 250 metre patch of ground. Look at these two images. Even though the MODIS image appears much more averaged and pixelated, we are still able to see certain features. This road is clearly visible in the Landsat image, and it is also visible, to some extent, in the MODIS image. We cannot clearly distinguish what it is, but there is still some signal, and the road is quite apparent. Similarly, this other line, most likely a road or perhaps a canal, is clearly visible. What exactly is happening here? The road is certainly not 250 metres wide; it may not even be 30 metres wide. Most likely its width is less than 30 metres, yet it is visible in the Landsat image. That part is understandable: a road 10 or 20 metres wide occupies more than a substantial fraction of a 30 metre pixel. But it is also being seen at the much coarser MODIS resolution. How is that possible? It is possible because of certain properties of the object space, by which we mean the space where the actual object is located on the Earth's surface. The first major characteristic is the contrast in the object space. What is contrast in the object space? Let us go back to the previous example and take one GIFOV element, the patch on the ground seen by a single detector. The energy sensed by a single detector element is determined by what is coming from each of the features located within that GIFOV. Suppose one GIFOV falls entirely on pure sand. The entire GIFOV is then filled with signal from sand, with more or less uniform values.
In the visible and near-infrared bands, sand has a fairly high reflectance, so a pure sand pixel normally appears bright; think of how beach sand looks bright to our eyes because it reflects a good portion of the incoming energy. Now take one GIFOV that contains a mix of road and sand. What will happen in this case? Suppose, as a rough example, that the sand reflects back 30% of the incoming energy, while the road surface has a poor reflectance of, say, 10%, whatever the band. Then the road portion of the surface reflects only 10% of the energy, while the sand portion reflects 30%. Let us assume the incoming energy is the same across the entire GIFOV. If the incoming energy is 100 units, then for the pure sand pixel, pixel A, the outgoing energy will be 30 units, because 30% of the energy is reflected back. For pixel B, which contains a mix of sand and road, the outgoing energy will be the area-weighted average of the two. If each of them occupies half the GIFOV, the effective reflectance is 20%, so the outgoing energy will be only 20 units, versus 30 units for the pure sand pixel. So most of the pure sand pixels here will send a high, more or less uniform radiance to the sensor because of their pure sand content.
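The area-weighted averaging described above is the standard linear mixing model, and it can be sketched in a few lines of code. The reflectance values (30% for sand, 10% for road) and the 50/50 split are the illustrative numbers from the lecture, not measured values.

```python
# Linear mixing model for a mixed pixel: the outgoing energy from a GIFOV is
# the incoming energy times the area-weighted average reflectance of the
# cover types inside it.

def mixed_pixel_energy(incoming, fractions, reflectances):
    """Outgoing energy from a GIFOV given area fractions and reflectances."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    effective_reflectance = sum(f * r for f, r in zip(fractions, reflectances))
    return incoming * effective_reflectance

# Pixel A: pure sand (30% reflectance)
pixel_a = mixed_pixel_energy(100.0, [1.0], [0.30])
# Pixel B: half sand, half road (10% reflectance)
pixel_b = mixed_pixel_energy(100.0, [0.5, 0.5], [0.30, 0.10])
print(pixel_a, pixel_b)  # 30.0 20.0
```

The same function works for any number of cover types, which is why even a small road fraction inside a mostly-sand GIFOV still lowers the recorded value slightly.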
On the other hand, the GIFOV elements covering both sand and road will deliver a lower radiance to the sensor; I am speaking here from the sensor's perspective. At the sensor, a GIFOV covering a mix of road and sand will record a lower incoming radiance than the pure sand pixels around it. If you plot the values across the two-dimensional scene, the DNs along the road, or equivalently the radiance arriving at the sensor from those elements, will be lower than those from the surrounding area. Because of this smaller radiance, those pixels appear darker to our eyes: the entire surrounding area is bright, and this one strip alone is dark. Just by seeing darker pixels aligned along a line, we are able to infer that it may be a road, because the feature appears linear, dark, and smooth. But remember one thing: the road appears much sharper in the Landsat image than in the MODIS image because of the much larger GIFOV in the latter. The road will not be split neatly in half within every GIFOV. One element may contain a larger sand component, another a larger road component; the orientation of the road within a GIFOV and the percentage it occupies will vary from pixel to pixel. But even if only a small portion of a GIFOV is occupied by the road, there will definitely be some influence on the outgoing radiance from that element, and hence on its DN.
Hence, the road's width and similar details cannot practically be inferred from the coarse image; in the Landsat image, the exact extent of the road is much clearer. Still, we are able to see that a road is there. So what enabled us to identify this road? The contrast between the GIFOVs or pixels containing a mix of road and sand and the pure pixels containing only sand; in other words, the contrast in the object space. Almost all pixels along the road are mixed pixels, the term we use when more than one feature is present within a pixel. There will always be a fraction of road plus a fraction of sand within the GIFOVs covering that area, and because of the difference in reflectance between the two features, the outgoing radiance will be lower than that of the surrounding pure sand pixels. So we get a clear contrast: one line appears much darker than the surrounding sandy areas, which helps us identify the object clearly. This is the important concept of contrast in the object space. Here the contrast is very high, because the sand is much brighter than the road, and that is what enabled us to identify a narrow road even at the much coarser pixel size of 250 metres. The next important concept that helps us identify small objects at coarse pixel sizes is the point spread function (PSF), together with the related modulation transfer function (MTF). These are related concepts; we are not going to go into them in depth, but I will introduce what each one is. So what is a point spread function? Let us take a lens.
Suppose light from a point source passes through the lens: essentially just a single ray of light. Ideally, light from a point source should be imaged as a point by the lens. In reality, that does not happen. Almost all optical systems, whether lenses, mirrors, or combinations of them, have a limitation in imaging a point source: a point source is not imaged as a point but as a set of concentric rings around a central spot, as the example in this slide shows. Here you can see two point sources; rather than being imaged as two separate points, they are imaged as two sets of concentric circles, a bright central spot, then a dark ring, then a fainter bright ring, and so on. This is a limitation of any optical system. None of them, lenses, mirrors, or their combinations, is perfect; they all have some deficiency like this, so a point is never imaged exactly as a point. And the optics is only one part of the story. In a remote sensing sensor, many more subsystems are involved: the lens assembly is just one part; after the energy is collected by the optics it falls on the detector, and after the detector senses the energy, it passes the signal on to the amplifiers and the rest of the electronics.
So there are many other components involved, and they further degrade the incoming signal. The detector itself has a finite size; it is not a point. Whatever energy arrives is not recorded at a single point but is spread over the detector's area, which essentially corresponds to one full pixel in the image. Because of this finite detector size, and because of limitations in the electronics, in how we sample and how we quantize, additional point spreading occurs. We also know that the Earth is moving underneath the satellite while the satellite itself is moving. From our experience with ordinary photography, we know that when the object moves while a photograph is being taken, the image is blurred; it will not be crisp. Here, both the sensor and the Earth are in continuous motion relative to each other, and this causes a small amount of blurring in the image, even if we cannot visibly perceive it. All these things combined determine whether a single point in the object space is imaged as a single point in the image space; and we already know that it will not be, since a digital image is not continuous. The optical components, the finite size of the detector, the electronics involved, and the relative motion of the Earth and the satellite all combine to make a single point spread over a small area.
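A minimal sketch of this spreading effect: a single bright point in the object space ends up spread over several pixels in the image space. The Gaussian kernel here is only a stand-in for the combined PSF of optics, detector size, electronics, and platform motion; a real sensor's PSF has a different, system-specific shape.

```python
import numpy as np

def gaussian_psf(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel used as a stand-in PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def apply_psf(scene, psf):
    """Convolve the scene with the PSF (direct 2-D convolution, zero padding)."""
    pad = psf.shape[0] // 2
    padded = np.pad(scene, pad)
    out = np.zeros_like(scene, dtype=float)
    for i in range(scene.shape[0]):
        for j in range(scene.shape[1]):
            out[i, j] = np.sum(padded[i:i + psf.shape[0], j:j + psf.shape[1]] * psf)
    return out

scene = np.zeros((11, 11))
scene[5, 5] = 1.0                       # a single point source in object space
image = apply_psf(scene, gaussian_psf(5, 1.0))
# The total energy is conserved, but the centre pixel now holds only a
# fraction of it: the point has spread into its neighbours.
print(image[5, 5], image[5, 6])
```

The same convolution is why a sharp road edge in the object space comes out slightly soft in the image: every point along the edge leaks a little energy into adjacent pixels.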
So each of the different points in the object space spreads over a small area in the image, which causes a small amount of blurring; the image will not be perfectly crisp. And we all know that if an image is blurred, our ability to identify objects within it goes down; the greater the blurring, the harder it becomes to identify small objects. For objects to be clearly identified, a system would have to reproduce whatever is in the object space exactly in the image space; in real remote sensing sensors, that does not happen, and the degradation affects our ability to identify objects. But if a sensor is extremely good, and many of today's high spatial resolution sensors do have very good system properties, we may be able to identify objects clearly. As an example, take two hypothetical systems with exactly the same GIFOV, GSI, look angle, scan angle, everything identical, except that the PSF of system A is a little poorer than that of system B. In that case, the image acquired by system B will be much sharper, and we will be able to identify objects much more clearly in it than in the image acquired by system A. The next important concept I mentioned is the MTF, the modulation transfer function. To put it simply, the MTF describes how the contrast in the object space is transferred to the image space. We are not going in depth into the MTF, but here is a simple example. Consider an object space consisting of alternating dark and white bands, and look at the image signal profile across it.
The signal is 0 where the pattern is extremely dark and 100 percent where it is extremely white, alternating 0, 100, 0, 100: a very high-contrast object space. Now let this signal pass through a system, through the optics, the electronics, everything. Because of the limitations of the system, when the image is produced, the signal may be reproduced with reduced contrast: the black becomes grey and the white becomes a duller, slightly brighter grey rather than pure white. In a poorer system, the same object comes out with even less contrast: the black appears as a fairly bright grey and the white as an impure, greyish white. You can see that whatever is in the object space is not represented properly in the image. This is what the MTF expresses: how faithfully the contrast in the object space is reproduced in the image space. If the contrast is maintained properly, if the black and white tones of the object space come out exactly as black and white tones, that is fine. But as I said, every sensor element comes with its own limitations, so there will be variation in the grey tones; everything in the image is a grey tone, with black at the extreme low value, white at the extreme high value, and greys in between. If the grey tones of the object space are not reproduced properly in the image space, the MTF is low; in our example, the first reproduction has a somewhat higher MTF than the second. So our ability to identify objects also depends on the MTF of the system. These two characteristics, the point spread function and the MTF, together define our ability to see small objects in the object space.
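The contrast transfer in the bar-pattern example can be put in numbers using the Michelson modulation, with the MTF (at that bar spacing) as the ratio of the modulation in the image to the modulation in the object space. The grey levels below are made-up numbers chosen to mirror the lecture's sketch.

```python
# Modulation ("contrast") of a periodic bar pattern, and the MTF as the
# ratio of image modulation to object-space modulation.

def modulation(i_max, i_min):
    """Michelson modulation: (max - min) / (max + min)."""
    return (i_max - i_min) / (i_max + i_min)

object_mod = modulation(100, 0)   # pure white / pure black bars -> 1.0
image_mod  = modulation(80, 20)   # white dulled to 80, black lifted to 20
mtf = image_mod / object_mod
print(object_mod, image_mod, mtf)  # 1.0 0.6 0.6
```

A poorer system might render the same bars as, say, 65 and 35, giving a modulation of 0.3 and hence a lower MTF, which matches the second, washed-out reproduction in the slide.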
So far, among the relevant properties, we have seen the contrast in the object space, the point spread function, and the MTF, the modulation transfer function, of the system. The next is the signal-to-noise ratio of the sensor. Each sensor produces a certain amount of characteristic noise within itself; again, no system is perfect. That noise always mixes with the incoming signal from the ground, and the two are recorded together. If the signal component is much larger than the noise, we are fine. That was in fact one of the aims of the Landsat MSS design: to increase the signal relative to the system noise, which is one of the reasons its GIFOV was made larger than its GSI. The signal-to-noise ratio is, in a simple sense, the amount of signal divided by the amount of noise. If the noise is low, the S/N ratio is high; likewise, if the signal is much higher for a constant noise level, S/N is still high. So increasing the S/N ratio, either by increasing the signal or by reducing the noise produced within the system, helps improve image quality. All of these things combined, contrast, point spread function, modulation transfer function, and signal-to-noise ratio, together help us identify objects that are much smaller than the pixel size of a given sensor system. If there is very high contrast in the object space and the system is able to reproduce it faithfully, we will be able to see even much smaller objects. And just by looking at the context in the image, we will also be able to decipher what the object is. In the example of the road and sand, the desert example, we were able to interpret that linear feature as a road by looking not only at its tone but also at its shape, how it is oriented, and many other cues.
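A toy illustration of signal-to-noise ratio: the same constant ground signal recorded by two sensors with different internal noise levels. The signal level and noise standard deviations are arbitrary illustrative numbers, and the zero-mean Gaussian noise model is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_snr(signal_level, noise_sigma, n=100_000):
    """Empirical S/N of a constant signal plus zero-mean Gaussian sensor noise."""
    recorded = signal_level + rng.normal(0.0, noise_sigma, n)
    return recorded.mean() / recorded.std()

good_sensor = empirical_snr(100.0, 2.0)   # low noise  -> S/N around 50
poor_sensor = empirical_snr(100.0, 20.0)  # high noise -> S/N around 5
print(good_sensor, poor_sensor)
```

Either raising the signal (e.g. by enlarging the GIFOV, as in the Landsat MSS) or lowering the noise moves the ratio in the same favourable direction.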
So, essentially, by studying this variation in contrast with respect to the spatial context, we are able to interpret what feature is present. What have we seen in today's class? The smallest object seen in an image should not necessarily be treated as the spatial resolution; that common definition, that the smallest object visible in an image is the spatial resolution of the system, is not true. Under certain circumstances, we are able to see features that are much smaller than the pixel size or the GIFOV of the sensor, and the factors that control this are what we have seen in today's class. We will continue with the remaining topics in the coming lectures. Thank you very much.