Hello everyone, welcome to the next lecture in the course Remote Sensing: Principles and Applications. Today we continue with our topic of remote sensing image acquisition and the characteristics of a remote sensing system. In the last class we discussed how an image is acquired from satellites; for example, in a geostationary orbit the scanner has to move in both the north-south and the east-west directions in order to acquire a two-dimensional image. Then we saw some important concepts: IFOV, GIFOV, FOV and GFOV. The IFOV is the angle subtended by a single detector element at the optical point of the system, where the ray from the earth's surface enters the lens or optical system. If we bring the orbital height into the picture and combine it with this IFOV, we get the ground size seen by each detector element: every detector element has a corresponding ground footprint, determined by the IFOV angle and the orbital height, and we call that the GIFOV, the ground-projected IFOV. Whatever features lie within that GIFOV, all their radiances are averaged out and recorded as one single value by that detector. The next important concepts we saw were FOV and GFOV. The FOV is the total scan angle swept by the scanner, or, in the case of a push broom sensor, the total angle subtended by all the detectors in the cross-track direction at the optical point where the light ray enters from the earth's surface. Correspondingly, combining the FOV with the orbital height of the satellite gives us what is known as the swath width. Today we are going to see an important concept: spatial resolution. So, what exactly is spatial resolution?
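To put these definitions in numerical form, here is a minimal sketch. Under the small-angle approximation, the GIFOV at nadir is simply the IFOV (in radians) times the orbital height, and the swath width follows from the full FOV. The specific numbers below are my own illustrative choices, roughly Landsat-class, not values from the lecture.

```python
import math

def gifov(ifov_rad: float, height_m: float) -> float:
    """Ground-projected IFOV at nadir (small-angle approximation):
    footprint size = IFOV angle x orbital height."""
    return height_m * ifov_rad

def swath_width(fov_rad: float, height_m: float) -> float:
    """Swath width on the ground from the total FOV and orbital height."""
    return 2.0 * height_m * math.tan(fov_rad / 2.0)

# Assumed numbers: an IFOV of ~42.5 microradians at a 705 km orbit
# gives a ~30 m footprint; a 15-degree FOV gives a ~185 km swath.
print(gifov(42.5e-6, 705e3))                  # ground footprint in metres
print(swath_width(math.radians(15), 705e3))   # swath width in metres
```

The same two relations explain why a coarser IFOV or a higher orbit both enlarge the ground footprint, and hence degrade our ability to resolve features.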
There is no clear-cut definition for spatial resolution, but the word resolution itself tells us the idea: our ability to resolve, or distinguish between, two features. Spatial resolution therefore means our ability to distinguish two features in the spatial domain. Imagine two different objects on the earth's surface next to each other, and you take an image of them. Whether we are able to distinguish these two objects separately or not tells us about the spatial resolution of the system. How close two objects can be and still be resolved, or how far apart they must be to get resolved, all depends on the spatial resolution. Our ability to resolve two features in the spatial domain improves when the objects are far apart, which is one case, or when the GIFOV of the system is smaller in size. Say there are two different objects here, object A and object B. If the GIFOV of the system covers the entire area surrounding these two objects together, then the radiance coming from the two objects is collected together and recorded in the sensor as one single value. It is then highly possible that we will not be able to distinguish these two features; they may appear as one single signal in the image rather than as two different features. On the other hand, if the two features A and B are there and the GIFOV is small enough, the energy coming from the two objects reaches the sensor as two separate values, using which we will be able to identify the two objects separately.
So, the spatial resolution is in turn determined, or influenced to a large extent, by the GIFOV of the system: our ability to resolve features in the image depends strongly on the GIFOV size. The coarser the GIFOV, and by coarse I mean very large, say on the order of 500 meters, 1 kilometer, 5 kilometers and so on, the more the radiance from many different earth surface features is averaged out, and our ability to resolve them separately decreases; if the GIFOV is finer, our ability to distinguish features on the earth's surface increases. That is one thing. The next concept is pixel size in the image. Normally, whenever we download a satellite image, they will say each pixel is 30 meters, 500 meters and so on. Depending on the sensor, there is a characteristic pixel size: each pixel in the image represents a certain area on the ground. Take a Landsat image as an example, say from the Landsat 7 ETM+ sensor: in bands 1 through 5, each pixel is roughly 30 meters, meaning each pixel corresponds to a ground area of 30 meters along x and 30 meters along y, a square pixel. That is the meaning of pixel size. But what determines this pixel size? Does the GIFOV determine it? The answer is no, the GIFOV does not determine the pixel size. The pixel size is determined by the sampling time in the case of a line scanner or whiskbroom scanner, and by the distance between two adjacent detector elements in the case of a push broom scanner. I will make this clear with a detailed explanation. Let us take an example: a whiskbroom scanner like this, scanning the ground from this point to this point.
So, this is the nadir. When the whiskbroom sensor scans the ground surface, it first subtends a small GIFOV and collects all the energy from it; then, since it moves continuously, it collects the signal from the next GIFOV. The movement of the scanner is continuous and it collects energy continuously from the ground surface: at each instant it subtends a small GIFOV on the ground and collects whatever energy falls within it, but the process is continuous, without any gaps. The signal coming out of the scanner is therefore a continuous signal; I am drawing it here from one of the examples we saw earlier in the image acquisition process. Let us say the signal coming from one particular scan line on the earth's surface looks like this: here there is a very high amount of incoming radiance, then it suddenly drops, then it slowly increases and drops again. It is a continuous stream of energy across one entire scan line. But digital systems cannot store this energy continuously, as we already saw, so they do what is known as sampling. At definite time intervals delta t, the incoming energy is sampled, and that sampled value is sent to the electronics for amplification and quantization. We already saw this in the previous classes when we discussed the image acquisition and image formation process. So this continuous stream of incoming energy is sampled at a certain time interval delta t, and that value alone is sent to the electronics for quantization and saving as an image.
So, within this time delta t, the scanner will have moved a certain distance on the ground: for each delta t time period the scanner moves some distance, in the next delta t it moves again, and so on. Mark the scanner's positions on the earth's surface: let the scanner be at point A; after delta t it has moved to point B; after another delta t it has moved to point C, et cetera. So there is a certain distance the scanner moves in each delta t period, and the sampling that occurs every delta t, together with the corresponding ground distance moved by the scanner, determines the pixel size. Say in each delta t instant the scanner covers a distance of 30 meters on the ground; then one sample is collected for every 30 meters. At the next delta t, that is at 2 delta t, the scanner has moved a further 30 meters in the across-track direction, so another sample is collected. For every 30 meters of ground distance we collect one sample; hence the pixel size will be 30 meters. The pixel size is determined by our sampling interval, or equivalently by the corresponding ground distance covered by the scanner during that sampling interval. This is known as the GSD, ground sampling distance, or the GSI, ground sampling interval. The GSI refers to the time between samples, which is on the order of a few microseconds in the case of whiskbroom scanners: say every 6 microseconds one sample is collected, or every 10 milliseconds one sample is collected, and so on. That is the GSI.
But for that time interval delta t, what is the ground distance covered? That is the GSD, the ground sampling distance: say within a time interval of 6 microseconds the scanner has moved a distance of 30 meters in the across-track direction. So at 1 delta t, 30 meters; at 2 delta t, another 30 meters; and so on. That GSD, the distance travelled by the sensor, determines the pixel size of whiskbroom or line scanners. In the case of push broom scanners, where there is no scanning involved but multiple detectors, what determines the pixel size? The pixel size is determined by the distance between two adjacent detector elements. Let us take an example: a push broom sensor looking like this, with the across-track direction here, and say we have some 4 detectors. The distance from the left edge of one detector element to the left edge of the next detector determines the pixel size. If the detectors are arranged right next to each other without any gap, then the distance between them is simply the size of a detector element; this determines the pixel size in the image, it is essentially the GIFOV for that particular pixel, and it also determines the GSD. Now consider a hypothetical example where one detector element of width W is here, then there is a gap of width W without any detector, then another detector of width W, then a gap, then a third detector, and so on. If a hypothetical sensor is designed like this, with gaps between the detectors rather than placing them contiguously, then the pixel size in the image is determined by the distance from this point to this point, that is W plus W, or 2W: each pixel will have a size of 2W, or its corresponding ground projection.
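The two cases above can be written as two small formulas. This is only a sketch: the 6 microsecond interval is from the lecture, but the ground scan speed, the detector pitch and the focal length are assumed numbers, chosen so that both cases come out to a 30 m GSD.

```python
def whiskbroom_gsd(ground_scan_speed_m_per_s: float, dt_s: float) -> float:
    """Across-track GSD of a whiskbroom/line scanner: the ground
    distance the scan spot moves during one sampling interval delta t."""
    return ground_scan_speed_m_per_s * dt_s

def pushbroom_gsd(detector_pitch_m: float, focal_length_m: float,
                  height_m: float) -> float:
    """Across-track GSD of a push broom sensor: the inter-detector
    spacing (pitch) projected onto the ground through the optics.
    With gaps between detectors, the pitch is detector width + gap."""
    return detector_pitch_m * height_m / focal_length_m

# Assumed: scan spot sweeping the ground at 5,000 km/s, sampled
# every 6 microseconds -> 30 m GSD.
print(whiskbroom_gsd(5e6, 6e-6))
# Assumed: 10 micron detector pitch, 0.235 m focal length, 705 km
# orbit -> 30 m GSD.
print(pushbroom_gsd(10e-6, 0.235, 705e3))
```

In the hypothetical gapped design, passing a pitch of 2W instead of W to `pushbroom_gsd` doubles the pixel size, exactly as argued above.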
In GIFOV terms this is 2 times the GIFOV, because next to each detector element there is a gap where the incoming energy is simply not detected, so the value recorded by one detector has to stand in for the detector and the gap together. Each detector element has its own GIFOV, but the pixel size corresponds to 2 times the GIFOV. We call this distance between two detector elements the inter-detector spacing. Most push broom sensors, however, arrange the detectors contiguously, without any gap between the elements, because we need to collect a continuous stream of data without gaps on the earth's surface; in that case the detector element size equals the GSD. But in the hypothetical case where there is a gap between two detectors, the actual detector size plus that gap together determine the pixel size of a push broom scanner. So the pixel size is not actually determined by the GIFOV. The GIFOV is different: it is the area covered by one single look of the detector on the ground, whereas the pixel size is determined by the inter-detector spacing in the case of a push broom scanner, or by our sampling interval in the case of a line scanner or whiskbroom scanner. A pixel and the GIFOV need not be related; the GIFOV can be one size and the pixel size another. Are there any practical examples? Yes, there are. One good example is the Landsat MSS, the multispectral scanner on the early Landsat satellites launched into space; another is a sensor called AVHRR.
Let us take the example of Landsat MSS, the Landsat multispectral scanner. The GIFOV of the MSS sensor was approximately 79 meters by 79 meters, a square GIFOV of 79 meters on each side. On the other hand, the sampling time was set such that the GSD was 60 meters: each GIFOV covered an area of 79 by 79 meters, but the distance between two successive samples was 60 meters. What essentially happened? Take one scan line: the GIFOV looks at one large area, a sample is collected, and 60 meters later another sample is collected, so the second GIFOV overlaps the first, and between this point and this point another sample is taken. The sensor was therefore oversampling: the energy was averaged over a much larger area, but the sampling occurred at a shorter distance, 60 by 60 meters, so each pixel was 60 by 60 meters in size while the GIFOV was on the order of 79 meters. As a result, if you take two neighboring pixels in the image, pixel 1 and pixel 2, there is always some overlapping region; the region I am highlighting here is common to both pixels because of the larger, overlapping GIFOV. Each pixel shares some common information with its neighboring pixels in both directions: a middle pixel overlaps partly with the pixel to its left and partly with the pixel to its right, so only one portion of each pixel is unique, while the two sides have a strong overlap with the adjacent pixels. This is called oversampling. Why was it done? Oversampling was done to increase the amount of signal coming into the sensor, which improves the signal-to-noise ratio in one sense, and also the image quality
will be better because of the larger incoming signal. They designed the system this way in order to improve the image quality, but the one drawback is that, due to this common information between adjacent pixels, each pixel is not independent of the others; there is always some shared data between two adjacent pixels, which means the pixels are highly correlated in space. Whatever information is contained in pixel A is highly related to the information contained in pixel B, and similarly the information in B and C is highly correlated, and so on. This is an example of oversampling: it was done to improve the quality of the incoming signal, but it resulted in a lot of correlation between the pixels. A typical remote sensing system launched nowadays, on the other hand, does not have this kind of oversampling: the GSD is set equal to the GIFOV. Whatever the GIFOV of the sensor, the sampling is set to exactly that distance, so there is neither overlapping information nor any gap; GIFOV and GSI are made equal in a typical remote sensing system. So Landsat MSS is an example of the GSD being smaller than the GIFOV. Is that always preferred? It was done to improve the signal quality, but adjacent pixels become highly correlated. Next we are going to look at this in a little more detail through an example. These pictures show images of the same area, processed using digital image processing techniques, where the GIFOV size was changed for each image: here the GIFOV is 10 meters, here 30 meters, here 80 meters, but the pixel size in each image is just 10 meters. So the pixel size, effectively the GSI, is the same, but the GIFOV progressively becomes coarser and coarser: 10 meters, 30 meters, 80 meters.
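The MSS-style correlation between neighbouring pixels can be sketched with a toy simulation: average a synthetic one-dimensional ground signal over a 79 m window while sampling every 60 m, and compare the neighbour correlation with the matched case. This is only an illustration under assumptions: the white-noise ground signal is made up, and only the 79 m GIFOV and 60 m GSD come from the lecture.

```python
import random

def sample_line(ground, gifov_m, gsd_m):
    """Average the ground signal over a gifov_m-wide window, taking
    one sample (pixel) every gsd_m metres (one list entry = 1 metre)."""
    pixels, start = [], 0
    while start + gifov_m <= len(ground):
        pixels.append(sum(ground[start:start + gifov_m]) / gifov_m)
        start += gsd_m
    return pixels

def neighbour_corr(p):
    """Pearson correlation between each pixel and its right neighbour."""
    x, y = p[:-1], p[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

random.seed(0)
ground = [random.gauss(0.0, 1.0) for _ in range(200_000)]  # synthetic scan line

oversampled = sample_line(ground, gifov_m=79, gsd_m=60)  # MSS-style overlap
matched = sample_line(ground, gifov_m=60, gsd_m=60)      # GIFOV == GSD

print(neighbour_corr(oversampled))  # noticeably positive (near 19/79)
print(neighbour_corr(matched))      # close to zero: pixels independent
```

For a white-noise ground, the expected neighbour correlation in the oversampled case is just the overlap fraction, 19/79, which is why adjacent MSS pixels are never fully independent.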
From these images we can see that even though the pixel size is the same, the amount of information we can extract actually decreases. In the first image we are able to resolve a lot of features: here two structures, there many small structures, each of which we can resolve. In the next image we are no longer able to resolve them; they appear as a single feature. In the last image nothing is clearly visible; everything appears blurred because of the larger GIFOV and the averaging effect of the 80 meter footprint. Hence the size of the GIFOV influences our ability to resolve features, while the GSI determines the pixel size. They can be different; they are not one and the same and need not be equal, although in most remote sensing systems the GIFOV and GSI are set equal, so that a sample is taken at exactly the distance corresponding to the GIFOV. But they can differ, and that is the point of this particular example. We have seen an example where the GSI was smaller than the GIFOV; can the reverse also happen? Yes, it can: the sampling, the GSI, can occur at a larger distance than the GIFOV. We will see an example of that. Let us take data arranged like this: two scan lines, scan line one and scan line two.
Now consider a system, say a whiskbroom, where the sampling occurs in both the across-track and the along-track directions. For a whiskbroom, sampling has to occur in both directions, because the scanner sweeps across track like this while the satellite also moves along track, so a continuous stream of energy is always coming in and the sampling has to occur in two dimensions. In push broom sensors, sampling occurs only in the along-track direction; across track, the pixel size is determined by the spacing between adjacent detectors. But in the along-track direction sampling must occur, because the satellite motion is continuous and produces a continuous stream of energy as the satellite moves. In whiskbroom or line scanners, sampling has to be done in both directions, because there the scanning also happens continuously without any break, and the satellite moves continuously without any break, so the system has to sample in the along-track direction as well. Since the sampling interval can differ between the across-track and along-track directions, it is entirely possible to produce a rectangular pixel; most likely we get a square pixel, because the sampling interval is made the same in both the along-track and across-track directions. Similarly, in a push broom sensor the along-track sampling interval is usually made equal to the distance between two adjacent detectors, but there too we can have rectangular pixels.
So, here let us assume we have a whiskbroom or some sort of line scanner. I am talking about a ground area with some features in it; I will mark cells numbered 1, 2, 3, 4, 5, 6, each 30 meters in size. Let the GIFOV of the system be 30 meters, so that it covers exactly one cell: 30 meters here, 30 meters there, and so on, a hypothetical example. On the other hand, the GSI is set at 60 meters in both the X and Y directions. What will happen? Because the GSI is 60 meters in both X and Y, the pixels, rather than being 30 meters, will now be 60 meters in both X and Y. Let the sampling start from here. Since the GIFOV is 30 meters in both the along-track and across-track directions, the scanner actually looks at each of these cells; take these 4 cells, 1, 2, 5 and 6, as one pixel area. But since the GSI is set at 60 meters, only one sample is collected for this whole area, and it is recorded as 1. Let us not get into arguments about pixel centers; we will keep it simple. So one sample is collected here; by the time of the next sample the scanner has moved on, and the next GIFOV falls on cell 3, so the value 3 is recorded here. Similarly, the value 5 is recorded here, because our GSI is twice the GIFOV in both X and Y while the GIFOV remains much smaller.
So, the detector effectively collects data from cells 1, 2, 3 and 4 separately, but we are not sampling it that way: in this direction our sampling picks up only 1, 3 and 5, and in the other direction entire rows are skipped because of the GSI setting. That is what actually happens: the scanner is scanning continuously, but the data is not being sampled properly; it is being undersampled. Rather than collecting four samples, we now collect only one. This is what is known as undersampling. If you look at a pixel, there is a huge loss of data: at the very least the information should be averaged, say a numerical average over the pixel area, and we should have that averaged result here, but even that is not happening. We record only certain data points, and we do not even get the true averaged value, because our GSI is much larger than the GIFOV: we average energy only over the small GIFOV area and miss the area in between because of the poor sampling interval. At point 1 data is collected and sampled; at point 2 data is collected but not sampled; at point 3 data is collected and sampled; and so on, because the GIFOV is 30 meters and the GSI is 60 meters. Every other ground point, or GIFOV element, is skipped by the sampling system, so instead of 1, 2, 3, 4, 5, 6 we get 1, 3, 5 and so on; it is an actual loss of data for us. Having the GSI larger than the GIFOV therefore leads to undersampling, and it also leads to a phenomenon known as spatial aliasing. Spatial aliasing means there may be unwanted artifacts, or the image may look completely different from what is on the actual ground; we may see features in the image that are entirely different from what is actually present on the ground.
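The numbered-cell example above can be sketched in a few lines: a narrow 30 m GIFOV sampled every 60 m keeps every other cell and discards the rest, while a matched GIFOV at least averages everything. The alternating-stripe pattern at the end is my own addition, a classic illustration of spatial aliasing.

```python
def undersample(ground, stride):
    """GSI larger than GIFOV: keep one narrow-footprint sample per
    `stride` cells and skip the cells in between (no averaging)."""
    return ground[::stride]

def block_average(ground, block):
    """GIFOV equal to GSI: average every `block` adjacent cells, so
    every cell on the ground contributes to some pixel."""
    return [sum(ground[i:i + block]) / block
            for i in range(0, len(ground) - block + 1, block)]

cells = [1, 2, 3, 4, 5, 6]            # the lecture's numbered 30 m cells
print(undersample(cells, 2))           # [1, 3, 5] -> cells 2, 4, 6 are lost
print(block_average(cells, 2))         # [1.5, 3.5, 5.5] -> all cells contribute

# Spatial aliasing: a pattern alternating every cell, sampled every
# second cell, comes out perfectly flat -- the image shows a feature
# completely different from the real ground.
stripes = [0, 1] * 6
print(undersample(stripes, 2))         # [0, 0, 0, 0, 0, 0]
```

Note the averaged pixel stores 1.5 instead of 1, exactly the numerical-average behaviour described for the matched-GIFOV case in the next example.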
So here we are undersampling, and we may encounter the phenomenon known as spatial aliasing. An example is given on this slide. We have taken an image with GIFOV equal to 10 meters and GSI equal to 10 meters, and using image processing techniques we convert it so that the GIFOV remains the same but the GSI becomes coarser and coarser: GIFOV 10 meters with GSI 20 meters, then GIFOV 10 meters with GSI 40 meters. You can see how the images look: there is some loss of information here, this image looks a bit different, and this image looks totally different. These are 20 meter and 40 meter pixel images. These other two are also 20 meter and 40 meter pixel images, but with the GIFOV made equal to the GSI. So in the first two cases the GSI was much larger than the GIFOV, while in the other two the GSI equals the GIFOV. Let us label the images A, B, C and D. The pixel sizes of A and C are equal, but image C appears a little better than A; A has a lot of points of discontinuity. Similarly, images B and D have the same pixel size, but when you try to extract information, image D provides better information than image B. This is because in images A and B the GSI is larger than the GIFOV, so we are undersampling and not representing the ground truly, whereas in images C and D, even though the pixel size is the same, the larger GIFOV means we are at least collecting averaged data. Recall the example where we took cells 1 and 2: if the GIFOV is made large enough to cover both 1 and 2, the sensor stores an average of the energy; in numerical terms it would store 1.5 instead of 1, because the sensor is doing a proper averaging.
So, by making the GIFOV and GSI equal we actually preserve the information, even though we perform large-scale averaging, whereas having a larger GSI with a smaller GIFOV may cause data loss and the phenomenon known as spatial aliasing. In this lecture we have covered the important concept of spatial resolution: what spatial resolution means, what pixel size means, and what determines each of them. Essentially, our ability to resolve is determined by the GIFOV, while the pixel size is determined by the GSI. There is no clear-cut definition of the term spatial resolution, because everything taken together, the pixel size, the GIFOV size and all the other sensor and scanning geometry properties, finally determines our ability to identify objects in the image. When we finally get an image, we have no knowledge of what happened inside the system; we just have to look at the image and extract information from it, and all these factors, the GSI, the GIFOV and the other scanning geometry properties, determine our ability to identify objects in image space. So there is no single definition for spatial resolution, but the GIFOV determines our ability to distinguish objects and the GSI determines the pixel size. In the next class we will go deeper into this topic and discuss how our ability to identify objects changes with different properties contained within the image itself. With this we end this lecture. Thank you very much.