Hello, everyone. Welcome to today's lecture in the course Remote Sensing: Principles and Applications. Over the last few lectures, we have been discussing the characteristics of remote sensing systems, how images are acquired, and how the characteristics of the system affect our data. We are going to continue with these concepts. In the last lecture, we discussed how objects that are smaller than the pixel size, or smaller than the GIFOV, are detected within one particular pixel. So, we have seen some basic concepts, and we also noted the factors that enable us to see objects smaller than the GIFOV: the contrast in the object space, the point spread function or the MTF, the modulation transfer function, of the sensor, the signal-to-noise ratio of the sensor, and also the spatial context in which the feature is located. I gave a basic explanation of what the point spread function is, and similarly a basic explanation of the MTF. So, today we are going to continue with further details about how objects smaller than a pixel can be detected. I already explained clearly that contrast plays a major role in identifying an object. This slide gives a very good example comparing the reflectance of the background and the target. In the first case the background reflectance is 0 and the target reflectance is 1; in the second case the background reflectance is 0.04 and the target reflectance is 0.08. Assume the background and the target each cover 50 percent of the GIFOV. Then, assuming an 8-bit quantization, the average reflectance of the first pixel is 0.5 and the DN produced for that pixel will be 128, whereas for the second pixel, where the difference in reflectance is pretty low, the DN will be only 15.
So, the contrast between the background and the feature is going to play a major role. The larger the contrast, or the larger the reflectance difference between the background and the feature of our interest, the more clearly we will be able to identify the object. That is exactly what happened in the example I showed you in the last class: there is a very large difference in reflectance between the sand and this particular road. Because of that large contrast between the object of interest and the background, we are able to infer or interpret this road properly even in a coarse-resolution image such as MODIS. One more thing we have to notice: in addition to the contrast between background and object, the sample-scene phase is also important. Sample-scene phase means how the object of our interest is oriented with respect to the different pixels. Say, for example, here are four adjacent GIFOV elements, and a single feature of our interest is oriented like this, equally distributed among the four GIFOV elements, whereas here the same object is distributed in a non-equal fashion over the four GIFOVs. So, essentially, what will happen? The DN values finally produced are going to be different, because the area covered by the background and by the object of interest differs from pixel to pixel. Here the object of interest covers a uniform area in all four pixels, whereas here the same object does not: it has a large areal coverage in this particular pixel, very low areal coverage in pixel number 4, again limited coverage in pixel number 2, and so on. So, when we get our final image, our ability to identify this particular feature will vary between these two images, purely because of how the object is oriented with respect to the pixel grid.
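The sample-scene phase effect can be sketched as follows; the reflectances and coverage fractions here are illustrative values of my own, not from the lecture:

```python
# Sketch of the sample-scene phase effect above: the same target,
# shifted against a 2x2 grid of GIFOV elements, produces different
# per-pixel DNs. Reflectances and fractions are illustrative.

RHO_BG, RHO_TARGET = 0.0, 0.8  # assumed background/target reflectance

def pixel_dn(target_fraction, bits=8):
    rho = (1 - target_fraction) * RHO_BG + target_fraction * RHO_TARGET
    return round(rho * (2 ** bits - 1))

# Case 1: target centred on the grid -> 25% of each of the 4 pixels
centred = [0.25, 0.25, 0.25, 0.25]
# Case 2: target shifted -> unequal coverage of the same 4 pixels
shifted = [0.60, 0.20, 0.15, 0.05]

print([pixel_dn(f) for f in centred])  # four identical DNs
print([pixel_dn(f) for f in shifted])  # four different DNs
```

In the centred case all four pixels carry the same weak signature; in the shifted case one pixel carries most of the target's signal, which is exactly why orientation changes our ability to detect the object.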
So, that is basically how objects of interest are aligned or oriented within the GIFOV. Say, for example, this is one pixel, one GIFOV, and this is the object of our interest; the object will be seen much more clearly within the GIFOV provided there is a very high contrast. But let us imagine the same object, even with the same 50-50 percent coverage as in the earlier example: if the bright object is oriented like this, across two pixels instead of sitting in one pixel, then the DN values of these two pixels will differ and our ability to identify the object will change. So, essentially, under some circumstances we will be able to identify objects that are smaller than the GIFOV; it is possible. Some of the factors we have already seen; in addition to all these, the alignment or orientation of the object of our interest in the image space, whether it occupies one full GIFOV or is distributed among many different GIFOVs in small areal extents, will control our ability to identify that particular object. Till now, we have discussed the concepts of spatial resolution in detail. Now we will go back a little and understand some more characteristics of whisk broom scanning and push broom scanning. I said that in whisk broom scanning, the satellite moves in the along-track direction while the scanner scans in the across-track direction. I was repeatedly using terms such as dwell time, or time of integration. I also told you that the dwell time in whisk broom scanning will be very low, and that it will be a little higher in push broom scanning. So, we are now going to see what the concept of dwell time is and how it varies with each type of scanning.
So, let us consider n pixels per scan line of the image. The satellite is moving in the along-track direction, and the scanner is scanning in the across-track direction. Let us assume there are n pixels in one scan line. The satellite is moving with a ground velocity of v meters per second, and assume the pixel size to be r by r, so each pixel in the image covers r by r meters on the ground. Since the satellite is constantly moving in the along-track direction, the scan line must be completed before the satellite moves ahead by a distance of r. Take the case of Landsat: in Landsat 7 the pixel size is 30 meters, so the entire scan line in the across-track direction must be scanned completely before the satellite moves a distance of 30 meters, from this point to this point. Only then will there be no data gaps; if the satellite moves faster than the scanning rate, some pixels will not be scanned at all. So, in order to collect all the data without any missing points, the scanning must be completed before the satellite moves ahead by one pixel size, that is, by r meters; in the Landsat example, the scan line must be completed before the satellite moves 30 meters. So, what is the time available for one full scan line? Since velocity equals distance divided by time, the time available is the distance to be covered, r, divided by the velocity, v. One scan line must therefore be completed within r/v seconds.
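The r/v relation above can be written out as a small sketch; note that this is the simple case where the scanner can use the whole r/v interval (the lecture's one-directional case, which halves this, comes next):

```python
# Sketch of the scan-line timing above: the time available for one
# scan line is r / v, and the per-pixel dwell time for a single-detector
# line scanner is (r / v) / n, assuming the full interval is usable.

def scan_line_time(r_m, v_m_per_s):
    """Time (s) before the platform advances one pixel of size r."""
    return r_m / v_m_per_s

def dwell_time_per_pixel(r_m, v_m_per_s, n_pixels):
    return scan_line_time(r_m, v_m_per_s) / n_pixels

# Landsat-like numbers from the lecture: 30 m pixels, v = 7 km/s, n = 6000
t_line = scan_line_time(30, 7000)                   # ~4.29 ms per line
t_pixel = dwell_time_per_pixel(30, 7000, 6000)      # well under a microsecond
print(t_line, t_pixel)
```

Even in this best case, a single detector gets less than a microsecond per pixel, which is why the lecture keeps stressing how short whisk broom dwell times are.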
For satellites in near-polar orbits such as Landsat, the ground velocity will be around 6.5 to 7 kilometers per second; that is, every second the satellite moves close to 7 kilometers, which is pretty high. Hence a line scanner, which has only one detector element, or a whisk broom scanner, which has a small number of detectors, will have a very small dwell time, on the order of microseconds. Having more detectors in the along-track direction will increase the dwell time to some extent. We will see this with an example. In this example, assume a hypothetical sensor with a 30 meter pixel size and a single detector, so essentially a line scanner, with 6000 samples per line; in the theoretical explanation I gave, n is 6000. In a near-polar orbit, assume v = 7 kilometers per second, and calculate the dwell time for each pixel. So, for a line scanner, we are going to calculate the dwell time. When I explained the theory, I said the scanning should be complete before the satellite moves one pixel distance ahead. Now, in some scanners, the scanner can scan in both directions: it starts here, scans one full line, then scans the next line in the opposite direction, and so on. It scans one line, the satellite moves, it scans the next line, the satellite moves; this is possible in some scanners.
In some older scanners, the sensor can scan in only one direction: it scans one full line, then has to come back to its starting position before the satellite moves to the next line. Full scan, come back, satellite moves to the next pixel, full scan, come back, and so on. I told you the time available for scanning is r/v seconds, assuming the scanner can scan in both directions. But if the scanner can scan in only one direction, as in the older Landsat satellites, it has to return to its starting position before it can scan the second line. In that case, the time available per scan line will not be r/v but r/2v; the time is reduced by half. The phase during which the line is actually being scanned is called the active scan phase, or active scan time. After one line is scanned, the scanner has to go back without doing any scanning; that is called the dormant time, because it is not scanning. It scans, goes back, scans the next line, goes back, and so on; for such sensors the time available for scanning is r/2v, half of the total. So, here we will see what the time available is if the scan happens in only one direction. During the active scan cycle, assuming scanning happens in only one direction, the time available to complete one scan is based on 30/2; I am bringing in the factor of 2, and the pixel size is 30 meters. Essentially, the satellite is moving like this; let us assume the scan starts here and moves to there, along a line of 6000 pixels, n = 6000, while the satellite advances through its 30 meter step.
So, before the satellite moves to the second line, the scanner should start here, scan to the end of the line, and then return to this point, by which time the satellite will have moved a distance of 30 meters. The time available for the active scan therefore corresponds to 30/2: the scan should start from this end and reach the other end before the satellite moves 30/2, that is, 15 meters. Only then, during the remaining 15 meters, can the scanner go back and be ready in the starting position again. So, the active scan time is 30/2 divided by the velocity of 7000 meters per second, with everything converted into meters, which gives 2.143 milliseconds. That is, to start the scan at point A and end it at point B, the active scan takes 2.143 milliseconds, whereas the total scan time, to start from A, scan to B, and return ready at point C at the end of the 30 meters, is twice that, 4.286 milliseconds. That is what I am telling you: before the satellite moves a distance of 30 meters, the scanner should complete the scan, go back to its initial state, and be ready for the next scan line. The total time available for this entire cycle, one scan plus the return to the starting point, is 4.286 milliseconds. But the time available for the active scan, when the sensor actually scans the ground, is half of the total scan time, 2.143 milliseconds. Within that 2.143 milliseconds it has to collect 6000 samples. Hence 2.143 milliseconds divided by 6000 samples gives about 0.357 microseconds per pixel. So, to collect data for one pixel, the sensor has 0.357 microseconds, an extremely small value. This assumes a single detector and scanning in only one direction.
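The one-directional calculation above can be sketched directly; the function name is mine, and the factor of 2 encodes the lecture's assumption that only half the cycle is active scan:

```python
# Sketch of the unidirectional (active-scan) dwell-time calculation above.
# Half of the r/v interval is active scan; the other half is the dormant
# return of the scan mirror to its starting position.

def unidirectional_dwell(r_m, v_m_per_s, n_samples):
    total_cycle = r_m / v_m_per_s     # time to advance one pixel (s)
    active_scan = total_cycle / 2     # only half is spent scanning
    return active_scan / n_samples    # dwell time per ground sample (s)

t = unidirectional_dwell(30, 7000, 6000)
print(t * 1e6, "microseconds")        # ~0.357 us per pixel
```

Plugging in the lecture's numbers (30 m pixels, 7 km/s, 6000 samples) recovers the 0.357 microsecond figure.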
So, what I described is highly restrictive. Now we will see what happens if we have 10 detectors in the along-track direction. For the line scanner we had one detector; now, for a whisk broom, what happens if we have 10 detectors in the along-track direction? Again assuming the scanning takes place in only one direction, so the active scan time is half the total time, the time available for the active scan is 2.143 multiplied by 10: for one detector we got 2.143 milliseconds, and multiplying by 10 gives 21.43 milliseconds, because as the number of detectors increases, the time available for data collection increases. You can think of the concept like this. Say we have 10 detectors oriented in the along-track direction, and there is a ground point A. First, detector number 1 sees point A. Then the satellite moves one pixel; detector 2 sees the same point A, and the signal is sent to the corresponding pixel in the image. Similarly, the third detector again sees point A, and the signal is again saved for the same pixel on the ground. The concept is very simple: the same point on the ground is seen multiple times by different detectors, and all the signals are combined to produce the signal for one pixel. So, the same ground point A is seen by a series of detectors as the satellite moves, and all the signals collected for that ground point are processed and saved together for one ground point, which means that instead of collecting signal with only one detector, we are now collecting signal with 10 detectors over the same point, increasing the available time tenfold. That is the concept.
If we have 6 detectors, we have 6x more time; if we have 16 detectors, like Landsat had, then we will have 16x more time available than with one detector. That is the idea: the same ground point can be imaged by different detectors as the satellite moves in the along-track direction; that is how whisk broom scanning works conceptually. So, if we consider 10 detectors in the along-track direction, the time available for one active scan line is 21.43 milliseconds, 10 times more than in our line scanner case, and hence the time available for one sample is 3.57 microseconds, 10 times higher than 0.357 microseconds. So, having more detectors in the along-track direction, that is, whisk broom scanning, gives us more dwell time than having only one detector, the line scanner. Okay, one more issue with the whisk broom scanner: I told you scanning happens simultaneously while the satellite moves forward. In the last example problem we assumed that the scanning finishes, then the satellite moves to the next point, then the scanning finishes again, and so on. In reality, scanning happens simultaneously as the satellite moves. As the scanner is scanning, the satellite is moving forward, so the scan does not trace a line perfectly perpendicular to the track; it happens in a skewed direction, because the satellite motion and the scanning happen together. Look at this particular figure in the diagram: this is along-track and this is across-track. The scan line is not perpendicular like this; it is skewed like this, because the satellite is moving while the scanner does its scan.
So, essentially, those who design the sensors will adjust for all these things; they will ensure there is no data gap on the ground before the scanner completes one scan line. There must be some correction mechanism, because the satellite is continuously moving like this, but finally, when we get an image, we get a proper image with all the pixels in place. So, some sort of image correction has to be applied to convert this non-perpendicular scanning geometry into a proper image. We call it scan line correction. Most whisk broom scanners will have a scan line corrector, especially satellites like Landsat 7 which can scan in both directions. I told you some satellites can scan only in one direction; some have the capacity to scan in both directions, collecting data from the ground either way. When that is the case, the scan line correction becomes even more important: the skewed nature of the data acquisition must be corrected so that we can create a proper image without any gaps. This sort of skewed image must be corrected to get this sort of proper image. With the scan line corrector on, the image will be properly aligned like this; without the scan line corrector, the scan lines will look something like this, a zigzag pattern with a lot of overlapping pixels. To correct this, there will always be a scan line corrector, some mechanism that compensates for this geometric effect. Landsat 7 was launched in 1999, and in 2003 its scan line corrector failed. Hence NASA and USGS were not able to correct the image for this scan line geometry, and the image looked like the one shown in this particular figure.
This image is from before the scan line corrector failure in 2003: you can see all the pixels are perfectly aligned, without any data gaps. Here, after the scan line corrector failed, you can see that near the center there is not much of a gap, because that is an overlapping area where there is data overlap, but toward the edges there are gaps in the data, because the scan line corrector element was no longer available. So, a scan line corrector must be available in order to correct for the scan line geometry effect. We have now seen the whisk broom scanner; next we will move on to push broom scanners. In a whisk broom scanner, the pixel size, or the GSI, the ground-projected sample interval, is defined by our sampling time: I told you that the interval at which we collect samples from the continuous stream of incoming signal defines the pixel size in both the along-track and across-track directions. The scanner is moving in the across-track direction and the satellite in the along-track direction; in both directions there is continuous motion and a continuous stream of radiance coming in, so there is always some sampling occurring in both the x and y directions. In the case of push broom sensors, there is no need to scan in the across-track direction; there are n detectors sitting in the across-track direction. So, the pixel size is defined by the spacing between two adjacent detectors; I already told you that the pixel size in push broom sensors is defined by the distance between two adjacent detectors.
In the along-track direction there will still be sampling: as the satellite moves, the engineers will make sure that one line of data is collected each time the satellite moves one pixel distance ahead. They will make sure the pixel sizes in the along-track and across-track directions are the same, so essentially the GIFOV will mostly equal the GSI for push broom scanners. That is one thing. Now, in push broom scanners, let us take our earlier example, where we have to collect 6000 samples per line. Going back to the previous example: if we have to collect 6000 samples with a push broom scanner, there will be 6000 detectors in the across-track direction. What is the time available for us to complete one line? It is 4.286 milliseconds; that is, the satellite takes about 4.29 milliseconds to move from the starting point to 30 meters ahead, right? So, each detector can collect data for the entire 4.29 milliseconds. In the case of the whisk broom scanner, since the scanner itself is moving, we had trouble: the time available was very small. But for a push broom sensor, there are 6000 detectors in the across-track direction, so all the ground points in a line are imaged simultaneously. For the whole 4.29 milliseconds there is no scanning involved; the detectors can see the ground continuously, which ensures that data for all 6000 pixels are collected for the whole 4.29 milliseconds.
Hence the push broom scanner has a much larger dwell time than whisk broom scanners: in the whisk broom case with 10 detectors, the time we got was close to 3.57 microseconds per pixel, whereas in the push broom case, since no scanning element is involved, each detector can see its point on the ground continuously for the whole 4.29 milliseconds. Just see the time difference: there we had a few microseconds, here we have milliseconds for data collection. So, the time available for data collection in a push broom sensor is much higher than in a whisk broom scanner or line scanner. That is what is mentioned in this particular slide: increasing the time of integration, which the push broom scanner allows, leads to the collection of a larger amount of radiation, that is, radiance, which increases the signal-to-noise ratio, and this can be translated into improved spatial, spectral, or radiometric resolution of the data. That is, the higher the amount of signal collected, the higher the SNR, and we can use this improved SNR to increase either the spatial, spectral, or radiometric resolution. We have just covered spatial resolution in detail; we will see spectral and radiometric resolutions in the coming lectures. Before we close this lecture, I want to ask you two questions. After each question, please pause the video for a few seconds, think it over, and then continue the video for the answer. The first question: is it possible to increase the dwell time to a great extent, say to the order of seconds, by using thousands of detectors in whisk broom scanning? I told you that when we had 10 detectors, the time increased tenfold, from 0.357 microseconds to 3.57 microseconds. Can we put 1000 detectors in the along-track direction and do whisk broom scanning? Is it possible?
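The whisk broom versus push broom comparison above can be put side by side in a short sketch, under the lecture's assumptions (unidirectional whisk broom scan, one push broom detector per across-track sample); the function names are mine:

```python
# Sketch comparing the dwell times discussed above: a unidirectional
# whisk broom with k along-track detectors versus a push broom with one
# detector per across-track sample.

def whisk_dwell(r_m, v, n, k):
    return k * (r_m / v) / 2 / n   # active scan is half of r/v, shared by n samples

def push_dwell(r_m, v):
    return r_m / v                 # every detector stares for the full line time

r, v, n = 30, 7000, 6000           # lecture's Landsat-like numbers
print(f"whisk broom (10 det): {whisk_dwell(r, v, n, 10) * 1e6:.2f} us")
print(f"push broom:           {push_dwell(r, v) * 1e3:.2f} ms")
```

The push broom dwell time comes out roughly three orders of magnitude larger, which is the SNR advantage the slide refers to.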
Please think it over and let me know. In the along-track direction, and I am not talking about push broom now, I am talking about whisk broom: is it possible to have, say, 1000 detectors aligned along-track and do scanning, which might increase the dwell time a lot, to the order of seconds? Just think it over. The answer is no; we cannot keep increasing the number of detectors in the along-track direction for whisk broom scanners, because it is not only the scanner that is moving; the Earth is also moving underneath it. Recall how I said a whisk broom conceptually collects data: along the track, if you have 10 detectors, all 10 detectors one by one will see the same ground point A, and the signals are processed as one single point. The same ground point is imaged by 10 detectors. Now let us assume we have 1000 detectors. As the detectors move along the track, due to the Earth's motion underneath, point A will have moved somewhere else, and a new ground point, say B, will have come into its place. Since the Earth is continuously moving underneath, we cannot keep increasing the number of detectors in the along-track direction for whisk broom, because even before one line is completed, or the same point is imaged the required number of times, the Earth underneath will have moved. Actually, this is a problem for all sensors: even for push broom or 2D array sensors, as the dwell time increases, the Earth's motion becomes a major problem. The Earth is constantly moving at a much faster rate, so the ground points keep moving, which degrades the image quality and may also produce geometric errors: if the ground point underneath moves before the sensing completes, the sensor will wrongly image a different point, thinking it is imaging point A while it is actually imaging point B, which is not correct.
So, it is not possible to keep adding detectors in the along-track direction. Also remember that as the dwell time increases, even in the case of push broom sensors, the Earth's motion becomes important, and this causes image degradation: the image will blur a lot. We know from normal photography that when the object moves, we get a blurred image; the image will not be sharp, right? The same thing happens here: we are looking at the same pixel for a very long time, but the ground point underneath the push broom detector is constantly moving. As we hold the sensor on it for a longer dwell time, the point keeps moving; after some time it may move away, and we will wrongly be looking at a different place. So, we always have to account for image motion, or the Earth's motion, when we plan to increase the dwell time. The dwell time cannot be very long; it should be short enough that the data over one ground point is collected before that ground point moves away because of the Earth's rotation. We cannot keep a sensor permanently fixed over a point while the Earth constantly moves beneath it; the image should be collected before the ground point moves away due to the Earth's rotation. Keep this in mind. The second question I want to ask is: what do you think is a drawback of push broom or array-type sensors for remote sensing? I told you they have a lot of benefits: increased dwell time, which gives improved signal-to-noise ratio, limited distortions, and so on. But there is one major issue, not exactly a drawback, but one thing we always need to take care of. Think about it and come back after a few seconds.
The answer: I told you, when we discussed the image acquisition or image formation process, that the radiance values are stored as DN values in the image, and that in this process there is a calibration relation, something like (L_max - L_min)/(DN_max - DN_min), which essentially relates the observed radiance to the DN value produced. Having multiple detectors means all of them must be calibrated to exactly the same response; there should not be any difference between them. Whatever the number of detectors, when they see the ground and receive the same amount of radiance, they must produce the same DN; only then will the image quality be proper. If each detector has different calibration factors, the DNs in the image will vary because of those differences. So, as the number of detectors increases, whether in push broom or whisk broom, the relative calibration between them must be done properly: all detectors in a given band, whether 10 or 16 or 100 or 1000, it does not matter, must produce the same DN for the same radiance received. That kind of calibration is not easy; performing a relative calibration of all the detectors is quite a difficult task. That is why, in the old days, sensors were mostly not of the push broom type; they used a scanning mechanism with a limited number of detectors. But now, with advanced technology, people are going for push broom and even 2D arrays, because the calibration can be done rigorously. But remember, more detectors means more calibration: all detectors must be calibrated to the same point, such that they produce the same output for the same input.
So, as a summary: in today's class we have discussed in detail the characteristics of whisk broom systems and push broom systems, the calculation of dwell time, and concepts such as how to increase the dwell time. With this we end this lecture. Thank you very much.