Hello everyone, welcome to the next lecture on the topic remote sensing image acquisition and characteristics of the data. In the last class, we discussed concepts such as temporal resolution and radiometric resolution, and we also started discussing the characteristics of data collected by a whisk-broom scanner. In this lecture we are going to continue looking at the characteristics of whisk-broom and push-broom scanners.

In the last lecture, I told you that data collected by a whisk-broom scanner undergoes an increase in GIFOV and an increase in GSI, the ground resolution cell size, as the scanner looks farther away from the nadir point. I will show you one example of how this happens in a sensor called AVHRR. AVHRR works on the whisk-broom scanning principle and has a spatial resolution of 1.1 kilometres at nadir. This is an example of the data it collects. If this is the along-track direction, this will be the across-track direction, and the sensor collects 2048 samples, that is, 2048 pixels, in the across-track direction. This axis is the cross-track pixel number, and since it is counted from nadir there are 1024 pixels on the left side and 1024 on the right, east and west if you like. What is shown here is just one half of the scan line: it starts at pixel number 1024 and goes up to pixel number 2048.

At nadir everything is fine: the sensor has a circular GIFOV of about 1.1 kilometres, and the GSI is 1.1 kilometres. Now see what happens as the scanner looks farther across track, away from the nadir point. The axis at the bottom is the number of samples collected. Here the sample number is 1024 and here it is 1280, so the difference is 256 samples, and those 256 samples were collected over a ground distance of 200 kilometres. The next 256 samples, from here to here, were collected over a ground distance of 230 kilometres, and the 256 samples after that over a ground distance of 320 kilometres, and so on. So what are we seeing? The number of samples in each block is the same; I am using increments of 256 samples for simplicity. But as the scanner moves away from nadir in the across-track direction, the same number of samples covers a larger and larger ground distance: 200 kilometres for the first 256 samples near nadir, 230 kilometres for the next 256 samples, then 320 kilometres, and so on. In other words, the ground distance per sample increases. That is, the GSI, the ground-projected sampling interval, also called the GSD or ground sample distance, increases as the scanner moves away from nadir.
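To make this concrete, here is a minimal sketch, not taken from the lecture slides, that assumes a flat Earth, an AVHRR-like orbital height of about 833 kilometres and a constant angular step of the scan mirror (derived here from 2048 samples spread over roughly ±55 degrees, which are assumed, illustrative values). It simply computes the ground distance covered by successive blocks of 256 samples and shows the same pattern of growing coverage away from nadir; a curved Earth would make the outer blocks grow even faster.

```python
import math

# Illustrative, assumed AVHRR-like parameters (not official calibration values):
H_KM = 833.0                               # orbital height above a flat Earth, km
STEP_RAD = math.radians(2 * 55.4) / 2048   # angular step between samples, radians
SAMPLES_PER_BLOCK = 256                    # block size used in the lecture example

def ground_x(sample_from_nadir: int) -> float:
    """Across-track ground position (km) of a sample, measured from nadir.

    With a constant angular step, sample n away from nadir is viewed at
    scan angle theta = n * STEP_RAD; on a flat Earth that maps to an
    across-track ground position of H * tan(theta).
    """
    theta = sample_from_nadir * STEP_RAD
    return H_KM * math.tan(theta)

for block in range(3):
    start, end = block * SAMPLES_PER_BLOCK, (block + 1) * SAMPLES_PER_BLOCK
    span = ground_x(end) - ground_x(start)
    print(f"samples {start:4d}-{end:4d} from nadir: ~{span:3.0f} km of ground, "
          f"~{span / SAMPLES_PER_BLOCK:.2f} km per sample")
```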
The GIFOV, that is, the projection of the detector element on the ground, also elongates and increases in size, and that is depicted clearly here. At nadir it was a perfect circle with no distortion in shape; as the scanner moves away from nadir it becomes an ellipse rather than a circle, and adjacent footprints start to overlap, because the GIFOV grows in both the across-track and along-track directions. In both directions the GIFOV increases and distorts. This is purely because of the geometry with which the data is collected: we know that if a circle is projected at an angle it becomes an ellipse, and the same thing is happening here, just as it was happening with the GSI.

What is the implication of this? I already told you that whatever energy comes from a single GIFOV is averaged and stored as one single value within the system. As the scanner moves away from nadir the GIFOV becomes quite large; in the case of AVHRR it may even exceed 3 kilometres at the very end of the scan line, even though it is 1.1 kilometres at nadir. So instead of collecting data over an area of roughly 1 square kilometre, the sensor is now collecting it over an area of roughly 3 kilometres by 3 kilometres if you assume a square projection, certainly much more than 1 square kilometre. Whatever features are present there, all their radiances are averaged together and stored in one single pixel that we still treat as being 1.1 kilometres in size. Essentially we are averaging energy over a very large area and storing it in a pixel labelled 1.1 kilometres. In our minds we assume that the radiance recorded in that 1.1 kilometre pixel came from a ground area of 1.1 kilometres by 1.1 kilometres, but that is not the case. If that pixel was at the end of the scan line, the radiance recorded in it actually came from a ground area of around 3 to 5 square kilometres, a much larger ground area.

This is what happens in whisk-broom scanners: we collect energy over a larger footprint on the ground, average it into a single radiance value, and store it in a pixel of nominally 1 square kilometre in the case of AVHRR. In sensors with very large scan angles, such as MODIS or AVHRR, if a pixel falls near the end of the scan line, the ground area it represents is much larger than the nominal pixel size. If the pixel size is, say, 1 kilometre, the energy collected for that pixel did not come from a 1 kilometre by 1 kilometre area; it may have come from 2 or 3 kilometres, certainly larger, as the view moves away from nadir. Only at nadir is the GIFOV of 1 kilometre by 1 kilometre preserved. We should always keep this in mind when we work with data from scanners with a very large scan angle: we should not simply assume that a 1 kilometre pixel was collected from a 1 kilometre ground area; we should always look at the scan angle at which the data was collected. In this particular figure it is explained a little more clearly how the GIFOV varies. This shows the variation of the GIFOV; the GIFOV element is a circle for the system, and beta is the IFOV, the angle.
That angle is fixed for a given system; it is not going to change. But when the scanner looks at an angle theta away from nadir, the GIFOV increases by a factor of 1 by cos square theta in the across-track direction and by a factor of 1 by cos theta in the along-track direction. So the distortion of the GIFOV is not uniform; it differs between along track and across track, but essentially it grows as 1 by cos square theta across track and 1 by cos theta along track. That is the variation of the GIFOV. Similarly, the GSI also varies: the GSI at an angle theta away from nadir is equal to the GSI at nadir divided by cos square theta. So the GSI, the ground sample distance, also keeps increasing in addition to the GIFOV. You should always remember that both the GIFOV and the GSI, or GSD, increase. Coming back to the AVHRR example, the ellipses denote the GIFOV and the tick marks represent the points at which the samples were collected, that is, essentially the GSI. So we should always keep in mind that data collected by a whisk-broom scanner suffers from an enlarged GIFOV and an enlarged GSI as the scan angle moves farther and farther from nadir.

Actually, this is the main reason why not all sensors have a very wide scan angle. In principle we could increase the scan angle of every sensor and collect data over very large areas of the globe. But as the scan angle increases, the distortions also increase, and hence not all systems have such a large scan angle. Only systems that need to cover the globe very quickly, such as MODIS or AVHRR, are provided with a large scan angle, because they are launched for applications that require covering the globe regularly at a high temporal frequency. They will have their own distortions, which we should always keep in mind when we work with such data sets.

Now we will look at one example of a geometric distortion that occurs in whisk-broom scanners, known as tangential scale distortion. What exactly is tangential scale distortion? A very good example is given in this slide. Here you see an image of an area taken as it is, so the scale is uniform in both the along-track and across-track directions; this is the along-track direction and this is the across-track direction. I have told you that as the scanner moves away from nadir the GIFOV keeps increasing; the footprint becomes an ellipse, or you can think of it as a stretched rectangle. So what will happen? Let us say there is a building that exactly covers the GIFOV of one pixel, say a 500 metre by 500 metre feature. It is at nadir, and let us say the GIFOV of the system is also 500 metres. It will be covered as one single feature and represented as it is. Now let us go to the end of the scan line, and let us say there are two adjacent buildings, each of size 500 metres by 500 metres. If these two buildings were at nadir they would have been imaged as two separate features in two separate pixels. But now they are at the end of the scan line, in the across-track direction.
Now let us assume the GIFOV has increased to 1.1 kilometres at the end of the scan line. The GIFOV will now be something like this, covering both buildings together. Initially, at nadir, the GIFOV was 500 metres and the building was also 500 metres, so it was imaged perfectly. At the end of the scan line the GIFOV has grown and now covers two buildings together. Because of this, both buildings will appear a little squeezed, since they are stored as a single pixel. Let us say this is the pixel at nadir and this is the pixel at the end of the scan line. At nadir you see the image of only one building in that pixel. At the end of the scan line, because both buildings were covered by the same GIFOV element, that one pixel contains both buildings. When we look at those pixels, the two buildings appear to have a smaller size than they actually do. So as the scan angle increases, objects appear squeezed together in the across-track direction. That is depicted in this example as well: this is actually a perfect circle, but because of the scan-angle distortion it is squeezed and appears as an ellipse; similarly, this diamond is squeezed in the across-track direction. This kind of squeezing, where the two buildings in our example are merged into part of a single pixel, makes objects appear shorter, specifically in the across-track direction; they are squeezed across track rather than along track, and the distortion is much larger across track than along track. A circle present at the end of the scan line may appear as an ellipse rather than a circle, squeezed in the across-track direction. This is what is known as tangential scale distortion: objects appear progressively more squeezed as the scan angle moves away from nadir.
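As a small supplement, here is a hedged Python sketch that simply implements the scaling relations stated above (GIFOV growing as 1/cos²θ across track and 1/cosθ along track, GSI growing as 1/cos²θ) and applies them to the two-building example. The 1.1 km nadir value and the 55 degree edge-of-scan angle are assumptions chosen to resemble AVHRR, not values read off the slides.

```python
import math

def gifov_cross(gifov_nadir_km: float, theta_deg: float) -> float:
    """Across-track GIFOV at scan angle theta: GIFOV_nadir / cos^2(theta)."""
    return gifov_nadir_km / math.cos(math.radians(theta_deg)) ** 2

def gifov_along(gifov_nadir_km: float, theta_deg: float) -> float:
    """Along-track GIFOV at scan angle theta: GIFOV_nadir / cos(theta)."""
    return gifov_nadir_km / math.cos(math.radians(theta_deg))

def gsi(gsi_nadir_km: float, theta_deg: float) -> float:
    """Ground sampling interval at scan angle theta: GSI_nadir / cos^2(theta)."""
    return gsi_nadir_km / math.cos(math.radians(theta_deg)) ** 2

# Assumed AVHRR-like numbers: 1.1 km cell at nadir, ~55 degrees at the scan edge.
NADIR_KM, EDGE_DEG = 1.1, 55.0
cross_edge = gifov_cross(NADIR_KM, EDGE_DEG)
along_edge = gifov_along(NADIR_KM, EDGE_DEG)
print(f"GIFOV at scan edge: {cross_edge:.1f} km across track x {along_edge:.1f} km along track")

# Two adjacent 500 m buildings span about 1.0 km on the ground.
buildings_km = 2 * 0.5
print("Both buildings inside one ground cell at the scan edge?",
      buildings_km <= cross_edge)   # True: they get averaged into one pixel
```

With these assumed numbers the ground cell at the scan edge is over 3 kilometres across track, so the two 500 metre buildings that occupied separate pixels at nadir now fall inside one cell, which is exactly the squeezing described above.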
So far we have discussed the characteristics of the whisk-broom scanner and seen a few examples of how the GSI and the GIFOV change. These are some of the important limitations of whisk-broom scanning, and that is why most systems are now moving to push-broom scanning. Push-broom scanning avoids these geometric distortions to a large extent. If the scanning introduces more distortions, then ultimately the geometric accuracy is affected. Geometric accuracy means that a ground point with coordinates x, y should give the same coordinates x, y when we calculate them from the image. Let us say there is a building standing at the ground coordinate 100, 100; from the image also the building should come out at the ground coordinate 100, 100. But imagine that in whisk-broom scanning everything appears squeezed together into one pixel: we may then calculate a wrong coordinate for that building if we do not correct for the geometric distortions. So whisk-broom scanners come with their own geometric distortions, which affect the geometric accuracy of the image, and there is always a need to correct these images. That is a major limitation of the whisk-broom scanner.

In a push-broom scanner all these distortions are reduced to a large extent. If you assume a flat earth, the GSI does not change, because of how a push-broom scanner works: it has many detector elements, each detector element simultaneously covers a different part of the area on the ground, and whatever energy comes in is recorded. The sampling interval is fixed because, as I told you in the earlier classes, the GSI for a push-broom scanner is determined by the spacing between two adjacent detector elements, which is fixed when the system is launched and is not going to change. So the GSI does not change for a push-broom scanner, but the IFOV does change, because the pixel at the centre has the full conical angular coverage while a detector at the end of the array may not have that coverage. The IFOV varies in the across-track direction due to the viewing angle, with the relation given in this slide. But in general the distortion in a push-broom scanner is not as serious or as troublesome as the distortions occurring in a whisk-broom scanner. So, naturally, because of their fixed detector layout and because there is no scanning mechanism involved, push-broom images have higher geometric fidelity, or higher reliability, than images acquired from whisk-broom scanners. This is a major advantage of a push-broom scanner over a whisk-broom scanner. We already saw one advantage, that a push-broom scanner provides a much higher dwell time; another advantage is that the geometric fidelity of push-broom images is much higher than that of data collected by a whisk-broom scanner.

Now, some distortions, as we have seen, come from the scanning geometry or the data-collection geometry, but as the satellite moves there can also be changes in the attitude of the platform, which cause geometric distortions. So what is the attitude of a satellite? Let us take the example of an aircraft. Say an aircraft is moving like this; this is the flight direction, from my right-hand side to my left-hand side. Let us take the direction in which the aircraft is moving as the x-axis, the y-axis pointing towards the screen, and the z-axis vertical; we take three axes. Now the aircraft is moving along x. If I hold the x-axis, the axis along the flight direction, and rotate the aircraft about it, the aircraft tilts like this; seen from the front, with the aircraft coming towards you, it rotates about its long axis. This is known as roll. So the platform, the aircraft in our example, is undergoing some roll. Then pitch: I told you where the y-axis is; this is the flight direction and the y-axis is perpendicular to it. If I hold the y-axis and rotate about it, the nose of the aircraft moves up or down. This is known as pitch. So rotation about the x-axis is roll and rotation about the y-axis is pitch. Similarly, if I hold the z-axis and rotate about it, that is known as yaw. So pitch is the nose of the aircraft going up and down.
Roll is the aircraft rotating about its own long axis, and yaw is rotation about the z-axis. These three rotations together are what we call the attitude of the platform, or distortions in attitude. In an aeroplane we can feel this if we have travelled long distances; it is very easy to feel. The aircraft undergoes a roll, especially when the captain banks it, and it suddenly tilts to one side. It has a pitch during landing, when they raise the nose before touching down; we can feel that. These three together, pitch, roll and yaw, we call the attitude of the platform.

Similar to an aircraft, a satellite can also undergo minor variations of this kind. A satellite is planned to move in one particular direction without any such motion, but sometimes unwanted things happen: solar storms, atmospheric drag, variations in the Earth's gravity field, all of these may change the position of the satellite and cause disturbances in its attitude, in pitch, roll and yaw, which in turn cause distortions in the image. So, when the data finally comes down from the satellite, the satellite sends, along with the image data, information about its position and attitude: at which position it was when the image was collected and what the pitch, roll and yaw of the sensor were. The people on the ground then process the data and correct the image for these variations in satellite attitude. That is what is shown in this slide: here I have shown how pitch, roll and yaw are defined, and the change in pitch, roll and yaw may happen due to changes in gravity, atmospheric drag, the effect of solar wind, and so on. These distortions are normally corrected. For example, if a satellite that should be moving along a certain path while collecting data suddenly undergoes a slight roll, then instead of looking at the intended ground point, the sensor is looking somewhere away from it, and the geometric accuracy of the image is lost: instead of the ground point x, y, it is now looking at some other ground point x1, y1. Hence it must be corrected. That attitude information is passed to the ground, and these corrections happen even before the data reaches us. Though we do not worry about this most of the time, it is always good to know. So let me complete the sentence on the slide: these distortions will be corrected by the data-providing agency that manages the satellite.
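For reference, here is a small, hedged sketch of how these three attitude angles are commonly expressed mathematically, as rotation matrices about the x (roll), y (pitch) and z (yaw) axes. The axis convention follows the aircraft example above and is an assumption of this sketch, since the lecture does not fix a specific convention.

```python
import numpy as np

def roll_matrix(phi: float) -> np.ndarray:
    """Rotation about the x-axis (flight direction) by angle phi, in radians."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s,  c]])

def pitch_matrix(theta: float) -> np.ndarray:
    """Rotation about the y-axis (across track) by angle theta: nose up or down."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])

def yaw_matrix(psi: float) -> np.ndarray:
    """Rotation about the z-axis (vertical) by angle psi."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

# A small roll tilts the look direction: a sensor pointing straight down (0, 0, -1)
# ends up looking slightly off to the side, so it images a different ground point.
look_nadir = np.array([0.0, 0.0, -1.0])
look_rolled = roll_matrix(np.radians(0.5)) @ look_nadir   # 0.5 degree roll
print("look direction after a 0.5 degree roll:", look_rolled)
```

From an orbital height of a few hundred kilometres, even a half-degree pointing error like this shifts the viewed ground point by several kilometres, which is why the attitude record transmitted with the image matters.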
So far we have seen distortions in the images due to scanning characteristics, attitude characteristics and so on. There are other ways in which the image or the data we get from the satellite may get distorted. The first is multi-band co-registration error. Let us say the satellite is collecting images in four bands; this is one pixel with ground coordinate x, y in band 1, and the same ground point is imaged in band 2, band 3 and band 4. So there are four bands, all collecting an image over some ground point x, y. Let us say this is pixel number 1, 1; it is a 2D image, and by 1, 1 I mean the first row and first column. In all four band images, pixel number 1, 1 should correspond to the same x, y; only then can we say that all the bands within the image have the same geometric registration. But what happens is that, because of the sensor layout, the band layout, all the band-4 data may be collected by a detector here, the band-3 data by a detector there, and so on; the detectors for the four bands are actually located slightly away from each other. So there is a real chance that the same pixel in each band, pixel number 1, 1, carries information about slightly different points on the ground. Ideally, if four bands are there in one sensor, the same pixel number in all four band images should correspond to the same ground point, but there can be a slight mismatch in the ground area imaged by the four different bands, or however many bands there are. This is what we call multi-band co-registration error: the same pixel number 1, 1 in the different bands has imaged slightly different ground points. It must be corrected, because we normally stack all the band images one over the other and then visualise or process them, and that sort of image processing suffers if the bands are not registered. All the bands, all the pixels, should have the same corresponding coordinates as their adjacent bands. This sort of error is quite natural in satellites, and nowadays it too is mostly corrected by the data providers.

The next error that may come in is a topographic effect, a sudden scale change. Let us say a satellite is flying over a terrain and there suddenly comes a large plateau, and let us say this plateau is quite tall, as in the case of Tibet, a pretty high region. What happens is that the height of the satellite above the surface is suddenly reduced: the orbital height may be, say, 700 kilometres, and the plateau may be some 10 kilometres, 10,000 metres, a pretty high surface. It is an extreme example, but I am using it just for explanation. So when the satellite comes over the plateau there is a drastic change in its height above the terrain from this point to this point, and hence there is a change in scale: the terrain appears closer and zoomed in. That is what we call a sudden scale change in the image due to the presence of large topography; a small numerical sketch of this is given below. So these two are some examples of image distortions.
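To put rough numbers on this scale change, here is a minimal sketch assuming a simple model in which the ground footprint of a nadir pixel is approximately (height above terrain) × IFOV. The 700 km orbit and 10 km plateau are the figures used in the example above; the 1.3 milliradian IFOV is an assumed, purely illustrative value.

```python
IFOV_RAD = 1.3e-3      # assumed angular IFOV of a single detector, radians
ORBIT_KM = 700.0       # orbital height used in the lecture example
PLATEAU_KM = 10.0      # terrain height of the plateau in the example

def pixel_footprint_km(height_above_terrain_km: float) -> float:
    """Approximate ground size of one nadir pixel: footprint ~= H * IFOV."""
    return height_above_terrain_km * IFOV_RAD

flat = pixel_footprint_km(ORBIT_KM)                  # over low-lying terrain
plateau = pixel_footprint_km(ORBIT_KM - PLATEAU_KM)  # over the high plateau

print(f"pixel footprint over flat terrain : {flat * 1000:.0f} m")
print(f"pixel footprint over the plateau  : {plateau * 1000:.0f} m")
print(f"scale change: about {100 * (flat - plateau) / flat:.1f} %")
```

Even this extreme 10 km of relief changes the local pixel footprint, and hence the image scale, by only about 1.4 percent, but over a wide scene such relief-induced scale variation is still enough to displace features noticeably.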
But how are these image distortions corrected? I told you that many distortions enter the picture and affect the geometric accuracy, so how do we correct them? Nowadays, for the most part, we need not worry about the geometric accuracy of the data: the geometric standards of present-day images are quite high, and they are corrected for distortions to the extent possible before being supplied to us. That is, if there is a ground point x, y on the ground, you will be able to identify it quite closely from the image as well; the accuracy is quite high now.

But how is it generally done? Say we have an image with some geometric distortion: on the ground a point has the coordinate 100, 100, but from the satellite image the calculated coordinate is 75, 80; there is a shift. How do we correct it? We then need to do a process known as image-to-map registration. We should have a proper map of the area; let us say we have it, and in the map we assume all the coordinates are perfect and correct. We take the image and say: this pixel in the image should have this ground coordinate. In this way we identify some 5 to 10 points in the image, get their actual ground coordinates from the map or from a GPS survey done on the ground, feed them into the image, and reorient it, that is, create a new image after correcting for the geometric distortion. This is known as image-to-map registration.

Similarly, some scanners can look at angles away from nadir. We can correct for this to some extent as well and make the image appear as if the scanner were looking from nadir. That is, if I look straight down from the top, a building may appear as a square; but if I look at the building at an angle, instead of seeing only its square top I will also see the side of the building. Such distortions can be corrected, and we call this orthorectification: we make sure that all points in the image appear as if we are looking at them from directly above. These are some examples of how images are corrected; the detailed principles of these corrections naturally form part of a digital image processing course, so we will not cover them in this particular course, but I am telling you about them just for your information.

So, to sum up what we have covered in this series of lectures: we discussed in detail the four important characteristics of a remote sensing system, the spatial, spectral, radiometric and temporal characteristics, after which we discussed the different ways in which images can be distorted and what the characteristics of the data will be, which we should always keep in mind. The change in the GIFOV and GSI elements in a whisk-broom scanner, for instance, cannot really be undone: even though we can do some geometric correction, once the GIFOV is enlarged and the data has been collected over it, practically speaking we cannot change it. These characteristics of the data we should always keep in mind and remember while we work with remote sensing images. With this we end this lecture. Thank you very much.
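As a short supplement to the image-to-map registration idea mentioned above, here is a hedged sketch of one common way it can be done: fitting an affine transform by least squares to a handful of ground control points. The control-point coordinates below are made up purely for illustration; real workflows use surveyed or map-derived points and often higher-order or rigorous sensor models.

```python
import numpy as np

# Hypothetical ground control points: (column, row) in the distorted image
# and the corresponding (x, y) map coordinates (e.g. from a map or GPS survey).
image_pts = np.array([[10.0, 12.0], [480.0, 15.0], [470.0, 500.0],
                      [20.0, 495.0], [250.0, 260.0]])
map_pts = np.array([[1000.0, 5000.0], [5700.0, 4950.0], [5650.0, 150.0],
                    [1080.0, 200.0], [3350.0, 2580.0]])

# Affine model: [x, y] = [col, row, 1] @ A; solve for the 3x2 matrix A by least squares.
design = np.hstack([image_pts, np.ones((len(image_pts), 1))])   # N x 3
coeffs, residuals, *_ = np.linalg.lstsq(design, map_pts, rcond=None)

def image_to_map(col: float, row: float) -> np.ndarray:
    """Map a pixel position to ground coordinates using the fitted affine transform."""
    return np.array([col, row, 1.0]) @ coeffs

print("fitted affine coefficients (per output coordinate):\n", coeffs.T)
print("ground coordinate of pixel (100, 100):", image_to_map(100.0, 100.0))
```

In a real workflow the fitted transform, or its inverse, would then be used to resample the image onto the map grid, producing the geometrically corrected image described in the lecture.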