Hello everyone, welcome to the next lecture in the topic of active microwave remote sensing, where we are discussing imaging radar. In the last lecture we discussed how a radar works in principle, and we also got introduced to a few terms. In this lecture we will go further in understanding imaging radar systems. Last class we defined a few terms such as azimuth direction, range direction, near range, far range, depression angle, look angle and incidence angle. With respect to all of these we will be defining our image acquisition characteristics, and based on these factors, the look angle and the incidence angle, the terrain will also look different to us. We will see later that the properties of the terrain that we acquire from radar images will change when we change this observation geometry.

In the last lecture I told you that a radar basically measures the distance between the source and the target. It also measures the incoming power in some form. One more important thing to understand in microwave remote sensing is that we also measure polarization, because in the microwave region the polarization of the electromagnetic radiation gives us additional information about the object of interest in the terrain. So normally most active microwave systems will record the incoming radiation in different polarizations. Polarization is nothing but the direction in which the electric field of the EMR is oriented. We have seen this in detail in the earlier classes. We have also seen that the polarization can be horizontal, vertical, circular or elliptical. But normally in microwave systems we use horizontal polarization or vertical polarization, or both.

Let us say we have a system that can transmit electromagnetic radiation only in horizontal polarization, which we indicate with the label H. This wave will travel to the object, get reflected and come back. Let us say the system is also going to receive only EMR that is horizontally polarized. The first H indicates the polarization of the wave that is transmitted; the second H indicates the polarization of the wave that is received. So HH indicates that the system transmitted microwaves that are horizontally polarized, and of the total signal that came back after interacting with the earth surface, where some waves may preserve their orientation and some may change it, the system recorded only the horizontally polarized component. Hence we label it as HH polarization. Similarly, some systems can be VV: they transmit a vertically polarized signal and receive only the vertically polarized signal. Systems in which both the transmitted signal and the received signal have the same polarization we call like-polarized or co-polarized; that is, the same polarization is used for transmission as well as reception. But certain systems are capable of transmitting in one polarization and receiving the waves that come back in the other polarization: HV or VH, where transmission is H and reception is V, or transmission is V and reception is H. Such systems are called cross-polarized. So essentially, combining these, microwave systems can operate at HH, HV, VH and VV.
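As a small illustration of this naming convention, here is a minimal sketch (only the two-letter channel codes from the convention above are used; the function name is my own) that labels a channel string such as "HV" as co- or cross-polarized:

```python
def classify_channel(channel: str) -> str:
    """Classify a radar polarization channel code such as 'HH' or 'HV'.

    First letter = transmit polarization, second letter = receive polarization.
    """
    tx, rx = channel[0].upper(), channel[1].upper()
    if tx not in "HV" or rx not in "HV":
        raise ValueError(f"Unknown polarization code: {channel}")
    # Same polarization on transmit and receive -> co-polarized (like-polarized)
    return "co-polarized" if tx == rx else "cross-polarized"

for ch in ["HH", "HV", "VH", "VV"]:
    print(ch, "->", classify_channel(ch))
# HH and VV are co-polarized; HV and VH are cross-polarized
```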
HH and VV are co-polarized, and HV and VH are cross-polarized. Some systems have the capability of measuring all of these: transmit in both H and V and receive in both H and V, giving rise to 4 different images. Recall that in optical remote sensing each wavelength band gives us one image, right? Say 0.4 to 0.5 micrometre is one band giving one image, 0.5 to 0.6 micrometre another band giving another image, and so on. In microwave, objects will look different in different polarizations. Hence, with the wavelength remaining the same, say an L-band or C-band radar, because of this change in polarization of the transmitted and received signals we will get one image per polarization: HH will give one image, HV polarization may give another image. Some systems are quad-pol (quad-polarization), which have all 4 combinations I just listed. Such a system can produce 4 images at the same wavelength, where each image represents how objects look when illuminated by the radar in one polarization and received in one polarization: HH, HV, VH and VV. So at the same given wavelength we can have more than one image in a radar system, acquired with different polarizations. Each polarized image can give us different information about the land surface, and polarimetric SAR data processing will provide additional information; that is like a whole new research field, where a lot of active researchers are working on understanding how this polarization information helps us learn more about the earth surface and its properties. So whenever we come across a radar image, we will also have to look at which polarization it is: whether it is HH polarization, HV, VH or VV. The first letter indicates the polarization with which the wave was transmitted; the second letter indicates the polarization with which the wave was received back.

We will now try to go a little deeper and understand the working principle of this side-looking radar, especially the real aperture radar. First we will understand the real aperture radar concepts, and then we will move on to synthetic aperture radar briefly and try to understand the similarities and differences between them. I told you that imaging radar basically works based on measuring distance. As the platform flies, the radar calculates the distance to different objects on the earth surface from this particular point. The distance is actually measured in what is known as the slant range. The radar is not looking at nadir; it is looking at some angle away from nadir. So the distance is measured along the line joining the antenna and any point on the ground surface. The distance is not measured as a horizontal distance, but along this slant line that connects the antenna with every point on the ground. So the radar essentially measures the slant range distance, or simply the slant range. Normally, what we will need for our applications is the distance of features from the radar acquisition system in a horizontal plane. Say this is the system, and this is its height above the earth surface; directly below it is the nadir point. From this nadir point we will be interested in the horizontal distance of each feature on the earth surface. This is what we call the ground range.
So the ground range is nothing but the horizontal distance of different features from the nadir point of the imaging radar system, but the radar will measure the slant range. Because of this slant range image acquisition, we will also see later that objects in the near range will appear compressed compared to objects in the far range. Like here we have two features, A and B, two fields, both of which have the same linear dimensions. But since A is in the near range and B is in the far range, A will appear compressed in the radar image in comparison with B. So we may think A is a field with smaller dimensions when we compare it with field B. This happens because of the slant range measurement. But if we convert this slant range to ground range, if we calculate the true horizontal distance of each field, then we will realize that both fields have the same linear dimensions. This is a major difference between how our normal photographs are taken and how a radar imaging system works.

For a flat horizontal surface it is always easy to convert the slant range to the ground range using simple trigonometry. Say this is the slant range measured by the radar system along this line, for example to this point here in field A. The distance from the antenna is measured along this line, and the length of this line is recorded as the slant range. In order to convert this slant range distance to a ground range distance, if we know the flying height, it is easy to convert using simple trigonometry. Using the basic Pythagoras theorem we can write

(slant range)² = (ground range)² + (platform height)²,

and rearranging this we can calculate the ground range distance; it uses very simple trigonometric principles. Using the depression angle or the look angle we can also estimate the ground range distance. Note that these equations assume the terrain is flat and horizontal. Sometimes, or rather most of the time, the terrain may not be flat, giving rise to some complexities in the acquired image, and there we may have to do a correction using a digital elevation model. A digital elevation model gives us, for each pixel location (x, y) on the terrain, the elevation, and using that we can correct these image geometry distortions. So normally radar images will be acquired and the distances will be measured in slant range, and for our purposes we may have to convert the image to ground range in order to get a proper representation of terrain features. Later we will see that this slant range image acquisition also causes a lot of distortions in the image when the surface is not flat and has topographic features within it.

Now, how is a pixel defined in radar? What is the concept of the IFOV in radar image acquisition? We know how an IFOV is formed in our normal visible or thermal remote sensing: based on the physical dimension of the detector and the orbital height, the sensor has a small area projected on the ground, and whatever energy comes from that patch of ground is recorded in the imaging system as one single unit. That is what we call the IFOV, right? Here the concept is a bit different.
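Here is a minimal sketch of this flat-terrain conversion (the altitude and slant range values are hypothetical, chosen only for illustration):

```python
import math

def slant_to_ground_range(slant_range_m: float, height_m: float) -> float:
    """Convert slant range to ground range over flat, horizontal terrain.

    From Pythagoras: SR^2 = GR^2 + H^2  =>  GR = sqrt(SR^2 - H^2)
    """
    return math.sqrt(slant_range_m**2 - height_m**2)

# Hypothetical values: platform at 6 km altitude, measured slant range 10 km
print(slant_to_ground_range(10_000.0, 6_000.0))  # -> 8000.0 m from nadir
```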
So the formation of this IFOV, one single pixel element, is not a straightforward task. It is formed in two dimensions, in the across-track direction, that is the range direction, and in the azimuth direction, and these will differ. The pixel size is as depicted in this particular figure. This is the platform carrying the radar, collecting data here. In the range direction we will be forming the image, and this is one single pixel in a side-looking airborne radar (SLAR) image. This pixel will keep on varying: here you can see the pixel may look like this in the near range and something different in the far range. So the pixel has a different size in the range direction and in the azimuth direction, and the total angle which the radar beam subtends defines the swath width. The entire footprint covered by this radar beam in the across-track direction defines the swath, whereas the pixel size in the range direction and in the azimuth direction will be defined independently, based on two different properties of the radar system.

So the spatial resolution, or simply put the pixel size, in a radar image is defined in two directions: it is defined one way in the range direction and another way in the azimuth direction. We call these the range resolution and the azimuth resolution. What are these? Range resolution is nothing but the dimension of each pixel in the range direction. Say the radar is moving like this, acquiring the image in this particular direction; the length of each pixel in the range direction is what we call the range resolution. Similarly, the pixel size in the along-track, or azimuth, direction is what we call the azimuth resolution. Here the concept of resolution is this: all the features present within one pixel element, as defined by this range and azimuth resolution (let me just go back one slide), will produce one single power return; the total power reflected back from within that element is recorded as one single value. So the features present within a single resolution element most likely cannot be resolved independently. Say we have two towers within the same pixel element: we may not be able to see those two towers separately; we may think there is only one tower present there. That can happen. But if these two towers are located in two different pixels, then we can say, okay, this is tower one and this is tower two, there are two towers over there; it is easy for us. This is how the pixel size, the dimension of the pixel in both the azimuth and range directions, influences whether we are able to resolve, or distinguish, two features on the ground.

First we will discuss the pixel size, or resolution, of the radar system in the range direction, which we call the range resolution. What determines the size of the pixel in the range direction? In the range direction, two objects will be resolved as two different features if they are separated, in slant range, by a distance of at least half of the pulse length. So first we should know what exactly the concept of pulse length means.
Pulse length: as I told you in the last lecture, each radar system sends microwaves in pulses, transmitting for a certain duration, say 10⁻⁶ seconds. Each pulse will have a certain length based on the time for which it is transmitted. Do not confuse this with the wavelength λ; the wavelength can be different, but the pulse length depends on the transmission time of the radar antenna. Each transmitted pulse occupies a certain distance, and that is what we call the pulse length. Let us assume the velocity of EMR is 3 × 10⁸ metres per second, and let us say a system transmits microwaves for 10⁻⁶ seconds, that is 1 microsecond. The total length of the transmitted pulse will be 3 × 10⁸ × 10⁻⁶ = 3 × 10² = 300 metres. So this pulse has a length of 300 metres; the time duration for which the pulse is transmitted determines the pulse length. Now, if two objects differ in slant range distance by more than half of the pulse length, they will be resolved as two different features.

Maybe I will explain with this particular example. Here we have two towers, say towers 3 and 4, separated horizontally on the ground by a distance of 30 metres; this is the horizontal distance between them. The radar system measures the slant range to tower 3 and the slant range to tower 4, say SR3 and SR4. If the difference between them is more than half of the pulse length, where the pulse length is the transmission time τ multiplied by the velocity of EMR, c, then these two towers will be resolved independently and labelled as two different features: tower 3 is here, tower 4 is here, they are different. On the other hand, if the slant range distance between two towers, say towers 1 and 2, is less than half of the pulse length, then what will happen is this: the pulse transmitted by the radar antenna first reaches tower 1 and begins to be reflected back. The entire 300 metre pulse has to be reflected back by this object, which takes some time. In that same time the pulse will have reached tower 2, and even before the 300 metre pulse has been completely reflected from tower 1, the reflected signal from tower 2 will begin to reach the system. That is, the reflected microwave pulses from towers 1 and 2 will overlap in time at the receiver. If that happens, if the slant range between them is less than half of the pulse length, these two features will be recorded as one single feature; the radar will treat these signals, arriving at very short time intervals, as most likely coming from one single feature, and label them as such. So the condition we have seen is based on slant range distance: if the slant range distance between two features is more than half of the pulse length, the two features will be imaged independently.
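A minimal sketch of this criterion (the specific slant range values are hypothetical, chosen only to show both outcomes):

```python
C = 3e8  # velocity of EMR in m/s

def pulse_length(tau_s: float) -> float:
    """Physical length of one transmitted pulse: L = c * tau."""
    return C * tau_s

def resolved_in_slant_range(sr1_m: float, sr2_m: float, tau_s: float) -> bool:
    """Two targets are imaged separately only if their slant ranges
    differ by at least half the pulse length."""
    return abs(sr1_m - sr2_m) >= pulse_length(tau_s) / 2

tau = 1e-6                       # 1 microsecond transmission
print(pulse_length(tau))         # -> 300.0 m, as computed above
# Hypothetical slant ranges, tested against the 150 m (half-pulse) threshold:
print(resolved_in_slant_range(9_000.0, 9_120.0, tau))  # False: one merged return
print(resolved_in_slant_range(9_000.0, 9_170.0, tau))  # True: two separate features
```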
But when we form pixels in the image, or when we try to understand the distance between features, we always think in ground range distance, right, the horizontal distance between features on the ground. So we need to convert this slant range criterion to ground range, which we can do using the depression angle. The ground range resolution is

Rr = cτ / (2 cos θd),

where θd is the depression angle. Using this formula we can convert the slant range resolution to a ground range resolution. If we do this, we will realize that for the same slant range separation, the corresponding ground range separation keeps changing with the depression angle. That is, for two objects here and two objects there to have the same slant range difference, the horizontal distance between them must be different. Let us go back to the same example with towers 3 and 4 and towers 1 and 2. In the far range, if the two towers are horizontally separated by only 19.58 metres, they already satisfy the criterion: the difference in slant range between them is more than or equal to half of the pulse length. But in the near range, the two objects must be separated by a horizontal distance of close to 36 metres before they satisfy the criterion. What I mean is this: two objects differing in slant range by half a pulse length will be imaged separately, but in the near range the actual horizontal distance needed to satisfy that criterion is much larger. If objects in the near range are closer together than that, they will most likely be imaged as one single feature.

We will quickly work through one example; just remember the equation Rr = cτ / (2 cos θd), where θd is the depression angle, for converting the slant range resolution to a ground range resolution. In this example we have four towers: 1, 2, 3 and 4. For towers 1 and 2 the depression angle is 65 degrees, which means they are in the near range. For towers 3 and 4 the average depression angle is 40 degrees. A real aperture radar is operating with a pulse duration of 0.1 microsecond, that is 0.1 × 10⁻⁶ = 10⁻⁷ seconds. The pulse length is then c × τ = 3 × 10⁸ × 10⁻⁷ = 30 metres. Both pairs of towers are separated by a horizontal distance of 30 metres. Now use the formula Rr = cτ / (2 cos θd). For towers 1 and 2 to be resolved independently as two separate features, the horizontal distance between them should be at least 35.5 metres; only then is the slant range difference between them more than half the pulse length. But in our example towers 1 and 2 are separated by a horizontal distance of only 30 metres, so most likely they will fall within the same pixel and will not be resolved independently. On the other hand, in the far range, towers 3 and 4 are again separated by a 30 metre horizontal distance, but evaluating the same formula cτ / (2 cos θd) there, the required ground range distance comes to only 19.58 metres.
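Here is a quick numeric check of both cases (a sketch; c = 3 × 10⁸ m/s, with the pulse duration and depression angles taken from the example above):

```python
import math

C = 3e8  # velocity of EMR in m/s

def ground_range_resolution(tau_s: float, depression_deg: float) -> float:
    """Ground range resolution: Rr = c * tau / (2 * cos(theta_d))."""
    return C * tau_s / (2 * math.cos(math.radians(depression_deg)))

tau = 0.1e-6  # 0.1 microsecond pulse -> 30 m pulse length
print(round(ground_range_resolution(tau, 65), 2))  # near range: 35.49 m (~35.5 m)
print(round(ground_range_resolution(tau, 40), 2))  # far range:  19.58 m

# With 30 m of horizontal separation, towers 1 and 2 (needing ~35.5 m) merge
# into one pixel, while towers 3 and 4 (needing only ~19.58 m) are resolved.
```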
That means towers 3 and 4 will be resolved as two separate features if the horizontal distance between them is more than 19.58 metres. In this case, since the horizontal distance between them is 30 metres, they will indeed be resolved as two separate features: this one falls in one pixel and this one in another. This indicates that the pixel size along the range direction varies: in the near range, objects will be resolved only if they are far apart, while in the far range, away from the image acquisition system, even closely spaced objects will be imaged separately. This is what defines the resolution of the radar system in the range direction. In the next class we will look at the resolution of the radar system in the azimuth direction, and we will also see how the pixel size varies with distance. With this we end this particular lecture. Thank you very much.