Hello and welcome to today's lecture. Just to refresh your memory, we are currently in module 2 and this is the 5th lecture. In the previous lectures, we discussed the radar equation in detail: what it is, the fundamental relationship it gives, and how to derive it. If you remember, as part of the previous lecture I promised to show you a diagram that will give you more clarity about the difference between beta naught, sigma naught and gamma. So, let me start today's lecture with this diagram. To refresh your memory, sigma naught is the normalized backscatter coefficient. We also refer to sigma naught as the differential radar cross section or the normalized radar cross section, abbreviated NRCS. Assume this is the ground surface. As shown in the diagram, sigma naught is the radar cross section normalized over the actual ground area, so the area considered is the area projected onto the ground. Now, ground topography heavily influences the local viewing angle: if you look at a complex terrain, a mountainous terrain, say the Himalayas, the ground topography is not going to be flat like this. So, in sigma naught the area used is the projected ground area, which means the actual area on the ground may not exactly equal the projected area. Now, assume you do not have access to any topographic information; you do not know how the actual terrain fluctuates in height. Then this area is calculated by assuming a flat reference surface, like the one shown here. But this would be an incorrect representation of sigma naught, isn't it? Because we are using a flat reference surface when there are actually variations in topography. Now, consider the reverse scenario.
Assume that by some means, from some source, you do have the actual information about the topography, how the ground surface varies in height. If we have the topographic data, we can correct the area and hence get a better representation of sigma naught. So, that is about sigma naught. Now, coming to beta naught, which is referred to as radar brightness. Here I want you to focus on the area. For sigma naught, if you have the topographic data you can use it; otherwise a flat reference surface is assumed. For beta naught, the radar brightness, as shown in the diagram, the values share very useful information about the ground surface. Please note that when topographic data is not available, the radar brightness itself tends to be used as a standard measure of the backscattered return signal. Let me reiterate: beta naught, the radar brightness, is normalized to the projected area on some known reference surface. Now, coming to gamma. Gamma, as we already discussed, is nothing but the backscattering coefficient normalized by the cosine of the incidence angle. Please note that, as detailed in the diagram, gamma uses the area projected along the line of sight, and gamma is suitable when considering volume scatterers such as forests. So, essentially, we are talking about the difference between sigma naught values, beta naught values and gamma values. In the tutorials, we will learn how to generate sigma naught and beta naught imagery, so this understanding, this background, is going to be useful for you.
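If you want to experiment with these three quantities yourself, the standard conversions can be written in a couple of lines. This is a minimal Python sketch, assuming linear (not dB) units; the function names and example numbers are my own, not from the lecture slides.

```python
import math

def beta0_from_sigma0(sigma0, theta_i_deg):
    """Radar brightness: beta0 = sigma0 / sin(theta_i), theta_i the incidence angle."""
    return sigma0 / math.sin(math.radians(theta_i_deg))

def gamma_from_sigma0(sigma0, theta_i_deg):
    """Gamma: sigma0 normalized by the cosine of the incidence angle."""
    return sigma0 / math.cos(math.radians(theta_i_deg))

# Example: sigma0 = 0.05 (linear units) observed at a 30 degree incidence angle
print(beta0_from_sigma0(0.05, 30.0))   # 0.05 / sin(30 deg) = 0.1
print(gamma_from_sigma0(0.05, 30.0))   # 0.05 / cos(30 deg), about 0.0577
```

Note the design choice of taking the incidence angle in degrees and converting inside the functions, which mirrors how the angle is usually quoted on slides.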
Remember, as I mentioned before, gamma is suitable when we consider volume scatterers such as forests, and for forests, gamma shall remain approximately constant across all incidence angles. So, just to repeat, the relationships between gamma, sigma naught and beta naught are given by the equations shown here. We have already discussed that theta i is the incidence angle; sigma naught is nothing but beta naught multiplied by the sine of theta i, and gamma is nothing but sigma naught divided by the cosine of theta i, where theta i is the incidence angle. So, now we know the difference between sigma naught, beta naught and gamma values. Let us try to revisit the radar equation now, because in the last class we just learnt the individual terms required to understand and derive the radar equation. What does it give you? It gives us the fundamental relationship for the power of the received echo, P subscript r. Here P t is the transmitted power, G is the gain of the antenna, lambda is the wavelength, sigma is the radar cross section and R is the range, that is, the distance from the radar to the target. Till now, we have understood each and every term that makes up the radar equation, and we also learnt how to derive it, but we have not spent much time interpreting it. So, as part of this lecture, let us try to interpret the radar equation. Looking at the equation in front of you, you can easily tell me that as the range increases, the signal drops; you can directly estimate it by looking at the equation. Which means, if we triple the range, remember range is nothing but the distance from the radar to the target, so assuming I make this distance 3 times, that is 3R, then what happens to the return power?
It is going to reduce by a factor of 81, isn't it? Because Pr, the returned power, is inversely proportional to the fourth power of the range. So, if I triple the range, the returned power decreases by a factor of 81, which means if you are, say, a radar design engineer, you need to understand this relationship because it is going to severely limit the operation of a radar. Now, the radar equation can also be rewritten to represent the signal to noise ratio. So, what have I done? I have rewritten the same radar equation in the form of a signal to noise ratio, Pr divided by N naught. This tells us that the higher the transmitted power, the higher the signal to noise ratio, because they are directly proportional, isn't it? Now, what happens if we triple the transmitted power? The returned echo power also triples, isn't it? Now, let us try to understand the relationship with respect to wavelength. The signal to noise ratio appears directly proportional to the square of the wavelength, but I am going to say that shorter wavelengths will have a better signal to noise ratio. You must be wondering how, because the equation shows the signal to noise ratio directly proportional to the square of the wavelength, and now I am telling you that, no, shorter wavelengths will have a better signal to noise ratio. The reason is that, if you remember the derivation of the radar equation, you can recollect that the effective area of an antenna is nothing but the gain times the square of the wavelength divided by 4 pi. So, you can recollect that a change in wavelength also has an impact on the antenna gain.
Therefore, the wavelength is going to impact the gain, and the net effect, for a fixed antenna area, is that the signal to noise ratio shall be inversely proportional to the square of the wavelength. That is why I said that shorter wavelengths will have a better signal to noise ratio: because the antenna gain is also affected by the wavelength. All right? Now, an important point to be noted here is that the radar cross section, that is sigma, is the sole term that depends upon the target properties. All the other parts of the radar equation, be it the transmitted power, the wavelength, the antenna gain or the effective area of the antenna, are already known to the design engineer. This information is already available with the design engineer, and of course the range is known from the time delay, which the radar uses to estimate distance. Now, since the radar cross section, sigma, is the sole term that depends upon the target characteristics, the target properties, it can be estimated by rearranging the terms of the radar equation, isn't it? From the radar equation, I have just rearranged the terms to express the target radar cross section. Now, remember, for satellite-borne radars, the targets are usually distributed targets rather than discrete targets. Let me repeat: when a radar on board a satellite looks down at the earth's surface, it sees distributed targets and not one single target. The earth's surface can have many features like water bodies, buildings, roads, vegetation and bare earth. So, a satellite-borne radar sees distributed targets, not a discrete target.
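Several of the manipulations discussed above can be checked numerically. The Python sketch below implements the monostatic radar equation and the rearrangements: the fourth-power range falloff, the linear dependence on transmitted power, the wavelength effect when the physical antenna area is held fixed, and the recovery of sigma naught from a measured echo. All numbers are illustrative only, not taken from any real sensor.

```python
import math

def received_power(pt, gain, wavelength, sigma, r):
    """Monostatic radar equation: Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)."""
    return pt * gain**2 * wavelength**2 * sigma / ((4 * math.pi)**3 * r**4)

def nrcs_from_echo(pr, pt, gain, wavelength, r, area):
    """Rearranged: sigma = Pr*(4*pi)^3*R^4 / (Pt*G^2*lambda^2), then sigma0 = sigma/A."""
    sigma = pr * (4 * math.pi)**3 * r**4 / (pt * gain**2 * wavelength**2)
    return sigma / area

# Illustrative parameters (my own, not a real sensor):
pt, g, lam, sigma, r = 1000.0, 100.0, 0.056, 50.0, 800e3

# 1. Tripling the range cuts the echo by 3^4 = 81:
p1 = received_power(pt, g, lam, sigma, r)
print(p1 / received_power(pt, g, lam, sigma, 3 * r))      # ratio = 81

# 2. Tripling the transmitted power triples the echo:
print(received_power(3 * pt, g, lam, sigma, r) / p1)      # ratio = 3

# 3. With a fixed antenna area A_e, G = 4*pi*A_e/lambda^2, so Pr ~ 1/lambda^2;
#    halving the wavelength then quadruples the echo:
def received_power_fixed_area(pt, a_e, wavelength, sigma, r):
    gain = 4 * math.pi * a_e / wavelength**2
    return received_power(pt, gain, wavelength, sigma, r)

print(received_power_fixed_area(pt, 2.0, 0.028, sigma, r)
      / received_power_fixed_area(pt, 2.0, 0.056, sigma, r))  # ratio = 4

# 4. Round trip: recover sigma0 from the echo of a distributed target of area A:
area = 1000.0                      # illuminated ground area, so sigma0 = 50/1000
print(nrcs_from_echo(p1, pt, g, lam, r, area))            # recovers 0.05
```

Notice that none of this needs to be memorized: every line is just the one radar equation with its terms rearranged, which is exactly the point made in the lecture.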
So, as we discussed in the previous lecture, for distributed targets we need to use something known as the normalized radar cross section, sigma naught, which is nothing but sigma divided by the area A. This means that, using the same radar equation, I can rewrite it further to represent the normalized radar cross section, sigma naught. So, what have I done? I rearranged the terms of the radar equation first to get a value of sigma, and now I am rewriting it again to represent sigma naught, the normalized radar cross section, which is sigma divided by the area A. Now, the practical utility of these expressions, you know, like why are we even discussing this or where is it useful? All these practical questions will be answered when we start covering the numericals, the problems. When we start solving problems, you will get more clarity on what I am mentioning now. But for the sake of this lecture, I want you to be familiar with these terminologies. Just be aware that using the same radar equation, I can rearrange the terms and interpret many details. I shall give you some time to let this sink in. If you understand how to derive the radar equation, you do not have to learn these by heart, because you know the relationship of each term with the received power, and the rest is nothing but rearranging terms to estimate the normalized radar cross section, or to get the range, or to get the signal to noise ratio. Now, with this background, let me take your attention to a slightly different topic, termed Synthetic Aperture Radar image distortion. See, when we talk about a real aperture radar or a synthetic aperture radar, they look sideways. They do not send the beam to the nadir; they look sideways.
They have a side-looking geometry, which implies that both a real aperture radar, that is RAR, and a synthetic aperture radar, that is SAR, tend to generate images that have some unusual features. Now, if you are a design engineer, you need to understand these unusual features so that you are better able to interpret radar imagery. Please remember that these unusual features depend on a number of instrument characteristics, for example the shape of the antenna pattern and the viewing geometry of the sensor, to name a few. And whenever a radar tries to capture information over a complex, mountainous terrain, I am going to give the example of the Himalayas, that is when the most noticeable features of radar imagery arise. Which means the most noticeable radar features become visible when the ground surface diverges from the reference surface. Let us try to understand a few of these so-called unusual image features. Now, why do they occur at all? Because whenever we try to use a two-dimensional imaging system to represent a three-dimensional surface, we always make the assumption that the underlying surface is flat and has no topography. So, firstly, as part of SAR image distortions, we will discuss geometric distortion. And remember why it arises: because we are trying to use a two-dimensional imaging system to capture a three-dimensional surface, so we assume the underlying surface is flat with no topography, and that is when these unusual features become apparent. The greater the difference between the reference surface and the actual surface, the greater the geometric distortion in the projected image. It will become clearer as we move ahead and discuss.
So, let us try to understand something known as the layover effect. A schematic explanation of layover is shared here. What you see as a black line is the reference surface, and the shaded blue triangle represents a sloped surface, a slope so steep that the top of the object is closest to the instrument. So, once again, assume what you see here is the reference surface, the ground surface; the shaded blue triangle is a mountain, a steeply sloping mountain; and what you see here is an aircraft which carries a radar. Now, whether it is SAR or RAR, it has a side-looking geometry, so it looks at the mountain. Let the top of the mountain be represented by Q and the base of the mountain by P. So, what happens? The top of the object, the top of the mountain, that is Q, is closer to the instrument than the foot of the object. This means that the radar echo coming from the peak, from Q in this case, shall reach the instrument before the return echo from the base does. Here, for the sake of clarity, our object is a steeply sloped mountainous terrain. So, once again, what happens? When you have a steeply sloping terrain like the blue shaded region, the return echo from the top of the object reaches the instrument before the return echo from the base. So, what happens in the resulting image? The top of the object is mapped closer to the nadir than the base or foot of the mountain. Which means, as shown in the diagram here, the mountain shall appear to lean towards the nadir, isn't it? And hence this effect is called the layover effect. Now, let me give you a different scenario.
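Before moving to that scenario, the echo-ordering argument for layover can be verified with simple geometry: in a 2-D cross section, the slant range to the peak Q of a steep slope is shorter than the slant range to the base P. The coordinates below are my own illustration, not taken from the schematic.

```python
import math

def slant_range(radar, point):
    """Euclidean distance from the radar to a point in the 2-D cross section."""
    return math.hypot(point[0] - radar[0], point[1] - radar[1])

radar = (0.0, 10000.0)            # aircraft 10 km above the nadir point
p = (8000.0, 0.0)                 # base of the mountain
q = (8100.0, 2000.0)              # peak of a steep slope, almost above the base

# The echo from the peak arrives first, so Q is mapped closer to nadir: layover.
print(slant_range(radar, q) < slant_range(radar, p))   # True
```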
Assume the mountainous terrain is not so steep; it is only mildly sloped, like what you see here in the schematic in front of you. Again, let us go from scratch: what you see in black is the ground surface. The blue shaded region represents a terrain that is not as steep as what you saw earlier but is mildly sloped. The letters remain the same, that is, Q denotes the peak of the object and P the foot of the object, and here we have an aircraft carrying a radar instrument with a side-looking geometry. So, you can make an informed guess as to what will happen to the return echoes, isn't it? In this case, the return echo from the base, from P, reaches the instrument before the return echo from the top or peak of the object. Just try to get a feel for the distances, R being the position of the aircraft carrying the radar: you can see that RP is less than RQ, which is less than RS. So, what happens? The return echo from the base, from point P as shown in the schematic, reaches the instrument before the return echo from the peak. Now, what happens to the projected image? It shall also lean over, but the difference is that in foreshortening the projected image appears to lean away from the nadir; you can see it leaning away from the nadir rather than towards it. So, for the layover effect, the projected image appears to lean towards the nadir, whereas for this effect, known as foreshortening, the projected image appears to lean away from the nadir. Now remember that both layover and foreshortening can result in something known as radar shadow; given here is an example of a radar image which has shadow, and towards your left side you see the same schematic.
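The three effects named so far can be summarized as a rule of thumb in terms of the local slope and the incidence angle (measured from vertical): layover when the radar-facing slope exceeds the incidence angle, foreshortening for gentler radar-facing slopes, and shadow when the slope facing away from the radar exceeds 90 degrees minus the incidence angle. The sketch below uses those common textbook thresholds; the exact conditions depend on the geometry convention, so treat it as illustrative.

```python
def slope_effects(incidence_deg, foreslope_deg=0.0, backslope_deg=0.0):
    """Rule-of-thumb terrain effects for a side-looking radar.

    incidence_deg: incidence angle from vertical; foreslope faces the radar,
    backslope faces away. Thresholds are the usual textbook approximations.
    """
    effects = []
    if foreslope_deg > incidence_deg:
        effects.append("layover")            # peak echo returns before the base echo
    elif foreslope_deg > 0.0:
        effects.append("foreshortening")     # slope compressed, leaning away from nadir
    if backslope_deg > 90.0 - incidence_deg:
        effects.append("shadow")             # back slope never illuminated
    return effects

print(slope_effects(30.0, foreslope_deg=45.0))                      # ['layover']
print(slope_effects(30.0, foreslope_deg=15.0))                      # ['foreshortening']
print(slope_effects(70.0, foreslope_deg=10.0, backslope_deg=35.0))  # ['foreshortening', 'shadow']
```

The last call also illustrates why steep incidence angles make shadowing more likely: at 70 degrees, even a 35 degree back slope falls into shadow.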
The only difference is that you have an object sloped in a particular manner, such that the rays from the aircraft are not able to see the portion QS. This part is not visible because of how the object is sloped. So, radar shadow is a region which is completely dark, completely black, and in fact we can think of radar shadow as a zone of silence where there is absolutely no information, a region of no measured signal, where no echo is received. Now, please remember that radar shadowing is a function of the local slope. The schematic shown here will help you understand more clearly what a radar shadow is. Remember, please do not compare a radar image with an image captured in the visible or infrared region, because the meaning of shadow is different between microwave images and images captured in the visible or infrared region. In the case of radar shadow, it is nothing but a zone of silence from which there is no measured signal at all. So, we have seen geometric distortion, layover, foreshortening and radar shadow. Now, one more thing that I wanted to discuss is something known as motion errors. Remember that the platform carrying the sensor can be a satellite or an aircraft, as I have repeatedly shown in the slides. If the platform is a satellite, its motion is more or less well defined, as in the orbital path is relatively smooth and varies slowly. Now imagine that the platform carrying the sensor is an aircraft, as shown here. What happens? An aircraft will most certainly be influenced by wind speed and direction, and it shall be subjected to turbulence. When you travel in a flight, you may have experienced turbulence, isn't it? So, what I mean here is that the aircraft will be subjected to changes in roll, pitch and yaw. Now, what are roll, pitch and yaw?
It is shown in the schematic here: by roll I mean rotation about the front-to-back axis, pitch is rotation about the side-to-side axis, and yaw is rotation about the vertical axis. Let me give you some time to let roll, pitch and yaw sink in. We are talking about the motion errors that can arise because the platform carrying the sensor can be subjected to movement due to wind speed and direction. There can be some turbulence, and this in turn introduces errors into the radar image. Now, remember, in all our discussion so far we have been concerned only with the movement of the platform, which means somewhere we are assuming that the target is stationary, that the target is not moving, and that only the platform is moving and sending signals sideways with a side-looking geometry. The radar system is collecting information from a stationary target; that has been our notion or understanding so far. Please remember that this is not always the case: do targets remain stationary? Not always. Let me give you examples of moving targets, say ships, or vehicles on roads or terrain, or trains, or surface waves on water. All these are examples of moving targets. The target can also move, and the platform is already moving, which means the movement of the target tends to introduce something known as an extra Doppler component, in proportion to the relative velocity between the instrument and the target. We will see the Doppler effect and the Doppler-based synthesis of a synthetic aperture radar shortly. But for now, I want you to slowly start thinking that the target can be stationary or it can move; the platform carrying the radar system is already moving, which means a moving target introduces an extra Doppler component. And what is the result in the image? The moving target shall appear slightly displaced in the azimuth direction of the image. Remember that this displacement shall be in proportion to the relative velocity of the target.
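The azimuth displacement can be illustrated numerically. A commonly quoted approximation is that the shift is about the slant range times the target's line-of-sight velocity divided by the platform velocity; the function and the numbers below are my own illustration, and the proper Doppler derivation comes later in the course.

```python
def azimuth_shift(slant_range_m, target_los_velocity, platform_velocity):
    """Approximate azimuth displacement of a moving target in a SAR image.

    Rule of thumb: shift ~ R * v_los / V_platform, where v_los is the target
    velocity component along the radar line of sight.
    """
    return slant_range_m * target_los_velocity / platform_velocity

# A ship moving at 5 m/s along the line of sight, seen by a spaceborne SAR
# at ~850 km slant range with a platform speed of ~7500 m/s:
print(azimuth_shift(850e3, 5.0, 7500.0))   # roughly 567 m shift in azimuth
```

This is why moving trains in SAR images famously appear displaced off their tracks: even a modest line-of-sight velocity produces a shift of hundreds of metres from orbit.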
Again, you know, we will try to understand these topics through numericals, because unless you solve problems you may not get complete clarity. For now, I want you to understand that targets need not always be stationary; targets can move, the platform is already moving, and this can introduce something known as an extra Doppler component. So, in this section we have been discussing image distortions, image effects, and specifically geometric distortion. Remember, there are more image defects to be discussed, but in this part of the lecture we have limited our discussion to geometric distortion. And just to reiterate, accurate topographic information, accurate information about the terrain surface, is essential for use as a reference surface, which means that if we have information about the topography, well and good, then we can obtain a distortion-free image. Such an image is called a geocoded product: a distortion-free image. A geocoded product is nothing but the image projected onto the actual topography, which has been completely subjected to geometric correction. Now, whatever I am discussing is mentioned in the slide, so feel free to pause as and when required. We are discussing geometric distortions, and now our focus is on how to correct them, geometric correction, and we were discussing that if we have information about the topography, we can obtain a distortion-free image, which is called a geocoded product. Now, remember that topography can have varying effects on a radar image. This is the right time to introduce ground control points, or GCPs. A GCP is nothing but a point with known ground and image locations.
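To make the idea of GCPs concrete, here is a sketch of how three GCPs determine an affine mapping from image coordinates (row, column) to ground coordinates (x, y), which is the simplest form of the co-registration they enable. The solver, the GCP values and the transform are all my own illustration, not a production geocoding routine.

```python
def solve_linear(a, b):
    """Solve the square system A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_affine(gcps):
    """Fit x = a*row + b*col + c (and likewise y) from exactly three GCPs.

    Each GCP is ((row, col), (x, y)): an image location with a known ground location.
    """
    a = [[r, c, 1.0] for (r, c), _ in gcps]
    xs = solve_linear(a, [g[0] for _, g in gcps])
    ys = solve_linear(a, [g[1] for _, g in gcps])
    return xs, ys

# Hypothetical GCPs consistent with x = 10*col + 500 and y = -10*row + 2000:
gcps = [((0, 0), (500.0, 2000.0)),
        ((100, 0), (500.0, 1000.0)),
        ((0, 200), (2500.0, 2000.0))]
(ax, bx, cx), (ay, by, cy) = fit_affine(gcps)
print(round(bx, 6), round(ay, 6))   # recovers the scales 10.0 and -10.0
```

In practice many more than three GCPs are used and the fit is done by least squares, but the principle is the same: known point pairs pin down the transform between image and ground.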
Remember, the main apparent difference between optical remote sensing and microwave remote sensing is that in optical remote sensing you can visually see and differentiate between features, whereas we already know how a radar image looks: black and white with a lot of noise, where we cannot differentiate between the features. Visually, it is very difficult to understand what is present when we look at a radar image. This is where GCPs, or ground control points, find their importance, because they are nothing but points whose locations are known. GCPs allow image pairs to be co-registered, and they play a very important role in radar remote sensing because, as I mentioned earlier, differentiating features in a SAR image is challenging. It is not straightforward, and for this reason we have metallic corner reflectors deployed at convenient locations. We have already discussed corner reflectors, trihedral and dihedral corner reflectors, and we also saw their location in Hyderabad as part of the CalVal site of ISRO. These metallic corner reflectors, deployed at convenient locations, act as GCPs. Now, what do I mean by a convenient location? I am referring to areas which are accessible by ground as well as areas with very low backscatter, such that the corner reflector appears as a bright spot in the radar image. So, let me summarize what we tried to learn as part of this lecture. In the beginning we discussed the difference between beta naught, gamma and sigma naught, and then we tried to interpret the radar equation by rearranging its terms. Then we slowly started discussing the image distortions due to the side-looking geometry of a radar, and then we learnt that targets can also move.
They need not always be stationary, which introduces an additional Doppler component, and we learnt that geometric correction is possible provided you have information about the topographic surface. Remember, most of the points discussed as part of the lectures will be backed up by a session where we solve problems, where you will get practical exposure to why we are learning all these terminologies and equations. In all, I hope that you found this lecture enjoyable, and I will meet you in the next class. Thank you.