Welcome to today's class. Before we actually start the lecture, let us look at the topics we have been covering as part of module 2, as well as those that will be covered in the upcoming lectures. In front of you, you see a summary of the topics: we have covered what a synthetic aperture radar is, the basics of image formation, and the fundamental properties of SAR imagery. We have seen that a SAR image can consist of complex numbers, the different types of SAR imagery, the different acquisition modes, the meaning of resolution in azimuth and ground range, and a few terminologies like swath and nadir, then what sigma naught is, and how SAR compares with a real aperture radar. All these topics we have covered in the previous lectures, and we have also seen what is meant by image defects. Remember the sample image I showed you from the ISRO website, where you could see the layover effects clearly, isn't it? We have covered geometric as well as radiometric distortions, and also what speckle is, that is, the inherent salt-and-pepper noise you see in a microwave image. You know by now what multi-looking is, and also the different data formats. Today we will be dealing with lecture 9 of module 2. In this part of the lecture we shall learn about some processing steps which are generally carried out on synthetic aperture radar products using the intensity image. This part of the lecture will also help you understand the various preprocessing steps which should be followed before trying to classify an image. What classification is, and the different algorithms with which one can do classification, will be dealt with in the upcoming lectures. For now, let us just say that we will try to learn how to process a SAR intensity image, okay? So back to today's class.
To start with, I will give you a list of steps, not necessarily followed in this order, and the terminologies that deal with the processing of SAR intensity images. The order in which these are carried out will be dealt with in the hands-on tutorial session, so you will know which step comes first, which comes next, and so on, all right. To begin, we will deal with what is known as focusing. Synthetic aperture radar processing is generally a two-dimensional problem in nature, isn't it? Because you are using a two-dimensional image to represent a three-dimensional terrain surface. In the raw data, that is, the first-level data from SAR, the energy from a point target gets spread in both directions. What are the two directions in SAR? One is the azimuth direction, that is the flight direction, and the other is the range direction, perpendicular to the flight direction. SAR focusing is the step carried out to collect this energy into one single pixel in the output image. Let me reiterate: in the raw data, the energy from a point target is spread in both range and azimuth, and the aim of SAR focusing is to collect this spread energy from both directions into one single pixel of the output image. It is done by range compression and azimuth compression. We have just touched upon these topics; we will see them in detail as part of today's lecture. Shown towards the left-hand side is the field within the SAR raw data. It contains the power reflected from a ground cell, which I am going to name R1A1 (range R1, azimuth A1) instead of going with XY. So what you see here in red is the field within the SAR raw data which contains the power reflected from one single ground cell, taken as an example.
You can see that the data is spread across the range and azimuth directions, and towards the center you can see what happens to the image after it is subjected to something known as range compression. Shown here is the schematic of the image after range compression, but it still has information from a single azimuth position spread over many pulses: as written here, it is compressed in range, but with information from a single azimuth spread over many pulses. Then we come to the third schematic, towards the right-hand side, which shows the final representation after both range compression and azimuth compression. Now you see that the field has been collected, focused, into one single point: the power reflected from a target, which was initially spread across the range and azimuth directions, is concentrated into a single point after synthetic aperture radar focusing, as shown in the third image. As part of today's lecture, we will try to understand what this range compression and azimuth compression are. Let me try to explain it using a flow chart. For collecting the synthetic aperture radar data, a very long duration linear frequency modulation pulse is transmitted, and then range compression needs to be carried out on each line of the SAR data; it is carried out using the FFT, that is, the fast Fourier transform. Without getting into the mathematical details of the FFT, let us try to understand how this is done. By now we have the understanding that the SAR data are complex numbers, consisting of a real part and an imaginary part, and that they are in the time domain. Here I am going to use T to represent the time domain and F to represent the frequency domain.
So what you see here is the raw data in the time domain and the range reference function in the time domain. Now, what is the range reference function? A range reference function needs to be built to carry out range compression. Remember when we discussed chirped pulses, I mentioned the example of you standing at a traffic signal and straining to hear your own favorite song amidst the noise of the junction; because the song follows a pattern, you are able to follow and identify it. The original chirp that is transmitted by the antenna is represented by the range reference function, so it is going to be a series of complex numbers. Let me reiterate: the original chirp transmitted by the antenna is represented by the range reference function. The first step is that both the raw data in the time domain and the range reference function in the time domain are converted into the frequency domain using the fast Fourier transform (FFT); compression is then carried out in the frequency domain, after which the data is converted from the frequency domain back to the time domain using the inverse FFT, the inverse fast Fourier transform. This is laid out as a flow chart for you to easily understand what is meant by range compression. Again, I am not going into the mathematics of the FFT for now. Moving on, we also discussed azimuth compression. Similar to range compression, the range-compressed data is taken through the FFT, complex-multiplied with the azimuth reference function in the frequency domain to obtain the azimuth-compressed data in the frequency domain, and then converted back to the time domain using the inverse FFT. This is what happens in azimuth compression.
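The range-compression flow just described, FFT both signals, complex-multiply, inverse FFT, can be sketched in a few lines of Python. Take this as an illustrative matched-filter toy, not actual SAR processor code; the chirp parameters and the target position at sample 600 are invented purely for the demonstration.

```python
import numpy as np

def range_compress(raw_line, chirp):
    """Matched-filter one range line in the frequency domain:
    FFT the raw line and the range reference function, multiply by
    the conjugate of the reference, then inverse FFT back to time."""
    n = len(raw_line)
    raw_f = np.fft.fft(raw_line)
    ref_f = np.fft.fft(chirp, n)          # reference zero-padded to length n
    return np.fft.ifft(raw_f * np.conj(ref_f))

# Toy raw line: one point-target echo (a linear FM chirp) starting at sample 600
t = np.linspace(0.0, 1e-5, 512)
chirp = np.exp(1j * np.pi * 4e10 * t**2)  # linear frequency modulation pulse
raw = np.zeros(2048, dtype=complex)
raw[600:600 + 512] = chirp                # energy spread over 512 samples

focused = np.abs(range_compress(raw, chirp))
print(int(np.argmax(focused)))            # spread energy collapses to a peak at 600
```

The same multiply-in-frequency-domain idea is then repeated along the other axis with the azimuth reference function to complete the focusing.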
So always remember: the raw data that you get from a synthetic aperture radar is not going to make much sense on its own, because the data needs to be compressed in both directions, in both range and azimuth, so that the information from a point target gets represented adequately. That was a summary of the processes carried out to get the intensity images. Moving on, if you remember, we also discussed multi-looking briefly, isn't it? Here in front of you I am showing two images. For now let me not specify the satellite or the polarization, because I just want you to focus on the images on the screen. Towards the left side you see the image as it looks right after you download it. It shows the Mumbai region of Maharashtra, and it looks elongated, isn't it? What you see towards your right side is the image after it has been subjected to multi-looking. This image looks more like a rectangular or square shape, and now at least some features are visible, isn't it? Multi-looking is the process of averaging over range and/or azimuth resolution cells. In multi-looking, instead of using the full Doppler bandwidth of the azimuth beam to synthesize the full aperture, we use a number of smaller sub-apertures, synthesized from small sections of the bandwidth. Each of these smaller sub-apertures results in an image, and these sub-apertures are known as looks. We now have the understanding that incoherent averaging is carried out in multi-looking. Once again, what you see towards your left is an image before multi-looking, and towards your right is the image after it has been subjected to multi-looking. So what are we discussing now?
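The averaging idea can be sketched in Python. Note one simplification: the sketch averages intensity pixels spatially, which is the common practical shortcut, rather than forming the looks from Doppler sub-apertures as described above. The 5×1 look factors are just example values.

```python
import numpy as np

def multilook(intensity, looks_az=5, looks_rg=1):
    """Incoherent averaging: mean over blocks of looks_az x looks_rg
    intensity pixels (any leftover edge rows/columns are dropped)."""
    rows = (intensity.shape[0] // looks_az) * looks_az
    cols = (intensity.shape[1] // looks_rg) * looks_rg
    blocks = intensity[:rows, :cols].reshape(
        rows // looks_az, looks_az, cols // looks_rg, looks_rg)
    return blocks.mean(axis=(1, 3))

# A speckle-like noisy scene: multi-looking shrinks the image and tames the noise
rng = np.random.default_rng(0)
img = rng.exponential(scale=1.0, size=(500, 100))  # exponential ~ 1-look intensity
ml = multilook(img, looks_az=5, looks_rg=1)
print(img.shape, "->", ml.shape)                   # (500, 100) -> (100, 100)
```

Averaging five looks in azimuth is also what turns the elongated downloaded image into the roughly square-pixel image on the right of the slide.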
We are trying to understand the different processes that can be carried out on synthetic aperture radar intensity images. We covered focusing, where we tried to understand range compression and azimuth compression, and just now we tried to understand multi-looking. Remember, processes like multi-looking are also covered as part of the tutorials, where you will be developing Python-based codes to understand what multi-looking is. All right. Now I think it is the right time to cover a small numerical so that you get your concepts clear. In front of you, you see a small question: assume the following for a satellite with fixed acquisition geometry. Three pieces of information are given: the pixel spacing in azimuth, 4 meters; the pixel spacing in range, 8 meters; and the incidence angle, 23 degrees. Now, what can we possibly calculate using these three details? I really want you to understand the optimum, or appropriate, number of looks through this numerical. Once again, three details are given: the pixel spacing in azimuth (remember, the azimuth direction is the flight direction) is 4 meters, the pixel spacing in range (the range direction is perpendicular to the flight direction) is 8 meters, and the incidence angle is 23 degrees. Let us try to understand what information can be calculated from these. For example, I can estimate the ground range resolution. Let me write it down: ground range resolution. If you remember the portions covered previously, you know that it is the pixel spacing in range divided by the sine of the incidence angle, so I am going to write 8 / sin 23°. Remember the lecture where we differentiated between slant range resolution and ground range resolution. You will get a value close to 20.5 meters.
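This little numerical can be checked with a quick Python sketch. The number-of-looks step, choosing the azimuth look count so that the azimuth pixel roughly matches the ground range resolution, follows the same reasoning as in the lecture.

```python
import math

az_spacing = 4.0   # pixel spacing in azimuth (m)
rg_spacing = 8.0   # pixel spacing in range (m)
theta_deg = 23.0   # incidence angle (degrees)

# Ground range resolution = range pixel spacing / sin(incidence angle)
ground_range_res = rg_spacing / math.sin(math.radians(theta_deg))
print(round(ground_range_res, 2))          # ~20.47 m

# Number of azimuth looks that makes pixels roughly square on the ground
looks = round(ground_range_res / az_spacing)
print(looks)                               # 5 looks -> 4 m x 5 = 20 m pixels
```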
So, given these three details, we are trying to see what additional information we can gauge, and the first one was the ground range resolution. Now we know it. Second, let us look at the pixel spacing in azimuth. It is already given as 4 meters, but watch what happens if I take 5 of those pixels together: 4 × 5 = 20 meters, very close to the roughly 20.5 meters of ground range resolution. My ultimate aim is to find the optimum, appropriate number of looks through this numerical, and with the given information, averaging 5 pixels in azimuth gives roughly square ground cells, which means the recommended pixel size in a geocoded image is going to be 20 meters, isn't it? The recommended pixel size of a geocoded image is going to be 20 meters with the given details. I hope that you are now able to understand the concept of multi-looking with L looks better. Moving forward, let us try to understand what co-registration is. Remember, the processes explained in this lecture are not in any particular order; these are just terminologies that will help you follow the tutorials and the upcoming lectures. We covered focusing, we covered multi-looking, and now let us try to understand co-registration. Imagine that there are two or more synthetic aperture radar images, say with the same orbit and acquisition mode, and say they are going to be superimposed. That is when co-registration comes into the picture. One thing to remember is that co-registration is not the same as geocoding, so I do not want you to mix them up. Let me write it down: co-registration ≠ geocoding. I am going to repeat it. In georeferencing, or geocoding, we are converting each pixel from the slant range to a cartographic reference system.
And co-registration finds importance whenever we have, say, two or more synthetic aperture radar images which have been acquired with the same orbit and acquisition mode and you want to superimpose them. Moving forward, coming to speckle filtering. What you see in front of you towards your left side is ALOS PALSAR data before speckle filtering, and what you see towards your right side is the same data after it has been subjected to speckle filtering using a spatial convolution technique. We have already covered this as part of the lectures, and in the tutorials by now you may have developed Python-based codes to carry out speckle filtering in the spatial domain. Remember that speckle is a characteristic of all coherent sensors: every coherent sensor is going to have speckle as an inherent characteristic, and since synthetic aperture radar is a coherent sensor, it is going to have speckle too. So what is speckle in layman's terms? It is multiplicative noise owing to the superposition of multiple backscatter sources within one single SAR resolution element. There is scattering happening in all directions from the targets, and when the superposition of multiple backscatter contributions happens within one single synthetic aperture radar resolution element, it leads to speckle. We also know that it is called salt-and-pepper noise. Multi-looking and speckle filtering are means of reducing speckle: to reduce speckle in synthetic aperture radar imagery, you can either go with spatial convolution using speckle filters, or even multi-looking helps, isn't it?
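Speckle filtering by spatial convolution can be as simple as a boxcar (moving-average) kernel, which is what the simplest tutorial implementations do. This is a minimal sketch of that idea; the 3×3 window size and the reflective border handling are just illustrative choices.

```python
import numpy as np

def boxcar_filter(img, size=3):
    """Speckle reduction by spatial convolution with a uniform
    size x size kernel; image borders are handled by reflection."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(size):          # accumulate the shifted copies ...
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)      # ... and normalise to the window mean

speckled = np.array([[1.0, 9.0, 1.0],
                     [9.0, 1.0, 9.0],
                     [1.0, 9.0, 1.0]])
print(boxcar_filter(speckled)[1, 1])   # centre pixel becomes the 3x3 mean, 41/9
```

The price of this simplicity is that edges get blurred along with the speckle, which is exactly why the adaptive filters discussed next exist.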
Okay, so moving on, I thought it is the right time to mention a few adaptive filters which are based on the multiplicative model, because speckle, as such, is multiplicative noise, and adaptive filters are used to remove it precisely because they themselves are based on a multiplicative model. A few names are listed here: the Frost filter, the Lee and Kuan filters, MAP (maximum a posteriori) filters, multi-temporal filters, and so on. Right now I am not going into the weights of these kernels, but just so you get familiar with the terms: in multi-channel synthetic aperture radar images, the Gaussian-distributed scene model is very popular for handling speckle. Multi-temporal filters are generally used where the assumption is that a single resolution cell on the ground is illuminated by the radar beam in the same manner in all images of a time series. Let me repeat: multi-temporal filters can be used where the assumption is that one single resolution cell on the ground is illuminated by the radar beam in the same manner in all images of the time series, and that a resolution cell on the ground corresponds to the same coordinates in the image plane in all images of the time series. For multi-temporal filters, therefore, spatial co-registration of the SAR imagery in the time series is highly important. Moving forward, let us come to geocoding. Remember, geocoding is known by multiple names: if you pick up a standard textbook on digital image processing, you will see geocoding referred to as georeferencing, geometric calibration or orthorectification. The aim is to convert synthetic aperture radar images which are either in the slant range or the ground range geometry into a map coordinate system. Let me reiterate: the aim of geocoding is to convert synthetic aperture radar images, which are originally in either the slant range or ground range geometry, into a map coordinate system.
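To make the "adaptive" idea concrete, here is a minimal sketch of a Lee-style filter under the multiplicative noise model: it smooths strongly in homogeneous areas and backs off near edges and point targets. The noise coefficient of variation cu (roughly 1/√L for L looks) and the 3×3 window are assumed values for illustration, not the filter weights from any particular textbook table.

```python
import numpy as np

def lee_filter(img, size=3, cu=0.25):
    """Lee-style adaptive filter: output = mean + w * (pixel - mean),
    where the weight w grows with local contrast, so edges are kept
    while flat areas are smoothed. cu is the assumed noise coefficient
    of variation of the multiplicative speckle."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    local_mean = win.mean(axis=(2, 3))
    local_var = win.var(axis=(2, 3))
    # Squared local coefficient of variation of the image
    ci2 = local_var / np.maximum(local_mean, 1e-12) ** 2
    # Adaptive weight: 0 where the scene is as flat as pure speckle, up to 1
    w = np.clip(1.0 - cu**2 / np.maximum(ci2, 1e-12), 0.0, 1.0)
    return local_mean + w * (img - local_mean)

flat = np.full((5, 5), 7.0)
print(np.allclose(lee_filter(flat), 7.0))   # homogeneous area -> fully smoothed
```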
Now, we could use terrain geocoding, which is conducted using digital elevation models (DEMs); nominal geocoding, which is conducted without ground control points (GCPs); and precise geocoding, which is conducted using GCPs. Again, the reason I am listing these terms is that you will get familiar with them once we reach the tutorial section. If you try to work with any open-source image processing package, you will find a lot of modules that perform the processes we have discussed in this lecture. A few more topics need to be discussed, so I will quickly brush through them. The first is radiometric calibration. I know that you are already familiar with this, but just to reiterate: radiometric calibration is conducted to allow inter-comparison between radar images acquired with different sensors. By now we know what beta naught, sigma naught and gamma naught are, and radiometric calibration involves correcting for the scattering area, the antenna gain pattern and the range spreading loss. Again, I hope that by now you understand what antenna gain is; remember the video I showed you where a human ear was compared with the antenna of a radar. Moving on, next is terrain correction. You know that a synthetic aperture radar has a side-looking geometry: it does not send coherent pulses straight down in the nadir direction, but sideways, either to the left or to the right. This side-looking geometry causes geometric distortions such as layover, foreshortening and radar shadow, and in order to reduce these geometric distortions we need to perform orthorectification. Terrain correction as a process will be covered as part of the tutorial, but again I want you to be familiar with this terminology: terrain correction.
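As a flavour of what radiometric calibration looks like in practice, here is a tiny sketch of converting digital numbers to sigma naught in decibels. The formula shape 10·log10(DN²) − K is generic; the calibration constant K = 60 dB used below is purely a placeholder assumption, and the real value, plus any incidence-angle or LUT correction, must come from the product metadata of the specific sensor.

```python
import numpy as np

def sigma0_db(dn, k_db=60.0):
    """Convert digital numbers to sigma naught in dB using the generic
    form 10*log10(DN^2) - K. k_db is a placeholder calibration constant,
    NOT a real value for any particular sensor."""
    dn = np.asarray(dn, dtype=float)
    # Guard against log of zero for empty pixels
    return 10.0 * np.log10(np.maximum(dn, 1e-12) ** 2) - k_db

print(sigma0_db([1000.0, 100.0]))   # [0. -20.] dB under this placeholder K
```

Once every scene is expressed in calibrated sigma naught rather than raw DNs, images from different sensors or dates can be compared on a common scale.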
One more term I am going to leave you with in today's class, and that is mosaicking. Typically, when we have terrain-corrected, geocoded and radiometrically calibrated data acquired from different satellite tracks, there needs to be some mechanism by which they are mosaicked to create a seamless, high-resolution dataset. Mosaicking is something like stitching the different satellite images together while making sure that there is some overlap between the scenes. Just have a look at the terms once more: terrain-corrected, geocoded, radiometrically calibrated data acquired from different satellite tracks are usually mosaicked to create a seamless, high-resolution dataset. Okay, so to summarize: in today's lecture we learnt about a few processing steps for SAR, that is synthetic aperture radar, intensity images. I hope that you could understand the same, and I will meet you in the next class. Thank you.