Hello and welcome to today's lecture. We are in module 2 and this is the 8th lecture. So let us see where we stopped in the last class. In the last class we discussed what speckle is in a radar image, and then I showed you two images like this; this is where we stopped, isn't it? And then I asked you: are both the same visually, does the image on the left side look the same as the image on the right side? Now remember that radar signals tend to hit the targets at many angles, and this depends upon the incidence angle of the transmitted signal, the local incidence angle, the number of looks used in creating an image, and so on. So the returning signal from the target is subjected to interference as a result of interaction with rough terrain surfaces, and this interference pattern involves adding signals either in phase or out of phase. That is when we see a noise-like phenomenon known as speckle, which gives a grainy appearance to radar images, as you see in the image towards your left side. So now that we know what exactly speckle is, we also need to understand how to suppress it in a SAR image; note that speckle can never be removed completely, only reduced. In that respect, today's lecture will focus on speckle reduction in SAR images. Remember that a SAR resolution cell shall contain a large number of targets or scatterers whose return echoes are coherently summed to obtain the phase and brightness of the resolution cell. Let me repeat. If you remember, in the last class we discussed how to use a vector representation to understand what speckle is and how the information adds up, isn't it? So once again, a SAR, that is synthetic aperture radar, resolution cell is going to contain a large number of targets or scatterers whose return echoes are coherently summed together to obtain the phase information and the brightness of the resolution cell, which means that when the echoes happen to add in phase, the resolution cell shows a brightness value which is much, much larger than the actual brightness of the object.
So this appears as speckle in a SAR image. As we discussed, scatterers at different parts of the resolution cell contribute in different ways to the return echo. Now, before we understand how to reduce speckle, we need to know some of the fundamental distributions that are relevant in synthetic aperture radar image processing. So what I will do is list these fundamental distributions. Starting with the images of the real part and the imaginary part, that is A cos φ and A sin φ: they tend to have a Gaussian distribution, as you see here. I hope you understand that A stands for the amplitude and φ stands for the phase. Next, the amplitude A tends to follow a Rayleigh distribution. At any point of time, feel free to pause the screen and have a look at the parameters that comprise the Rayleigh distribution. Similarly, the intensity I, which is nothing but the square of the amplitude, I = A², tends to follow a negative exponential distribution. Now, with this background, let us try to understand how to handle speckle in SAR imagery. Remember, each cell of a SAR image shall contain a large number of random scatterers, and each scatterer is going to convey some meaningful information about the target; recall our discussion on scattering from water, vegetation and so on. We already have an understanding that a SAR image consists of complex numbers with a real part and an imaginary part, and we can assume that both these real and imaginary parts are Gaussian distributed. But then think about it: is it actually the amplitude that we are interested in?
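As an illustration, which is not part of the lecture materials, these three distributions can be checked numerically by simulating fully developed speckle: each resolution cell is modelled, under assumed toy parameters, as the coherent sum of many unit-amplitude scatterers with uniformly random phase.

```python
import numpy as np

# Toy simulation (assumed parameters, not real SAR data): each resolution
# cell is the coherent sum of unit scatterers with uniform random phase.
rng = np.random.default_rng(0)
n_cells, n_scatterers = 100_000, 50

phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_cells, n_scatterers))
echo = np.exp(1j * phases).sum(axis=1)   # complex return of each cell

real, imag = echo.real, echo.imag        # A cos(phi), A sin(phi): ~ Gaussian
amplitude = np.abs(echo)                 # ~ Rayleigh
intensity = amplitude ** 2               # I = A^2: ~ negative exponential

# For a negative exponential distribution, std/mean is exactly 1.
print(round(float(real.mean()), 1))                        # near 0
print(round(float(intensity.std() / intensity.mean()), 2)) # near 1
```

The checks match the slide: zero-mean Gaussian real and imaginary parts, and an intensity whose standard deviation equals its mean, the signature of the negative exponential.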
Because when we were trying to derive the radar equation, we had an understanding that the returned power was of more interest; in other words, our interest lies in the power or intensity, as it is that quantity which is directly related to the radar cross section (RCS) σ, and intensity is the square of the amplitude, I = A². So actually the quantity which we are trying to estimate is the mean intensity, as it carries information about the underlying average radar cross section. Which means that to understand the underlying RCS, we need to make a number of sample measurements and then average them to estimate that mean. But then, can we just spatially average the pixels, will that help? Say we average the pixels by taking the neighbouring values, okay, but does that averaging make sense? It makes sense only if it is performed on values from the same target, or does it? So without confusing you, let me start with the concept of multi-looking, okay. What you see on the screen towards the left side is an SLC image, a single look complex image, and what you see on the right side is something known as a multi-looked image. Both represent the same geographical region. Visually, when you compare, you will find that the multi-looked image feels better, okay, than the SLC image. So let us try to understand the concept of multi-looking and why it is important when we are talking about speckle. Now, as I mentioned earlier, we cannot simply spatially average the pixels: averaging neighbouring values makes sense only if it is performed on values from the same target, but in the case of radar return echoes there are different targets contributing to the return echoes. So unless we know that the echoes are from the same target, averaging them does not make much sense.
So another approach to handling speckle in radar imagery is this: right during the measurement process, while the radar system is receiving different return echoes from different targets, we can split the azimuth beam into, say, L sub-beams, as you see on your screen. So what are we doing? Right during the measurement process, the azimuth beam is split up into L sub-beams, which means we can use a number of sub-apertures, each using a small portion of the Doppler bandwidth, to create an image. Earlier we discussed what the Doppler bandwidth is, and we know that by aperture we mean the antenna. So in multi-looking we use a number of sub-apertures, each based on a small portion of the Doppler bandwidth, to create an image. These sub-apertures are called looks, and the resulting image is known as an L-look image, which means we can have a 2-look image where L = 2, a 5-look image, a 6-look image; shown here is the schematic for a 4-look process. Now of course there will be some degradation in the spatial resolution, but this is compensated by the improvement in speckle reduction. So just to repeat what I wanted to convey: spatial averaging as such takes samples of different cells on the ground, whereas looks refer to multiple measurements of the same cell; it is the same cell. Please remember again that a synthetic aperture radar image consists of complex numbers, and remember the vector notation that we just discussed. A simple arithmetic average would be adding up the vectors and then dividing by the total number; this is known as coherent averaging. For multi-looking we are not performing coherent averaging, we are performing something known as incoherent averaging: multi-looking is incoherent averaging. And this is conducted by ignoring the phase values and taking the sum of the amplitudes only. Let me reiterate.
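To make the coherent-versus-incoherent distinction concrete, here is a minimal numerical sketch, with simulated looks rather than real SAR data, under the assumption that the L looks of a cell carry independent complex-Gaussian speckle: coherent averaging leaves the intensity fully speckled, while incoherent averaging reduces the speckle contrast by roughly 1/√L.

```python
import numpy as np

# Simulated looks (assumption: independent complex-Gaussian speckle per look).
rng = np.random.default_rng(0)
n_cells, L = 100_000, 4
looks = (rng.standard_normal((n_cells, L))
         + 1j * rng.standard_normal((n_cells, L)))

# Coherent average: add the complex vectors (phase included), then square.
coherent_int = np.abs(looks.mean(axis=1)) ** 2

# Incoherent (multi-look) average: drop the phase, average the intensities.
incoherent_int = (np.abs(looks) ** 2).mean(axis=1)

# Speckle contrast = std/mean of intensity: ~1 when fully speckled,
# ~1/sqrt(L) after L-look incoherent averaging (here L = 4 -> ~0.5).
print(round(float(coherent_int.std() / coherent_int.mean()), 1))
print(round(float(incoherent_int.std() / incoherent_int.mean()), 1))
```

This is also why the resolution-versus-speckle trade-off appears: the 4-look image is smoother by a factor of about 2 in contrast, at the cost of azimuth resolution.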
So in multi-looking we perform incoherent averaging, wherein we completely ignore the phase values and take the average of the amplitudes only. Please remember that a coherent average would again give us a speckled image; our aim is to do incoherent averaging. But now the next question: I told you L stands for the number of looks, but what is the optimum number of looks, is there a magic number? Please be aware that there are relationships that help answer this question as to what the optimum value of L is, okay, alright. What you see here is a schematic which you will also be following as part of your tutorials, wherein the multi-looking process will be performed using Python, alright. Now another way to handle speckle is by spatial convolution, that is, by applying a despeckling filter to an image which has already been subjected to multi-looking, okay. You have a synthetic aperture radar image, it has already been multi-looked, and after that you can apply a despeckling filter; for that we need to understand the concept of spatial convolution. Remember that the idea is to average across areas which are relatively homogeneous. First let us discuss spatial convolution using kernels, and then we shall come to speckle in particular, alright. The concept of spatial convolution uses something known as a spatial filter: a filter for which the output value at any location, say (i, j) or (x, y), is a function of some weighted average, that is, a linear combination, of brightness values located in a particular spatial pattern around the (i, j) or (x, y) location in the input image. Now, 2D convolution filtering is carried out using something known as a convolution mask or kernel, which carries weights, and the mask can have a size of 3 × 3, 5 × 5, or 7 × 7.
So we need to understand how a convolution mask having weights can be applied in the spatial domain over a synthetic aperture radar image. Let us try to understand that next. Assume this is a part of an image; of course the cells are not blank, they are going to have some values, I have just kept them blank for this lecture so that you focus on the process and do not get lost in the calculations. So assume what you see in front of you is a small part of an image having m rows and n columns, and say you are placing a filter of size 3 × 3, that is 3 rows and 3 columns; the filter also has weights, and again I am not showing the weights because I want you to focus on the process. Assume the 3 × 3 filter with its weights is placed at the top-left corner of the image. Then a weighted-average approach is used to calculate the value of the central cell, and once the output value from this filter has been calculated, the window is moved one column, or one pixel, to the right and the operation is repeated, which means the next central value gets altered. What happens next? The window is moved right again, and successive output values are computed, every time at the central location, until the right-hand edge of the filter window hits the right margin of the image, which means the filter is now here. At this point the filter window is moved down by one row, or one scan line, pushed back to the left-hand margin of the image, and the procedure is repeated again to alter the central values. This procedure continues until the filter window reaches the bottom right-hand corner of the input image. So what do we do?
First of all, I gave you a blank matrix and told you to imagine it is part of an image; then a 3 × 3 filter was placed at the top-left corner of the image. I use a weighted-average approach to calculate and alter the value of the central cell, and once the output from this filter has been calculated, the window is moved one column, or one pixel, to the right, and the operation is repeated until the right-hand edge of the filter window hits the right margin of the image. Then what happens? At this point the filter window is moved down by one row, or one scan line, sent back to the left-hand margin of the image, and this procedure is repeated again and again; till when? Till the filter window reaches the bottom right-hand corner of the input image. This process is known as spatial convolution; this is how you apply a convolution kernel and alter the values of the image. The weights of the filter can vary depending upon your application. Now you may be wondering that we are only changing the central values, which means the values present at the border are left untouched, isn't it? So does it mean that the output image will have fewer rows and fewer columns than the input image? Absolutely, because it has an unfiltered margin, corresponding to the top and bottom rows and the left and right columns of the input matrix, that the filter window cannot reach; remember we are talking about a 3 × 3 filter. So generally these missing rows and columns are filled with zeros in order to keep the input and output images the same size, otherwise there will be a mismatch of size. In other words, for spatial convolution the output image is kept the same size as the input image by filling the border locations with zeros.
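The sliding-window procedure just described can be sketched from scratch in Python. This is a toy illustration with a made-up 5 × 5 "image" and an equal-weight kernel, not the tutorial code itself; note how the border cells of the output stay zero-filled, exactly as discussed.

```python
import numpy as np

def convolve_3x3(image, kernel):
    """Slide a 3x3 weighted kernel over the image, altering central values.

    Border rows and columns that the window cannot reach stay zero, so
    the output keeps the same size as the input.
    """
    m, n = image.shape
    out = np.zeros((m, n), dtype=float)          # zero-filled margins
    for i in range(1, m - 1):                    # move down one row at a time
        for j in range(1, n - 1):                # move right one pixel at a time
            window = image[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.sum(window * kernel)  # weighted average -> centre
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # toy "image"
mean_kernel = np.full((3, 3), 1.0 / 9.0)           # equal weights
filtered = convolve_3x3(image, mean_kernel)
print(round(float(filtered[2, 2]), 6))   # 12.0: a linear ramp is unchanged
print(filtered[0, 0])                    # 0.0: zero-filled border cell
```

On this linear-ramp input the centre values are unchanged, since the 3 × 3 neighbourhood average of a ramp equals its centre, which is a handy sanity check when you build your own filters in the tutorial.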
So these missing rows and columns are filled with zeros in order to keep the input and output images the same size. Now, if an image is filtered with a larger filter, say 9 × 9, it will be smoothed to a greater extent than an image which has been filtered with a 3 × 3 filter. In the tutorials you will get to create a spatial filter, and you will learn how to download a synthetic aperture radar image and how to perform spatial convolution using 3 types of filters; we will discuss those shortly. This will be your hands-on exercise, which you will be performing in Python. So let us cover those specific filters with which you will be working during the tutorial session. They belong to a category known as low-frequency or low-pass filters; they are mainly used to de-emphasize or block the high-frequency details, okay. It will be more clear once you see the output after filtering. Now, at this point it is worthwhile to note that the kernels you have just seen always have sizes that are odd positive integers, isn't it? 3 × 3, 5 × 5, 7 × 7, but never 2 × 2 or 4 × 4. Why? Because you need a definite central point to change, and in a 3 × 3 filter you do have that definite central point. So let us discuss one or two examples of spatial convolution filtering using specific filters, which will also be covered in the tutorials. What you see in front of you is the same example that I showed earlier; the only difference is that now the convolution kernel or mask is not empty but is filled with weights from W1 to W9, and in the process of spatial convolution you take a weighted average, which means you use the values of the image lying beneath the filter together with the weights of the filter to alter the value at the central location of the kernel.
Now, one kind of filter that I would like to introduce you to is known as the mean filter, also called the moving average filter. What is specifically done when we apply spatial convolution using a mean filter is that each weight is multiplied by the corresponding digital number (DN) of the image; so you multiply the values of the image with the weights of the convolution kernel and then divide by the total number, which means the mean filter performs simple averaging. Now, one by one, we need to take a look at the radar images before and after they have been subjected to speckle filtering. What you see in front of you towards the left side is an unfiltered radar image, and what you see towards your right side is the image after it has been subjected to a mean filter of size 3 × 3. The mean filter does reduce the overall variability of the image; at the same time, pixels that have larger or smaller values than their neighbourhood average get respectively reduced or increased in value, so local detail is lost. Mean filters belong to a general class of filters described as box filters. So we are looking at these sample images obtained after applying different kinds of filters. In that context, what you see in front of you is, again towards your left side, the unfiltered radar image, and towards your right side the image after it has been subjected to a median filter. As the name suggests, the median filter picks out the median value from the window. Now you can compare the relative advantages of the mean and the median as statistics, which carry over directly to the mean and median filters: the median is far less sensitive to extreme values, so a median filter suppresses isolated bright speckle spikes while preserving local detail better.
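A small sketch, again on a hypothetical toy image rather than the lecture's dataset, makes the mean-versus-median behaviour visible: the mean filter smears an isolated bright speckle spike into its neighbourhood, while the median filter rejects it outright.

```python
import numpy as np

def filter_3x3(image, stat):
    """Apply a 3x3 statistic (e.g. np.mean or np.median) at interior cells."""
    out = image.astype(float).copy()         # borders keep original values
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = stat(image[i - 1:i + 2, j - 1:j + 2])
    return out

image = np.full((5, 5), 10.0)                # flat toy background
image[2, 2] = 100.0                          # one speckle-like spike

mean_out = filter_3x3(image, np.mean)        # spike smeared: (8*10 + 100)/9
median_out = filter_3x3(image, np.median)    # spike rejected entirely

print(mean_out[2, 2], median_out[2, 2])      # 20.0 10.0
```

The spike's value drops from 100 to 20 under the mean filter, but its influence has leaked into every neighbouring cell; the median filter restores the background value of 10 exactly.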
Remember, this will be a hands-on exercise that you will perform as part of the tutorials, where you will learn how to download the synthetic aperture radar imagery as well as how to create the filters and apply them to the imagery. So let us move forward. Now, one of the most important continuous probability distributions in the entire field of statistics is the normal, or Gaussian, distribution, which has a bell-shaped curve. What you see in front of you is the density function of a normal random variable X with mean μ and variance σ². So we can also have a filter named the Gaussian filter. More details about what exactly the Gaussian filter does and how you create one are addressed as part of the tutorials; for now I am going to show you sample results obtained after applying a Gaussian filter. What you see here is a comparative evaluation of the unfiltered image, the image after a median filter (the size of the convolution kernel is 3 × 3), the output after a mean filter, and finally, towards your right side, the output after a Gaussian filter with σ = 3. Visually, what can you say, which filter works better? You will find out once you perform the hands-on exercise as part of the tutorials. So, to summarize: in this part of the lecture we understood the concept of multi-looking, what it is and how it is performed, and we also covered a few despeckling filters, which means you understood the concept of spatial convolution and how convolution kernels or masks are used to alter the values of a radar image; then I showed you sample outputs of what you get when you apply a mean filter, a median filter and a Gaussian filter. Remember, despeckling is in itself a vast topic, and there are many, many more filters with different weights, but for this particular lecture I am just introducing you to three common filters which you find in all textbooks.
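Before closing, here is a minimal sketch of the Gaussian filter mentioned above. It assumes a 3 × 3 kernel with σ = 1 and toy data, so the tutorial version may differ; the point is that the kernel weights follow the bell-shaped density and are normalised to sum to 1.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalised 2D Gaussian kernel from the bell-shaped density."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()             # weights sum to 1

def apply_3x3(image, kernel):
    out = image.astype(float).copy()         # borders left untouched
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = np.sum(image[i - 1:i + 2, j - 1:j + 2] * kernel)
    return out

image = np.full((5, 5), 10.0)
image[2, 2] = 100.0                          # toy speckle spike
smoothed = apply_3x3(image, gaussian_kernel(3, 1.0))

# The spike is reduced but not rejected: the Gaussian's centre weight
# dominates, unlike the median filter which discards outliers entirely.
print(10.0 < smoothed[2, 2] < 100.0)         # True
```

Compared with the equal-weight mean filter, the Gaussian gives the centre pixel more say, so it smooths less aggressively for the same window size.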
If you are interested in knowing more about these filters, further details will be provided in the tutorials. For now, I hope you enjoyed this part of the lecture, and I will see you in the next class. Thank you.