Hello and welcome to this lecture. Before we proceed to today's topic, let us quickly go through what we have been covering as part of module 2. Remember, we are still trying to answer this question: imaging radars, how do I interpret them? The remaining topics in module 2 are ratio images and indices, contrast, and polarimetry. In the last class we understood what texture is and how we can compute the gray level co-occurrence matrix using a simple numerical example. We are presently in the 11th lecture of module 2, and in this lecture we shall continue to understand texture in synthetic aperture radar imagery. In the last lecture we learnt that the gray level co-occurrence matrix, abbreviated as GLCM, can be computed from synthetic aperture radar imagery, and through a small numerical example on a 4 by 4 sub-part of an image we understood that the GLCM is a measure of the probability of occurrence of two gray level values separated by a given distance in a given direction. If you recollect the earlier lecture, we used a distance of d = 1 and a direction of theta = 0 degrees, the horizontal direction, and we counted pixel pairs both from left to right and from right to left. Strictly speaking, what we computed so far were co-occurrence counts, so as part of today's lecture let us find out how to turn them into actual probabilities, and we will also try to answer the question: what can be extracted from a GLCM? Just to reiterate, 4 directions or adjacencies can be used to compute the GLCM: horizontal at 0 degrees, vertical at 90 degrees, the diagonal from bottom left to top right at 45 degrees, and the diagonal from top left to bottom right at 135 degrees. So the next question is: what can be extracted from a GLCM?
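To make the counting step concrete, here is a minimal Python sketch of building a (non-symmetric) GLCM at d = 1 in the 0 degree direction. This is not from the lecture; the function name and the sample 4 by 4 sub-image are illustrative, chosen only to show the mechanics of counting gray-level pairs:

```python
import numpy as np

def glcm(image, levels=4, offset=(0, 1)):
    """Count co-occurrences of gray-level pairs separated by a (row, col)
    offset. For theta = 0 degrees and d = 1 the offset is (0, 1),
    i.e. each pixel is paired with its right-hand neighbour."""
    dr, dc = offset
    g = np.zeros((levels, levels), dtype=int)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                g[image[r, c], image[r2, c2]] += 1
    return g

# Illustrative 4x4 sub-image with gray levels 0..3
# (not the exact example from the previous lecture).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
g0 = glcm(img, levels=4, offset=(0, 1))  # 0 degrees, left to right
```

Counting right to left as well (as done in the last lecture) is the same as adding this matrix to its transpose, which is exactly the symmetrization step discussed next.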
Shown here towards your left side is a 4 cross 4 GLCM matrix. We already know how to compute a GLCM, so a sample GLCM is given here. Remember, it is not symmetric. So how do we create a symmetric GLCM? We take the transpose of the GLCM. What do we mean by transpose? The rows are written as columns: the first row 4, 2, 0, 0 becomes the first column; 3, 4, 1, 0 becomes the second column; 0, 1, 4, 1 becomes the third column; and 0, 0, 1, 2 becomes the fourth column. We have created the transpose of the GLCM. Now, to make the GLCM symmetric, we add the GLCM to its transpose: symmetric GLCM = GLCM + transpose of GLCM. I have simply added the corresponding values. That still does not answer the question of how a GLCM can represent probabilities. For that, we can normalize the GLCM: we divide each element of the symmetric GLCM by the sum of all its elements. The co-occurrence values in the normalized GLCM can then be thought of as probabilities, isn't it? Now let us try to understand the different textural features that we can extract from the GLCM, because till now we were focusing our discussion on how to compute the GLCM, but now we want to understand how textural features can be extracted from it. Shown in front of you, towards the left side, is the input image, a sample of SAR data, and towards your right side is the image showing the mean. Similarly we can have an image showing the variance. If you go through the paper by Haralick et al., 1973, you will find the different textural features explained there; a few of them we will discuss as part of this lecture. So instead of just showing you before and after images, let us try to understand how these textural features are computed.
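The symmetrize-and-normalize steps above can be sketched in a few lines of Python, using the sample GLCM from the slide (variable names are illustrative):

```python
import numpy as np

# The sample GLCM from the slide (d = 1, theta = 0 degrees).
g = np.array([[4, 2, 0, 0],
              [3, 4, 1, 0],
              [0, 1, 4, 1],
              [0, 0, 1, 2]], dtype=float)

sym = g + g.T        # symmetric GLCM = GLCM + its transpose
p = sym / sym.sum()  # normalize so the entries behave as probabilities

print(np.round(p, 3))
# The sum of sym is 46, so e.g. p[0, 1] = 5/46 ≈ 0.109 and
# p[1, 1] = 8/46 ≈ 0.174, and all entries of p add up to 1.
```

These are exactly the probability values that reappear in the worked contrast example at the end of this lecture.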
So shown in the slide is the contrast image. Contrast is a Haralick texture feature which gives non-linearly increasing weight to transitions from low to high grayscale values; in other words, contrast is nothing but the amount of local variation present in an image. Mathematically, if you ask me what contrast is, I will represent it by the relationship Contrast = Σᵢ Σⱼ (i − j)² p(i, j), where i and j are the row and column addresses of the GLCM and p(i, j) is the normalized co-occurrence value. Moving on, we also have a feature known as correlation, another textural feature. It measures the gray-tone linear dependencies in the image. For example, consider that you want to compute the correlation image of a water body. A water body will mostly have constant gray tone values, isn't it? As you see here, constant gray tone. But since the speckled samples over water are mostly uncorrelated, the correlation feature for a water body will have very low values, as you see in this particular image towards the right side showing correlation. Similarly, we also have the angular second moment. It measures the uniformity of the image: because it sums the squares of the probabilities, an image with few transitions from one gray level to another will have a high angular second moment. Here we can either normalize the GLCM and then use it, or estimate the probabilities first and then calculate the angular second moment; the relationship is given here, with i and j again being the row and column locations in the GLCM. Similarly, we also have entropy, which is another textural feature. Now remember, if you want more details about each of these textural features, I would suggest you refer to Haralick et al., 1973.
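The four features just described can be written down directly from their definitions. The sketch below is an illustration, not the lecture's own code; note that entropy conventions vary (log base 2 is assumed here), and the input is assumed to be an already-normalized GLCM:

```python
import numpy as np

def haralick_features(p):
    """Compute a few Haralick (1973) features from a normalized GLCM p."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    # Contrast: (i - j)^2 weights grow non-linearly with gray-level difference.
    contrast = np.sum((i - j) ** 2 * p)
    # Angular second moment: sum of squared probabilities, high for uniform images.
    asm = np.sum(p ** 2)
    # Entropy: randomness of the gray-level pairs (0 log 0 treated as 0).
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    # Correlation: gray-tone linear dependency, via marginal means and std devs.
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    correlation = np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j)
    return {"contrast": contrast, "asm": asm,
            "entropy": entropy, "correlation": correlation}
```

For a completely uniform GLCM (all pairs equally likely, as over uncorrelated water speckle), correlation comes out as zero, matching the dark water pixels in the correlation image.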
So here I will try to introduce some more textural features and how they can be applied in image processing. I have not depicted all the mathematical relationships here, just to give you an overall idea. We can also have an image showing homogeneity, another textural feature, and something known as dissimilarity, yet another textural feature. Now I want to ask whether we can create an image using the ratio of two images. If you are familiar with satellite images captured in the visible and infrared regions, you may have come across images which are visually more appealing to look at; visually, you are able to demarcate or identify features in an image captured in the infrared or visible regions of the electromagnetic spectrum. When it comes to microwave images, we always get a monotone, monochrome image, black and white in varying intensities, and it also has speckle, an inherent salt-and-pepper-like noise in a radar image. So my question is, can we generate an image using the ratio of two images? Absolutely. What you see here is a false color composite, FCC, using SAR imagery. Towards the left side you see the HH band. Remember, in polarization we discussed H and V: H stands for horizontal, V stands for vertical, and an HH image is one in which the electric field vector is horizontally transmitted, hits the target, and is received back in the horizontal polarization. Horizontal transmit, horizontal receive results in an HH band. So the HH band is assigned to red, HV is assigned to green, and as the third band I am using the entropy band, one of the textural features, computed from HV and assigned to blue, which results in the false color composite that you see towards your left side.
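For completeness, homogeneity and dissimilarity, whose formulas are not shown on the slide, follow the same pattern as contrast. This is a hedged sketch from the standard Haralick-style definitions, with illustrative function names:

```python
import numpy as np

def homogeneity(p):
    """Homogeneity (inverse difference moment): high when most
    co-occurrence mass lies near the GLCM diagonal."""
    i, j = np.indices(p.shape)
    return np.sum(p / (1.0 + (i - j) ** 2))

def dissimilarity(p):
    """Dissimilarity: like contrast, but the weight grows
    linearly as |i - j| instead of quadratically."""
    i, j = np.indices(p.shape)
    return np.sum(np.abs(i - j) * p)
```

Because the weights differ only in how fast they grow with |i − j|, dissimilarity and contrast pick up the same local variation, just with different emphasis on large gray-level jumps.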
Which means I can get colorful images using microwave imagery by taking the ratio of bands, or even by using a textural feature as the third band. What you see towards your right side is the same microwave image; the only difference is that instead of using the entropy textural feature as the third band, I am using HH by HV: the ratio of the two bands is used as the third band and assigned to blue to create the FCC, false color composite. Now let me discuss a small case study that explains the use of texture in image classification. This is where I will expand on the satellites that gave rise to the images you saw, and on the study region. This case study was a small research work carried out in parts of the Guntur and Krishna districts of Andhra Pradesh, with extents from 16°0′ N, 80°0′ E to 16°31′ N, 80°34′ E. The river Krishna, as you can see, flows from west to east in the upper part of the study area. What you see now are three images: towards your left side and center you have microwave images, and towards your right side you have an image captured in the visible and infrared regions, that is, optical remote sensing. You see the difference in clarity between the two, isn't it? Now, the images acquired in the microwave spectrum, towards your left side, exhibit properties which are totally different from those of optical images. Firstly, optical images register the reflection from an incoherently illuminated target, whereas a radar image, like the one towards your left side, is formed from the reflection of coherent signals which may add up either constructively or destructively, causing an effect that is characteristic of all coherent imaging systems, known as speckle.
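The band-stacking idea behind both FCCs can be sketched as below. This is an illustrative recipe, not the exact processing used in the case study; the percentile stretch and the epsilon guard in the ratio are my assumptions for making the display robust:

```python
import numpy as np

def make_fcc(hh, hv, third):
    """Stack three bands into an RGB false color composite,
    stretching each band independently to the 0..1 range."""
    def stretch(band):
        # Simple 2-98 percentile stretch so speckle outliers
        # do not dominate the display (an assumed choice).
        lo, hi = np.percentile(band, (2, 98))
        return np.clip((band - lo) / (hi - lo + 1e-12), 0, 1)
    return np.dstack([stretch(hh), stretch(hv), stretch(third)])

# Third band options from the lecture: an entropy texture band of HV,
# or the HH / HV band ratio (epsilon guards against division by zero):
# fcc = make_fcc(hh, hv, hh / (hv + 1e-6))
```

Either choice of third band breaks the monochrome symmetry of the SAR data, which is what makes features separable by color in the composite.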
Now, another difference is that objects that appear rough on the scale of optical wavelengths may appear smooth on the scale of microwaves. Hence some objects may act as microwave mirrors, producing little or no reflection in the direction of the radar, and they can end up as dark silhouettes on the radar image. Moreover, by now we have the understanding that radar works on a ranging principle, whereas an optical system produces images which are projections of a three dimensional scene onto the image plane. This implies that different points at the same range but different elevations will be mapped to the same point in the radar image, which causes distortion when mapping rugged terrain. Apart from these, the visual differences between the two images are apparent: you look at an optical image and at a synthetic aperture radar image and the difference is obvious. For example, radar images are monochrome in nature, varying in tones of black and white. A change in tone in a radar image signifies a change in the scene, which may arise due to a whole lot of factors: moisture content, crop growth on land, wind condition, or wave condition. Also, synthetic aperture radar images, that is SAR images, can be mapped to RGB for pseudo-naturalistic display purposes. So in this case study we will discuss the effect of texture in augmenting classifier performance. Again, what you see is a false color composite using LISS-IV imagery, with band 1 as red, band 2 as green, and band 3 as blue, and in this case the different features are highlighted for better understanding.
We have water bodies, pixels that represent paddy, pixels that represent fallow land; it is predominantly an agricultural area, so we have pixels that represent cotton, and of course we have sand. Now we come to a false color composite created using synthetic aperture radar imagery, wherein HH is assigned to red, HV is assigned to green, and the entropy band of HV is assigned to blue. For using an FCC of SAR imagery for image classification, we have combined a texture band that is capable of discriminating between crops because, as I mentioned earlier, this study region is predominantly agricultural in nature, which means that while we select the bands for creating the FCC, we should choose bands that are capable of discriminating between crops. Previous studies by researchers reveal that the energy and entropy bands work well on agricultural areas; that is the reason why different combinations of HH, HV, energy, and entropy bands were used for this particular case study, but for the sake of discussion I am showing you the FCC with the entropy band. Now, what algorithms are used for image classification and what exactly image classification is will be discussed as part of upcoming lectures, but for now just understand that image classification helps to infer surface cover characteristics from satellite data by classifying each pixel into a specific land cover type based on some predefined classification scheme. Let me show an image and then reiterate: what you see towards your left side is the image that has been used for classification, and what you see on the right side is the output of image classification. Different colors are assigned to different land cover features; for example, green represents fallow land, cotton is represented by red, and so on.
Again, our aim is to take a three band image, the FCC, false color composite, that you see here, apply some classification algorithm, and end up with a map like this. I am calling this a map, not an image, because every pixel is assigned a color according to its land use land cover type, and this whole process of converting an image into a map is what we call classification: image classification helps to infer surface cover characteristics from satellite data by classifying each pixel into a specific land cover type based on a predefined classification scheme. As I mentioned earlier, what exactly a classifier algorithm is and what it does will be discussed as part of upcoming lectures; for now this was just a quick summary of how textural features can be used in image classification. Now, while you hold that thought, let me present a quick numerical, just to make sure you have understood the concept of the GLCM correctly. What you see here is a computed GLCM with d = 1 and theta = 0 degrees, given as a normalized GLCM, and I am asking you to compute the contrast. We know what contrast is, isn't it? We first have to know the location of each value, but we know that as well: the positions are (0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1), and so on. To compute contrast, we follow the procedure: take each probability and multiply it by (i − j)², and do that for each and every value. So the contrast starts as 0.109 × (0 − 1)² + 0 × (0 − 2)² and so on; remember, contrast is nothing but the summation over i and j of (i − j)² times the probabilities. The normalized GLCM already gives us the probabilities, and we also know the values of i and j: zeroth row, zeroth column; zeroth row, first column; zeroth row, second column; and so on.
So let us compute the value of contrast; it is going to be slightly lengthy. Continuing from the first row, we have 0.109 × (1 − 0)², so we are here now, plus 0.174 × (1 − 1)², plus 0.043 × (1 − 2)²; the next terms are 0, so I am going to omit them; plus 0.043 × (2 − 1)², plus 0.174 × (2 − 2)², plus 0.043 × (2 − 3)²; again the zero terms I omit; plus 0.043 × (3 − 2)², plus 0.087 × (3 − 3)². What did we do? We computed the contrast by taking the squared difference of locations, that is, the summation over i and j of (i − j)² times the probabilities. Notice that all the diagonal terms vanish, and adding up the non-zero terms, 0.109 + 0.109 + 0.043 + 0.043 + 0.043 + 0.043, the contrast comes out to approximately 0.39. Similarly, we can compute different textural features given a GLCM like this; this small numerical was for you to understand that we can use similar relationships for, say, entropy, homogeneity, or dissimilarity, and end up with their corresponding values. So let me summarize. What we learnt as part of this lecture was the different textural features that can be extracted from a gray level co-occurrence matrix, that is, the GLCM; we also saw a small case study where a texture band, the entropy band, was used to create a false color composite, FCC, which was later used for image classification; and we covered a small numerical where we learnt how to compute contrast given a normalized GLCM. That is it for today's class, and I will meet you in the next class. Thank you.
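The worked numerical above can be checked in a few lines of Python. The decimal probabilities 0.109, 0.174, 0.043, and 0.087 are the rounded values of 5/46, 8/46, 2/46, and 4/46 from the symmetric GLCM example earlier in the module, so we can use the exact fractions:

```python
import numpy as np

# The normalized symmetric GLCM (d = 1, theta = 0 degrees) from the example;
# the matrix entries sum to 46 before normalization.
p = np.array([[8, 5, 0, 0],
              [5, 8, 2, 0],
              [0, 2, 8, 2],
              [0, 0, 2, 4]]) / 46.0

i, j = np.indices(p.shape)
contrast = np.sum((i - j) ** 2 * p)
print(round(contrast, 3))  # 0.391, i.e. 18/46
```

This agrees with the hand computation: only the |i − j| = 1 terms survive, giving (5 + 5 + 2 + 2 + 2 + 2)/46 = 18/46 ≈ 0.39.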