and welcome to today's class. So, today, just to break the monotony before I start the class, let me show you a few images. The moment you see these images, you get a feeling of how it would feel to touch them, is not it, a rough surface or a smooth surface, because you are familiar with these images, whether it is a construction material, or mold, or an image of an agricultural field. Whatever it is, texture is a very important and innate property of all surfaces. When you feel surfaces, you get an idea whether they are rough or smooth. So, in today's class, we are going to understand texture in synthetic aperture radar images. It is a very important characteristic used to identify objects or areas of interest within an image, whether it is your own photograph or a satellite image. So, to reiterate, today's class will answer these questions: does texture exist in synthetic aperture radar imagery? If so, how do we quantify it? And where do we use image texture? Now, what you see in front of you is a sample synthetic aperture radar image. As before, I am not going to emphasize the polarization, the geographical area that the image represents, or which satellite has given this image, because I want you to focus on whether this image has any texture. Our understanding of an image is as a 2D array. An image in digital form is usually represented as a function of two variables, isn't it? X-Y, or latitude-longitude, or rows-columns. So, I am talking about a two-dimensional image which is a representation of three-dimensional terrain as seen through photographs or satellite images. Now, this image that you see has different types of textures. Our visual system is able to distinguish between textures, isn't it?
Because when I showed you the different types of walls at the beginning of the lecture, our visual system, which is trained to distinguish between textures, was able to demarcate the difference visually. Now, the same is done in image processing using tools. The aim is to identify variations in intensity values. Now, human interpretation of a colour photograph is through spectral, textural and contextual features. Here, when I say spectral features, I am referring to the average tonal variations in the various bands captured in different regions of the electromagnetic spectrum. We can define textural features as information about tonal variations within a band, that is, information about the spatial variation of image intensity: how do the image intensity values vary with respect to space, in the X direction and in the Y direction? When I say contextual features, I am referring to information derived from blocks of pictorial data surrounding the area being analyzed. Now, just as the smallest units of an image are called pixels or pels, texture elements are called texels. Here, let me make a distinction between tone and texture. Tone is nothing but the gray level of a single pixel in an image. And when we say texture, it is the variability of this tone within the neighborhood of a pixel. Let me reiterate: by tone, I am referring to the gray level of a single pixel in an image, and by texture, I am referring to the variability of this tone within the neighborhood of a pixel. Now, a texture can be described in a number of manners. So, what I have done is I have tried to define different types of textures using small images. We can have fine texture, coarse texture, smooth texture, rough texture, high contrast texture, low contrast texture, and textures that are directional and follow a regular pattern.
So, in all these images, the texture elements, that is, texels, can repeat themselves in different manners. When I say a high contrast image, it means some areas containing high intensity or brightness values exist, whereas other areas containing low intensity or brightness values also exist. When I say a regular texture, it means regular patterns are visible. On the contrary, texture can be low contrast, in which there is a very small, that is, faint, contrast between the foreground and the background. And whenever the elements of texture, or texels, are arranged in one particular direction, we have directional texture. On the contrary, if the elements do not seem arranged in any particular direction, if the arrangement is random, then we have non-directional texture. We can have rough texture, which, as seen, has a lot of variation in brightness values. And if the brightness values change very gradually, it is known as smooth texture. So, these are a few different types of texture which we define using visual methods. Remember, the whole idea is that we tend to use visual methods to differentiate, to discriminate between features on a normal photograph or a normal remote sensing image captured in the visible and infrared regions of the electromagnetic spectrum. And as part of this course, we are trying to understand how to quantitatively define a texture in microwave images. Now, please note that the texture of a single point is undefined. Now, coming to where we use texture in microwave imagery: texture can be extracted from a single band, say the HH polarized intensity band of a SAR image. I can pull out different textural features from this single band image, and similarly, we can extract textural features from the different bands of the image. Each textural feature can be used to discriminate between different land cover types.
Now, for discriminating between different features, that is, for visual image interpretation, texture is highly useful. So, in a nutshell, how the intensity or brightness values change over an area is what defines a texture. And given in front of you are a few of the areas where we use texture in image processing. When we say image segmentation, this can use texture to divide the image into distinct areas having different textures. We have texture based shape extraction, wherein the 3D shape of objects covered with a specific texture can be recovered from a picture. Image classification is another area where texture plays a very important role, because texture based analysis can be used for object identification or pattern recognition. And we also have something known as texture synthesis, used in graphic images and computer games; the purpose here is to produce images having the same texture as an input texture. So, a texture in an image can be understood at different levels of spatial resolution, a very important point. Say you are browsing images in Google Earth Engine: as you zoom out, the areas get smoothened out, there is less noise, is not it? But when you try to zoom into your area of interest, after a certain point the image starts getting blurred. So, let me reiterate: texture in an image can be understood at different levels of spatial resolution. All right. So, moving forward, let us try to understand how texture can be extracted from pictorial data. Remember, it can be your photograph or it can be a satellite image captured in the microwave region of the electromagnetic spectrum. You see, when we try to classify each resolution cell of a satellite image, we look for a set of meaningful features to represent the pictorial information.
And trying to get a grip on the best means of defining texture, well, that in itself is as difficult as measuring texture. But still, that should not deter us from learning about texture, is not it? So, to summarize, there are different approaches to define what exactly a texture is. There are structural based approaches, which consider pixels in some repeated or regular relationship. There are statistical based texture features, where the analysis is based on statistical parameters like the mean, median, mode, autocorrelation, etc., or it can be based on something known as the gray level co-occurrence matrix, or GLCM as it is popularly abbreviated; we will see the details shortly. Statistical based approaches consider texture as a quantitative measure of the arrangement of intensities or gray level values in a region. When it comes to model based measures, they use models to specify textures; given here is an example of fractal based features, and there are also Gabor and wavelet based features. Texture and tone are both indisputable parts of an image; they are always contained in an image, though of course at times one may dominate the other. When it comes to images, we can have first order statistics and second order statistics to define a texture, and some simple analyses of texture also exist, like the one you see on the screen in front of you, that is, the data range. It is the simplest texture operator, where the range is nothing but the maximum intensity value in a window minus the minimum intensity value in that window. And then we can have the mean, that is, the average intensity value, and the variance, that is, the sum of squares of the differences between the intensity of the central pixel and its neighbors. These are some simple analyses of texture. Shown here are the input image towards your left side and the image that represents the data range towards the right side.
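The simple window based measures just described, the data range, the mean and the variance, can be sketched in a few lines of code. This is only an illustrative sketch, not the exact implementation behind the images shown in the lecture; the 3 by 3 window size and the border handling by edge replication are my own assumptions.

```python
import numpy as np

def local_texture(img, size=3):
    """Slide a size-by-size window over the image and return three
    simple per-pixel texture measures: data range (max - min),
    mean, and variance of the window values."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")  # replicate border pixels
    rows, cols = img.shape
    rng = np.zeros((rows, cols))
    mean = np.zeros((rows, cols))
    var = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            win = padded[r:r + size, c:c + size]
            rng[r, c] = win.max() - win.min()   # data range
            mean[r, c] = win.mean()             # average intensity
            var[r, c] = win.var()               # spread around the mean
    return rng, mean, var
```

On a perfectly uniform (smooth) patch, the range and variance images are zero everywhere, which is exactly the intuition of a smooth texture; rough textures give large local ranges and variances.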
Remember, we are trying to understand texture using microwave images, not optical images. Now, again towards your left side you see the input image, that is, a sample synthetic aperture radar image, and towards your right side you have the image showing the mean. Again, towards the right side you see the image showing the variance. See, one of the early applications of texture measurements to remote sensing data was made by Haralick et al. (1973), who proposed what is now popularly known as the gray level co-occurrence matrix, abbreviated as GLCM. Whenever textural features are to be estimated, generally they will be estimated using the GLCM. So, what we will do is we will try to understand what exactly a GLCM is and whether we can create a GLCM matrix ourselves. GLCM refers to a tabulation of how often different combinations of gray levels occur in an image, that is, what is the frequency of finding a pair of pixels, or a pair of gray levels, in a particular image overall or over a specific area. That is what the GLCM tells us: how the pixel values co-occur, occur together, and how they are distributed throughout the image. Now, remember, whenever you get a synthetic aperture radar image, you can calculate the GLCM for the whole image as such, and once you complete calculating the GLCM, the textural measures can be calculated from it. So, let me reiterate: the GLCM represents the distance and angular spatial relationships over an image sub-region, and it is nothing but a tabulation of the frequency of finding a pixel pair, remember, not one pixel but a pixel pair or a pair of gray levels, either in an image or over a specific area within an image. From the image the GLCM is calculated, and from the GLCM the textural measures are calculated.
Now, the GLCM has certain properties that we should be aware of. For a central pixel, assume this is the central pixel; it is going to have 8 neighboring directions, is not it? 8 neighboring directions: 0 degrees is the cell to the right of the central cell, 45 degrees is the diagonal towards the top right, 90 degrees is the cell at the top of the central cell, the diagonal towards the top left is 135 degrees, horizontal towards the left is 180 degrees, the diagonal in the southwest direction is 225 degrees, exactly below the central pixel is 270 degrees, and finally we have 315 degrees, the diagonal towards the bottom right. So, we have divided the entire 360 degrees into 45 degree separations to indicate the neighboring directions for one central pixel. Hold that thought; why are we discussing this right now? Because the GLCM is a two-dimensional array wherein both the rows as well as the columns represent the set of possible image values, and to calculate a GLCM it is important that we know these directions. Before we start computing a GLCM, a displacement vector needs to be specified, and then all the pairs of pixels separated by this displacement vector, which I am going to call d, can be counted. The GLCM matrix has a dimension of n by n, where n is the number of gray levels in the image. Typically, the number of rows, the number of columns and the number of quantization levels are all kept equal; a commonly preferred quantization level is 4 bits. And, you know, we can make a GLCM symmetric about its diagonal, and we can normalize a GLCM matrix. If we are calculating the GLCM at 0 degrees, we shall always be calculating using the central pixel and the pixel adjacent to it on the right side, which means that for a particular band we can have multiple GLCM matrices, each representing a direction. And remember that we can make the GLCM matrix symmetric, and we can also normalize it so that its entries represent probabilities.
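The eight neighboring directions, the symmetrization and the normalization mentioned above can be sketched as follows. This is a minimal illustration under my own conventions: image row indices grow downwards, so "up" is a row offset of -1, and only the four base angles are listed, because the other four are their opposites and are absorbed when the matrix is made symmetric.

```python
import numpy as np

# (row, column) displacement for the base GLCM directions with d = 1.
OFFSETS = {
    0:   (0, 1),    # cell to the right of the central pixel
    45:  (-1, 1),   # diagonal towards the top right
    90:  (-1, 0),   # cell at the top of the central pixel
    135: (-1, -1),  # diagonal towards the top left
}
# 180, 225, 270 and 315 degrees are the opposites of these four;
# counting both members of each opposite pair of directions is the
# same as symmetrizing the one-directional count matrix.

def symmetrize(counts):
    """Make a one-directional GLCM symmetric about its diagonal."""
    return counts + counts.T

def normalize(counts):
    """Divide by the total count so the entries become probabilities."""
    return counts / counts.sum()
```

So, for one band and one displacement d, each angle in the table gives its own GLCM, which is why a single band can yield multiple GLCM matrices.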
So, instead of just listening to me, let us try to compute the textural features one by one after constructing a GLCM, because this particular concept is understood more clearly if we try to solve a simple numerical. So, what we will do is start with an image sample: say I am giving you a 4 by 4 matrix which is a part of an image, not the complete image. You see different pixel values here; I have kept it simple, that is, the values 0, 1, 2, 3, with the number of rows 4, the number of columns 4 and the number of distinct pixel values 4. What you see is a sample image for which I am asking you to compute the GLCM matrix. Again, remember, the quantization levels are kept such that we have just 4 values. So, let us try to see what computations we need to do to fill in the values of the GLCM matrix. Towards your right side you see a matrix; it is intentionally kept empty so that you understand what a GLCM matrix is all about. I have represented the rows by i and the columns by j, and what is written inside each cell is not the value of the GLCM matrix but its address, its location. So, the 0th row, 0th column is written as 0 comma 0; the 0th row, 1st column is written as 0 comma 1, and so on. You know, you can flip it; you need not keep it in this manner, you can even flip it. So, these represent the addresses 0 comma 2, 0 comma 3, 1 comma 0, 1 comma 1, 1 comma 2 and so on. Now, let us try to understand how to compute a GLCM. For the sample computation, let us use d equal to 1 and theta equal to 0 degrees, which means I need to compute all the values 0 0, 0 1, 0 2, 0 3 and so on, and fill them in the GLCM matrix. Remember, GLCM 0 0 means the number of times a pixel with grayscale value 0 is horizontally adjacent to a pixel which also has grayscale value 0, counted both when you scan from left to right and when you scan from right to left along the 0 degree direction.
So, let us try to draw an empty GLCM matrix. Remember, the address here is 0 0, which means I need to estimate how many times the pixel pair 0 0 occurs in the 0 degree direction, because our aim is to find the GLCM for d equal to 1 and theta equal to 0 degrees. How many times does it occur? 1, 2, 3, 4, isn't it? Left to right, right to left, in the 0 degree horizontal direction. So, I am going to write a value of 4. Now, the next address is 0 comma 1, which means I need to find out how many times a pixel with grayscale value 0 is horizontally adjacent to a pixel with grayscale value 1, when I scan from left to right and when I scan from right to left. How many times? 1, 2, 3. So, I am going to fill the cell at address 0 comma 1 with 3. Similarly, I can fill in each of these entries to complete the GLCM matrix. So, remember, I started with 4 as the count of how many times the pair 0 0 occurs in the sample image; counted from left to right and from right to left, that makes it 2 plus 2, that is 4. Then my aim was to estimate how many times the pair 0 1 occurred in the image, which was 3. Similarly, I started filling each and every row and column of the GLCM matrix. How many times does 0 2 occur? Remember, we are only checking in the horizontal direction. Why? Because in the beginning we mentioned that we need to do sample calculations to estimate the GLCM values for d equal to 1 and theta equal to 0 degrees, so I am focusing only on the 0 degree direction. So, I can count the number of times each of these pairs occurs and ultimately fill the GLCM matrix. What you see in front of you is the complete GLCM matrix. Similarly, we can calculate GLCM matrices for other directions also, say 45 degrees or 90 degrees; that is possible because, as we mentioned earlier, we can consider different directions, is not it?
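The counting procedure just walked through can be written down compactly. One caveat: the 4 by 4 sample below is the classic textbook example with gray levels 0 to 3, not necessarily the exact image shown on the lecture slide, so its individual pair counts need not match every number quoted above. The function itself implements exactly the rule described: count the pairs separated by the displacement, then add the transpose to account for scanning in both directions.

```python
import numpy as np

def compute_glcm(img, levels, offset=(0, 1)):
    """Symmetric GLCM: count gray-level pairs separated by `offset`
    (d = 1, theta = 0 degrees by default), then add the transpose,
    which is equivalent to scanning both left-to-right and
    right-to-left."""
    counts = np.zeros((levels, levels), dtype=int)
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[img[r, c], img[r2, c2]] += 1
    return counts + counts.T

# A 4x4 sample with gray levels 0..3 (the classic Haralick example;
# the image on the lecture slide may differ).
sample = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 2, 2, 2],
                   [2, 2, 3, 3]])
```

For this sample, compute_glcm(sample, 4) has 4 at address 0 comma 0, exactly the 2 plus 2 counting of the pair 0 0 in both scan directions, and passing offset=(-1, 1) or (-1, 0) instead gives the 45 degree and 90 degree matrices.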
Here, for the computations, remember we are counting once from left to right and a second time from right to left. So, now we know how to compute a GLCM matrix from an input image. But let me end the class with a question: do you think the GLCM matrix that you computed from the image is symmetric? Hold that thought; we will try to answer this question as part of the next lecture. So, to summarize, in this part of the lecture we tried to understand what texture is, what the different ways are in which textural features can be extracted from an image, and what the use of texture in image processing is. And finally, we covered a small numerical example with a sample image, where I showed how the GLCM matrix can be calculated cell by cell by considering the pairs of pixel values that occur together. This example was carried out for the 0 degree direction with d equal to 1, that is, the horizontal direction. And remember, when you fill in the values of the GLCM matrix, you check from left to right and from right to left. Similarly, you can calculate GLCM values for, say, 45 degrees or 90 degrees; then you will check only the pairs pertaining to that particular direction. Now, remember, we have not yet calculated the features of a texture. So, I will stop here for this lecture. I hope you understood this concept, and I will meet you in the next class. Thank you.