Welcome to this section of the tutorial on Synthetic Aperture Radar, that is, SAR image classification using SNAP. Shown here is the ALOS PALSAR data towards your left side, and the colorful image you see is the RGB image, which has been subjected to radiometric and geometric calibration, to speckle filtering and to multi-looking. These terminologies, I believe, are not new to you by now, because we have already covered each of them in detail in the previous tutorials. In tutorial 2 we first saw how to access the ALOS PALSAR imagery; we discussed how to download the specific data for the Mumbai region, followed by how to open the imagery in Python and the different methods by which we could carry out speckle filtering. Also over the previous tutorials, to be specific tutorial 2, we got familiarized with the SNAP toolbox and with carrying out radiometric and geometric calibration of synthetic aperture radar imagery. So with this background, let us try to understand how to classify a synthetic aperture radar image. For this particular tutorial we will be using the ALOS PALSAR data that was previously downloaded for tutorial 2. As before, the SAR data formats and the processing levels are displayed here. Before we can start with image classification we have to preprocess the image, so let us refresh our memory in SNAP. Let us first open the product using the Open Product button, so that the image is displayed in the Product Explorer. As you see, there are a number of bands, and double clicking on any of the bands will let you view the raster data, so let us see how the intensity image looks. The World View helps us to see over which region the image shown here lies. We had downloaded it for the Mumbai region; that is why you see a square box there. And this is how the intensity image is going to look.
Now this image appears to be stretched in one direction, isn't it? That is because it has not been subjected to multi-looking. You can also observe that there is a lot of speckle, a lot of noise, in this image. That is why I mentioned that after you download an image you have to understand how to do the preprocessing before we can even begin with image classification. So far, what we did is open the image in the Product Explorer, where we saw that it contains metadata, vector data, tie-point grids and bands. Out of these bands we double clicked on Intensity_HH to display the image here. Now let us see how to calibrate the data. As we have already seen, to properly work with synthetic aperture radar data it has to be calibrated first, and by calibration, to be specific radiometric calibration, we are trying to convert the backscatter intensity as received by the sensor into the normalized radar cross section, or sigma naught, imagery. We computed the sigma naught imagery in Python in one of the previous tutorials, so I will not go into the details now. To summarize, radiometric calibration is performed to take into account the incidence angle of the image, and this correction is specific to the mission. To perform it we go to Radar, then Radiometric, and this Calibration window opens up. You can see the bands displayed here; I am going to select four bands. The suffix _Cal is automatically added to the target product. The source file is the imported product and the target file will be the new file, and we get to select the directory in which the target product shall be saved. We can double click on the imagery; you can see that there are two images available, the sigma naught HH and sigma naught HV images, and this is how the image looks.
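The conversion from digital numbers to sigma naught can be sketched in a few lines of NumPy, in the spirit of the Python exercise from the earlier tutorial. This is a minimal sketch, not SNAP's exact implementation: the `dn` array is a made-up stand-in for Intensity_HH, and the calibration factor `CF` is mission- and product-specific. The value -83.0 dB is the figure commonly quoted for ALOS PALSAR level 1.5 products, but in practice it should always be read from the product metadata.

```python
import numpy as np

# Hypothetical digital-number (amplitude) values standing in for Intensity_HH.
dn = np.array([[100.0, 200.0],
               [400.0, 800.0]])

# Radiometric calibration: convert DN to the normalized radar cross
# section (sigma naught) in dB. CF is the mission-specific calibration
# factor; -83.0 dB is the commonly published value for ALOS PALSAR
# level 1.5 products (take it from the product metadata in practice).
CF = -83.0
sigma0_db = 10.0 * np.log10(dn ** 2) + CF

print(sigma0_db)
```

The key point is that calibration is a simple per-pixel transformation, but the constant applied is specific to the sensor, which is why SNAP ties this operator to the mission metadata.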
Now that we have completed calibration, let us move on to the next step, that is, multi-looking. As I mentioned earlier, the image in front of you appears stretched in one direction. In the multi-looking step we generate multiple looks by averaging over the range and/or azimuth resolution cells. This improves the radiometric resolution and gives us an image with less noise and approximately square pixels, because we are converting from the slant range to the ground range. Throughout the lectures I hope you have become familiar with these terminologies, slant range and ground range. To perform multi-looking we can directly highlight the image on which we want to operate, go to Radar, then SAR Utilities, and then click the Multilooking option; as before, the suffix _ML is added by default. I can run it; it takes hardly a second to display the output: the sigma naught HH and HV images. So this is how the multi-looked synthetic aperture radar image looks. You can see a huge difference, can't you? When we started the exercise I showed you the image, and now, after multi-looking, you can see that it has approximately square pixels; it no longer appears stretched in one direction. But of course the salt-and-pepper noise, that is, the speckle effect, is still prominent in this image. As the next step, let us try to subset the image. This is just a small step to show you that when you are working with large images, classification often runs for a long time; to avoid that, you can subset and work on a smaller image so that the process completes quickly. To subset an image you can click on it, go to the image viewer, and zoom to the area for subsetting.
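Multi-looking is essentially block averaging over windows of resolution cells, and the idea can be sketched with NumPy. The toy image and the choice of 2 azimuth looks by 2 range looks below are illustrative assumptions; SNAP derives the look counts from the slant-range and azimuth pixel spacings so that the output pixels come out approximately square.

```python
import numpy as np

# Toy single-band image: 4 rows (azimuth) x 6 columns (range).
img = np.arange(24, dtype=float).reshape(4, 6)

# Multi-looking: average over n_az x n_rg windows of resolution cells.
# The look counts here are illustrative; in SNAP they follow from the
# pixel spacings so that output pixels become roughly square.
n_az, n_rg = 2, 2
rows, cols = img.shape
trimmed = img[: rows - rows % n_az, : cols - cols % n_rg]
ml = trimmed.reshape(rows // n_az, n_az, cols // n_rg, n_rg).mean(axis=(1, 3))

print(ml.shape)  # each output pixel is the average of n_az * n_rg looks
```

Averaging N independent looks reduces the speckle variance by roughly a factor of N, which is exactly the radiometric improvement you see in the multi-looked product, at the cost of spatial resolution.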
So let me select a smaller section of the image. The subset area is shown towards the left, and as soon as I click, a subset image with two bands is generated. Now let us try to understand how speckle filtering can be carried out in SNAP. What you see displayed here is the subset of the image that was shown, and, as discussed in previous lectures, speckle is caused by random constructive and destructive interference, which results in the grainy dots or salt-and-pepper noise that is predominantly seen in synthetic aperture radar images. To reduce this effect we use speckle filters. Now, the choice of which filter is best for which application depends on many factors. For this particular tutorial, just to cite an example, I am going to use the Lee filter. So, as before, I can highlight the image that needs to be subjected to speckle filtering and then go to Radar, Speckle Filtering, Single Product Speckle Filter, and, as mentioned earlier, I have selected the Lee filter just as an example. This is how the image looks after it has been subjected to speckle filtering. The next step is known as deskewing, and it is part of terrain correction. The data needs to be deskewed, that is, transformed into a zero-Doppler-like geometry, before we begin with any standard synthetic aperture radar processing. This particular step is specific to the ALOS-1 sensor. SNAP allows us to do ALOS deskewing: always highlight the image in the Product Explorer on which you want the operation to happen, then go to Radar, Geometric, ALOS Deskewing; the digital elevation model can be selected, and I have kept it as SRTM 3Sec. The next step is terrain correction. This is performed to geocode the image by correcting the geometric distortion using a digital elevation model. I am going to change the map projection to WGS 84.
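The Lee filter is an adaptive local-statistics filter, and a minimal sketch of it fits in a few lines with SciPy. This is a simplified illustration, not SNAP's implementation: the noise variance is crudely estimated from the whole image when not supplied, whereas a proper workflow would derive it from the equivalent number of looks, and the speckled test image is synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=3, noise_var=None):
    """Minimal Lee speckle filter sketch. `size` is the moving-window
    side; `noise_var` is the speckle noise variance (here roughly
    estimated from the image itself when not given)."""
    mean = uniform_filter(img, size)              # local mean
    sq_mean = uniform_filter(img ** 2, size)      # local mean of squares
    var = np.maximum(sq_mean - mean ** 2, 0.0)    # local variance
    if noise_var is None:
        noise_var = var.mean()                    # crude global estimate
    weight = var / (var + noise_var)              # adaptive gain in [0, 1)
    return mean + weight * (img - mean)

# Synthetic speckled scene: constant backscatter with gamma-distributed
# multiplicative-style noise (illustrative values only).
rng = np.random.default_rng(0)
noisy = 100.0 * rng.gamma(shape=4.0, scale=0.25, size=(64, 64))
smooth = lee_filter(noisy, size=5)
print(noisy.std(), smooth.std())
```

In homogeneous areas the local variance is small, the weight tends toward zero, and the filter averages heavily; near edges the variance is large, the weight approaches one, and the original pixel is largely preserved, which is why Lee smooths speckle without blurring edges as much as a plain mean filter.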
In previous tutorials we have already seen what foreshortening, layover and shadow effects are. This particular step, terrain correction, is performed to geocode the image by correcting SAR geometric distortions using a digital elevation model, and what you see in front of you is the output imagery after it has been subjected to terrain correction. You may have noticed that in this step we have unchecked the mask out areas without elevation option; we do this because otherwise it would remove the zero-elevation pixels. The digital elevation model is downloaded automatically here, so please ensure the internet connection is active during this time. Next, I want to create a third channel for an RGB image. There are different options and tools available for us to manipulate the images; I can use Band Maths with the individual bands of the imagery. Let me select the imagery and then go to Band Maths and edit the expression. Say I want to create a third band that is simply the ratio of the first and second bands; I can write the expression here. The data sources, that is, the bands present in the highlighted file, appear towards the left side, and I type the expression there: I want the third image to be sigma naught HV divided by sigma naught HH. The name is new_band_1. So you see now that a third band has been added, which is nothing but the ratio of the existing bands. Now that we have three bands, I can use the RGB window to create an RGB image. As before, I can select the product and right click on it, and the RGB image window option opens up. I can select the image channels to get a colorful image like this. Visually, this image gives me more details than the image I started to work with; if you remember, we initially started with a black-and-white image that was stretched in one direction and had lots of noise.
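The band-maths expression and the RGB composite can both be mimicked in NumPy. The band values below are made-up stand-ins for Sigma0_HH and Sigma0_HV, and the channel assignment (R = HH, G = HV, B = HV/HH) is one common convention for dual-polarization composites, assumed here for illustration.

```python
import numpy as np

# Hypothetical calibrated bands standing in for Sigma0_HH and Sigma0_HV.
sigma0_hh = np.array([[0.20, 0.40], [0.10, 0.50]])
sigma0_hv = np.array([[0.05, 0.10], [0.02, 0.25]])

# Band maths: third band = HV / HH (guarding against division by zero),
# mirroring the expression typed into SNAP's band-maths editor.
ratio = np.divide(sigma0_hv, sigma0_hh,
                  out=np.zeros_like(sigma0_hv), where=sigma0_hh > 0)

def stretch(band):
    """Linear min-max stretch of one band to the 0..1 display range."""
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

# A simple three-channel composite: R = HH, G = HV, B = HV/HH.
rgb = np.dstack([stretch(sigma0_hh), stretch(sigma0_hv), stretch(ratio)])
print(rgb.shape)
```

The ratio band carries information neither original band has on its own, which is why adding it as the third channel makes water, vegetation and built-up areas separate visually in the composite.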
We performed the preprocessing steps one by one and then finally generated the third band to display the RGB image. You can see that the mountainous regions, the mangroves, the water body, the urban area and the vegetation are all displayed in different colors, so visually it helps me identify the features, or classes, in a better manner. Now let us try to understand classification. Image classification is a topic that has been covered in the lectures, so here we will have a practical hands-on session in which you understand how to perform unsupervised and supervised classification using the image we have just seen. When I talk about unsupervised classification, we are working with clusters: it is nothing but cluster analysis, which classifies objects into groups so that the data in each subset share some common traits. Data clustering as such is used in many fields, like machine learning, pattern recognition, image processing, bioinformatics, etc. The SNAP toolbox by default offers two algorithms to perform unsupervised classification: the first is K-means clustering, and the second is EM cluster analysis, that is, expectation-maximization cluster analysis. So far we started with the ALOS PALSAR data, completed all the preprocessing steps, and created the RGB image. Now let us use that same image to perform unsupervised classification. As an example I will take EM cluster analysis, one of the two algorithms present by default in the SNAP toolbox. In EM analysis, each pixel is assigned a membership to a cluster, defined as a probability, and for each pixel there are K probability values, where K denotes the number of clusters. Let us first see how to achieve this in SNAP.
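The per-pixel membership probabilities described above can be illustrated with a Gaussian mixture model fitted by expectation-maximization, which is the same idea behind SNAP's EM cluster analysis (though not SNAP's code). Everything below is synthetic: the two-band "pixels" with two well-separated backscatter populations are made-up stand-ins for, say, water and land, and scikit-learn's `GaussianMixture` is used as the EM implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic two-band pixel samples: two well-separated backscatter
# populations (illustrative values standing in for water and land).
rng = np.random.default_rng(42)
water = rng.normal(loc=[0.02, 0.01], scale=0.005, size=(800, 2))
land = rng.normal(loc=[0.30, 0.12], scale=0.03, size=(800, 2))
pixels = np.vstack([water, land])

# EM cluster analysis: a Gaussian mixture fitted by expectation-
# maximization. n_components plays the role of SNAP's "number of
# clusters" parameter; predict_proba returns, for each pixel, one
# membership probability per cluster (the K values mentioned above).
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels)        # hard class index per pixel
probs = gmm.predict_proba(pixels)   # soft memberships, rows sum to 1

print(labels.shape, probs.shape)
```

The hard class map (the band of class indices you get from SNAP) is simply the cluster with the highest membership probability at each pixel.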
With the RGB image displayed, I can go to Raster, then Classification, then Unsupervised Classification, and out of the two options I am going to go with EM Cluster Analysis. As before, in the processing parameters I am going to keep the number of clusters at 5 so that I get the output quickly; it takes less than a minute to run and display the results. The topic of image classification and the different algorithms, that is, what goes on inside when you click a particular button, has been covered in the lecture, so in this tutorial I am not going to repeat those details. Just be aware that you can perform the same operations using Python as well as SNAP. And, as mentioned before, whenever we use a particular built-in function in SNAP, a suffix is automatically appended to the target product, that is, the output product, so that it is easy for us to identify: for example, _ML after multi-looking, _Spk after speckle filtering, _TC after terrain correction, and so on. Remember that we can also use Sentinel imagery in SNAP; this example pertains to the ALOS PALSAR dataset, but the same steps apply. And there we have our output now. It has just one band of class indices, so instead of digital numbers or sigma naught values, I have class indices: one color is assigned to each class. Say I am not happy with the colors and want to change them: I can go to Tool Windows and Color Manipulation, where I can click and change the colors so that the features are highlighted to my satisfaction. The optimum number of clusters is important when we deal with unsupervised classification, and there are algorithms that help answer what the optimum number of clusters is. Right now, for this hands-on exercise, I have simply kept it as 5.
So now that we know how to create a classified map using one of the unsupervised classification algorithms, in the next part of this tutorial we shall work with a supervised classification algorithm. I will see you in the next section of this tutorial. Thank you.