These are about resolutions. How does the sensor scan? Earlier the scanning was spectral: the incoming beam of the visible spectrum is split into various bands of blue, green, red and so on, and the sensor scanned across-track. Of course that has its problems, which we will not go into. This is what is called spatial resolution. The instantaneous field of view is this angle, beta. In terms of the size of the pixel, how do you get the size d? If H is the flying height and beta is the IFOV, then d = H × beta. So spatial resolution depends on the height from which the sensor is viewing plus the angle beta; if these two are known, you can find the spatial resolution. Now, what is the resolution in the case of photography? Spatial resolution in terms of pixels applies only to digital images; in photography there are no pixels, everything comes in one go. So it depends on how many black-and-white line pairs per millimetre you can discriminate — the same pattern is repeated, and under a microscope you see how many you can separate in one millimetre. That gives the resolution of the photograph, and since the scale of the photograph is known, dividing by it gives you something approximately equivalent to the pixel size, though it is a different type of resolution. Scanning creates its own problems: every pixel size may not remain the same, and because of these distortions what is supposed to look like the figure on the left ends up looking like this. There are details of how it can be corrected, and even the platform motion can introduce distortions. To avoid these distortions, the present technology is CCD-based.
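The relation d = H × beta above can be put into a small snippet. The flying height and IFOV used here are only illustrative (roughly Landsat-like figures), not values from the lecture.

```python
# Ground pixel size (spatial resolution) from flying height and IFOV:
#   d = H * beta, with beta in radians (small-angle assumption).
# The numbers below are illustrative, not official satellite specifications.

def ground_resolution(height_m: float, ifov_rad: float) -> float:
    """Approximate ground pixel size d = H * beta."""
    return height_m * ifov_rad

d = ground_resolution(705_000.0, 42.5e-6)  # ~705 km orbit, ~42.5 microradian IFOV
print(round(d, 1))  # prints 30.0 -> about a 30 m ground pixel
```

So a higher orbit or a wider IFOV both coarsen the pixel, exactly as the lecture states.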
The IRS satellites adopted this; the earlier Landsat missions had the scanner, but in the later satellites every CCD detector detects the radiance — the reflected energy — from one pixel. What is the problem here? It is good in that the detector need not travel, it need not move; but you should be able to calibrate every detector, and there are thousands of them — seven hundred, seven thousand — depending on the spatial resolution and the total ground swath you want to cover in one pass. All of them have to be calibrated, and some may go off calibration, whereas in scanning there is only the one detector to worry about. So the present technology is mainly CCDs, charge-coupled devices. The resolutions I have given — spectral, spatial, and of course radiometric and temporal — are for Landsat. What I have probably not given is our Indian IRS series: IRS-1A, 1B, 1C, 1D, and now ResourceSat and Cartosat. ResourceSat gives up to five metres and Cartosat finer still; that is the latest India has achieved. And of course there are other satellites which can give you even four metres and one metre, going down to 0.65 m with one of them — that is the order we have reached. What is a satellite orbit like? These, like Landsat, have been the satellites from the very beginning. Their orbits look like this. Why so? The earth is rotating, and the orbit has a slight inclination — it does not come straight down. These are polar-orbiting satellites, as opposed to geosynchronous orbits, the communication satellites, which are synchronous with the rotation of the earth. The polar-orbiting satellite comes with a slight inclination, and coupled with the earth's movement, when you trace the ground paths they come out slanted like this.
So these are the various bands: the Landsat TM sensor has these many bands, and each is useful for a particular purpose. Can we discriminate snow and cloud? How do they look in terms of the energy that comes back? Both will be whitish. So how do we tell whether it is snow or cloud? Both look almost white, so how do we separate them? The mid-infrared band of 1.55 to 1.75 micrometres can differentiate snow from cloud: snow has low reflectance there, clouds have high reflectance. This is the only band which can discriminate between clouds and snow; there is no other way to find out. That is how some bands are good for particular things — infrared for vegetation, and so on. You can decide which, depending on the spectral reflectance curves we have seen; if you have studied those curves carefully, you should be able to say. Now, how do we process these data? We have seen visual interpretation with stereoscopes; can we process digitally? Of course — in fact this data has to be processed digitally; doing it manually is almost impossible. There are various facets of digital image processing: image rectification and restoration, image enhancement, image classification, data merging and GIS integration, biophysical modelling. It is not an exhaustive list, nor are these watertight compartments — some get merged with others, or may be further subdivided; it all depends. What is image rectification and restoration? It is geometric correction, radiometric correction and noise removal. How do you determine the geometric correction? What you have scanned — how does it relate to the ground? Otherwise how do you interpret it, how do you use it with maps? That is geometric correction. And radiometric correction: the atmosphere may not be the same in winter and in summer; do the inclination of the sun and the distance to the sun remain the same
in all seasons? No — so these have to be taken into account; otherwise how do you normalize the radiation? A higher reflectance coming back may be due to these other factors, not the target. Taking care of such corrections is radiometric correction. Then finally noise removal: other pixels may be contributing, atmospheric scattering may be contributing, and only if you can remove all that can you give a better interpretation. Geometric correction is something like this: the real ground pixel may be spread over three or four image pixels, because the grid may not start exactly where the ground feature starts. You should be able to do this geometric correction based on nearest-neighbour resampling or, if you want more mathematical involvement, higher-order equations — it is up to you. And radiometric correction is something like this: the satellite is at one place, but the sun's altitude keeps changing, and even the earth–sun distance; this has to be corrected before we proceed, because finally it is the reflectance variation you will see and interpret, yes or no? Unless those things are corrected, the interpretation would not be correct. What is enhancement? Contrast manipulation — grey-level thresholding, level slicing, contrast stretching; spatial feature manipulation — spatial filtering, edge enhancement; multi-image manipulation — band ratioing, principal components, vegetation indices; plenty of them. What exactly does image enhancement do, and is it necessary for image classification? I am not listing these terms to frighten you, and let us not discuss every one of them, but let us ask: is image enhancement needed? The previous step, image rectification and restoration, is a must before we go for classification — finally what we want is image classification, yes or no? And before that, rectification, because I should know what area I am referring to.
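The sun-angle and earth–sun-distance correction mentioned above can be sketched with the standard top-of-atmosphere reflectance formula, rho = pi·L·d² / (ESUN·cos θs). The band irradiance ESUN, the radiance and the distance values here are invented for illustration, not figures from the lecture.

```python
import math

# A minimal sketch of radiometric normalization: at-sensor radiance L is
# converted to top-of-atmosphere reflectance so that scenes taken at
# different sun elevations and earth-sun distances become comparable.
# ESUN (mean solar irradiance for the band) and d_au (earth-sun distance in
# astronomical units) are band- and date-specific; values here are made up.

def toa_reflectance(radiance, esun, sun_elevation_deg, d_au):
    theta_s = math.radians(90.0 - sun_elevation_deg)  # solar zenith angle
    return math.pi * radiance * d_au ** 2 / (esun * math.cos(theta_s))

rho = toa_reflectance(radiance=120.0, esun=1550.0, sun_elevation_deg=45.0, d_au=1.0)
print(round(rho, 3))  # ~0.344
```

The cos θs term is exactly the seasonal sun-inclination effect the lecture asks about: a lower sun gives less incident energy, so the same surface returns less radiance unless you normalize.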
So enhancement is not a necessary step: enhancement is only for visual application, not for digital application. If there is no contrast in the image, it would be difficult for the eye to discriminate between different classes, so we do enhancement for viewing. It can be grey-level thresholding, level slicing and contrast stretching, and of course the manipulations of spatial filtering, edge enhancement, multi-image manipulation, band ratioing, principal components, vegetation indices — all of these come under it. What exactly is meant by contrast stretching? Suppose the working grey-level values occupy only a narrow range; it may be difficult to discriminate between two classes because they are very close — there is not much difference in their radiance values, yes or no? But I have the full range available: I can divide into 0 to 255, whereas the data has occupied hardly — how much — 100 to 159. If I can redistribute the values so that the contrast gets better, I can discriminate the classes. There are various methods — linear stretch, histogram stretch, non-linear stretch — and let us not get into the details, but this is what is essentially done, and only for visual purposes. When you do digital classification you use the original data, not the enhanced data, since the enhanced values no longer represent the true radiances; enhancement is only for us to judge whether we can discriminate between two classes by eye. If you want an idea of level slicing: suppose I want just water and land — you slice into two, that's it; that is how you can improve, and within land you then come back to discrimination by contrast stretching. As for spatial filtering, some filtering methods help.
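The 100-to-159 example above can be sketched as a linear contrast stretch: the occupied grey range [lo, hi] is remapped onto the full display range [0, 255]. The DN values are illustrative.

```python
# Linear contrast stretch, purely a visual aid: the occupied grey range
# [lo, hi] is remapped to 0-255. Digital classification would still use
# the original, unstretched values.

def linear_stretch(dn_values, lo, hi):
    scale = 255.0 / (hi - lo)
    out = []
    for dn in dn_values:
        stretched = (dn - lo) * scale
        out.append(int(round(min(255.0, max(0.0, stretched)))))  # clip to 0-255
    return out

print(linear_stretch([100, 120, 159], lo=100, hi=159))  # [0, 86, 255]
```

A difference of 20 DN in the original (100 vs 120) becomes 86 display levels after the stretch — which is exactly why the classes become easier to separate by eye.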
This is not a lecture on digital image processing — in this one lecture I have to cover the fundamentals as well as the processing — so I am only touching on these. Sometimes filtering and edge enhancement will improve the image. Multi-image manipulation: band ratioing. Suppose there is vegetation on the hill slope facing the Sun, and the same species of vegetation on the other side of the hill, away from the Sun. Will both give the same radiance at the detector? They won't. But can we then classify them as different features? We would be wrong to. This is where band ratioing helps: if you take the ratio between two bands, say visible and infrared, the ratioed values remain the same irrespective of the slope — the effect of topography can be removed. That is how band ratioing helps. Principal components is data compression: the bands are mostly correlated, and when the correlation is very high you can reduce the data content by principal components. And vegetation indices — near-infrared to red ratios and the like — vegetation gets enhanced in such ratioing; a vegetation index is also a ratio. So each of these is used for particular purposes. Finally our interest is in classification, either by visual interpretation or by digital classification. We can have supervised, unsupervised and hybrid classification, and finally we assess the accuracy. There are other methods too, like artificial neural networks nowadays, but the standard methods have been supervised and unsupervised classification for quite a long time; neural networks have come in more recently. What is supervised classification? Supervised classification means I must already know something about this area.
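The topographic cancellation that band ratioing gives, described above, can be seen in a toy computation. All the radiance numbers and the illumination factor are invented for illustration; the point is only that shading scales both bands by roughly the same factor, so their ratio survives.

```python
# Band ratioing: topographic shading scales every band by roughly the same
# illumination factor, so the ratio of two bands (e.g. near-IR to red) stays
# nearly the same for the sunlit and the shaded slope. Numbers are invented.

def band_ratio(nir, red):
    return nir / red

sunlit_nir, sunlit_red = 80.0, 20.0        # vegetation on the sun-facing slope
shade = 0.5                                # illumination factor on the far slope
shaded_nir, shaded_red = sunlit_nir * shade, sunlit_red * shade

print(band_ratio(sunlit_nir, sunlit_red))  # 4.0
print(band_ratio(shaded_nir, shaded_red))  # 4.0 -> same ratio, slope effect gone
```

The raw radiances differ by a factor of two between the slopes, yet the ratio image assigns both the same value — so the same species classifies as one feature.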
Suppose there is a water body nearby, there is vegetation nearby, and there are other features. If you have some pixels which you know to be water, vegetation and so on, those are given as input, and pixels with similar radiance values are then identified as belonging to that class — the class of water, say. If you had 100% ground truth there would be no need for classification at all, yes or no? But can you have 100% ground truth? No. Can we reduce the ground truth needed? Yes — with improving spectral and spatial resolution you can reduce it, but you cannot eliminate it. So you take the known pixels: that is supervised classification. From their radiance values you extract the means, the variance–covariance matrix and so on, then process the whole data set looking for similar values, and those areas are marked as that particular class. The decision rules can be minimum distance to mean, parallelepiped, or Gaussian maximum likelihood — we will see. Unsupervised classification is K-means, ISODATA. Unsupervised means I have no prior idea about this area, but I would still like to classify. You group the pixels into clusters by their radiance similarities, and later you identify what each cluster means — a post-classification visit to the site is required, whereas in supervised classification the ground-truth collection is a pre-classification exercise. If you have the ground truth beforehand, it is supervised; if you don't, you say these are the different clusters I could identify, then go back to the site, see what each cluster represents, and give it a title. Hybrid classification is a combination of the two, and the classification accuracy can always be assessed. So for a particular feature, in every band, what are the values we get? Reflectance values — all digital numbers now; the reflectances are converted to digital numbers.
These are the various layers we have: the same pixel can have different values in the various bands, and reflectance values can be converted to digital numbers and vice versa. Now how do you classify? Suppose you have plotted only one band of data, band 4, with the reflectance values as digital numbers, and you see the clusters: urban, forest, water and so on. A pixel at point 1 — into which class does it go? Different methods are used, whether supervised parallelepiped, minimum distance to mean, or the statistical maximum likelihood classification. Where do you think the pixel at point 1 should go — class C? By minimum distance to mean, yes, it should go to class C. But if you take the distribution into account, where does it go? That is the parallelepiped, because you have bounded the distributions; and the answer may keep changing. Finally with maximum likelihood, where does it go? According to the probability distribution we draw — that is maximum likelihood. So you can see there are various methods for supervised classification; these are some of them. In unsupervised classification you don't know how the classes are represented; sometimes a cluster may even be a combined class. If you draw these plots they give you the discrimination from class to class, and the spectral radiance plots also show which classes overlap and which can be seen independently. And how do you state the classification accuracy you are finally claiming? That can be done only with some ground truth — with purely unsupervised classification you can't, unless you again collect some.
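The contrast between minimum distance to mean and maximum likelihood described above can be sketched in one band. The class means and standard deviations are invented training statistics, not values from the lecture; the point is that a pixel can be nearer to one class's mean yet far more likely under another class's distribution.

```python
import math

# Two supervised decision rules in one band, for simplicity.
# Minimum-distance uses only the class means; maximum likelihood also uses
# the spread, so a wide class can "win" a pixel that lies nearer to a tight
# class's mean. Training statistics below are invented.

classes = {                      # name: (mean DN, std DN)
    "water":  (20.0, 2.0),       # tight cluster
    "forest": (60.0, 15.0),      # wide cluster
}

def min_distance(dn):
    return min(classes, key=lambda c: abs(dn - classes[c][0]))

def max_likelihood(dn):
    def log_gauss(c):            # log of the Gaussian density (up to a constant)
        mu, sd = classes[c]
        return -math.log(sd) - (dn - mu) ** 2 / (2 * sd ** 2)
    return max(classes, key=log_gauss)

pixel = 38.0                     # nearer to water's mean, but 9 sigma away from it
print(min_distance(pixel))       # water  (|38-20| = 18 < |38-60| = 22)
print(max_likelihood(pixel))     # forest (18/2 = 9 sigma vs 22/15 = 1.5 sigma)
```

This is exactly the lecture's point: the pixel "keeps changing" class as the decision rule takes more of the distribution into account.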
So what do we do? Take the ground truth: the 233 pixels in the water column are ground truth, and what has been classified is given along the rows. An area with these pixels as ground truth has been classified, and the pixels assigned to each class are given along the row. The producer's accuracy is 226/233, about 97%, and the user's accuracy is 226/239. The overall accuracy is based on the diagonal terms, because a correctly classified pixel belongs to the same class in both row and column — that is the diagonal — so that is called overall accuracy. What is producer's accuracy and what is user's accuracy? Producer's accuracy asks: of the pixels I have as ground truth, how many did the classification find? The person who uses the map will never know that; it is for me, as the producer of the map, to satisfy myself how good my map is. Whereas as a user, you check the ground truth against what is produced: 239 pixels are classified as water, but actually only 226 are water, so the accuracy in terms of what you can expect from the map is the user's accuracy. Only the producer can compute the producer's accuracy, to satisfy himself how good his map is; but the users are the judges finally, in terms of usage — and the overall accuracy comes from the diagonal. This matrix is called the error matrix, also termed the confusion matrix. The various applications of remote sensing: land use and land cover, geological and soil mapping, agricultural applications, rangeland, forestry, urban, wetland, wildlife, archaeological, environmental, oil exploration, mineral exploration, and many others.
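The producer's, user's and overall accuracy computations above can be sketched from an error matrix. Only the water figures (226 correct, 233 ground-truth pixels, 239 classified pixels) are from the lecture; the second class is invented just to complete the matrix.

```python
# Error (confusion) matrix: rows = classified as, columns = ground truth.
# Water figures match the lecture; the "other" class is filled in for
# illustration only.

matrix = {
    ("water", "water"): 226, ("water", "other"): 13,   # 239 classified as water
    ("other", "water"): 7,   ("other", "other"): 100,  # column water totals 233
}

def producers_accuracy(cls):
    correct = matrix[(cls, cls)]
    column_total = sum(v for (r, c), v in matrix.items() if c == cls)
    return correct / column_total      # of the ground truth, how much was found

def users_accuracy(cls):
    correct = matrix[(cls, cls)]
    row_total = sum(v for (r, c), v in matrix.items() if r == cls)
    return correct / row_total         # of the map's claims, how much is right

def overall_accuracy():
    diagonal = sum(v for (r, c), v in matrix.items() if r == c)
    return diagonal / sum(matrix.values())

print(round(producers_accuracy("water") * 100, 1))  # 97.0  (226/233)
print(round(users_accuracy("water") * 100, 1))      # 94.6  (226/239)
```

The same 226 correct pixels give two different percentages — one against the column total for the producer, one against the row total for the user — which is the whole distinction the lecture draws.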
It is not an exhaustive list; there can be many more. Everything under the sun is supposed to be possible, but everything has its limitations. For land use there is the USGS classification: residential, urban and so on, with level-2 sub-classification under each, and you can even go to level 3 or 4 of your own — it depends on the spatial resolution you have. Now, since we are water resources engineers — that is the purpose of this course — let me confine myself to that, instead of the other applications, ahead of what we are going to discuss tomorrow. How can remote sensing be useful? Can it give the type of drainage? Yes. And from the type of drainage I should be able to say what type of area I have — whether it is dendritic, parallel, or structurally controlled drainage — possible, yes or no? Based on that I know whether the runoff is more or the infiltration is more: more drainage means more runoff and less infiltration; less drainage, vice versa. So I learn about the terrain from sparse versus intense drainage. And from the type of erosion you can infer the soils: whether the channels show a rectangular cut or a V-cut, and so on; less cutting suggests clay, then silt, then sand — it depends on the soil type. Finally, what are we interested in for water resources and environmental management? One is the geographic distribution — how is the water geographically distributed; then the quantity; the quality; groundwater locations if possible; and finally additional water-resources applications like flood forecasting and disaster management. There can be many other applications. What can we actually do with these? Geographic distribution — definitely we can, because you can identify water.
In black and white, water looks a little blackish; in the infrared it becomes absolutely black, because everything gets absorbed in the infrared. That is also how noise removal works here: if water shows some radiance value in the infrared, that can only be noise, and the simplest algorithm is to deduct that value from every pixel — more or less, that is the simplest noise-removal algorithm. Quantity? Quantity is harder, but yes: from the spread of the reservoir, using the area–elevation curves from the reservoir survey — remember the area–elevation–capacity curves? Once you know the spread, you get the storage, provided no siltation has taken place; otherwise you have to conduct soundings to find the depth that still remains, so that you can update the area–elevation–capacity curves. Based on that you should be able to say something about quantity, in terms of reservoir storage. Then about quality: it can be inorganic turbidity, or biological turbidity in terms of algal concentration, or it could be oil slicks; and of course there can be others like BOD. Is it possible to detect all of these? Turbidity can be detected — in nephelometric turbidity units — because with more turbidity, what happens to the reflectance? Finally it is only reflectance we measure. More turbidity, more reflectance: you can see it in band 4 even in satellite images — water near the coast has more reflectance, and as you go deeper it becomes black. That is inorganic turbidity. What about biological turbidity? More algae, more reflectance, particularly in the infrared — though beyond a certain point it may saturate, because the detector itself cannot detect more.
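The simplest noise-removal rule mentioned above (often called dark-object subtraction) can be sketched as follows. Deep clear water should reflect almost nothing in the infrared, so whatever value it does show is taken as an offset and subtracted from every pixel in the band. The DN values are invented for illustration.

```python
# Dark-object subtraction: the darkest pixel in the band (ideally deep clear
# water in the infrared) is assumed to be pure haze/sensor offset, and that
# value is deducted from everything. DN values below are invented.

def dark_object_subtract(band, dark_value=None):
    offset = min(band) if dark_value is None else dark_value
    return [max(0, dn - offset) for dn in band]

infrared_band = [6, 6, 40, 90, 7]           # the 6s and 7 are deep-water pixels
print(dark_object_subtract(infrared_band))  # [0, 0, 34, 84, 1]
```

After the subtraction the water pixels sit at (or near) zero, as the physics says they should, and every land pixel is corrected by the same haze offset.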
The third one we said was oil slicks. The more the oil-slick thickness, the more the reflectance, so you should be able to estimate even the slick thickness from that. And can BOD be detected? Ultimately that is what the environmental engineer or scientist wants, yes or no? Directly it cannot, but indirectly — suppose you can spot the fish that are surviving, that is an indirect indication; there is biophysical modelling and all that, so you can say yes, indirectly it can. As long as the signal is not affected in terms of absorption, transmission and reflectance, nothing is seen: if the reflectance changes, you can detect it; otherwise you cannot. BOD values, unless they turn into turbidity, cannot be detected, because it is dissolved oxygen — there could be a slight effect, but not enough. That is how we can talk about quality. We have talked about distribution, quantity and quality; what about groundwater location — can it be detected? Can the signal go below the ground? Even microwave penetrates only a few millimetres, not much. But indirectly, yes: a particular type of vegetation is there, a particular pattern of drainage structures, and based on those we can talk indirectly about groundwater location. And the additional applications: floods, particularly with microwave, which penetrates cloud — all-weather satellites — so you should be able to do cloud indexing and estimate the potential rainfall; that is what they do. Post-flood you can see the extent of damage very easily, and for disaster management you can plan how to give relief to the people even before the event, and after the flood or any other disaster. So there are many additional applications, and to illustrate:
you can see how clay-laden and silt-laden water differ in intensity from clear water — that is how turbidity is detected — and how the reflectance curves change with algal concentration relative to clear water. And thermal plumes are another important thing: most of these discharges are released at night, because otherwise others can see them, but remote sensing can always see the thermal signature — near these reactors, for example, you can see the temperature, the clear difference between the black and white, and from that you should be able to get the temperature difference; even aquatic life is being detected this way. Finally, my friends, we started a little late, but I did not want to keep the next speaker waiting. A picture is worth a thousand words, yes or no? Pictures can safely convey information about position, size and the relationships between objects. The basic advantages of these images: one is the vantage point — it is from a height; then permanent recording and stop action — that is why, though P. T. Usha must have retired from competitive athletics, we can still see her picture competing in international fora even today, because of stop action and permanent recording; then broadened spectral sensitivity — what we see is only the visible, but the sensors go beyond; and then increased spatial resolution, and so on. So remote sensing, in addition to GPS and the rest, is going to play a role — but we should know its limitations also. It is not that everything is rosy; there are limitations, and we should utilize it to the best of its capabilities. As Vinoba Bhave said, spirituality plus science is Sarvodaya, science minus
spirituality is Sarvanash, and spirituality minus science is suicide. So can we club these two — science and sustainable management? That would probably be the ultimate. You are the best judges of to what extent to use it and to what extent the natural systems get affected. We will talk a little more about it tomorrow.