Hello everyone, welcome to the next lecture in the course Remote Sensing: Principles and Applications. In the last lecture we discussed in detail the data collection procedure and the concept of spatial resolution, including how objects much smaller than a pixel can still be detected. In this lecture we move on to the next concept: spectral resolution, or the spectral characteristics of a remote sensing system. What is meant by the spectral characteristics of a system? They are given by, first, the number of bands: how many bands does the system have? Then, for each band, the central wavelength and the bandwidth around it. For example, if we say Landsat 7 has 8 bands including the panchromatic band, each of those bands has its own central wavelength and a bandwidth surrounding it. Take, for example, the range 0.45 to 0.52 micrometers: this is the bandwidth, and the central wavelength of data collection, lambda C, is around 0.49 micrometers. This is the central point around which we have the bandwidth: with a half-width of about 0.035 micrometers on either side of 0.49, we cover roughly 0.45 to 0.52 micrometers. So the central wavelength, the bandwidth surrounding it, and the number of bands together define the spectral characteristics of a system. Among these, the spectral bandwidth, 0.45 to 0.52, that is 0.07 micrometers, defines the spectral resolution. It is easy for new learners to confuse the number of bands with spectral resolution; we have heard young students say that the number of bands equals the spectral resolution. That is not the case.
Spectral resolution is actually defined for each band: the bandwidth designated for each band gives us the spectral resolution. So a band from 0.45 to 0.52 micrometers has a bandwidth of 0.07 micrometers. If you compare this with a band from 0.45 to 0.47 micrometers, a bandwidth of 0.02 micrometers, then the 0.45 to 0.47 band has a finer spectral resolution than the 0.45 to 0.52 band. Always keep this in mind: the bandwidth defines the spectral resolution. Now, we are talking about a central wavelength and a bandwidth surrounding it. What exactly are these, and why are they important? A spectral band with a given bandwidth will be most sensitive to incoming radiance within that particular bandwidth. Please see this figure. On the left side is the ideal sensor response: how, in theory, a sensor should respond to the incoming signal. Let us take the same example: let the central wavelength lambda C be 0.49 micrometers, the lower edge 0.46 and the upper edge 0.52, so this band collects data between 0.46 and 0.52 micrometers. In the ideal case, the sensor should produce no output for radiance at wavelengths less than 0.46 micrometers, and similarly no output for wavelengths greater than 0.52 micrometers; the output should be 0. On the other hand, any radiance arriving within the bandwidth should produce a signal with 100 percent efficiency: the sensor should collect all the energy coming within the band and filter out all the energy coming outside it. So in theory, outside the bandwidth the output of the sensor is zero, and within the bandwidth the sensor works with 100 percent efficiency.
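To make the ideal case concrete, here is a minimal sketch, not taken from any sensor specification: the ideal "boxcar" response of a band with edges at 0.46 and 0.52 micrometers is 1 inside the band and 0 outside it.

```python
# A minimal sketch of an ideal "boxcar" spectral response: full
# response inside the band, zero response outside. The band edges
# (0.46 and 0.52 um) follow the lecture's example.
def ideal_response(wavelength_um, lower=0.46, upper=0.52):
    """Return the ideal relative response at a given wavelength."""
    return 1.0 if lower <= wavelength_um <= upper else 0.0

print(ideal_response(0.44))  # below the band: 0.0
print(ideal_response(0.49))  # at the central wavelength: 1.0
print(ideal_response(0.55))  # above the band: 0.0
```

A real band, as we see next, never has these perfectly sharp edges or a flat top.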
In reality this ideal response does not occur. Rather than a perfect box-like response, we get something like what is shown in the right-side figure here. It shows the spectral response of different sensors: TM, the Landsat Thematic Mapper, the Landsat ETM+ sensor, and so on; the numbers are the different bands within each sensor. You can see that the response is not a perfect box like the one given here: the curve rises gradually at the edges, and the relative response does not reach 1, so the sensor does not work with 100 percent efficiency even within its designated bandwidth. The response within the band is also non-uniform: for a band from 0.45 to 0.52 micrometers, the efficiency may be close to 100 percent at 0.46 micrometers but lower at 0.47 micrometers, and so on. So the incoming energy is not measured at 100 percent of its level but with some weight: a sensor with a relative response of 0.8 at a given wavelength collects only 80 percent of the incoming energy there. So essentially, in reality a sensor's response is not like the ideal case; there are differences even within the given bandwidth. That is what is shown here. Different sensors have different kinds of response in different bands. Look at band 8 of the ETM+ sensor: as I told you, this is the panchromatic band, which collects data over the entire visible and NIR portion of the spectrum with a much improved spatial resolution. Around 0.5 micrometers its response is pretty low, only about 0.6, and the response rises only around 0.8 micrometers and so on. So within the selected bandwidth itself, sensors have differing responses to the incoming radiance.
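This weighting can be sketched in a few lines. The numbers below are illustrative, not a real sensor's response curve: the value a band records is the incoming signal weighted by the relative spectral response at each wavelength, then normalised.

```python
# A minimal sketch (illustrative numbers, not any real sensor's RSR):
# the band-effective value is the incoming signal weighted by the
# relative spectral response (RSR), then normalised by the total RSR.
wavelengths = [0.45, 0.47, 0.49, 0.51, 0.53]   # micrometres
rsr         = [0.10, 0.80, 1.00, 0.85, 0.15]   # relative response, 0..1
radiance    = [5.0,  6.0,  7.0,  6.5,  6.0]    # incoming signal (arbitrary units)

# Wavelengths where the response is low contribute little to the output.
band_value = sum(r * w for r, w in zip(radiance, rsr)) / sum(rsr)
print(round(band_value, 3))  # 6.457
```

Notice the result sits closest to the radiance near the band centre, where the relative response is highest.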
So we see that even within the band there is some complexity: the sensor does not respond uniformly across all wavelengths within the selected bandwidth. How, then, do people define the bandwidth? People assume the sensor response curve is actually a Gaussian curve, which leads to the concept known as full width at half maximum (FWHM); we will see it in detail in the next slide. This is how the spectral bandwidth of a sensor is normally defined. Take the relative spectral response as an intensity between 0 and 1, and assume the response is Gaussian in nature, as given here; it looks very similar to a normal distribution. At the peak the relative response is 1, that is, 100 percent intensity. If the response is a Gaussian function, there will be two points at which the curve falls to 50 percent intensity, a relative response of 0.5. Project these points onto the x-axis, the wavelength axis: the width between them is the full width at half maximum. In this example, the width over which the sensor produces at least 50 percent efficiency lies between 0.7 and 0.8 micrometers, that is, a 0.1 micrometer or 100 nanometer bandwidth. This is the bandwidth we are interested in for this particular sensor. Here we assume the spectral response is Gaussian in nature: for a given incoming signal, the sensor has a Gaussian curve of response, so we can define a peak point with 100 percent efficiency and a range within which the sensor works with at least 50 percent efficiency.
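The FWHM of a Gaussian follows directly from its standard deviation. A small sketch, using the lecture's 0.7 to 0.8 micrometer example with an assumed centre at 0.75 micrometers:

```python
import math

# A minimal sketch: for a Gaussian spectral response the full width
# at half maximum is FWHM = 2 * sqrt(2 * ln 2) * sigma (~2.355 sigma).
# The centre (0.75 um) and target FWHM (0.1 um) follow the lecture's
# 0.7-0.8 um example; sigma is chosen to match.
center_um = 0.75
sigma_um = 0.1 / (2 * math.sqrt(2 * math.log(2)))

fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma_um
lower, upper = center_um - fwhm / 2, center_um + fwhm / 2

def gaussian_response(lam):
    """Gaussian relative response with peak 1.0 at the band centre."""
    return math.exp(-((lam - center_um) ** 2) / (2 * sigma_um ** 2))

print(round(fwhm, 3))                      # 0.1 um bandwidth
print(round(lower, 2), round(upper, 2))    # 0.7 0.8
print(round(gaussian_response(lower), 3))  # 0.5: exactly half the peak
```

The last line confirms the defining property: at the band edges the response is exactly half of the peak response.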
So we find the wavelengths at which the sensor produces 50 percent efficiency, and those define the spectral bandwidth, 0.7 to 0.8 micrometers in this example, assuming the sensor has a Gaussian response. Now, how are these bands actually realised? We need sensors with multiple bands, so for each band we use particular optics, filters and appropriate detector elements to make it sensitive to one particular bandwidth. That is, there will be a detector with a series of optical elements and filters in front of it; those filters remove the unwanted wavelengths. Say a detector has to sense only energy coming in the green wavelengths, roughly 0.5 to 0.6 micrometers: it will have filters in front of it which remove all other wavelengths so that only green is allowed through, and the detector element itself will be sensitive to this particular wavelength range. So in order to get multispectral measurements or observations, we use optical filters and specifically made detector elements, which help us collect data over different bands. All these elements taken together define which wavelengths the sensor actually responds to, and since many elements are involved, the complexity of the system increases; hence the response does not match the ideal case we saw earlier. I told you that in the concept of full width at half maximum we assume the sensor response is Gaussian in nature: we can identify a central peak with 100 percent efficiency and two points with 50 percent efficiency, and take the whole width at 50 percent efficiency. In reality this is not possible for all sensors; an example is given in this particular slide.
This slide shows the relative sensor response of different bands: this is for the green band, this for the red band, this for the NIR band, and these for the shortwave infrared bands, for several different sensors; the sensor names are listed here, and they are primarily sensors launched by ISRO. We can see from this that the spectral response of a sensor may not be even close to Gaussian: instead of a single peak like this, we may have a flat curve, a high amount of variation within it, a long tail, and so on. So ISRO scientists used a concept known as the method of moments to define the spectral bandwidth. We are not going to go into the details of the method of moments; interested people can look at the textbook Fundamentals of Remote Sensing by Dr. George Joseph, or at the paper referred to in this particular slide, Pandya and others, 2013. The basic idea is that people use statistical measures very similar to mean and variance to define the spectral bandwidth. The reason I am telling you this is that full width at half maximum is not the only way to define the spectral bandwidth. If you look at some specifications, they may quote the bandwidth as the full width at half maximum; we then have to assume that the bandwidth is defined following the FWHM concept. What that means is that within that particular bandwidth we are sure the sensor will have at least 50 percent of its peak response. Let me just go back to the earlier slide: if the bandwidth is defined as 0.7 to 0.8 micrometers, then within it the response of the sensor will be at least 50 percent of its peak value.
So within this particular band the response will always be at least 50 percent, and that bandwidth is what we define as the full width at half maximum. The method of moments is slightly different: it works on concepts very similar to calculating the mean and variance of a continuous distribution function; we are not going to go into detail about it. Just remember that full width at half maximum is not the only way of expressing spectral bandwidth, though it is the most commonly used; the method of moments is another way some people use for defining the spectral bandwidth of sensors. So now we have some idea of what the spectral bandwidth means: within that particular bandwidth the sensor has its maximum response, and if we take the concept of full width at half maximum, the sensor collects data with at least 50 percent efficiency within the bandwidth. Now, what is the influence of that bandwidth on remote sensing data collection? The location of the central wavelength and the bandwidth around it will help us identify or classify different features or objects on the Earth's surface. Let us see an example. In this particular figure, the thick dark line is the spectral reflectance of some object: we have measured its spectral reflectance at different wavelengths and plotted it as a thick dark black line. Below it we have the spectral response functions of different bands: here we have many bands, each with a 10 nanometer spectral bandwidth, while here we have a smaller number of bands with a 50 nanometer spectral bandwidth. We are following the concept of full width at half maximum, assuming all bands behave in a Gaussian way. So here we have more bands with a much finer spectral resolution, 10 nanometers, whereas here it is 50 nanometers.
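Though we skip the details, the moment idea can be sketched briefly. This is only an illustration of the general principle of response-weighted mean and variance; the exact formulation in Pandya and others, 2013 may differ, and the response values below are invented.

```python
# A minimal sketch of the moment idea behind defining a bandwidth for
# a non-Gaussian response (illustrative numbers; the exact method in
# Pandya et al. 2013 may differ from this simple version).
wavelengths = [0.50, 0.52, 0.54, 0.56, 0.58, 0.60]  # micrometres
response    = [0.05, 0.60, 0.95, 1.00, 0.70, 0.10]  # relative response

total = sum(response)
# First moment: response-weighted mean wavelength (a "centre").
mean_wl = sum(w * r for w, r in zip(wavelengths, response)) / total
# Second central moment: response-weighted variance of wavelength,
# whose square root gives a spread from which a width can be derived.
var_wl = sum(r * (w - mean_wl) ** 2
             for w, r in zip(wavelengths, response)) / total
std_wl = var_wl ** 0.5

print(round(mean_wl, 4), round(std_wl, 4))
```

The weighted mean plays the role of the central wavelength, and a multiple of the weighted standard deviation plays the role of the bandwidth, without ever assuming the curve is Gaussian.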
So here we have a larger number of samples collected: each black dot is one spectral sample that a band collects along the spectral reflectance curve, and because we have many bands with a much finer spectral resolution, the curve is densely sampled. Here, however, we have only four bands, so there are only four samples. Using these four samples, just imagine joining them: how clearly can we recover the spectral reflectance curve? We get something like this, 0.1 here, 0.2 here, 0.3 here, 0.4 somewhere here, and the shape we get is much different from the true curve; we are actually missing the absorption features. But if we use a fine spectral resolution with many bands, we can sample the spectral reflectance curve with much more clarity. So having many bands with finer spectral bandwidths helps us properly collect spectral information about the object. Say you have 100 bands for the same pixel, and imagine those 100 bands are very close to each other, contiguous bands. If you plot the reflectance in all 100 bands against wavelength, we will be able to more or less exactly replicate the spectral reflectance curve for that particular pixel. That is why hyperspectral remote sensing developed. Hyperspectral sensors have a large number of contiguous bands with finer spectral resolution: a large number means hundreds of bands, say 100 or 200, not 10 or 20; contiguous means the bands are continuous, 0.45 to 0.46, 0.46 to 0.47, 0.47 to 0.48, and so on; and the spectral bandwidth is much finer, not 100 nanometers but 10 nanometers or even narrower.
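The effect of band width on what we can see can be sketched with a toy reflectance curve. The curve below is invented purely for illustration: a narrow band sitting on an absorption dip records the dip, while a broad band averages across it and the feature is lost.

```python
# A minimal sketch (the reflectance curve is made up for illustration):
# sampling the same curve with a narrow band versus a broad band.
# The broad band averages across the dip and the feature disappears.
def reflectance(lam_nm):
    """Toy curve: flat at 0.40 with an absorption dip (0.15) at 660-680 nm."""
    return 0.15 if 660 <= lam_nm <= 680 else 0.40

def band_average(center_nm, width_nm):
    """Average reflectance over one band of the given width."""
    lams = range(center_nm - width_nm // 2, center_nm + width_nm // 2 + 1)
    vals = [reflectance(l) for l in lams]
    return sum(vals) / len(vals)

narrow = band_average(670, 10)   # 10 nm band sitting on the dip
broad  = band_average(670, 100)  # 100 nm band spanning the dip

print(round(narrow, 3), round(broad, 3))
```

The narrow band reports the dip's true reflectance, while the broad band reports a value pulled up towards the surrounding curve, so the absorption feature is smeared away.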
So the advantage of having more bands with a finer bandwidth is that, for a particular pixel, if you collect reflectance in all the bands and plot them, x-axis wavelength and y-axis reflectance, you can properly replicate the spectral reflectance curve for that pixel. Let us imagine the entire pixel is covered with vegetation. Then we can replicate the spectral reflectance curve of vegetation for that pixel, and almost all the absorption features will be obtained very clearly. So this is really important: where to put your bands, and how many bands, will define what we get as our spectral output. This slide is actually another example of the importance of spectral resolution. Here is the spectral reflectance curve of vegetation: this is how it looks across different wavelengths, and this axis is the reflectance. Let us say we are collecting data using a fine spectral resolution sensor with multiple bands, so there are many bands, each dot representing one spectral sample, and let us assume the spectral resolution is fine, of the order of say 10 nanometers. If you join each dot, we can replicate the spectral reflectance of vegetation properly; even joining them linearly, we may miss some features, but we still capture the major absorption features. Here, on the other hand, the bands are non-contiguous and quite broad, so the spectral resolution is coarser; in the first case the bands are contiguous, continuous bands, but in this case they are non-contiguous. If we just join these samples, we will not be able to recover the proper spectral reflectance curve of vegetation.
For vegetation it may be okay: even a smaller number of bands provides a lot of information. But for other features, especially for earth science people who work in the direction of minerals, identifying different types of rocks, each and every minute absorption feature in the spectral reflectance curve is very important. We will see a little more detail about absorption features in later lectures, but what I want to say is that the number of bands, the central wavelength of each band, and the spectral bandwidth, that is, essentially the spectral characteristics of the system, define the amount of proper spectral information we collect, because we collect data only within that particular spectral bandwidth. You can think of it as analogous to spatial resolution: in spatial resolution, whatever features are present within the IFOV are averaged out and we get one single value, right? The same concept applies here: whatever the spectral reflectance within a given bandwidth, it is averaged. Let us say a feature has a reflectance curve like this, and let this be the spectral bandwidth, from lambda min to lambda max: we will finally get one average reflectance recorded in our sensor for the band, and we will miss this fine absorption feature. Now imagine we use a very fine spectral bandwidth like this instead: in this case the reflectance may be somewhere here, because averaging over a fine bandwidth stays close to the curve. Say now I have 4 such bands, 1, 2, 3, 4, giving rho 1, rho 2, rho 3 and rho 4; if you plot rho 1, rho 2, rho 3 and rho 4, the result traces the curve like this.
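The rho 1 to rho 4 example above can be sketched as follows. The curve is again invented for illustration: four contiguous narrow bands recover the shape of the dip, while one broad band over the same interval averages it away into a single rho.

```python
# A minimal sketch of the rho_1 ... rho_4 example (the curve is a toy,
# invented for illustration): four narrow bands trace the absorption
# dip, while one broad band over the same range hides its shape.
def reflectance(lam_nm):
    """Toy curve: flat at 0.40 with a dip to 0.20 between 520-540 nm."""
    return 0.20 if 520 <= lam_nm <= 540 else 0.40

def band_mean(lo_nm, hi_nm):
    vals = [reflectance(l) for l in range(lo_nm, hi_nm + 1)]
    return round(sum(vals) / len(vals), 3)

# Four contiguous 20 nm bands across 490-570 nm: rho_1 ... rho_4.
rhos = [band_mean(490, 510), band_mean(510, 530),
        band_mean(530, 550), band_mean(550, 570)]
# One broad 80 nm band over the same interval: a single rho.
rho_broad = band_mean(490, 570)

print(rhos)       # the middle bands dip, tracing the feature
print(rho_broad)  # one averaged value; the dip's shape is gone
```

Plotting the four rho values against their band centres reproduces the dip, exactly as the lecture describes; the single broad-band rho cannot.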
So if you use many fine spectral bands, we are able to capture this absorption feature with some accuracy; if we use a single broad band, everything is averaged out and you get only one rho as output. So the spectral characteristics of a system define the information we collect in the spectral space: you can think of the energy arriving within a particular spectral bandwidth as being averaged out. The finer the spectral resolution, the better we can capture the finer absorption patterns within the spectral reflectance curve. Anyway, when we deal more with spectral reflectance curves, these concepts will become much clearer to you all. This slide gives the spectral characteristics of some sensors: the ETM+ sensor, the ASTER sensor, the MODIS sensor, and so on. Here you can see a band written as PAN, which is panchromatic. Panchromatic generally means all colors: pan we can think of as all, chroma means color. Essentially, a panchromatic band spans a very broad spectral range, say 0.5 to 0.9 micrometers here. The reason for having a panchromatic band is to sacrifice spectral resolution in order to get a high spatial resolution. We will talk about this in detail, the tradeoff between different resolutions, but for now, a panchromatic band is most likely a band in the visible and NIR range with a very wide bandwidth, spanning the entire visible and part of the NIR, where we concentrate on increased spatial resolution at the cost of spectral resolution: by increasing the spectral bandwidth we lose spectral resolution, but that helps us achieve a high spatial resolution.
Maybe we will see this in detail in later classes, but for now, a panchromatic band means a band with a large spectral bandwidth in the visible and NIR portion. Mostly, blue is left out, because we know blue undergoes a lot of scattering and produces haze in the imagery; so in the PAN band people remove blue, and green, red and NIR are combined to produce one single band. The advantage is that the PAN band has a high spatial resolution in comparison to the other bands. So Landsat has a panchromatic band, and for each sensor the slide shows the number of bands the system has and their spectral bandwidths. You can see how wide band 7 and band 5 are in Landsat ETM+, whereas in MODIS most of the bands are pretty narrow; so those MODIS bands have a finer spectral resolution than the Landsat or ASTER bands. Now, the selection of the spectral characteristics of a sensor. The choice of spectral bands, the spectral characteristics of a sensor, is not arbitrary: you cannot just decide that a sensor should have 10 bands with certain bandwidths; each choice has to be made with proper reasoning. Some of the factors one should consider when selecting the spectral characteristics of a system are, first, that the feature of interest should be identifiable in the band. Take vegetation as an example: in a later class we will see that vegetation can be better studied using a combination of NIR and red bands, so the sensor should essentially work with NIR as one band and red as another. So the feature of interest should be identifiable in that particular band: select the bands, select the wavelengths, in which the feature produces a distinctive signature.
Second, the selected bandwidth should be located within an atmospheric window. For example, vegetation shows characteristic water absorption bands: the presence of liquid water inside a leaf produces characteristic absorption features around 1.4 and 1.9 micrometers. But the atmosphere also contains water vapor. So if you put a sensor band around, say, 1.4 micrometers and send it to space, whatever energy comes from the vegetation will be absorbed by the atmosphere itself, and we will not get any signal at the sensor. So if you want to study certain characteristics of an object, the band should be selected such that it does not fall in an atmospheric absorption band; it should be in an atmospheric window, where the atmosphere is clear and transmits radiation. If we want to study the Earth's surface features, the bands we select must be located in atmospheric windows. Third, proper sensor elements should be available. Suppose we have decided on the bandwidth we are going to use for studying a feature, and it lies in an atmospheric window, no problem; but we should still have a detector that can collect that energy, right? If no detector on Earth can collect energy in that particular band, it is a waste. So we should have proper materials from which we can produce sensors that can collect energy at that particular wavelength. In the olden days, when sensor technology was evolving, only certain wavelengths could be observed; with the latest technological developments we can now observe in many different wavelengths, so it is okay now, but back then it was a major problem. People would say, use this particular band, this is of interest to us, but a sensor would not be available in that band.
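The atmospheric window check can be sketched as a simple containment test. The window limits below are approximate, commonly quoted ranges, used here only for illustration:

```python
# A minimal sketch: checking whether a candidate band lies inside an
# atmospheric window. The window limits are approximate, commonly
# quoted ranges (in micrometres), used here only for illustration.
ATM_WINDOWS_UM = [(0.4, 1.3), (1.5, 1.8), (2.0, 2.6), (3.0, 5.0), (8.0, 14.0)]

def in_atmospheric_window(lower_um, upper_um):
    """True if the whole band [lower, upper] lies inside one window."""
    return any(lo <= lower_um and upper_um <= hi
               for lo, hi in ATM_WINDOWS_UM)

# A band centred on the 1.4 um water-vapour absorption region fails,
# while a red band around 0.63-0.69 um passes.
print(in_atmospheric_window(1.35, 1.45))  # False: absorption region
print(in_atmospheric_window(0.63, 0.69))  # True: atmospheric window
```

This is exactly the screening step described above: a band straddling the 1.4 micrometer water absorption region would return no usable signal from the surface.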
Say I give you a normal camera and ask you to take a photograph in the NIR band: will it be possible? No, right? The camera would need an NIR filter or an NIR-sensitive sensor, basically a sensor that produces output for NIR signals; only then can we collect that energy. So sensors should be available in that particular band. Next, the bands should be uncorrelated to the extent possible. Uncorrelated in the sense: let us say we have two neighboring bands, and whenever the DN in band 1 increases, the DN in band 2 increases, and whenever the DN in band 1 decreases, the DN in band 2 decreases. Then you have a similar response between band 1 and band 2, so the outputs we get from the two bands will be more or less the same; we get an equal amount of information from both bands. Essentially, the bands we choose should give us new information. As I told you, hyperspectral sensors are really wonderful, but hyperspectral data has the problems of redundancy and high dimensionality: there are hundreds of bands, but the information in the bands is not all unique; it is repetitive in nature, with similar information present in many different bands, while each band occupies a large amount of memory in our storage and display systems. So having a large number of bands has its own problems; it occupies a lot of storage. We should choose the correct number of bands: if I can identify the feature with only 10 bands, I do not need 100 bands; I choose those particular 10 bands. That is why, in hyperspectral data processing, even though there are hundreds of bands, people select a few bands that are useful for the selected application.
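Redundancy between bands can be quantified with a simple correlation measure. A minimal sketch with made-up DN values: when two bands rise and fall together, their correlation is close to 1 and the second band adds little new information.

```python
# A minimal sketch (made-up DN values): Pearson correlation computed
# from first principles. A band that tracks another closely is
# redundant; a band that varies independently carries new information.
band1 = [10, 20, 30, 40, 50]   # DN values over five pixels (assumed)
band2 = [12, 22, 33, 41, 52]   # tracks band1 closely -> redundant
band3 = [50, 10, 40, 20, 30]   # varies independently of band1

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(band1, band2), 3))  # close to 1: redundant pair
print(round(pearson(band1, band3), 3))  # far from 1: new information
```

This is the kind of measure band-selection methods build on: keep bands with low mutual correlation, drop the rest.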
Only then will the data processing go smoothly; otherwise a lot of memory is consumed and a lot of storage is wasted. So the bands should be uncorrelated, because each band produces a large volume of data: if you download one Landsat image, each scene is about 1 to 1.2 gigabytes. Imagine how many gigabytes, or terabytes, of data are produced every day to cover the entire Earth; each and every added band adds to the required storage. So the bands we put in a system should provide us information that is as unique as possible; they should not be closely related to each other. If you come across a digital image processing course, they will explain concepts like identifying similarity between bands, removing redundant data, and so on. So the bands we select should be uncorrelated to the maximum extent possible. The factors listed in this particular slide will help us select spectral bands for any given application. As a summary, in today's lecture we have seen concepts related to the spectral characteristics of a system. With this we end this lecture. Thank you very much.