Welcome to today's lecture in the course Remote Sensing: Principles and Applications. In the last class, we discussed the various ways in which the signal from a terrain feature is affected and modified by the atmosphere and by neighboring pixels. We saw how to calculate the radiance that actually reaches the sensor and the different energy components that add up to it. In this lecture, we are going to see how an image is formed from that radiance, how an image is represented, and how to take the data contained in an image and bring it back to surface reflectance. That is, radiance is recorded by satellite sensors, and from that radiance, as users, we get images; most satellite sensors provide their data to us as images. Using that image, we have to retrieve the reflectance of the surface in order to use it in further applications. How to go about this is what we are going to see in this particular lecture. First, we will see how an image is formed in a remote sensing system. The radiance reaching the sensor is collected through the scanning geometry, the imaging optics, the detectors and so on. We will see the image collection process in detail in later lectures; for now, let us assume that whatever radiance reaches the sensor is collected by the imaging scanner and optics and then reaches the detector. The detector is the element that collects information from the object of interest: the scanner and optics simply gather the incoming energy and pass it on to the detectors, which record it in a meaningful way. The detector detects the signal, which is then passed on to the electronics.
The imaging electronics further process the radiance as detected by the detector. The signal then undergoes analog to digital conversion and is stored as digital numbers. This is essentially what forms the image: a two-dimensional matrix of DNs is a remote sensing image. You can take this in analogy with the normal cameras we use. In olden days we had cameras in which we would load film rolls, take a picture, take the film roll out, develop it in a photographic lab and obtain hard copy photographs. Nowadays we have digital cameras where the images are stored digitally; we transfer them directly to a computer, process them electronically, and either print them or store them on the computer as digital photographs. Remote sensing images are essentially digital images. So what is a digital image and what is an analog image? The basic difference is that an analog photograph, which we used to get from film based cameras, is continuous in both spatial and radiometric terms, whereas a digital image is a sampled and quantized representation of the object space. What do these terms, continuous, sampled and quantized, mean? We will see in the next slide. Let us assume this is the actual object of our interest. In olden days, with a normal film camera, whatever object is here would be photographed by the imaging system. This is the lens aperture; our camera has a lens with a small opening, the aperture, through which the signal passes, and this is the imaging plane where the film is kept.
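As a small illustrative sketch, not taken from the lecture slides, the idea that a remote sensing image is just a two-dimensional matrix of DNs can be shown directly in code (the values here are made up):

```python
import numpy as np

# A remote sensing image is a 2-D matrix of digital numbers (DNs).
# Illustrative 4x4 "image" with 8-bit DNs (0..255); the values are arbitrary.
image = np.array([[ 12,  40,  40,  12],
                  [ 40, 255, 255,  40],
                  [ 40, 255, 255,  40],
                  [ 12,  40,  40,  12]], dtype=np.uint8)

print(image.shape, image.dtype, image.min(), image.max())
```

Every real satellite image is this same structure, only much larger and with one such matrix per spectral band.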
Light from this particular object passes through the optics and gets recorded on the film as an image or photograph. Each and every point in the object space has a corresponding point in the image space. In film cameras, each point in the object space, the object of interest, has a corresponding point in the image space, here the photograph. Similarly for the brightness levels recorded: let us assume it is an olden day black and white photograph, so everything is recorded in different shades of gray. Each and every shade of gray in the object space has a corresponding shade of gray in the image space. That is why I said it is spatially and radiometrically continuous: each point in the object space has one corresponding point in the image space, and each gray level or brightness level in the object space has a corresponding brightness level in the image space, in the olden day film cameras. In digital cameras, the digital imaging system, this is slightly different. Each point in the object space will not be imaged as the same point in the digital image. In a digital imaging system, the sensor is made up of individual detector elements. Let us assume this is one detector element. It sees one small area in the object space: this is the object space, this is the detector element. Based on the distance of the detector element from the object space, it covers a small area of the object space, and the energy contained in that particular area is averaged out and a single value is seen by the detector. In olden days, the energy coming in from each and every point was recorded on film; it was continuous.
In a detector, by contrast, each detector element has a finite size, mostly square in shape, with a finite area. Say the camera is kept here and the object space is somewhere there: based on the distance between the camera and the object, each detector element covers a small area of the object space. It is not a point anymore, it is a small area. The energy coming from the area covered by the detector element is averaged out and detected as one single value. This is one aspect, and it is an example of sampling: instead of taking continuous information, we are now taking an areal average. And the gray levels: each and every gray level within the object space will now be lost. In the image space, based on the incoming energy levels, we will have a discrete set of gray levels. What does this mean? Let us assume a sensor is designed such that it can record radiance in the range of, say, 500 to 1000 watts per square meter per steradian per micrometer. This is the range of radiance that can be sensed by that particular detector; for simplicity, we will even drop the units and say the sensor can record 500 to 1000 energy units. Each sensor has its own in-built quantization levels. What is the quantization level? Basically, it tells us, for each pixel or for each detector element, the number of gray levels that can be represented. Let us say the image has 8 bit quantization. This is a digital image, so everything works in binary format. The storage for each pixel is 8 bits, which is 1 byte, so 2 to the power 8, that is 256 levels of gray, can be represented by one particular pixel. Essentially it will have a value of 0 to 255, or 1 to 256; most likely it will be 0 to 255, depending on how the sensor is configured.
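The relationship between quantization depth and the number of representable gray levels is simply a power of two, and can be tabulated with a small sketch:

```python
# Number of gray levels for common quantization depths:
# a b-bit pixel can store 2**b distinct DN values.
for bits in (6, 8, 10, 12):
    levels = 2 ** bits
    print(f"{bits}-bit quantization -> {levels} gray levels (DN 0 to {levels - 1})")
```

So an 8-bit sensor stores DNs from 0 to 255, a 10-bit sensor from 0 to 1023, and so on, exactly as described above.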
So each sensor has this built in. Whatever the gray level outside, the sensor is programmed with certain quantized, discrete levels: 8 bit quantization, 10 bit quantization, 12 bit quantization and so on. If it is 8 bit, it is 2 to the power 8, 256 different levels of gray; if it is 10 bit, it is 2 to the power 10, 1024 different levels of gray, and so on. That is how it is programmed. So what essentially happens? Whatever the energy coming in from the object space, the sensor has a range, a minimum and a maximum limit, within which it senses the incoming energy. Rather than storing everything as it is, for example, if an energy of 500.01 is coming in, the olden day film could store it as it is, at the same level scaled by the camera properties; everything was continuous. Here in this system there are bins: say 500 to 550 may have one gray level, and so on. That is, for a range of incoming energy, there is one particular gray level associated with it. Hence it is not continuous: whatever the energy level coming in within that particular range, the same number is assigned to that pixel. This process is known as quantization. We will see this in a bit more detail in the next slides. Yes, in this particular slide. Now, let us say this is the object space, and our sensor is scanning this particular line. The sensor starts from A and goes up to B, scanning each and every point along this straight line. First, the detector element senses energy over a small area; the energy collected is continuous, in a form like this. Here everything is white, appearing with a very high energy level, then come the levels of gray like this, then white again here. So from A to B, the energy is collected.
Now it has to be sampled: the continuous stream of energy, as sensed, is sampled at different time intervals. That is what is represented here in the form of dots. This stream of continuous energy came in from the ground. In digital systems, it is not stored as a continuous wave; it is sampled at different points. Say, at every time interval delta t, the incoming energy is sampled and one measurement is made. Now this measurement is compared with the gray levels. Say my system has 8 bit quantization, so 0 to 255 are the values it can save. What it will do is, for these very bright spots it may assign 255; for these spots it may assign something in between, like 128; for these spots it may assign something close to, say, 10. For each range of radiance or energy coming in from the object space, one DN is assigned. So it is not continuous. In the earlier example, say for 500 to 510, whatever energy level comes in within this range, a DN of 1 is recorded; that is how it is programmed. So we lose the continuity. Between 500 and 510 there can be infinitely many energy levels, 500, 500.01, 500.02 and so on; many slightly different energy levels can come in from the object space. That detail is lost: the levels are quantized, and if the energy is anywhere between 500 and 510, the same DN is recorded. That is how the system is calibrated. That is why I said this kind of conversion in energy terms is known as quantization. The continuous stream of incoming energy is quantized into discrete bins and saved in the digital image. That is why a digital image is a sampled and quantized representation of the object space.
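The sampling and quantization process described above can be sketched in a few lines of code. This is a minimal sketch, assuming the hypothetical sensor from the lecture that records 500 to 1000 energy units (units dropped) with 8-bit quantization; the radiance profile is invented for illustration:

```python
import numpy as np

# Hypothetical sensor: records radiance between L_MIN and L_MAX and
# quantizes linearly into 8-bit DNs (values are illustrative, per the lecture).
L_MIN, L_MAX = 500.0, 1000.0
BITS = 8
N_LEVELS = 2 ** BITS  # 256 gray levels, DN 0..255

def quantize(radiance):
    """Clip to the sensor's range, then map linearly onto DN 0..255."""
    L = np.clip(radiance, L_MIN, L_MAX)
    dn = np.round((L - L_MIN) / (L_MAX - L_MIN) * (N_LEVELS - 1))
    return dn.astype(int)

# A continuous radiance profile along the scan line A -> B,
# sampled at discrete time intervals delta-t (here, 11 samples).
t = np.linspace(0.0, 1.0, 11)
profile = 750 + 250 * np.cos(2 * np.pi * t)  # bright - dark - bright
print(quantize(profile))  # DN 255 at the bright ends, DN 0 at the darkest point
```

Note how infinitely many radiance values between two bin edges collapse onto the same DN, which is exactly the loss of radiometric continuity described above.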
Sampled in the sense that we are not seeing each and every point on the ground; we are effectively collecting a few samples from the ground. We are not seeing all the points on the ground as with the olden day film cameras; we are essentially seeing a few selected points and sensing energy only from those points. The energy from those points is also not stored as it is; it is converted into finite, discrete gray levels and stored in the digital image. This is how a digital image is stored, and the number that finally gets stored in the image is what we call a digital number, or DN for short. So when we use data, say from the Landsat satellite, and download a level 1 product, it contains DNs. Olden day Landsat was an 8 bit quantized system, so we will have DN values ranging from 0 to 255 in the image. From that number, we have to do further processing and convert it into meaningful radiance, reflectance and so on. Similarly, if the system has 10 bit quantization, the DN varies from 0 to 1023, that is, 1024 different levels. Essentially, a digital image is nothing but a spatially sampled and radiometrically quantized representation of the object space. This is how it finally looks after the process of sampling and quantization: the object space is converted into a digital image like this, where each pixel now has a DN. The number of bits used to quantize is an indicator of the radiometric resolution of the sensor: 2 to the power 8 is 256 levels of gray, 2 to the power 10 is 1024 levels of gray. The number of bits, 8 or 10, tells us how many different gray levels there will be in the image. As the number of gray levels increases, we will be able to see finer variation in gray levels, if the sensor is perfect.
That is, if the sensor is recording everything accurately, then as the number of gray levels increases, we will be able to see even finer energy changes in the object space, which tells us how precisely we are able to record the incoming energy. This is an indicator of what is known as radiometric resolution. We will see the concepts of resolution in the coming lectures, but take this as an indicator of how precisely we are collecting the data. If we go from, say, 10 bit to 8 bit, we are reducing the precision with which we store the data; if we go from 8 bit to 6 bit, we are further reducing the radiometric precision, and so on. The next important thing we are going to see is what is known as radiometric calibration. What is radiometric calibration? Why is it needed for remote sensing systems? Let us simply take the analogy of our normal everyday photography. What is the purpose of normal everyday photography? We take pictures for storing memories and seeing them at a later time. Say some happy event is occurring, someone is getting wedded; we take pictures of it as a memory. What does it record? It records whatever is there in the scene, in a sampled and quantized manner. It has different levels of gray, and if it is a color image, it stores colors. We will see later how color images are also produced and displayed. But for now, let us assume it has different brightness levels and we are able to see it. Essentially, the purpose for which we take the photograph is just to see and observe what is there. We are basically doing what is known as visual interpretation: we see that someone is getting wedded, the bridegroom is tying the knot, the bride is standing there, he is so and so, she is so and so, and like this we identify and interpret the photograph.
We are not interested in knowing the amount of energy that came in from the object space and got stored in the camera; we are just interested in knowing who is there. We do a basic interpretation, and as long as we are able to see everyone clearly, we are happy with the photograph. In remote sensing, this scenario is entirely different. Different in the sense that we will also be doing some sort of interpretation, trying to identify what is there, but nowadays most remote sensing based applications are quantitative applications. What is quantitative? They need to know, or measure, the energy coming in from the surface; they need to work with those numbers. What is the amount of energy that came in? Using that energy, I want to calculate something else. So nowadays, most applications require a proper measurement of the incoming energy level. The radiometry has to be measured perfectly: the incoming radiance has to be measured and stored as it is. But as I said, when we get an image, we do not get the radiance. It is not recorded as, say, 500.01 radiance units; it is recorded as, say, 127 or 242. We get numbers out of it. What is the relationship between the number in the image, the DN, and the radiance that came in from the ground at the time of image acquisition? Relating these two is known as radiometric calibration. From the DN, I should be able to calculate the radiance that came in at that particular point. Similarly, from the sensor's perspective, for a given level of radiance, a particular DN should be recorded in the image, and the sensor should do this perfectly: if this is the radiance, this should be the DN. From the user's perspective, we get only DNs in an image.
From the user's perspective, if this is the DN, this must be the radiance that got recorded in the system. This relationship between DN and radiance, and vice versa, is established during the process of radiometric calibration. That is what is given in this particular slide. Radiometric calibration is done in order to relate the observed radiance to the DN. Each remote sensing system has band specific values to convert the observed radiance to DN. Say this is the incoming energy from the object space, and this is the anticipated level. Whenever a sensor is sent into space, it is specified that the sensor should be able to observe radiance from this level to this level. Based on the applications for which it is sent, the sensor has a minimum and a maximum range of radiance which it is supposed to detect, and this minimum and maximum differ based on the application. If a sensor is sent to space for sensing oceans, the radiance range will be different; if the sensor is sent for observing snow covered regions, the radiance range will be different, and so on. Based on the applications, scientists and engineers decide that the sensor should detect incoming energy from this number to this number; they fix the range. That range is given here. Based on that range, the signal is not stored as it is; some amplification is done using the system electronics. When they amplify, there is a gain component, so the amplified signal equals some gain times the actual signal, plus an offset. This gain and offset are system properties that determine how the signal is recorded. The amplified signal is then fed into the system's quantization unit, which divides it into different gray levels, splitting the different energy levels into one DN level each.
Say, let us take this particular DN level. Whatever energy comes in within one particular range, say this one, will have one DN; whatever energy comes in within this other range will have another DN; like this, the quantization unit does its work and stores the result in the form of digital images. The identification of this gain and offset, that is, the rule that if this is the incoming energy, then after amplification and quantization this will be the DN, is fixed even before the sensor is launched. This is known as pre-launch calibration. People will know: if this is the radiance, this will be the DN, and vice versa. Unless we know this calibration perfectly, it is impossible for us to do quantitative remote sensing measurements. If the radiometric calibration is not done properly, we will just be left with DNs from the image, and from the DNs it will be impossible to retrieve the actual energy that was measured. So this is a really important step. A very generic way of doing radiometric calibration is given in this particular slide. How is this done? In most systems, especially in Landsat based satellites, Landsat being a series of satellites, it is done like this. This is the range of energy that the sensor can record: radiance at TOA, that is, top of atmosphere. L_TOA min means the minimum the sensor will record; any energy less than this, the sensor will not record. Similarly, each sensor also has an L_TOA max; anything above it, the sensor will not record, it will be saturated. So, the difference between the energy that actually came in and the minimum is multiplied by the gain factor, an offset constant is added, and that is stored as the DN value.
Here Qcal is nothing but the DN, meaning the calibrated and quantized radiance. Qcal is the calibrated and quantized radiance, that is, the digital number, which now has a physical meaning. A digital number is not a mere number anymore; it is recorded based on the calibration done, after which the quantization is done. So what will the gain be? The gain will most likely look like this: the DN max minus the DN min for that particular sensor, divided by the maximum radiance the sensor can detect minus the minimum. Say, for an 8 bit sensor, the DN max may be 255 and the minimum DN to be stored may be 1; let the maximum radiance be, say, 50 units and the minimum, say, 2 units, in units of watts per square meter per steradian per micrometer. So the gain is (DN max − DN min) / (L max − L min): whatever energy level lies within this radiance range is split equally into that many gray levels, or levels of digital numbers. This is one basic way of relating radiance to DN. The gain has units of counts per unit radiance: it is the count produced per unit of radiance. Here we are using a linear relationship: whatever radiance range the sensor has to work over, divide that range equally into a number of different bins and assign values to them. As users, we will be getting DNs; how to get the radiance back from a particular DN value is what is given here. This is the inverse of the gain parameter; in the previous slide we saw the gain as (Qcal max − Qcal min) / (L max − L min). So here, the inverse of the gain parameter is multiplied by the DN recorded for that particular pixel minus the minimum DN that can be stored in the image, and the minimum radiance that can be detected by the sensor is added back.
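The forward and inverse calibration relations described above can be sketched directly, using the lecture's example numbers (L min = 2, L max = 50 units; an 8-bit sensor with DNs from Qcal min = 1 to Qcal max = 255):

```python
# Linear radiometric calibration sketch with the lecture's example values.
QCAL_MIN, QCAL_MAX = 1, 255      # DN range for the 8-bit example
L_MIN, L_MAX = 2.0, 50.0         # radiance range, W / (m^2 . sr . um)

# gain = (DN_max - DN_min) / (L_max - L_min), in counts per unit radiance
gain = (QCAL_MAX - QCAL_MIN) / (L_MAX - L_MIN)

def radiance_to_dn(L_toa):
    """Qcal = gain * (L_TOA - L_min) + Qcal_min, rounded to the nearest count."""
    return round(gain * (L_toa - L_MIN) + QCAL_MIN)

def dn_to_radiance(qcal):
    """Inverse: L = (L_max - L_min) / (Qcal_max - Qcal_min) * (Qcal - Qcal_min) + L_min."""
    return (L_MAX - L_MIN) / (QCAL_MAX - QCAL_MIN) * (qcal - QCAL_MIN) + L_MIN

print(radiance_to_dn(2.0))   # minimum radiance -> DN 1
print(radiance_to_dn(50.0))  # maximum radiance -> DN 255
print(dn_to_radiance(128))   # recover the radiance implied by a mid-range DN
```

Running the two functions back to back shows the quantization loss: converting a radiance to a DN and back recovers only the bin center, not the exact original value.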
So, if we apply this particular formula or conversion, we will be able to get the radiance back from the DN. This is possible only if the gain is set exactly to that particular value and held constant. The gain varies from band to band, because the L max and L min for each band vary; even though DN max minus DN min may be system specific, L max minus L min varies, and with it the gain. For each band, the gain must be fixed and constant throughout; only then is this sort of conversion possible, from radiance to DN and from DN to radiance. And, unfortunately or confusingly, the scaling factor we used here, which is actually 1/g, is also known as gain; people do not call it the inverse of gain, they still call it gain. So we should be really careful about the context in which we are using it: are we using it for radiance to DN conversion, or DN to radiance conversion? Everything is called gain, so we have to use that particular fraction judiciously. In recent datasets, the gain parameter and offset are given in a very simple equation like this. If you look at Landsat data, and maybe I will show it in later classes, for each band they give one multiplying factor and one additive factor. Essentially, take the DN, multiply it by the multiplying factor, add the additive factor, and we get the radiance the sensor actually recorded. This sort of conversion is possible with recent datasets; people are giving such numbers for us to work with easily. As a summary of this lecture, we have seen the process of how a digital image is prepared and stored: a digital image is nothing but a spatially sampled and radiometrically quantized version of the object space.
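The "multiplying factor plus additive factor" form mentioned above is the simplest possible rescaling. Here is a minimal sketch; the factor values are hypothetical, invented for illustration rather than taken from any real metadata file:

```python
# Simple linear rescaling of DN to radiance: L = M * Qcal + A.
# The per-band factors below are hypothetical placeholder values.
RADIANCE_MULT = 0.01   # multiplying (rescaling) factor for this band
RADIANCE_ADD = -0.5    # additive (rescaling) factor for this band

def rescale_dn_to_radiance(qcal):
    """Apply the band's multiply-and-add rescaling to a DN value."""
    return RADIANCE_MULT * qcal + RADIANCE_ADD

print(rescale_dn_to_radiance(150))  # -> 1.0
```

Note that this is algebraically the same linear relation as the gain-and-offset form; the data provider has simply pre-computed the slope and intercept for each band.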
We also got introduced to the concept of radiometric calibration, that is, relating the radiance recorded by the sensor to the DN actually stored in the image, and how to relate the two. So now we are able to convert radiance to DN, and DN to radiance. In the next lecture, we are going to take this DN and do some processing on it so that we get the surface reflectance back. Thank you very much.