Hello, everyone. Welcome back to the next lecture in the course. As I mentioned at the end of the last lecture, we have finished discussing the fundamental principles that a new student is expected to know about remote sensing, which will pave the way for a further understanding of the subject. We have had a very broad overview, with detailed concepts in some topics, which will help a new student dig deeper into the course and understand its beauty and the various ways in which remote sensing can be used. We are now reaching the final part of the course, where we will briefly get introduced to various datasets, the data portals from which we can download them, and some data processing tools. Essentially, remote sensing is a combination of both theory and practicals; by practicals I mean that we need to process the data to get some information. Let us say we have one particular application: we want to monitor how much the land cover has changed within a city or a state. All government agencies will do this repeatedly, because they want to understand how much the landscape is changing. Forests may be diminishing, urban areas may be expanding, agriculture may be expanding, and so on. Naturally, different agencies, be they government or private, will want to know all these things. This is just one example of an application of remote sensing technology. In order to get this application done, we should be in a position to identify some remote-sensing-based datasets: these are all the data that are available to me, and I can use some of them. Then we should do certain processing using certain tools and certain algorithms. Only after that will we be able to achieve our application.
So, a major part of the success of remote sensing lies in identifying what data to use and understanding what processing we should do to ultimately achieve our objectives. That is what we are going to see starting from this lecture and in the coming few lectures. In this lecture I will briefly introduce you to various remote sensing datasets that are available to us, the data portals from which we can download them, and certain processing tools which are openly available for us to use. This is not an exhaustive list; it is next to impossible for anyone to discuss all the datasets available from remote sensing systems, along with the various algorithms, processing tools and data portals, because the amount and variety of data that we get is so enormous. I will just briefly introduce some of the commonly used datasets, some of the commonly used data portals, and two openly and publicly available data processing tools which we can use. We will not go into detail about anything: I will not describe the data characteristics, we will not discuss how to process the data, and we will not discuss any software. But we will get introduced to the concepts, and those who are really interested can always use the introduction provided, along with the web links and other resources, to learn on their own and expand their knowledge in this particular topic. First we will discuss remote sensing datasets. Actually, the current time period in which we are living can be considered a golden era for Earth observation. We can call it a golden era because the amount of data, and the variety of data, that we get is almost unimaginable. We are generating petabytes of data every day, taking into account all the different satellite systems around the globe.
Most of the datasets that we may need for our applications are available publicly, unless we want really specific data, like very high spatial resolution data, or unless we are extremely specific about the data characteristics for a particular project. Most of the datasets for our applications are available in the open domain, and the processing tools are available in the open domain as well. We need not buy expensive commercial tools. This is not to discount any commercial tools, but what I intend to convey is that the datasets, the processing tools, and also the knowledge needed to decide which data we need, what processing we should do, and which tool we should use, is all available in the public domain. That is why I said this is a golden era for remote sensing, or Earth observation as a whole. So, what we are going to see here are some of the commonly used, publicly available remote sensing datasets which have the potential to be used in a large number of applications. First, we should know the levels of data processing. Whenever a satellite or a sensor acquires data, the data is not directly transferred to the user. The data from the satellite will be transmitted to a ground station, and the ground station will receive it. The received information will contain not only remote sensing data but also other information about the satellite: orbital characteristics, satellite health characteristics, and how the sensors are performing. All this important technical and sensitive information will be present. So, naturally, the space agencies will collect this data, process it to various levels, and then provide it to us.
Also, the data about the Earth's surface coming in from the satellite may contain a lot of errors: errors in radiometric quality and errors in geometric quality. All these things have to be corrected before the data is used by the end user; that is one thing. We have also discussed calibration in earlier lectures. There should always be a relationship between what a satellite measures and what we intend to measure. What the satellite measures is radiance, but what we get in the image is something like a DN. So how do we convert it? We should be able to properly quantify what is being measured from the data contained within the image; there should be a relationship. All those calibrations may be applied only after receiving the data; they are generally not applied on the satellite itself. A satellite will simply observe and transmit the data back to the ground station, and the systems there will apply the calibration coefficients and the equations that convert the data into a more meaningful form. So, the data coming directly from a satellite will be of little use to end users. It has to undergo several levels of processing, and normally each space agency which provides data will do a certain level of processing before the data is delivered to the user. First we will understand the basic levels of data processing. First is level 0, the raw data from the satellite sensor. Whatever is coming in from the satellite is called level 0. It will normally not be shared with the user because, as I told you, it contains not only the data about the Earth; the data itself may not be really useful yet. It has to undergo various error corrections, calibrations, and so on, and it may also contain a lot of other information about the satellite and the spacecraft itself.
So, normally level 0 data will not be shared with users; it is the raw data received from the satellite. Then the useful part of the data, from a user's perspective, will be extracted. It will undergo some radiometric calibration, geometric calibration, error correction, and so on. Then we will have level 1 data. A level 1 product is basically the first level of useful information that a user can get. Say we talk about optical bands: red, green, blue, NIR and so on. I told you we will get what are known as digital numbers within the image. So each band will be like a two-dimensional matrix. For example, let us say our system has 5 bands: blue, green, red, NIR and SWIR. Let us say one image covers an area of about 185 kilometers by 185 kilometers at a resolution of 30 meters; this is an example of the Landsat system. You will have multiple 30-meter pixels, and for each band you will have one such matrix. You can think of it in terms of a 2D matrix: rows and columns, with different latitudes along the rows and different longitudes along the columns. Each band produces one 2D matrix, so you will have 5 matrices. This particular information is the remote sensing data, and each pixel will contain a DN value. So, the output in level 1 will be these DNs: you will have 5 different images, each containing a large number of pixels, each with a DN value. From this DN value, we will be able to calculate the radiance recorded at the sensor, because calibration has been done. So, the first level of useful product that we may get is level 1. Within level 1 there can be several sub-levels, 1A, 1B, 1C and so on, which vary between agencies.
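The level 1 picture described above, one DN matrix per band with per-band calibration coefficients to recover at-sensor radiance, can be sketched in NumPy. The shapes and the gain/offset values here are purely illustrative; a real scene would be thousands of pixels on each side, and the real coefficients come from the scene metadata.

```python
import numpy as np

# A Landsat-like scene: 5 bands (blue, green, red, NIR, SWIR), each a 2D
# matrix of digital numbers (DN). Tiny 4x4 grid for illustration only.
rows, cols = 4, 4
scene = np.random.randint(0, 256, size=(5, rows, cols), dtype=np.uint16)

# Calibration: convert DN to at-sensor radiance with per-band gain/offset
# coefficients (hypothetical numbers; real ones come from scene metadata).
gain   = np.array([0.012, 0.013, 0.011, 0.015, 0.010]).reshape(5, 1, 1)
offset = np.array([-0.06, -0.07, -0.05, -0.08, -0.04]).reshape(5, 1, 1)
radiance = gain * scene + offset   # W m^-2 sr^-1 um^-1

print(scene.shape)      # (5, 4, 4): one 2D matrix per band
print(radiance.shape)   # (5, 4, 4): radiance, same layout
```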
There is no hard and fast rule on how many sub-levels there can be. Certain agencies use 1A, 1B and 1C; certain agencies may stop at 1A and 1B; and so on. Each sub-level may indicate a different level of processing. Level 1A may be just calibrated data; level 1B may be calibrated and radiometrically corrected data; level 1C may be calibrated, radiometrically corrected as well as geometrically corrected data; and so on. So there are plenty of different sub-levels. Calibration we know: it provides a relationship between what is stored in the image and what the satellite actually measured. But what are geometric correction and radiometric correction? Radiometric correction removes certain unwanted artifacts within the image. Let us talk in terms of a push broom sensor. I told you that with a push broom sensor there will be hundreds of detector elements oriented in the across-track direction, and as the satellite moves, it collects information across the swath. So there are many different detectors, and each detector may have a slightly different response. Say one unit of radiance falls on a detector: the voltage produced by one detector will not be exactly the same as that produced by the next detector. There can be minor variations. Finally, when you arrange everything, do the calibration and produce an image, you will see what is known as a striping effect. Striping means the image does not look uniform: in a two-dimensional image, one block may appear bright, the next block may appear dark, the next may appear bright again, and so on. This is because of differences in the way the data is recorded by the detectors, due to variations in detector response. The striping effect is just one example of a radiometric error.
Or one full line of data may be missing: one detector might have failed, and so one particular column in the image goes missing. Or, if you are talking about a whisk broom scanner, if one detector fails, one row of data may go missing. This is called line dropout or column dropout. These are some examples of radiometric errors. They have to be removed, and there are various algorithms for doing so. So, these corrections may be applied to the data; that gives radiometrically corrected data. What does geometrically corrected data mean? A satellite just takes a kind of snapshot, and the snapshot has to be properly referenced to the ground: this point in the image corresponds to a certain point on the ground. Unless that relationship is established properly, an image is of very little use; it cannot properly be termed a remote sensing image. There should be a proper relationship between each pixel in the image and the corresponding ground coordinate. And there can be errors there too. Certain points can be misregistered, and there can be geometric distortions, like a circular feature appearing as an ellipse. All these things can happen. So, people use ground-based information, what we call ground control points, or other ground-based information, to correct these geometric artifacts. The data can be, or rather has to be, corrected for these geometric distortions and errors. All these things are done one after the other, and finally we may get a radiometrically calibrated, radiometrically corrected and geometrically corrected product. All of this may come under the same term, level 1, with sub-levels 1A, 1B, 1C, and each agency can have different sub-levels of processing. Then comes level 2 data. A level 2 product is an actual geophysical product. Level 1 just gives us DNs; the DN has to be converted into a meaningful quantity.
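The line dropout error described above, and one simple way to correct it, can be sketched as follows. Replacing a dropped line with the average of its neighbours is one of several standard fixes; the image values here are synthetic, and the sketch assumes the dropped line is not at the image edge.

```python
import numpy as np

# Toy single-band image with one failed detector line (a row of zeros),
# i.e. the "line dropout" radiometric error.
img = np.array([
    [10., 12., 11., 13.],
    [ 0.,  0.,  0.,  0.],   # dropped line (detector failure)
    [14., 15., 13., 16.],
], dtype=float)

# Simple correction: replace each interior dropped line with the mean of
# the lines above and below it. (Destriping, by contrast, would equalise
# the statistics of each detector's output instead.)
bad_rows = np.where(~img.any(axis=1))[0]
for r in bad_rows:
    img[r] = 0.5 * (img[r - 1] + img[r + 1])

print(img[1])   # [12.  13.5 12.  14.5]
```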
Say we are talking about the visible and NIR bands: the intended output for us is surface reflectance. We will be interested in calculating the surface reflectance. We have already discussed the steps in detail: take the DN value, convert the DN to radiance, and do atmospheric correction on the radiance. We briefly discussed the dark object subtraction method for doing this; there are plenty of other, more complex methods, but that gives the idea. So, take the radiance, do atmospheric correction on it, and from the corrected radiance calculate surface reflectance. These are the steps to follow, and that processing might have been done by the space agency themselves. So, level 2 basically means a geophysical product. In the visible, NIR or SWIR domain, we get surface reflectance. If it is thermal infrared, we may get the land surface temperature and emissivity. If it is passive microwave, the level 1 data may contain brightness temperature, because essentially what a passive microwave sensor measures is brightness temperature: we get the output from the antenna, and if we apply all the calibrations to it, we get calibrated brightness temperature; that is level 1. Level 2 may then be a soil moisture product, a vegetation optical depth product, a sea ice thickness product, and so on. So, level 2 products are essentially geophysical quantities obtained after a certain level of processing of the level 1 data. Level 2 gives us geophysical products at the same resolution and with the same data characteristics as level 1. Say, in terms of Landsat, a level 1 product is a 185 kilometer by 185 kilometer swath image.
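The level 2 chain described above, DN to radiance, dark object subtraction, then reflectance, can be sketched for a single band. All constants here are hypothetical placeholders; real gain/offset, solar irradiance, Earth-Sun distance and solar zenith values come from the scene metadata and solar tables, and the single-band Lambertian conversion is only an approximation of a full atmospheric correction.

```python
import numpy as np

# Hypothetical constants for one band (real values come from metadata).
gain, offset = 0.012, -0.05      # DN -> radiance calibration
esun = 1536.0                    # mean solar exoatmospheric irradiance, W m^-2 um^-1
d = 1.0                          # Earth-Sun distance, astronomical units
theta_s = np.deg2rad(30.0)       # solar zenith angle

dn = np.array([[60., 80.], [120., 200.]])   # toy 2x2 DN patch

# Step 1: DN -> at-sensor radiance.
radiance = gain * dn + offset

# Step 2: dark object subtraction -- the darkest pixel is assumed to have
# near-zero true radiance, so its value approximates the atmospheric
# (haze) path radiance and is subtracted everywhere.
radiance_corr = radiance - radiance.min()

# Step 3: corrected radiance -> approximate surface reflectance
# (Lambertian assumption).
reflectance = np.pi * radiance_corr * d**2 / (esun * np.cos(theta_s))

print(reflectance)
```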
So, level 2 has the same characteristics, but instead of a DN each pixel will contain a reflectance value, or a temperature value, and so on. Level 3 is the same geophysical product, but aggregated to a different spatial or temporal resolution. Sensors like MODIS or VIIRS may produce more than one image every day over the same location; they have a very high temporal resolution. But you will not get an image over all ground points every day: certain points may be covered with clouds, and over certain points the look angle may be very wide, leading to very large geometric distortion, and so on. So, what MODIS data products normally give us is temporally aggregated data: in addition to providing a daily image, they also provide an 8-day composite, in which all the observations made during the 8 days are analyzed and only the best observation is retained for each pixel. This is level 3 data, a temporally aggregated product. Similarly, data can be spatially aggregated: the native resolution of the satellite may be 500 meters, but some users may require data on, say, a 0.05 degree climate modeling grid, and MODIS provides such data. So, a spatially or temporally aggregated geophysical product is called level 3. Finally comes level 4. Level 4 means an advanced geophysical product which is not directly observed by the satellite; by ingesting the satellite data into various models, we get those variables as output. For example, using surface reflectance and other ancillary data we can calculate what is known as the leaf area index, one of the important variables related to vegetation growth, or we can calculate biomass, or gross primary productivity, and so on. The satellite will not directly observe these; the satellite observes radiance, brightness temperature, and so on.
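The level 3 aggregation just described, temporal compositing followed by spatial regridding, can be sketched on a tiny synthetic grid. The values are made up, and the per-pixel mean used here stands in for the quality-flag-based "best observation" selection that operational products actually use.

```python
import numpy as np

# Daily observations over 8 days on a small 4x4 grid; NaN marks pixels
# lost to cloud cover (all values are synthetic).
rng = np.random.default_rng(0)
daily = rng.uniform(280., 320., size=(8, 4, 4))   # e.g. daily LST in kelvin
daily[rng.random(daily.shape) < 0.3] = np.nan     # ~30% cloudy pixels

# Temporal aggregation: an 8-day composite keeping, per pixel, a statistic
# over the valid observations (here the mean of cloud-free days).
composite = np.nanmean(daily, axis=0)             # shape (4, 4)

# Spatial aggregation: average 2x2 blocks to mimic regridding the native
# resolution onto a coarser climate-modeling grid.
coarse = composite.reshape(2, 2, 2, 2).mean(axis=(1, 3))

print(composite.shape, coarse.shape)   # (4, 4) (2, 2)
```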
We need to do some processing on this data in order to get to level 4. So, level 4 products are advanced geophysical products produced by ingesting satellite data into models. For each level, the previous level is the input: to produce level 1 you need level 0 data, to produce level 2 you need level 1 data, and so on. It forms a sequential chain, and not all satellites or space agencies provide all levels of data. Some may provide only level 1 data, some may provide level 1 and level 2 data, some may provide all levels, and so on. This is the most generic representation of the levels of data processing available to us. Now, first we will discuss optical datasets. Optical datasets means the visible, NIR and SWIR as well as the TIR domain. What are the commonly available datasets? We can use data from the Landsat series of satellites, one of the most widely used sources of optical data, spanning visible to thermal infrared wavelengths. Then the recently launched Sentinel-2 is again a very good source of optical data. Sentinel-3 has its ocean color sensor and its sea and land surface temperature radiometer; all of these are examples of optical sensors which provide data in the visible, NIR, SWIR and TIR domains. We can also use data from the Indian Remote Sensing (IRS) series of satellites; India has a very broad range of satellites providing different datasets which we can use. So, first of all, based on the application, we should understand which data to use. Let us say we have identified certain optical data: I am going to use data from the IRS series or from the Landsat series of satellites. Once we have decided that, we should properly understand what data to download.
Whether to download level 1, level 2 or level 3 (not level 0), we should analyze and understand which is needed for us. After understanding this, we download the data: we go to the data portal and download it. Each satellite may have its own spatial referencing scheme to refer to a particular piece of data. Say I want an image acquired over Mumbai. I can just go to some data portal, search for Mumbai and download the data. Mumbai is a region, and the city may fall within a small portion of one full image, or be divided across 2 or 3 images, depending on how the satellite data collection happens. Whenever an image is downloaded, there is a kind of referencing scheme: if your location is here, then the data about that particular location is covered in this particular image. Each image is given a proper tag for us to identify it, and that geographical tag is what we call the referencing scheme. There are plenty of referencing schemes: for example, the path-row referencing scheme or tile-based referencing schemes. So, what is a path-row referencing scheme? It is widely used for the Landsat series and the Indian Remote Sensing series of satellites. As I told you, a satellite will go in an orbit of this sort (my sketch here is rather skewed), and this is the ground track of the satellite, the nadir ground track. If you combine this with the swath width, the satellite covers a certain area of ground: as it goes along one orbit, it continuously collects imagery within the swath. What the Landsat and IRS series do is give each orbit a name. As I told you, after a certain number of orbits, the ground track repeats.
So, essentially there can be a unique number for each orbit: orbit 1, orbit 2, orbit 3 and so on. Let us say after 16 days orbit 1 repeats. Within the 16 days, the satellite starts from orbit 1 and may end at, say, orbit 233. There can be that many unique orbits needed to cover the entire globe. Each orbit, or each ground track covered by the satellite, has a unique number called the path. Say there are 233 unique paths to cover the entire globe: whenever the satellite has covered all 233 paths, it has imaged the entire globe, maybe once every 16 days, or once every 24 days, and so on. Along each path the satellite continuously produces a strip of images. We may not need to download the entire strip along the path; that would be an enormous amount of data. So each path is cut into several small segments. Let us say this is the entire orbit; it is divided into small rows: row 1, row 2, row 3 and so on. Each row may cover 185 kilometers, and the path may also cover 185 kilometers across. So this particular block is called one image: say this is path number 1, row number 3, covering a certain area of the globe. That particular image is what you download, say path 130, row 55. This is actually fixed: for a given location, the path and row are fixed, and that location will always be covered in that particular path and row for the Landsat series of satellites. If we talk in terms of MODIS, MODIS will not provide you images in this path-row system, because its orbits differ.
What they have done instead is divide the Earth into what are known as fixed tiles, and each location falls within a particular tile. They give an H and V coordinate, say h25 v07; it is like a two-dimensional coordinate system. We can identify h25 v07 as the MODIS tile over southern India. Like this, each part of the globe is covered in a particular tile. Since the MODIS sensor has a very wide scanning angle, the same point on the globe can be covered from different orbits. From whatever orbit the ground point is covered, the data is processed and populated onto this grid. We will not know from which orbit or from which scan it was imaged when we simply look at the image; that information exists, but we would have to dig deeper into the metadata. So the path information is removed, and everything is converted onto this kind of grid. Why am I telling you all this? Each satellite system may follow its own data referencing scheme, and with that referencing scheme we can download the data over the part of the globe where we need it. This varies with the satellite. Whenever we download data from the Landsat series or IRS series of satellites, we may get an image in the path-row system: path 135, row 52, for example. When we download a MODIS image, we may download it in terms of h25 v07, the horizontal-vertical tile system. Some products provide an entire global image: MODIS level 3 CMG data, SMAP level 3 data and so on provide a global image containing all the satellite observations available for that period. All these different schemes are possible. So, when we search for data, we should know these things to a certain extent.
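The tile referencing just described can be made concrete: given a latitude and longitude, we can compute which MODIS sinusoidal-grid tile contains it. This sketch uses the standard MODIS sinusoidal constants (sphere radius 6371007.181 m, tile size of 10 degrees at the equator); the example point in southern India lands in tile h25 v07, as mentioned above.

```python
import math

R = 6371007.181            # radius of the MODIS sphere, metres
TILE = 1111950.5196666666  # tile width/height, metres (10 deg at equator)

def modis_tile(lat_deg, lon_deg):
    """Return the (h, v) MODIS sinusoidal-grid tile containing a point."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = R * lon * math.cos(lat)            # sinusoidal projection
    y = R * lat
    h = int((x + 18 * TILE) // TILE)       # 36 tiles across, h = 0..35
    v = int((9 * TILE - y) // TILE)        # 18 tiles down,  v = 0..17
    return h, v

# Southern India (e.g. around Bengaluru) falls in tile h25 v07.
print(modis_tile(12.9, 77.6))   # (25, 7)
```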
We can always search with the ground location we need: if I need data over Mumbai, I can just search for Mumbai, and whatever data is available over Mumbai can be downloaded. But knowing the referencing scheme for each satellite can genuinely help us. Now, one example of this multi-level data processing. There is a sensor called VIIRS, the Visible Infrared Imaging Radiometer Suite. It provides data at various levels; in fact, the VIIRS and MODIS sensors combined give us hundreds of different data products. The level 1 data can be calibrated radiance: they do not provide DN values, they provide you radiance, as a swath product. It is available in terms of the swath: this is the nadir track, this is the orbit coverage you get, and every 5 or 6 minutes of along-track information is cut and given to you as one level 1 granule. The level 2 product may be a surface reflectance product on a sinusoidal grid, say tile h25 v07. Level 3 may be an 8-day composite surface reflectance product. Level 4 may be an LAI (leaf area index) product, which ingests this surface reflectance data and other information. So, this is just one example of how data from an optical remote sensing sensor is processed; by optical, I mean, as before, the visible, NIR, SWIR and TIR bands. What I have shown here is an example of a MODIS 8-day LST composite on a CMG grid. This is both temporally and spatially aggregated, so it is a level 3 product. MODIS observes the Earth every 1 to 2 days; in the thermal bands the spatial resolution is roughly 1000 meters. This is aggregated to a 0.05 degree grid and averaged over 8 days. So this map actually tells us the following.
The blue portions are cold areas where the temperature is low, the yellow portions are slightly warmer, and the red portions are especially hot areas, like hot deserts. This is an example of level 3 data. Like this, there are hundreds of different data products available from optical sensors. So, as an overview of this lecture: we are discussing the various remote sensing data products that are available, we discussed the different levels of data processing, and we just started discussing the optical datasets. We will discuss further in the upcoming lectures. Thank you very much.