So I want to start with something everybody's familiar with: a typical RGB digital camera. Normally this type of sensor takes information in three bands, red, green, and blue, which come from this portion of the electromagnetic spectrum. Of course, the sun is putting down energy across this entire spectrum, and we've designed these cameras to capture information in that little portion. Does anyone know why we grab information from this portion as opposed to any other section of the spectrum? Because we can see it. It's not a trick question. We want the pictures that we take with the camera to look like what we see with our eyes, right? And so out of all that spectrum, we take the information from the visible portion.

What makes the visible portion special? Do you remember atmospheric windows from a remote sensing course? This is the same spectrum here on the x-axis, and percent transmittance on the y-axis. What we can see is that some portions of the spectrum aren't able to transmit through the atmosphere, while these regions you see here in blue are very transmissive. The visible spectrum lands right in this nice thick part of that atmospheric window. So we've got all kinds of this visible energy coming down, and essentially this is why our eyes have evolved to see this portion of the spectrum: there's a lot of energy available because it's able to transmit through the atmosphere.

We can look at the actual wavelengths for the different portions of the visible spectrum. I've got red, green, and blue up there. Red, for example, goes from 620 to 750 nanometers. The sensor in our digital cameras figures out the signal strength for each portion of the spectrum, in the red, green, and blue bands. In each one of these, dark areas represent where we didn't get a lot of signal; lighter areas represent where we got more signal. This is an image from San Joaquin Experimental Range, which is one of our sites in D17.

So then what we can do is take the computer and color the red band with red, the green band with green, and the blue band with blue. We take those three bands, color them, stack them up, and we get an RGB image, which is our true color image. Right? What's important to realize, though, is that we don't have to do that. Instead of making an RGB image, I could take the information I got from the blue band and color it red, the information from the green band and color it green, and the information I got from the red band and color it blue. And I get this. Doesn't look like much, right? Doesn't look like our eyes would see it. But with the ability to separate these bands, we have the freedom to do this if we want. Same thing here: I can make another false color image where I've colored the different bands differently. Of course, normally we don't want to do that, because we want to see the image like our eyes see it.

So we've got a true color image. We're good, right? That's all we want: an image like our eyes see it. But there's this whole other portion of the electromagnetic spectrum that has tons of other information in it. So we actually want to build detectors that are able to see these other portions of the spectrum. And I've got examples of Landsat 8 and Landsat 7 here, which are two satellites.
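To make the band-stacking idea concrete, here's a minimal Python sketch (not from the talk; the arrays are hypothetical stand-ins for the three camera bands) showing that a true color and a false color composite are the same three bands, differing only in which display channel each band is assigned to:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-ins for the red, green, and blue band images;
# each is a 2-D array of signal strength (dark = little signal).
rng = np.random.default_rng(0)
red_band, green_band, blue_band = (rng.random((100, 100)) for _ in range(3))

# True color: red band -> red channel, green -> green, blue -> blue.
true_color = np.dstack([red_band, green_band, blue_band])

# False color: same data, channels reassigned (blue band shown as red, etc.).
false_color = np.dstack([blue_band, green_band, red_band])

fig, axes = plt.subplots(1, 2)
axes[0].imshow(true_color)   # what our eyes would see
axes[1].imshow(false_color)  # doesn't look like much, but it's the same data
plt.show()
```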
You can see that they're collecting information in these other portions of the spectrum. Here's our visible portion way down here, and as we move up, near infrared, and middle infrared up here. So we can actually look at all the different bands and where they're collecting. Again, here's the visible portion for Landsat 8, and here near infrared and shortwave infrared.

Does anyone here from Colorado recognize this? This is Fort Collins, Colorado, just north of here. This was the first image ever taken with Landsat 8. And if we look at, say, a reflectance curve from Landsat 8, which is the values from all the different bands, we can get something like this for different features. I think we have forest here. If we look at the visible, it's here, and then this goes into the near infrared and middle infrared. So we can get values of the reflectance across all the different bands that Landsat takes. You get these curves, but since there are so few bands in Landsat 8, and because they're so wide, you get these very sharp breaks in them.

Down here in Landsat 8 we have the panchromatic band, which, if you look at its band limits, actually contains most of the visible portion, and it's at a little bit higher spatial resolution. Does anyone remember from remote sensing classes or experience why we were able to get a better spatial resolution, 15 meters, on the panchromatic band, as opposed to the lower spatial resolution, 30 meters, on the other bands? Sorry? Yeah, so there is a little bit less in the visible, but this actually covers the same values as these here. Yes? Yeah, exactly. That's essentially what's happening. If you think about, say, a CCD array detector, you can make a whole bunch of different elements that detect the energy in each of those individual bands, but the smaller you make them, the less energy they're going to be able to detect. Right? And so if you want to get a good strong signal (remember, all of these are going to have a little bit of noise in them), you have two choices. You can make your detector elements bigger, or you can accept more energy from a larger portion of the spectrum. And if we go back to the Landsat bands, these are a little bit narrower; say, blue, 433 to 453 nanometers, while the panchromatic band goes all the way from 500 to 680. So we're collecting a larger portion of the spectrum, which is more energy, which gives us the ability to have smaller detector elements and get a better spatial resolution. So when we're thinking about hyperspectral data, it's important to remember this tradeoff: as your spatial resolution goes up, your spectral resolution is going to go down, and vice versa.

So then if we look at this graph again, we see here are the NIS bands. NIS, as Nathan mentioned this morning, stands for NEON Imaging Spectrometer. This is our hyperspectral sensor. We are collecting everything between here and here, and the main difference between us and something like Landsat is that we're collecting 426 bands, each 5 nanometers wide. So we're covering that whole range with a whole bunch of very narrow bands. The detectors we have have a 1-milliradian instantaneous field of view. That means when we're flying the plane at our nominal altitude of about 1,000 meters, we're looking at an area of the ground for each pixel that's about 1 meter across. So that's our tradeoff between spatial resolution and spectral resolution.
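That 1-meter figure falls straight out of the small-angle approximation. A quick check, using the altitude and field-of-view values from the talk:

```python
# Ground footprint of one detector element (small-angle approximation):
# pixel size ≈ IFOV (radians) × altitude above ground (meters).
ifov_rad = 1e-3        # 1 milliradian instantaneous field of view
altitude_m = 1000.0    # nominal flight altitude above ground

print(ifov_rad * altitude_m)  # -> 1.0 meter ground pixel
```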
And so the advantage now of having all of these different bands is that we get these really nice detailed curves of the different features in our images. That Landsat curve looked very disjointed, non-continuous, but now, with the hyperspectral data, we get these very nice detailed curves of the different features. One thing that might be interesting to note here is that we have something like snow in this last curve. Remember, when we see snow, we see it as white because it's reflecting really well in this visible portion of the spectrum. But if you go way down here into the infrared portion, snow does not reflect very well. So if your eyes were able to see in this portion, then snow would look black to you instead of white.

And as Nathan mentioned this morning, it's sort of a matter of scales. The advantage of our system is that since we're flying it on an airplane, we're able to get these nice 1-meter pixels and that really good high spectral resolution, those 5-nanometer bandwidths, compared to something like AVIRIS, which has a lower spatial resolution because they're generally flying higher, or satellite data, because those sensors are orbiting in space. At this point, the NEON Imaging Spectrometer, the one that we're flying, has, I believe (someone can correct me if I'm wrong), the highest spatial and spectral resolution of any hyperspectral sensor that's available right now. And certainly nobody other than us is delivering this data to the public.

So when we look at these curves, something that we're generally very interested in here at NEON is the vegetation curve. This one here is green grass. This is the reflectance curve of vegetation. You can see it has this nice big jump right here as we're going into the near-infrared portion of the spectrum. And just like those RGB bands, we can actually pull out one of those bands from the near-infrared portion, look at the energy signal strength that we get from it, and make an image out of it. So here I took out the red band and put in the near-infrared band: I colored the near-infrared band red, and then put in the green for green and the blue for blue. And we get this great false color image that shows us really well how the vegetation is doing. As the quote on the top says, healthy vegetation absorbs blue and red light energy to fuel photosynthesis, so a plant with more chlorophyll is going to reflect more near-infrared energy. This is really useful information to us. It's one of the reasons we don't want to just stick to those RGB bands; we want to look in those other portions of the spectrum to get additional information.

So how is that useful? Well, I zoomed into a portion of this image that you see right here and grabbed a spectrum from one of the pixels in this tree. You can see there's this nice big jump here, so we can look at this tree and say it's probably doing well; it's probably fairly healthy. Then we can look at this tree right beside it. It looks a little bit more blue, or a little bit more purple, because we didn't have as strong an energy signature in that near-infrared band that represents healthy vegetation. And when we look at this curve, we can see it doesn't jump as high. So there's something in particular going on with this tree.
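If you want to pull a per-pixel spectrum like that yourself, here's a sketch of the idea with h5py. The file name, dataset names, and scale factor are assumptions for illustration; check the actual layout of a NEON reflectance file before relying on them:

```python
import h5py

# Hypothetical file and dataset names; inspect the real file first,
# e.g. with h5py's visit() or a tool like HDFView.
with h5py.File("NIS_reflectance_flightline.h5", "r") as f:
    refl = f["Reflectance"]             # assumed shape: (rows, cols, 426 bands)
    wavelengths = f["Wavelengths"][:]   # assumed band centers in nanometers

    row, col = 500, 750                 # pixel of interest, e.g. a tree crown
    spectrum = refl[row, col, :].astype(float)

# Reflectance is commonly stored as scaled integers (assumed here:
# 10,000 = 100%, consistent with the tarp plot where 4,000 means 40%).
spectrum /= 10000.0
```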
So this data allows you to go in, because we're at 1 meter spatial resolution and 5 nanometer spectral resolution, and actually identify problems with individual trees, or differentiate issues between species in the same plots.

And like Nathan mentioned this morning, we often also go out and set out tarps. This is the same image that he showed you from Ordway-Swisher Biological Station, where we put out our gray reflectance tarp here and the black reflectance tarp here. Each one of these tarps is about 10 by 10 meters. They're made by a company called Tracor, and they cost about $10,000 each. Does anyone want to take a guess why they cost so much? No, they're not, although that would be a good idea. Anyone else want to take a guess? So the answer is there: constant reflectance across all wavelengths. Like I mentioned, snow looks white to you, right? But if you go into this region, its reflectance falls off. The special thing about these tarps is that the reflectance is constant across all wavelengths, for both of them. So this is the white tarp here. It's supposed to be about 50% reflectance. You can see the axis is scaled 4,000, 6,000, but since the reflectance values are scaled by 10,000, that's actually 40%, 60%. So our 50% is about right here, and you can see that it's pretty constant across all the wavelengths. Our dark tarp is about 3% reflectance, and you can see that that also stays constant across all the wavelengths. And Sarah, you mentioned this morning that there was also a red tarp. If you look really, really closely, you can see one little red dot right there, because we happened to fly over before they had unfolded that particular tarp. We actually flew this line again later in the day, and then you can see the red tarp later on in the day.

So something that's interesting to think about, since we're at 5 nanometer bands: I just want to go back to one of our first slides. You can see I've got the wavelengths listed for the red, green, and blue bands, and we notice they're really wide, right? Red goes from 620 to 750 nanometers. But we're at 5 nanometer bands with our sensor. So what do we choose for red? Does anybody know? Yeah, I don't know either. Whatever you want, really. We could choose the center one. We could choose everything in there and try to average it somehow. We haven't really settled exactly how we're going to do that yet. You'll see in a second, when I introduce the products we're making right now. But certainly this could be a good area to look into, because we don't exactly know what to do here. It's not an issue with something like Landsat, where you just have the one band; that's all you have, and you have to use the information that you have. There's a little bit more freedom here, where you could pick some particular region inside of that range that may be more interesting to you.

So we have several products that we produce from the imaging spectrometer. The first is just pure reflectance. We collect the data, and we go through a process called DN to radiance, which changes the raw digital counts collected by the instrument into at-sensor radiance. Then we orthorectify that radiance, which Nathan talked about this morning, and then we do an atmospheric correction, which gets us to reflectance. Currently we provide all the reflectance data by flight line in HDF5 format. It's a big data set.
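On the question of what to choose for red: one simple option, sketched below, is to take the band whose center is nearest a target wavelength, with averaging over a range as the obvious alternative. The band spacing and start wavelength here are illustrative assumptions, not the instrument's exact calibration:

```python
import numpy as np

def nearest_band(wavelengths_nm, target_nm):
    """Return the index of the band whose center is closest to target_nm."""
    return int(np.argmin(np.abs(np.asarray(wavelengths_nm) - target_nm)))

# Illustrative band centers: 426 bands at 5 nm spacing starting near 380 nm
# (assumed values for the sketch).
wavelengths = 380 + 5 * np.arange(426)

red_idx = nearest_band(wavelengths, 648)  # one band standing in for "red"

# Alternative: average all bands inside the nominal red range, 620-750 nm,
# e.g. cube[:, :, mask].mean(axis=2) for a (rows, cols, bands) cube.
mask = (wavelengths >= 620) & (wavelengths <= 750)
```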
It's got a lot of metadata; we need to include a lot of things in there, which is why we went with HDF5. As Leah mentioned before the talk started, that flexibility potentially requires some community development so that this data can be universally imported. We're not there yet. So we actually had to write an importer for the ENVI software package to allow people to open and view our data in ENVI. And Adrian, at the back of the room, is our summer intern, and he is creating an importer for QGIS. ENVI is a fairly expensive software package; QGIS is free and open source. So we got Adrian to do that project to really open up these data sets to a larger community. Again, the data is created at one meter spatial resolution and delivered by flight line, and in the future we'll be tiling this data into one kilometer by one kilometer tiles.

Once we have the reflectance, we also create vegetation indices. This is an L2 product derived from the surface reflectance. Currently we're delivering it in ENVI binary format, but we've just switched over to delivering it in GeoTIFF. It's also by flight line at one meter spatial resolution. You can see a bunch of different examples here from an area also in San Joaquin. I just want to highlight the first one, the normalized difference vegetation index, NDVI. It's a very popular vegetation index, and it's the only equation I'll put in here, I promise. NDVI is near infrared minus red over near infrared plus red. So remember I mentioned earlier, what do we choose for near infrared and red? This is what we've chosen. For red, whatever was closest to 648 nanometers, just that single band. For near infrared, whatever was closest to 859. For NIS 1, that was band 54 and band 96. Is that the best choice to make right now? I don't know. We'll find out. And similar to what I showed you before, you can also take that NDVI band and think about it like any other band. I swapped the NDVI in for red, which gives us this nice false color image with NDVI as red instead of the red band.

Next, we've developed water indices. This is also an L2 product derived from the surface reflectance, currently in ENVI binary format, but we're switching over to GeoTIFF. This is by flight line at 1 meter spatial resolution. You can see the true color image at the beginning there and then all the different water indices. These are to help you figure out things like water stress or drought happening with the vegetation.

And then, when we can, we also collect field ASD spectra. This is one of our summer interns from last year, Catherine. She came out to the field with us, and she was using a handheld spectrometer; you can see we've got some tarps in the background there. This essentially gives us the same type of information as the imaging spectrometer, except there's a lot less of the error and uncertainty associated with having it on an airborne platform, because it's handheld. We can point the viewing device at leaves or the ground or the tarps and get really nice reflectance curves. Those are reflectance curves for some different vegetation on the bottom. Unfortunately, we're not doing this at all the sites now; it's nice information to have if you want to try to validate some of the airborne data. I've listed the sites where we do have it, and then this year we'll be traveling to D2 and D5 to collect this data as well. So, coming soon; we don't have these yet.
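Since NDVI is the one equation in the talk, here's how it looks as code. The band indices follow the choices just described (nearest to 648 and 859 nanometers, band 54 and band 96 on NIS 1; whether those are 0- or 1-based in your file is something to verify), and the reflectance cube itself is a hypothetical input:

```python
import numpy as np

def ndvi(cube, red_band=54, nir_band=96):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.

    cube: (rows, cols, bands) reflectance array (hypothetical input).
    Band numbers follow the talk's NIS 1 choices; check whether your
    file indexes bands from 0 or from 1.
    """
    red = cube[:, :, red_band].astype(float)
    nir = cube[:, :, nir_band].astype(float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon guards against 0/0
```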
We've added an additional vegetation index, the soil-adjusted vegetation index, or SAVI. We added that one because we use it to create our LAI and F-PAR products. So soon, probably this month, we'll be starting to distribute SAVI, LAI, and F-PAR from the spectrometer. Coming later this year is total biomass derived from the spectrometer, as well as surface albedo. And then we're going to mosaic all of these products into one kilometer by one kilometer tiles.
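For reference, SAVI as usually defined (Huete's soil-adjusted index) just adds a soil-brightness term to the NDVI form. A sketch, noting that L = 0.5 is the common default in the literature rather than a value stated in the talk:

```python
def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index.

    L is the soil-brightness correction factor; 0.5 is the usual
    default, not necessarily the value used for the NEON product.
    """
    return (nir - red) / (nir + red + L) * (1 + L)
```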