My name is Nathan Liso, and instead of the introduction that we talked about on Monday, today I'm going to talk more about the calibration of our airborne imaging spectrometer and some of what goes into trying to produce a high-quality data product from those sensors. What we won't touch on as much is how we verify that these sensors are comparable to each other, although we do work on that as well, because we want any one of our payloads, and the sensors on those payloads, to be usable for the science we desire.

In this talk I'm going to cover four main topics, and since we only have half an hour I'm going to do this at a high level. We'll start with orthorectification, or geolocation: how do we get the pixels where they're supposed to be, and how do we know they're in the right place? Then focal plane characterization, meaning the detector inside the imaging spectrometer that actually collects the light reaching the sensor. And then spectral calibration and radiometric calibration: what wavelengths land where on that detector, and how much of that light gets to those detector elements.

This just shows a few different items in our imaging spectrometer. You probably saw this up in the lab when you did the lab tour; I don't know if Barb talked through it or not, but this is the optical path inside our sensor, just as a reference for when we start talking about things.

For orthorectification, or geolocation, we're trying to get the pixels down on the ground onto the LiDAR data, whether it's a DSM or wherever we get it from. If we don't have NEON LiDAR data, we can also pull a DSM from other sources and use that as the surface to put these pixels down on.

Why calibration? First, we need it to get to our higher-level data products. We have a very good understanding of what the sun puts out, as we mentioned earlier, and the calibration tells us how much light got to the sensor. So, as we mentioned Monday, we can get to reflectance of different objects on the ground, from which the higher-level NEON data products can be derived.

I'm going to start with orthorectification. Maybe you've seen this one before, but this is raw data that comes out of the sensor. There's no location information here at all. It kind of looks like a mess, although not as big a mess as it could be. We have to get to something like this, where each pixel is on a uniform grid, so that if we flew another flight line this way, we could compare pixels from those different flight lines right on top of each other.

One of the key things here is that our sensor is bolted down into the aircraft, so we have to know exactly where the aircraft is at an exact time. If the time is off, then all of a sudden where that sensor is pointing is completely wrong, and we can't get the pixels down onto the right spots on the ground. So one of the tests we do is called a wiggle test: we fly down something that we know is very straight, like this runway here. If the timing is wrong, you get something messy instead of the straight feature you'd expect. But if you know the timing, and you know how this instrument is boresighted, you should be able to get a nice straight runway. I'm going to run through a few examples where we don't know the timing and are just trying to figure it out. So this is a runway, and this one was collected in winter.
There was snow on the runway, so everything is white; it's not intentionally a grayscale image. With this one, we have a timing offset of minus 0.855 seconds. For now we don't really care why; we're just testing it out. And this runway doesn't look straight at all. So now I'm going to iterate through a few different timing offsets and watch as the runway slowly straightens out and the imagery shifts. It's a little hard to see, but you can see there it's gone too far again; now we're crooked the other way. And if we back up, somewhere in here we have a nice straight runway. So if we don't know the timing, this is how we figure it out (a rough sketch of this search appears below). The reason you end up with timing differences like this is which side of the PPS pulse you're triggering off of from the GPS/IMU, whether it's a rising edge or a trailing edge or something like that; different systems work differently. We actually see different timing offsets between two of our imaging spectrometers because they pull it from different sides of the pulse. But the good news is we can correct for this, we do so routinely, and we check it every time we install the sensor in the aircraft.

The next thing we do is our camera model development. For that, we're trying to figure out how these instruments are placed in the plane and how they point out of the plane to the ground. We fly three different flight altitudes, shown on this side: 1,500 meters above ground, 1,000 meters above ground, which is our nominal flight altitude, so it's sandwiched in here, and then 500 meters above ground. At each of these altitudes we fly this triangle pattern, which is what we really care about, and at each altitude we fly it in a different direction. So if this flight line up here is flown this way, at 1,000 meters we fly it this way, and then at 500 meters we fly it this way again. If you push all this data through, it's got to work; everything has to line up at the end of the day. The data shown on the left here is these different flight lines, and I'm just going to flick through them in the interest of time. In this case it's after we've developed the boresight, so everything should line up. You can see those are the narrowest because they're the 500 meter lines: your field of view is the same field of view, but you're closer to the ground, so it's a narrower swath. We take this data, compare it back to the intensity image from the LiDAR system, which is shown here, find features in it, and make manual tie points to the radiance images so we can tie those two things together, and then throw that all into the algorithm that builds up the camera model. So that's a broad overview of that.

I'm going to jump now into spectral and radiometric calibration. I think we saw this image before: this is our typical flight line collect, and I just want to re-emphasize that we do use this data in our processing workflow. It's vital that it's collected before and after every flight line, so we understand how the sensor changes from the lab out to where we're doing the science collects. And then, as we saw before, there's both a raw image frame and a processed radiance image that comes from that raw data set, with associated spectra. Jumping into spectral calibration, there are really two things we need to know. Every pixel on that detector that we saw in the previous image accepts some wavelengths of light, and we have to know what those wavelengths are.
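As a quick aside before getting to the spectral response shapes: the timing-offset search in the wiggle test above amounts to shifting the GPS/IMU time by a candidate offset, re-projecting the runway line, and scoring how straight the runway comes out. Here's a minimal sketch of that idea in Python; the helpers geolocate_line and runway_edge_pixels, the raw_line and trajectory inputs, and the offset range are all hypothetical stand-ins, not the actual processing code.

    import numpy as np

    def straightness_residual(offset_s, raw_line, trajectory):
        # Re-project the raw flight line with the timestamps shifted by offset_s
        # (geolocate_line and runway_edge_pixels are hypothetical helpers).
        geo = geolocate_line(raw_line, trajectory, time_offset=offset_s)
        x, y = runway_edge_pixels(geo)
        # A straight runway edge should fit a line with near-zero residual.
        coeffs = np.polyfit(x, y, 1)
        return np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))

    offsets = np.arange(-1.0, 1.0, 0.005)  # candidate offsets in seconds
    scores = [straightness_residual(o, raw_line, trajectory) for o in offsets]
    best_offset = offsets[int(np.argmin(scores))]  # e.g. near -0.855 s in the example above

The point is just that the offset that makes a known-straight feature come out straight is the one to use.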
Back to the spectral response: given the way this imaging spectrometer is built, the spectral response of each pixel can generally be assumed to be Gaussian. In this case here we have a couple of different measurements, different spectral response curves: the red one is measured, the green is modeled, and the blue is the residual. So in this case a Gaussian spectral response function did a good job and lined up very well. Really, what we want to know is whether that's the right assumption, then where the band center of the spectral response function is, and then the width or the shape, if we're assuming a Gaussian. Beyond that, over here it shows some issues you can run into with spectral calibration. In general, we want a situation like this, where the colors are nice and uniform along specific rows of pixels on the detector, so you have the same wavelength all the way across for a given row of pixels. (A question came up about the axes on these small plots: they're spectral in one direction and spatial in the other.)

One of the ways we do this, starting with where the centers of these spectral response functions are, is to use emission line sources, whether it's a laser or a mercury lamp; there are multiple choices. We put that into the sensor, and since these are emission lines we know exactly what wavelengths they are, and then we try to figure out where they fall on our detector. This is an example from a mercury lamp, a nicely known source, but you could use something else, and you can see there are multiple different lines. This here is a profile through the detector, and you can see these different peaks. We know what wavelengths those peaks are, and we obviously know what pixels they fell on. What we're going to look at is this one, zooming in right here, and you can see that the tails of it actually illuminate other pixels, so we can figure out where the center is. We do this with all of the spatial pixels for each of these lines, and then for all the lines, and then we check it with a different emission line source. This is showing what happens when you do it across the spatial extent of the system.

Given the way the sensor is built, we can assume a linear relationship between pixel and wavelength. That's not always true, and we have to verify it, but in this case we're assuming it. These are the four peaks we saw earlier, and so you can build up a relationship between a pixel number and the center wavelength of the light that falls on that pixel, which in this case is this relationship (a small sketch of this fit appears below). We check this multiple times through the year and verify that it doesn't change. And of course this was done with a mercury lamp; we want to come up with the same relationship using a different emission line source, and then check it with a monochromator or other input source.

So what we've talked about so far is where the center of each spectral pixel is. We also want to know the width of light that gets into those spectral pixels, and to do that we use a monochromator. The monochromator scans through a wavelength range, and we can watch individual pixels light up as a certain wavelength reaches that detector element. This is just showing raw data coming out of the imaging spectrometer, and if you squint really hard you can see kind of green light here, blue in there, and red here. I don't think it shows up well on here, but that light is there. That's showing that we're scanning through different wavelengths.
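To make the band-center fit above concrete, here's a minimal sketch of finding emission-line peak centers and fitting the linear pixel-to-wavelength relationship. The profile array, the rough peak locations, and the exact line values are illustrative assumptions, not measured values.

    import numpy as np

    def peak_centroid(profile, guess_pixel, half_window=4):
        # Center of mass of the counts around a peak; the tails that spill onto
        # neighboring pixels are what give a sub-pixel center estimate.
        lo, hi = guess_pixel - half_window, guess_pixel + half_window + 1
        pix = np.arange(lo, hi)
        counts = profile[lo:hi].astype(float)
        return np.sum(pix * counts) / np.sum(counts)

    hg_lines_nm = np.array([404.66, 435.83, 546.07, 576.96])  # illustrative mercury lines
    guess_pixels = [23, 30, 52, 59]                           # rough peak pixels (made up)
    centers = np.array([peak_centroid(profile, g) for g in guess_pixels])

    # Assume (and then verify) a linear dispersion across the detector:
    slope, intercept = np.polyfit(centers, hg_lines_nm, 1)
    def wavelength_of_pixel(p):
        return slope * p + intercept

Adding lines from a second emission source to the same fit is also what tells you whether the linear assumption is holding up.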
Back to the monochromator scan, this is just another slice from the data. In this case you can think of this axis as time, and this axis as the spectral channels inside the NIS, the NEON imaging spectrometer. As the monochromator scans from short wavelengths to long wavelengths, you slowly light up different spectral pixels inside the imaging spectrometer over time. So in this case we have this nice long line here that's been illuminated as we scan through the wavelengths. The interesting thing is that if you take a slice through time, you see there are multiple orders in here. That's not from the spectrometer; that's from the monochromator, which is putting out different wavelengths at the same time. In this case we're ignoring these two; this is the one we're interested in, so I'm going to zoom in on it. You may have missed it, but in the first image we're only illuminating five areas on the detector. This is looking at one of those areas over time, and you get this nice curve. It approximates the Gaussian spectral response we saw earlier. But there are subtleties, because there are differences across those five areas depending on how you align the input optics and how the instrument responds. In this case the input optics have slight variation, and hence you see slight variation in these spectral response functions. That's something we check when we do our testing.

The next step is to fit a spectral response function to this. So this is a Gaussian fit on top of the data we collected. You can see it's not always perfect, but you can work harder at getting it better if you really care about it. This gives us the ability to determine the full width at half maximum of these spectral response functions, so with this data we know the band center and the full width at half maximum of all of them. You can do this across all of the detector elements, which is what we need to do, and that's what we've done in this case (a rough sketch of that fit appears below).

So here's the full width at half maximum across the detector, and you can see there are some subtleties in here. The input light changes over time due to the throughput of the whole optical system, including the monochromator, the fiber optic, and the efficiency of our spectrometer, but you can normalize those changes out and correct for them. I'm going to zoom in on a couple of interesting areas, because we see some odd things here. Our detector has slight differences, and there are different filters inside the system, and those cause some of these subtleties. So if you're using some of our data, one of the outputs is where the different order sorting filter boundaries are, and I'll cover that in a minute, but that leads to some wavelengths that have a higher uncertainty in the data. I'll zoom through a couple of other features: this is another interesting area, here's another one, and then out here it's a little more subtle. This is in a water vapor absorption region, and you can see there are larger variations in it.

So on to characterizing our focal plane. This is our focal plane here, and all the different colors signify different areas. The whole detector is 480 pixels in the spectral dimension and 640 pixels in the spatial dimension, but of that we only use the area inside the blue for science data, which is about 425 pixels in the spectral dimension by roughly 598 to 600 in the spatial dimension. The area outside of that we use for calibration.
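Coming back to the spectral response fits for a moment: fitting a Gaussian to one pixel's response as the monochromator scans, and converting the fitted width to a full width at half maximum, looks roughly like the sketch below. The scan_wavelengths_nm and pixel_response arrays are hypothetical measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(wl, amplitude, center, sigma):
        return amplitude * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

    # Initial guesses: peak height, wavelength of the peak, and a few nm width.
    p0 = [pixel_response.max(), scan_wavelengths_nm[np.argmax(pixel_response)], 2.0]
    popt, _ = curve_fit(gaussian, scan_wavelengths_nm, pixel_response, p0=p0)
    amplitude, center_nm, sigma_nm = popt

    # For a Gaussian, FWHM = 2 * sqrt(2 * ln 2) * sigma (about 2.355 * sigma).
    fwhm_nm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_nm

Repeating that fit for every spectral pixel in each illuminated spatial area is what builds up the full-width-at-half-maximum map described above.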
Back on the focal plane layout, the green lines here are the order sorting filter boundaries that I talked about earlier. Those filters reject other orders of light inside the spectrometer.

Now for radiometric calibration: since we did the spectral calibration, we know what wavelengths are accepted by each pixel, but we also have to know how much light gets there. We tie the traceability back to NIST, the National Institute of Standards and Technology. If you were in the lab, we have a NIST bulb back there; I'm not sure whether you got to see it on the tour. NIST provides a nice bulb and says there's this much light coming out of this bulb at this distance at these wavelengths. That's the answer; we can't argue with that, that's NIST. So in this setup there's that bulb. We now have a newer test set that's more extensive than this, but the idea is the same: we try to block off all the extraneous light and get the light shining directly on the white panel behind that little black stray light shield there. Since we know how much light is coming from the bulb and we know the reflectance of the panel, we know how much light is reflected from the panel into an optical instrument like this. This is a relatively cheap spectroradiometer that's used in the field, or you can use a high-quality transfer radiometer like this instrument here, which was in the corner of the black lab if you saw it on the tour. This shows how much light comes out of the lamp, and this is the reflectance of the panel with its associated uncertainty, so you can transfer the uncertainty all the way through to the instrument you're trying to calibrate.

We then use that instrument to calibrate the integrating sphere on this test set, which we can scan through the field of view of the spectrometer. When you do that, this axis is time (so we've flipped time from the x to the y axis) and this axis is the spatial pixels of the NIS. You can see we scan that sphere through the field of view of the spectrometer so we illuminate all the different spatial pixels. With that test set you can also change the amount of light that gets into the sphere, so we can test at different light levels, because you want to make sure the calibration is accurate across the range of input light you'd see in the real world. There are a few subtleties: our sphere is not completely uniform, there's variation across it, but we've measured that so we can correct for it, and we're only looking at a very small area of it, so we know what we're seeing.

After you do the scan we saw previously, you can build up a response from the detector while scanning that sphere through the field of view, which looks something like this. And since we've measured the sphere with a transfer radiometer, we can then come up with calibration coefficients such as these: one curve is the known, NIST-traceable output of the sphere, and the other is what the spectrometer measured while viewing it, and the comparison of the two gives you the calibration coefficients shown here. But we have a whole detector, a full area, and this is just a vector, so we need to translate this vector across the full detector. That's typically done through the use of a flat field: this times this gives you a calibration coefficient for every pixel used in the instrument (a small sketch of that calculation appears below). That's all well and good, but it was all done in the lab.
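As a rough sketch of the arithmetic just described, under assumptions of my own (the variable names and the exact form of the flat-field step are illustrative, not the actual NEON processing code):

    import numpy as np

    # Hypothetical inputs: NIST-traceable radiance of the sphere per spectral
    # channel, the dark-subtracted sensor output (DN) for those same channels at
    # the reference spatial position, and a normalized detector-wide flat field.
    # sphere_radiance: shape (n_spectral,); measured_dn: shape (n_spectral,)
    # flat_field: shape (n_spectral, n_spatial)

    coeff_vector = sphere_radiance / measured_dn            # radiance per DN, one value per channel
    coeff_full = coeff_vector[:, np.newaxis] * flat_field   # spread the vector across the spatial axis

    # Applying the calibration to a dark-subtracted raw frame then gives radiance:
    # radiance = coeff_full * (raw_frame - dark_frame)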
So how do you know this actually works? That should be a question everybody asks. Well, we go out and do an independent calibration. It's typically called a vicarious calibration because you're doing it outside in the field. This is Railroad Valley Playa, out in the middle of Nevada; the closest gas station is about 100 miles away. But it's a nice uniform area that's been used for satellite calibration for a long time. This is a Landsat image, 7 or 8, collected out there. We can go out there, set out the tarps that we've seen before, and collect reflectance measurements. So there we are with our tarps, measuring reflectance. You can see a different white panel there; that's our standard in the field, so we have a known, traceable reflectance while we're doing this as well. And then you measure the reflectance of the ground.

If you look at this, the blue line here is the modeled radiance we ran through a radiative transfer code from the reflectance measurements made on the ground and the atmospheric characterization measurements made at the site. The red dots are the reported radiance from our instrument. It looks pretty good, but you always compute a percent difference to actually check, as sketched below. And in general, yes, things are pretty good, but you can see some variation, right? In all the absorption regions we have too much light here. That pointed to some subtleties in the calibration: there was some stray light on our detector at the time this vicarious calibration was done. This is why you have to do it in the real world, to find the other subtleties of the detector. We're going to zoom in on one little area in here, and if you zoom in close you can see that we got some specular reflection off the roof of something, some trailer there, and that put stray light across the whole field of view of the instrument. Well, that's not good if you're trying to do science from that, right? We needed to figure out how to correct it, and that's one of the things we were working on.

We've also done this vicarious calibration at other test sites, including Table Mountain, which is just north of Boulder here. In that case we had some tarps set out here by the road, and then we did a bunch of different flight lines over them as the solar illumination angle varied, because we were interested in what our limitations are on where the sun has to be for us to do good science. It turned out we ended up using this for other work as well. This just shows the flight pattern: we're flying north and south, and then east and west, each way. And oddly enough, we got different results depending on which way we flew, because of the stray light. We have a black tarp and a white tarp next to each other, so that's a high-contrast scene, and scattered light from the white tarp was showing up on the black tarp. So if you fly it this way and this way, you end up with a result like this, where the radiances bounce around depending on whether the spectrometer saw both tarps at once flying one way, or saw them one at a time flying the other way. We had to correct this, and we worked very hard at characterizing this phenomenon and then building out algorithm improvements until we got something like this. Now you see a very nice straight line. It's still changing over time, but that's because the solar illumination is changing; the sun is getting higher throughout the collection.
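The percent-difference check mentioned above is simple enough to show directly. A minimal sketch, assuming the modeled and reported radiances have already been resampled onto a common wavelength grid (the array names are hypothetical):

    import numpy as np

    # modeled_radiance: radiative-transfer prediction from ground reflectance and
    # atmospheric measurements; reported_radiance: the sensor's calibrated output.
    percent_diff = 100.0 * (reported_radiance - modeled_radiance) / modeled_radiance

    # Strong absorption regions (e.g. the deep water vapor bands) are usually
    # excluded before summarizing, since the modeled radiance there is tiny.
    good = modeled_radiance > 0.05 * np.nanmax(modeled_radiance)
    print("mean % difference over well-behaved bands:", np.nanmean(percent_diff[good]))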
So we went back into the lab to try to characterize this stray light, because it's hard to characterize from the flight data alone, though we do use some of it. This shows the same monochromator scan we had before; the only difference is that if you zoom in, you can see these other artifacts in here. We determined it was ghosting inside our spectrometer: light on one side of the detector gets reflected back to the grating and then comes down on the other side of the detector. But we were able to characterize this, and I'll show a few more details on the detector. In this case this is lab data, so not out in the field. The big bright sources are our input sources, and these little things are ghost images. This ghost image here comes from this input right here, but since we can see it, we can characterize it and correct for it. There are also some other subtleties: instead of a nice tight point spread function, we have a larger base, so there's some blurring happening inside the instrument, and we also characterize that in the lab. Here's the little ghost image I mentioned, and then this is a subtlety we call the sunset, which is similar to some of the other higher-order light getting onto the detector that isn't being blocked by the order sorting filters.

I'm going to show some results now after we've improved the algorithm, because we're trying to correct all of this. In this case we took a piece of black fabric and covered half of the input aperture of the spectrometer, so we're not letting any light in on that half; on the other half you can still see our nice RGB image. If you zoom in and enhance this a lot, you can see a ghosted image from the other side. Well, we have to fix that. So we built up correction factors. These are the observed spectra, and when you apply those calculations you can correct for the ghosting. I'll now show the same data after we've applied the correction: you can see we've knocked those artifacts down quite a bit. There's some small residual left, but we've done a decent job of correcting it (a rough sketch of this kind of correction appears below).

The other piece is the blur correction. Here's another half image where half of the aperture was covered; obviously it's unorthorectified, with no geolocation information. I'm going to toggle between these two images so you can see before and after our correction, and you can see how it cleans up the image. This is cleaning up some of that stray light that goes from a high-contrast target, from a bright area into a dark area of the image. And then this is, again, Table Mountain, where Tristan pointed out the bleeding from the white tarp onto the black tarp, and now I'll show it after the correction. Unfortunately I didn't get these quite lined up perfectly, but you can see that a lot of that bleeding from the white tarp into the black tarp goes away. That's good, but we also have to verify that the result is accurate, and we've done that: we've made sure we didn't cause harm by doing this correction. That's the first thing you always want to do; it's kind of like being a medical doctor, you don't want to cause harm, you want to improve the situation.

Since I'm running short on time, I've got just a couple of minutes left. We obviously implement all these corrections now, but this wasn't in place through all of NEON's construction.
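To give a feel for what a correction like the ghosting fix can look like, here's a minimal sketch under assumptions of my own: it treats the observed frame as the true signal plus a small, lab-characterized coupling from source pixels to their ghost locations, and removes that contribution iteratively. The ghost_matrix and frame inputs are hypothetical, and the actual NEON algorithm is more involved than this.

    import numpy as np

    def correct_ghost(frame, ghost_matrix, iterations=2):
        # frame: observed signal flattened to 1-D, shape (n_pixels,).
        # ghost_matrix: (n_pixels, n_pixels) lab-characterized coupling; entry [i, j]
        # is the fraction of pixel j's true signal that shows up as a ghost at pixel i.
        # Model: observed = true + ghost_matrix @ true, so iterate true = observed - G @ true.
        estimate = frame.astype(float).copy()
        for _ in range(iterations):
            estimate = frame - ghost_matrix @ estimate
        return estimate

A blur correction can be framed the same way, with the coupling describing the broad base of the point spread function rather than mirrored ghost locations.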
So earlier data sets didn't have all these corrections applied. You can talk to Tristan more about this later if you want, but we're working to reprocess all of that data; it's just going to happen after we catch up on some of the other data that's coming in. And then we want to do defined reprocessing: we collect a bunch of data, and then everything gets reprocessed up to that point so it's all in the same state, and that becomes a new processing version.

One of the things we care about, and I mentioned the onboard calibration data before, is making sure the instrument behaves the same during flight as it does in the lab. So we go back to this collection here, and we run an automatic script during the L0 data extraction that checks some of this data against the lab collect (a small sketch of this kind of check appears below). This is some of what we're looking at: the first two items are the environmental health of the sensor, and after that it's the actual output of the sensor, which we want to make sure compares well. This shows the output from that system; I don't expect you to read through all of it, but we're checking all of these variables back against the lab to make sure nothing has changed. In this case everything passes, but it doesn't always, and then we have something to go look at and dig into.

So I'm going to end there with the calibration material, but if you have any questions, I'm happy to try to answer them.
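As a footnote on the automated check described above: in spirit it's a comparison of a handful of flight-collected values against lab references with tolerances, along the lines of this sketch. The names, values, and tolerances here are made up for illustration and are not the actual NEON check list.

    checks = {
        # name: (flight value, lab reference, allowed fractional difference)
        "fpa_temperature_K":    (150.2, 150.0, 0.01),
        "shutter_dark_mean_dn": (101.5, 100.0, 0.05),
        "onboard_lamp_mean_dn": (5230.0, 5300.0, 0.03),
    }

    for name, (flight, lab, tol) in checks.items():
        frac = abs(flight - lab) / abs(lab)
        status = "PASS" if frac <= tol else "FAIL"
        print(f"{name}: {status} ({100 * frac:.2f}% difference)")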