All right, I'm going to speak about the digital camera data. Here's a sample image we took over Boulder about a year ago, and you are right there: that's NEON headquarters, and the classroom. The image has been orthorectified; the ground distance is about 800 by 600 meters, and this is at 10 centimeter resolution.

The camera back is made by Phase One, and you can see the number of pixels. At best, it can take one frame every two seconds. It's got a forward motion compensator, so that as the plane is flying along at 100 knots, it minimizes the blurring on the ground. The output format of this camera is something called IIQ. It's proprietary to Phase One, meaning that in order to deal with the really raw images, you need to use their software, called Capture One, which is free to use for this particular image format. The nominal resolution when you're flying at 1,000 meters above ground level is 8.5 centimeters for the raw data.

So what do you use the camera images for? Well, they're a complement to the spectral data. On the left here you see spectrometer data: we selected three bands, 52, 34, and 18, which mimic RGB. It's difficult to see what you're looking at on the ground. Over on the right is the coincident camera image, and you can see that there are roads, trees, shadows. This, by the way, is San Joaquin, and right here is San Joaquin's central tower. You notice that it looks like it's leaning over at 45 degrees; obviously, it's not. That's an artifact of being 1,000 meters above ground with the tower toward the edge of the image, so it's distorted.

In operations, the camera is coincident with the spectrometer. We have about 50% overlap along track and 33% cross track. Normally, the frame rate is about one image every four seconds, and at each site we may collect between 2,000 and 11,000 images over several days.

There are three major steps to processing. The first is to adjust the color balance and exposure. Since the images may be taken on different days, and on the same day over a period of several hours where conditions change, you may have to adjust the color balance and exposure separately over the time period. The second step is orthorectification: you have to remap the image from the camera frame to a regular fixed grid on the ground, the same grid that the spectrometer is projected on, except at 10 centimeter resolution rather than the spectrometer's 1 meter resolution. And finally, mosaicking. Like I said, you could have 11,000 images over one site; mosaicking takes all those images, overlaps them, and creates one single image. Then, because that single image is so large, you subdivide the mosaic into separate tiles, which are one kilometer on a side.

All right, pre-processing. You can see a raw image on the left and a processed image on the right. We try to make it appear as close as possible to what you would see if you were actually up there looking down. As I said before, you may have to adjust these separately; it can be a very tedious process.

Orthorectification. We remap from the camera frame down to a regular UTM grid on the ground, which is shown in the picture. To do this, we require several other pieces of data. One is called the smoothed best estimate of trajectory, or SBET. We get that from the lidar, and it tells you exactly where the plane is at any second and exactly how it's oriented: the roll, pitch, and yaw.
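To make that concrete, here's a minimal sketch of the trajectory lookup that orthorectification depends on. It assumes the SBET has already been decoded into a plain time/position/attitude table; the layout, the numbers, and the function name are illustrative, not the actual SBET format:

```python
import numpy as np

# Hypothetical SBET-like table: one row per epoch with
# [gps_time_s, lat_deg, lon_deg, alt_m, roll_deg, pitch_deg, yaw_deg].
# Real SBET files are binary and sampled much faster; this just shows
# the lookup: given a camera frame's exposure time, recover the pose.
trajectory = np.array([
    [1000.0, 40.0151, -105.2260, 2650.0, 0.5, 2.1, 178.0],
    [1000.5, 40.0150, -105.2260, 2650.3, 0.6, 2.0, 178.2],
    [1001.0, 40.0149, -105.2259, 2650.5, 0.4, 2.2, 178.1],
])

def pose_at(t: float, traj: np.ndarray) -> dict:
    """Linearly interpolate position and attitude at camera time t.
    (Naive on purpose: real attitude interpolation must also handle
    the yaw wrap at 0/360 degrees.)"""
    names = ["lat", "lon", "alt", "roll", "pitch", "yaw"]
    return {n: float(np.interp(t, traj[:, 0], traj[:, i + 1]))
            for i, n in enumerate(names)}

# Pose for a frame exposed at GPS time 1000.25 s:
print(pose_at(1000.25, trajectory))
```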
From that trajectory, we can trace any line of sight from the camera down to the ground. Also from the lidar, we get a digital elevation model, a DEM, and that's the grid you see there. So we shoot a ray down from each camera pixel, see where it intersects the DEM, and then project that point onto the UTM grid. We also need a camera model, which describes the distortions in the camera image due to the lens, and also the offset of the camera itself from the lidar boresight.

Here's an example of an image before orthorectification; this is a raw image, but after the pre-processing. And over here it's been orthorectified. The plane wasn't flying exactly north-south, and you can see the curvature on the sides of the image; that's due to the uneven ground surface.

Now, the orthorectification process introduces other distortions, because of the mismatch between the camera resolution, which is a tenth of a meter, and the lidar DEM resolution, which is 1 meter. So straight lines can come out distorted. Here's an image of an intersection, and you can see where these straight lines are distorted. That's because of trees and poles nearby: the DEM, again, has 1 meter resolution, so it may trace a ray down to the top of this tree when the pixel really belongs here. Over here is the very edge of an image where you're seeing the tree canopy, and you see a lot of swirls in the canopy; again, you're seeing partly to the ground and partly to the top of the trees. So you get these artifacts that look kind of weird, especially when you compare them to, say, a satellite image, where the line of sight is very nearly vertical and you don't see this kind of distortion.

Finally, mosaicking. A single survey will produce between 2,000 and 11,000 images, and mosaicking combines all of these into one image. From all the overlapping images, you select the pixel with the smallest zenith angle, the most vertical view, to minimize the distortion we talked about earlier (there's a code sketch of this selection rule at the end of this part). Then you tile the result into a set of images which are one kilometer on a side; you can end up with between 100 and 450 of these tiles. Now, one ongoing issue is how to blend images from different days, or from different times of day with different solar zenith angles, into something that looks uniform. You'll see that in a bit.

All right, here's an example of a measurement we made earlier this year. Here is the full mosaic, including all the different images, and over here is one tile from that mosaic. You can see these seams along here; that's because you're combining different images with slightly different lighting conditions, and it shows up as these boundaries.

Now, how do you deal with 11,000 images, or 450 tiles? To handle all of these, we have created a KMZ file which you can load into Google Earth. Here is Google Earth centered over the NEON hangar at Boulder Airport, about two miles north of here. We created these KMZ files, and here's an example from San Joaquin this year. If you load it into Google Earth and double-click on it, here's what you see initially. This outermost purple boundary marks the limits of the digital elevation model that we use for processing. But let me turn that off; you'll see this other purple boundary, which marks the limits of the actual DEM from the lidar. We've included all this other area, but that comes from a USGS DEM at 8-meter resolution; we just use it for filler.
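As promised, here's a minimal sketch of the smallest-zenith-angle mosaic rule. It assumes the overlapping orthoimages and their per-pixel view zenith angles have already been resampled onto a common grid and stacked into arrays; the array layout and the function name are assumptions for illustration:

```python
import numpy as np

def mosaic_min_zenith(images: np.ndarray, zenith: np.ndarray) -> np.ndarray:
    """images: (n_images, rows, cols, 3) candidate RGB pixels;
    zenith: (n_images, rows, cols) view zenith angles in degrees,
    NaN where an image does not cover a ground cell."""
    z = np.where(np.isnan(zenith), np.inf, zenith)  # uncovered -> never chosen
    best = np.argmin(z, axis=0)                     # per-cell winning image index
    rows, cols = np.indices(best.shape)
    out = images[best, rows, cols]                  # gather the winning pixels
    out[np.isinf(z.min(axis=0))] = 0                # cells no image covers -> black
    return out

# Tiny demo with three overlapping 4x4 orthoimages:
rng = np.random.default_rng(0)
imgs = rng.random((3, 4, 4, 3))
zen = rng.random((3, 4, 4)) * 30.0
zen[0, :2, :] = np.nan                    # image 0 misses the top two rows
print(mosaic_min_zenith(imgs, zen).shape)  # (4, 4, 3)
```

Picking the most nadir-looking candidate per cell is what suppresses the leaning-tower effect from the image edges; the seams you still see come from the lighting differences, which this rule does nothing about.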
That USGS filler is also necessary for some of the analysis. If we blow it up, you'll see that there are some interior portions outlined in purple; that's where there's no good lidar data, so those have also been filled with the USGS DEM. Typically you'll see that over water bodies, where you don't get a decent lidar return; but water bodies are flat, so there are no features there to see. You also see here the location of the central tower.

Then if you click on "Mosaic tiles", it shows you the location of each tile. So suppose you're interested in the central site; load up there. If you click on a tile, it gives you the name of the file that corresponds to that tile. The name consists of the year; the site, San Joaquin Experimental Range; the 2, for the second visit we have ever made to San Joaquin (we went there once before, about a year earlier); and these two numbers, which are the Universal Transverse Mercator coordinates, in meters, of the lower left corner of the image. That's just the way they're named; it's for something else. But if you're interested in this area, this tells you how to find the corresponding tile.

You can also go over to this button, which shows you the location of each individual image. So this one, number 0499, is there; if you click on it, it gives you the file name of that image plus the exact location, the altitude, and the heading of the plane when it took the image.

And then, finally, there is a 5 meter resolution browse image. This is taken from the mosaic, but at reduced resolution, because the full mosaic is just too big; this browse image by itself is already 10 megabytes. It gives you an overview and perspective of what you're looking at, and it includes things like cloud shadows. You can also see those boundaries between individual images.

And one more feature of Google Earth. Now, this is Google Earth Pro, which is now free for anybody to download and use. Google isn't going to support it anymore, but you can still get it and use it. One feature of Google Earth Pro is that you can pull the tiles themselves into it. Let's go to this tile here; it's tile 3257411. We go over here to the actual images and find that one. There it is, 257411. And we drag that into Google Earth. Now, the problem is that this image is too large to fit into Google Earth, so you've got two choices: you can either look at the whole image scaled down in resolution, or you can crop it and get full resolution over a limited area. So let's crop it, center it right on the tower, blow it up, and look at the image in detail. And again, here's that tower we saw at the very beginning, plus the roads, trees, and so forth. This is just an aid to locating things of interest. And again, you can do the same thing with scaling. There's also "Create Super Overlay", but don't push that button; that's a disaster. That shows you the full image, again at reduced resolution.
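One closing aside: since the tile names carry the year, the site code, the visit number, and the UTM lower-left corner, they're easy to pick apart programmatically. The underscore-separated layout below is only a guess for illustration (check the actual product documentation for the real convention), and the example name is made up:

```python
import re

# Assumed (hypothetical) tile-name layout: YEAR_SITE_VISIT_EASTING_NORTHING,
# where easting/northing are the UTM coordinates (m) of the lower-left corner.
TILE_RE = re.compile(
    r"(?P<year>\d{4})_(?P<site>[A-Z]+)_(?P<visit>\d+)_"
    r"(?P<easting>\d+)_(?P<northing>\d+)"
)

def parse_tile_name(name: str) -> dict:
    m = TILE_RE.search(name)
    if m is None:
        raise ValueError(f"unrecognized tile name: {name}")
    fields = m.groupdict()
    # Keep the site code as a string; convert everything else to int.
    return {k: (v if k == "site" else int(v)) for k, v in fields.items()}

print(parse_tile_name("2017_SJER_2_257000_4110000_image.tif"))
# {'year': 2017, 'site': 'SJER', 'visit': 2, 'easting': 257000, 'northing': 4110000}
```

Given the lower-left corner and the 1 kilometer tile size, you can also work out which tile covers any UTM point by flooring its coordinates to the nearest kilometer.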