Hello, everyone. As she said, my name is Amanda Lind. I work for Blue Marble Geographics, a GIS software company based in Maine in the United States. Global Mapper is a very broad piece of software, but something we're really excited about is its photogrammetric processing tool, Pixels to Points, which takes images collected by drone and creates an orthoimage, a 3D model, and a point cloud from them. The benefit of doing this in a GIS package is that from those outputs you can then generate your DTM, your contours, and your extracted building layers, all within the same software.

On the agenda today: the data requirements for creating these outputs, how the data should be prepared, and the process of creating high-density point clouds using Structure from Motion, which is what Pixels to Points is based on. Once we've created the point cloud, we'll thin it, then process it, identifying points for classification. We can classify building features, vegetation features, just about anything. We'll rectify the cloud to increase its vertical accuracy, and then extract the building features that were collected.

The data we're using for this demo was sent in by a user, one of our fans, who was looking at a high school in Maine. To evaluate it today, I'm going to compare this collected data against something a little more accurate: USGS 3DEP lidar. That data is highly accurate but has a lower resolution; it isn't as detailed as we want it to be. The benefit of creating a photogrammetric point cloud of a smaller area is that it has a denser resolution, but it isn't necessarily going to be as accurate.
So we're going to use that free data to increase the accuracy, which can be more affordable for users who don't have a high-resolution GPS unit. Here is a snapshot of what the data looks like. When it's loaded into Global Mapper, it appears as individual camera points; each of those icons is an image. That's because when the picture was taken, the drone recorded the latitude, longitude, and elevation of just the center of the image, so it shows as a single point. Before processing, Global Mapper doesn't know the image's orientation or footprint. What this screenshot shows is that you can assess the images before you process them: click on an image point to preview it. Here's our school building, and you can browse through the data this way as well.

The first requirement for the imagery is that what you're studying has to be static. If you're looking at a highway and taking many pictures as you go down it, you're more likely to get good data from the pavement and the other stationary objects, whereas the cars will just be blurs of noise on top that need to be weeded out later. This comes from how the software works: it examines all of the individual images to identify stationary features in each one, and where the images overlap, it identifies the same feature in each image and uses the change in perspective to reconstruct the 3D geometry. You can think of how your eyes work: you have two eyes, and that's how you have depth perception. That's what photogrammetry and Structure from Motion are doing with the images, using multiple perspectives to create that depth.
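To make the "two eyes" analogy concrete (this sketch is my own illustration, not part of the talk or of Pixels to Points), here is the simplest form of depth from two overlapping views: for two camera positions a known distance apart, the amount a feature shifts between the images tells you how far away it is. All numbers below are made-up example values.

```python
# Minimal sketch of depth from parallax, the core idea behind
# Structure from Motion: a feature that barely shifts between two
# overlapping images is far away; one that shifts a lot is close.
# Simplified pinhole model; values are hypothetical.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen in two overlapping images (rectified pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("the feature must shift between the two images")
    return focal_px * baseline_m / disparity_px

# Assumed 3600 px focal length, 30 m between exposures:
far = depth_from_disparity(3600, 30.0, 900.0)    # small shift -> far point
near = depth_from_disparity(3600, 30.0, 2700.0)  # large shift -> near point
```

Real SfM solves for the camera poses and thousands of such points simultaneously, but each reconstructed point ultimately rests on this relationship between overlap, perspective change, and depth.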
On top of that, the software has the latitude, longitude, and elevation data from the drone; that's how it creates these 3D outputs and how it knows where the overlap lies. When you're collecting these images, it's always best to have as little noise as possible. Noise is often generated when you have low visibility; of course, if something can't be seen in the images, it's not going to be generated, for example anything underneath trees. High wind is another problem, especially if you're generating data over trees and vegetated areas: if a tree is in a slightly different position in each image, that in itself will create noise and inaccuracy and will affect your final result.

The second part of data preparation is evaluating the image overlap: make sure there's about 60% front-to-back and 40% side-to-side overlap between the images, because the more often an object appears across images, the more accurately and reliably it will be reconstructed in the final output. Again, you want to make sure the images are clear; high-resolution images are more likely to create high-resolution data. You also want to remove erroneous images. Those can be blurry images, of course, but also images where a cloud has passed over and created a slight change in color. While Pixels to Points can usually account for that, in some cases that change of color almost registers the object as a different object: a building in a different color might register as a slightly different building. So if you notice something odd in your output, you can go through and remove some of those images, and we have color-mapping options for that as well. This is what the interface looks like; these are the more advanced settings.
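The 60%/40% overlap guideline translates directly into flight-plan spacing. As a quick sketch (my own arithmetic, not a Global Mapper feature; the footprint sizes are hypothetical), the distance between exposures and between flight lines is just the image footprint scaled by one minus the overlap fraction:

```python
# Turning the 60% front / 40% side overlap rule into flight-plan numbers.
# Footprint dimensions depend on camera and altitude; these are examples.

def photo_spacing(footprint_along_m: float, front_overlap: float) -> float:
    """Distance between exposures for a given front (along-track) overlap."""
    return footprint_along_m * (1.0 - front_overlap)

def line_spacing(footprint_across_m: float, side_overlap: float) -> float:
    """Distance between flight lines for a given side (across-track) overlap."""
    return footprint_across_m * (1.0 - side_overlap)

# e.g. a 100 m x 75 m image footprint with 60% front / 40% side overlap:
print(photo_spacing(100.0, 0.60))  # trigger an exposure every 40 m along the line
print(line_spacing(75.0, 0.40))    # space flight lines 45 m apart
```

Most flight-planning apps do this for you, but it's useful for sanity-checking a plan: tighter spacing than this gives you more overlap, and therefore more images per object and a better reconstruction.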
In the newer version of the software, 25.0, released a couple of weeks ago, we've also implemented a wizard, which walks you through this interface step by step, one window at a time. It asks you to import the images, any extra EXIF data, and any ground control points, breaking the process down to make it more accessible, especially for new users. These options let you choose your output, whether that's a point cloud, an orthoimage, or a 3D mesh, and all of these can be edited further in Global Mapper as well. There's an option to reduce image size, bringing the resolution of the images down so they process more quickly; if you're working on a desktop computer that isn't especially powerful, that's a really good option to use. Version 25 increased our processing speed a lot, so you probably won't have to use it as much as you did in earlier versions of the software.

Here's what the outputs look like from this data. Something important to know when looking at it is that all of these images were collected from a nadir (straight-down) perspective only, and that lets us highlight a limitation: under the edge of the building here, under the eaves, there are no points, no data, because that area wasn't present in enough images to be generated. Sometimes users send in data generated from Global Mapper and from another software, and when we load them together we can see holes in Global Mapper's output in places the other software has populated with points. What I've noticed is that we have a lower error threshold.
I'm not speaking for all of the other software, of course, just a couple I've seen. Global Mapper leaves those holes in there because they can be filled later: with a kriging tool if you want more points, or, if you're creating a DTM, with options for interpolating between grid cells, filling the gaps with a more educated guess based on surrounding points rather than taking the error from parts of the images that aren't necessarily accurate.

Photogrammetric point clouds are especially dense, and that density is useful for resolution, but it also costs a lot in processing time. Something Global Mapper can do is 3D thinning: instead of thinning uniformly across the point cloud, it thins based on the rate of change in each area. If you have flat ground, as we have here with a building in the middle, it thins far more points out of the flat ground because they're redundant; there's little change in elevation, and we know the ground is there. On the building, you can see it maintains the higher-resolution point cloud, because there's more change there and those points are more important. It's a way to make your data process more quickly through classification and DTM generation without having to sacrifice accuracy. This is what it looks like shaded by color rather than by elevation as before: it's a lot more sparse over here in the parking lot, where many of the points are gone, but the buildings are still basically the same. It maintains accuracy while thinning your point cloud.

Global Mapper has a wide variety of classification options. We have a few tools for built-in automatic classification, where you can classify ground, noise points, buildings, and different layers of vegetation: high, medium, and low.
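The adaptive-thinning idea above can be sketched in a few lines. This is my own simplification, not Global Mapper's actual algorithm: bin the points into grid cells, keep everything where the local relief is high (building edges, walls), and keep only a sparse sample where the cell is flat and the points are redundant. Cell size and thresholds are hypothetical.

```python
# Sketch of rate-of-change ("3D") thinning: decimate flat areas heavily,
# preserve areas with high local relief. Simplified illustration only.
from collections import defaultdict

def thin(points, cell=5.0, relief_threshold=0.5, keep_every=10):
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append((x, y, z))
    out = []
    for pts in cells.values():
        zs = [p[2] for p in pts]
        if max(zs) - min(zs) > relief_threshold:
            out.extend(pts)                # high relief: keep every point
        else:
            out.extend(pts[::keep_every])  # flat: keep a sparse sample
    return out

# Flat "parking lot" points are decimated; the "building" cell survives intact.
ground = [(x * 0.1, 0.0, 100.0) for x in range(50)]    # 50 flat points, one cell
building = [(60.0, 0.0, 100.0 + i) for i in range(5)]  # 4 m of relief
print(len(thin(ground + building)))
```

The real tool uses more sophisticated measures of local change, but the effect is the same one visible in the demo: the parking lot empties out while the building keeps its density.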
We also have a tool called segmentation, which lets you get your hands into the classification a little more by breaking a cloud apart based on its structure. We as humans can look at this and say, that's obviously ground, that's obviously a building, but a computer has to look at the attributes of the points as well as their structure. The built-in classification options, like ground and building, can classify the points because the software already knows what those structures look like, but segmentation lets you target whatever structure you want. Want to classify cars? Use segmentation on curvature, where the surface breaks from the ground, to grab those points and separate them out. Want vegetation? Use the chaos-based options, because vegetation points are scattered in position, orientation, and color. All of those options are available in Global Mapper.

Here we can see the ground points have been classified. There's a little bit left unclassified along the edge there, and that's an area of higher slope, so you would adjust the slope settings within the tool to expect and classify that steeper terrain. Oh, I'm out of time. My apologies; I only recently learned I was giving this demonstration. No time for questions, then. My name is Amanda Lind, and I'm with Blue Marble Geographics in the next hall over. Come find us if you have any questions.
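To show why a slope setting matters for ground classification, here is a toy version of the idea (my own sketch, not Global Mapper's implementation): accept a point as ground only if the slope from the lowest point in its neighborhood up to it stays under a maximum angle. Steep but real terrain gets rejected until you raise that angle, which is exactly the adjustment described above. Radius and angle values are hypothetical.

```python
# Sketch of slope-driven ground classification: points are ground if the
# rise over run from the lowest nearby point stays under max_slope_deg.
# Brute-force neighborhood search; illustration only.
import math

def classify_ground(points, radius=5.0, max_slope_deg=15.0):
    ground = []
    for x, y, z in points:
        nbrs = [(px, py, pz) for px, py, pz in points
                if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2]
        zmin = min(pz for _, _, pz in nbrs)
        lx, ly, _ = next(p for p in nbrs if p[2] == zmin)
        run = math.hypot(lx - x, ly - y) or 1e-9  # avoid divide-by-zero on self
        slope = math.degrees(math.atan2(z - zmin, run))
        if slope <= max_slope_deg:
            ground.append((x, y, z))
    return ground

# A gentle 10% grade passes at 15 degrees; a 1.4 m step over ~4 m of run does not.
gentle = [(float(i), 0.0, 0.1 * i) for i in range(6)]
wall = gentle + [(5.0, 1.0, 1.5)]
print(len(classify_ground(gentle)), len(classify_ground(wall)))
```

With `max_slope_deg` raised to, say, 25, the steep point would be accepted too; that is the trade-off you tune when a real hillside is being left unclassified along the edges.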