We're going to talk a little bit about deep learning, so I'm going to assume you know a little already; this is a somewhat advanced topic. If you need more background, we run a free three-hour hands-on workshop. I'll show the QR code again at the end, or you can come find us at our booth.

At a high level, deep learning applied to geospatial data is very good at recognizing patterns: classifying point clouds, like we just saw from Poitly; identifying objects in imagery, finding each object's position and extracting its x, y, z value; or blurring those objects for privacy.

When we start combining imagery with point clouds, we need to be concerned about the alignment of those datasets. We all know what a panoramic image is. They're typically collected with, say, one of those new Mosaic cameras or some other integrated system from Trimble, and you typically collect point cloud data alongside them. Depending on the system, though, you can have alignment issues. Alignment is the bane of my existence helping people these days. Someone might have a homebrew sensor where things are off, and you can see here that the power pole in the imagery is offset from the point cloud. There are a number of reasons why that happens: calibration issues, timing, a wobble, or stitching issues in your panoramic. If you're interested, we do have a tool to correct that after the fact, but I typically recommend users find the root cause and fix it up front. The point is, you need properly aligned data before you can even start combining these sources into good datasets.

Now, quickly, deep learning at a high level. On the left here we have what's known as the training stage, which the last presentation hinted at. In the training stage, you give the computer examples of what you want it to learn, things that you like. That could be a point cloud you classified manually (or had someone else classify), or example polygons around objects you want to find in imagery. From those examples, it creates what's known as a model, and that model is essentially a computer program. On the right-hand side, you run that model on the data you collect tomorrow: it takes an unclassified point cloud and classifies it, or it takes new imagery and finds more of those objects.
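To make those two stages concrete, here is a toy sketch in Python. This is not our platform's code, just the train-then-run idea; the folder layout and class names are hypothetical, and it assumes PyTorch and torchvision are installed.

```python
# Toy sketch of the two stages: train on labeled examples, then run the
# saved model on tomorrow's data. The "labels/train" folder is hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# --- Training stage: show the computer examples of what you like ---
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("labels/train", transform=tfm)  # example crops per class
loader = DataLoader(train_set, batch_size=8, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")   # start from a pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, targets in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        opt.step()

torch.save(model.state_dict(), "my_model.pt")      # "the model": a program you can rerun

# --- Inference stage: run that model on data collected tomorrow ---
model.eval()
with torch.no_grad():
    new_image, _ = train_set[0]                    # stand-in for a new, unlabeled image
    pred = model(new_image.unsqueeze(0)).argmax(dim=1)
    print(train_set.classes[pred.item()])
```

The saved .pt file plays the role of the "model" described above: a reusable artifact you point at new data.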
When you get into classifying LiDAR with deep learning, in today's world there are roughly two camps. There's the cloud-based approach, where you just upload your data; there are some really good pre-trained models out there, and they can classify your data quickly because they take advantage of the cloud. However, there are security concerns: you don't know where that data is going, what they're doing with it, how they're retraining, or whether they're using your data to update their models. That's something to consider.

Then there's the local approach. At Solft we have a hybrid, but mostly on the local side, where you have full control of any models you run and create. You own those models; nobody else sees them, none of your competitors see them. There is some initial investment of time to create them, but because of that it's highly customizable: you can train on any object, and if you have special classes that aren't part of the common pre-trained models, you can teach the computer to recognize them through an easy-to-use interface.

On the imagery side, on the left here we have an aerial tile, and on the right a clip of a panoramic image. Computer vision is very powerful, and on imagery it's largely a solved problem: street furniture, signs, poles, and the like are easy to find. But again, maybe you're in the utility or telecommunications industry, and there are special objects in your data that those pre-trained models don't understand, so there's custom data you want to find.

To look at an example: my colleague actually just gave a quick talk about this in the other hall, so I'll give the high-level view. In an airport environment, a partner of ours in the United States captured 2,000 images of an airport terminal using an Applanix TIMMS cart. From that imagery, they wanted to find badge readers, those little security keypads you use to get through a secure door. They took about 20 or 30 example images containing badge readers and sent them for training. It's a little hard to see here, but on the bottom right they defined a class, and you can see the badge reader underneath that departures sign, where they drew a polygon around the keypad as an example. They did this 20 or 30 times, like I said, and submitted it for training. Our platform shows you graphs and statistics using TensorBoard, the open-source technology, so as the model learns you can watch things like the accuracy of how well it understands the training examples you gave it. If you're into the data science side, you can measure that; it's always good to learn a little.

They then ran the model on all 2,000 images and found 187 instances of those badge readers. The same badge reader was found in multiple images from different perspectives, which is expected, and it's good to see it finding things. Using our platform, they extracted the positions of those badge readers, so the 187 labels were narrowed down to 20 unique badge readers, with locations shown here. You can export that as a CSV or whatever you want to do with it.
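Here is a hedged sketch of that narrowing step: collapsing repeated detections of the same physical object, seen from many panoramas, into unique assets. DBSCAN is my stand-in for whatever grouping the platform actually uses, and detections.csv with x, y, z columns in a projected coordinate system is hypothetical.

```python
# Collapse repeated detections (e.g. 187 badge-reader labels) into unique
# assets by clustering their extracted positions. Assumes scikit-learn.
import csv
import numpy as np
from sklearn.cluster import DBSCAN

with open("detections.csv") as f:                   # hypothetical input file
    rows = list(csv.DictReader(f))
xyz = np.array([[float(r["x"]), float(r["y"]), float(r["z"])] for r in rows])

# Detections within 0.5 m of each other are assumed to be the same object.
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(xyz)

with open("unique_assets.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["asset_id", "x", "y", "z", "n_views"])
    for asset_id in sorted(set(labels) - {-1}):     # -1 is DBSCAN's noise label
        pts = xyz[labels == asset_id]
        cx, cy, cz = pts.mean(axis=0)               # centroid across all views
        writer.writerow([asset_id, cx, cy, cz, len(pts)])
```

The centroid across views is one simple way to report a single location per asset; a real pipeline would weight by detection confidence or ray-intersection geometry.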
Now let's get into combining these sources and improving the results. Consider a use case around an infrastructure asset survey: we're examining light poles, traffic lights, and a few other things. We have panoramic images with an aligned point cloud, and our goal is to segment and classify the point cloud using the objects we find in the imagery.

When you first look at training a custom model, you have to start with: what does a light pole look like? There are many types of light poles in the world, so you have to account for that in your training data and label examples of all of them. Include variety, and make sure your labeling is complete: if you give it an image that contains light poles, label every light pole in it.

This video shows a model we previously trained, just like the badge reader example, now running on the data. What we see at this intersection is all of the signs, the light poles, and I think some security or traffic cameras, and we're creating some output. There are a few parameters for fine-tuning how well it segments, but the key step is that we assign an LAS classification to each object, so we say something like: a light pole is class 6, a traffic light is some other class. (I think my video stopped here. Oops.) What it ends up doing is segmenting out the point cloud: at the top you can see the light pole in yellow, the traffic lights in red, and the signs in purple. It assigns those classification numbers to the points. You can use this in conjunction with a classifier that works directly on the point cloud with regular deep learning, and then, for that one little object it isn't finding properly, apply the knowledge from the panoramic image to the point cloud.

How does it work? There's a lot of math going on, but essentially, as I said, we create the custom deep learning model, do the label analysis, and extract the positions. From there we can start to segment out the points. The math involves converting our labels into polar coordinates and looking at them from different angles, but once we've segmented out the points in the point cloud, we can classify them. Like I said, this is really good for custom objects that you can't normally find with point cloud classification alone.

What's great about this is that you can now do some advanced analytics. You've segmented out the points of that light pole, so you can measure its height and diameter, and you can look at the imagery to assess condition, whether there's damage or corrosion or whatever it might be. From there you can create reports or maintenance workflows to send crews out to repair or replace things.

Some of the current research we're working on is in the same vein: taking a panoramic image and doing full semantic and panoptic segmentation of it. That means splitting the image up and finding the buildings, the road surface, the vegetation. And just as you would colorize a point cloud, we can now classify a point cloud off the segmentation map it creates. It works well for indoor and outdoor data. There are still a few things to figure out; it's a work in progress. There are rules about different perspectives, you might have overlapping collisions, and you have to account for those and, obviously, make it fast.
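Here is a minimal sketch of the projection step behind both of those workflows: convert each LiDAR point into polar (spherical) coordinates around the camera, look it up in an equirectangular label or segmentation mask, and write the class code back to the LAS file. It is heavily simplified (camera at the origin, no rotation, no occlusion handling), and mask.png and the file names are hypothetical; it assumes laspy, NumPy, and Pillow.

```python
# Classify LAS points from an equirectangular segmentation mask by projecting
# each point into panorama pixel coordinates. Occlusion rules (e.g. nearest
# point wins when objects overlap) are deliberately omitted in this sketch.
import numpy as np
import laspy
from PIL import Image

las = laspy.read("scan.las")
mask = np.array(Image.open("mask.png"))        # H x W, one class id per pixel
h, w = mask.shape

# Points relative to the panorama's camera position (assumed at the origin).
x = np.asarray(las.x)
y = np.asarray(las.y)
z = np.asarray(las.z)
rng = np.sqrt(x**2 + y**2 + z**2)

# Spherical angles -> equirectangular pixel coordinates.
yaw = np.arctan2(y, x)                                         # -pi..pi
pitch = np.arcsin(np.clip(z / np.maximum(rng, 1e-9), -1, 1))   # -pi/2..pi/2
u = ((yaw + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
v = ((np.pi / 2 - pitch) / np.pi * (h - 1)).astype(int)

# Copy the mask's class id onto each point, e.g. light pole -> LAS class 6.
las.classification = mask[v, u].astype(np.uint8)
las.write("scan_classified.las")
```

A real implementation would transform points by the camera pose for each panorama and resolve the overlapping-perspective collisions mentioned above before writing classes.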
So, in summary: deep learning on imagery is a proven technology, and it works very well. Deep learning on point clouds is still very much in its infancy; there are a lot of cool companies doing cool work there, but it's an emerging technology, and you still have to fight with it a little to get the objects you want classified. If you have multiple types of data, I recommend combining them to get better results.

As promised, here are the deep learning workshops. If you want to scan that QR code really quick, it's a self-guided workshop: you can download it, we give you sample data, and you can try it out with our software on your own, or use it on your own data if you want. Additionally, we're going to run some in-person workshops, so definitely fill out the form on that page and you'll get subscribed. If you need to find us, we're over in Hall 27, booth I-2702. We also have a local partner here in Germany, Altera, that works through the Trimble office, so you can go find them as well. Or just go to our website, sol3d.com. Thank you very much.