Hello everyone, thank you for being here. As just said, my name is Sam Flick. I work for a company called mdGroup; I'm the sales manager for Europe, with mdGroup for eight years now, and I take care of sales in our group for Microdrones, GeoCue, and LP360. In the next few minutes I would like to give a quick introduction to hardware, the combination of hardware and software, and what it means to have an integrated LiDAR solution.

I don't know how it is on your side; I have been attending Intergeo since 2016, some of you maybe longer, some less. In 2016 we had some drones, I would say. Now it's totally different: we see a lot of drones, photogrammetry sensors, LiDAR sensors, complete solutions. And I ask myself sometimes: what is the real value of a LiDAR solution or of a photogrammetry solution? Honestly speaking, when I discuss with our clients, or look at the requests we get, it's all about the data you get at the end and what the task is. If you have a project that is completely flat, an easy surface, you can do it easily with photogrammetry. On the other side, you have projects with vegetation, or very detailed areas with walls and the like, where it's good to have a LiDAR sensor as well, because it is another method of capturing data. But it's not just buying a sensor, putting it on an M300 or whatever drone, flying, and that's it. There are more things to consider, and that is what I would like to show. As a small spoiler: stop by our booth if you want a deeper discussion. We have our technicians there for software, hardware, or project questions, and we are happy to help.

So, what is the benefit of LiDAR? The next slide will be the benefits of photogrammetry. We have put some points together and tried to be very objective. First, you get detailed 3D mapping: LiDAR is based on active measurements.
It's not calculated like photogrammetry. If you see the point, you have it: it's measured, and measured by light. The LiDAR sensor sends out a pulse and measures the way back, and from that you know how far the target is from the sensor. So you have a direct measurement.

Then you are independent of lighting. Most of the time with a photogrammetry drone or sensor you depend on the lighting situation: on a very sunny day you get big shadows, and that can create problems at the end, because in the shadows it's dark and you are missing information. With LiDAR it's the opposite: you can even fly in dark areas and still get good data.

Then there is vegetation penetration. Since we don't have to calculate the surface out of a lot of pictures, LiDAR helps, as we also saw before, to look at the ground. It's not shooting through leaves or trees; wherever the beam finds a free spot, it goes through, and you have your direct measurement. Afterwards, with good software, you can classify it and extract the ground or whatever is needed.

Precise distance measurement I spoke about already. And you get quick data acquisition: with LiDAR and a good IMU on the system you can fly, depending on the project and how much data you need, at 20% or 30% overlap. Photogrammetry most of the time requires 60% and up to 80% overlap. So the flight time is shorter, and that saves time in the field and also in processing. By the way, processing is missing on this slide, but processing a LiDAR point cloud, with colorization for example, takes us around 20 to 30 minutes. Then you do the classification; there we have to look a little deeper. Today's algorithms can classify perhaps 90-95% of the data set, and it looks good at first.
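To give a feel for what such a ground classification does, here is a toy grid-minimum filter: take the lowest return in each horizontal cell as the local ground level, then label nearby points as ground. This is my own drastic simplification for illustration only; production software uses far more robust algorithms (progressive TIN densification and the like), and all names and numbers here are invented.

```python
def classify_ground(points, cell_size=1.0, height_tol=0.2):
    """Naive grid-minimum ground filter: the lowest return in each
    horizontal cell defines the local ground level; every point within
    height_tol of that level is labeled ground, the rest non-ground."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        if key not in lowest or z < lowest[key]:
            lowest[key] = z
    labels = []
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        labels.append("ground" if z - lowest[key] <= height_tol else "non-ground")
    return labels

# Flat ground near z = 0 with one "tree" return at z = 5
pts = [(0.2, 0.3, 0.02), (0.8, 0.1, 0.05), (0.5, 0.5, 5.0), (1.4, 0.2, 0.01)]
print(classify_ground(pts))  # ['ground', 'ground', 'non-ground', 'ground']
```

Even this toy version shows why the last 5-10% needs manual review: any low vegetation within the height tolerance would be mislabeled as ground.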
But you still need to have a look, and maybe reclassify, to get a good result at the end.

The benefits of photogrammetry: the resolution is higher, because with a very good camera, a full-frame camera with 42 megapixels or more, the GSD gives you a lot of points. That is one good benefit. Then, for sure, the lower cost: we saw DJI before, and if you get an M3E or whatever they are called, you are at 2,000-3,000 euros and can get good results. And you have the color and the texture.

Just as a small overview, to sum up: I think the biggest advantage of LiDAR is the vegetation at the end. Hard surfaces are one thing, but being able to look through vegetation is where you can add real quality and data to your data set. Both systems have their advantages, and at GeoCue we do both. We have sensors like this one, the TrueView 535, with the Hesai XT32M2X, for instance, plus three photogrammetry cameras to capture both data sets.

It's a bit difficult with the microphone, but you can see here why there are three cameras: one camera looking nadir and two cameras looking oblique. Why? We use up to 120 degrees field of view of the LiDAR scanner, and today you will not find a good camera at an acceptable price that gives you a 100-degree field of view with a good data outcome. That's why we use the oblique mounting: when we capture a data set we get to around 100 degrees field of view and combine both data sets at the end. So if you fly the LiDAR at 120 degrees field of view and also want to do a photogrammetry job, you can fly the same flight pattern; you don't have to go back and fly the 60% or 80% overlap for photogrammetry. If you go for 30%, that's absolutely good enough to get both the photogrammetry and the LiDAR point cloud.
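Two of the quantities in this section can be put into simple formulas: the GSD a camera delivers, and how the chosen sidelap translates into flight-line spacing, and therefore flight time. A rough sketch, where the pixel pitch, lens, and altitude are illustrative values of my own, not specifications of any sensor mentioned in the talk:

```python
def gsd_m(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground sample distance: the ground size covered by one pixel."""
    return altitude_m * pixel_pitch_m / focal_length_m

def line_spacing_m(swath_width_m, sidelap):
    """Spacing between adjacent flight lines for a given sidelap.
    Less overlap -> wider spacing -> fewer lines -> shorter flights."""
    return swath_width_m * (1.0 - sidelap)

# Hypothetical full-frame 42 MP setup: ~4.5 um pixels, 35 mm lens, 80 m AGL
print(round(gsd_m(80.0, 4.5e-6, 0.035) * 100, 2))  # ~1.03 cm per pixel

# With a 100 m swath: 30% sidelap vs 80% sidelap
print(round(line_spacing_m(100.0, 0.30), 1))  # ~70 m between lines
print(round(line_spacing_m(100.0, 0.80), 1))  # ~20 m, i.e. 3.5x more lines
```

The last two numbers are the whole efficiency argument: flying at 30% instead of 80% sidelap covers the same area with roughly a third of the flight lines.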
We integrate an Applanix APX-15 inside, to get the best data quality out of the sensor and the data set at the end.

Who are we? We are GeoCue, an mdGroup company, and the group includes GeoCue, Microdrones, and LP360, as I said before. We are working in Europe, in the US, in Brazil, in France; so not all over the world, but in some parts. Our main products are LiDAR sensors plus software. Because, to say it in other words, just having a good LiDAR sensor, which by the way a lot of people have, is not good enough. At the end of the day you have a point cloud, the pictures, the IMU data, and you need to get something out of it. It's good to have a nicely flying drone and a good-looking data set, but that is not what the end customer needs.

For example, one of our partners flew a project in Denmark over 25 kilometers of railway, using the previous product, the TrueView 515, with cameras. He had a good point cloud, everything was nice, and I was super excited about the good accuracies, the orthophoto, everything perfect. But the end customer just needed cross sections. So what he delivered was about 1,000 PDFs with cross sections, comparing what should have been built against what was built.

Yes, we work with Hesai sensors and with RIEGL sensors, really depending on the needs of the client. At the end of the day it depends on the job and the project, and then we can see which sensor fits that need. And as I said, it's not just the hardware; it's also the software, and I think my colleague will show a little more on the software side later. Twenty years ago we started to develop a software called LP360, where you can process all the raw data from the sensor, the cameras, and the APX into a LAS file. You do the strip adjustment.
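A cross-section delivery like the one in the railway story boils down to slicing the point cloud into thin slabs at regular stations along the alignment. A toy sketch of that slicing step, with a straight alignment along the x axis and invented numbers (real alignments are curved and real tools do far more, such as projecting points onto the station plane and drawing the profile):

```python
import random

def cross_sections(points, station_spacing, slice_width):
    """Group points into thin slices along the alignment axis (x).
    points: list of (x, y, z) tuples; returns {station: points in slice}."""
    xs = [p[0] for p in points]
    sections = {}
    station = min(xs)
    while station <= max(xs):
        sl = [p for p in points if abs(p[0] - station) <= slice_width / 2.0]
        if sl:
            sections[station] = sl
        station += station_spacing
    return sections

# Toy cloud: random points along 100 m of "track"
random.seed(0)
cloud = [(random.uniform(0, 100), random.uniform(-5, 5), random.uniform(0, 3))
         for _ in range(2000)]
secs = cross_sections(cloud, station_spacing=10.0, slice_width=0.5)
print(len(secs))  # number of stations that contain points
```

Each slice would then be plotted as one PDF page, which is essentially what the partner delivered a thousand times.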
You can do classification, smoothing, volume calculations, DTMs, contour lines, and a lot of other things. By the way, if you're interested, we can do a quick demo at the booth; we have the software running there and a technician who can help with that. And it's not just for our own sensors; we open it up as well. So if you fly the L1 today, for example, and you want to get rid of misalignments in the data set, you can come to us and we will show you how we work with that, and also how to apply other EPSG codes to the data set and so on. If that is a problem for you today, you are invited to come to our booth and we will show you how we deal with it.

A few things to consider when purchasing a LiDAR. I know it's not only us on the market, but when you go to purchase a LiDAR, there are a few things to consider. The first, as I mentioned already, is the field of view of the LiDAR and of the camera. If you find a system that has both, pay attention to the field of view of the camera. If you want to do both LiDAR and photogrammetry and don't want to switch sensors, it's better to understand the camera's field of view, because it directly impacts the efficiency of the project: to cover both, you may have to adjust your flight pattern, and that costs more time in flying and in processing.

Then a reminder about accuracy and precision. To explain it in very few words: accuracy means, is the spot I am measuring in the right place, where it has to be? Precision means, if I measure it 10 times, is it really on the same spot every time, regardless of where that spot is? This is also something to consider, because the best case is a system that is both accurate and precise, and that goes directly into the pocket, I would say.
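The accuracy/precision distinction is easy to show numerically: accuracy is the bias of the mean against the true coordinate, precision is the spread of repeated measurements. A minimal sketch, with invented measurement values:

```python
import statistics

def accuracy_and_precision(measurements, true_value):
    """Accuracy error: how far the mean of the measurements sits from
    the true coordinate. Precision: how tightly repeated measurements
    cluster (standard deviation), regardless of where they sit."""
    bias = abs(statistics.fmean(measurements) - true_value)
    spread = statistics.pstdev(measurements)
    return bias, spread

# Precise but inaccurate: a tight cluster sitting 10 cm off the
# true value of 100.00 m (all numbers invented for illustration)
bias, spread = accuracy_and_precision([100.10, 100.11, 100.09, 100.10], 100.0)
print(round(bias, 2), round(spread, 4))  # 0.1 bias, ~0.0071 spread
```

This is exactly the "very precise but not accurate" case the talk warns about: a millimeter-level spread tells you nothing about the 10 cm systematic offset.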
It's quite expensive, but keep it in mind and look at both; don't just say, OK, it's very accurate, while the precision is bad, for example.

Then there is accuracy testing. I will skip this slide, but when you are dealing with a manufacturer it is good to ask for samples: is there a client or a data set where they can prove their accuracy, for example against externally measured points from a survey mapping office, and so on. It depends.

At GeoCue, what we did on our side is invent something called the Accuracy Star. The Accuracy Star gives you the ability to measure precisely not just the Z axis but also X and Y. Why am I saying that? If you look at a LiDAR point cloud today without colors, and you have, for example, a black-and-white checkerboard target with a spot in the middle where you put your GNSS receiver to say, OK, this is the coordinate: you fly with the drone and you want to say something about the accuracy. In Z you can do it easily, because you have a flat surface; you take the points and just measure it, and let's say it's within 5 centimeters, fine. But you are not able to see the small dot in the point cloud, because the point cloud is not as dense as photogrammetry, so in X and Y it's kind of a guess. What the Accuracy Star does: it has small white plates at fixed, measured distances that will not change, and the software recognizes them. You put your base station on top of it, and then the software can tell you exactly what the X and Y accuracy is, and you can readjust the point cloud if needed. So this is a good way to say something definite about X and Y, not just a guess.

One more thing is the slant range. Today, if you purchase a LiDAR sensor and the manufacturer says, or we say, you can use it at 80 meters, that is 80 meters at 20% reflectivity, for example.
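The swath-edge effect behind that rated range is simple trigonometry: the edge beam travels the altitude divided by the cosine of the half field of view, so it is always longer than the nadir range. A sketch of that, plus the related beam-footprint calculation; the function names and the 3 mrad divergence value are my own illustrative choices:

```python
import math

def slant_range_m(altitude_m, fov_deg):
    """Range to the swath edge: the edge beam travels
    altitude / cos(half field of view), longer than the nadir range."""
    return altitude_m / math.cos(math.radians(fov_deg / 2.0))

def footprint_m(range_m, divergence_mrad):
    """Approximate laser footprint diameter: range times beam
    divergence (small-angle approximation)."""
    return range_m * divergence_mrad * 1e-3

# 75 m flight height, 90-degree FOV -> ~106 m at the swath edge
edge = slant_range_m(75.0, 90.0)
print(round(edge, 1))  # 106.1

# An illustrative 3 mrad beam at that range: footprint ~32 cm
print(round(footprint_m(edge, 3.0) * 100, 1))  # 31.8 cm
```

So a sensor rated for 80 m already exceeds its rated range at the edges when flown at 75 m with a 90-degree field of view, which is exactly the trap described next.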
And we always talk about nadir. But if you are flying a 90-degree field of view with the LiDAR, you have this slant range: in this example, from 75 meters flight height you have about 106 meters at the edge of the swath. So with a sensor rated for 80 meters, you cannot fly at 75 meters, because you will get bad data at the edge of the data set. That is one more thing to consider.

Then we have something called beam divergence. This is how the laser beam spreads on the ground, depending on the flight height. A very good LiDAR sensor, a RIEGL for example, will have a very thin beam; if the quality is not as good, the footprint gets bigger and bigger. It's not necessarily a problem: on a flat surface, whether a point's footprint is 48 or 22 or however many centimeters, you don't care where within it the point falls; it's just that value. But on edges, for example, or fine details, you want to know where that edge is, and you don't want to estimate. So beam divergence is also something to consider and look at when you want to purchase a LiDAR sensor.

And maybe the last thing to consider is something we also see at Intergeo today: is the sensor also able to do mobile mapping? I would call that a little critical, because mobile mapping, when you look at something like the Trimble MX9, is very, very high-end. I think this is just an addition to the flights. For example, we had one project where, due to German permissions, the client was not allowed to fly over the Autobahn. So he flew as close as possible to the Autobahn and then put the mobile mapping sensor on top of a car to capture the rest of the data.

So I will finish here. This is the summary: stop by the booth, and you can get the presentation if you like. Thank you for having me. Hall 1.2, and you will find GeoCue there.
Thank you very much.