Okay, thank you for attending this presentation. My name is Jean-Michel Friedt, and I want to talk to you about a fascinating convergence of technologies. We have reached a point where, on the one hand, we have basically infinite computational power: everyone has a very fast computer on their desk. On the other hand, we have this proliferation of small unmanned aerial vehicles fitted with high-quality cameras. I want to walk you through the generation of geo-referenced digital elevation models using such UAVs. I will show you what software can be used and what the flight requirements are, and give an example of two measurements that were done. To give you an idea of the areas I am interested in, that is about a 40-meter scale bar: we are talking about areas a few hundred meters on a side, with sub-decimeter pixel resolution. Here is an example of the braiding and flow change of a small river between two weeks, which you can capture because aerial images have a short repetition time. So I want to go through how to generate such orthophotos and digital elevation models using small unmanned aerial vehicles. This is actually a sequel to the presentation I gave at FOSS4G France last summer. Unfortunately, the tutorial I wrote at the time was in French. I had planned on translating it for FOSS4G; I haven't had time yet to do so. So if anyone is interested in getting this tutorial in English, feel free to send me an email and I will be happy to translate it for you. So why am I doing this? Elevation is the basic input for the geomorphological investigations, or any kind of geo-referenced investigation, that I am interested in: measuring landslides, glacier mills, material transport, flooding, that kind of information.
Now, if you just rely on global digital elevation models such as SRTM or the ASTER GDEM, you have what I would consider, from my perspective, low-resolution digital elevation models: typically three arc-seconds over most of the world, one arc-second over the US for SRTM. So we're talking about 30 to 90 meter lateral resolution. Additionally, you usually have only one digital elevation model, so if you want to compute differences of digital elevation models, you need to find some way of getting repeated digital elevation models, which are usually not available. So what I'm interested in here is how to generate local digital elevation models; by local, I mean an area of less than 10 square kilometers. I want high resolution, meaning sub-decimeter pixel size, and a high update rate, let's say one DEM every week or every month, so that I can subtract digital elevation models and see material movement or changes of elevation in my area. Additionally, since this all started as a hobby, and as usual the hobby gets more and more into the work, I'm interested in low-cost equipment. By low cost, I mean I started this with one of those less-than-100-euro toys that you can buy at Christmas, and slowly I'm getting into more involved equipment. Now I'm doing this with DJI drones, but we're still talking about sub-kilo-euro equipment. So the challenge I will be talking to you about is that we will have to handle a huge number of images. Typically, we acquire 600 to 1,000 high-resolution images per flight, at typically 5 to 6 megabytes per image, so we're talking about six to seven gigabytes worth of images to process. For the generated DEM, we're talking about 10 square kilometers with sub-decimeter resolution, so we're going to have gigabyte-sized orthophotos. I find this a bit challenging to handle.
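The arc-second figures above translate to ground distances as follows; a quick sketch, where the ~31 m and ~93 m values hold at the equator and the east-west spacing shrinks with the cosine of latitude:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def arcsec_to_metres(arcsec, lat_deg=0.0):
    """Ground length of an arc-second of longitude at a given latitude."""
    angle_rad = math.radians(arcsec / 3600.0)
    return EARTH_RADIUS_M * angle_rad * math.cos(math.radians(lat_deg))

print(round(arcsec_to_metres(1)))  # ~31 m: SRTM over the US
print(round(arcsec_to_metres(3)))  # ~93 m: SRTM over most of the world
```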
Now, this means that we will hardly be able to use a graphical user interface, because such a data set would be horrendously slow to render. So I will be presenting a command-line tool that has been developed by the French national geographic institute, IGN, to which I'm not totally unrelated, but I genuinely fell in love with the software they developed. Marc Pierrot-Deseilligny is in charge of this development in one of the labs of IGN. I want to introduce you to the software because at first you might get a bit afraid when you have a look at MicMac. I want to show you step by step how powerful the approach is: the philosophy of one tool for one step, rather than one fully integrated tool, lets you on the one hand handle this huge data set and on the other hand know exactly what you're doing. For each step we have one tool, which means we have an output that tells us whether this step was successful; if it failed, we have to find the reason, since there is no point in continuing the processing if one step failed. This will become clear, I think, in the next slides. But before we can process images, we need to acquire images. So let me just emphasize how important compliance with regulations is. If you're doing this for fun, you might or might not comply with regulations; when it gets into your job, you must comply with regulations. In my case, I'm flying in the north of Norway. I need, of course, to pass an exam and get flight authorization from the civilian authority; this is a 30-page document that you have to fill in. You have to check with the defense ministry whether they agree that you take pictures; in my case I'm flying in a demilitarized zone, so the defense ministry doesn't care. You have to check that you're not harassing wildlife. For this kind of activity, you have to check all of this to get authorization to fly. Once you have all the authorizations, you have to plan your flight.
Flight planning for making a digital elevation model requires enough surface coverage from one picture to the next: you must make sure that your pictures overlap by at least 60%; typically MicMac says you should have 80% overlap. So once you have decided on your horizontal velocity, in my case typically 10 meters per second, and you know your altitude over the area, you can calculate how often you have to take an image. In my case, for example, I'm flying with a footprint of 100 meters at 10 meters per second, so I need to acquire one picture about every two seconds. This is why a 20-minute flight generates 600 to 900 images. Now, you can acquire your images manually, which is what I originally did because I wanted to track some features, a river, or you can automate the flight. In all cases, you know the angular coverage of your camera and you know the altitude, so knowing the resolution of your camera, you can compute the pixel size. You see that a UAV flying at 100 meters with a 4,000-pixel-wide sensor will typically create a DEM with a pixel size of four centimeters. I'm not saying that the DEM will have four-centimeter resolution, but at least your pixel size will be four centimeters. And of course, now we have automated software. This is ITZU, one of the free programs that you can use to define the flight: you just define the two corners of your flight area, take off the drone, get a coffee, and come back 20 minutes later to land the drone. In this case it does not land automatically; you have to bring it back manually. So we've collected all these images; each one of these dots is an image. How do we process these images to create a DEM? First, either you're using a DJI or a geo-referenced camera, in which case it's easy: you have an EXIF tag with the GPS position in your image.
Alternatively, you're flying a high-quality camera that does not have GPS, and on the other hand you have a GPS recorder. What I do is use ExifTool, the command-line ExifTool software, for tagging my images. If you check the header of your image, you know when the image was taken according to the camera clock, and I take a picture of a clock; in this case, a GPS-disciplined clock at an observatory close to our place. So from the image header I have the camera time, I have the GPS time, and I can compute the difference between my camera time and my GPS time. ExifTool then gives you the -geosync option: you can give it a whole NMEA log file and the directory where your images are, and ExifTool will take care of geo-tagging all your images. Once we have this set of geo-tagged images, I need to inform my processing software, and we'll see why in the next step, where each picture was taken. This must be done in a projected reference system; in my case, I'm flying in Norway, which is UTM zone 33 North. So I convert all my positions, my WGS84 NMEA log, into a projected framework, and I create an ASCII file containing easting, northing, altitude and file name. You will notice that I remove a big offset: the northing in northern Norway is on the order of millions of meters, so I remove this offset, otherwise you get rounding errors in the processing. You just keep the relevant decimals of your position and the file name, and you convert this text file into XML. MicMac commands are always prefixed with mm3d, so all commands are mm3d, and here I want to convert the orientation from the text file into an XML file. We will also see that we need to select a subset of images for calibrating the lens properties.
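Writing that offset-corrected position file can be sketched as below. The positions, file names and offsets are made up for illustration, and the "#F=N X Y Z" header line is my recollection of the format MicMac's orientation-conversion tool accepts, so treat it as an assumption:

```python
# Hypothetical UTM-33N camera positions (easting, northing, altitude) per image.
positions = {
    "IMG_0001.JPG": (455210.37, 8755320.91, 102.4),
    "IMG_0002.JPG": (455230.12, 8755321.05, 101.9),
}

# Remove a constant offset so the processing works with small numbers and
# avoids rounding errors (the northing alone is millions of metres).
E_OFF, N_OFF = 455000.0, 8755000.0

with open("gps_positions.txt", "w") as out:
    out.write("#F=N X Y Z\n")  # assumed header: name, easting, northing, altitude
    for name, (e, n, z) in sorted(positions.items()):
        out.write(f"{name} {e - E_OFF:.2f} {n - N_OFF:.2f} {z:.2f}\n")
```

The resulting text file is then converted to the XML orientation that MicMac expects, which is the mm3d conversion step mentioned above.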
So we say we're going to use as a reference this particular image, which meets some requirements, namely that there is maximum elevation variation in this image, and I want to take 25 images to calibrate the lens. This is pretty much what you will always do: you say mm3d, the command you want to use, and the options for whatever step you want. So first, why do I need the GPS position of each camera? The first thing that the processing software needs is tie points, meaning points that match in both images, so that the software can say: this same point was seen from different points of view. Now, if you do this naively on a 600-picture dataset and say you want to correlate all possible pairs, you find out that you need something like 200,000 matching steps, and each of these matching steps takes something like a second, so you're going to spend a few days just finding the tie points. But you have a very strong assumption: you know that this image will most probably not share any point with that one, because they are so far apart. So you tell MicMac to only consider images that are adjacent in their GPS positions; this is why we computed that position file previously. We ask MicMac to use the Tapioca tool with the file we just generated. I export the text files so that I can draw these images in QGIS, and I ask it to only use the pairs of closest images. By doing this, instead of 200,000 pairs, you only have 10,000 pairs to analyze, which is much, much faster. Here I drew the pairs that MicMac identified, and it makes sense, so the algorithm is working pretty well. With this, MicMac will search for points that match within each pair. So here are two adjacent images.
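The pair-pruning idea behind this file-based matching mode can be sketched with made-up camera positions; the two flight lines, the coordinates, and the 40-meter threshold below are illustrative only:

```python
import itertools
import math

def neighbour_pairs(positions, max_dist):
    """Keep only image pairs whose camera positions lie within max_dist metres:
    distant images cannot share tie points, so there is no need to match them."""
    pairs = []
    for (a, pa), (b, pb) in itertools.combinations(sorted(positions.items()), 2):
        if math.dist(pa, pb) <= max_dist:
            pairs.append((a, b))
    return pairs

# Hypothetical camera positions along two parallel flight lines (metres).
cams = {f"IMG_{i:04d}.JPG": (20.0 * i, 0.0) for i in range(5)}
cams.update({f"IMG_{100 + i:04d}.JPG": (20.0 * i, 30.0) for i in range(5)})

all_pairs = len(cams) * (len(cams) - 1) // 2   # exhaustive matching
near = neighbour_pairs(cams, max_dist=40.0)    # GPS-pruned matching
print(all_pairs, len(near))  # far fewer pairs need the slow matching step
```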
And if you follow the arrows on your right, you see that the vector of motion between the two images is indeed properly computed: MicMac has selected relevant points in this image and computed where these points moved from one image to the other. This is the basic input that we will need. So, first step: verify that the tie-point identification has indeed worked properly. If you have very smooth areas, or features that look similar in multiple pictures, this step will not work properly; you can draw the arrows and check that it is working. Now we have the tie points. Next step: we need to model the lens. We have a UAV, I bought a cheap toy, and I have no characteristics of my lens. So the system can automatically generate a model of your lens. I don't have a nice illustration here, so I used this slide to show you how you can help yourself in MicMac when you don't know how to do things. The tool for finding lens properties is Tapas. We ask MicMac to run the Tapas tool, and with the help we get a list of all possible lens models. Of course, the more complex the model, the better the modeling of the lens properties, but the more difficult it is to make it converge. So it's a trade-off between complex models, where you have a lot of freedom but a chance of not converging, and basic models with fewer parameters, poorer modeling but easier convergence. I show you the output of Tapas here so that you can see that all these numbers are errors in pixels. At the very beginning, when you start the model, the lens model is very poor, so you see errors of three pixels, 11 pixels; that's very bad. If your model converges properly, you will end up with less than one pixel of error between your images. So at first it might be frightening to see these big numbers coming out of MicMac, but actually they are very relevant: they tell you the percentage of tie points that have been used and your pixel error.
So basically, if your pixel error is not converging towards a sub-one-pixel value, you're in trouble and something went wrong. At the end you see that the worst residual is one pixel and the mean residual is about 0.9 pixel: the lens was properly modeled, this is working fine. Once you've modeled your lens, you create a coarse point cloud. That is basically to check whether MicMac was able to properly position the path of the UAV over the surface, and whether your surface makes sense with respect to what you expect from your DEM. Just be careful: I was recently introduced to a paper which shows that self-calibration of the lens might create very low-frequency, large-scale parabolic distortions, which I have seen. So this is a caveat about self-calibration of the lens: you might create large-scale distortions. And if you look closely, you see ski tracks here that helped the software match features on this snow-coated area. Once you've done the coarse point cloud comes the most time-consuming part: you create the fine point cloud, you create the orthophoto, and you create a correlation map that tells you where you can be most confident that your digital elevation model is correct. So what does the fine point cloud look like? Here's an example of a fine point cloud generated this way. It's about 10 million points, and you can move around in the fine point cloud you've generated, zoom into the area, whoops, the other way, and check the topography of your digital elevation model. If I can zoom in, yes: you see that you indeed have topography, and as in real life you've got the hills and the valleys that you can check. So here I collected these images; I have one satellite image of the region I'm interested in. You remember that we removed an offset from my geo-referenced images: I put the offset back.
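The "stop if a step failed" philosophy applies directly here: before moving on to the expensive fine point cloud, I check that the residuals decreased to a sub-pixel value. A trivial sketch of that check, with hypothetical residual numbers (only the sub-one-pixel criterion comes from the talk):

```python
# Hypothetical per-iteration residuals, in pixels, as reported by the lens
# calibration step: they should decrease towards a sub-pixel value.
residuals_px = [11.0, 3.2, 1.9, 1.2, 0.95, 0.90]

decreasing = all(b <= a for a, b in zip(residuals_px, residuals_px[1:]))
converged = decreasing and residuals_px[-1] < 1.0
print("lens model converged" if converged else "calibration failed; stop here")
```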
I insert this in my world file so that my data set is georeferenced, I correct for altitude, and I get an image in pixels along with the resolution in an XML file: one pixel is 22 centimeters. If I overlay my DEM, you see the braiding of the rivers, with fine continuity, and if you look at the hill over here, you indeed have something consistent with the aerial picture: the topography matches the features we see on the ground. So that's an example of fusing, in QGIS, your digital elevation model with a background satellite image. You cannot see it here, but this is a Formosat satellite image with two-meter resolution, and I get a DEM, whoops, sorry, I get a DEM with sub-decimeter resolution. So when you zoom in, you really see the difference between the big pixels of the satellite image and the fine pixels of the UAV. Here is an example of subtracting two DEMs. These two DEMs were acquired one week apart, during a time interval in which there was a big flood. Here is a channel where the river is flowing, and if I take the subtraction, I see this dip here, which is about three meters deep. Does this match reality? Well, I didn't make a practical field check, but this is the kind of canyon the river has carved in the moraine I'm interested in, and indeed that is pretty much three meters high. So it matches what can be expected from heavy flooding in this brittle material of the moraine, where a landslide has created this digital elevation model difference. Now, closer to you, you might do the same thing. This is something that was done in the lab: I collected two digital elevation models of the parking lot next to our laboratory. Here is an example of an acquisition at the end of September and the beginning of April, and I did the same when returning in May, and when you do this kind of processing, you get the correlation map.
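The DEM subtraction itself is just a per-pixel difference. Here is a toy stand-in with two hand-made 4x4 grids; the real rasters are gigabyte-sized and are differenced in GIS tooling rather than by hand, and all the numbers below are invented:

```python
# Two toy 4x4 DEMs (elevations in metres), nominally one week apart: the
# "after" grid has a flood-carved channel in its centre.
dem_before = [[12.0, 12.1, 12.0, 11.9],
              [12.0, 11.8, 11.7, 11.9],
              [12.1, 11.6, 11.5, 12.0],
              [12.0, 11.9, 11.8, 12.0]]
dem_after  = [[12.0, 12.1, 12.0, 11.9],
              [12.0, 10.0,  9.5, 11.9],
              [12.1,  9.8,  9.2, 12.0],
              [12.0, 11.9, 11.8, 12.0]]

# Per-pixel difference: negative values mean material was removed.
diff = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(dem_after, dem_before)]
deepest = min(min(row) for row in diff)
print(f"maximum erosion: {-deepest:.1f} m")  # depth of the carved channel
```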
So the correlation map tells you that where there is grass, which of course has very few relevant features, MicMac is unable to create a DEM, but on the parking lot and on the soil-covered area, the correlation is very good. When you compute the digital elevation model, you have the parking lot and the cars which are parked over here, and if I do the DEM subtraction, I see which cars have moved on the parking lot from one time to the other. So basically, this is working pretty well. As a conclusion to this presentation, I wanted to show you that using aerial images with at least 60% overlap between pictures, you can create digital elevation models by first identifying tie points, then the lens properties, which are automatically estimated from the images; even with a toy camera, you will be able to generate a more or less good-quality lens model. And you need the camera positions if you want a geo-referenced model; if you just want a qualitative model without geo-referencing, you don't even need the GPS, since the software will handle by itself the variety of positions of the cameras. We create a coarse point cloud to make sure that the positions of the cameras were properly modeled by MicMac, and the results are the orthophoto, the digital elevation model and the correlation maps that let us assess the quality of the result. If you want some of the examples, we distribute them through a QGIS web server, so you can connect to this website to get the OpenLayers version of these results. For the French-speaking audience, a couple of articles were written on MicMac because, again, the excellent documentation is not really introductory: MicMac has fascinating documentation for knowledgeable users, but what I was trying to do here is take you by the hand and help you step by step. So this was again written in French for a French audience.
If anyone is interested in open-source software for digital elevation modeling: this was done for orthophotos, but you can also do oblique views and modeling of objects. I will be very happy to translate this to English if anyone wants a basic tutorial on MicMac. And with this, I thank you for your attention, if there are any questions. What kind of vertical accuracy can you get? So the question is about vertical accuracy. What I did is go to a flat area and make multiple measurements, multiple flights separated by four or five minutes, and the standard deviation on the parking lot was 11 centimeters, so let's say decimeter resolution. In the moraine here, from two measurements done two days apart, I get about, say, 60 centimeters. My current interest is whether I could measure snow cover thickness, and 60 centimeters is at the threshold of what can be usable. One of the main reasons for this poorer resolution in northern Spitsbergen, compared with what I did in the parking lot, is the poor alignment of the two DEMs. On the parking lot, even if you are misaligned by a few tens of centimeters, it doesn't matter; in the very rough moraine, if you are misaligned by even a few centimeters, you get a very strong effect. And this is really the conclusion that I could not introduce here: using ground control points, reference points, is mandatory if you want sub-meter accuracy. If you just rely on GPS, and this is L1 GPS, you will not have enough accuracy; you need to introduce GCPs to match properly, and then I would expect 10 to 20 centimeters of vertical accuracy. Very interesting. I have two questions. Is altitude calculated from the angle by which the two overlapping pictures differ? So the question is how I select my altitude. Actually, my altitude is a trade-off with the pixel resolution; here you see that five centimeters is actually much below what I expect. The second constraint is regulation: I'm not allowed to fly higher than 120 meters.
And to make it safe, I wanted to stay at 100 meters; to tell you the truth, I sometimes went up to 120. How does the software calculate the difference in height? It sees this in the various tie points. The SIFT algorithm has a scale-invariance property, so even if you change the altitude, the software will still be able to find relevant tie points and to compute what the altitude was. Yes, the angle doesn't change: the angle is given by the lens properties, that is a given. The lens properties are fundamental. What changes is the area that is covered, and this area depends on the height. But the lens must not change; this is a very strong requirement. Fixed-lens cameras are perfect for this, but if you have an automated camera, you must not change the zoom or the lens properties for this to work. If anything changes, then your calibration is wrong and everything fails. The second question, if I'm allowed: did you try, or maybe someone in the audience can say, whether you can convert this model to something that can be printed on a 3D printer? So the question is whether we can 3D print this. It would take too long to answer now, but the short answer is yes: I printed this on a 3D printer and it works. But you have to be careful to get rid of all the outlier points; I did this manually, and you have to clean it up in Blender. So the next question is the effect of lighting. These DEMs were acquired over half a day; I needed five flights, each 20 minutes long. I never correlate one flight with another, but one flight is 20 minutes, and over 20 minutes the lighting conditions change little enough that I have not seen failures. So it should have an effect, but I have not seen it on the short flights I'm doing. It is trouble in the mountains. So, about wind: I have flown here with winds up to 25 kilometers per hour, which I'm sorry about, because the UAV can only fly up to 10 meters per second; above 2 to 3 meters per second of wind, I would not advise flying.
Now, I'm lucky enough to go on field trips for three weeks at a time, so I can wait for the right conditions. Of course, if you go somewhere and expect to fly immediately, there are chances of failure. The day I flew with 25 kilometers per hour of wind, we were going to a unique site and I had to fly, I had to try it, but I wouldn't advise it. So the question is about thermal imaging. The problem is that you need high-resolution features for the correlation to match. I believe that IGN, the French geographic institute, has this kind of investigation going; I have not, I'm sorry, I haven't tried. So the question is what other open-source tools I may have used. I tried the various SfM (structure-from-motion) packages that I could find; I cannot remember all the ones that I tried. At the end of the day, I used the one that the Geographic Institute was using for production, so I was confident it would be a serious project. Not that the others are not, but I was confident in the Geographic Institute, and I got very good feedback from the group developing this software at IGN. So there is no deep rationale for why I'm using this one, except that I fell in love with it, with the philosophy of the developers. If you take a bit of time to get through the initial difficulties of getting acquainted with it, it is so powerful, and it meets my requirements: one tool for one step, checking each step. So I have not tried the other software enough to comment on why this one and not the others. Thank you.