Hello, this is Hans van der Kwast, Senior Lecturer at IHE Delft Institute for Water Education. In this module we are going to process drone images using OpenDroneMap, and specifically WebODM. This video is part of the DronePilot project funded by DUPC2. Project partners are FutureWater, HiView, NARC and IHE Delft.

OpenDroneMap is an open-source toolkit for collecting, processing, analyzing and displaying aerial data, such as images from drones. If you want to use the command-line interface, you use OpenDroneMap itself (ODM). In this tutorial we will use the web user interface, which is called WebODM. With OpenDroneMap you can derive products such as orthophotos, surface models, point clouds, textured models, the camera parameters, the camera shots and a quality report.

Let's start by logging into WebODM. You now arrive at the dashboard, where we can add projects. Click the Add Project button to add a project, then give it a name and a description. Under a project you can create several tasks, and each task consists of the images that you process. To create a new task we click Select Images and GCP, and there we can load the images. They can be in different file formats; here we use JPEG. WebODM recognizes the location and the date from the EXIF data, and you can change these manually. It will also automatically assign a processing node, and we can choose from different presets to optimize the result, but here we will use the default. If you want a higher resolution you can choose High Resolution, or if you just want an orthophoto you can choose Fast Orthophoto. We review the settings and, when we are done, click Start Processing. WebODM first uploads the images to the server and then forwards them to the processing node; this is because the nodes can run on different remote computers. While OpenDroneMap is processing our drone images, I'll present the processing pipeline that is running now.
So we've loaded the dataset. Next the pipeline determines the structure from motion, computes the multi-view stereo, creates the mesh, applies textures to the mesh, georeferences the result, and then it can derive the digital elevation model and finally the orthophoto. I'll explain each step in detail.

We've already done the first step, loading the dataset: we created a new project in WebODM and, by uploading the images, created a task with a task name and a date. The input files can be of different formats; in our case we used JPEG files, and you can optionally also upload a ground control point file with GPS coordinates, which we didn't do here. You need at least 5 images to get a result, and the images must overlap by 65% or more; for high-quality 3D modelling the recommended overlap is 83%. In our case WebODM reads the location info from the EXIF data, which I'll explain on the next slide. The output of this step is the image database on the processing node, which will be used in the next steps.

Images often contain EXIF data. EXIF stands for Exchangeable Image File Format and contains metadata. If you want to read the contents you can use the open-source software GIMP, which you can download from gimp.org. If you go to the Image menu you will find Metadata, and you can choose View Metadata. Here on the screen we see that the image contains the coordinates of the camera location. These will be used by WebODM for further processing.

The next step is structure from motion. It uses the images and the optional GCPs, which we don't use in our case. Structure from motion is an advanced photogrammetry technique for estimating 3D objects from the motion between overlapping images, using perspective geometry and optics. The open-source tool OpenSfM is used here. The outputs are the camera positions, the sparse point cloud and the transformation. In the picture you can see the camera positions.
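The camera coordinates in the EXIF tags are stored as degrees, minutes and seconds plus a hemisphere reference letter. As a sketch of what WebODM reads from these tags, here is a small Python function converting them to decimal degrees; the sample coordinates are hypothetical, chosen near Moatize purely for illustration:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds to decimal degrees.

    EXIF stores GPSLatitude/GPSLongitude as three rationals plus a
    reference letter (N/S/E/W); southern and western values are negative.
    """
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Hypothetical camera position: 16°06'28.8" S, 33°44'06.0" E
lat = dms_to_decimal(16, 6, 28.8, "S")
lon = dms_to_decimal(33, 44, 6.0, "E")
print(lat, lon)  # -16.108 33.735
```

Tools such as GIMP show the raw tag values; a conversion like this gives the decimal coordinates that mapping software works with.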
The next step is multi-view stereo. It uses the images, the camera positions from the previous step and optionally the sparse point cloud, and it creates a dense point cloud based on the structures that it recognizes in multiple overlapping image pairs.

The next step is to create a mesh from the point cloud; basically it connects all the dots through triangulation. The output of this step is a 3D or a 2.5D mesh. A 2.5D mesh is sufficient for creating orthophotos; for more accurate 3D modelling you need a 3D mesh.

After creating the mesh we can add textures. This step uses the images, the camera positions and the mesh from the previous step, and it adds colours to the mesh polygons: it finds the best image to fill each polygon, removes moving objects, adjusts colours for differences in illumination and blends the borders between adjacent patches. The output is the textured mesh.

In the next step we georeference the data. Until now everything has been in a relative coordinate system. We now use the transformation, the point cloud and the textured mesh in combination with the GPS info from the EXIF data if available, or an external GCP file with coordinates. If neither is available we can add GCPs ourselves through the WebODM interface, which includes the POSM GCPi interface for adding ground control points. In the screenshot on the left you see an aerial photograph where we find a point that we can also see in a georeferenced Google satellite image, and we connect those points; by collecting multiple points in this way we add the georeferencing information. The outputs of this step are a georeferenced point cloud and textured mesh, with the boundaries cropped to the study area.

The next step is to process the digital elevation model. The inputs are the georeferenced point cloud and the boundaries for cropping to the study area.
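To illustrate the "connecting the dots" idea of meshing, here is a minimal Python sketch that triangulates a regular grid of points into a 2.5D mesh. Real pipelines reconstruct the surface from an unstructured dense point cloud with far more sophisticated algorithms, but the principle of joining neighbouring points into triangles is the same:

```python
def grid_mesh(nx, ny):
    """Triangulate an nx-by-ny grid of vertices into a 2.5D mesh.

    Vertices are indexed row-major; each grid cell is split into two
    triangles, so an nx-by-ny grid yields 2*(nx-1)*(ny-1) triangles.
    Each triangle is a tuple of three vertex indices.
    """
    triangles = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            v = j * nx + i                                 # top-left vertex of this cell
            triangles.append((v, v + 1, v + nx))           # upper-left triangle
            triangles.append((v + 1, v + nx + 1, v + nx))  # lower-right triangle
    return triangles

mesh = grid_mesh(3, 3)  # 2x2 cells -> 8 triangles
print(len(mesh))        # 8
```

Adding a z value to every vertex turns this into a 2.5D surface, which, as noted above, is enough for orthophoto generation.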
And if you have chosen the option to calculate a digital terrain model, the algorithm in this step classifies the point cloud into ground and non-ground points. The points are then interpolated to a raster using the inverse distance weighting algorithm, which also fills voids, and the raster is smoothed with a median filter to remove noise. Finally the result is cropped to the boundary. The outputs of this step are a digital surface model, optionally a digital terrain model, and optionally a classified point cloud with ground and non-ground points.

Finally the orthophoto can be processed. Orthorectification is a different technique from georeferencing, because with orthorectification you need to account for relief displacement and the orientation of the camera. You can see that clearly in this aerial photograph: buildings away from the centre of the picture lean outwards, and the top of a building appears at a different location than its bottom, which in reality is not the case. Orthorectification processes the images in such a way that the elevation is taken into account; therefore the digital surface model is an important input for orthorectification. The effect is less obvious in vegetated areas, crop fields or mountains, but it is still there.

In this tutorial we apply this to an area in Moatize, Mozambique: a maize field during the rainy season, planted on the 5th of December 2019. It is small-scale agriculture, for which drone images are very useful. After this tutorial you will be able to use WebODM installed on a server, upload your drone images to WebODM, evaluate the most important task options, generate point clouds, digital surface models and orthophotos, visualize and evaluate the results in WebODM and in QGIS, and compare results from different moments in time in QGIS.
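The inverse distance weighting interpolation mentioned above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not ODM's actual implementation, which works on large point clouds and also applies the void filling and median-filter smoothing described in the narration:

```python
import math

def idw(x, y, samples, power=2.0):
    """Inverse-distance-weighted interpolation of a value at (x, y).

    samples: list of (sx, sy, value) ground points. Nearby samples get
    a large weight (1/distance^power), distant samples a small one.
    A sample at zero distance returns its value directly.
    """
    num = den = 0.0
    for sx, sy, value in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return value
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Two ground points; a location midway between them gets the average.
points = [(0.0, 0.0, 10.0), (2.0, 0.0, 14.0)]
print(idw(1.0, 0.0, points))  # 12.0
```

Evaluating this at every cell centre of a raster grid produces the interpolated elevation model.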
Let's see if in the meantime our task has been processed by WebODM. Okay, the processing is done; it took a bit more than 10 minutes for the 32 images. If I expand the task I can click View Map to see the result in 2D. There you see the orthophoto, and as you can see it's quite detailed. In the background we see a satellite image from Google Hybrid, and with the opacity slider we can check that it matches quite well, although it was obviously taken in a very different season.

Let's explore this a bit further. I can change the base map: the Esri satellite layer is not available for this area, and OpenStreetMap has few features here that we can use to orient ourselves. I can also add layers, a GeoJSON or a shapefile; if you use a shapefile it needs to be zipped. Here we have a zipped shapefile of the maize plot, and it is shown as an overlay, so now we recognize the field.

We can also calculate contours. This uses the DSM, and we can change some parameters, but if we keep the defaults it will process and return the contours. Here we see the result; it follows the digital surface model and is quite smooth in the maize field. We can export the contours in different formats. We can also do measurements: create a new measurement, place nodes, and trace the polygon of our field to find out how big it is; the 3D coordinates are used for that. We can also calculate volume: on a double-click the polygon closes and it starts computing the volume, and there's the result, which we can also export to a GeoJSON.
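The area and volume measurements come down to standard formulas. As a simplified Python sketch (WebODM's actual computation works on the DSM raster and the traced polygon, and is more involved): the planimetric area of a traced polygon follows from the shoelace formula, and a volume can be approximated by summing raster cell heights above a base plane times the cell area:

```python
def shoelace_area(vertices):
    """Planimetric area of a closed polygon from its (x, y) vertices,
    using the shoelace formula (as when tracing a plot on the map)."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def prism_volume(cell_heights, cell_area):
    """Approximate volume above a base plane: sum of DSM cell heights
    (metres above the base) times the area of one raster cell."""
    return sum(h for h in cell_heights if h > 0) * cell_area

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(shoelace_area(square))               # 100.0 square metres
print(prism_volume([1.0, 2.0, 0.5], 4.0))  # 14.0 cubic metres
```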
Here I can switch off the maize plot and the orthophoto; there you see the RGB distribution. If I switch to the surface model you see the surface height in a colour ramp; you can control the colour ramp with different presets, and you can control the shading. You can also export the result as a GeoTIFF, and we can share this 2D view with a public link or a QR code so other people can use the interactive viewer.

We can click 3D to switch to the 3D view. The project loads and we see our point cloud. We can navigate in different ways; there are different navigation settings. This is a Potree viewer. Under Cameras we can switch on the camera positions, and then we see where the drone pictures were taken. We can also show the textured model, which loads the textured mesh; here we see the result, and we see that it is inaccurate at the edges, but for the field, where we have good overlap, it is quite okay. There are all kinds of settings you can play with: point density, field of view, and eye-dome lighting with its radius, strength and opacity. You can change the background to a skybox, a gradient, black, white or nothing; I just use the dramatic skybox. You can change the splat quality and a few other settings, and you can do all kinds of measurements, like measuring the angle between points; with the cross you can remove your measurements. You can also measure a single point to get its elevation and coordinates, or measure distances or height differences between two points. You can also clip areas, and from here we can save our assets. Of course you can also share a public link here, so a friend can look at your results in 3D in an interactive viewer.
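The point-to-point measurements in the 3D viewer are simple vector geometry. As an illustration of what such a measurement computes (not the viewer's actual code):

```python
import math

def distance_3d(p1, p2):
    """Straight-line distance between two measured (x, y, z) points."""
    return math.dist(p1, p2)

def height_difference(p1, p2):
    """Vertical (z) difference between two measured points."""
    return abs(p2[2] - p1[2])

# Two hypothetical measured points on the model (metres).
a = (0.0, 0.0, 100.0)
b = (3.0, 4.0, 112.0)
print(distance_3d(a, b))        # 13.0
print(height_difference(a, b))  # 12.0
```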
Let's start QGIS. This is the experimental 3.18 version with point cloud support. I load our DSM, orthophoto, contour lines and the LAS point cloud file, which is only supported from version 3.18, and I also load the maize plot. When I load the LAS file, QGIS starts converting it to a format that it can read; that's what the progress bar shows. Then we see that the point cloud is there, with RGB colours.

Let's style the DSM with single-band pseudocolour; I use the viridis colour ramp, and there we see the colours. Let's also check that we are in the right spot on the globe: I load the Google satellite layer and compare it with our orthophoto, with a similar result as in the WebODM viewer. Let's look at this in the 3D view. I go to the settings to configure it, choose the DEM, which in our case is the DSM, and change the resolution a bit to get a nicer result. I click OK and zoom to the extent; there we see it, and when I switch on the orthophoto we also see it draped over the elevation. We can also load the point cloud; we need to change the settings in the Layer Styling panel for the 3D styling and set it to RGB so our points also get RGB colours. So that's what we derived: an orthophoto, a DSM and a point cloud that we can use for our purposes.