Good day, everyone. My name is Jonas Enval, and I come from Cammenta. We'll see if we can get my slides up here. Yeah, great. Today I will talk to you about the display and analysis of geospatial information, and in particular about two use cases that I'm going to go into in more depth: autonomous BVLOS flight and camera paths. So, if this works. Let's do it like this instead.

So what is required to successfully plan and manage a UAS mission? Well, above all, you need to have an up-to-date operational picture. This is achieved by combining and visualizing geospatial information in real time, and different types of geospatial information as well: sensor data and dynamic events. It can be things like tactical overlays, airspace information, or really dynamic things, like feeds from radar systems or video feeds from drones.

This is an example of when we do this. As you can see, we have a 3D map here, and this 3D map is built up from different data sources. In the back, you see a low-res orthophoto draped on a height model. Closer to the camera, you can see boxes of buildings that we built up using building footprints together with height information for the buildings. And closest to the camera, you see a really high-res city model with two-centimeter accuracy that we have got from our friends at AGI.

We use this model, with all these types of information, to do different types of analysis. You can see the drone route going through the city, avoiding the buildings and trees and going through these narrow streets. You can also see the blue parts on the ground and on top of the roof of the church: that is the viewshed. You can see the drone symbol, or the model of the drone, in that picture, and what it can actually see at this moment is what is pictured. You can see the small shadows on top of the church; those are the tall spires, which make parts of the roof not visible.
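A viewshed like the one in the demo can be sketched as a simple line-of-sight test over a raster height model. This is a minimal illustration in Python, not the implementation behind the demo; the grid representation, the `line_of_sight` helper and the cell-index coordinates are all assumptions made for the sketch:

```python
import numpy as np

def line_of_sight(height, observer, target, eye_height=1.5):
    """True if `target` cell is visible from `observer` cell.

    `height` is a 2D array of terrain/building elevations (a simple
    raster height model); positions are (row, col) grid indices.
    Walks the ray and tracks the steepest slope seen so far: the
    target is visible if its slope is at least that steep.
    """
    (r0, c0), (r1, c1) = observer, target
    eye = height[r0, c0] + eye_height
    n = max(abs(r1 - r0), abs(c1 - c0))
    if n == 0:
        return True  # the observer's own cell
    max_slope = float("-inf")
    for i in range(1, n):  # cells strictly between observer and target
        t = i / n
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        max_slope = max(max_slope, (height[r, c] - eye) / i)
    return (height[r1, c1] - eye) / n >= max_slope

def viewshed(height, observer, eye_height=1.5):
    """Boolean mask of which cells the observer can see."""
    vis = np.zeros(height.shape, dtype=bool)
    for r in range(height.shape[0]):
        for c in range(height.shape[1]):
            vis[r, c] = line_of_sight(height, observer, (r, c), eye_height)
    return vis
```

For example, with a flat grid and a tall wall along one column, cells beyond the wall (as seen from the observer) come out as not visible, much like the roof area shadowed by the spires in the demo. A production system would of course work on real elevation data and use a faster sweep algorithm than this per-cell test.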
And at the bottom you can see a vertical profile against the height model, describing what altitude the drone is flying at along the different legs. This is an example that shows how analysis functionality can help the user get an understanding of what is going on in a situation and get full situational awareness.

Now I will show you three examples of other analyses that can help in the drone use case. The first example is airspace coverage. Here you can see that we have two observers, indicated by small eye symbols in the map: one in the middle and one down on the right side. You also see a white volume in this picture; that is the analysis volume. And you also see these red parts: those are parts that are not seen by either of the two observers. Using this type of analysis can help the operator place the sensors optimally, to avoid any blind spots and get optimal coverage. But understanding airspace coverage can sometimes be difficult, and therefore we can do it in real time. So here, as we move the different observers, you can see in real time exactly how the airspace coverage changes, and you can use this as a help to place them optimally. And of course you can also do it the other way around: you can see what a single observer actually sees, as a volumetric line of sight.

The next example I'm going to show is how we can show a video stream from a drone projected on top of the 3D map. Here you can see in green the trajectory, the flight path, of the drone. You can see the drone as a model on top. You can see the view cone of the camera going towards the ground. And finally you can see, on the ground, the video that is projected. And of course we can also do this in real time. So here you can see a live video stream from a drone that is following a car, projected in real time on the map. By doing this you greatly increase the usability of the video.
Everything that happens is georeferenced immediately, so you know exactly where the car is located at all times.

The third example I'm going to show is vertical clearance along a route. The red areas in the upper picture show where the vertical clearance is below a certain threshold. In this way, potential hazards and obstacles can be identified and analysed, and you can easily change the route so that it goes through a safe area and doesn't intersect any of the red areas.

Next I'm going to go into one of the two use cases that I'm going to look at in more depth. The first is camera paths for a drone mission. What do we mean by a camera path, then? Well, a camera path describes the specific sequence of camera movements used to capture footage. It is the path that the camera takes in space, including its position, its orientation and its speed as it records the subject.

Traditionally, the user either operates the UAV manually or starts by setting out a number of waypoints defining the route, and then manually adjusts the camera parameters to capture what he wants. He doesn't really get any good feedback on exactly what is captured. There are also, of course, systems for orthophotos or photogrammetry, for producing 3D models, that have pre-generated patterns for capturing photos. There you define a polygon and the system calculates a standard route to cover it, but it flies this route with typically the same distance, the same altitude, and so on.

By using a 3D model we can improve on these previous approaches. Here we allow mission planning that considers not only the path that the drone takes, but also the projected line of sight and a camera visualization. Here you can see how the user can adjust the camera and get direct feedback both on the line of sight and on the small simulated camera view.
So you get a really good understanding of exactly what can be seen and how that will look in the final footage you will get later. And even though this kind of search pattern exists in many systems today, you get a better planning tool by using the vertical component in a 3D model, and you can enhance the flight path accuracy, especially along the Z axis.

You can also use this to inspect 3D volumes. In this slide we scan a 3D structure to provide a full visual assessment of a building. A 360-degree image can be captured dynamically here by computing the camera path around the building, based on the 3D model and information about the structure. You can fly at exactly the same distance around it, and also fly at different height intervals, using a pattern that is dynamically created.

So when talking about planning for mission objectives, the emphasis is shifted from the traditional focus on the drone flight path to the actual objectives of the mission. Instead of starting with the flight path and then thinking about what the camera sees, we turn it around: we start by thinking about what we want to see, and the actual flight path becomes a derivative of that. This user-centric approach allows for a more dynamic and flexible mission planning system, ensuring that the drone's activities are closely aligned with the goals of the mission.

Next, let me go into the second use case that I will look into in more depth, and that is calculating the best route for a mission. We will look at both a rural and an urban example, but in both cases, of course, we want to calculate the best route for our mission. What does the best route mean, then? Is it the safest, the shortest, the quickest, the most efficient (and in that case, is it fuel consumption we are talking about), or the most hidden? The one that stays in range of the operator or the radar station, or the one that stays outside a certain zone?
Well, in most cases it will be a combination of multiple optimization goals, depending on the operational needs. In our example we're going to focus on two optimization goals: speed and safety. Balancing speed and safety is one example of multi-objective optimization, which is complicated. Even if each of the objectives can be minimized or maximized on its own, there must be a defined way to compare numbers of different kinds, and in this example that is difficult: think of the objectives as different currencies that have no official exchange rate. Travel time is an objective that's easy to measure, but safety is not. Ideally, we would want to talk about the probability of success of the mission, but that is really hard to optimize on. Travel time, though, is something that is easier to optimize on. We solve this by assigning a safety factor to each of the zones that we pass through, which acts as an exchange rate between the currencies. By doing that, we can optimize on a single objective instead of two.

Let us now look at an example of a nap-of-the-earth route that we have generated. Here the route goes from the waypoint on the left, up to the waypoint at the top, then continues along this ridge, which is the shortest route between them, and then goes back to the waypoint on the right. This is of course the fastest route if you want to do nap-of-the-earth type flying between these waypoints, but it's not so safe, because in this case you're quite visible.

So let us introduce another concept to take care of this: the visibility index. The visibility index for a position, in this case the eye symbol, is the percentage of the surrounding terrain that is visible from that point. So in this case, from the eye we can see 63.4% of the surrounding circle. We can also view this within the whole observation area.
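The visibility index just described can be sketched by walking sample rays outward from a position and counting how many terrain samples pass a max-slope line-of-sight test. This is a minimal Python illustration, not the talk's actual implementation; the ray count, the radius in cells, and the raster grid representation are all assumptions:

```python
import math
import numpy as np

def visibility_index(height, pos, radius, eye_height=1.5, n_rays=72):
    """Fraction (0..1) of terrain samples within `radius` cells that
    are visible from `pos` on raster height model `height`.

    A low index means the position sees little of its surroundings,
    and, since visibility goes both ways, is itself well hidden.
    """
    r0, c0 = pos
    eye = height[r0, c0] + eye_height
    visible = total = 0
    for k in range(n_rays):
        a = 2 * math.pi * k / n_rays
        max_slope = float("-inf")  # steepest obstruction seen along this ray
        for d in range(1, radius + 1):
            r = round(r0 + d * math.sin(a))
            c = round(c0 + d * math.cos(a))
            if not (0 <= r < height.shape[0] and 0 <= c < height.shape[1]):
                break  # ray left the grid
            slope = (height[r, c] - eye) / d
            total += 1
            if slope >= max_slope:
                visible += 1  # this sample rises above everything before it
            max_slope = max(max_slope, slope)
    return visible / total if total else 0.0
```

On flat terrain every sample is visible and the index is 1.0; a position behind a wall or ridge scores lower, which is exactly the property the route planner exploits when it prefers low-index cells for a concealed route.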
Here we view, as a color map, the different indices for the different points. You can see that if you move out towards the sea in this example, you will see more of the surrounding area, and if you move inland, you will see less. And since a position with a low visibility index can see very little of its surroundings, it is also hidden, because visibility goes both ways. So we can use this to calculate where it's good to be placed if you don't want to be seen.

Applying this to the same example, areas that have a low visibility index are colored green in the picture, and the route will be adjusted to follow these instead. And if you instead know from which direction the danger will come, you can adjust the calculations to make sure that, as in this case, the route is concealed from an observer to the northeast.

We can similarly make this work in an urban environment. Here we do the same calculation, avoiding the surface and the buildings, and optimizing a route that goes through the city. But let's add some more constraints. Typically you want to add constraints that come from the UTM and from other mission factors, and typically you have a situation like this, of course quite simplified: you have a site with airspace surrounding it, you have restricted volumes where you can't fly, and you have emergency landing zones that you want to be able to reach during the flight. And then you have your calculated flight path, here in blue.

Part of this is the deconfliction with restricted volumes. Some are totally restricted, as in this example, where we generate the blue route to avoid them. But there can also be areas that you would simply rather not fly in; there the safety factor acts like a speed limit on such areas, making it not preferable for the route to go through them.
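The safety-factor idea, collapsing speed and safety into a single objective, can be sketched as a shortest-path search where each cell's traversal cost is multiplied by its safety factor, with totally restricted volumes given infinite cost. A minimal 2D Python sketch under those assumptions (the real problem is of course 3D or 4D, and the grid encoding here is hypothetical):

```python
import heapq

def best_route(grid, start, goal):
    """Dijkstra over a grid of per-cell safety factors.

    `grid[r][c]` is a cost multiplier on the time to enter that cell:
    1.0 for open airspace, >1 for legal-but-undesirable zones, and
    float('inf') for restricted volumes the route must avoid entirely.
    Assumes `goal` is reachable. Returns (path, total cost).
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]  # unit travel time * safety factor
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:  # walk predecessor links back to the start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]
```

With a high safety factor on a zone, the optimizer detours around it exactly as if a speed limit made flying through it slower, and infinite-cost cells are never entered at all, which mirrors the restricted-volume deconfliction in the blue route.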
Another deconfliction that is very important is the deconfliction between different routes in four dimensions. This video shows two routes that have been deconflicted from each other. As multiple autonomous systems operate in the same airspace, the potential for in-flight collision is of course higher, and airspace congestion increases, so this is a very important concept and something we also work with.

Finally, I want to tell you about a project that we are part of and currently running, which implements most of these concepts. It's a project funded by Innovate UK as part of the Future Flight Challenge program in England. It's called HEDO, and we are part of a consortium of companies including Heathrow Airport, OSL, Thales and Heritage Gate. The aim of this project is to do live autonomous BVLOS flights in the last four months of the project, which will be next spring/summer. The second aim of the project is to give recommendations to the legislative authorities on how we can go forward with legislation that supports this in a good way.

Thank you so much for listening. Now I wonder if there are any questions.