We just saw beautiful data sets, but these data sets need to be managed, and they need to be organized. And this is what I would like to show you with our technology, TrueOcean. About north.io, just a few words: we were founded in Kiel in 2011. We are a specialist in developing cloud-based technology for geospatial applications. Three years ago, we decided to go in the product direction, so with TrueEarth and TrueOcean we now have a cloud-based technology for managing very large-scale geospatial and, in this case, hydrographic data. With our technology, we have a vision, we are on a mission: the race from ping to cloud. Our goal is to shorten processes. Processes, especially in the offshore domain, are expensive. They are costly. With our technology, we are going to reduce many of these steps, especially in terms of data collaboration. Just a few words about the offshore wind domain. It's enormous, and it leads to a democratization of the whole topic of underwater data. In Europe, we are planning for almost 400 gigawatts of offshore wind installations; in the APAC region, almost 1,000. With the advancement of floating offshore wind technologies, completely new areas in the Americas can now be covered as well. This means we have to go from prototypes to process workflows, and for the underwater domain that is still something new. What are the industry challenges? On the left side, you can see an enormous amount of data acquisition. We have more infrastructure going out: wind turbines, new underwater cables, underwater gas pipelines. And we have the topic of data acquisition itself. We have autonomous systems, because we don't want to spend all the time and money on ships; we want to send autonomous torpedo-shaped systems into our oceans to gather data. And we are getting satellite coverage.
And this is something that, in the Earth-observation domain, is not a problem for us, but if you are going out to sea, then you need bandwidth. So we are bringing cloud technologies into the world of ocean data. But still, we are facing challenges. We have data silos. We are dealing with proprietary data formats. We have lots of manual data processing. Data transfer often happens via hard drives, and I think that's something we shouldn't have to do anymore in 2023. Our solution to this is the TrueOcean marine data platform, and I'll tell you a few words about it. What's our goal? We are lifting maritime, highly complex sensor data into the cloud. We are making data accessible, findable, shareable, and also understandable. And this is not that easy. In the classical geospatial world we already have a zoo of data formats; in the maritime domain, that zoo gets even bigger. We are talking about sound velocity profiles in various, mostly text-based formats. We are talking about multibeam echo sounder data: GSF, XTF, S7K, very complex binary data formats. And we have hundreds of these kinds of formats, without a single unifying standard. We have side-scan sonar data, which is imaging of the ocean floor. We have magnetometry, which gives us a magnetic signal and shows us what is lying in the sediment. We have sub-bottom profilers, which give us information about the layering of the sediment itself. And many, many other data types, for example satellite data and the laser scans we just saw, also very important. This data is highly complex, and it's not so easy to work with. So what is our ocean data platform doing? On the one side, I think one of the most important things is that we are cloud agnostic. Our technology is a microservice architecture running on Kubernetes. We can run it on every cloud in the world. We can run it on premises.
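A microservice packaged this way runs unchanged on any Kubernetes cluster, whether a managed cloud or an on-premises installation. As a minimal illustration of what such a deployment looks like (the service name, image, and port are made up for this sketch, not the actual TrueOcean deployment):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metadata-extractor            # hypothetical microservice name
spec:
  replicas: 2                         # scale by changing this or via an autoscaler
  selector:
    matchLabels:
      app: metadata-extractor
  template:
    metadata:
      labels:
        app: metadata-extractor
    spec:
      containers:
        - name: metadata-extractor
          image: registry.example.com/metadata-extractor:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Because the manifest only references a container image and standard Kubernetes objects, the same file applies on a hyperscaler, a private cluster, or a hybrid setup.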
We can run it in hybrid environments. That was one of the most important design decisions. We have flexible performance: we are in the cloud, we are not limited to some kind of local server systems, and we can simply scale when we need it. We have a competitive edge with our technology because we are faster: we eliminate data-transfer waste. We bring data into the platform, where we can share it and collaborate on the data itself. And we have geospatial expertise: our technology understands all the specific binary data formats. How does it speed up offshore projects? It brings data into one space; see it as a geospatial data hub. We enable data management and data sharing. We make it simple to set up an SFTP server, an rsync server, a Web Map Service, a Web Feature Service, or an OGC API, because getting data in and out is one of the most important parts of our technology. We connect: we bring stakeholders together with a user rights and role management system. Thanks to the cloud, we are scalable; whether you need one terabyte or one petabyte, in the end it doesn't matter. We convert the data, including the binary data, into a spatial data format: we use Apache Parquet, which you may have heard of, and GeoParquet, the geo version of it, so that we can do computation at scale. We are also working on AI topics, which I will come to at a later stage. And we comply with the Gaia-X principles of data sovereignty and high data security. Just a few screenshots. Data accessibility, finding data, is one of the most important things. With our technology, you can simply search the data: draw a rectangle or a polygon and find all the different binary formats with the concrete, very dense information in them. Data management: what you see here is like a SharePoint or OneDrive approach, but with geospatial intelligence behind it.
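The "draw a rectangle, find the data" search can be sketched as a simple bounding-box filter over survey metadata. This is an illustrative sketch only; the survey names, the catalog structure, and the field names are invented here, not the actual TrueOcean data model:

```python
# Minimal sketch of a rectangle search over a survey catalog.
# A box is (min_lon, min_lat, max_lon, max_lat).

def bbox_intersects(a, b):
    """True if two bounding boxes overlap (touching counts as overlap)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def find_surveys(catalog, query_bbox):
    """Return the names of all surveys whose coverage intersects the query box."""
    return [s["name"] for s in catalog if bbox_intersects(s["bbox"], query_bbox)]

# Hypothetical catalog entries: one multibeam file, one side-scan file.
catalog = [
    {"name": "multibeam_blockA.gsf", "bbox": (7.5, 54.0, 8.0, 54.5)},
    {"name": "sidescan_blockB.xtf",  "bbox": (6.0, 55.0, 6.5, 55.4)},
]

print(find_surveys(catalog, (7.6, 54.1, 7.9, 54.3)))  # → ['multibeam_blockA.gsf']
```

A production system would of course use a spatial index rather than a linear scan, but the contract is the same: a query geometry in, a list of matching data sets out.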
As soon as we upload data into this folder structure, it gets automatically analyzed: we extract the metadata, and we extract the sensor coverage from it. We visualize the data: we directly transform this binary hydrographic data, this laser scan data, into a three-dimensional representation, and we are able to visualize it in a classical web GIS way. We do data processing and analytics. We work at the point cloud level; we no longer work on the raster, the data set that is actually derived from the point cloud. Because we have the power of the cloud, we do the computation on the point cloud level itself. Here is one example of a vertical curvature, a gradient, in this case a slope, that we can calculate inside the point cloud; the output is a point cloud with the slope integrated. And we are working heavily, for example with NVIDIA, on AI topics. What we are investigating now is how we can use physics-based AI for modeling and enriching hydrographic data. What you see here is a multibeam data set, a very large-scale point cloud of 10 million points. Usually you measure the so-called sound velocity profile every few hours to correct your data. We use physics-based AI to model, for each of these 10 million points, its own sound velocity profile and correct the data with it. These are very, very new approaches; you need huge computing power for them, but the way we do it with our technology allows this. That's our partner network and some of our customers, with many bigger companies also involved in our work. We are looking forward to organizing maybe your ocean data with you. Feel free to visit us at booth B3018. Thank you very much.
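To make the sound-velocity-profile idea above concrete: the classical baseline, before any AI comes in, is to take one measured profile (depth/velocity pairs from a cast) and interpolate a velocity for each sounding's depth. A minimal sketch of that baseline, with purely illustrative profile values:

```python
# Classical (non-AI) baseline: interpolate sound velocity from one
# measured sound velocity profile (SVP). Values are illustrative.
import bisect

def sound_velocity_at(profile, depth):
    """Linearly interpolate sound velocity (m/s) at a given depth (m).

    profile: list of (depth_m, velocity_m_per_s) pairs, sorted by depth.
    Depths outside the profile are clamped to the nearest measured value.
    """
    depths = [d for d, _ in profile]
    if depth <= depths[0]:
        return profile[0][1]
    if depth >= depths[-1]:
        return profile[-1][1]
    i = bisect.bisect_right(depths, depth)
    (d0, v0), (d1, v1) = profile[i - 1], profile[i]
    return v0 + (v1 - v0) * (depth - d0) / (d1 - d0)

# Hypothetical cast: fast near the surface, slower below the thermocline.
svp = [(0.0, 1500.0), (10.0, 1490.0), (50.0, 1485.0)]

print(sound_velocity_at(svp, 5.0))   # → 1495.0
```

The approach described in the talk replaces this single shared profile with a modeled profile per point, which is why it needs so much more compute than the per-cast baseline sketched here.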