Hi everyone, I'm Lucas from Fixposition, and today I'm really happy to tell you about what we have been working on over the last one to two years. You might know us already: at Fixposition we build a real-time, high-precision positioning sensor. It is essentially an RTK-INS system, but we added a camera, which is why we can also operate in environments with very poor GNSS reception.

To start off, a few words about our company. We are still a startup, but we have been around for six years, and in those six years I think nobody else has managed to perform this well, in real time and at this precision, in such challenging environments as we have with our Vision-RTK 2 sensor. We have customers all over the world, and we are very proud of our partners. Segway is both an investor and a customer. Movella (Xsens), which many of you may know, is one of our partners and our global distributor. Since we have an RTK GNSS product we need correction data, so we partnered with Topcon to provide correction data in Europe and the US. We are from Zurich, Switzerland, and u-blox, one of the world's leading providers of GNSS chips and modules, is also from Zurich; they are basically our neighbors, our buddies, and also our supplier. As a spin-off of ETH Zurich, the Swiss Federal Institute of Technology, we are very well embedded in the ecosystem, with contacts to universities, the European Space Agency, and others. We are also very proud to have attracted some of the globally most well-known investors, from Silicon Valley as well as from Europe and Asia.

So let's have a quick look at the problem we are solving. We are here at Intergeo, so many of you know this already, right?
RTK GNSS is centimeter-precise and works great out on the open field, but as soon as you get into an urban environment you are in trouble: GNSS reception is no longer good enough, and instead of centimeter-level accuracy you are at meter level or worse. As I mentioned before, our real-time product, the Vision-RTK 2, solves this by adding an IMU and a camera to the system. Our software fuses all of these measurements on board in real time (we call it deep sensor fusion), so we can bridge the gaps wherever GNSS reception is bad and maintain a centimeter-level position in all environments.

As an example, here is a trajectory we recorded with our sensor. It was mounted on a small robot, without any wheel-speed input, in a parking garage. Over 1.7 kilometers we had a maximum error of three meters, which, looked at as drift over distance, is roughly 0.2% drift. You can see the trajectory is slightly tilted because we accumulated a little angular drift while circling, but in general this is already very useful for customers all over the world.

Looking at such customers, this product is used today in several industries: agricultural robots, which are very big in the large orchards in the US; utility robots such as autonomous street sweepers, which are coming up all over the world; and commercial lawn mowers and landscaping robots as well as consumer landscaping robots, the robots many people already have at home mowing their lawn every day.
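As a quick sanity check on the parking-garage numbers mentioned above, the drift-over-distance figure is a one-line calculation:

```python
# Parking-garage run: 1.7 km driven with full GNSS outage,
# maximum position error of 3 m at the end of the run.
distance_m = 1700.0   # total distance travelled, in meters
max_error_m = 3.0     # maximum position error, in meters

# Drift over distance, expressed as a percentage of distance travelled.
drift_pct = max_error_m / distance_m * 100.0
print(f"drift over distance: {drift_pct:.2f}%")  # about 0.18%, i.e. roughly 0.2%
```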
At the same time we also have some customers operating in warehouses. But beyond that, there are markets, which many of you know very well, that need even higher-precision positioning but do not need it in real time. That is what we built and what I am presenting to you today: X Fusion Cloud, for example for LiDAR data referencing, large-scale inspection, and digital-twin data acquisition, but also for drone inspection, high-precision crop analytics, and forestry. These are all very challenging environments for GNSS-based sensors, and we are fixing this issue for all of them.

This is the same trajectory as before: 1.7 kilometers, full GNSS outage, no signals at all, no wheel odometry or external wheel-speed input. You can see that we reduce the error from the three meters we had before to roughly 0.3 meters, so we improve the error by a factor of ten, and this all happens in our X Fusion post-processing in the cloud.

One very important point: a key differentiator compared to an INS-GNSS system is that we are not blind. We have the camera data, which we log during operation. That means that whenever we come back to a place we have seen before, we can correct all the error that accumulated along that loop. For example, on the left you see the result without any loop closure: we go around the loop, and down here and up there the trajectory should overlap, but it does not. With loop closure activated, the two trajectories overlap at the centimeter level.

So let's look a little deeper at some of the use cases we are already exploring with customers. Digital-twin building is a big topic for many of us here at Intergeo. There it is super important to get accurate point clouds so that we can measure and merge sensor data from different sources; maybe today we do some thermal imaging, tomorrow we scan the ground,
and we want to overlay all of that in a digital twin. Repeatability is key, precision is key, and that is what X Fusion Cloud does for us. Another, very similar use case is large-scale inspection, for example crack analysis on a road. Again we need high repeatability, here at the centimeter level: we need to know where the cracks are so that when we come back in two months we can compare whether they have grown and how each crack is developing. At the same time a road may have several lanes next to each other, so we need very good pass-to-pass repeatability, so that cracks that may run across a full street are connected to each other across the different sweeps and we can do the stitching properly. By the way, we also tested that our loop-closure algorithms work when you drive the same street in different lanes; you do not have to drive or walk in exactly the same place.

The last use case I want to show you is drone inspection, for example of infrastructure such as a bridge. On top of the bridge, drone operators use GPS-based, waypoint-based inspection, but below the bridge they could operate with our sensor in real time. It is a tough use case, but it can work, and some operators are doing it; usually, however, they still fly the drone manually to acquire the data, and at the moment they have the problem that they cannot get that data geotagged with centimeter precision. With X Fusion Cloud they will be able to do that.

So how will this look? We tried to keep it very simple, and I think we also developed it to be very simple. As the acquisition sensor you take one of our Vision-RTK 2 sensors and record data: you mount it on top of your vehicle and record your position data at the centimeter level, precisely time-stamped. We also offer NTP
and PTP services so that you can also time-stamp your LiDAR and other sensors precisely. You can either log directly on the sensor or connect it to your computer and log over the network. After logging, we provide apps so you can upload the data directly to the cloud, where it is processed automatically and then becomes available in our so-called dashboard. There you can also take measurements and compare different datasets, or use our X Fusion API to download the results automatically and integrate them into your own application. Several data formats are already available, and we will make sure the formats you need will be available.

So what are we doing today? Today we are announcing an early access program for this product. It is still partially in development, but we are looking for interested partners who want to start testing it. To sign up, you can scan this QR code or just write me an email at eap@fixposition.com (EAP for early access program). Of course it is exciting to be the first to get access, but for us it is also really important that you have a good use case that fits the application and helps both you and us in the development process. The big advantages: you will be able to have an impact on the development process, you will get a 50% discount on our hardware, and you will be able to use the sensor and the service for free during the whole early access program.
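To illustrate why the precise, shared timestamps mentioned earlier (e.g. via NTP/PTP) matter: once the post-processed position log and the LiDAR or camera data share a common time base, geotagging a measurement reduces to interpolating the trajectory at that measurement's timestamp. Below is a minimal, self-contained sketch of that idea with made-up data; the function name and trajectory format are illustrative assumptions, not Fixposition's actual API or log format.

```python
from bisect import bisect_left

def interpolate_position(traj, t):
    """Linearly interpolate an (east, north) position at time t from a
    time-sorted trajectory of (timestamp, east, north) samples.
    Times outside the trajectory are clamped to the nearest endpoint."""
    times = [sample[0] for sample in traj]
    i = bisect_left(times, t)
    if i == 0:
        return traj[0][1:]
    if i == len(traj):
        return traj[-1][1:]
    (t0, e0, n0), (t1, e1, n1) = traj[i - 1], traj[i]
    w = (t - t0) / (t1 - t0)  # interpolation weight between the two samples
    return (e0 + w * (e1 - e0), n0 + w * (n1 - n0))

# Post-processed trajectory: (timestamp [s], east [m], north [m]).
trajectory = [(0.0, 0.00, 0.00), (1.0, 1.00, 0.00), (2.0, 1.00, 1.00)]

# Timestamp of a LiDAR scan captured on the same (PTP-synchronized) clock.
scan_time = 1.5
east, north = interpolate_position(trajectory, scan_time)
print(f"scan geotagged at east={east:.2f} m, north={north:.2f} m")
```

The key point is that the interpolation is only as good as the clock synchronization: with a shared PTP time base, timestamp error contributes negligibly to the position assigned to each scan.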
A very quick roadmap: today we make the announcement, and in December we will start testing with the first early access program partners, taking more on board over time. At the beginning the API will not be available yet; you upload your data, it gets post-processed, and it becomes available in our visualizer, the dashboard. Around the end of February or March the API should be ready for you to integrate into your own applications. Later, roughly in May, let's say Q2, we will also have Rhinox Genesis data processing available, and around the middle of the year we plan the public rollout.

Thank you for your attention. I hope this sparked your interest, and I am open to questions in either English or German. Thank you.