OXTS makes inertial navigation systems. Fundamentally, that means combining a GNSS receiver with an inertial measurement unit through an extended Kalman filter to provide time, position and orientation at any given moment. On the right-hand side you can see an example of one of our new products, an OEM chip called the X-Red 3000 that we launched at this show. The aim of OXTS is to navigate anywhere.

The problem with this fundamental hardware setup is that it only takes you so far before you run into extremely expensive IMUs, export controls getting in the way, and size constraints. We've all seen a big FOG (fiber-optic gyroscope) IMU; it's an enormous chunk of hardware. And fundamentally, IMUs drift over time: if I lose GNSS, I will always drift. So what can we do to combat that and navigate anywhere? We need to find another way, and the way that OXTS is choosing to do that is through sensor fusion.

That brings us to the agenda for this presentation. First, I'm going to explain what sensor fusion is and specifically what LIO is. Then I'll explain why you would use LIO, what the point of it is, and in what scenarios we really see a difference in the data it produces. And finally, how good is it: how we validated its performance.

So first of all, what is LIO? LIO, or OXTS LIO if we're using the correct marketing term, stands for LiDAR inertial odometry. If we break down each of those components: you're at a geospatial show, so everyone knows what a LiDAR is, it's a laser scanner. Inertial means accelerometers and gyroscopes that tell me how my movement is changing. And odometry is just how my position changes over time. So we're using LiDAR in combination with accelerometers and gyroscopes to provide, effectively, a velocity update into our INS. And how are we implementing it?
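To make "combining a GNSS receiver with an IMU through an extended Kalman filter" concrete, here is a deliberately tiny 1D sketch: the IMU acceleration drives the predict step, and GNSS position measurements drive the correction. This is illustrative only, not the OXTS implementation, which runs a full filter over position, velocity and attitude.

```python
# Minimal 1D GNSS + IMU fusion sketch (pure Python, illustrative only).
# State: [position, velocity]; IMU acceleration drives the predict step,
# GNSS position measurements drive the update step.

def predict(x, P, accel, dt, q=0.1):
    """Propagate state with IMU acceleration (dead reckoning)."""
    p, v = x
    x = [p + v * dt + 0.5 * accel * dt**2, v + accel * dt]
    # P' = F P F^T + Q, with F = [[1, dt], [0, 1]]
    p00, p01, p10, p11 = P[0][0], P[0][1], P[1][0], P[1][1]
    P = [[p00 + dt * (p01 + p10) + dt * dt * p11 + q, p01 + dt * p11],
         [p10 + dt * p11,                             p11 + q]]
    return x, P

def update(x, P, z, r=1.0):
    """Correct with a GNSS position measurement z (H = [1, 0])."""
    s = P[0][0] + r                      # innovation covariance
    k0, k1 = P[0][0] / s, P[1][0] / s    # Kalman gain
    y = z - x[0]                         # innovation
    x = [x[0] + k0 * y, x[1] + k1 * y]
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, P

# Accelerate at 1 m/s^2 for 10 steps of 0.1 s, with GNSS fixes each step.
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for step in range(10):
    x, P = predict(x, P, accel=1.0, dt=0.1)
    x, P = update(x, P, z=0.5 * 1.0 * (0.1 * (step + 1)) ** 2)
print(round(x[0], 3), round(x[1], 3))  # 0.5 1.0
```

When GNSS drops out, only the predict step runs, and the position uncertainty grows without bound: that is exactly the drift problem the rest of this talk is about.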
Well, this is the fundamental OXTS system block diagram. We have IMU data and GNSS data being fed into our processing engine, and out the back we get localization data. What we've implemented in the last few years is something called a generic aiding interface. This allows us to feed in updates from different sensors, whether they provide a position, a velocity, or an attitude update, into our INS, because the sensors we're using don't always provide the best type of input for the environment you're in.

So, talking about LIO specifically, what are we doing? We're taking data from our IMU, integrating it with our LiDAR data in our OXTS LIO software, and then providing a generic aiding data (GAD) packet into our INS, which contains our high-accuracy velocity and angular-rate updates.

So why use LIO? What the hell is the point of it? Why can't I just use an INS? I'll explain this in terms of GNSS. When we have no GNSS obstructions, there's no reason to use anything other than GNSS and an IMU integrated together to give you time, position, and attitude updates. However, in the second scenario we go into urban canyons, and this is where we're relying more on our IMU. And finally we have complete GNSS obstructions, which GNSS just cannot penetrate whatsoever, so we're never getting a full, accurate global position update.

On the left-hand side, we can achieve a good global position quite easily. On the right-hand side, using a SLAM system, you can achieve relative accuracy quite easily. What is difficult is finding a good global position in areas where you have poor GNSS, and that's the reason for integrating a different type of sensor into an INS.

So here's where I live in London. It's got a lot of buildings and a lot of very narrow streets, and the GNSS coverage is particularly poor.
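The GAD packet described above can be pictured as a small structure carrying a timestamped velocity estimate plus its uncertainty. The field names below are a hypothetical illustration, not the actual OXTS GAD SDK schema:

```python
# Hypothetical sketch of a generic-aiding velocity packet; the field
# names are illustrative and not the real OXTS GAD SDK schema.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GadVelocityUpdate:
    stream_id: int                              # identifies the aiding source
    time_of_validity_s: float                   # timestamp the estimate refers to
    velocity_mps: Tuple[float, float, float]    # velocity in the odometry frame
    variance: Tuple[float, float, float]        # per-axis variances, (m/s)^2

def from_lio(timestamp: float, vel, var) -> GadVelocityUpdate:
    """Wrap a LIO velocity estimate as an aiding update for the INS."""
    return GadVelocityUpdate(stream_id=1, time_of_validity_s=timestamp,
                             velocity_mps=tuple(vel), variance=tuple(var))

pkt = from_lio(1234.5, (1.2, 0.0, -0.1), (0.01, 0.01, 0.02))
print(pkt.velocity_mps)  # (1.2, 0.0, -0.1)
```

The key design point is that the INS never needs to know the update came from LiDAR: anything that can express itself as a position, velocity, or attitude with a covariance can aid the filter through the same interface.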
I'm going to show you a data collection we did with one of our INSs: a constellation INS running LIO, coupled with a Hesai XT32. We did eight laps around the City of London, which is a very built-up environment. This white trace is the positional output of a loosely coupled GNSS and IMU combination. You can see areas where we have good RTK updates, over here by Bank, where we can maintain position rather well. But we go in past here, past the Gherkin, and we start to drift by meters. For a mapping application, this is unusable.

But when we add in LiDAR inertial odometry, it provides a very complementary error profile to GNSS: it works in environments where we have a lot of features around us for the scans to pick out. We keep position for eight laps, about an hour and ten minutes of data, and while we're never receiving particularly good position updates in there, we stay on track for the entirety of that time.

Another example is San Francisco: a lot of tall buildings, and a similar setup with one of our new INSs. Mission Street is one of the two worst streets in San Francisco for GNSS, and this is what our INS output looks like, the white trace. When we stop for a prolonged period of time, we start to drift. It may look like quite a small drift there, maybe a few meters, but what we don't see is the drift in altitude, which is shown by the point cloud over here. We have good data going into the particularly poor GNSS period, then that long stationary period, and then we get a mishmash afterwards; it's no use. We apply LIO, and we're suddenly able to pick out features; we're not drifting in altitude over time. That's the output: we're generating a much cleaner point cloud.

And here's an example of a completely GNSS-denied environment, or one where we're only getting spurious GNSS signals through the sides of a wall, at very low altitude.
On the right-hand side, we have a top-down image of the car park. We can see how the layers don't really align; you can tell there's drift in there. Once we apply LIO, the drift reduces, we generate a good-looking point cloud, and all the levels align.

Then, finally, in San Francisco, this one is not so much a position improvement. We can see we're going down a tunnel, and we have some slightly fuzzy railings on the right-hand side. This is just to say that LIO can also give you small heading, pitch and roll improvements, shown by the cleaning-up of the point cloud that you can see here.

And again, another car park example. We're inside for 470 seconds, which would normally be curtains for any MEMS-based INS (MEMS being a relatively cheap technology compared to what you'll see elsewhere). But when we apply LiDAR odometry, we can see that over those 470 seconds we're only drifting by 0.88 meters, compared to 100 meters beforehand.

So finally, how are we validating this? This is near where we're based in Oxford. We've done seven laps of this area; the buildings around us are maybe three or four stories tall. Our INS can cope with these scenarios normally, there's no issue, but we can blank the GNSS artificially for an amount of time and test how big the positional drift is over that period, which you can see on the right-hand side. So we've done seven laps of the Ashmolean, with blanked 60-second GNSS periods throughout the entire dataset, to do our cross-validation on the test data. That produced the plot on the right-hand side, which shows, across the different data runs, how the drift is reduced. You can see that over 60 seconds we're going down to an average of about 10 to 15 centimeters of positional drift, which is far better than any MEMS-based INS can achieve.
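The blanked-GNSS validation just described boils down to: for each artificial outage window, compare the dead-reckoned trajectory against the reference at the end of the window. A minimal sketch, with synthetic trajectories and numbers for illustration only:

```python
# Sketch of the blanked-GNSS cross-validation: for each artificial
# outage window, measure the 2D position error between the test
# (dead-reckoned) trajectory and the reference at the window's end.
# Trajectories here are synthetic, purely for illustration.
import math

def end_of_window_drift(reference, test, windows):
    """reference/test map time -> (x, y); windows are (start, end) pairs."""
    drifts = []
    for _, t_end in windows:
        rx, ry = reference[t_end]
        tx, ty = test[t_end]
        drifts.append(math.hypot(tx - rx, ty - ry))
    return drifts

reference = {60: (100.0, 0.0), 120: (200.0, 0.0)}
dead_reckoned = {60: (100.10, 0.0), 120: (200.0, 0.15)}
drifts = end_of_window_drift(reference, dead_reckoned, [(0, 60), (60, 120)])
print([round(d, 2) for d in drifts])  # [0.1, 0.15]
```

Repeating this over many 60-second windows across seven laps gives a distribution of drift values rather than a single anecdote, which is what the plot on the right-hand side is showing.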
If you're particularly interested in the numbers, feel free to take a picture of this slide. The ones I'll draw attention to are the position errors, our 2D position error after an amount of time; 60 seconds is normally what we look at. We're going from 88 centimeters unaided with no wheel-speed sensor, down to 22 centimeters LIO-aided, again with no wheel-speed sensor. This is just showing how much we're able to improve the positional output of our systems in areas that have good physical features around us but very poor GNSS.

And probably the most important thing here is how much the percentage of error relative to 3D distance traveled reduces: we go from 0.23% of distance traveled down to 0.03%, so it really is quite a big jump. We're taking cheaper, more cost-effective technology and achieving the performance of much higher-grade INS systems that you might see on the market for hundreds of thousands of pounds.

So, all that remains is to say thank you very much for listening. Sorry about the clicker earlier on. Our booth is just around the corner, so if you want to speak to me or one of the team, feel free to come and do that. Thank you very much.
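Percentage error of distance traveled is simply drift divided by path length. As a quick sketch of the arithmetic behind the quoted figures (the path length below is back-calculated from the unaided numbers, not stated in the talk, so treat it as an illustration of the formula rather than a measured value):

```python
def pct_error(drift_m, distance_m):
    """Positional drift as a percentage of 3D distance traveled."""
    return 100.0 * drift_m / distance_m

# Back out the path length implied by the unaided figures
# (0.88 m drift quoted as 0.23 % of distance traveled).
# This implied distance is an illustration, not a source number.
implied_distance = 0.88 / 0.0023
print(round(implied_distance))                      # 383 (meters)
print(round(pct_error(0.88, implied_distance), 2))  # 0.23
```

Note the quoted aided figures (22 cm, 0.03%) do not reconcile against the same path length, so the two percentages are presumably computed over different runs or error statistics; the formula itself is the point here.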