Hi everyone, my name is Ohto Pentikäinen, I'm the CEO and co-founder at the gesture detection company DoublePoint. Hi everyone, I'm Thomas Joseph and I'm the VP of Product here at DoublePoint.

We're a company founded in 2020, based in Helsinki, and we founded this company because my co-founder Jamin had a personal project that he just needed to get done. This is a set of wristbands that he built. He's a classical pianist and a biomedical engineer, and he had this tendency to type his thoughts in midair by twitching his fingers. So we started out in the electronics workshops at Aalto University, building our first wristbands.

Of course, not a lot of people type their thoughts in midair like Jamin does, so we didn't really have a market. But then we started to look around. We saw that augmented and virtual realities were being built. We saw that IoT was becoming a thing. We saw that wearables were becoming part of a lot of people's computing. However, there's a problem shared by all of these devices: we don't have a clear way of inputting to them. When we look at the way we input to our devices today, we have the mouse, but the mouse is only good for personal computing. We also have the touchscreen, which is used to control all sorts of devices, but it's mainly built for two-dimensional screens. So how do we control augmented reality, IoT, or wearables? How do we control these new platforms? Thomas, take it away.

This question right here is one that we obsess over at DoublePoint, and when coming up with solutions to it, we based them on five key principles. The solution would have to be intuitive, just like the mouse and the touchscreen. It would have to be accurate; accuracy is table stakes for any good input solution. It would have to be discreet, just like the AR headsets now coming onto the market. It would have to be always on, always available; anything less is a non-starter, so this had to be in from the beginning. And lastly, it would have to be low cost.

Thinking about these five principles over the last three years as a company, our explorations have taken us far and wide. Here's just a short selection of the prototypes and hardware form factors we've explored. This includes Jamin's original prototype, along with gloves, smartwatches, rings, and thimbles. But we keep coming back to the smartwatch. As a tech industry, we're on track to ship 300 million smartwatches just this year, and this device category is seeing double-digit growth year on year. They're ubiquitous. In fact, I would wager that about 60% of you in the audience have some sort of smartwatch on your wrist right now, whether it's an Apple Watch, a Fitbit, or a Suunto. We wanted to focus our attention on this form factor because we believe it's capable of a lot, lot more.

Now, all of those devices, all of those manufacturers, have two sensors in common that are of particular interest to us at DoublePoint: the PPG sensor, the optical sensor that tracks your heart rate for health features, and the IMU, which provides motion and orientation data. When we look at these two sensors, we see a lot more capability. What we have here are traces of raw data coming off these sensors, and what we found is that there's a fundamental machine learning problem here: we can pick up on the minute gestures a user performs just by looking at this sensor data.
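To make that idea concrete, here is a minimal sketch of the kind of pipeline being described: slide a window over the IMU stream and classify each window as a gesture or not. Everything in it, the window size, the jerk threshold, and the rule-based classifier standing in for a trained model, is invented for illustration; it is not DoublePoint's actual algorithm.

```python
import numpy as np

WINDOW = 40          # samples per window, e.g. ~0.4 s at a 100 Hz rate (assumed)
TAP_THRESHOLD = 2.5  # peak-jerk threshold in arbitrary units (invented)

def classify_window(accel: np.ndarray) -> str:
    """Toy stand-in for a trained gesture model.

    accel is a (WINDOW, 3) array of accelerometer samples. A finger
    tap shows up as a sharp spike in jerk (the sample-to-sample change
    in acceleration); a real system would feed features or raw windows
    into a trained classifier instead of a single threshold.
    """
    jerk = np.diff(accel, axis=0)
    return "tap" if np.abs(jerk).max() > TAP_THRESHOLD else "none"

def stream_gestures(samples):
    """Slide a 50%-overlapping window over a live sample stream and
    yield a label whenever a gesture is detected."""
    buf = []
    for sample in samples:
        buf.append(sample)
        if len(buf) == WINDOW:
            label = classify_window(np.array(buf))
            if label != "none":
                yield label
            buf = buf[WINDOW // 2:]  # keep half the window for overlap
```

A gesture like pinch and hold would presumably need state carried across windows, an onset followed by sustained contact, which is part of what makes it harder to detect than a single tap.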
So we built some custom algorithms, trained them on a lot of people, and tested them on a lot of people. What we came away with are a few key gestures, and I'm going to go through these right now. We have tap, a very simple action in which the index finger comes into contact with the thumb. These small deformations in the skin, we're able to pick up with off-the-shelf smartwatch sensors, as Otto is going to demonstrate right here. So let's see if the network is working. There you go: I can turn on my lights. You can also multi-tap, so we pick up multiple taps in a row like this. And probably most importantly, there's pinch and hold. This is actually a world-first demonstration of this gesture using commercially available, off-the-shelf sensors.

Now, on the surface those gestures seem fairly simple, and that's by design; we wanted them to be intuitive and easy. But those three gestures also enable a lot, lot more, and I'm going to talk about that in the context of a few key use cases, starting with XR, which we're really excited about.

When we think about input, if you've ever worn a VR headset, you use your hands, your voice, or controllers. We believe this is a solved input problem. Sure, hand tracking can get better, and controllers can get better, but these are the modalities for controlling VR headsets. But what about AR? We're yet to see these devices come to market; they'll be lightweight and always on. Using hands is fundamentally problematic, because we're now in public settings: you don't want to be gesticulating wildly like Tom Cruise in Minority Report. Not to mention that hand tracking is expensive: it needs cameras that are resource-intensive and power-hungry, and they don't fit that form factor. Voice raises privacy concerns. And you obviously don't want to be using a controller out in public. That's a non-starter.

So let me show you what we can do with a smartwatch that we built at DoublePoint. Here you'll see our interaction designer, Simon, navigating a UI with nothing but his watch. There are no cameras involved. Just by pointing his watch, we're able to use the IMU sensor data to figure out exactly where he's pointing, and the click gestures to select these items. He effortlessly scrolls through these interfaces just by using pinch and hold, tap, and multi-tap. Here we see Simon playing a game, just pointing at bubbles and popping them.

But you can take this to another level. If device manufacturers have eye tracking built in, we can take this interaction scheme and put it on steroids. Here you see Simon using the same smartwatch, except he's using his eyes to focus his attention, and you'll notice how much quicker he is, just looking and selecting. This is super fast and super intuitive. We're not using inside-out cameras for tracking, just the smartwatch, and just the sensors that exist on your smartwatches today.

Outside of XR, there are a few other applications we're excited about. IoT: we have countless smart devices in our homes, and what you see here is our interaction designer manipulating the devices around him with the smartwatch, intuitively turning up the brightness of the lights using the IMU's motion data, adjusting the brightness on his TV, and even turning on the music.
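Both the XR pointing demo and the IoT brightness dial come down to the same loop: integrate the IMU's angular rates into a wrist orientation, map that orientation onto a continuous value (a cursor position, a brightness level), and commit with a gesture event. Here is a minimal sketch of that loop, with every name, gain, and range invented for illustration rather than taken from DoublePoint's software:

```python
import math

def integrate_orientation(gyro_samples, dt=0.01):
    """Naively integrate gyro angular rates (rad/s) into yaw and pitch.

    A production system would fuse gyro and accelerometer readings
    (complementary or Kalman filter) to cancel drift; plain
    integration is enough to show the idea.
    """
    yaw = pitch = 0.0
    for _roll_rate, pitch_rate, yaw_rate in gyro_samples:
        yaw += yaw_rate * dt
        pitch += pitch_rate * dt
        yield yaw, pitch

def to_cursor(yaw, pitch, width=1920, height=1080, gain=900.0):
    """Map small wrist rotations onto screen coordinates for pointing."""
    x = max(0, min(width - 1, width // 2 + int(gain * yaw)))
    y = max(0, min(height - 1, height // 2 - int(gain * pitch)))
    return x, y

def to_brightness(pitch):
    """Map roughly +/-45 degrees of wrist pitch onto a 0-100 dial."""
    t = (pitch + math.pi / 4) / (math.pi / 2)
    return max(0, min(100, int(t * 100)))
```

A tap event from the gesture detector then acts as the click on whatever the cursor is over, while pinch and hold could gate the dial so a device only responds while the pinch is held.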
And lastly, smartwatch control itself. Here, using these same algorithms, we're simply manipulating the watch's own menus: tapping to click through a menu, double-tapping to select, and pinching and holding for high-consequence actions, like making a phone call or making a payment.

Now, this is just a snippet, a view into what we can do, but we can't wait to see what developers do with this technology. We are convinced this is the best input solution for spatial computing, the metaverse, ambient computing, whatever you want to call it, for these reasons: it's fast, it's cheap, and it's comfortable; you don't have to wear the smartwatch any tighter than you currently do. There's no calibration, and it generalizes from person to person. It has high sensitivity and high accuracy, and lastly, no field-of-view limitations. We demonstrated all of this with existing smartwatches, but we needed to take it further. To turn all of this up to 11, we had to build our own hardware around these sensors, and I'm excited to pass this off to Ohto to tell you all about it.

So at DoublePoint, we decided to build the DoublePoint Kit. The DoublePoint Kit is a new piece of hardware from Finland, and what it does is bring micro gestures to the world. It provides the ultimate input solution for extended reality, augmented reality, lightweight headsets, wearables, and IoT control, among many other use cases. We built the DoublePoint Kit to give everyone developing these interactions access to micro gestures in a licensable form: people can license this technology straight from us for their existing wearables, or they can license it and build their own hardware.

It's built for developers, so we're providing a set of UI tools to help people build these new device paradigms. We're also providing a rapid prototyping experience; it's easy to get started with and will reduce your time to market. We're shipping in the first half of 2024, and you'll get to experience it at CES in Las Vegas in January. Join the waitlist at doublepoint.com. Thanks for having us. Thank you.