Thank you, FOSSASIA, for the invitation and the slot to speak. Today I'm going to give a brief talk about a project of mine from the COVID period, when I had some time on hand to explore whether foot gestures could be used as both an input device and a feedback device for communication, instead of using the hands and looking at a screen. The problem is this: sometimes I'm not able to use my hands, either by choice or because they're occupied, or it may not be safe to use them; it may be inconvenient to speak, or the environment may be too noisy to speak in. And for people who are active, rather than sitting at a desk typing on a keyboard all day, perhaps there's an alternative way to stay connected and still be productive. So, as I mentioned, this is an alternative to the keyboard and screen.

The Haptic Communicator allows the user to type and enter messages, covering all the must-have keys. The idea is to replicate every key on the keyboard, so it supports at least 101 possible input combinations.

So who are the possible users who can benefit? I put them into three groups. The first is health-related: people with disabilities, and also people who want to stay healthy. The second is professionals, for example surgeons in an operating theater: their hands are busy operating on the patient, and their eyes have to stay on the patient, so eyes and hands are already tied up. How, then, do they monitor the patient's heart rate? Maybe the beep is lost among too many noisy instruments, so this provides another input channel. The last category is gamers and VR users; gamers could gain an extra competitive edge using this device.
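As a back-of-envelope check on the "at least 101 combinations" claim: even a modest number of distinguishable gestures per device multiplies quickly when devices are combined. The per-device counts below are hypothetical, chosen only to illustrate the arithmetic; the talk does not give the actual breakdown.

```python
# Hypothetical gesture counts per device (illustrative only):
left_shoe = 6   # assumed distinguishable left-foot gestures
right_shoe = 6  # assumed distinguishable right-foot gestures
wrist = 3       # assumed distinguishable wrist gestures

# Combining one gesture from each device multiplies the counts.
total_combinations = left_shoe * right_shoe * wrist
print(total_combinations)         # 108
print(total_combinations >= 101)  # True: enough to cover a 101-key keyboard
```

This is why a few simple gestures per limb are sufficient to map the whole keyboard: the design only needs the product of the per-device counts to reach 101.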
This is the preliminary diagram, and here is the prototype that I made. It consists of a pair of shoes and a wrist-worn device, and these are some of the diagrams from the patent filing that was submitted.

These are the results for one single shoe, not the full pair. You can see that 76 different gestures can be recognized with a single shoe, so with two shoes and a wrist device, the possible combinations run into the hundreds. This chart shows whether the predicted gesture matched the actual gesture made by the user. For example, if I tap my foot, that is recognized as one gesture; if I bring my foot to a certain angle, that is another gesture. This next slide I don't need to repeat; it's the same thing.

The use cases, as I mentioned: people with disabilities, carpal tunnel sufferers, surgeons in the operating theater, and PC and mobile gamers. In terms of market, my research showed that the biggest segment with the highest growth rate is fitness, followed by assistive tech. There is also potential among professionals using this in their line of work, and among gamers and tech adopters.

So what are the alternatives out there? From my research, the devices available are pretty basic. Perhaps the most advanced is this shoe, the Lechal shoe if I remember correctly. It provides positioning feedback to users, so that, for example, people who are visually impaired can navigate without auditory cues. If they need to turn right after a few steps, the right shoe vibrates to prompt the user to turn right; an obstacle ahead can also trigger a vibration so the user knows to stop. But this is really not an alternative input device.
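The evaluation described above, checking whether the predicted gesture matches the actual gesture performed, boils down to a simple per-trial comparison. Here is a minimal sketch with made-up gesture labels and data (the real system's labels and results are not in the talk):

```python
# Hypothetical trial data: what the user actually did vs. what the
# recognizer predicted, one label per trial.
actual    = ["tap", "tap", "angle", "heel", "angle", "tap"]
predicted = ["tap", "angle", "angle", "heel", "angle", "tap"]

# Count trials where the prediction matched the actual gesture.
matches = sum(a == p for a, p in zip(actual, predicted))
accuracy = matches / len(actual)
print(f"{matches}/{len(actual)} correct, accuracy = {accuracy:.0%}")
```

Aggregating these matches per gesture class is what produces the predicted-vs-actual chart shown on the slide (a confusion matrix).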
And then on top here, this one is pretty crude, with up to five keys, but it is definitely functional. How about hand-worn devices? This one is a little dated, but we have seen it before: a laser-projected keyboard. It still requires the hands tapping on a surface, even though there are no physical keys. And this device here is quite interesting: the Tap Strap. The Tap Strap is worn on the hand, and it works by tapping the fingers on a hard surface; depending on which finger, or which combination of fingers, is tapped, it can replicate a keyboard. But again, this also requires the use of the hands, and a surface as well.

So I did this comparison. The major advantage of not using the hands is, of course, that you free up the hands. You also free up the screen: with haptic feedback you don't need to look at the screen, and you can focus on other, more important things. As I mentioned, gamers' eyes are already busy with so many things in front of them.

What about vision-based AI gesture recognition? There are other alternatives out there, for example using a camera to recognize hand gestures; open-source libraries are already available, and they are pretty impressive. The limitations are that it requires a clear, unobstructed view, the lighting conditions have to be close to ideal, the image-processing workload is heavier, and there may be privacy concerns: if you walk around wearing a camera, people will be a bit suspicious.

And how about the proven basic method, voice communication? Just make a voice call and speak, like on a phone. Of course, this also requires a background that is not too noisy.
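The finger-chord idea behind the Tap Strap can be illustrated with a quick count: with five fingers, each tap is some non-empty subset of fingers, giving 2^5 - 1 possible chords. This sketch just enumerates them; the actual chord-to-character mapping used by the real product is its own design and is not shown here.

```python
from itertools import combinations

# Each keystroke is a "chord": a non-empty subset of fingers tapped together.
fingers = ["thumb", "index", "middle", "ring", "pinky"]
chords = [c for r in range(1, len(fingers) + 1)
          for c in combinations(fingers, r)]

print(len(chords))  # 31 distinct chords, i.e. 2**5 - 1
```

Thirty-one chords already cover the alphabet with room to spare, which is why a single hand can stand in for a keyboard; the foot-and-wrist approach applies the same combinatorial trick without tying up the hands.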
And it may not work that well if the target is not another human being: the speech would need to be translated into machine commands, for example to activate machinery. It also has to be discreet. If you are a business negotiator or a presenter, you may not be able to say anything out loud, and can only use gestures, for example to signal that it's time to cut the camera.

Again, in terms of market, this is from public data: the wearables market is growing very fast for the next 10 years or so, at a 12.4% CAGR. You can see that even mechanical keyboards are pretty popular with gamers, and the related segments, including IoT, are all growing. That is the market segment being pursued.

So what could drive adoption? I was thinking: maybe gamify it in a way, with people sharing gestures and training their own gestures, because a standard gesture set may not work for every individual; depending on the culture, people may not gesture in exactly the same way. In the two years since the last update, I think this has become even easier to do because of improved AI processing capability, and we now have additions like Wi-Fi 6 and newer Bluetooth, which improve reliability, performance, and power consumption. That's all for my presentation. Get in touch, and let's see what we can discuss.

Okay, thank you for the question. I had the link in the detailed original document; it's definitely from one of the market research firms. And I think I did put in a picture... oh, okay, it's no longer there. Probably less movement of the feet: maybe standing, a bit of stretching, so at least there's some variety of usage.
So instead of just sitting down for an entire four-hour stretch, the user could do a bit of stretching, and there's always demand for gadgets like that. Thank you.