All right, so I'm going to do this a little bit differently. I'm going to need a volunteer from the audience to help me move an object with the... hi. So, Juliana, hi. Yep, we're going to use this computer. Please have a seat. OK, so what I'm doing now is actually putting on a neural headset. This is the product that we developed; it's called the Emotiv EPOC. Each of the lights that you see on screen is one of the sensors reading Juliana's brain waves. What I'm going to show today is how you'll be able to move an object on screen just by thinking about it.

I hope that you can see Juliana's face. Can you give us a smile? OK. Now give me a normal blink, and clench your teeth. Oh, you're doing that, right? Yeah. All right. So this is a little bit of the Expressiv suite: while we're reading from the brain, we're also reading all the facial expressions, so you can have an avatar mimicking your expressions in real time.

All right, so I'm going to go ahead, and first of all I'm going to set the baseline for your brain. So just be normal; it's just a little bit of calibration. It's going to go for eight seconds. You don't have to think about anything; just be yourself. Now you can choose an action. Do you want to push the cube or pull it? Let's do pull. All right. So now I'm going to ask you to think about pulling this cube toward you, just by thinking. You can use your hands or not; it's up to you. But think about it for eight seconds continuously, right? So ready? The first time it's not going to move, right? It's trying to read your brain. OK, so ready? Go. So now can you try to pull that cube toward you just by thinking the same thing? Oh, OK. This is a little bit harder. This first part is basically based on your motor cortex.
So it reads your brain waves, does real-time classification of your brain's patterns, and then deciphers whether you want to move an object toward you, push it away from you, or things like that. This next one is more about visualization; it taps into your brain's visualization function. So, same thing: I'm going to ask you to think about this one fading, right? For eight seconds. And let's see how that compares to your actions. Ready? I'll try. It's a little bit harder. Wow. Oh, OK. Cool. Thank you.

Yeah, so basically I've just given you a glimpse of the next generation of human-computer interfaces. As you all know, from the beginning of mankind, the way we communicate with machines has always been in a conscious form, right? You normally consciously tell a machine to perform a task for you, whether it's as simple as turning the lights on and off or as complex as programming a robot. For us, we look at communication between human and human, and we see it's a lot more complex: we take into account not only the conscious intent, but also the non-conscious expressions and emotions. So at Emotiv, we're trying to create a technology that allows a computer to take in not only your conscious thoughts and conscious commands, but also your emotions. Computers will now be able to understand how you feel about the material being presented to you, right? So hopefully the Force is going to be with us. Thank you.