So, yes, I will talk about brain-computer interfaces, but before that I want to briefly recap the nervous system. Rocky already talked a bit about it and mentioned that it's important for movement. So we have the brain, we have the spinal cord, and we have the peripheral nerves that connect our brain to our muscles, and that's how we move. And movement is actually very important for us. There are even scientists who say that brains exist because of movement. And when I say movement, I don't mean only moving from point A to point B. I also think of speaking, because we move our muscles to talk; of typing, because we move our fingers to write; and of the physical gestures we use for nonverbal communication. So it's very important for us. And sometimes, for various reasons, there can be problems in this system. That's where brain-computer interfaces come into the game: they aim to help people who have problems with their nervous system. We live in an era dominated by the power of computers, and we want to use that power. So we have the brain, we want to connect the brain and the computer, and that's how we get a brain-computer interface. The reason we do this is that with brain-computer interfaces we want to replace, restore, or improve some of the output functions of the brain, or of the central nervous system, which consists of the brain together with the spinal cord. There are patients who have a problem in the upper part of the spinal cord or in the lower part of the brain, and because of that they cannot move their muscles. Actually, they can move only their eye muscles. Imagine that you cannot communicate with anyone; the only thing you can do is use your eyes to answer simple yes-or-no questions. With brain-computer interfaces, we want to help these people.
We want to read the signals from the brain, give them to a computer, and have the computer say the message that the person wanted to say. So how is this possible? We can have different keyboard designs, like the ones we see in these pictures, and patients with a brain-computer interface imagine moving a cursor on the computer. With this movement of the cursor they type, and that's how we understand what they want to say. They can type about 10 characters per minute. For them, this is a very big thing; for us it might be very slow. Actually, it is very slow: an average person types 30 or so words per minute, and compared to that, 10 characters is just one or two words. As I said, it's still a lot for them, but we want to do better, and that's what I'm working on for my master's thesis. I want to understand the message that the person wants to convey. When we are talking, we don't really think about how we are going to type it; we just think of the message we want to give to people. So we want to decode what the person wants to say and then use the computer to say that message directly. That's how we replace the function of the organs we use for speech. We can also restore some functions. There are patients who have a problem with their spinal cord, while everything with the brain is okay and everything with the muscles is okay; the connection is the problem, so we just want to reconnect them. What we can do is record the signals from the brain and then use electrical stimulation to stimulate the muscles so they can move. Here we see a picture of a gentleman whose muscles are stimulated, and that's how he can make a movement with his hand. We can also improve some functions.
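The imagined-cursor typing described above can be illustrated with a toy sketch: a hypothetical decoder (replaced here by a fixed list of commands) moves a cursor over an on-screen keyboard and selects letters. The keyboard layout and command names are made up for illustration, not taken from a real system.

```python
# Toy sketch of cursor-based BCI typing. In a real system, the commands
# would come from decoding imagined movements in the brain signal; here a
# precomputed command list stands in for that hypothetical decoder.

KEYBOARD = [
    "ABCDEF",
    "GHIJKL",
    "MNOPQR",
    "STUVWX",
    "YZ_.,?",
]

def type_with_cursor(commands):
    """Move a cursor over the on-screen keyboard and select letters."""
    row, col = 0, 0
    typed = []
    for cmd in commands:
        if cmd == "up":
            row = max(row - 1, 0)
        elif cmd == "down":
            row = min(row + 1, len(KEYBOARD) - 1)
        elif cmd == "left":
            col = max(col - 1, 0)
        elif cmd == "right":
            col = min(col + 1, len(KEYBOARD[0]) - 1)
        elif cmd == "select":
            typed.append(KEYBOARD[row][col])
    return "".join(typed)

# Spell "HI": from the top-left corner, H is one row down and one column
# right, and I is one more column to the right.
cmds = ["down", "right", "select", "right", "select"]
print(type_with_cursor(cmds))  # HI
```

Since every character costs several decoded commands, this also hints at why typing rates stay around 10 characters per minute.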
And that is for patients who had a stroke and because of that cannot move as well as before. But luckily, they can rehabilitate, and we can use brain-computer interfaces to help them rehabilitate faster. Just like in the previous case, we record from the brain and stimulate the muscles, but with the difference that now we also record how much the muscle is moving. Let's say I want to move my hand from here to here, but because of the stroke I can only move it to here. Then, with the electrical stimulation, I will finish the movement. That's what happens here: in this picture we see a person whose legs are electrically stimulated. So far I have talked only about the medical applications of brain-computer interfaces, but actually everyone can use them. You, me, everyone could take advantage of them. Nissan is working on a new technology called Brain-to-Vehicle, where they use brain-computer interfaces to make the experience of driving a car even better. When we think that we want to turn the wheel, there is some time until the signal from the brain reaches the muscles and we make the movement. They want to compensate for this delay and start moving the car even before I start moving my arms and hands. We can also supplement some functions with brain-computer interfaces. As you already saw, this drummer uses a third, robotic arm to give a better performance. So far I have given a lot of examples of what we can do with brain-computer interfaces, and I'm sure you already wonder: how can we actually measure the signals from the brain? Well, there are two approaches. We can either have electrodes outside the head, or the person can have surgery and have electrodes implanted inside the brain, which would look something like this. But not everyone wants to have surgery.
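The rehabilitation loop described above (the patient initiates the movement, a sensor measures how far the limb actually got, and stimulation finishes the remaining range) can be sketched as a simple proportional controller. Everything here, from the gain to the angles, is a made-up illustration, not a real device protocol.

```python
# Sketch of closed-loop functional electrical stimulation (FES) for
# rehabilitation: the patient moves as far as they can on their own, and
# stimulation pulses proportional to the remaining deficit complete the
# movement. All numbers are illustrative.

def complete_movement(target_deg, voluntary_deg, gain=0.3, tol_deg=1.0):
    """Apply proportional stimulation pulses until the target angle is reached."""
    angle = voluntary_deg                    # what the patient managed alone
    pulses = 0
    while target_deg - angle > tol_deg:
        stim = gain * (target_deg - angle)   # proportional to the deficit
        angle += stim                        # assume the limb follows the pulse
        pulses += 1
    return angle, pulses

# Patient reaches 40 degrees of a 90-degree target on their own; the
# stimulation closes the remaining 50 degrees step by step.
final, pulses = complete_movement(target_deg=90.0, voluntary_deg=40.0)
print(round(final, 1), pulses)  # 89.0 11
```

The point of the closed loop is exactly the difference the talk mentions: unlike open-loop stimulation, the controller reacts to how much the muscle actually moved.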
And that's why non-invasive brain-computer interfaces are preferred; around 80% of brain-computer interfaces are based on non-invasive techniques for measuring the signal. And this is actually so easy that you're going to see it yourselves: I invited my friend Tanya to come to the stage. Whoa! Okay, sorry, before that I wanted to tell you something else. I just wanted to make a comparison between non-invasive and invasive measuring of the signals. So let's take a look. This is the original picture. Now imagine that this picture represents the signals from the brain; it doesn't, but visually it's much easier to show it this way. This is what we would see with non-invasive techniques: there is a lot of noise and the resolution is very low. We see some green, we see some red; I would say it's difficult to tell what we had before, but we can still work on the picture a bit and get some useful information. And if we use invasive techniques, we get this. There is still noise and the resolution is still limited, but the signal is much better. Just for comparison, you can see that this approach is much better than that one. But of course, you cannot get a signal that good without invasive techniques. Now, Tanya will look at her screen, at this thing that you are seeing now. What we see here are patterns blinking at different frequencies. The signal that enters through her eyes goes to the back of her head, and there we can see at which frequency she is looking. Depending on where she looks, she will do something interesting. So we are waiting for the connection to the computer; yeah, there are a lot of people here. And here it is. So, depending on where she is looking: right now she is looking to the right.
And that's why the car started to move to the right. No, she's still looking to the right; maybe you can look somewhere else. Yeah, now she's looking to the front. The camera is not connected to the screen, so only the people in the front row could see. Now let's try the other one. Okay, you know, with live demos you can never know what's going to happen. Okay, great. All the time to the front, to the right; okay, that's three turns. And if it goes to the left, I think that would be enough for this demo. Okay, there you go. I wanted to show you where we record the signals. As I said, the signal goes to the back of her head, and that's why she has the electrodes here. Thank you very much for that. Whoa. Okay, so now we need to move forward. Here I will show you an application where we use invasive techniques, and you're going to see that we can actually do much more. This patient had locked-in syndrome, which is what I was talking about at the beginning: she cannot move her muscles. She could not do anything for 15 years, until this moment that we saw in the video where, using the power of her mind, she controls the movement of the robotic arm. She was imagining performing the movement, and that's how the robotic arm moves, and she could take a drink on her own. I'm sure you have all realized that this field is very promising, and there are a lot of companies investing in it, some big names like Elon Musk, Facebook, and other firms. And because there are a lot of companies and also a lot of research institutes working on the development of this technology, all around the world, it is sometimes difficult to communicate. That's why this project took place. BNCI Horizon 2020 is the name of the project, and the idea was to create a roadmap towards which all of the research groups would work, giving a direction to the research field.
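The demo with Tanya relies on steady-state visually evoked potentials (SSVEP): each on-screen pattern flickers at its own frequency, and the EEG over the visual cortex at the back of the head follows the frequency of the pattern being watched. Below is a minimal sketch of the detection step, run on a synthetic signal; the sampling rate, candidate frequencies, and command mapping are assumptions for illustration, not the actual demo setup.

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power at `freq` (Hz) via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * freq / fs)                       # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def classify_ssvep(samples, fs, commands):
    """Pick the flicker frequency with the most power in the EEG."""
    return max(commands, key=lambda f: goertzel_power(samples, fs, f))

# Synthetic "occipital EEG": the subject is looking at the 12 Hz pattern,
# with a weaker 7 Hz background rhythm mixed in.
FS = 250                                            # sampling rate (assumed)
COMMANDS = {10.0: "left", 12.0: "right", 15.0: "forward"}  # made-up mapping
t = [i / FS for i in range(2 * FS)]                 # 2 seconds of data
eeg = [math.sin(2 * math.pi * 12.0 * ti)
       + 0.3 * math.sin(2 * math.pi * 7.0 * ti) for ti in t]

best = classify_ssvep(eeg, FS, COMMANDS)
print(COMMANDS[best])  # right
```

Real systems face noisier data and use more robust detectors, but the principle is the same: compare the EEG power at each stimulus frequency and pick the strongest.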
One thing that also came out of this is a student competition called Cybathlon. It is very similar to the Paralympic Games, with the difference that here the participants use brain-computer interfaces. As you can see in the picture here, this person has to hang up clothes for drying, and even though it might seem very simple to us, it is actually a very complicated task; we just don't think about what we're doing, and that's why it seems easy. We see this gentleman here riding a bike: his leg muscles are there, but they are electrically stimulated to move. And we also see this lady here playing computer games using EEG, similar to, but a bit more professional than, the setup Tanya was wearing. So I told you today that with brain-computer interfaces we can replace, restore, improve, supplement, or enhance the functions of the central nervous system, of the brain. EEG and other non-invasive techniques are very easy to use, and that's why they dominate. Now I want to thank the Institute for Cognitive Systems for providing us with the equipment for the demo, and my four friends, Tanya, Esther, Francisco, and Jonas, who actually worked on the demo; they are the ones who designed it. I want to finish by saying that this is a very fast-growing field, and I hope that in the future, just like Na'Kalia's talk, we'll have one more talk about advances in this field. Thank you.

Yes, please wait for the microphone. Here it comes, very good. You showed examples where we get information from the brain and send it to the computer. Would it also be possible to do the opposite, for example, to take information and put it into the brain? Let me just repeat the question: the question is whether we can go the other way around and put information from the computer into the brain. A very good question.
I'm sure you've already heard of cochlear implants. That's exactly what they do: the person cannot hear, and we use electrical stimulation to deliver the sound information to the person. And retinal implants are a next step. It's also a very big research field on which a lot of people are working right now, and they predict that in a few years we will have them, so blind people will be able to see. Thank you.

I can also add here and refer you to our YouTube channel, because we had a talk about cochlear implants. And a year ago, at our birthday event, we had a talk about senses and sensors; Lili presented there and talked about the different ways blind people currently get support from technology that helps them see. Okay, more questions? Yes, please, there in the middle. Please give the microphone to the lady. Thank you.

Hi, thanks a lot for the presentation, it's very interesting. My question follows on from the previous one: the information in the brain goes through the equipment to the computer, and you showed the example of the person who was locked in. Is it possible for him to turn the signals off? Otherwise, couldn't the computer read whatever he is thinking right now? The question is: if we have a brain-computer interface that's bringing information from the brain to the computer, can the computer also read some extra information from the brain of this person? That's also a good question, and also a very big research question: how to decide whether I'm thinking something that I want to say or something that I don't want to say. People are working on that. We still don't have the answer, but of course data privacy is very important, so people care about it, don't worry. Any further questions? Yes. How strong is the signal you are receiving in the device?
Can it be disturbed by a mobile phone, for example, when you put it to your head, sending some fake signals? So: how strong is the signal that the brain delivers to the computer, and what can disturb it, for example a mobile phone or other technical devices? To be honest, I don't know about the phone; I can only guess that it does affect it, though I don't know how much. It depends on the technology we use for recording, non-invasive or invasive, as I said. With non-invasive recording, like in the demo, even if she just blinks, we see that in the recording, and this blink signal is actually much bigger than the signal we get from the cells of the brain. And that is the signal from the muscles, right, when the person is blinking? Exactly, yeah. So the muscles of the scalp can influence the recording of the brain activity? Exactly. We try to remove these movement artifacts, but in some cases we can't.

One more question? Yes. These implants that are used in the invasive approach, do they have a lifespan, like the electrodes, and how often do they need to be changed? The question is whether the implants in the brain used for the invasive technology have an expiration date. Well, like everything else in the world, they have an expiration date too, and it actually depends on what people want to do. Sometimes, while they are doing surgery, they put electrodes in the brain, do some measurements, and then take them out. That happens, for example, with people who have epileptic seizures: they need surgery for something else in the brain anyway, so the skull is opened anyhow and they want to take advantage of that for research, and afterwards they take the electrodes out. In other cases, they implant the electrodes and they stay in the brain for the rest of the person's life. Thank you very much. As earlier, I encourage you to come back to Victoria after the talks; we will have some more time after all the talks.
And now let's thank her for a great, interesting talk.
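One point from the Q&A deserves a small illustration: blink and muscle artifacts are much larger than cortical EEG, so a common first step is to simply reject any epoch whose amplitude exceeds a threshold. The threshold and amplitudes below are made-up values in microvolts, shown only to make the idea concrete.

```python
# Sketch of amplitude-based artifact rejection, as mentioned in the Q&A:
# a blink swamps the brain signal, so epochs whose peak absolute amplitude
# exceeds a threshold are discarded before further analysis.

def reject_artifacts(epochs, threshold_uv=100.0):
    """Keep only epochs whose peak absolute amplitude is below threshold."""
    return [e for e in epochs if max(abs(x) for x in e) < threshold_uv]

clean = [5.0, -8.0, 12.0, -3.0]     # typical EEG range: tens of microvolts
blink = [5.0, 250.0, 180.0, -3.0]   # blink artifact dwarfs the EEG
kept = reject_artifacts([clean, blink])
print(len(kept))  # 1
```

Real pipelines go further (regression against eye channels, independent component analysis), but thresholding is the usual first line of defense.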