Hello. Yeah, thanks for the invitation. My name is Willi Dohring, I'm the CEO of NeuroFox, and my background is computational neuroscience. Today I want to show you the BrainDuino project. But before we take off, I'd like to give you a little introduction to the topic. What actually is EEG? EEG stands for electroencephalography. It's a way of measuring electrical potentials on the surface of the head, and it was discovered in the 1920s by a German psychiatrist named Hans Berger. So here we are, 100 years later, and we are able to control robots with the power of our thoughts, or explore virtual reality and navigate with our minds. And for over five years now, there have also been consumer devices. Isn't this amazing? Who in the audience has an EEG brain-computer interface? Anybody? Yeah, so maybe it's still quite early for a regular consumer device.

So how does it actually work? We measure from different locations and we see oscillations; those are the brain waves. You can analyze them through their frequencies and by looking at the amplitude. And depending on where we put the electrodes, we can look at different processes: in the frontal lobe we can look for decision-making processes, in the parietal lobe on top you have motor planning, and from the back of the head you can infer visual processing.

So what kind of tools do we have at hand? For instance, there's motor imagery. This is a way to detect whether you're thinking about moving your left hand or your right hand, and to trigger certain events with that. Another technique is steady-state visually evoked potentials, a way to detect which light source you're looking at: each source pulses at a known frequency, and that frequency is reflected in the EEG. It's also possible to use EEG as a remote control. In this particular example, the P300, we can use it to scroll: we cycle through different symbols, and if the symbol you're thinking of is currently highlighted, it's possible to detect that and trigger something.

Next, there's neurofeedback. Neurofeedback is basically a way to train brainwave activity by means of operant conditioning, and it can be seen as a kind of neuro-enhancement. It can be used, for instance, to learn mindfulness and meditation; that's one way to do it. How it actually works: we read the brainwaves at the surface, then we do filtering. Let's say you want to calm down; very simplified, you want more slow brainwave activity. Every time you produce more of that, you get a sound, and that helps you navigate toward that activity (a minimal sketch of this loop follows below). It's also fun: we've made installations at different places, and you can even build multiplayer games, so-called collective neurofeedback, where you trigger sounds and visuals based on the group's activity.

And of course you can also use it for more serious things, for instance looking at cognitive performance or workload. Like we quickly talked about in the introduction, this is a way of analyzing whether something you see is confusing or overwhelming you. It can even be used as a fingerprint. But of course there are also limits and challenges here.
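To make that feedback loop concrete, here is a minimal sketch in Python, assuming a generic acquisition layer: bandpass the raw signal for slow (here: 8 to 12 Hz alpha) activity and play a cue whenever its power rises above a baseline. The sampling rate, the chosen band, and the audio placeholder are illustrative assumptions, not details from the talk.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate in Hz, not a BrainDuino spec

def band_power(window, low_hz, high_hz):
    """Mean power of one EEG window inside a frequency band."""
    b, a = butter(4, [low_hz / (FS / 2), high_hz / (FS / 2)], btype="band")
    return float(np.mean(filtfilt(b, a, window) ** 2))

def feedback_step(window, baseline):
    """One loop iteration: more slow-wave power than baseline -> reward cue."""
    alpha = band_power(window, 8.0, 12.0)
    if alpha > baseline:
        print("ding")  # placeholder for a real reward sound
    return alpha

# Illustration with one second of synthetic data instead of a live headset:
feedback_step(np.random.randn(FS), baseline=0.1)
```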
One of those challenges: the data can be very noisy. I can show you in a minute. The noise depends, of course, on how you put on the headset, and there are external factors, let's say a loud conversation, something that disturbs you, as well as internal factors: if you're very sleepy, that also changes the response. But this is also the fun part for developers who are into machine learning, because there are always new things to discover, and this is still a work in progress, of course.

So what is the basic setup? You have the electrodes, then an amplifier, because it's a very weak signal, and then you filter and digitize the signal. That's the hardware side (a small sketch of the digitization arithmetic follows below). On the software side, you have things like machine learning and further processing.

I will start talking about the BrainDuino project in a moment, but I felt inclined to also talk about this: where is the technology heading? Last year, a university group was actually able to show that you can control each finger of a prosthetic limb individually. For this you need surgery, and I understand that not everybody wants that. But instead you could also inject electrodes, so-called neural dust, and use them to monitor certain activity but also, basically, to stimulate. Then there's the neural lace; maybe you've heard about this. Elon Musk is currently investing a lot into this, as is Bryan Johnson. The idea is basically to add an artificial cognitive layer on top of the current one, to be able to compete with artificial intelligence in the future. This sounds a little crazy, but I actually think it's not so far-fetched, especially if you look at what DARPA, for instance, is currently doing to restore certain memories at will and to strengthen the connectivity between two areas of the brain. And then there's also optogenetics, which is a way to control brainwave activity just with light. With these technologies at hand, you can see that the line between humans and machines is beginning to blur.

Also, with the progress of artificial intelligence and machine learning, new technologies and new wearables are coming up. Currently it looks like GPUs are the thing to look at, but other companies are building ASICs or neuromorphic chips, which can do massively parallel processing while being very power-efficient at the same time. If you consider this, I think you will really see a revolution in wearables, because we're moving more and more of the software processing into the hardware. This opens up a lot of new applications, especially in the interaction with machines. Human-machine interaction will basically be completely new: we will have assistants that are able to understand us amazingly well, because we're not only taking this one input, language itself, but also physiological data: not only brainwaves, but also eye tracking, for instance, or movement patterns. This is, I think, one of the exciting things to watch emerging. Cognitive processing is also moving away from cloud services; it's going to be decentralized, and you will have devices that can perform basically magic.
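Coming back to the basic hardware chain above (electrodes, amplifier, filter, digitizer), here is a small sketch of the digitization arithmetic: mapping raw ADC counts back to microvolts at the electrode. The gain, reference voltage, and bit depth are illustrative assumptions, not BrainDuino specifications.

```python
# Undo the acquisition chain in software: ADC counts -> electrode voltage in uV.
# All three constants are assumed values for illustration only.
ADC_BITS = 24   # assumed ADC resolution
V_REF = 4.5     # assumed ADC reference voltage in volts
GAIN = 24       # assumed amplifier gain

def counts_to_microvolts(raw_count):
    """Map one signed ADC reading to the voltage at the electrode, in uV."""
    volts_per_count = V_REF / (2 ** (ADC_BITS - 1))
    return raw_count * volts_per_count / GAIN * 1e6

# A reading of 1200 counts comes out at roughly 27 uV, a typical EEG scale.
print(counts_to_microvolts(1200))
```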
So, all right, enough of that. Let me talk about the BrainDuino project itself. It started two years ago, although for our engineer it started almost 30 years ago, when he was building the IBVA, one of the first mobile brain-computer interfaces. The BrainDuino is the newest version of that. We already have the BrainDuino as an Arduino shield, and a lot of users are already happy with it. Now we're working on using the Arduino Pro Mini, splitting the parts, and making everything work as a headset. The result is an EEG device that is very affordable and has very high data quality. And this is an open-source project.

So what can we do with it? We have the BrainDuino, then we have a serial stream, and you can use different software to process the data. There's open-source software like OpenViBE, and we're currently working on Neuroflex; there's a little cleanup happening with the code right now, but it's going to be on GitHub soon. What you basically want to do is signal processing, of course, and machine learning, and then you have outputs, say something to visualize via Open Sound Control or OpenGL. That's the basic idea (a minimal sketch of this chain follows at the end of this section).

How to participate in this project: we have a Slack and a GitHub, of course, and if you make it to Berlin, you're also welcome to join the meetup group there. We're looking for people to help us both on the software side and with the hardware development, but also with design and with NeuroGames, things that make this whole project fun. I'd be happy to talk to you about this.

And what's the status quo? We're currently manufacturing the first 100 devices, and there's a pilot test: people can actually apply for one device. At the same time, we're developing the software, making it really nice and usable, and, like I said, pushing it to GitHub. So we're looking for developers, people to participate in this project, and of course some funding.

So let's maybe have a quick look; let's see if I can make this work. Okay. All right. I'm just going to show you a little bit. I heard it's maybe better if we do the full demo tomorrow, but I nevertheless want to show you something. Let's give it a try. This is the software right now. Here we have the data coming in; excuse the resolution, it's usually full HD. So we have something coming in, and those are actual brain waves, and we have different ways of visualizing them. And then we put this into a little game, a flight simulator. I'm not sure if I can do this right now, maybe I'm a little too stressed, but I can give it a try. The idea is that I can fly higher if I focus really well. Let me give it a try. One second. Okay, let's see... Yeah, so that's what I mean: data can be noisy. And now it has actually stopped; that's very typical, maybe. All right, maybe we can go over to the Q&A. Like I said, I'd also really enjoy showing you the demo outside, and you can try to run the flight simulator yourselves.
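As a concrete version of the chain described above (BrainDuino, serial stream, processing, then Open Sound Control for visuals), here is a minimal Python sketch using pyserial and python-osc. The port name, baud rate, and the one-sample-per-line CSV framing are assumptions about the stream, not documented BrainDuino specifics.

```python
# Read the serial stream and forward each multi-channel sample over OSC,
# e.g. to a Pure Data patch or an OpenGL visualizer listening on port 9000.
import serial                                     # pip install pyserial
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

ser = serial.Serial("/dev/ttyUSB0", 115200)  # hypothetical port and baud rate
osc = SimpleUDPClient("127.0.0.1", 9000)

while True:
    line = ser.readline().decode(errors="ignore").strip()
    try:
        channels = [float(v) for v in line.split(",")]  # assumed CSV framing
    except ValueError:
        continue                          # skip malformed or partial lines
    osc.send_message("/eeg/raw", channels)
```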
You know, you actually have a lot more time to show something. Is there anything else you can show?

Yeah, yeah. Okay, this is actually maybe... here, one second. All right, yeah, so...

How do you know you're stressed?

Okay, so how do I know I'm stressed? I can show you here: this is the higher brainwave activity. All right, I'm gonna make myself stressed. Stress, stress, stress, stress, stress. All right, here we go. Well, yeah, just slightly, of course. But I can maybe show what happens when I just relax for a quick moment. Okay, relax... I'll try to relax in front of the crowd. It doesn't seem to change. No?

No, you look more stressed.

No? Oh, wait, look at this. The dark red color means very high activity, and lower activity shows as yellow and then blue. So this was the phase, actually, where I was just relaxing for a short moment. All right, any questions from the audience? Any questions from the audience? Great.

Yeah, I guess I have a question about the raw data. I noticed that it looks like it's coming in at a fairly low frequency. I was just wondering: what is the range of frequencies of interest when you're doing EEG, and what frequency do you sample at?

Sorry, the last part?

What frequency do you sample at?

Ah, okay. So what's important is to have a range starting at, let's say, 0.1 Hz and going all the way up to 40 or 50 Hz or so. Most of the common activity is actually between 0.-something and 40 Hz, and that's why we also put a band-pass filter here. You can see the filter mode is on right now, so it goes roughly only up to 40 Hz; this is really where I want to look in particular. But of course we sample more than that. By the Nyquist theorem you need at least double the samples, so at least 80 per second, but we can go up to 1,300 samples per second. Having that many data points is also great if you want to do, let's say, classification, and for discerning between noise and the actual brainwave signal.

And what techniques do you use to process the brainwaves? You mentioned a couple of different utilities, but what's kind of the most common thing to use?

Utilities, you mean like a pipeline of processing?

Yeah.

So basically, because of how the readings work, you have effects like DC drift, so the first thing you want to do is what's called DC-drift correction, or baseline correction. As you can see in the raw data plot here, the signal is kind of moving up and down, so we first have to remove an average signal value; that's the baseline correction. Then you also do filtering, something like a band-pass filter. What we currently use is a Butterworth, and you can say, okay, like in this example, I want to start around 0.5 Hz and then go all the way up to 40. And then the next step depends on the task. For the P300, for instance, on the application side you're showing some image; one example is faces of people you know versus people you don't know. So you know the exact moment when you're showing the image, and at the same time you have a little data window tracking the brain waves, and that gives you a way to put those samples into further processing. For instance, you can put them into support vector machines or other techniques, but you can also use something like TensorFlow, for instance WaveNet, because these are basically waves, so you can treat them the same way you treat audio, and that makes it very interesting, I think. So that's how I would say you can work with this using current technologies.

Thank you.
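As a rough sketch of the pipeline just described, under assumed parameters: baseline correction by removing the window mean, a 0.5 to 40 Hz Butterworth bandpass at the 1,300 samples per second quoted above, epoching around known stimulus onsets, and an SVM as the classifier. The epoch length and the synthetic training data are placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS = 1300  # samples per second, as quoted above

def preprocess(raw):
    """Baseline-correct (DC-drift removal) and bandpass one raw channel."""
    raw = np.asarray(raw, dtype=float)
    raw = raw - raw.mean()  # baseline correction: remove the average value
    b, a = butter(4, [0.5 / (FS / 2), 40.0 / (FS / 2)], btype="band")
    return filtfilt(b, a, raw)  # zero-phase Butterworth bandpass

def epochs(signal, onsets, length=int(0.8 * FS)):
    """Cut one window per stimulus onset: the 'little data window' above."""
    return np.array([signal[t:t + length] for t in onsets])

# Placeholder data: windows after images of known vs. unknown faces.
onsets = range(0, 50 * FS, FS)
known = epochs(preprocess(np.random.randn(60 * FS)), onsets)
unknown = epochs(preprocess(np.random.randn(60 * FS)), onsets)
X = np.vstack([known, unknown])
y = np.array([1] * len(known) + [0] * len(unknown))
clf = SVC(kernel="linear").fit(X, y)  # the support-vector-machine step
```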
I have a question here. Hello? (Please don't take photos, thank you.) Yeah, I have a question, because thoughts are running simultaneously in the brain. On the subject of stress, one has positive stress and negative stress, and on top of that a lot of people, myself for instance, can think of, like, 20 tasks at the same time and even go down into the details. So some tasks carry a lot of positive stress and some very negative stress. My question is: how many channels can you have, and how do you do the arithmetic between the different kinds of stress? And I'd also like to know a little bit more about your future plans. Thank you.

Okay, so different kinds of stress. That's actually a good question, and one has to look a little bit deeper into the data, of course, because there can be different unfoldings, and if you just want to classify it, it may look the same to some extent. But you not only have the frequency and the amplitude, you also have a phase, you know, the positive phase of one wave. So what you can do, for instance, is look at the synchrony: how much are the left and the right hemisphere synchronized? That gives you a different interpretation possibility. Let's say positive stress would be when you have very high synchrony there, eustress so to speak, whereas if you're stressed and, let's say, you don't know what to do, then it's different: you have more chaotic, non-synchronized activity. So depending on the software you use, those kinds of overlaps, let's say statistical errors, can happen if you don't consider something like this. If you just look at the amplitudes: okay, the beta waves are rising, so this is probably stress. But you're right, that alone can't tell you whether it's actually something positive. As for channels: with the BrainDuino we can go up to 16 channels with unipolar measurement and 8 channels with bipolar. Currently I'm using a setup with 4 electrodes, and even with this setup right now I could look at the synchrony. And sorry, I didn't understand the last part; it was something about the outlook?
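The talk only says "synchrony", so as one common way to make that concrete (my choice, not necessarily what NeuroFox uses), here is a phase-locking value (PLV) sketch: extract each hemisphere channel's instantaneous phase with a Hilbert transform and measure how stable the phase difference stays.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(left, right):
    """PLV in [0, 1]: 1 means the two channels stay perfectly phase-locked."""
    phase_diff = np.angle(hilbert(left)) - np.angle(hilbert(right))
    return float(np.abs(np.mean(np.exp(1j * phase_diff))))

# Illustration: a shared 10 Hz rhythm plus independent noise gives a high PLV.
t = np.linspace(0.0, 2.0, 512)
common = np.sin(2 * np.pi * 10 * t)
left = common + 0.3 * np.random.randn(t.size)
right = common + 0.3 * np.random.randn(t.size)
print(phase_locking_value(left, right))
```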
All right, we only have time for one more, last question.

I had a question: I wanted to ask how many sensors your setup is using. Many other devices that measure EEG, something like the NeuroSky, have just one electrode placed on the forehead. So is the BrainDuino using more sensors, and how accurate is it?

Yeah, so there are devices that only use one sensor, and you can do certain, very simplified tests with that, like looking at the arousal state: how awake am I, am I sleepy, have I been drinking coffee, and so on. With the BrainDuino we have 4 electrodes at hand, and we want to differentiate in the prefrontal cortex between the left and the right hemisphere.

But the EEG machines used in the medical field, like in hospitals, have an entire cap kind of thing with multiple electrodes, roughly one electrode per lobe of the brain. How accurate is your measurement with only 4?

If you want to do basic research, of course, if you want to find out about processes that are maybe only theory at the moment, then you would look for a cap like that. But there's actually research emerging right now showing that you only need a few electrodes to make a very precise interpretation. And in terms of signal quality, what we can offer with the BrainDuino is comparable to medical devices. One of the applications currently being looked into, for instance, is a way of also bringing in sensory stimulation, and for that you don't need the whole cap; it can be done with a few electrodes. But I can also tell you more about this later. Thank you.

All right, with that, thank you very much, Willi. A round of applause for him, on the EEG. Very interesting.