Okay, so maybe you have always wondered how you could do Jedi mind tricks with a computer, and that's exactly why we are here now. Gnudy is going to tell you the fundamentals of EEG-based brain-computer interfaces. He has always been fascinated with the human brain and is a researcher in that field. I give the stage to you, Gnudy.

Hello. The reason why I'm giving this talk is that there has recently been a lot of development in electroencephalography. EEG was developed about 100 years ago and has been used in research and in medicine ever since, but we now have consumer-grade EEG headsets, as well as some open-hardware projects aiming to develop EEG headsets. Here I have a picture of the Emotiv EPOC, which I think was the first consumer-grade EEG headset, and I think the aim of the OpenBCI project is to provide cheap research-grade hardware. I'm not going to explain too much about the devices; I want to talk about how we can use EEG readings to build a brain-computer interface.

A brain-computer interface typically consists of a user performing a task. The task can be thinking, for example, in order to provide some input: if the interface is used to drive an electric wheelchair, it could be the thought to go forward. Then the EEG signal has to be acquired. I'm not going to talk about that; I'm focusing more on the pre-processing of the data and the feature extraction. Classification can generally be done with all kinds of classifiers; support vector machines are popular. But good feature extraction is essential. We cannot really use machine-learning approaches where we learn the features, because we typically do not have very much training data: doing EEG experiments with human subjects takes a lot of time, and the data might contain private information, so the data sets are often not made public. So that's what I'm going to talk about mainly. Generally, after classification, we have an output translation.
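The pipeline just described (filter the raw signal, extract features, classify with an SVM) can be sketched in a few lines. Everything here is an illustrative assumption, not the setup from the talk: the sampling rate, the synthetic two-state data, and the band-power features are all made up for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC

FS = 128  # assumed sampling rate in Hz

def bandpass(x, lo=1.0, hi=30.0, fs=FS):
    """Keep only the 1-30 Hz range where most scalp EEG activity lives."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def band_power(x, fs=FS):
    """Feature vector: mean spectral power in the classic EEG bands."""
    f, p = welch(x, fs=fs, nperseg=fs)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]  # delta, theta, alpha, beta
    return [p[(f >= lo) & (f < hi)].mean() for lo, hi in bands]

# Synthetic stand-in for two mental states: one with strong 10 Hz (alpha)
# activity, one without.  A real experiment would use recorded epochs.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS  # two-second epochs

def epoch(alpha):
    return rng.standard_normal(t.size) + alpha * np.sin(2 * np.pi * 10 * t)

X = [band_power(bandpass(epoch(a))) for a in [0.0] * 20 + [2.0] * 20]
y = [0] * 20 + [1] * 20

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

On real data the hard part is exactly what the talk stresses: finding features that separate the states at all, since the raw signal is dominated by low-frequency background activity.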
It can be a virtual keyboard or something similar. And there can be, optionally, a feedback channel that allows the user to train the brain-computer interface.

Here I'm showing you the timeline of an EEG signal. This was a resting-state experiment, so the subject was just resting, doing nothing, with eyes open. You can't see very much in the timeline; it looks like quite random oscillations. The signals are generally quite small, in the microvolt range.

So one of the first steps you can do is a time-frequency analysis. What we have here is 14 seconds of EEG on 14 channels, since the Emotiv EPOC headset has 14 electrodes. And here we have the same 14 seconds, where I'm computing spectra for successive time slots, so you see the development of the spectrum over time. What you see here is one of the things that makes this difficult: most of the signal power is in the range below five hertz.

The different frequency ranges are typically associated with certain states of mind, for example sleep stages. Actually, it's quite easy, even in the raw timeline, to see which sleep stage someone is in. One important range is the alpha band; that is something that should also show up in the plot I showed before. We couldn't really see it in the timeline, but if I had instructed the subject to keep the eyes closed instead of open, we would have seen oscillations in that range on the electrodes above the visual cortex, because it goes into an idle state when nothing is seen, and we would have more power there.

When we do EEG experiments, we typically look at changes, because we have this huge random-looking noise signal where we basically have no idea what it means. We typically design experiments with at least two different states: we define one as a baseline, and then we look at what changes. For example, we give the subject a resting-state task, and then the task is to think, say, the command to move the wheelchair.

What we see here is again from the same EEG recording. What I did was take the two seconds before, which are not plotted here but look about the same, compute the average, and divide everything by it. We call that baseline correction. Now it's easier to see changes in the other areas. Having a baseline is something you normally do in EEG experiments.

Besides the general background noise, we have another problem: artifacts. Here is a similar timeline; the difference is that the subject was instructed to blink at intervals. You see that there are some huge peaks, especially on the lower and the upper lines. That's because the electrodes are ordered around the head according to the 10-20 system, so the top and bottom lines of the plot are basically the electrodes on the forehead. What we see there are eye artifacts. Seeing artifacts in the EEG timeline is actually quite easy; the problem is getting rid of them if we don't want them. So we typically start by instructing the subjects not to blink and not to move. The 50 Hz power grid can also be an artifact in the signal, so we typically apply notch filters at that frequency.

There are different approaches to getting rid of the artifacts. The simplest one is cutting the parts with artifacts out of the data, but this basically means we have to repeat the experiment several times. We are doing that anyway, though: one approach for brain-computer interfaces is event-related potentials. We have an event, which could be the subject being shown a picture or any other stimulus, and we repeat it. If we then average over all the repetitions of showing this image, all the random noise cancels out, and what we are left with are the EEG components that actually depend on the processing of this image. That is what we call an event-related potential. This is just an example.
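The averaging idea can be demonstrated numerically. The "true" evoked response below is a made-up Gaussian bump around 300 ms standing in for a P300; the noise level and trial count are assumptions, but the effect is the real mechanism: averaging N repetitions shrinks the random part by roughly the square root of N while the stimulus-locked part survives.

```python
import numpy as np

FS = 128                     # assumed sampling rate in Hz
t = np.arange(FS) / FS       # one second after each stimulus

# Made-up "true" evoked response: a positive peak around 300 ms.
erp_true = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

rng = np.random.default_rng(1)

def trial():
    # Each repetition = true response + much larger random background EEG.
    return erp_true + 20.0 * rng.standard_normal(t.size)

trials = np.array([trial() for _ in range(200)])
avg = trials.mean(axis=0)    # averaging cancels the random part

err_single = np.abs(trials[0] - erp_true).mean()
err_avg = np.abs(avg - erp_true).mean()
print(err_single, err_avg)   # the average is far closer to the true ERP
```

This is also why the P300 speller described next has to repeat each stimulus many times before a letter can be decided.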
Typically it doesn't look that nice. We count the positive peaks, so we have three positive peaks here, which we call P1, P2 and P3. The P3 is also called P300, because it appears about 300 milliseconds after the stimulus. That is something we use for brain-computer interfaces, because of the so-called oddball paradigm: the P300 only appears if something is relevant to your task and does not happen very often.

That is the basis of the P300 speller, which is something like a virtual keyboard. Unfortunately I don't have an animation, but the rows and columns light up one after another at some speed. If you want to type a letter, you focus on that letter, and when it lights up, that is a rare event that is relevant to you, because you want to type it. Exactly in that case you will have the P300 in your ERP. Of course, the system providing the stimuli records the timing and therefore knows which letter was lit up when the P300 appeared, so this way you can type things. But again, you need to stare at one letter for a while, because the stimulus has to be repeated several times.

Now I want to present a more specialized P300-based brain-computer interface: an authentication scheme. The idea is that we have 100 ordinary photos, which can be anything, and we select some of them as our password. This example is actually a set that we used in the experiments: five very different photos. So now I'm doing a small experiment with you: try to remember those photos, and try to spot them and count them in this video stream.

I'm not asking you to raise your arms, because I can't see you anyway. I guess some of you might have seen all five pictures, and some might have counted fewer. The first time, the task is not really easy, but generally, whenever you counted something in this experiment, you will probably have had this P300 in your brain.

Those are the results of the experiment. I hope you can read it; it might be a bit too dark. As we have 95 percent non-target images and only five percent target images, we use the F1 score to evaluate the classifier. We did cross-validation on the data; that is where we have the best score. We also trained the classifier, which was a simple linear discriminant analysis, on the experiment done by one person only, and then we also tried a general classifier trained on other people's data. For that we used the best data sets we had, the ones with the highest score in cross-validation. It still works. The interesting thing here is that the classification of the EEG data is possible without tuning the classifier to the user, making the system non-biometric.

This axis is the number of trials we averaged. We actually showed 50 of those bursts that you saw just before, which takes about 20 minutes, and no one wants to use an authentication system where one login takes 20 minutes. So we looked at how many bursts we actually need, but it seems that up to 50 the score still increases, and there is a huge difference between subjects: the highest line is the top-rated subject, and the others are much lower. So it depends a lot on the subject, maybe also on how well the EEG headset fits and how well they focused on the task, but the subject was the biggest effect we found.

For an authentication system we also want permanence: if we log in again after a few months, it should still work. So we ran three sessions with some months in between, and we actually got better scores over time. We feared the scores might degrade, but it seems there is a training effect, and the signal is permanent enough. So here is the final score.
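The evaluation just described (heavy class imbalance, F1 score, cross-validated LDA) can be sketched like this. The features and the separation between the classes are synthetic assumptions, not the real recordings; the 950/50 split mirrors the 95-percent non-target ratio.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# 950 non-target and 50 target "epochs" with 10 made-up features each;
# targets get a shifted mean, standing in for the P300 deflection.
X_non = rng.standard_normal((950, 10))
X_tgt = rng.standard_normal((50, 10)) + 1.5
X = np.vstack([X_non, X_tgt])
y = np.array([0] * 950 + [1] * 50)

clf = LinearDiscriminantAnalysis()
# F1 instead of accuracy: with 95% non-targets, a classifier that always
# answers "non-target" would score 95% accuracy but F1 = 0.
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print(scores.mean())
```

With an integer `cv` and a classifier, scikit-learn stratifies the folds, so each fold keeps the same 95/5 ratio as the full set, which matters when the minority class has only 50 examples.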
It is plotted this way because, even though it is not a real biometric system, we are measuring biosignals, and that can go wrong. Therefore we have false acceptance rates and false rejection rates, and we arrive at an equal error rate of about 10 percent, which is of course too bad, especially when you consider that one authentication round takes about 20 minutes. The idea was that if we show the images very fast, we could have a very short login time, but it didn't really work well enough. And that's it.

[Host] There are three minutes left; are there questions? Ah, I found my voice again. So indeed we have a bit of time. There are two microphones and the internet. And there's a question at that microphone.

[Audience] Hello. I was wondering, have you heard of this being used to detect terrorists? There was an experiment where they showed images of things that you're not supposed to recognize as a regular citizen. They would show all these innocent images, and then there would also be a blasting cap, or the magazine of an AK-47, or other things you would only recognize if you had been to a terrorist training camp, and they could do the exact same P300 thing. I thought that was interesting. And of course, if you had merely read about training camps and terrorists, you would fail the test, which would be interesting.

[Speaker] I haven't heard about the application to terrorism, but something very similar is the application to criminal investigations: a lie detector based on the P300, a guilty-knowledge test. It's basically the same; you show some pictures that only the guilty person recognizes. There are also some papers about how to avoid being detected if you are subjected to such a P300-based guilty-knowledge test.

[Host] Okay, another question at that microphone.

[Audience] Two small questions. First, which EEG headset was used in the image-based authentication? Was it the OpenBCI? And second, is the P300 individualized? Does it need a lot of calibration, or is it something you can detect straight away?

[Speaker] The EEG headset used for the experiments was the Emotiv EPOC. And the P300 does seem to be somewhat individual; there are approaches that do biometrics by looking at the P300. But our approach was to have a general P300 detector that would work on anyone. That was the difference between the IC, the individual classifier, and the GC, the general classifier. You can see the score difference here, the middle and the right bar, while the left one was the cross-validation.

[Host] Okay, is there a question from the internet? No question from the internet. Unfortunately the time is running out, so I have to ask you to approach Gnudy directly. You're here at the Congress, so you can be contacted in some way, I guess. Okay, then I would say: thanks, Gnudy, again for your talk.