OK, so today I'm going to talk about my main research, which is brain signal analysis. Right now I'm working at Proximity HCI (my boss is right there, Alexandre Benoit), and at Proximity I work with accelerometer and gyroscope signals, but in my PhD I work with brain signals. So I'm going to talk a little bit about signal processing and related things.

To get a sense of your background, let me show some news. Here we have the headline: Facebook is starting to develop brain-computer interfaces. They hired Regina Dugan from DARPA, the United States Defense Advanced Research Projects Agency. Raise your hand if you have heard about this. One person, two, three, four. OK, some people. Nice.

Next, we have Elon Musk, the founder of Tesla and SpaceX. He just founded a new company called Neuralink, also to develop brain-computer interfaces. Raise your hand if you have heard about this one. Oh, more people. Cool.

Finally, there are many commercial headsets coming out, priced between $100 and $500. This one here senses your emotions and then moves your cat ears according to them. This one has 14 or 16 channels; it's pretty good. The three main players are Emotiv, NeuroSky, and Muse. Raise your hand if you have heard about any of these headsets. A good amount of people. Now raise your hand if you own one of these headsets. Wow, we have one. That's awesome; you will benefit from this talk the most. Nice to know a bit about you.

I will be using two words a lot. When I say EEG, I want you to think about brain signals; maybe you're already familiar with EEG. When I say potentials, I mean voltage signals. Just to put that out there.

OK, so let's say you buy a headset and you want to collect your brain signals. How do you collect them? You could just wear it, walk around, look at the signal, and use machine learning to extract interesting stuff. That could work, but it's much better if you design a carefully thought-out experiment. For designing psychology experiments, it helps a lot to use the library PsychoPy; I really recommend it. It works with OpenGL, and in psychology experiments you really want good timing in your graphics, so having direct access to the video card is very good.

A good experiment design should have three elements that you have probably never heard of unless you have the same background as me. The first one is jitter. Think about a psychology experiment: you show an image, then a black screen, then a different image, then a black screen, and so on. Let's say we show the images for one second and the black screen between images for half a second. If you do that, your brain will very quickly predict when the next image will come up, and it will produce a potential that reflects the prediction. That potential is called an anticipatory potential. Those potentials are quite interesting in themselves, but normally we want to get rid of them. How? By jittering the black screen between trials: instead of always lasting half a second, it can last anywhere between 0.4 and 0.6 seconds. That way you get rid of anticipatory potentials.
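To make that concrete, here is a minimal sketch of a jittered inter-stimulus interval in PsychoPy. The window setup, image path, and trial count are my own illustration, not code from the talk:

```python
import random
from psychopy import visual, core

win = visual.Window(fullscr=True, color="black")
image = visual.ImageStim(win, image="stimulus.png")  # hypothetical stimulus file

for trial in range(50):
    image.draw()
    win.flip()                            # show the image
    core.wait(1.0)                        # for one second
    win.flip()                            # blank black screen
    core.wait(random.uniform(0.4, 0.6))   # jittered gap defeats anticipation

win.close()
```

Because the gap is drawn uniformly between 0.4 and 0.6 seconds, the brain cannot lock onto a fixed rhythm, so no anticipatory potential builds up.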
The second element is blink breaks, and they are way easier to understand. This is some EEG signal: voltage on this axis, four channels here. And this huge peak here is a blink. It's what, 10 times bigger than the EEG signal? You definitely don't want that all over your signal. So what do you do? You tell your subject: please blink only when the cross is green, for example. Then you know when the cross was green, so you can cut all the green-cross periods out of your signal.

The last element is markers; you can see them here as these red arrows. This one should be obvious: you somehow need to tell your EEG data file that you are showing a stimulus, that you have a blink break, and so on. If you have markers, you can do interesting post-processing afterwards. If you have no markers, post-processing becomes very hard.

OK, so let's do a psychology experiment. This experiment has very simple instructions. Normally we would give you a controller and tell you: press a button when you see an image. Since I cannot hand controllers to all of you, I will say: raise your hand when you see an image. That's instruction number one. Number two: keep your eyes on the cross as much as possible, all the time. And number three: blink only when the cross is green. OK? Those are the basic instructions.

Let me show you the code. Here we have some code; it's super simple. This function right here just prints the marker on the console. Ideally, you want a function that links the information from your EEG to the information from your stimulus program, in this case the program that produces the stimulus. Then every time you show a stimulus, like the word Montreal here, you send a marker with the category the stimulus belongs to; in this case, I decided to make it 1. An experiment normally looks something like this, with one big loop. At the beginning of each trial you have the jitter, then the trials are shown in random order, then the fixation cross, and then your blink break: at the end of each trial, you change the fixation cross somehow to let the user know they can blink.

So remember: raise your hand when you see an image, keep your eyes on the fixation cross, and blink only when the fixation cross is green.

OK, cool. If I go back to my presentation: now we've got your brainwaves. Actually, we don't, because you don't have headsets right now. If we had, we would have just collected your brainwaves. Since that's not the case, I will show you signals from my previous experiments.

For post-processing, I strongly suggest this library, MNE. I discovered it while preparing this presentation, and I'm mesmerized by it. When you collect data from EEG hardware, it can come in all sorts of formats; there is no uniform format. But MNE really can read almost any of them. In my lab we use BrainVision, and MNE reads it fine.
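Stepping back to the experiment code for a moment, the loop could be sketched in PsychoPy roughly like this. The send_marker stub, word list, and timings are placeholders of mine; in a real setup send_marker would write to the EEG acquisition stream rather than the console:

```python
import random
from psychopy import visual, core

def send_marker(code):
    # Placeholder: a real version would push the code into the EEG recording.
    print("marker:", code)

win = visual.Window(fullscr=True, color="black")
fixation = visual.TextStim(win, text="+", color="white")
trials = [("Montreal", 1), ("plant", 1), ("embezzle", 2), ("yarn", 2)]
random.shuffle(trials)                       # trials shown in random order

for word, category in trials:
    fixation.color = "white"
    fixation.draw()
    win.flip()
    core.wait(random.uniform(0.4, 0.6))      # jitter
    visual.TextStim(win, text=word).draw()
    win.flip()
    send_marker(category)                    # marker: stimulus category
    core.wait(1.0)
    fixation.color = "green"                 # green cross means blink break
    fixation.draw()
    win.flip()
    core.wait(1.0)
```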
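And loading a recording with MNE might look like this (the file name is a placeholder; read_raw_brainvision takes the .vhdr header of the BrainVision triplet):

```python
import mne

# BrainVision data comes as a .vhdr/.vmrk/.eeg triplet; MNE reads the header.
raw = mne.io.read_raw_brainvision("session.vhdr", preload=True)
raw.plot(n_channels=10)  # plot a subset of the ~30 channels
```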
If you plot the signals, you get something like this. What we have here is seconds on this axis and the channels on that one. We have up to 30 channels, but I decided to plot only some of them. So what can we see? This is the response right where the image was shown. Here we have some blinks, and they are quite regular, so I can tell you are blinking at the right times; thanks for that. Can I tell you that here you saw an image and here you saw a word? No, actually, that's impossible. You need to do post-processing to be able to say that.

The first step of post-processing is a low-pass filter. The power line here in Canada is 60 hertz, so try to filter below 60 hertz. In the lab it makes a small difference; in a real-world situation where the subject is moving, it makes a lot of difference. So step number one: filter the signals.

Step number two is to slice the signals. Here we have voltage against time, and now we are looking at a single trial. We slice the signal from 100 milliseconds before the stimulus, to have a baseline, up to one second after the stimulus, to look at what happens afterwards. You slice together all the trials from the same category, and then you have these squiggly lines. Still, we cannot see anything.

Step number three: average many trials. When I say many, I mean around 50. Average all of them together and you get signals like this. You can tell whether a signal was acquired properly by checking that the baseline is very close to zero: good signals normally have a baseline very close to zero, which means the person collecting them reduced the noise and did baseline correction. So that's good.

Let's say here I have your signals from when you saw the word Montreal, the word plant, and 48 other words, and here from when you saw the word embezzle, the word yarn, and 48 other words. I separate those into two groups; I'm calling them the familiar words and the unfamiliar words. Once you have the signals averaged per stimulus type, you call them event-related potentials, ERPs. To you, these might look as meaningless as the other signals I've been showing, but with the right training they are already quite meaningful.
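Those three steps (filter, slice, average) map almost directly onto MNE calls. A sketch, assuming the raw object loaded above and assuming the markers came through as event codes 1 and 2:

```python
import mne

# raw: the Raw object loaded earlier (e.g., from read_raw_brainvision)
raw.filter(l_freq=None, h_freq=40.0)  # step 1: low-pass below the 60 Hz line noise

# step 2: slice into epochs from -100 ms (baseline) to +1 s around each marker
events, _ = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id={"familiar": 1, "unfamiliar": 2},
                    tmin=-0.1, tmax=1.0, baseline=(None, 0), preload=True)

# step 3: average ~50 trials per condition into an event-related potential
familiar = epochs["familiar"].average()
unfamiliar = epochs["unfamiliar"].average()
mne.viz.plot_compare_evokeds({"familiar": familiar, "unfamiliar": unfamiliar})
```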
The first thing I can show you is the P100. The P100 is a positive (that's the P) deflection at 100 milliseconds, and it shows that you see stuff. Anytime you see something, you produce a P100. It is used to diagnose blindness in babies, for example, so it's very helpful. It can also tell you your processing speed: within one-tenth of a second, the image travels from the monitor all the way to the back of your head, where your visual cortex is.

If I plot the two groups of signals, the familiar in blue and the unfamiliar in green, I can show you the N400, a negative deflection at 400 milliseconds. Here I can tell there is a difference between the familiar and the unfamiliar words. The N400 is a good indicator of whether you get the meaning of something. With an experiment like this, I can tell whether you know a language or not by measuring whether the words have meaning to you.

Another component, I think the most popular one, is the P300, a positive deflection at 300 milliseconds. This plot is words versus figures; I told you that every time you see a figure, you should raise your hand. The P300 tells you when someone found the thing. It works both ways: whether you found it because you were looking for it, or because it's salient and catches your eye. And the P300 has a very interesting application: it's used for spelling.

What is happening here is that the person has an EEG cap and is looking at a grid of flashing letters. When the letter the subject is looking at flashes, they produce a P300. By doing that a bunch of times, and looking at which row and which column elicited P300s, you can find the letter they wanted to spell at their intersection. You can see the speed is not super fast; it's actually quite slow. But this technology has been improving day by day, and it's much faster now. It gets really fast if you can put the electrodes under the skull, because the skull is a huge problem.

And this is just a small sample of ERP components. There is the lateralized readiness potential, for example, which tells you when someone is about to move, just like the Bereitschaftspotential. The N170 tells you what you are a specialist in. The error-related negativity tells you whether you realized you made a mistake or not. There are many more. There is a book, a really huge book, the Oxford Handbook of ERP Components, which is really nice: it tells you how to stimulate the brain to produce the ERP that you want.

OK. And that was just the time domain. We can also analyze the signals in the frequency domain. In case you're not familiar with it: the frequency domain just means you can decompose any signal into a series of sinusoids of different frequencies and then see how much energy there is at each frequency. That's also called the Fourier transform.

If I show you the signal from the first experiment in the frequency domain, it's not much fun; it wasn't made for that purpose. So let's do a second experiment, designed for the frequency domain. In this experiment, the screen will flash rapidly from black to white, so if you have any propensity to seizures, I suggest you avoid looking at the screen. I will let you know when the experiment starts and when it ends. The only thing you have to do is look at the cross in the center of the screen.

There are two ways of doing this, and I want to put an emphasis on this. One way is by controlling every refresh of the monitor. For every refresh you say: this refresh the screen should be black, the next one black, the next one white, the next one white, then black, black, white, white, and you do that for as long as you want. I want to show it for five seconds, so I do it 75 times. The other way is to say: OK, one second divided by 15 hertz is about 66 milliseconds, so for 33 milliseconds it should be black and for 33 milliseconds it should be white. I put the screen white, go to sleep; put the screen black, go to sleep.

So let's get started. Oh, that's not the one, and I cannot stop it. OK, let's do it again. These are the two ways of flashing the lights. Do you notice any difference? No? Nothing at all? A little, right? Yeah. Both of them are just a white and a black rectangle; the timing is the difference, and I don't know how easy that is to perceive. But OK, let's say I grab your signals and do a Fourier transform. Again, with the MNE library that's super easy, a one-liner. This is what your signal would look like: if you watched this flashing, this is what your signal looks like right now.
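Here is a sketch of those two flashing approaches in PsychoPy, assuming a 60 Hz monitor so that one 15 Hz cycle is four refreshes. The structure is my reconstruction, not the demo's exact code:

```python
from psychopy import visual, core

win = visual.Window(fullscr=True)
rect = visual.Rect(win, width=2, height=2, units="norm")

# Way 1: frame-locked. win.flip() blocks until the next monitor refresh, so
# two black + two white frames at 60 Hz give a precise 15 Hz for 75 cycles (5 s).
for cycle in range(75):
    for color in ("black", "black", "white", "white"):
        rect.fillColor = color
        rect.draw()
        win.flip()

# Way 2: sleep-based. core.wait() is not locked to the refresh, so each flip
# lands on whichever refresh happens to come next and the 15 Hz rhythm drifts.
for cycle in range(75):
    for color in ("black", "white"):
        rect.fillColor = color
        rect.draw()
        win.flip()
        core.wait(1.0 / 30.0)  # ~33 ms per half-cycle of the 15 Hz flicker
```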
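And the Fourier "one-liner" could look like this in MNE (the file name is a placeholder; in recent MNE versions the call is compute_psd, while older versions exposed plot_psd directly):

```python
import mne

raw = mne.io.read_raw_brainvision("flicker.vhdr", preload=True)
# Power spectral density: a peak should appear at 15 Hz, plus harmonics.
raw.compute_psd(fmax=60).plot()
```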
You can see a big peak at 15 hertz, and then some harmonics as well, at 30 and 45 hertz. So, do you think the timing makes a difference in how the signal comes out? Actually, it makes a huge difference: without good timing, you cannot produce this effect. In psychology this effect is called entrainment. It's widely used to study different stimuli presented at the same time: to see how you respond to each type of stimulus, you just look at the energy at the frequency at which that stimulus is flickering.

OK, and the next natural step is to combine the time and frequency domains; this is the wavelet transform. Here we have time, and at each time point we can tell how much of each frequency is present: red means more energy at that frequency, blue means less. You can see that EEG in general is very low frequency.
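A time-frequency view with Morlet wavelets in MNE, reusing the epochs built earlier; the frequency range and cycle counts are illustrative choices of mine:

```python
import numpy as np
import mne

# epochs: the mne.Epochs object built earlier
freqs = np.arange(2.0, 40.0, 2.0)   # frequencies of interest, in Hz
power = mne.time_frequency.tfr_morlet(epochs, freqs=freqs,
                                      n_cycles=freqs / 2.0, return_itc=False)
power.plot([0])  # time on x, frequency on y; red = more energy, blue = less
```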
The way I use this in my research is for authentication. Let's say a person comes to get enrolled into a high-security system. They generate this kind of signal while looking at some stimulus. When they come back later, we run the same process again, and then we can tell whether the person trying to access the system is subject A or subject B, for example. This is the field of brain biometrics, and it's what I do most of the time.

And that's pretty much it. I hope this talk has inspired you to maybe get a headset, or build one of your own; there is a big community around that, OpenEEG. You can start collecting your own ERP signals, predicting where people are going to move ahead of time, and then start questioning free will and things like that. I would like to thank my great advisor, Dr. Laszlo, and her lab, the Brain and Machine Lab at Binghamton University, as well as Alexandre and all the people at Proximity HCI. Thank you for your attention.

[Audience question about single events.] Yeah, so I would definitely be careful there, especially for real-time processing. That's why the P300 speller takes so long: you need to take the average. If you want to work with single trials, you might have a better chance with frequency-domain analysis; from single trials you can sometimes get the phase, and that also gives you a lot of information. For time-domain analysis the signal-to-noise ratio is very low, so without averaging it's quite hard.

[Audience question about electrode gel.] Yes, we use it, but you don't have to. The commercial headsets, for example, try to use dry electrodes. I guess they heavily post-process the signal afterwards, and they also use active amplifiers, so there is a voltage follower at the electrode side. Good question.

[Audience question about the authentication stimuli.] Yes, for the authentication one. The way I collected that image... I thought someone would ask. This is the type of stimulus I was showing; it was pretty much like the first experiment I showed you. I would show words, foods, celebrity faces, and that's the output I would get. It came from still images shown one after another, averaged together. Every person has different responses, and that's why we can identify them. Yes, it's exactly like fingerprints; we call these brain prints. It might change over time: the farthest we've gone in test-retest is one year. I think it's kind of reasonable to change your password every year, so every year you would re-enroll your brain.

[Audience question.] What do you mean, train the system? Oh, I think Emotiv has a library that does that; it requires some training. Actually, there are already games that do that. There is an obstacle game with a little ball on a fan: your brain waves determine the strength of the fan, the strength of the fan determines the height of the ball, and you go through obstacles like that. It's a little bit frustrating at the moment. Are we out of time? No? OK.

[Audience question: am I sharing this project?] Yeah, yeah, yeah.

[Audience question about motor signals.] Yes. There are many types of ERP components. The ones I showed you are mostly sensory and cognitive, but there are also motor components. You can pick up when someone is going to move. Even if they don't move at all, even if they only think about moving, you can predict whether they are going to move their left hand or their right hand.

[Audience question about MNE.] Yes, I think it's quite friendly and well-documented. If you have the opportunity, definitely give it a try.

[Audience question about the headsets.] I see; I hope that changes, because that is sad, very sad. This one here, the Emotiv, has the best hardware, but they charge you for the raw data: you have to pay $50 a month to be able to get it. I don't know why the business model is like that. The only thing they give you for free is a handler, like right, left, up, and down, but they charge you for the raw data, which I think should be the other way around. The NeuroSky is the most versatile, even though it's a single electrode, and it's the most open: easy to access the raw data and everything. The Muse I haven't used, so I wouldn't be able to tell you about it.

OK, thank you. Thank you.