I come from Inria, the French national institute for research in computer science and applied mathematics. We are now building an institute near Nice, the NeuroMod Institute, devoted to modeling in neuroscience, so it's an exciting time for us neuroscience modelers in this area. We're supported by a new university, Université Côte d'Azur, a kind of superstructure above the University of Nice, which is quite dynamic. Today I don't think I need to recall the principles of brain-computer interfaces, since they were widely explained by the previous speaker, thank you. I'm going to talk about the issue of personalizing BCIs to their users. Briefly, one of our problems is to determine features of brain activity that can characterize a mental state. You know how noisy these kinds of recordings, whether taken inside or outside the brain, can be, and therefore how challenging it is to characterize a mental state from them. This has to be done in almost real time, in order to translate these mental states into commands that are meaningful for the subject. In some cases we'll also look at passive brain-computer interfaces, in which subjects are not really aware of how their mental states are being interpreted, but those states can be used to modulate the user-system interaction. Brain-computer interfaces have carried great promises, but for the moment we can't claim that they're helping many end users. (Sorry, can I turn this microphone on? I think it's on. OK.) We don't see that many people using BCIs in their daily life, and from my point of view the hindrances to BCI adoption are the following. First, BCIs have to be really relevant to people's needs.
As long as people have one muscle left in their body that they can willfully use, what would be the point of donning such a complex system and having to learn to use it, if they can already do what they want with that remaining muscle and the help of therapists? There's also an issue of availability and wearability on a truly daily basis; we'll see that gel-based EEG is certainly not a good way to acquire EEG for durable, long-term use. Also, it may sound a bit superficial, but there are not many early adopters or role models who can give the people who may need a BCI the incentive to try it and to get equipped. Sometimes you need role models, famous people or people one wants to identify with. For all this, of course, a lot of work has to be done. So in this talk I'll discuss how to tailor a BCI to better fit the user's needs and specificities, physiological and psychological, while keeping utility and usability in mind. The main thing in a brain-computer interface is that there is brain activity, and this activity is acquired by sensors that can be placed in different locations. In my talk it will be only non-invasive, so EEG-based sensors. Between the sensors and the brain, you know that there are several kinds of tissues, each with specific properties. There are also cortical foldings that are quite distinct from one person to the next: the main sulci and gyri are shared, but the details of the foldings differ, which makes the activity that you see on one person's scalp actually quite different from another's. A way of trying to find more invariant features, in order to develop BCIs that translate more easily from one subject to the next, if that is a concern, is to go to the source space.
Because then you get rid of the problem posed by the foldings, and also by the head shape and the electrode positions, which differ across people. To go to source space, you have to know where the sensor positions are in 3D on the scalp. Ideally, you need to know the geometry of the different tissues, as I show here in a very detailed way, have an idea of their conductivities, solve the forward and inverse problems, and then reconstruct the sources as they develop on the cortex. The sources are dynamic, of course, but the forward and inverse problems are not: they are static, because you can use the quasi-static approximation of Maxwell's equations. So it's not that hard, and there are many toolboxes that allow you to do this. If you can't do source analysis, you can try to use features that are designed to achieve some kind of invariance. For example, in the field of BCI there is a type of feature that has come into play which is based on covariance matrices: spatial covariance matrices, sensor across sensor. The invariance achieved with these matrices is the following: if you consider distances between them, when clustering or classifying, in a specific geometry that is well adapted to covariance matrices, namely the Riemannian geometry of symmetric positive definite matrices, then you are invariant to linear transformations. Since the forward problem is a linear transformation of the sources, there is a kind of invariance, meaning that the distance between the covariance matrices of two tasks at the sensors will be very close to the distance between the corresponding covariance matrices in source space.
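As an aside, this invariance is easy to see numerically. Below is a minimal sketch, with synthetic SPD matrices and a random invertible mixing matrix standing in for the forward model (all invented for illustration, not any toolbox's actual code): the affine-invariant Riemannian distance is computed before and after the linear transformation and comes out unchanged.

```python
import numpy as np

def _sqrtm_inv(A):
    # Inverse matrix square root of a symmetric positive definite matrix,
    # computed via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(w ** -0.5) @ V.T

def airm_distance(A, B):
    # Affine-invariant Riemannian distance between SPD matrices:
    # d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F
    iS = _sqrtm_inv(A)
    M = iS @ B @ iS                     # symmetric positive definite
    w, _ = np.linalg.eigh(M)
    return np.sqrt(np.sum(np.log(w) ** 2))

rng = np.random.default_rng(0)

def random_spd(n):
    X = rng.standard_normal((n, 2 * n))
    return X @ X.T / (2 * n)

A, B = random_spd(4), random_spd(4)     # "source-space" covariances of two tasks
W = rng.standard_normal((4, 4))         # invertible linear mixing ("forward model")

d_source = airm_distance(A, B)
d_sensor = airm_distance(W @ A @ W.T, W @ B @ W.T)
print(abs(d_sensor - d_source) < 1e-6)  # congruence invariance holds
```

In practice one would of course use a dedicated library (pyRiemann, for instance, implements these distances and the associated classifiers) rather than hand-rolling this.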
I didn't cite the work, but Marco Congedo and Alexandre Barachant, researchers in Grenoble, were quite instrumental in bringing this geometry into BCI. I was saying that there are toolboxes that allow you to reconstruct the source space; this problem and its solutions are widely known in our community. The advantage of going to source space is that you're closer to the neurophysiological reality of what is actually going on in the brain, which is better interpretable neuroscientifically than just saying that there is activity on sensors, because you can capture the dynamics of how activity propagates on the cortex much better than at the sensor level. And, as I was mentioning, it reduces the variability across headsets, subjects, and sessions. In other fields people are moving to source space more and more, in neuropsychology especially. I'll give you one illustrative example of using source space in brain-computer interfaces. This was a national project called CoAdapt, which I was coordinating a few years ago. On top of the BCI loop, which is the black loop here, we introduced a new piece of meta-information, in red: trying to detect, from the brain of the subject performing the BCI, some indication that an error is occurring, for example that the feedback given to the subject, or the command being executed, is not correct. The subject perceives the error, which gives rise to an error-related potential in the EEG. Detecting this error-related potential allows us to relabel trials as faulty, as having been badly classified, and thus to improve the classifier. Detecting errors in EEG signals is not trivial, because you normally need quite a lot of examples to identify a feature correctly, and if the BCI makes too many errors, people are not going to be motivated to use it.
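The relabeling idea itself can be sketched very simply. The function below is a hypothetical two-class illustration, not the CoAdapt project's code: whenever an error-related potential is detected right after the feedback, the trial's predicted label is flipped, yielding extra self-labeled trials on which the classifier can be retrained.

```python
import numpy as np

def relabel_with_errp(pred_labels, errp_detected):
    # For a 2-class BCI: if an error-related potential was detected right
    # after the feedback, assume the classifier's output was wrong for that
    # trial and flip its label (0 <-> 1). The corrected labels can then be
    # used to fine-tune the classifier online.
    pred = np.asarray(pred_labels)
    err = np.asarray(errp_detected, dtype=bool)
    corrected = pred.copy()
    corrected[err] = 1 - corrected[err]
    return corrected

pred = [0, 1, 1, 0, 1]                     # classifier outputs per trial
errp = [False, True, False, False, True]   # ErrP detector firing per trial
print(list(relabel_with_errp(pred, errp)))  # [0, 0, 1, 0, 0]
```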
And so there is a kind of conundrum here: how to identify these errors quickly. An answer was to go into source space, using the FuRIA algorithm developed by Fabien Lotte and co-workers. By going into source space and looking for features specifically in the Brodmann areas known to generate such error potentials, it was much more efficient to detect errors, leading to online correction. So even though we go into source space, we have to keep in mind that brain activity may still be variable across subjects. Here, actually, I'm showing just three electrodes, not even source space, for motor imagery brain-computer interfaces: what is often called ERD and ERS. The S in ERS stands for synchronization, oscillatory power above the baseline; ERD means desynchronization of the brain's oscillatory rhythms. Here we have time-frequency maps in which people are instructed to move their right hand between 0 and 2 seconds. You can see there is a reaction time: the desynchronization on the left hemisphere starts a little after 0 seconds and stops a little after 2 seconds. Then you see the ERS, a synchronization, in red: power that is stronger than during the baseline preceding the movement. This is a typical time-frequency map, averaged over many, many repetitions, something like 100, for it to be this clear; you wouldn't get such a clear map from a single trial. But what you see when comparing subjects is that you don't have the same power distribution at all: in some subjects, what is called the beta band is more prominent than in others. So if you use the same classifier across subjects, you might run into a lot of trouble. Source space will alleviate some of these problems, but it won't reconstruct power at frequencies that weren't there on your sensors in the first place, right?
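To make the ERD computation concrete, here is a toy sketch on a synthetic single-channel signal (the 10 Hz rhythm, the 1-second power window, and the amplitude drop are all invented for illustration): band power is compared to the pre-movement baseline, and values well below zero percent indicate desynchronization.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                # sampling rate in Hz (assumed)
t = np.arange(-1.0, 4.0, 1 / fs)        # trial from -1 s (baseline) to 4 s

# Toy signal: a 10 Hz "mu" rhythm whose amplitude drops (ERD) during the
# movement period 0-2 s, plus additive noise.
amp = np.where((t >= 0) & (t <= 2), 0.3, 1.0)
x = amp * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

# Crude band power: squared signal smoothed with a 1-second moving average.
power = np.convolve(x ** 2, np.ones(fs) / fs, mode="same")

# ERD/ERS in percent, relative to the pre-movement baseline (-1 to 0 s).
baseline = power[(t >= -1.0) & (t < 0.0)].mean()
erd = 100 * (power - baseline) / baseline

print(erd[(t > 0.5) & (t < 1.5)].mean() < -50)  # strong desynchronization
```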
The usual way to get beyond this intersubject variability is simply to ask the subject to calibrate the BCI for a long time before starting; this calibration allows the features and classifiers to be learned. But the state of the art in current BCI research is really trying to get rid of this calibration phase, to have more plug-and-play BCIs that you don't have to train before using. The idea is to use transfer learning, which does domain adaptation of the features. Then, once the subject is actually using the BCI, with the feedback and the commands coming into play, you get features that are even more informative, and you can fine-tune your classifier and get a better one. Why do I say the data are better when the person is using the BCI rather than calibrating it? Because when they are using it, they are in another frame of mind: they are receiving other types of information and engaged in another task than mere calibration. So even between calibration and use there is already a mismatch of the features, and it's important to retrain, to fine-tune, the classifier while the person is using the system. There are quite a lot of references on transfer learning; it's an important topic these days, and I hope it will come into the BCIs people use very soon. Here I only gave you the example of imagining a right-hand movement. But why should everybody be able to do right-hand motor imagery? I'm currently working with a disabled person, with a view to training her for a competition called the Cybathlon, and she has never moved her left hand in a useful way. So for her, imagining a left-hand movement, for example, would not be possible. Talking to people who are disabled and asking them to imagine movements they have perhaps never been able to make in their life is kind of far-fetched.
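Coming back for a moment to the transfer-learning idea: one popular family of tricks for covariance features, sketched here in the style of Euclidean alignment (my choice for illustration, not necessarily the method used in any particular study), is to re-center each subject's or session's trial covariances on the identity matrix, so that a classifier trained on one subject can be carried over to another.

```python
import numpy as np

def _sqrtm_inv(C):
    # Inverse matrix square root of an SPD matrix, via eigendecomposition.
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

def align_trials(trials):
    # "Re-center" one subject's data: whiten every trial covariance by that
    # subject's mean covariance, so all subjects end up centered on the
    # identity matrix and features become comparable across them.
    covs = np.array([X @ X.T / X.shape[1] for X in trials])
    R = covs.mean(axis=0)               # reference (mean) covariance
    iR = _sqrtm_inv(R)
    return np.array([iR @ C @ iR for C in covs])

rng = np.random.default_rng(2)
trials = [rng.standard_normal((8, 500)) for _ in range(20)]  # 20 trials, 8 channels
aligned = align_trials(trials)
print(np.allclose(aligned.mean(axis=0), np.eye(8), atol=1e-8))  # recentered on I
```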
So you really have to tailor mental imagery. Let's call it mental imagery rather than motor imagery, because imagining music, language, landscapes, or other things might be much more relevant for some users. You have to think of your user, and get into your user's frame of mind, to understand what mental imagery they may be more interested in and more proficient at using. Then you also have to look at people's psychological traits. Some people are really good at thinking in 3D or solving certain types of problems, while others have other kinds of strengths, and according to these traits the BCI training progression can be adapted. There is very interesting work on this by Andrea Kübler's group and by Fabien Lotte's group. Then, of course, since the BCI is meant to be used: what commands does the user want to give? Do they just want to press a button, or to move a cursor or a robotic arm continuously? Is it better for the person to use a BCI that is self-paced or system-paced? If it's self-paced, the person can decide at any moment which action to give, but there may be a limited number of actions. If it's system-paced, then stimuli are displayed or conveyed to the subject; they can be tactile, auditory, or visual. All this has to be thought through quite deeply. To learn about the user, we developed an empirical approach which tries to learn about the user not by asking them questions, but from the BCI's point of view: the computer is going to try to learn about the subject by acting, by performing a certain number of actions according to its observations. What these actions and observations are will become clear in the next slides, but basically we're going to use an exploration-exploitation trade-off.
So you immediately think of a reinforcement learning approach, in which exploring means learning about the user from the computer's point of view, and exploiting means acting optimally according to the current knowledge. What can be the goal of the computer in a brain-computer interface, when we want to learn about the user? To determine automatically the best-classified mental task, to save calibration time, and to do so as fast as possible. So imagine you have a bunch of tasks, K tasks, that we might propose to the user: imagining moving their left hand, their feet, their right hand, and so on; we would tailor these to whatever the subject is able to imagine, their tongue, their arm, et cetera. We're going to try to find, as fast as possible, the task that is best classified with respect to the idle state. You mustn't forget there is also an idle state, because people don't always want to be doing tasks; sometimes they just want to do nothing, and the BCI has to recognize that too, the no-control idle state. We're going to use a computational model with a kind of barbaric name: a stochastic multi-armed bandit. So what is a bandit? One of those machines in Las Vegas where you try to win money. Multi-armed because we have several of these machines, and stochastic because each machine has a reward distribution that we don't know and that we are going to model. We imagine that one of these machines is more rewarding than the others, and we try to find which one it is by trying them out in turn. In the usual multi-armed bandit, the goal is to maximize the sum of the rewards, all the money you're getting. In the case of BCI, what we're trying to maximize is the classification rate of one of the bandits, that is, of one of the tasks: we're going to try to find the task that maximizes the classification rate.
I said it was stochastic because, by definition, we don't know the classification rate we want to maximize; we're trying to learn about it. But we can estimate it within a confidence interval. We have each task performed, let's say, twice, and then we get an estimated classification rate, which is in blue here. So here we have just two tasks, and we estimate the classification rate of these two tasks against the resting state. The true classification rate, which we're trying to discover as fast as possible, is in red. And we have these upper confidence bounds: according to the number of times we've sampled a task, we know with a certain confidence that its true classification rate lies within certain bounds. This green star we can actually compute. What the upper-confidence-bound theory in reinforcement learning gives you is a way to determine, at each step, which task you should select in order to converge as quickly as possible to the best one. And it's quite easy: the best task to select is the one that maximizes the upper confidence bound, which we can compute at each step. I'll skip the details, but, for example, this inequality guarantees that the number of times you choose a task m shrinks with the square of its distance to the optimal classification rate r. So if a task is far from the optimal classification rate, you shouldn't be presenting it too often, and that is what happens. We tested this online with three tasks: feet, right hand, and tongue. Normally, when you do a calibration, you ask the person to perform feet, tongue, and right hand equally many times.
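In code, the selection rule is just UCB1. The sketch below simulates it with made-up true classification rates for the three tasks (the numbers are hypothetical, only in the spirit of the experiment): at each step, the task with the largest upper confidence bound is presented, so poorly classified tasks are sampled less and less.

```python
import math
import random

def ucb_select(successes, counts, t):
    # UCB1: pick the task maximizing (estimated rate + confidence width).
    best, best_ucb = None, -1.0
    for k in range(len(counts)):
        if counts[k] == 0:
            return k                    # present every task at least once
        ucb = successes[k] / counts[k] + math.sqrt(2 * math.log(t) / counts[k])
        if ucb > best_ucb:
            best, best_ucb = k, ucb
    return best

# Hypothetical true classification rates for three imagery tasks.
true_rates = {"feet": 0.85, "right hand": 0.75, "tongue": 0.55}
tasks = list(true_rates)
random.seed(0)

successes, counts = [0] * 3, [0] * 3
budget = 60                             # total number of task presentations
for t in range(1, budget + 1):
    k = ucb_select(successes, counts, t)
    counts[k] += 1
    # One "trial": the classifier is correct with the task's true probability.
    successes[k] += random.random() < true_rates[tasks[k]]

print(tasks[counts.index(max(counts))], counts)
```

Over a longer budget, the gap-dependent bound mentioned above guarantees that the poorly classified task is sampled only logarithmically often.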
Here, instead, we run our algorithm: first it has each task performed once, and then it selects automatically, using this upper-confidence-bound algorithm, the next task the person should be asked to do. At the end, we see that feet had been asked 28 times, right hand 19 times, and tongue 13 times. This is the last line of this table, showing that for a given budget, I think there were 60 task presentations in total, we hadn't wasted time on asking the person to perform the tongue task, because quite quickly the algorithm sees that the classification rate of the tongue is rather low. So it spends more time exploring the best tasks, here feet and right hand, and ends up, after a few checks, really finding that feet had the best classification rate. This can be a faster way to learn which task the BCI will classify best, once you have asked the subject which tasks they think they will be able to perform. Now, how much time do I have? Five minutes. So, I already mentioned that we have to adapt the BCI stimuli and feedback. For people who are really disabled, you should of course consider independent BCIs, which do not rely on any muscle activity. For example, the P300 speller or SSVEP rely on looking at targets on a screen, and thus on eye movements, and some locked-in syndrome patients have difficulty with eye movements. So for these people you have to think of an auditory BCI, or another type of visual display, or a tactile display. Now I'm going to focus on making BCIs practical to wear and use. This comes out of a project with Nice University Hospital, which came to us about six or seven years ago: an occupational therapist working with ALS patients at Nice University Hospital contacted us.
The therapist said: we have some patients for whom we can't find a muscle on which we're able to attach a contactor, as we usually do to allow them to perform their daily activities; could we work with you on BCI? This demand really came from the hospital, and we thought it was a great opportunity to bring a BCI out of the lab; this is why I'm concerned with these issues, because of this experience in Nice. We thought the simplest BCI to start with was the P300 speller, because it was already working pretty well in the literature and there were even some commercial systems available. But when we tested the commercial systems, they didn't work at all, so we decided to develop our own. We built a BCI system, which I can show you working here quite fast. Here we have a patient; you see, she does have motor ability, but she's unable to speak. She has bulbar-onset ALS; maybe in five years, she won't be able to move her muscles either, unfortunately. So learning to use a BCI while you still have some residual ability is perhaps important, in case you reach a stage where you can't. Here she's gazing at the letters, which are flashed with smileys; as mentioned, the smileys give a somewhat better ERP than just the letters. After a few flashes of each letter, at the pace you see, she's able to select the letter she wants to write. She's writing the word "rêver", which means "to dream". And of course she would be able to select actions, not just letters, in exactly the same way; it's a selection principle, and it's working quite well. But you see she has a usual EEG headset on her head. It's not that practical, and the people at Nice University Hospital also said: no way can we have EEG like that in people's homes. It's not practical for somebody to put on their head when they already have so many problems, and putting gel at the same place each day damages the scalp after a few weeks. Lots of issues. So it's not practical for the moment.
Still, people are satisfied with it. Well, maybe they wanted to please us when they answered the questionnaire, but they actually asked to take it home; they wanted to practice with it at home. We believe it's not yet totally practical, although it works well in terms of accuracy: it detects very well what the person wants to select. The patients sometimes have a network, and other ALS patients in France learned that some ALS patients in Nice were testing our system. One person in the Alps said: OK, I want to test your system. We couldn't go and see him, so we said: you can download our software. He downloaded the application for running a BCI on his own computer; we didn't provide anything in terms of hardware, just the software. Then he ordered an Emotiv EPOC headset over the internet. He is severely, severely disabled; he can only move his eyes. He had somebody come every week to help him put the EPOC on and connect it to the system, et cetera. After a while, he was happy to be able to use it; he wrote a blog about it and got a prize from the Academy of Sciences. People like this are very encouraging, people who go beyond their medical routine to help the research and to help BCI actually get used. It's encouraging for the field, because we think: OK, we've got to finish our work and try to go further. As I was saying, gel-based electrodes are a problem. We have some great colleagues, like Professor Haueisen at TU Ilmenau in Germany, who are really trying to push forward the state of dry electrodes, with the caveat that there is a compromise between comfort and signal quality. I don't know if any of you have ever worn dry electrodes; with some of them, the ones with the gold pins, you would have to be a fakir to enjoy that.
Now there are more compliant plastic-type electrodes, coated with Ag/AgCl, which are nicer; you don't feel them as much. But you do have to apply some pressure for them to work, and with textile caps it's just impossible to get the right pressure on each electrode for every different head. So at the moment, with Nice University Hospital and Université Côte d'Azur, we're designing custom headsets, which means we 3D-scan somebody's head. After all, if they break their arm, they get a cast, right? And nobody thinks it's too expensive to build a cast for an arm. Here, we 3D-scan the head and then build a cast of it, on which we design, with some silicone, just the right number of electrode positions in just the right places. This has to be fabricated; there is some oven curing involved. Then the electrodes I showed you before, the nice plastic ones that don't hurt too much, are simply slid into the positions that were already planned. You can wear this for quite a long time; we're testing that the impedances stay constant for a long time, and you don't really feel it. It's still a bit heavy, but you don't really feel it. It could be printed with whatever the patient would like to wear; it could be kind of fun, so we could have some role models one day wearing these things. And it was really important that it takes one second to put on. Literally one second: you don't have to spend 15 minutes applying gel and all that. The performance for P300 is almost on par with gel, though not quite; the signal quality is not quite as good. So, conclusions. My guidelines towards personalized BCI are: learning from, with, and about the users, spending time with the people who are going to spend time using your system.
Starting with a simple BCI, as we did with the P300 speller, and maybe going further later on, as we're doing for the Cybathlon at the moment. Going out of the lab, really having living labs, places where people will try the systems, even at home. Value-centered design, of course, for hardware and software. I didn't mention it, but we try to make our software really multi-platform, accessible, as open as possible; it's downloadable by whoever would like to use it. We also need truly wearable sensors, and the software, as I was saying, should be interoperable. One important thing too: you want to be able to use existing applications, not develop your own office suite for BCI, but allow OpenOffice to take BCI commands. That's an important point for neuroinformatics: you have to hack into, Trojan-horse into, existing applications, because you're not going to spend time redeveloping all that software, which is already quite efficient; you have to find the handles for BCI users. And fundamental brain research, which was mentioned in the previous talk as well: of course, we need to learn a lot more about all those networks and nervous-system components that people are using in BCI, and about the brain's processes. Before I stop, I would like to thank all the contributors to this presentation, including the patients, and Damien Perrier, the patient from Chambéry. I would like to acknowledge that we're using the OpenViBE software, developed at Inria to do BCI, which is open source, and to acknowledge the support from our sponsors. Thank you. Thank you very much. Time for one or two short questions; we will have the discussion later. Yes, at the back; there is a question over there. Hi, thank you for your talk. I had a question about BCI for communication.
I was wondering whether you have experience working with people who have cognitive impairment with advanced neurodegeneration, people who maybe can't communicate what they want to do or how they feel, or who can't take instructions about what to do in a specific task or indicate what they want. No, I'm sorry, for the moment, no. Actually, in the screening we did before inclusion in the P300 speller study, we had excluded cognitive deficits. It's true that this is a reproach one could make, because a lot of ALS patients do develop cognitive deficits, so we have to be aware of that. I can refer you to the work of Michael Tangermann at the University of Freiburg, who is working very closely on cognitive impairment and how BCI can be used with it. But I can't say more. Thank you very much. Second question, in the middle; please wait for the microphone. I'm interested in how much the transformation to source space improves the performance of BCI, because, you know, with inverse solutions there are certain problems. Do you have some comparison of how it worked with the transformation to source space versus just the raw signals? Did it really improve? Yes, I think the error-potential study is the one that best shows the improvement, in particular in the number of examples you need to get correct classification. Otherwise, with my PhD student Joan Fruitet, we had tried motor imagery quite a long time ago; we had shown a small improvement of the classification rate, but it wasn't super significant.
But I think, all the same, that if you know the regions in which you are looking for activity, for example in neurofeedback, imagine using a BCI to try to enhance motor activity in a certain region for a person who has had a stroke, then it is worth it. I mean, in fMRI they do it, right? They go to the voxels in fMRI space where the activity is supposed to lie, not elsewhere. So if we can do it in EEG and MEG, why not do it? Of course, we need the head shape, et cetera. For patients who really need personalized care based on their brain activity and neurofeedback, I think it's really worth it. Because my impression is that simple spatial filters are enough, maybe. Yes, let's say that if you're interested in the anatomical details, then it's interesting. Can we move the more detailed discussion to later?