Okay, so no advertisement. Our next talk is by Pierre Clisson, and I hope I pronounced that correctly. It's a topic I always wanted to get into but never had the opportunity: brain-computer interfaces, something very interesting. So welcome, Pierre. Hi, how are you? I'm fine, and you? Perfectly fine. So where are you streaming from? From Paris, France. Ah, very nice, so not that far; I'm in Basel, Switzerland. In another time it would be just a train ride, and today it's more complicated. So, okay, let's get started.

Okay. As you know, the field of brain-computer interfaces has attracted quite a bit of attention from the media recently. Beyond the hype, I would like to show you how you can achieve state-of-the-art results today without having any surgery. First, we will get just enough theory to know what BCIs are and how they work. Next, we will briefly review the kind of hardware you need to build your own BCI. We will then introduce Timeflux, an open-source Python framework for designing BCIs. And finally, we will look at a practical example of an actual BCI, along with a demo and code. So let's get started.

A brain-computer interface lets you interact with the physical world using your mind only. This is a very generic, broad definition, so I'd like to show you a few examples of what you can do with BCIs. First, you can help disabled people move and communicate: you can learn to control a wheelchair or a robotic arm with your brain, and we can develop software to spell words with your mind only. Then there is something called neuro-rehabilitation. In that case, we ask patients who have suffered a stroke to imagine a hand or foot movement, and if we detect the correct pattern in the brain, we show the movement as feedback on a screen or in virtual reality. The goal is to leverage neural plasticity and help people recover the use of their limbs after a stroke. There are other uses: for example, we can detect if a driver is getting sleepy (we can estimate the vigilance level), or we can train for meditation; this is something we call neurofeedback. And obviously, we can just have fun: we can include emotion detection in games, or decide to fly a drone with our mind. These are just a few examples.

Now, this is the general principle of a BCI, the process we follow when we design a brain-computer interface. First, in the signal acquisition step, we generally use an EEG headset to measure brain activity. Of course, we could get a much better signal by implanting electrodes directly inside the skull, but as I already said, no surgery today. Then comes the signal processing step: we generally have to filter the signal to reduce noise (muscle artifacts, electrical noise, and so on), and we can also apply more advanced algorithms to increase the signal-to-noise ratio. Then, in the feature extraction step, we enter the machine learning realm: this is where we divide the raw signal into small chunks of data called epochs, and where we extract meaningful information from these epochs. In the pattern recognition step, we train a machine learning model; depending on the application, it can be a classification or a regression model. And finally, in brain-computer interface there is "interface": this is where we output the resulting prediction, which allows us to control an external device, for example a wheelchair, or to provide feedback to the user through sound or images.
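To make the epoching step concrete, here is a minimal sketch of cutting a raw signal into fixed-length epochs around stimulus onsets, using pandas and NumPy. Everything in it (the sampling rate, channel names, and onset positions) is made up for illustration; it is not the Timeflux implementation, which ships its own epoching node, as we will see later.

```python
import numpy as np
import pandas as pd

# Illustrative setup: 10 seconds of fake 8-channel EEG at 250 Hz
# (one sample every 4 ms), indexed by time.
rate = 250
channels = [f"ch{i}" for i in range(8)]
index = pd.date_range("2021-01-01", periods=10 * rate, freq="4ms")
eeg = pd.DataFrame(np.random.randn(len(index), len(channels)),
                   index=index, columns=channels)

# Pretend these three timestamps are stimulus onsets received as events.
onsets = index[[500, 1200, 1900]]

# Cut one epoch per stimulus: 0 to 600 ms after onset.
length = int(0.6 * rate)  # 150 samples
epochs = np.stack([eeg.loc[t:].to_numpy()[:length] for t in onsets])
print(epochs.shape)  # (3, 150, 8): n_epochs, n_samples, n_channels
```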
So, going back to the signal acquisition phase: how does an EEG work? EEG stands for electroencephalogram. It's a recording of the electrical activity of the brain, using electrodes placed on the scalp. The brain is made up of cells called neurons, and these neurons communicate using chemical messages. These chemical messages change the electrical potential of the cells they connect with, so in a way, neurons can be seen as tiny electric dipoles, just like very small batteries. These changes in electrical potential create a very small electric field. It is so small that it is impossible to measure the activity of one individual neuron through the scalp. But if a lot of these neurons are aligned and activated at the same time, the electric field becomes large enough for us to measure it.

An EEG signal is therefore made up of many such measurements over time: it is a time series. The formal definition of a time series is a series of data points indexed in time order. In Timeflux, we use pandas to represent these time series. As you can see on the left, we have a time index and four electrodes, and on the right, we have simply plotted these four channels of data. The electrical activity is measured in microvolts, a very small unit.

Now that we have this EEG signal, this time series, how do we process it? How do we classify it? There are two main ways, depending on the paradigm and the application. The first one is in the frequency domain. You have probably heard about brainwaves before: these are neural oscillations in specific frequency ranges. We classify these brainwaves into five main categories (there are others, of course), and broadly, they correspond to different states of mind: attention, drowsiness, and so on. The other way is to observe how the raw EEG signal changes under specific conditions. When we do this, we operate in the time domain. For instance, here we have a stimulus at time zero, and we can see that just after the stimulus, there is a specific pattern. This pattern is called the P300, because we start to see a lot of activity around 300 milliseconds after the stimulus. We can detect these patterns in the brain.
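To give a flavor of the frequency-domain view, here is a minimal sketch that estimates band power with SciPy's Welch method. The sampling rate and signal are made up, and the band edges are the usual textbook values, so treat it as an illustration rather than a reference implementation.

```python
import numpy as np
from scipy.signal import welch

rate = 250                           # sampling frequency in Hz (illustrative)
signal = np.random.randn(10 * rate)  # 10 seconds of one EEG channel

# Power spectral density via Welch's method, 2-second segments.
freqs, psd = welch(signal, fs=rate, nperseg=rate * 2)

# Approximate band definitions in Hz.
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}

df = freqs[1] - freqs[0]
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = psd[mask].sum() * df  # crude integral of the PSD over the band
    print(f"{name}: {power:.3f}")
```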
So, what do you need to acquire this data? You need an EEG system, and an EEG system has two components: electrodes, which are placed on the scalp, and an amplifier, which digitizes the electrical signal. First, the amplifier. There are many brands doing EEG. On one side, you have research-grade amplifiers: very good quality, but very expensive. On the left here we have a BioSemi, which is a really, really good amplifier, but very, very expensive. On the right, we have an open-source DIY amplifier; this specific one is the HackEEG from Starcat, but you may also have heard about OpenBCI, which is very popular. Then we need electrodes. On the left, we have the gold standard for EEG research: an electrode cap. You have to put gel into the holes, so it can be messy, and it takes about 15 to 20 minutes to set everything up when you're experienced, but the quality is really top-notch. On the other hand, you have 3D-printed, DIY, dry-electrode solutions, such as this one from OpenBCI. It's fully 3D printed; the signal quality is not as good, but you can start to play with it.

And finally, there is a third category, what is called consumer-grade EEG, which combines the amplifier and the electrodes into one small form factor. On the left here, you have a very popular solution for neurofeedback, the Muse (here, the Muse S). On the right is the Neurosity Crown. They have dry electrodes and limited coverage of the scalp, so they are good for basic neurofeedback, but you can't really use them for advanced BCI. We are trying to bring the best of both worlds: this is a project we are working on, which we expect to launch in the upcoming weeks or months. It is a high-performance, research-grade EEG, suitable for advanced applications such as BCI, but at an affordable price. So please register your email, and you will get a notification when it's ready.

We have talked about the theory and the hardware; now we need to talk about the software. Timeflux is an open-source Python framework for the acquisition and real-time processing of biosignals. It runs on Linux, macOS, and Windows, which was not an easy task, considering all the multi-processing and multi-threading issues, and that Windows is not a POSIX system anyway. But it works. What can you use Timeflux for? You can use it to acquire data with synchronized events from multiple sources, to present stimuli to users, to build biofeedback or neurofeedback applications, obviously to build brain-computer interfaces, to make interactive installations, and so on. The important thing to remember is that Timeflux is not only about EEG data, and not even only about biosignals: it is compatible with many kinds of time series.

Timeflux works with many devices out of the box. We have native support for a number of them, including OpenBCI, HackEEG, and BITalino, but also commercial research-grade EEG systems such as the ANT Neuro on the right. We can also support other devices such as eye trackers, multimodal biofeedback systems, or even force platforms, which are basically scales on steroids. And we support the Lab Streaming Layer (LSL) networking protocol; with this, you can access tens of devices out of the box.
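To give a taste of what the LSL side looks like in plain Python, here is a minimal sketch using the pylsl package directly. This is not Timeflux's internal driver, and the stream type "EEG" is just an assumption about whatever device or program is publishing on your network.

```python
from pylsl import StreamInlet, resolve_byprop

# Look for any stream on the local network that declares itself as EEG.
streams = resolve_byprop("type", "EEG", timeout=5.0)
inlet = StreamInlet(streams[0])

while True:
    # One sample (a list of channel values) and its LSL timestamp.
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)
```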
So, why did we build Timeflux in the first place? First, we wanted something that fits well within the Python data science ecosystem. I wanted something with a permissive MIT license, something I could also use in a commercial setting, and something that works both offline and online. In the BCI jargon, online just means real-time; what I mean is that you can use Timeflux in real time, but you can also use it in a Jupyter notebook, for instance. And I wanted something that allows me to quickly prototype and test new ideas without having to go deep into C++.

Timeflux is easy to learn, easy to use, and easy to extend. Most of you are familiar with graphs, and if you are not, the bit of basic graph theory you need is really easy to pick up. It relies on industry standards: pandas for 2D data, Xarray for multidimensional data, scikit-learn for machine learning, LSL. If you have already done some basic data science, you can reuse your skills; there is nothing new, just the tools you already know. And if you're not a coder, it's not a problem, because many processing pipelines can simply be described using a simple YAML syntax. If you want to go further, if you have a special use case, you can build your own custom node: it's just a standard Python class with one method to extend. We try to have good documentation. It's not perfect yet, but we are going in that direction. There is a full tutorial on the documentation website; if you spend half an hour following the steps, you will get the main principles of Timeflux. We have examples, and we also try to document the API extensively, with examples and illustrations.

Timeflux comes with everything you need to get started, including several networking protocols: a publish/subscribe protocol that we built on top of ZeroMQ; the Lab Streaming Layer protocol, which is very popular in neuroscience; the OSC protocol, a UDP-based protocol for communicating with multimedia applications; and WebSockets, for doing things in web browsers. We can record and replay data and events in the HDF5 format. We have all you need to do DSP and machine learning, and we provide tools to build user interfaces. A few basic applications are included with the code: a monitoring interface to check the signal, and some demo web applications. We also have everything you need for multidimensional matrix manipulation in real time: you can query the data, run expressions on it, epoch it, run windows over it, et cetera. We already mentioned the native device drivers. We can achieve very precise synchronization between the stimuli and the EEG data, which is very important when you do neuroscience and ERP research. If something goes wrong, we have debugging tools: Timeflux tells you precisely where in your YAML file you made a mistake, and you can also render your graph visually. And you can plug in other applications through hooks. For instance, you may want to upload your data file to the cloud right after an acquisition: you write a very simple Python module and register it as a Timeflux hook.

I already mentioned that many applications you build with Timeflux run in a web browser, but they don't have to. You can use any available networking protocol, you can even design your own, and you can plug Timeflux into anything you want. You may decide to build a game in Unity, for instance, with Timeflux as the backend and Unity as the frontend, and play your game in 3D. In this case, we use the WebSocket protocol, and we have a few applications here: on the left, you can see the raw EEG signals; in the middle, a P300 speller, which I will get into in more detail later; and on the right, a basic example showing the signal quality of each electrode.

So how do you design a processing pipeline? There is one essential concept, and it's called a directed acyclic graph. This is something you learn in computer science, but in a few words: a DAG is a set of nodes connected by edges, where information flows in a given direction without any loop. Each node is a processing unit, and the edges are where the information flows. And at the junction of nodes and edges, we have what we call ports.
A node can have multiple ports, that is, multiple inputs and multiple outputs. Each port carries an object containing a DataFrame plus some metadata describing it.

Remember, earlier I talked about a closed loop. How can this DAG, which doesn't have any loop, be compatible with the closed-loop system we talked about? Well, it's simple: we can have multiple DAGs and make them communicate asynchronously. The important thing to remember is that each graph runs in parallel in its own process, while the nodes within a graph run sequentially. When one graph wants to communicate with another, one node publishes information to a broker; the broker stores the data for a while, and when the other graph needs this information, it fetches it from the broker. This is how we can have loops without breaking anything.

Enough theory: how do you describe such a graph in practice? Here is a very simple example, the "hello world" of Timeflux: one graph, four nodes. We have one node that generates random data (in practice, you can imagine that this node would get data from an EEG system), one node that displays data on the console, another node that adds a value to the matrix, and a final node that displays the result. It is all described in a simple YAML file. We declare the four nodes: the random node, the display_before and display_after nodes, and the add node, which, as you can see, can take arguments; here we decide to add 1 to each cell of the random matrix. Then we connect these nodes together: we connect the random port to the add port, the random port to the display_before port, and the add port to the display_after port. And that's it: this is how you describe a processing pipeline. Of course, there are subtleties (each node can have multiple ports, even dynamic ports, so the syntax can be extended a bit), but basically, this is it. Another important thing to know is that each graph has a rate, which determines how many times per second the graph runs. With a rate of 1, the whole graph runs once per second; if you put 20 here, the graph runs at 20 hertz, that is, 20 times per second.

Okay, so what about custom nodes? What if I want to develop my own node, my own logic? Again, it's simple: it's just a class that you extend. It has an optional constructor where you can pass the parameters you saw in the YAML syntax, and you have just one method to implement: the update method, which is called each time the graph is updated. You get the input from one attribute and you set the output on another; Timeflux takes care of everything else, connecting the nodes and passing the values around. Here is a very basic example, the add node we described earlier: in the constructor, we store the value we want to add, and in update, we copy the input to the output and add our value to each cell of the output. It's as simple as that. There are several different kinds of ports, but I won't go into details right now.

We also have plugins, which are collections of nodes. They are just standard Python packages, nothing exotic: you can simply clone the template on GitHub, and it's really easy. If you want to learn more, we have the documentation website.
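For reference, a minimal "hello world" application along those lines might look like the YAML below, launched with the timeflux command-line tool. The module paths (timeflux.nodes.random, timeflux.nodes.debug, timeflux_example.nodes.arithmetic) are quoted from memory of the tutorial, so double-check them against the current documentation.

```yaml
graphs:
  - id: hello
    rate: 1                    # run the whole graph once per second
    nodes:
      - id: random
        module: timeflux.nodes.random
        class: Random
      - id: add
        module: timeflux_example.nodes.arithmetic
        class: Add
        params:
          value: 1             # add 1 to each cell of the matrix
      - id: display_before
        module: timeflux.nodes.debug
        class: Display
      - id: display_after
        module: timeflux.nodes.debug
        class: Display
    edges:
      - source: random
        target: add
      - source: random
        target: display_before
      - source: add
        target: display_after
```

And the custom add node we just described is, give or take, this; the None check is a guard for cycles where no input has arrived yet:

```python
from timeflux.core.node import Node

class Add(Node):
    """Add ``value`` to each cell of the input DataFrame."""

    def __init__(self, value=1):
        self._value = value

    def update(self):
        if self.i.data is not None:
            self.o = self.i              # copy data and meta to the output port
            self.o.data += self._value   # add our constant to each cell
```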
I think I need to accelerate a little bit. Now, in brain-computer interfaces, there is "interface". In Timeflux, we provide a JavaScript API that lets you build user interfaces in a web browser, receive and send data streams and events, and deliver precisely scheduled stimuli, suitable for advanced EEG paradigms such as SSVEP or ERP. This is not easy: there are a lot of things going on in a browser, such as DOM painting, the JavaScript event loop, the screen refresh rate, the security countermeasures that followed Spectre, and even bugs in Chrome. So it's not easy, but we did our best to make it easy to use. You can schedule repeating stimuli or one-time tasks, and you know exactly when a stimulus has been displayed. It's well tested in Chrome; it probably works in other browsers, but those are not as well tested. We went to considerable lengths to ensure that the signal is well synchronized: we flashed the screen and checked that those flashes lined up with the events. Okay, I won't spend too much time on this.

Then we ran a standard neuroscience experiment called the oddball experiment. Basically, you display a boring, repeating stimulus, and from time to time, out of the blue, there is a deviant stimulus, a light or a sound. The brain produces a specific pattern when something goes out of the ordinary, and we can detect this pattern. To be able to detect it, we need to average a lot of epochs together, and here we can see that we get a distinct pattern in the deviant case. This is something you cannot see if you don't have very precise synchronization between the events and the data. We went a little further and tried to classify single-trial ERPs using different machine learning algorithms, and we obtained very high scores. So it works quite well.

Okay, so now let's build the BCI. What I'd like to show you today is how to build a P300 speller, to type with your mind. Let me show you a quick video first. On the left, you have Timeflux; on the right, the monitoring interface. I will go faster. These are my EEG signals from eight electrodes, and the values are in microvolts. Something really interesting is happening here: you see these little waves? These are alpha waves; you can really see them on the screen. And as you can see, the signal is also very sensitive to noise. You see this big spike here? That's when I blink. So the EEG also captures muscle activity.

Then, as usual in machine learning, we need to calibrate, to train the model, before we can make predictions. How does it work? First, we ask the participant to focus on a character. Here, focus on the T. Okay, now it's the I. Each time the character I flashes, we can detect a specific pattern in the brain. And if we also put this nice smiley face on the flashed characters, it's because humans are hardwired to recognize faces: detection is easier because we get both the recognition of the character and the recognition of the face. The characters flash in groups, and we record the signals along with the events, so we know when a character has been flashed and when it has not. From there, we are able to train the model.
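To see why averaging matters for this calibration data, here is a minimal sketch with simulated numbers. The shapes, the target proportion, and the random values are all assumptions; with real recordings, an actual evoked response would emerge instead of noise.

```python
import numpy as np

# Assumed shapes: epochs is (n_epochs, n_samples, n_channels), and
# target[i] is True when the focused character was flashing in epoch i.
n_epochs, n_samples, n_channels = 300, 150, 8
epochs = np.random.randn(n_epochs, n_samples, n_channels)
target = np.random.rand(n_epochs) < 0.2

# Averaging across epochs cancels activity that is not time-locked
# to the flash, so the event-related potential emerges from the noise.
erp_target = epochs[target].mean(axis=0)
erp_nontarget = epochs[~target].mean(axis=0)

# In a real recording, this difference wave peaks around 300 ms: the P300.
difference = erp_target - erp_nontarget
print(difference.shape)  # (n_samples, n_channels)
```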
Let's fast-forward: the model is fitting, it takes a few seconds, and it's done. Now we are ready to enter prediction mode. I look at a specific character, and after a while, the system detects which character I was looking at, just because it recognizes my response whenever that character flashes with a face on it. Let's go forward again... and I was able to spell "HELLO WORLD". How cool is that?

So, in practice, how does it work? There is a GitHub repo (I will give you the URL in a moment) with all the code, everything you need to do it yourself. This is the full graph of the application. I don't have much time left, but anyway: there are five graphs. The acquisition graph is responsible for getting the data from the EEG and doing some basic filtering and signal processing; it sends the data to a proxy, the broker. On the other side, we have the user interface, which sends events to the classifier but also receives predictions from it. We have a basic record graph to save the data to a file. And the most important graph in the application is the classification graph, which receives data and events. It divides the data into small epochs and trims them to ensure that each epoch has the same number of data points. It then passes the epochs to a scikit-learn classification pipeline, and the output goes to an accumulation node, because it's very difficult to make a reliable prediction from a single flash: we need to accumulate a few predictions and apply some Bayesian logic to make a prediction with confidence.

The full YAML file is shown here. This is the classification graph, and this part may be of particular interest: the classification node. You may recognize these steps as a basic scikit-learn pipeline: two transformers and a classifier. First we transpose the data, so it is in the correct shape for pyRiemann, a state-of-the-art library for classification in BCI; it's available on PyPI, and I encourage you to check it out. pyRiemann is then responsible for the classification itself. So we have a first transformer and then the classifier.
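The talk does not spell out the exact estimators, so here is a hedged sketch of a typical pyRiemann ERP pipeline in scikit-learn. The choice of XdawnCovariances, TangentSpace, and LogisticRegression is a common one for P300 classification, not necessarily the one used in this demo, and the data here is random; note the (n_epochs, n_channels, n_samples) shape pyRiemann expects, which is why the real pipeline transposes the epochs first.

```python
import numpy as np
from pyriemann.estimation import XdawnCovariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed shapes: X is (n_epochs, n_channels, n_samples),
# y is 1 when the focused character was flashing, 0 otherwise.
X = np.random.randn(200, 8, 150)
y = np.random.randint(0, 2, 200)

pipeline = make_pipeline(
    XdawnCovariances(nfilter=4),  # spatial filtering + covariance estimation
    TangentSpace(),               # map covariance matrices to a flat space
    LogisticRegression(),         # plain linear classifier on top
)
pipeline.fit(X, y)

# A single flash is rarely enough: in the speller, probabilities from
# several flashes are accumulated per character before deciding.
print(pipeline.predict_proba(X[:1]))
```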
Here we have the custom node, and here the JavaScript and HTML for the interface. I tried to comment it as well as I could, so please have a look at the code, and if you have any questions, please reach out. There is only one demo for now, but more are coming: pipelines for the standard BCI paradigms (SSVEP, CVEP, P300, motor imagery, neurofeedback), but also for cardiac signals and EMG gesture detection. Another thing that is coming is the hub: everything bundled into one single application, with nothing to install. You are developers, but neuroscientists sometimes get lost with Python dependencies; it will include a graphical user interface and an easy way to launch and monitor applications. We have barely scratched the surface today: there is a lot more to know and to learn, so please don't hesitate to reach out. Register your email on the website: I promise we won't spam you, but we will send you information when we run workshops, hackathons, and that kind of event. I encourage you to check the documentation and tutorials, to report bugs on GitHub, and to join the Slack channel if you have any questions. And if you are a startup or a company and you need consulting services, please contact me by email. This is the address of the repository. Maintaining such a project takes a considerable amount of time and energy, so please play the social game and give us a star on GitHub; you can even sponsor the project if you want. If you like it, talk about Timeflux around you, and if you use it, let us know. Thank you. I'll take questions if there are any.

Thank you very much. Very nice project; I really like this concept with the YAML files and the graphs, that's really great stuff. So here is the first question, and it's more about the hardware: what is the approximate price range for the consumer EEG hardware, and what about the professional ones? Okay, so if you want a full open-hardware set, it's around 1,000 to 1,500 euros. The really professional, research-grade ones can be 30,000 to 40,000 euros. But to really start playing with BCIs, you need to spend at least 1,000 euros. Is that just the EEG? What about the eye tracker? That is, I guess, even more expensive. Yes, but you don't really need an eye tracker, of course. Okay. At the moment I don't see any more questions, and anyway we are running out of time before the next break; I think the coffee break is starting now. So thank you very much again, Pierre. Thanks, Martin. And see you around, virtually. Okay, bye.