Hello, good morning. My name is Andreas Klostermann and this talk is called Brain Waves for Hackers. It's about using our Python tools to understand our brains a little better. "Hackers" here means people who build stuff, not people who try to hack your brain or something. Please don't take any of this as medical advice; this is mainly a hobby project for me, not an academic endeavor. I've given two talks like this previously. If you have seen them, I'll still show you something new; if you haven't, you can probably follow along quite nicely without them.

First of all, what are brain waves? Brain waves are created by your brain cells. You probably know your brain is composed of billions of brain cells, and these communicate with each other via electrical signals. These signals travel outwards through your skull and through your skin, and if you have a sensitive sensor, you can measure the waves at those locations. As this already implies, the signals are very weak and very noisy, and you can only measure the summation of billions of cells at once. You can't listen in on a specific brain cell or a specific group of brain cells, and that makes the signal analysis very challenging and limits what we can get in terms of information.

So what can we do with brain wave technology at all, especially as hobbyists? We can do neurofeedback training, and I think that's a very big and very good area for consumer-grade devices. It is about training your brain to attain certain mental states. The way it works is that we use brain wave data to assess, for example, how concentrated a user is, and then on the screen or via headphones we give them feedback about this mental state, and they try to increase it.
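As a rough illustration of that loop, here is a minimal sketch in Python. `read_concentration_score` is a made-up stand-in for a real headset driver, and the printed text bar stands in for the on-screen or audio feedback:

```python
import random

def read_concentration_score():
    # Stand-in for a headset driver call; a real device would derive
    # this score from live EEG band powers.
    return random.uniform(0.0, 1.0)

def feedback_loop(n_steps=10):
    # One neurofeedback pass: measure a mental-state metric and show
    # it to the user, who then tries to push the number higher.
    history = []
    for _ in range(n_steps):
        score = read_concentration_score()
        history.append(score)
        # A real application would update a plot or play a sound here.
        print(f"concentration {score:4.2f} {'#' * int(score * 20)}")
    return history
```

The interesting part in practice is not this loop but the metric itself, which comes from the frequency analysis discussed later in the talk.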
By trying to increase it, they learn how to consciously control this brain state, and this kind of training has proven quite successful for, for example, ADHD and epilepsy. But if you want to use neurofeedback for a medical indication, please consult a doctor.

The second thing you can do is simple neurological experiments. I've done some correlation experiments and tried to figure out how the Muse brain band in particular works. There's another thing that I got to work with the NeuroSky device, the auditory evoked response potential, but since I couldn't get it to work here without some serious yak-shaving, I didn't bring it today and I won't even explain what it is.

Then there is the area of brain-computer interfaces. These BCI applications are a bit like neurofeedback, but the goal is actually to control a computer or a robot just with your brain. It's very difficult, and it's not really my main interest because it doesn't work very reliably. You have to find a way to have some kind of button, for example: a button that you can press when you want to press it and not press when you don't want to, and that's kind of hard, especially with the devices I have. Some researchers have had more success with electrodes implanted under the skull, not necessarily for this reason but rather for epilepsy treatment, and then they did BCI experiments and it worked very nicely. But please don't do that at home. Theoretically you could use EEG devices like these for diagnostic purposes, but I don't believe it's legal to do that with these devices, and they probably aren't really suited for it.

Now, the device I presented in my last few talks is the NeuroSky MindWave headset. It's a Bluetooth-connected device with one sensor at the front. I have it here: you wear it like this, and it has a reference electrode and one channel.
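To make the "button" problem concrete, here is a small sketch of one common approach: a hysteresis threshold over some concentration score. The score trace and both threshold values are invented for illustration, not taken from any device:

```python
def brain_button(scores, press_level=0.8, release_level=0.5):
    # Turn a noisy score stream into discrete "button presses" using
    # hysteresis: press when the score rises above press_level, and
    # re-arm only after it has dropped below release_level. This
    # avoids one sustained peak firing the button many times.
    presses = []
    armed = True
    for i, score in enumerate(scores):
        if armed and score > press_level:
            presses.append(i)
            armed = False
        elif not armed and score < release_level:
            armed = True
    return presses

# A score trace that rises, falls, and rises again:
trace = [0.2, 0.6, 0.9, 0.85, 0.4, 0.3, 0.95, 0.7]
```

Even with hysteresis, the hard part remains making the underlying score reliable enough that the button fires when, and only when, the user intends it.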
Inside this headset is an ASIC, an application-specific integrated circuit, and it can be used in other devices too. For example, there are board games where you control a ball moving up and down with your brain waves, and there's a headset with cat ears mounted on servos, where the ears move according to how relaxed or concentrated you are. There's also another headset which is a bit more expensive but more comfortable to wear. They all use the same protocol and have the same characteristics.

The second device is the InteraXon Muse. The Muse is a bit more expensive, about 300 euros, and it has four channels. It also connects over Bluetooth, and it has a very good app for Android and iOS that teaches you to meditate and gives you feedback on how calm you are and how you progress over time. It uses lots of gamification, and I think the app is really very well designed. Overall I'd say the Muse is also a lot more comfortable: it's just a headband, and it's a bit flexible so you can adjust it to your head.

In direct comparison, the Muse has four channels and the MindWave has one, so in theory the Muse will give you better quality data. The Muse also has accelerometer channels, which measure how the head is moving. That's quite useful, because if you know the head is moving, you can already say the brain wave data is probably not very useful and will probably contain artifacts. They have different sample rates: the MindWave uses 512 samples per second, for bit reasons (it's two times 256). The Muse is a little troublesome with its presets: you can't change options directly, you have to select certain presets, and there are consumer presets and research presets.
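That accelerometer-based sanity check can be sketched in a few lines. The threshold here is an arbitrary illustrative value, not one from the Muse documentation:

```python
import statistics

def movement_artifact(accel_window, threshold=0.05):
    # Flag an EEG window as suspect when the accelerometer shows head
    # movement: high variance in acceleration usually means the EEG
    # trace is dominated by motion artifacts, not brain activity.
    return statistics.pstdev(accel_window) > threshold

still = [0.01, 0.012, 0.011, 0.009]   # head held still
moving = [0.0, 0.3, -0.2, 0.4]        # head shaking
```

Windows flagged this way would simply be dropped before any frequency analysis.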
What the documentation didn't say is that the firmware distributed with the Android app does not give you the research presets, so you only get 200 Hz. The protocol is also somewhat compressed: they use a variable bit-length scheme to encode their packets in the consumer presets. This year they switched to a newer firmware in the Android app, and then I got the research presets to work, and that worked nicely. But the newer versions of the headset (I have the 2014 one, and probably the 2016 one) don't do research presets and only have the 200 Hz mode.

Very importantly, both these devices have dry sensors, so you don't need to put any gel or fluid on them to make them work, as you do with medical devices.

The MindWave also has pre-computed measures, I'd say: there's a value from 0 to 100 for "attention" and one for "meditation", as they call it. This is already completely derived from the brain wave data, and the only thing you have to do is react to this one byte. That means even an Arduino or a micro:bit would be able to parse this protocol and do something useful with it. With the Muse that's not possible, because it only gives you raw data, and you have to process that raw data yourself, which takes more processing power. On the other hand, the Muse data is less processed, so you have more control over how to process it. That's why I rate the hackability of the MindWave as excellent and the Muse's as only good. The Muse does have documentation, though I found it a bit confusing or even wrong in places, and the device is accessible over Bluetooth, but the MindWave really is a bit easier to work with, and the raw data you get from it is also easier to process. These devices are also very different in price: the MindWave costs about 130 euros, and the Muse about 300 euros, or a bit more, last time I checked.
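As a sketch of how little work that parsing takes, here is my reading of the packet format NeuroSky publishes for its ThinkGear chips: two sync bytes, a payload length, data rows, and a checksum, where single-byte rows like attention and meditation carry exactly one value byte. This is an illustration based on the published protocol description, not code from the talk:

```python
CODES = {0x02: "poor_signal", 0x04: "attention", 0x05: "meditation"}

def parse_payload(payload):
    # Walk the data rows: codes below 0x80 carry one value byte,
    # codes from 0x80 upward (e.g. raw EEG) have an explicit length byte.
    values, i = {}, 0
    while i < len(payload):
        code = payload[i]
        if code < 0x80:
            if code in CODES:
                values[CODES[code]] = payload[i + 1]
            i += 2
        else:
            i += 2 + payload[i + 1]  # skip multi-byte rows
    return values

def parse_packet(packet):
    # A packet is: 0xAA 0xAA, payload length, payload, checksum.
    if packet[0] != 0xAA or packet[1] != 0xAA:
        raise ValueError("bad sync bytes")
    length = packet[2]
    payload = packet[3:3 + length]
    if (~sum(payload)) & 0xFF != packet[3 + length]:
        raise ValueError("bad checksum")
    return parse_payload(payload)

# Hand-built example packet: attention 70, meditation 55.
example = bytes([0xAA, 0xAA, 4, 0x04, 70, 0x05, 55,
                 (~(0x04 + 70 + 0x05 + 55)) & 0xFF])
```

Because each row is so simple, even a microcontroller with a few kilobytes of RAM can react to the attention byte in real time.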
The third kind of device, which I don't own however, is the OpenBCI. They have two versions, one for about $100 and one for $400 or $500, I don't remember exactly, but one has four channels and the other has eight, which is more than even the Muse. The advantage of the OpenBCI technology is that it's extremely flexible: you can program the firmware yourself if you want, and you can even reprogram the amplifier, so you can also measure electrical heart signals or electrical muscle signals. The big disadvantage of this device is that it's very do-it-yourself. It doesn't even have an enclosure, you have to attach your own electrodes, and those electrodes probably have to be wet, so all in all it takes a lot more effort to set up. But it probably produces very good data, it's probably very useful for custom mods, and as the name OpenBCI implies, it's mainly geared towards brain-computer interfaces.

Now, what I've done with this stuff is build a little library called Physiology, geared towards real-time analysis of physiological time series. Physiological time series can be different things: brain waves, for example, but also ECG (heart waves, if you want to call them that), muscle movements, breathing rates, or anything else you can think of, and of course they are transmitted at different frequencies. I'm using the pandas time series functionality to make this a little less painful. It currently supports the MindWave and the Muse; I'm also working on a BITalino driver, if you know what a BITalino is (it's also something for ECG and similar signals), but that doesn't really work that well yet. It especially supports multiple and irregular time series, where you don't really know when the data is coming and the rate may vary, and pandas supports that really well. Another main feature is integration into
the Jupyter notebook, and I try to make that relatively painless.

Now I can try to show you how this works. I have to adjust the headband on my head... now it's streaming, but it's not quite right yet; this device is a bit finicky. That looks a bit noisier than it should, but in any case, the problem with this kind of electrophysiological measurement is that virtually anything in your face or on your head will produce a stronger signal than your brain. For example, I can move my eyes, or I can clench my teeth. I'm displaying two channels here: the blue one is the left channel and the red one is the right. I can try to move the left side of my face... it doesn't really work at the moment... now I can move the left side of my face and then the right side, and as you can see, the strength of the signal is slightly different, because the distance to the sensor is different.

The architecture of this is as follows: there's an asyncio server running as a separate process, which I did to isolate all the Bluetooth handling from the notebook. It communicates with the Bluetooth headset to get the data and then sends it onwards to the IPython kernel. Inside the IPython kernel I have some code that manages an experiment; it can record the data and then update, for example, a Bokeh time series. On the browser side of the notebook there's also some software. Currently I'm not using any custom JavaScript (earlier I had to), because Bokeh now has push_notebook, and that works very nicely. But it's important to know that the browser is a different network endpoint than the server or the kernel, and these are all different moving parts.

What Physiology is mainly about is doing experiments, and experiments can be almost anything: mostly some recordings, and maybe some feedback or sound experiments or whatever. I have an experiment class, and these are declared like this. In this case I
would be declaring two devices, the Muse and the MindWave, with their different MAC addresses; the server is then contacted and tries to reach these devices and stream their data. This is a more practical example with only one device. In the second cell there is, first of all, an HTML widget from the ipywidgets library, which is later updated to show some information, and I instantiate the experiment class with a file name where all the data is stored. The experiment class instance is then used as a decorator on a handler function, and this handler function is called every time new data arrives. It checks whether the experiment already has data on the AF7 time series, which is one of the channels, and then there's a display_values call which updates the HTML dynamically, in this case just with the current length of the time series. That's the problem there: it jumped to 30 seconds because it had data in the queue. I still have some problems with timing and synchronization, and it could be a bit smoother, but it works quite nicely, so with relatively simple means I can already build some kind of feedback.

On synchronization: these experiment classes contain time series, as I said, for example the AF7 channel from the Muse, and this shows the first few samples. Every sample has a timestamp. This timestamp isn't really the true measurement time; rather, I try to figure out inside the server at which point the sample was measured, using some simple heuristics, because unfortunately the Muse doesn't contain any logic to tell you when it sampled something.

To analyze the data, we often need to window it. A single sample of a brain wave is relatively useless; what's more useful is, for example, the different frequencies you are seeing, or the signal strength over a certain time period. So, for example, you would window the data with a window width of 3 seconds and a step size of 1 second. That means the first window covers 0 to 3 seconds, then 1 to 4, then 2 to 5, and so
on, and this is turned into an iterator, and then we can do more processing on it. In the analysis, here at the bottom, I map this processing function over the windows and put the result into a DataFrame. The processing function generates several features for every window. Especially important is the standard deviation, because it's a nice way to find out whether there is noise in the system or any bigger artifacts. At the bottom here I also calculate some frequency measurements; it's not really important to understand them right now. There are several frequency bands: when you have alpha waves you should be more relaxed, and beta waves mean more concentration, and that's important to know. This DataFrame is now just a standard pandas DataFrame with columns, and now that we have this information we can do normal data science stuff, for example correlations.

As I said, the Muse has four channels, and two of these are on the front of the head: AF7 and AF8. I'm plotting them here together. The alpha strength on the left correlates very nicely, just as the beta 1 strength does, and this means that even if you have more than one channel, the channels carry relatively similar information. But you can also imagine that having two channels measuring more or less the same thing is more reliable, or more precise.

The second thing I tried is to correlate the Muse and the MindWave. I did what I showed earlier: I put two devices into one experiment and wore both devices at the same time, which I wanted to show you because it looks silly. But it doesn't really correlate as well. It could mean that I didn't put much work into it and the signal was horrible; that could be one reason why the correlation isn't so good. It could also mean that the devices have very different latencies. So that's basically everything I have to tell you today. I'm trying to do more experiments, but most of the work I have done so
far is in integrating these devices and trying to make them work, so I haven't really progressed into doing much experimentation. Anyway, the code is also on GitHub; I'll have to publish it later because I have to clean some things up. The slides will also be on GitHub: this whole notebook presentation and some supporting code will be there, and I'll tweet it later.

Now I'd like to talk about some other stuff I've been up to. I've been working on Porekit-Python, a library I created to deal with Oxford Nanopore data, which is DNA sequencing data. The Oxford Nanopore MinION is a DNA sequencer about the size of a computer mouse; it can sequence DNA and has a particular data format, and I'm trying to work with that. Maybe next year I'll present "Genome Sequencing for Hackers". I'm also working on a workshop called "Presenting with Jupyter", which contains several tricks I'm also using here: you probably noticed that I don't have the help button, the exit button and various other ugly artifacts, and that I changed the transition styles and adjusted the size a bit. That's all nice to know and not so common knowledge; a lot of presenters here had problems with screen resolutions. I also did something called Jupyter FlightGear, a silly afternoon hack: I took FlightGear, which is open source flight simulator software, and managed to stream data from FlightGear directly into the Jupyter notebook and draw some graphics. In theory you could write your own autopilot. I just wanted to mention it here because it's sort of cool, but I had to abandon it for lack of time.

Now I'd like to thank you for your attention. About Physiology in particular: it doesn't have the Muse code yet, but it will today. Another project I've been working on is Table Cleaner, which cleans up DataFrame-like data and turns the errors themselves into data. The notebook assets in there are something I should have deleted. So we now have time for questions, and I'd like to start with that.
Anyone? Questions? We should have a microphone, but we don't even have a chair.

It depends. Sorry, let me repeat the question: he's asking about headphones. Which name? Cocoon, from Kickstarter. They have EEG sensors somehow; maybe it's an interesting device to add support for. Yes, if you want to do a pull request, or if you need help, I can certainly help you.

Can you please repeat that? By visibility you mean the data quality? The question was which device has better data quality, and I'd say the Muse clearly gives you more data, and you can probably get more information out of it. It depends on the kind of application you want: the NeuroSky chip does a lot of signal processing already inside the device, and that can have advantages and disadvantages. I'd say the Muse certainly produces more data and probably more information, especially from different parts of the brain.

Can you please speak louder? ... Hardly. You can tell some things about the mental state, but even that isn't very useful without knowing... I have to repeat the question: if you have a recording, can you tell what the person was doing? So the question was: can you tell what someone is doing just from the brain wave data?
No, you can't really, because it depends very much on the person, and it's very vague information. You can maybe tell whether someone is very concentrated or very relaxed, but that depends on how their brain waves are usually distributed, and there are interpersonal differences. Yes, it's possible to recognize feelings if you set up the experiments right, I guess.

The next question was about the fact that I only showed two channels, and whether I can analyze different kinds of wave bands inside the brain waves. The data analysis works like this: you can do a Fourier transform and then detect the strengths of different frequencies. In my example I only calculated alpha and beta 1, but I could of course calculate everything else, and it should be more precise with the Muse than with the MindWave.

If there are no further questions, then I'd like to thank you again, and I'll see you next time.
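For the curious, the Fourier-transform band analysis described in that answer can be sketched like this. The 220 Hz sample rate is an assumption for the example, and the band edges are common conventions rather than values from the talk:

```python
import numpy as np

BANDS = {"alpha": (8.0, 12.0), "beta1": (12.0, 20.0)}

def band_strengths(signal, fs=220.0, bands=BANDS):
    # Estimate the strength of each frequency band from the FFT of
    # one signal window: sum the spectral power of all FFT bins that
    # fall inside the band's frequency range.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: float(power[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in bands.items()}

# A 3-second window dominated by a 10 Hz (alpha) oscillation:
fs = 220.0
t = np.arange(int(3 * fs)) / fs
strengths = band_strengths(np.sin(2 * np.pi * 10.0 * t), fs=fs)
```

Extending this to the other standard bands (delta, theta, gamma) is just a matter of adding entries to the `BANDS` dictionary.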