Really, it's about a very simple question: where do we plug in the connector between the brain and the machine? How can we adapt the brain? That's a question that has occupied people for a very long time; there are many approaches, lots of research, many ideas. And we are very lucky that Lucy is here. Lucy studied electrical engineering in Karlsruhe and has basically been doing it forever. Electrical engineering, that is, not studying. And at some point she got that one famous call: hey, do you feel like joining us? And then, unfortunately, she left Karlsruhe and moved to SF, or somewhere in that direction, and got to work on this topic; she ended up at Neuralink. And now she is going to tell us about the different methods and which companies are working on them. I'm very excited. A round of applause for Lucy, and enjoy! Yes, thanks for this wonderful introduction. I'm Lucy, and I'm here to talk about brain-machine interfaces, or brain-computer interfaces. These two terms are pretty much interchangeable; you find both in the literature. I personally prefer brain-machine interfaces because it's more general than computers, and who knows what the future brings in terms of technology we want to connect to our brain. Fundamentally, the question I want to answer today, or, since it's impossible to answer this question in 45 minutes, the question I want to get you started on thinking about, is: how do you possibly read data from and write data to the brain? This is the outer scope of this presentation. As I go through my slides, as I present different technologies, I want you to think about this problem. I want you to connect existing knowledge you might have from engineering or even software development, and think about how what you already know applies to this topic and might provide a solution for some of these challenges.
First, I need to clarify what I even mean by data and why it is important to communicate with the brain, because obviously we are already doing brain-to-brain communication by me standing here speaking through this microphone. In fact, this is already brain-computer communication: I'm talking into this microphone, my voice is streamed to the internet, and someone at a computer is already listening to the signals in my brain, just with all these intermediate layers in between that limit the bandwidth. Fundamentally, what we have today in terms of language, which humans have evolved over thousands of years, is pretty much this graph. You might agree or disagree with some details, but fundamentally, when we talk about human-to-human communication, the bandwidth, how fast data can be exchanged, is bottlenecked by speech. Concepts in my head, in my brain, need to be compressed into speech, which is then transferred through the modulation of my voice by muscles, travels through the air into someone's ear, and has to be decoded back into a concept. This is pretty slow. Things work much faster if you don't have this intermediate step of transferring knowledge through body language, speech or gesturing: you can imagine things much quicker than you can express them. Some of you might have an inner monologue, some of you might not; independently of whether you have some speech-based representation of your consciousness, you can almost certainly visualize or imagine emotions or abstract concepts at a much higher bandwidth than you can communicate them. This is what we call thinking, and this is what we have had for the past few thousand years with spoken language and, more recently, written language. Now, a couple of years ago, we introduced these new devices called computers that we also need to interact with, that we also need to communicate with.
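To put rough numbers on this bottleneck, here is a hedged back-of-envelope calculation. The rates (40 words per minute for typing, 150 for speech) and the figure of roughly one bit of entropy per character of English are illustrative assumptions, not numbers from the talk:

```python
# Back-of-envelope comparison of human communication channel rates.
# All constants are rough assumptions for illustration only.

AVG_WORD_LEN = 5        # characters per English word, incl. trailing space
ENTROPY_PER_CHAR = 1.0  # bits; Shannon's classic estimate for English text

def channel_bits_per_second(words_per_minute: float) -> float:
    """Approximate information rate of a word-based channel in bits/s."""
    chars_per_second = words_per_minute * AVG_WORD_LEN / 60.0
    return chars_per_second * ENTROPY_PER_CHAR

typing = channel_bits_per_second(40)    # casual typing speed
speech = channel_bits_per_second(150)   # conversational speaking speed

print(f"typing: {typing:.1f} bit/s, speech: {speech:.1f} bit/s")
```

Even with generous assumptions, both channels land in the single to low double digits of bits per second, many orders of magnitude below what a modern display pushes into the eye, which is exactly the asymmetry described here.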
The means of communication available to us today for communicating with computers are significantly worse than what we can do between humans. Think about human-to-computer communication: if you are on the internet, composing some post, you are most likely using a keyboard, a very slow mechanical thing where you press individual buttons, significantly slower than talking. The other direction is a little better, because humans have this great interface with the environment, the eyes, a very high-bandwidth channel into the brain, and we have modern display technology that can present information at 4K resolution or even higher. So this channel is a little better, but it's still bottlenecked. What virtual reality fundamentally tries to solve is to get this bandwidth of human-to-computer and computer-to-human communication to the same level as today's human-to-human communication. All this research happening around the metaverse, VR glasses and haptic feedback is basically trying to replicate the environment we already interact with inside some virtual, digital world, where you have access to the full bandwidth of this communication, ideally to make things like teaching and the transfer of information, knowledge and emotions easier and more accessible. But this still has the fundamental limit, the bottleneck, of our brain having to push information through the spinal cord and through muscles in order to exchange data. So if at some point we want to go beyond the communication speeds that are possible with traditional language, we need direct read and write access to the brain. This is essentially what brain-machine interfaces try to achieve: pushing the limits of human-to-human and human-to-computer communication to the fundamental limits of what your brain is capable of processing.
And to get an understanding of how we might get there in some sci-fi future, we need to understand how neurons work, how they already talk to each other, and what we can do with technology to access this data, for both input and output. As it turns out, and as you probably already know, the neurons in our head already use electrical signals. Evolution, millions of years ago, already realized that electric fields are a great way to transfer information; there are tons of physical mechanisms that rely on electric fields, which modulate, for example, the ion flow through gated ion channels in the membrane of neurons. And just for you to get an idea of how big these neurons are and what scale we are talking about: this is a screenshot from the H01 dataset, which was brought into existence by Google's brain-mapping research project, a great project in which one cubic millimeter of donated human cerebral cortex tissue was sliced into nanometer-thin sections and fully imaged, at four-nanometer resolution, using scanning electron microscopy, to a level where you can identify and see individual synapses and create a connectivity graph of how the neurons are wired. Lots of researchers have gained tons of valuable insights from this dataset. The key takeaway you should get from looking at this is a sense of scale. When we think about neurons, about neuroscience and neurosurgery, we probably imagine everything to be incredibly small. But in fact a neuron, as you find it in your brain, isn't that small: it's on the order of 20 micrometers, about the same diameter as a fine human hair, and it can be visible to the naked eye or with a simple optical microscope. In fact, the technology we are capable of creating with microfabrication is significantly smaller than the neurons in our head.
The transistors that you find in your smartphone or laptop, and even the pixels of this display, are on the order of dozens of nanometers, multiple orders of magnitude smaller than what biology came up with to do computation. So we already have the technology, we already have the means of creating structures that are on the same order of magnitude as, or smaller than, the kind of structures biology uses. And as I explained earlier, neurons already use electrical signals to communicate with each other. Neurons use sodium-potassium pumps to create a gradient in ion concentration between the inside and the outside of the cell, which causes a resting potential of about minus 70 mV across the membrane. As a neuron is triggered or stimulated by neurotransmitters, a wave travels through the neuron. I hope you can see this, the contrast is pretty bad, but you can basically see the voltage potential across the cell membrane changing as the neuron is triggered; we call this voltage spike an action potential. This happens about 0.1 to 10 times per second for every neuron in your head that is actively involved in some form of processing, whether that's thinking, language or motor control. What's important to take away from this, and this was very surprising to me as an electrical engineer, is that the voltage levels we are talking about are pretty significant. If you are involved in audio technology in any way, if you have done audio engineering or anything with sound, you might realize that this voltage amplitude is on the same order of magnitude as the signals you use in headphone amplifiers. In fact, in the 70s, in the early days of neuroscience research, people would literally stick little needles into the brain, hooked up to a guitar amplifier, and you could directly listen to the clicking noise of individual neurons firing.
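Since these action potentials are just voltage spikes in a sampled waveform, a first-pass detector is a simple threshold crossing on the digitized signal. Here is a minimal sketch on synthetic data; the spike shape, noise level and threshold rule are assumptions for illustration, not how any particular implant does it:

```python
import math
import random

def detect_spikes(samples, threshold, refractory=30):
    """Return indices where the trace crosses `threshold` upwards.

    `refractory` skips samples after each detection so one action
    potential isn't counted several times.
    """
    spikes = []
    i = 1
    while i < len(samples):
        if samples[i - 1] < threshold <= samples[i]:
            spikes.append(i)
            i += refractory  # ignore the rest of this spike
        else:
            i += 1
    return spikes

# Synthetic trace: noise around a -70 (think mV-scale resting potential)
# baseline with two injected spike-like bumps at known positions.
random.seed(0)
trace = [-70 + random.gauss(0, 2) for _ in range(1000)]
for start in (200, 700):
    for k in range(30):
        trace[start + k] += 90 * math.exp(-((k - 5) ** 2) / 20)

print(detect_spikes(trace, threshold=-30))
```

In real recordings the threshold is usually set adaptively, for example as a multiple of the estimated noise level, and the detection runs independently per channel, but the structure stays the same.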
Obviously, the signal is significantly attenuated as you move away from an individual neuron, and tapping into a single neuron with our current technology is very hard; it is possible on a benchtop, but it is not feasible for any large-scale electrode array. So the actual signals we will be talking about in a second are much smaller, but I wanted to give you an idea that the signals flowing around in your head are actually pretty strong and can easily be captured by today's analog-to-digital converters. Another interesting thing to understand, in order to come up with solutions for interfacing with the brain, is how the brain is structured. This is an artistic interpretation of neuron distribution and connection; it is not accurate, it is not an actual brain scan. What is interesting to take away from it is that the cerebral cortex, where the highest level of your consciousness takes place and where neurons are connected at the highest density, basically only covers the outer surface of your brain, and this layer of highly connected neurons goes down only about 3 mm deep. Sometimes this layer dips into brain folds, which makes it hard to access from the outside. But after all, you don't need to scan the entire brain if at some point you want to build a full brain-machine interface: the outermost layer is sufficient to capture most of what matters, because the white matter below is mostly responsible for relaying information between different areas of the brain or down the spinal cord, from where it is distributed through your body and where sensory signals come back from the surface of your skin.
With this being said, we can dive into the different technologies that try to read these brain signals from neurons. Fundamentally, there are two different approaches to how brain interfaces work. There are technologies, which I will talk about in the scope of this talk, that directly try to measure this electrical voltage as a waveform. However, there also exists technology that is significantly less invasive and tries to measure secondary effects of this neural activity, for example fMRI or fNIRS, which you probably commonly associate with brain imaging: when you look at, I don't know, "your brain on acid" or something like that, those visualizations where you see different areas of the brain light up, these are usually fully noninvasive scans that detect secondary metabolic processes caused by increased brain activity. So if some area of the brain shows increased activity, its metabolism increases, the oxygen concentration of the blood in this area changes, and this can then be detected by a sensitive fMRI scanner. However, this is not the kind of technology I want to focus on in this talk, because the problem with it is that, since it is a secondary process, the reaction to brain activity is very slow; it takes time for these byproducts to build up to the point where they can be detected. I personally think that for this to become relevant in modern technology, you need to look directly at the electrical signals in, or generated by, the neurons. With this being said, I want to separate the brain-machine interfaces that directly measure electrical signals into three types, ranging from noninvasive to minimally invasive to fully invasive technology. The simplest brain-machine interfaces, which you might have already tried on yourself, are so-called EEG sensors. These are sensors that you basically stick on the outside of your head, on the skin, and you are measuring the
voltage of a whole bunch, thousands, of neurons firing in the same time interval. This superimposed firing of neurons causes a large-scale voltage difference; we call these local field potentials. They can be strong enough to travel all the way through the skull and be detectable using very sensitive amplifiers on the surface of your skin. The obvious advantage of this technology is that it's noninvasive: you can just put on those electrodes, some of them are not even sticky, they literally just make contact with the surface of your skin, you can read signals, and when you are done with your session you can take the brain-machine interface off. The big disadvantages are that (a) the signals are very weak and (b) you don't get high spatial resolution, because, as you can imagine, by the time all these superimposed electrical signals make their way through the poorly conductive skull, they are mixed together and you only see very significant brain activity; you don't have good spatial or temporal resolution. This gets a little better as you go below the skull, below this insulating layer that nature put in place to protect the very thing we try to access, with so-called ECoG arrays, or electrocorticography arrays. These are usually two-dimensional structures designed to lie on top of the surface of the brain, or, to be more precise, on top of the dura, which is an additional separating layer between the brain and the bone that separates the fluid surrounding the brain from the environment. By not penetrating the dura, you greatly reduce the risk of an actual brain infection. All you need to do is make an incision, cut a hole in the skull; you can place this electrode array on top of the dura without damaging any of the brain structure, close this incision, this craniotomy, back up, and you're pretty much done. The advantage is that you get significantly stronger signals because you're much
closer to the neurons. But every single electrode is still recording from many hundreds of thousands of neurons, and if you want really high spatial resolution, if you want to decode individual muscle movements in the motor cortex, if you want to stimulate individual pixels of your visual field, you really need to talk to single neurons. And for this, as it stands right now, you unfortunately need to stick needles into the brain. This is what we call brain-penetrating microelectrodes, and these form the most advanced technology today; more on them in a second. So this is pretty much the biological background you need to understand why it is so hard to interface with the brain. First of all, the signals we are looking at are pretty small, and they are in a very hostile environment. The body is very good at protecting itself; everything you put in is recognized as a foreign object by the immune system. So interfacing with neurons is a materials science problem and a miniaturization problem, where you want to make electronics as small and as energy efficient as possible, to cause the smallest possible damage to healthy brain tissue. And this brings us to the so-called congestion problem, which you will understand in a second. So first of all, back to EEGs. One noteworthy platform is the OpenBCI project, and I highly recommend taking a look at it. This is a really cool open-source project started in 2013. It is fully open-source software and hardware that provides eight high-gain EEG channels on a circuit board with wireless connectivity. The authors of this project provide 3D files to print your own little headset to wirelessly record and stream this data. What is really cool about this is that, since it is open-source software, you can easily script it and design your own studies. For example, on the right of this slide you can see how someone used an Arduino to script an experiment where the person wearing this BMI is
instructed to move their right finger, and at the same time you can look at these local field potentials and see the broad signals picked up by the electrodes on the surface of the skull. This works great for some applications, specifically if you don't care about very precise movement intentions, if you don't care about high-fidelity data. There are commercial products using this technology that focus on concentration and try to give the user feedback on how well they are focusing on a specific task. So for stuff like this, or for sleep tracking, for example, where you want to detect the different phases of your sleep, this is great technology. But it doesn't get us to a point where it actually increases the bandwidth of communication compared to a keyboard, where you can literally type. This brings us to the category of minimally invasive brain-machine interfaces. One noteworthy company in this field is Synchron, which came up with the great idea of using a stent, an existing medical product that has been in use for multiple decades at this point, that can be inserted through the blood vessels of your body, because some of these vessels end up in your brain. So you can get electrodes all the way through your neck into specific areas of the brain. You can probably see the wire of one of these stent electrodes running up the vasculature all the way into the motor cortex, without making an incision into the head. This solves a bunch of the problems we have with EEG: we are in much closer proximity to the actual neurons and we get much stronger signals. But the big disadvantage remains that we record from multiple hundreds of neurons per electrode, so we can't really differentiate between specific movement intentions, specific visual stimuli, or, potentially in the future, specific emotions. And to give you an example of what we can do with this
technology: they have been running clinical trials as of this year, and one application they are currently demonstrating is for patients with paraplegia, who are basically paralyzed from the neck down. To interact with computers, these patients usually use eye trackers, since they can still move their eyes to move a cursor around an on-screen keyboard, but to press buttons they have to perform this very annoying gesture where they hold their gaze for a specific period of time to click. What this BMI enables for these patients is to perform this click action, and a secondary zoom action, by thought alone, without actually moving their eyes. And while for fully able people this seems like not that big of a deal, it is a great improvement in quality of life for people living with disabilities. The next step in helping people with paraplegia was taken by BlackRock Neurotech. We are getting progressively more invasive here, and it becomes progressively more scary. Also, I guess I forgot to mention this, it wasn't in the abstract, but trigger warning: there will be pictures of actual brains over the next couple of slides, so if you are sensitive to seeing blood you might want to leave now. I don't know, just putting it out there. This is the first of all the companies I present that aims to drill a hole into your skull, and the reason they do this is because they invented the so-called Utah Array. The Utah Array is a piece of silicon manufactured using traditional microfabrication technologies that have been available in the semiconductor industry for over 40 years. It uses a silicon substrate to 3D-structure these little needles that you can see on the right. Each little needle is incredibly thin, with a tip diameter of only 3 micrometers, and this specific Utah Array has a total of 256 channels that are connected through physical wires to a percutaneous connector that basically provides
connectivity between these individual electrodes and an external reading device. This company has not yet managed to make a fully implantable device, so you have this connector, which is not only awkward but also dangerous, since it poses an additional risk of infection for study participants. This product was never commercialized; it was used as part of the BrainGate study, a very successful research study running since 2009 that got us the first real data from using brain-machine interfaces with humans. On the next slide I want to show you one of these research participants, also a paralyzed patient, who had not been able to use computers except at very low bandwidth, with something like a little joystick you control by blowing into a straw. In this video, this patient is using this BlackRock brain-machine interface, implanted into the motor cortex, to move around the cursor and actually perform typing actions by thought alone, which is really impressive. I think this was after 1001 days of this clinical trial that has been running since 2009, and it has been hugely successful. Unfortunately, there are problems with this technology: not only do you need to cut a hole into your skull, but on top of this, while each single electrode is very small, the combined surface area of all these electrodes together is pretty big, and you actually need a little pneumatic piston to ram this microelectrode array in without causing too much trauma, simply because the force required to insert it is so high. Also, since the electrodes are all rigidly connected, you can't really avoid the blood vessels on the surface of the brain, so during the insertion you get a lot of bleeding. And this brings us to the congestion problem I mentioned earlier. One bottleneck with this approach is that for every neuron you're recording from, you're killing about 100, which is not great if you eventually want to scale this technology to a point where we want to record
from 10,000 or 100,000 neurons, because it would mean that you are also destroying a significant part of the healthy tissue. Which brings us to Neuralink, a company that tries to avoid, or at least minimize, this problem by not having a single rigid electrode array, but by distributing all these electrodes across flexible electrode structures. Neuralink has been very successful in making a fully self-contained device: unlike BlackRock Neurotech, where you have this external wired connector, all the electronics of the current neural interface Neuralink is working on are self-contained in a single hermetically sealed package. This has a bunch of advantages. First of all, you can fully implant it under your scalp, so there is a reduced risk of infection. Also, data compression of the actual neural recording happens on the device itself, so the bandwidth of data you need to transfer to your computer, or whatever device you want to control using this BMI, is significantly lower, which enables the use of Bluetooth wireless technology to stream out this data. But the real secret sauce of Neuralink are these threads. Here we are looking at a 3D rendering, a zoomed-in view, of one of these electrodes. I guess you can't see the human hair for scale, but luckily there are these arrow indicators, so you can tell it's much smaller than a human hair. This is a single one of the 96 threads that are connected to the Neuralink device. Each thread has a total of 16 electrodes, each about the size of one neuron, so about 10 by 20 micrometers, which is roughly the same size as a human neuron in the cerebral cortex. These electrodes are also manufactured using a traditional semiconductor wafer as the bare substrate, which enables the use of existing fabrication technology to spin-coat, for example, the main insulator, which is polyimide, onto the wafer, and to build the metal structures on top using sputtering processes that you commonly find in the
semiconductor industry. Unlike the Utah Array, where you have to insert the full thing at the same time, thereby introducing a lot of trauma to the brain, Neuralink developed the R1 surgical robot, which is basically a fancy pick-and-place machine. It has a little needle cartridge that contains a very thin tungsten wire, and maybe you've noticed that the top of each of these threads has a little loop. The needle that goes into this cartridge hooks into this loop, pulls one thread at a time off the silicon carrier, the silicon substrate, and places it into the brain, one at a time. So the workflow during the surgery looks like this: you open the skull, drill a hole about the size of the Neuralink implant, which is about 20 millimeters in diameter, the threads are inserted one by one, and then the hole in the skull is filled by the actual implant, which contains the battery, electronics, radio and so on, and then you close the incision. The robot uses optical image recognition to target these loops, hooks onto them, peels them off the silicon substrate, and inserts them into the brain one by one, and it kind of looks like this. Actually, in addition to optical imaging, this robot uses optical coherence tomography to look deep down into the brain tissue where it aims to implant, and, again using image recognition, it actively avoids hitting blood vessels. So you're still targeting one specific brain region, but you can be much more precise about not damaging healthy tissue. And once you have all these electrodes inserted into your brain, a total of 1024 electrodes in this case, you can actually do quite a lot of stuff. Currently, Neuralink is also trying to provide a medical product for people with paraplegia, which means the main brain area they are targeting is the motor cortex. Because, as it turns out, people living with paraplegia usually got their spinal cord injury through an accident, so there is a physical disconnect in
the spinal cord, but their brain is still fully intact. When they think about moving their arm, when they think about moving their thumb, there are still neurons firing in the brain that would relay this information to the actual muscles. By recording the neural spike patterns from this area of the brain and sending them through an artificial neural network that interprets these signals as intended movement, we can recreate this intended movement with a cursor on a screen, or potentially provide people with prostheses, or have them control a wheelchair. Neuralink does not yet have approval to start a clinical trial with humans; however, you might have already seen this video of a non-human primate, implanted with a Neuralink, who uses the implant to control the cursor on a screen. What you can see here is the monkey sitting in front of the screen, and at this point in the video the monkey is using an analog joystick to control the cursor. At the same time, data from the monkey's motor cortex is recorded and streamed wirelessly to a decoder algorithm, and the decoder algorithm is associating the neural spike patterns it is seeing with the movement intended by the monkey. Here you can see the raw neural spike data that is streamed to a computer via Bluetooth: every line in this plot is basically one channel, and a white pixel means a spike has been detected within this time interval. And now, if you unplug the joystick and use the output of the decoder algorithm, the monkey is still able to play the game using its mind alone. Actually, the hard part of this research is teaching the monkey that it can play the game without using the joystick, since it still tries to use the thing; it takes a while for them to understand that their mind alone moves the cursor on the screen. And since it's a universal input device, you're not only limited to this specific tile
game; you can actually teach monkeys a whole variety of different games. In this example they are controlling one of the paddles, playing against the computer, using just this brain implant alone. And this is pretty much the state of the art of where the technology is right now. Obviously, it's impossible to tell exactly what will happen when we put this technology into humans, but since monkeys are very similar to humans in their physiology, we expect to see similarly effective results, where hopefully in the very near future people living with disabilities will finally be able to control a cursor again, with significantly higher bandwidth than eye trackers allow. And in some far sci-fi future, hopefully this technology will also become attractive for fully able people, to increase the connectivity of people around the world through the internet. There are plenty of companies that all aim at the same thing. The other message I want to give you in this talk is that, unfortunately, this technology has huge potential, but, like every technology, specifically under capitalism, it also has huge abuse potential. I think this crowd in particular, this audience, and I'm really happy to be able to give this presentation here at GPN, is very well aware of all the potential security problems; I'm sure every one of you, thinking about this thing, is coming up with the worst-case nightmare scenarios of how this technology could be terribly abused. I think this is very important, and if I want to give you one message, if I have made you curious about this technology: consider whether this is something that could interest you in your professional career, and join one of these companies. Because I think this crowd is a very special crowd of people that is both very competent in terms of technical skills but also has extremely high ethical standards,
and this is something that will become incredibly important with this technology. So why is this important? We probably all know Moore's law for semiconductors, where every two years we double the computational power of microchips. It turns out kind of the same thing is happening with brain-machine interfaces: as our capabilities in microfabrication and in making efficient semiconductors increase, and as we develop new battery technologies with higher energy density that let us actually put compute into the human body, we can see that the number of simultaneously recorded neurons is also doubling about every two to three years. This graph ends in 2020, but as of 2022 we are at about 10,000 neurons we can record from, and in five years this is very likely to be on the order of 100,000 neurons. At that point, this technology actually becomes attractive for virtual reality applications. Moving further into the future, I can very well see this technology becoming mass adopted, basically as the successor of the smartphone: technology that becomes so cheap and so commonly available that people decide to use it for communication. And at that point, data security, ethical concerns and accessibility become very important, and I want all of you to be not afraid, but aware of and curious about this technology, to make sure we are doing it right. Thank you, this was my presentation. I think I have plenty of time left to answer a whole bunch of questions, so let's have some discussion about this. Host: Yes, thanks, we have about 20 minutes. Question: Thank you, I wanted to ask about the other way around, about transferring information in the other direction: what is the state of the art for transferring data from the outer world into our brain, or is there some research already? Answer: Yes, so in fact I can only talk about the Neuralink implant, because I am deeply involved with the design of this device, and this is public; Neuralink is publishing a
bunch of papers about their technology, and one thing our ASIC team is very proud of is having full stimulation capability per channel. On this application-specific silicon that connects to these individual neurons, there is not only an ADC, an analog-to-digital converter, connected to each channel; there is also a stim engine, which is basically a digital-to-analog converter with an amplifier, capable of generating arbitrary waveforms on each and every one of these electrodes. This is something we currently have disabled for our studies, but in the future it becomes more and more interesting for applications like vision, where you could potentially restore sight without using a retinal implant. The thing with stimulation is that you need a significantly higher channel count, in terms of electrodes, to do anything useful beyond causing some uncontrolled muscle twitching. So we are not quite there yet, but it is certainly something that is already being explored and taken into account when making these designs.

Hello, my question is more related to the operation, because it is an invasive surgery. How do you address scarring of the tissue? With drugs, or with the materials used in the electrodes? How do you do that?
I think there are two different types of complications or scarring that can occur. Obviously, there is the surgery itself, which poses some amount of risk, since you are opening the skull and introducing something into the brain. This is a whole process; even with this robot you still need a neurosurgeon to perform all the manual work, and there are risks associated with that. But these are pretty well addressed through traditional medicine. Generally, you have about the same risk of infection as with any incision, and that is mostly managed through medicine and antibiotics.

The second part of the question aims at what happens with the material once it is inside the brain, and this is where biocompatibility really comes into play. The key is to use materials that the brain, or rather the immune system, does not recognize as foreign objects, by making them as bioidentical as possible. With these threads specifically, and generally with brain-machine interfaces or any implants in biotech, there is ongoing research into functionalizing the surface of the silicon or polyimide material with peptides that are bioidentical, so that when your immune system sees these electrodes, sees these structures, it does not necessarily recognize them as something that needs to be attacked. With such a special coating, with such functionalization, you can significantly reduce the scarring that occurs around foreign objects.

Yeah, thank you first of all. It is known that the brain learns in an area that gets used. Is there anything that shows it learns from the way it connects to this device? If you checked that monkey, would it actually show any improvement in the area where the device is connected?

This is a great question. I was hoping something like this would come up, because now I get to tell a little story that really fascinated me about how adaptive the brain really is. This has nothing to
do with brain-machine interfaces, or only to some degree, but in fact one thing that happened to me really changed my view of these things, and it is why I am now so excited about this technology and really think it will be successful in the short term. After the pandemic, when I moved to the US, I was meeting a friend whom I had not seen for a long time, who is very active in the furry community. That is also an interesting social group, because they often happen to be people with high tech salaries, so they have easy access to new technology, but they are also often socially isolated or prefer to engage in their own social groups, which is why furry conferences are a thing. During the pandemic, though, there were not many conferences happening, so a lot of furries really got into virtual reality. The main touchpoint is VRChat. In VRChat you get to customize your avatar, and you get to program custom gestures that are detected and tracked using, for example, the Valve Index virtual reality headset. This friend had a custom skin where they could control their tail and their ears' movement using these gestures, and this person had been using VRChat for over a year, multiple hours a day, to stay in touch with their friends. I met them at a restaurant, without any of their VR gear; it was one of the first in-person interactions we had after the pandemic, after vaccines became available. I was talking to them, just having a casual conversation, and they kept doing this thing with their fingers. I asked them, hey, are you okay, are you having a stroke or something? And they were like, oh, this is how I show excitement with my ears in VR. So within a year, within less than a year, they had rerouted their body map; their brain had learned to express emotions not only through facial muscles but also through thumb muscles. If you apply the same idea to a brain-machine interface in this area of the brain, you could imagine that even using existing technologies we might
already be able to express emotions digitally using this technology, which is very exciting to me: the fact that you can do so much just through rerouting in the brain and learned behavior. I hope this somewhat answers your question.

So, about the bandwidth: as I understand it, right now you are using pattern recognition in some way to associate neuron firing with hand movement. Do you think that, by putting feedback back into the brain, the brain activity could be rewritten so that it no longer depends on the patterns used for hand movement, so that you can increase the bandwidth? Or maybe with finer gesturing, imagine you were soldering rather than moving a joystick: could you imagine that the bandwidth would be higher if you tried to remove the hand-movement part from the brain-computer interface entirely?

So, I can show you this slide, which basically shows how the decoding currently works. Currently, decoding this motor movement, this thumb movement in the case of the monkeys, is actually a very simple transform. What you are looking at here is basically a map of these threads as they are inserted; each thread has 16 channels as you go down, and, color-coded, you can see how much the neurons in the motor cortex contribute to movement in one specific direction. This is actually one thing that is kind of annoying about the work we do: we have this very powerful tool to record neural activity, but we have no idea what is actually happening on any abstraction layer in the brain below the direct movement and stimulation of muscles. As we gain more knowledge in neuroscience, more insight into what all these dots do that do not obviously contribute to movement, we can write newer decoder algorithms that become much better at interpreting not only actual movement but also intended movement, the movement trajectories your brain imagines as it thinks about moving a cursor in a specific direction. And then the
other aspect to this is: how can we come up with a language that makes it possible to compress abstract information into whatever representation we will have using BMIs? We already see some of this development with the internet, where the language you use in chat rooms is very different from the language you use to communicate with people in person. We as a species came up with emojis as a form of representing emotions through ASCII characters, and I think something very similar will happen here: once this new tool is available for communication, we will find more and more efficient ways to express ourselves through it.

So, my question is not really a technical one, but you work for Neuralink, which is owned by the single richest individual in the world, and all the research you quoted is not really public research but research by private firms. Do you see any problems there, and do you have any idea why all this research is not funded by public institutions but rather by private, capitalist companies?

Well, that is a fundamental problem with financing in academia in general. Research in academia and research in private companies have very different incentives: obviously, academia tries to make knowledge accessible, while a private company tries to make a profit. For some companies those are not necessarily in conflict; in fact, Neuralink is publishing a lot of its research through papers. Basically everything I presented in this talk is public knowledge that is out on the internet; there are plenty of papers that describe the specific digital architecture of these chips. It is a problem, though. The way I see it, public research does not have sufficient funding to make this technology happen. I deeply care about this technology coming into existence, because I think it will be very important for the digital age; it has huge potential to really benefit people immediately, as soon as it becomes available. So I am trying to optimize for: I
want to see this happen in the world, so what is the shortest path? Unfortunately, with how economics works these days, that path goes through private companies. But I hope that as this technology becomes more accessible, there will be a community very similar to today's hacking and electronics communities, where private individuals become interested in this technology, reverse engineer it, and publish things under open-source licenses that are then available to everyone.

Hi, thanks. I just have a more practical question: did you do any degradation studies on the implant?

I personally am an electrical engineer, but degradation testing is certainly something that happens in pretty much every company that makes bioimplantables. There is a technique called accelerated lifetime testing, where you expose your materials and your fully assembled devices to an environment that is even more aggressive than the brain. In the case of a neural implant, this would be an environment with a very high ion concentration and a very high oxygen concentration, like a hydrogen peroxide solution, heated to a temperature well above body temperature, say 60 degrees Celsius. In this environment, as you operate the devices, the aging of the materials is accelerated by a factor of 4 to 5. So by keeping your device in this solution, in this accelerated-lifetime-testing environment, for three months, you are effectively simulating the degradation over one year. This is the most common technique used to predict how materials would fail in a patient.

First of all, again, thanks for the talk. Same kind of question as before, but not confined to the technical aspects. These implants, at the moment, I assume their intended lifetime covers the study, maybe a few years. But if we start to implant them in humans, they will stay there for a long time. Is there already research, not only into the technical aspects, but also into the
social and economic aspects of how we do this in a safe way, an economical way, but also with respect to human rights?

Yes, absolutely. What you are addressing is upgradability. I compared this technology with smartphones earlier, because I believe the smartphone already is a brain-machine interface that every one of us uses. But when that thing gets outdated, you just throw it away or recycle it and get a new, higher-performance device. Obviously that is very difficult here, and there is no obvious solution for it. Making the device as explantable as possible is one design goal, which is advantageous for a variety of reasons: if there are any complications, if the device fails, for example, you would want to explant it safely. This also ties into biocompatibility, into making materials that are biologically inert and do not form scar tissue around them, so that you can actually pull them out. But yes, this is one unsolved problem, and it is questionable whether this technology will be mass-adopted at all as long as it requires surgery. Ideally, we would discover some technology that enables reading neural signals from outside, using highly sensitive Hall-effect sensors or something like that; currently, though, this is the only approach we have. I unfortunately do not have an answer to this problem. If you come up with some idea on how to make this upgradable, please let me know; I am really curious.

Yes, on that issue: what do you think would, in practical use, be the limiting factor for the lifetime of the implant? Would it be degradation of the electrodes, for example due to the ion conditions in the brain, or failure of the actual electronics, like the battery degrading or electromigration inside the ASIC?

Scar tissue is certainly a concern. Currently, with these devices, as you leave them implanted, the body still shows some immune reaction: the way the body addresses a foreign object inside the tissue is to encapsulate it, so that it does not get infected or anything. The
body aggregates cells around it, which increases the impedance of the electrodes, and thereby the signal you are recording gets weaker. So the device does not fail in a bad way, but it becomes less and less useful as the signal-to-noise ratio goes down. The other concern, yes, is the battery. All of these devices use lithium-ion batteries, since those are the most advanced battery technology we have available right now, and you get around 500 to 2000 charging cycles before the capacity degrades, which with daily use gives you something like 2 or 3 years. Yeah, that is a fundamental limit we currently share with all wearable technology.

I would say we have time for one more short question.

You talked earlier, in one of the answers, about decoding the spikes we are recording. So far we have been doing that for motor function, and it has been working well, as you can see from the videos with the monkey. I am wondering what the developments are in decoding actual thoughts, or whether consciousness is a nut we still have to crack before we can, like, think "send message to Alice".

I think this is a question for academia. This is a question where we, as companies, can provide the tools that are then used by researchers to study the actual science. The way I see this technology, we are building the microscopes that researchers then use to understand consciousness. We do not have the bandwidth or the interest to dig down into decoding thoughts; we are making the devices that make it possible to examine the workings of the brain. So, yeah, that is the approach there. I think we have no idea how the brain works right now.

Yes, thank you. [Applause]
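The battery lifetime figure given in the last technical answer is simple back-of-the-envelope arithmetic. A minimal sketch, assuming one full charge cycle per day (the function name and the daily-cycling assumption are illustrative, not from the talk):

```python
def battery_lifetime_years(charge_cycles: int, cycles_per_day: float = 1.0) -> float:
    """Rough implant lifetime estimate from lithium-ion cycle life.

    Assumes battery wear-out (not electrode degradation) is the
    limiting factor, with a fixed number of charge cycles per day.
    """
    return charge_cycles / (cycles_per_day * 365)

# The 500-2000 cycle range quoted for lithium-ion cells, charged
# daily, brackets the "2 or 3 years" mentioned in the answer:
low = battery_lifetime_years(500)    # about 1.4 years
high = battery_lifetime_years(2000)  # about 5.5 years
```

The same arithmetic shows why less frequent charging (e.g. every other day) would roughly double the usable lifetime.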