Hello everyone. Thank you for coming at the crack of 11 o'clock in the morning. Okay, how do I full screen? Why do I have the thing here? Is that what I want? No. This is terribly embarrassing. There we go. Cool. Okay. So, after that little embarrassment, my name is Catherine. I am here to talk today about hacking your thoughts: Batman Forever meets Black Mirror. The standard disclaimer: the work in this presentation was done when I was at the University of Washington as part of my PhD dissertation. The results and views presented here do not necessarily represent those of my funding sources or my current employer. So, thanks. Okay. So, there's been a lot of hype in the media recently surrounding brain-computer interfaces, putting things into your brain, getting signals out. So I want to take a little bit of time to separate the hype from the reality and let you know what really is possible. I'm going to talk about some of the results from the experiments that I did. And as part of that, I did ethics and policy research. So even though my dissertation was in electrical and computer engineering, I actually did a neuroethics survey, and I spent some time looking at the policy proposals that we can come up with for emerging technologies like brain-computer interfaces. Okay. So, real quick, things that are not covered by this presentation: I am really sorry, there are no aliens involved at all in this. I'm also really sorry, I know nothing about any chips that the government may have implanted in your brain. I use noninvasive stuff, so that is outside the purview of my research. Before we get started, we're going to just baseline the definition of a brain-computer interface. This is the definition that I used in the neuroethics survey that we're going to get to in the second half of the presentation. I defined a BCI as a device that can record brain activity while an individual is performing different actions.
For example, blinking their eyes, playing a video game, or texting on a phone. BCIs are often used to give a user control of a computer using their brain activity. So this could be anything from playing a video game to actually controlling a prosthetic arm or a wheelchair. I'm also going to be talking about targeted elicitation, emphasis on targeted and elicitation. We're showing specific stimuli in order to obtain a particular response. This is not, writ large, taking everything from your mind. So, right off the bat, what do you think of when you hear brain hacking? For some of us of a certain age, you might go immediately to my favorite movie from 1995, Batman Forever. I forget which streaming service it's on, but you should totally go check it out, probably while drinking, if that is your thing. For those who haven't seen the movie, one of the plots is that Edward Nygma, aka the Riddler, wants to find out who Batman really is. So he creates a device that basically sucks up all the brainwaves of everyone in Gotham, and then figures out who has bats on the brain, and that's obviously Batman. So at the end of the movie, they're having the showdown, and Batman says, you've sucked Gotham's brain waves, you've devised a way to read men's minds. And the Riddler says, you betcha, soon my little box will be on countless TVs around the world, feeding me credit card numbers, bank codes, sexual fantasies, and little white lies. So this is 1995, and we already had this concept of taking information that could be useful either for, you know, stealing all of someone's money or using it to blackmail them. For those of you who have not seen this lovely movie, you may be familiar with Black Mirror. This is the episode Crocodile. Sorry, spoilers, it's been out for a while, I hope you've already seen it. But basically, in this futuristic Iceland, it is required by law that you give up your neural information as part of investigations.
And so this woman saw a man get hit by an autonomous pizza delivery truck, and the insurance agent is compelling her by law to give this information. Unfortunately for the insurance agent, this woman also killed someone in her hotel room, and you can all guess where that goes from there. So this is a future where you have to give up your brain signals, which kind of leads to this question of, do we really want to have to do that? So based on those two examples, you might think that, oh, we're going to be able to pull things out immediately and, kind of like a Pensieve in Harry Potter, just look at them. But not all of that's really possible. And so two things have happened recently, and I want to explain them for you. Some of you may have seen Elon Musk's presentation about Neuralink. They have this great little chip that has thousands of little electrodes that they're going to put into your skull using this great little sewing machine, essentially, that, by the way, was developed by DARPA at UCSF. And Elon's goal is to have someone who is not a neurosurgeon just use a laser to drill a hole in your skull; they'll pop this in, and you're good to go. So, my problems with that: first of all, a lot of what they're talking about deals with reanimation of limbs, so spinal cord injury, prosthetic arm type deals. It's great if you can get the information out that says I want to move my arm. But what happens is there's nothing coming back saying, this is what the arm is doing, or where it is in space, this is what I'm feeling, this is how much weight I'm carrying. So without that feedback loop, it's not actually that helpful. And a lot of the things they're talking about for augmentation, things like memory, it's unclear how the current implantation method they're talking about is actually going to get to those deep brain structures.
Right now, if you're looking at things like the hypothalamus, you're looking at very, very long electrodes that you stick in the brain, and, oh, by the way, once you put them in, the brain says we have a foreign invader and scars around them, so it lessens that effectiveness. One of the other things that Elon's talking about is actually using electricity to stimulate the brain, which is great except for the fact that we have a very extensive literature from deep brain stimulators showing that putting electricity in can cause side effects like profound behavioral changes, changes in sexual preferences and behavior, compulsive gambling and spending. And so if you're just going to be putting one of these in your brain and turning electricity on, you may want to know that it can do a lot of really crazy things. So that's an area of research that really should be explored more before you just go down to your local, you know, Radio Shack 2.0 to get one of these put in. So the next one is Facebook. This one got a little less press because they released a peer-reviewed article in Nature Communications. Two years ago, they said they were going to do typing by brain, and it was going to be 100 words a minute from your brain to Facebook. Right now, the gold standard for typing by brain, if you're going to be reading someone's neural signals noninvasively, so just a cap on their head, is about one word a minute; I think Stanford can do eight words a minute. Also, the study that they came out with, which, if you're interested, you're more than welcome to go online and read, they only did it with three subjects, and they actually did it very invasively. So I'm going to warn you now: in a couple slides, I'm going to show a picture of brain surgery. I will tell you when; feel free to close your eyes. But I'm going to show you what they actually did to get this information out. And of course, everybody knows Facebook.
Do you really want Facebook to have direct access to all of your neural signals, particularly when they know what you were looking at when you were using it? So, just a little thought in the back of your mind. Okay. So the way that they did the experimentation in this is that they used something called electrocorticography, or ECoG. That is for patients who have intractable epilepsy: they don't know where the locus of the seizures is coming from, so they bring them into the hospital, they take off the skull, they put electrodes on the surface, and they let them sit in the hospital for two weeks, and they have seizures, and they can find out where those seizures are coming from. While they're sitting in the hospital, they're bored out of their gourd, literally. That wasn't as funny as it was supposed to be. And so researchers come in, and you can do really cool experiments. So this is what the grid looks like. You know, they have different sizes, they have different numbers of electrodes, based on where you want to put them and hemispheres and coverage. Okay. The gory picture is coming up next. If you do not like surgery or gore or blood, please close your eyes. I'll let you know when you can look again. Okay, here we go. This is what they're actually doing. So this is electrocorticography. This is the subjects from the Facebook experiment. This is kind of what Elon is trying to do, except smaller. So if you just look at this and think about it: do you really want someone who doesn't have a medical degree doing this to your brain? Yeah. Little thought for your mind. Okay. Gory picture's gone. You can open your eyes now. So, what is currently feasible, now that I've scared the living crap out of you? What I ended up doing is I used electroencephalography, or EEG. This is no surgery needed, because it turns out I am not a neurosurgeon; I can't put electrodes into my head.
As you can see, I take goofy pictures. I actually have lots of these of me wearing EEG caps, because I think they're cool. This is the setup that I used; it's a BrainVision cap from Brain Products. And what I was looking for is event-related responses to specific stimuli that I was showing. So on the right-hand side of the screen, you can see sort of this family of brain wave patterns. These are event-related potentials, and they come in response to different stimuli. So ERN is error-related negativity: if you make a mistake, your brain actually creates that so you know you made the mistake. There's one for spelling errors. There's one for grammatical errors. And the particular one I'm interested in is called the P300. That's a positive peak 300 milliseconds after the stimulus is shown to you. And the best way to explain this is to tell you about the experimental paradigm that a lot of people use to test this out. This is called the guilty knowledge test. The P300 is called the oddball response because it occurs in response to things that are different from the things around them. So the way this usually goes in the experimental literature is you'll have a subject come into the room, and there'll be six pictures, usually of jewelry or something, and you'll be asked to steal one of them: either put it in a drawer, put it in your pocket, put it, you know, somewhere else. And then they sit you down with an EEG cap on your head, and they start showing you pictures of the things that you could have taken. And they record your neural signals to see if they can figure out which one elicits the response of, this is the one that you took. And lo and behold, it was the watch; you have been caught.
Obviously it's not quite this drastic; you have to do this over and over again. But this is the general idea: you have a family, a set of stimuli, and you're hoping that one of that set is the target, so that you can elicit this response and then actually use that information. So what I ended up doing is a single-digit guessing game, and I did this because I wanted to go back to basics from the literature. If you look at the prior literature, there are a couple of papers specifically about elicitation of private information. They tend to use either overt, or conscious, stimuli, so you know what you're looking at, or subliminal stimuli, technically unconscious, but most people can actually see what it is, because monitors being the way they are, they don't refresh fast enough, et cetera, et cetera. And they all relied on experimental training data. So what happens is you come in, and the experimenters show you a series of stimuli, and they know which one you're going to have a response to, and they can use that set to match against the test data, so they know what they're looking at. I was looking at completely untrained data. So you come into my lab, I put the cap on you, I start showing you stimuli, and then I try to figure out the number. So this is the picture of, again, me wearing a lovely cap, you know, in the lab. And basically what I did was I had subjects pick a number, and then I told them they were going to stare at a dot on the screen and the numbers would flash around it. You can see the timeline of the stimuli on the screen. And the only thing I told them after they selected the number was that they would have to put the number in again at the end of the experiment. So I didn't actually tell them that they were supposed to think about it or what they were supposed to do with that number; I just said, pick a number, and at the end you're going to tell me what that number is again. And I ended up with three kinds of results.
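To make the paradigm concrete, here is a toy sketch of how a P300-based digit guess could work: average the epochs recorded after each digit's flashes and pick the digit whose average is most positive around 300 ms. This is entirely my own illustration, not the actual pipeline from the dissertation; the sampling rate, epoch window, and synthetic noise model are all assumptions.

```python
import math
import random

# Toy illustration of P300-style digit inference on synthetic
# single-channel "EEG". All parameters below are assumptions for
# illustration only.

FS = 250                                    # samples per second (assumed)
N = int(FS * 0.8)                           # 800 ms epoch after each flash
WIN = range(int(0.25 * FS), int(0.5 * FS))  # ~250-500 ms post-stimulus

random.seed(0)

def make_epoch(is_target):
    """One epoch: Gaussian noise, plus a positive bump ~300 ms after
    onset if the flashed digit is the one the subject secretly chose."""
    epoch = [random.gauss(0, 1) for _ in range(N)]
    if is_target:
        for i in range(N):
            t = i / FS
            epoch[i] += 3.0 * math.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return epoch

def guess_digit(epochs_by_digit):
    """Average each digit's epochs and pick the digit whose average is
    most positive in the P300 window (the 'oddball' response)."""
    def score(eps):
        return sum(sum(e[i] for i in WIN) / len(WIN) for e in eps) / len(eps)
    return max(epochs_by_digit, key=lambda d: score(epochs_by_digit[d]))

# Simulate one session: subject picked 7; each digit 0-9 flashes 20 times.
chosen = 7
epochs = {d: [make_epoch(d == chosen) for _ in range(20)] for d in range(10)}
print(guess_digit(epochs))  # recovers 7 with this much clean synthetic data
```

Real EEG is far noisier and the response varies by subject, which is why the actual experiments needed many repetitions per digit and still only beat chance by a modest margin.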
One of them is about the overall effectiveness in identifying the subject's chosen digit, one is the effect of attention on identifying the subject's digit, and then one is determining current versus future digit information, so actually getting at intent here. For the first one: for all but one of the subjects that I had, the computer correctly calculated the digit they were thinking of two to three times out of the 10 sessions that we were doing. So this is an example of one of those sessions; the computer got it right three times out of 10. And you may be thinking, well, 20 to 30 percent when chance is 10 percent, that's not great. However, this is with zero training data and actually fairly simple signal processing techniques. And if you look at the rest of the literature, it's actually not that much worse compared even to things with training data sets. And I will say, the one paper that did have untrained data in it said it was, quote, five to 10 times harder to calculate with untrained data versus trained data. So it's okay. You can actually do this, where if you show enough stimuli over and over again, you can increase that confidence; you just have to have more time and more data. The effect of attention: in my experiments, the subjects literally sat there and stared at a dot on a screen for five minutes at a time. And it turns out people get really tired and start falling asleep. So what I ended up doing is I had subjects, in counterbalanced sessions, press the space bar when they saw their digit. And so that meant that I had some sessions where they were paying attention and others where they were more passive. And percentage-wise, the correct digit was calculated more often for the space bar rounds, when they were paying attention, than the non-space-bar rounds, which is when they were passive.
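As a sanity check on those numbers, here is a quick back-of-the-envelope calculation (my own illustration, stdlib only) of how unlikely a 3-out-of-10 hit rate is if the classifier were only guessing at 1-in-10 chance, and how pooling more rounds sharpens the conclusion:

```python
from math import comb

def binom_tail(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p) -- the probability of
    getting at least k hits by pure chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One session: 3 correct out of 10 rounds, chance = 1/10.
print(round(binom_tail(3, 10, 0.1), 3))  # prints 0.07: suggestive, not conclusive

# Pool many sessions at the same hit rate: 25 correct out of 100 rounds.
print(binom_tail(25, 100, 0.1))  # on the order of 1e-5: far beyond chance
```

This is the "more time and more data" point in statistical form: a single session at 20 to 30 percent is only weak evidence, but the same rate sustained over many rounds is essentially impossible to get by guessing.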
And this actually holds up with the literature: in another experiment, they had people count the number of times a region on a map where they live showed up, and they also got a higher percentage of guess accuracy. So this is great. The third one, which I think is super cool, is determining current versus future intent. Like I said before, I didn't actually tell subjects how they were supposed to maintain that digit in their head. And it turns out, for some subjects, the number the subject was going to pick in the following round was calculated almost as many times as the number for the current round. But this was not consistent. So, as before, it was 20 to 30% across all subjects, but here you had some subjects for whom you only got, you know, 20% right and no future guesses, and you had some subjects for whom you got more of the future digit correct than the current digit. So this is super cool. It may have something to do with how they were thinking about the experiment; maybe they were just thinking about the future digits because they wanted to get done and stop staring at a screen. But if anyone wants to do a PhD dissertation, I have a great research topic for you, and I can tell you which lab to talk to. So let me know. Okay. So that's all fine and dandy. We can extract information. It may not be 100%; it probably will never get to 100%. But we have that technology. So can we then ask the question, what do consumers think about neural privacy? If you lived in Gotham City and Edward Nygma's device came out, what would you actually think about the fact that, you know, information was being put into your head and then taken out? So what I talked about in this scenario is what's being protected. I am interested in the quantifiable information that is determined from the combination of the electrical signals from the brain along with the relevant environmental stimuli: so, what you're looking at, and when you looked at it.
The original raw neural signals without context are much less informative. There are some studies where they're looking at determining things like Parkinson's and Alzheimer's using just raw traces of neural signals. But for the actual information, you need to know the context in which it was generated. So we're talking about definitions of neural privacy. This is not the first time that people have thought about privacy. For those of you who are law nerds in the audience, if you go back to 1890, there is the very famous The Right to Privacy by Warren and Brandeis. They wrote this article in response to this crazy new technology called photography, and how it was going to be invading people's lives, and things were going to be put in permanence in all these papers. And so they basically said, we need to declare a right to privacy now. Now, in 2019, we're still having that conversation about privacy. What I'm saying is we also need to extrapolate it to emerging technology. So let's talk about neural privacy specifically. There are four issues that I considered in defining neural privacy. First, is privacy a right or an interest? In legal terms, a right is something where, if you're harmed, you can actually get some sort of compensation or reimbursement for that. An interest is just, yeah, I'd really like to not have this happen to me, but if it does happen to me, there's nothing I can do about it. So can we actually come up with a legal structure where, if something happens to your neural signals, you can do something about it? Second, do we own our own thoughts? I love this question, because everyone has their own ideas about what our thoughts are, how they make us a person, whether our thoughts are the person or our thoughts are just inside of us. But what happens when they are extrapolated?
So if you are playing a video game with your brain, and they start showing you different pictures of coffee logos, and they determine that you like Starbucks, do they own the fact that you like Starbucks now? They're obviously going to probably monetize it and try to send you targeted ads about Starbucks. But who owns the information? So that's a great question to ask. Third, what is the relationship we have with those who elicit information neurally? There are a lot of questions about the relationships we have with data aggregators and social media. Is it a fiduciary, an information fiduciary relationship? Is it a parasitic one? Is it a symbiotic one? And there's a great philosophical discussion that I get into in the next bit. And fourth, the importance of trust: do you actually trust, when you hand over your neural information, that they're going to do what they say they're going to do with it? And that's something that we're going to talk about from the neuroethics survey. So, I took out a bunch of slides, because y'all don't need to sit here and listen to my dissertation chapter about philosophy. But what it boils down to is: we should all have an interest in protecting our neural privacy, but we do need additional legal frameworks to make it a right. So, Congress people, I heard there were congressional staffers in here, take note. Defining and ascribing ownership is necessary to provide value to what is being elicited. So this is the case of, yeah, maybe eliciting that you are a fan of Starbucks over Peet's Coffee may mean more than the fact that you turned right in a video game. So how can we actually ascribe a value to that information, if we do want to come up with some sort of economy where you are actually allowing these thoughts to be elicited? And users should be able to trust that the information taken from elicited neural signals by a company will be used and interpreted properly, making the relationship between a user and a company an intimate one.
The concept of intimacy and privacy is talked about a lot by Julie Inness. She actually has a great book about this, if you like philosophy or just like reading. And she talks about an intimate relationship as one where you actually have an understanding, and I like that framework for talking about this. So, to test out some of these questions, I actually did a neuroethics survey. I put this out online; some of you may have seen it on Twitter. I got 77 respondents in about 24 days at the beginning of the year. And in it I had four questions. I'll go over three here and the last one in the latter half of the talk. So I basically asked: is there a difference in perceived privacy violation between a person intercepting BCI information versus a phone or an app, so something that is not a person? What are the differences in trust and willingness to share neural information with a range of entities? And is neural information more important than other data that's already available about us, things like your Fitbit or maybe your online shopping history? One of the things that I did in this survey is I asked about mobility status. And the reason I asked about this is that someone who uses a wheelchair or a cane, who may not be able to move about like someone else in the world, may have a different relationship with privacy and trust, in that they may have to have a home aide come in and help them use the bathroom, or they can't reach the top shelf, so they're always asking someone for help. And I wanted to see if we could sort of suss out that relationship. So, question one: who or what is taking your information? The scenario is you're sitting on a bus, you're using a BCI to control your phone, and some malicious hacker is sitting behind you and is able to intercept those signals. Meanwhile, in a different, alternate timeline, you have an app on your phone.
I was ambiguous as to whether you installed it or whether you knew what it was doing; all I said was that there was an app on the phone, and it was doing the same thing. And so I asked, in a stair-step set of questions, what is your level of perceived privacy violation based on the scenario: if it's just the content; if they're able to take a video of you typing it; if they have access to the current brain activity happening when you type it out; if they have access to what you're planning to type out, so getting back to that experimental part about future intent; and then if they have access to the emotional content of that message. The takeaway from this one: the personal procurement, so the person sitting behind you on the bus, of neural planning information, so the future intent, is a statistically significant privacy violation over the app. That was the only one that I found to be statistically significant. So that's kind of strange. You know, you don't want someone to know what you're doing, but it also gets to the fact that we tend to be okay with giving away information on our phones, even though the phones may have far worse implications. Going back to the FaceApp thing of aging everyone: everyone was totally okay with putting their faces on there, until everyone was like, wait, are they Russian? Sort of thinking about that. I also found that, across all five categories, mobility status did not statistically impact perceptions of neural privacy. I have more analysis, and it gets a little more nuanced when you look at each of the individual scenarios, but overall it didn't have an impact. So that was interesting. So, the next question: if you're using a BCI and it has the ability to find out what foods you like, what your physical and mental state is, who you're attracted to, or maybe your political views, are you willing to, and do you trust, giving this information to six different entities?
I started with a family member, a physician, a university researcher, a government entity, a nonprofit, and a for-profit. So, a lovely chart from R, for those of you who are familiar with R. You can basically see that for family members, medical professionals, and university researchers, you're okay: trusting and willing, you're willing to go there. But you get to government, nonprofit, and for-profit, and it's like, no, no, untrustworthy, not going there. And it's interesting, because I feel like the thing that I left out of this is the feedback loop: why were you giving it? So maybe if you wanted to donate something to a nonprofit, like you wanted to donate your brain signals to an EEG repository, maybe you'd be a little more willing to do that. Or, you know, if you know that in the medical profession you have HIPAA protecting you, or that a university researcher has to go through an approval process through a review board, then you might increase it. So there are a lot of variables to look at here. But I really like this, because I can go to a for-profit company and say, look, people really don't trust you and aren't willing to give you this information; you should do something about that. Finally, what's more important? I asked: is your neural information more, equally, or less important than a Fitbit or similar exercise tracker; the record of your personal medical history at your doctor's office; genetic information, like 23andMe; your online shopping history; your monthly credit card statement; and a journal or a diary? So these are the results here. What's really interesting is there are definitely people who think that their neural information is less important than things like their online shopping history. I would really love to meet these people to find out what they're buying online. But you can kind of see the things that may or may not have more bodily salience.
The medical records and information, and the journal or diary, directly projecting your thoughts and feelings onto a page, are about equivalent. But you get to the exercise tracker and the credit card, and it's like, yeah, that's a level of abstraction from what you're thinking about. So, cool result. Okay. So, what are the potential policy and regulatory implications? People obviously have thoughts and feelings about this. We've shown that it is possible. Is there anything that we can actually do about it? And this gets back to the Batman Forever analogy. If Harvey Dent hadn't turned into Two-Face, could there have been a law in Gotham City that would have allowed them to go off and prosecute him? Or at the beginning of the movie, when Edward Nygma was proposing this, Bruce Wayne and Nygma's boss could have been like, oh, according to the FDA or, you know, whatever government agency, you can't do this because of XYZ regulations. So let's talk about that. There are some existing biometric precedents, where they're either protecting you or profiting off you. The Genetic Information Nondiscrimination Act of 2008 was passed by Congress, and it allowed people to seek out genetic sequencing and then not be discriminated against for it. The way that they called this out in the bill is they specifically talked about sickle cell anemia: that is a particular affliction that only happens to a certain part of the population, and they should not be discriminated against for going out and seeking treatment for it. So that's all fine and dandy. It also ties into the Affordable Care Act, where you can't technically be discriminated against for that genetic information, but that's because we have the preexisting condition clause. So if we lose the preexisting condition clause, you can technically be discriminated against. So, go advocate for covering preexisting conditions. The life insurance one is interesting.
Starting this year, I believe it's John Hancock that will only provide life insurance if you do active tracking. So you have to wear a tracker, like a Fitbit or something, or you have to fill out a survey. They will no longer just let you sign up or fill out a questionnaire; they actually have to be monitoring you at all times to make sure they put you in the right life insurance bucket. In the state of New York, companies are also allowed to follow your social media feeds to figure out how they're going to set your life insurance rates. I'm really annoyed because I can't find the notes in this particular setup, but the Wall Street Journal actually published things that you should and shouldn't do for your insurance rates in New York. And one of them was like, you should frequent gyms, but leave your phone at home when you go to the bar; do activities like running, but if you go skydiving, you know, that's a little more risky. So they're literally telling you how you should and shouldn't act, because otherwise your life insurance is going to change. There we go. Yes, boo, very boo. If you live in New York, talk to your state legislators. So, the final one that's really interesting is a state case called Rosenbach v. Six Flags. This is an Illinois Supreme Court case. The state of Illinois, if you live in Illinois, good on you: they actually have one of the strongest biometric protection laws in the country. Unfortunately, there's not much competition, because the only two other states that I know of are Washington and Texas. But they basically said in the statute that you are required to get consent to take any form of biometric information, and that includes fingerprints. And you also have to have written documentation of what you're doing with the information and how long you keep it.
In this case, a mom signed her son up for a season pass at Six Flags, and she said, go fill out the paperwork when you get there, and bring back the pass. And the kid comes back, and she's like, where's the pass? And the kid said, oh, they just took my fingerprint, because it turns out at the Six Flags in Illinois, they didn't have passes; it was a biometric to get into the park. And so the mother said, not only did I not consent to having my son give his fingerprint, I don't know what they're going to do with it. And so it went through, and the state Supreme Court said that it was a harm: Six Flags had violated the statute. So they had taken the information, and even though they hadn't done anything with it, there was a harm, and the mother was allowed to seek a right of action, i.e., she could sue them or do whatever. And this is different, because most of the time, when someone takes something, you have to prove that they did something with it. So this is a great example of how you can actually create laws that allow you to get compensation for something even though nothing terrible may have happened with it yet. So this gets to the last question that I asked: do people actually have feelings about who should be involved in the development of, the sale of, and then the reparations for malicious use or elicitation of these devices? I asked this question as kind of a grid chart of who should be in charge at which portions of it. So if you're a user, an industry researcher, an independent regulatory organization, a legislator, or a device manufacturer, how should you be involved in BCI development compared to current involvement? So there's a subjective, how much do you know about what's currently going on, for development oversight, the actual implementation and use, and then the reparations. And the two main takeaway points here are that independent regulatory organizations, legislators, and device manufacturers should be more involved going from development to reparations and use.
So as you go down this development chain, they should be more involved in regulating, or in actually saying where you should be anonymizing, or where things should be happening. I also found that users should be the least involved in reparations from use, and device manufacturers should be the most involved. So it shouldn't be the onus of the user, when something is taken from them, to go out and figure out how they can get reparations. The device manufacturers should really be taking charge of that, either by just giving out money, or maybe by protecting the information to begin with, so that no one is eliciting information without them knowing about it, or so that they're not the ones doing it themselves. So based on all that, there are some policy solutions, and there's a part in here where all y'all can participate, so get ready to take notes. One of the biggest things is increased involvement by legislators with reparations from elicitation or misuse. It'd be great if we could get federal or state level rights to neural privacy, or broader genetic and biometric data privacy legislation. Also possibly providing reparations by statute: either monetary, or allowing for a private right of action, i.e. you yourself can sue someone, you don't have to wait for a class action. And also empowering regulatory agencies like the FTC to actually have more money and more people to look into this, because, thank you FTC, I know you're doing great things, but there's not a lot of you. Another great one is involving independent regulatory organizations, things like IEEE and ACM. I actually consider you, the hacker DEF CON audience, an independent regulatory organization, because now that you know about this, maybe you start looking at source code, maybe you start picking apart the terms of service and actually looking at what's going on, so that you can report, here at DEF CON or otherwise, what's actually happening with the systems that we're using.
It would be great to have accountability for device manufacturers, because let's be honest, there's not a lot of that right now. And then overall, how do we actually portray to consumers the risk of using a device? How do you let someone know that there's a 75% risk that information could be elicited from them by using the device, or how do they actually understand that 99% of the time they use it, they're going to be protected from hackers coming in and eliciting information? How do we have that conversation? And this gets to overall tech literacy in the United States and beyond. Okay, so homework. Here are my asks of all of you. For those of you who are familiar with the This Is Fine dog: we are at about panel three, but it's not too late, we can start putting out the fire, and there are a couple of different ways we can do that. I actually have a stuffed This Is Fine dog at home. It's very cute, a little plushy. It reminds me that things are terrible, but they can get better. So, to the developers in the room: just because you can doesn't mean you should. And I say this, I see some yeses in the front of the audience here, okay, I say this because as someone who loves new technologies and loves hacking and things like that, you really have to start thinking about the things that you're doing. If you are trying to create a game controller for someone who's paralyzed, that's great, because maybe the brain signal is the only thing that's left. If you are literally shooting electricity through your skull using a nine volt battery, or you find a friend and a drill bit and you're like, yeah, we're gonna drill a hole in my skull and stick this wire in, then no, please, for the love of God, no. Ask yourself what problem it is you're trying to solve. Are there other modalities that you can use to obtain that information? And what is the least amount of information you need to complete a particular task?
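That data-minimization idea, processing locally and releasing only what the task needs, can be sketched in miniature. Everything below is an illustrative assumption (the channel names, the threshold, the toy two-channel "classifier"), not any real device's API; a real decoder would be a trained model, not an amplitude comparison:

```python
# Toy sketch of BCI-style data minimization: classify on-device and
# release only the discrete command, never the raw sample stream.
# Channel names ("C3", "C4"), the threshold, and the classifier are
# hypothetical placeholders for illustration.

def classify_command(window):
    """Map a window of raw samples to a discrete command.
    A real system would use a trained decoder; this toy just
    compares mean amplitude on two hypothetical channels."""
    left = sum(s["C3"] for s in window) / len(window)
    right = sum(s["C4"] for s in window) / len(window)
    if abs(left - right) < 0.1:
        return "rest"
    return "left" if left > right else "right"

def anonymizing_pipeline(raw_stream, window_size=4):
    """Consume raw samples locally; yield only commands.
    The raw data never leaves this function."""
    window = []
    for sample in raw_stream:
        window.append(sample)
        if len(window) == window_size:
            yield classify_command(window)
            window = []

# Simulated samples where the "C3" channel dominates.
samples = [{"C3": 0.9, "C4": 0.1}] * 4
print(list(anonymizing_pipeline(samples)))  # ['left']
```

The point of the structure is that anything downstream of the pipeline only ever sees "left", "right", or "rest", which is all a controller needs and far less than an attacker would want.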
So maybe you don't need to have complete coverage over the entire cortex. Maybe you just need motor cortex. Maybe you just need parieto-occipital for the P300. Try to gather the least amount of information possible to make yourself the least liable. And then finally, do as much processing as possible locally, on the device. The best example I have of this is the BCI anonymizer, from a paper called "App Stores for the Brain" by Tamara Bonaci, Ryan Calo, and Howard Chizeck. Tamara Bonaci started this research in the lab that I was in, Ryan Calo was on my dissertation committee, and Howard Chizeck was my PhD advisor. They were basically saying, look, is there a way that you can still get information out, but you're only releasing the information that's necessary to be used by the device itself? So if you're controlling a helicopter with your mind, it doesn't need the entire raw data stream of your neural information. It just needs to get the commands: up, down, right, left, that type of deal. To the privacy-conscious people in the room: I'm really sad there are not a lot of aluminum tinfoil hats in here, I was expecting a little bit more. Right off the bat, I can say: for now, don't use these kinds of devices. And it's easier for me to say that because they don't have market saturation; it's not like you need them to do your job. This is gonna be a lot harder if, for some reason, these devices start getting mandated for use: you have to pass some sort of lie detector test using a BCI, you have to wear one for your job because they're monitoring your productivity, or that's just how you use the computer. So as we get further and further down this technological path, how can you opt out, and how does it disadvantage you when you do? If you are worried about people eliciting information without your knowledge, you may feel better with a slower screen refresh rate to prevent that subliminal elicitation.
Go pull that CRT monitor out of the basement or the garage; you should be fine, because then at least you'll know if someone's trying to get information out of you. And here's my ask: contactingcongress.org. Everyone write this down or go to it right now. You can look up your federal legislators, your House representatives and your senators. Step one: you can either call or email their DC offices and ask them point blank, what is your position on data privacy? What is your position on regulating emerging technologies? They will probably send you back a very nice form letter, but having been a fellow and staffer in Congress, I can tell you they will categorize this, and if they get enough people calling about it, then they know that consumers are interested. If any of you were at the talk yesterday that had Reps. Langevin and Lieu, they said the same thing: you can totally be involved in this process. It is the August recess, which means all of your congresspeople are back home in their districts. Go to the town halls and ask them what their data privacy feelings are. You can even call and make an appointment, and you'll talk to either a staffer or the representative or senator themselves, depending on their schedule, and have a conversation. Let them know that you are an expert in security. Let them know you're an expert in privacy. Let them know what expertise you have. And then maybe when a bill comes up, they'll say, oh, we should find out more information about it, and they can use you as a resource. And just generally be involved in the democratic process. The offices of the members actually belong to you, not to them, so you can go into them if you go to DC, and you should totally feel free to reach out to them in your state as well. And this goes for state legislatures and city councils too.
So, anyone from the state of Washington: we just had a big showdown over data privacy in the last legislative session. They're probably gonna bring that back, so get ready to call your state legislators in 2020. And most importantly, and I know there are studies saying this is gonna take years off your life, really read the terms of service to find out what's happening to your information, particularly your biometric information. It's probably too late for most of the social media sites, but if you're gonna be putting that cap on your head, you should really know what's happening to your information. Okay, three letter agencies in the room: I know you're here. I know that you offered money to my advisor to fund this project, and we turned you down. I will say, I don't know if I was supposed to say that. Crap, never mind. Too late now. I'm guessing there are a couple of people in this room who are probably saying, why the hell would you do this research? You're enabling the further use of this technology, and you're gonna let the three letter agencies come and steal all our information. And my response to that is: if I'm not the one telling you that this is possible now, before it becomes an actual problem, would you rather find out later, when it is a problem and they've already been taking information from people? So now that you, the hacker community at large, are aware of this, you can start looking for it. You can start asking those questions, and you can start being skeptical of these kinds of devices coming onto the market. I'm trying not to make y'all too paranoid, but let's be honest: if you're thinking about using this kind of technique for interrogation, you have to come to terms with some serious ethical and legal questions. Yes, there is actually a neurolaw group; it's funded by a MacArthur grant and based out of Vanderbilt.
You can sign up for their distribution list, and they will send you, on a semi-regular basis, papers and conferences related to neurolaw. It's actually quite interesting. You can look at questions of freedom of speech and expression, reasonable expectations of privacy, and self-incrimination. Are you really gonna find a judge who, for whatever reason, is going to allow that if you don't have some sort of warrant? I don't know. I'd also like to point out that all of these results are from compliant and willing participants. These are mostly graduate students who volunteer to come sit in a room, and you give them money or a gift card afterwards, and they're perfectly happy to stare at a dot on the screen. I don't think someone in custody is gonna be that compliant, so I don't know if the results are gonna be that good. This technology is also still in its infancy, and you really shouldn't think of technology as the solution to a problem. For those of you who are familiar with the fMRI literature, there was a poster where someone took a dead salmon from Pike Place Market, put it in an fMRI scanner, and got statistically significant results when they showed it images. So think about that. If you can get statistically significant results from a dead fish, are you really gonna be confident that the information you're taking from someone who doesn't want to give it is really what you want to be getting out of them? So, in summary, this is one future. Going back to Black Mirror, this is the episode Playtest. This is someone who volunteers to come and play a video game, and, spoilers, I'm sorry, they figure out that he's afraid of spiders, they figure out who his childhood bully is, and they really start using this information against him. And super spoilers: he dies, sorry. But it's a very dystopian future where everything that we think is going to be used against us.
And I'd like to posit that we can create a different future. So shout out to my Star Trek fans in the audience, and shout out to La Forge, yay! His VISOR is another kind of BCI: it's taking information in from the outside world and putting it into his brain. And I don't know if there was a plot line that I missed, but I don't remember them being like, oh, we found out that Geordi likes Starbucks because we showed him a bunch of logos. So I don't know, it just seems less dystopian. It's a much happier future. So start thinking about the ways that we can do this for good. And if there's one thing that I need you to remember from this talk, it's that there's a difference between telepathy and targeted elicitation of information. Targeted elicitation. We're not gonna come and use a brain ray and take all of your thoughts. Most of the time it's going to be very specific, and the stimuli are going to be very particular. So on that note: I took out my funding slide, but you can come talk to me about funding. Like I said, I have lots of pictures of me wearing nerdy EEG caps. You can find me on Twitter. And thanks to everyone who made this talk possible, and thank you so much for coming so early on a Saturday morning in Vegas. I'll be around if you have any questions. So thank you.