Thanks, everybody, for coming. Our next speaker is Tiffany Li, giving a talk on facial recognition, DNA, and biometric privacy: stop Facebook from buying your brain.

Alright, hi everyone. I'm Tiffany Li, and I'm here to talk about biometric privacy in all its different forms. Let's see if this works. There I am. So I'm a tech attorney and a researcher at Yale Law School. I focus on privacy, AI, and tech platforms, and I've written and spoken about topics ranging from Facebook and Cambridge Analytica to whether there's a legal right to AI. And this microphone keeps cutting in and out, but here we are. Today I'm here to talk about one specific topic that's really close to my heart, which is biometric privacy, and about the main question in the title of this talk: can Facebook buy your brain?

I like to preface this with the answer, which is no. Facebook cannot buy your brain and is not seeking applications for brain sales. Even if you think you know someone with a completely unused brain that's fresh on the market, sorry for them, it's not happening, and not happening anytime soon. But I use this example when I talk about biometric privacy because it brings up a lot of truly dystopian notions that could potentially happen someday: whether a company can own your brain waves, whether a company can read your thoughts. These are the scariest sci-fi scenarios that come up when you think about brain-computer interfaces and things like that. But that scary dystopian vibe is also apparent today in current biometric technologies, and I think we should keep in mind that this dystopia is not only a possible future; parts of it are already here.

So here is what I'm going to talk about today. First, biometrics generally: what they are and what the legal landscape looks like. Then a few special cases that I think are especially important: facial recognition, DNA privacy, and data related to brains. I think these cases matter because each is unique, even within biometric privacy. And finally, I'll go over a few things we can do to prevent the most horrifying of the sci-fi scenarios from happening.

So what are biometrics? Biometrics are the physiological and behavioral characteristics of individuals. This can include fingerprints, voice, face, retina and iris patterns, hand geometry, gait, and so on. Biometrics are often described as data derived from the body, which means biometric privacy is privacy for all data derived from the body. What makes biometrics special? Why is biometric privacy a distinct category of privacy with unique and important risks? Again, biometric data is data derived from the body, and that's the main problem: you can't delete your body, and you can't change or modify much of the data that comes from it. This is why we often say biometrics are usernames, not passwords. Biometrics are distinctly identifiable to a specific individual, and the risks from biometric data are unique and potentially worse than the risks from other forms of data.

The good thing is that there are some biometric privacy laws. I'm a lawyer, but I don't think laws are the only solution, and as we'll see, for biometrics they're really not complete. In the United States, a few states have laws covering biometric privacy; Texas, Washington, and Illinois in particular have biometric privacy laws.
A few other states are considering new laws, but again, this is only a small subset of all fifty states. More states instead treat biometrics as part of what's called PII, or personally identifiable information. PII is a very specific category that matters in privacy law, because many laws refer only to PII; often you only have privacy rights in data that is identifiable to you. Some laws include biometric data as part of PII, and those protect some biometric privacy rights in that way. In the EU, the GDPR, the General Data Protection Regulation, protects biometric data as a special category of protected data. So there are some laws that exist for biometric data broadly speaking. Additionally, there are a few laws that exist specifically for facial recognition, for DNA data, and even for brain-related data. For that last category, most of the laws on the books right now refer to brain scan images, which have shown up in some jurisprudence around criminal justice.

So the first special category of biometric data is facial recognition. This photo is actually taken from an advertisement for an app that uses facial recognition for dogs. Whether we can protect your dog's privacy from facial recognition is a really important question that I will not answer today; privacy for people, I think, is more important. Facial recognition tech, for those of you who don't know, is tech that identifies or verifies a person based on an image or a video. This is distinctly different from facial detection technology, which merely recognizes whether a face is present. I mention this because many times, when someone talks about positive applications of facial recognition tech, what they're really looking for is facial detection technology. Facial detection is much less privacy-invasive and can be used in many of the same contexts. For example, consider an application that opens a grocery store entrance door when a person walks past. For this sort of program, you don't need facial recognition; you don't need to know which customer is walking past your door. All you need to know is that a human being is walking past, so facial detection could be used instead of facial recognition (there's a short sketch of this a little further below). This kind of tech solution is something I'll talk about a little later, but it's one of the big things I think we can do to minimize the privacy risk of biometric data.

Facial recognition is used in both corporate and government surveillance worldwide, and it's becoming more widespread by the minute. A few examples you've probably used or heard of: face-based logins on your phone or computer, photo apps that manipulate your photo to look older or younger, and law enforcement uses. A lot of law enforcement agencies around the world rely on facial recognition to identify people they believe to be suspects, and so on.

Now, some of the risks of facial recognition. First of all, facial recognition is widespread, and that in itself is a risk. The fact that facial recognition is everywhere right now, and people aren't aware of it, is a huge risk. If you're not aware of what's happening, you can't know what rights you have, and you have no way to know when you can actually advocate for them.
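Coming back for a moment to the detection-versus-recognition distinction: here is a minimal sketch of the grocery store door example, assuming OpenCV with its bundled Haar cascade model and a camera at index 0; open_door() is a hypothetical stand-in for the actuator. Nothing in it identifies who a person is; the system only learns that a face is present.

```python
# Toy door trigger using face *detection* only: the system learns that a
# face is present, never whose face it is.
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade with the library.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def open_door():
    print("door opens")  # hypothetical stand-in for the real actuator call

cap = cv2.VideoCapture(0)  # default camera
while True:  # sketch loop; runs until the camera stops delivering frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:  # a person is there; that is all we need to know
        open_door()
cap.release()
```

If a use case seems to need recognition, the design question worth asking is whether detection alone, as here, would do the job.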
Back to how widespread this is. The Georgetown Center on Privacy and Technology has found that one in two American adults is already enrolled in a law enforcement facial recognition network. Broadly speaking, that means probably a third to a half of the people in this room are already in a law enforcement database. This is also DEF CON, so probably more of us are in a facial recognition database already. And this data is pulled from many different sources: driver's license images, CCTV surveillance, and more. It's really hard to know where the data is coming from, and for a private individual it's almost impossible to have recourse against the government for these uses.

The second risk is that facial recognition tech right now is often very flawed and can be biased. Joy Buolamwini, a researcher at the MIT Media Lab and founder of the Algorithmic Justice League, conducted a study of facial recognition tech from some of the largest companies, and she found that error rates for darker-skinned women ranged from 21 to 35 percent, while error rates for lighter-skinned men were below 1 percent. Why does this matter? If you think about the fact that facial recognition is used by so many law enforcement agencies around the world, you can see the risk for criminal justice: a law enforcement process using facial recognition is more likely to generate false positives for people who are darker-skinned or who are women, and this obviously creates an unfair, discriminatory impact on those people.

And finally, facial recognition can be used to harm human rights. This is often the case in countries without democratic rights or an open rule of law. One example you've already seen happening in the world today is governments using facial recognition to scan photos of protests to identify political dissidents and then take action against them. So facial recognition tech already carries a lot of risks, and we're not adequately protecting against them.

The next category of special biometric privacy is DNA data. DNA is essentially the code to your body, and I found what I think is the creepiest pictorial representation of it, but it really tells you what DNA is, right? It carries the genetic information of your body and is unique to each individual. One of the biggest cases here happened recently: the Golden State Killer case. Many of you have probably heard of it. Early last year, investigators identified a decades-old cold case serial killer based on DNA sampling. They had a very old DNA sample from the suspect that they ran against a publicly available DNA database, and they found a match with a distant relative. Based on that distant relative's DNA match alone, they were eventually able to identify the killer (there's a toy sketch of this kind of matching a little further below).

This probably seems like a positive outcome. A lot of people who hear about this case think it's great, that it's wonderful to have DNA out there because we can identify more serial killers, more terrible people, and so on. But I think many of us in this audience already know that this is a huge risk to everybody involved: if tech like this can be used for good, it can also be used for bad. The Golden State Killer case brings up one really important risk of DNA privacy that doesn't arise with other types of privacy. Your DNA data can be used against you even if your own data isn't in any database anywhere.
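To make that concrete, here is a toy sketch of the kind of partial matching involved. Real investigative genetic genealogy compares shared chromosome segments measured in centimorgans; this simplified SNP-overlap score, and every name and threshold in it, is purely illustrative.

```python
# Toy familial DNA matching: score how much of a query profile is shared
# with each profile in a public database. Purely illustrative; real tools
# work on shared chromosome segments, not a raw SNP-overlap fraction.

def shared_fraction(query: dict, profile: dict) -> float:
    """Fraction of overlapping SNP sites carrying the same allele."""
    common = query.keys() & profile.keys()
    if not common:
        return 0.0
    same = sum(1 for snp in common if query[snp] == profile[snp])
    return same / len(common)

def likely_relatives(query, database, threshold=0.6):
    # Even a partial match (a cousin, a great-uncle) narrows the search
    # to one family tree, although the query subject uploaded nothing.
    return [name for name, profile in database.items()
            if shared_fraction(query, profile) >= threshold]

database = {  # hypothetical public genealogy uploads
    "uncle_bob": {"rs1": "A", "rs2": "G", "rs3": "T", "rs4": "C"},
    "stranger":  {"rs1": "G", "rs2": "C", "rs3": "A", "rs4": "T"},
}
crime_scene_sample = {"rs1": "A", "rs2": "G", "rs3": "T", "rs4": "A"}
print(likely_relatives(crime_scene_sample, database))  # ['uncle_bob']
```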
The point is that the data of, say, a distant relative can be used to identify you. In this case the government isn't the real threat, and the companies aren't the real threat, but Uncle Bob at Thanksgiving dinner might be, especially with all the 23andMe ads out there. So DNA data, I think, is a special case of biometric privacy that we should be thinking about more carefully.

Aside from the privacy risks, we also have some interesting discussions of who can own DNA. This is a picture of Henrietta Lacks, probably the most famous test case for medical genetic information and how that information can be owned and then used by other people. Henrietta Lacks was an African American woman whose cancer cells are the source of the HeLa cell line, the first of what's called an immortalized human cell line. She went in for medical testing; her cells were collected and cultured without her consent or knowledge, and eventually this led to many scientific breakthroughs. Again, it seems like a positive outcome, right? As in the Golden State Killer case, something good came of it. The problem is that there was no compensation for Henrietta or her family, no consent from her for how her cells could be used, and no rights against the companies or medical institutions using her data. So while it's amazing that there was a positive outcome, there's still the question of who has the rights to that data, and, broadly speaking, who has rights to DNA data generally. The good thing is that there are some laws right now. DNA privacy laws include the 21st Century Cures Act; HIPAA, which broadly covers health privacy; and GINA, the Genetic Information Nondiscrimination Act.

Finally, the last category is BCIs and neurotech, which are really where this talk came from. The inspiration for this talk was thinking about what companies like Facebook are doing with neurotech. In 2017, Regina Dugan spoke at Facebook's big tech showcase, and she asked this question: what if you could type directly from your brain? Just imagine sending Facebook messages just by thinking them. What this entails is that Facebook would have to be able to access your brain data somehow: you would ping the interface with your brain, and Facebook would have access to whatever pings occur. This involves what's often called a brain-computer interface, a computer-based system that acquires brain signals, analyzes them, and translates them into commands that are relayed to an output device to carry out a desired action (there's a toy sketch of this loop a little further below). That's a lot of words. It's really just brains talking to computers. Easy.

I think you can already guess a lot of the problems here, but before we talk about the privacy issues we have to take a step back and recognize that a lot of this tech gets over-hyped. I don't know if neural lace is the new blockchain in terms of hype, but there's already a lot of it out there. Recently Elon Musk said that his new company, Neuralink, is going to meld humans with AI, which is kind of a ridiculous prospect. It might, let's say, happen in the far future. It's not happening in 2019. It's not happening in the next ten years. Those are not the issues we're discussing; the issues we're discussing are much closer to 2019.
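Setting the hype aside, the definition above really is just a loop: acquire signals, analyze them, translate them into commands for an output device. Here is a deliberately toy sketch of that shape. Every function is a hypothetical stand-in, and the "analysis" is a bare threshold, not real neuroscience.

```python
# Illustrative-only sketch of the BCI loop: acquire brain signals, analyze
# them, translate them into a command, relay the command to a device.
import numpy as np

def acquire_window(n_samples=256):
    # Hypothetical stand-in for reading one window of EEG samples
    # from a headset driver; here it is just random noise.
    return np.random.randn(n_samples)

def analyze(signal):
    # Toy feature: the signal's power over the window.
    return float(np.mean(signal ** 2))

def translate(power, threshold=1.5):
    # Map the feature to a discrete command for the output device.
    return "SELECT" if power > threshold else "IDLE"

def send_to_device(command):
    print(f"device <- {command}")  # hypothetical output device

for _ in range(3):  # the acquire / analyze / translate / relay loop
    send_to_device(translate(analyze(acquire_window())))
```

Every stage in that loop is a place where data is captured, stored, or transmitted, which is exactly where the questions below attach.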
Here's a not-so-extreme hypothetical. If BCIs become common and we use them for communications or for work, it's likely that companies will own and operate the brain-computer interface platforms. At that point we have platform regulation issues. We have privacy issues. We have communications issues. We have questions about who can access this data, who stores it, and how secure it is. If communications are carried across these platforms, then we have questions similar to the ones we have with current communication platforms. For example, Facebook and Twitter moderate content; will future platform providers be able to moderate the communications across a BCI platform? These are questions no one has the answer to. And finally, once again, the simple question: who owns the data? Can someone own the data emitted by your brain waves on a BCI platform?

Those were a lot of questions, and here is the solution part of the talk: what can we do? I think this boils down to a few categories. First I'll discuss legal and regulatory solutions, next some tech solutions, and then a bit about what else we can do to stop the worst of the privacy violations.

In terms of legal solutions, there are a few things that are maybe not easy to do, but that we can consider. First, obviously, pass new laws, or make better laws, for facial recognition, DNA data, biometric data, BCIs, and so on. We could consider updating the GDPR guidance in the EU to include more of these categories, and in the US we might consider more regulations, or even a single federal privacy regulation that includes biometric data. Also, and this gets a little into the weeds, I think it might be interesting to consider new classes of legal privacy harms not directly tied to the rights of the data subject. This could cover, for example, Uncle Bob uploading his data to the 23andMe database: right now you have no rights over that, but there could be laws that give you some protection. And finally, if we get to the point where there are platforms for BCI communication, we need some sort of standards for how those platforms can be moderated and operated.

There are also, of course, tech solutions, and I separate these broadly into data-level solutions and system-level solutions. That's an incredibly broad and oversimplified way to divide it, but it's how I think about it. On the data level, we have to protect the data through security and privacy protections. This can include better security and privacy tech: encryption, differential privacy, on-device analysis, and so on (a small sketch of differential privacy follows at the end of this section). These are interesting technologies that help secure the confidentiality, integrity, and access of data, but they're not enough. You can have the most secure, most privacy-protective system, and it can still be a terrible system; you can still have terrible tech that creates real-world harm. So, more generally, we have to figure out how to design better systems and deploy better tech that keeps in mind issues of criminal justice, social justice, bias, fairness, and so on.

The problem, of course: re-identification is always possible, no system is truly secure, and laws are always behind technology. So tech can be legal, have privacy and security protections, and still violate core human rights. What do we do then? I think the solution is ethics.
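First, though, here is the small differential privacy sketch promised above: a minimal illustration of the Laplace mechanism, assuming numpy, with an illustrative query and epsilon.

```python
# Minimal sketch of differential privacy via the Laplace mechanism: noise
# calibrated to sensitivity / epsilon is added to an aggregate query so
# the result reveals almost nothing about any single person's record.
import numpy as np

def dp_count(records, predicate, epsilon=0.5):
    """Differentially private count of records matching a predicate."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0  # adding or removing one person shifts the count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 61, 34]  # illustrative data
# Roughly how many people are over 40, without exposing any individual:
print(dp_count(ages, lambda age: age > 40))
```

Smaller epsilon means more noise and stronger privacy. But note what this protects: the data in one aggregate query, not the system around it, which is exactly why these tools aren't enough on their own.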
Ethics sounds a little cheesy, but one thing we should all do, whether we're designing a system, implementing it, critiquing it, or whatever, is think about the ethical issues at play. Think about the ethical issues before you design your new technology. Think about them before you purchase a new app. Think about them before you talk about this tech. As a society, we have to figure out what we believe privacy should mean in the future. We have to figure out whether we believe DNA data is intrinsically linked to the body, and whether anyone should own the communications that pass through a brain-computer interface. Laws won't solve these problems. Tech won't solve these problems. We have to figure this out as a society.

So here's what you can do. Design better systems; deploy better tech. It sounds super easy, but it's very, very hard. Do your best to use your technical expertise, and many of you in this audience have a wealth of it: use it to design better systems and to critique flawed systems when you see them. Keep ethics in mind whether you're on the product team or you're a consumer. Be a conscious consumer and an informed voter. Support companies that protect privacy. Vote for policies and policymakers that care about tech and good tech regulation. And finally, you can advocate for change no matter who you are or what position you're in. That can be as small as talking to the people around you and informing them about their rights, and it can grow into supporting advocacy groups, speaking your mind in speeches, letters, and articles, or even running for office. You can always be an advocate.

In conclusion, here are my steps for how to stop Facebook from buying your brain. Understand the current landscape of technology and laws. Update the current laws and create better ones. Design and deploy better tech. Remember ethics, and always consider ethics first. Be a conscious consumer and an informed voter, and advocate for change. If there's one takeaway I'd like you to leave with, it's that yes, biometric tech keeps improving and biometric privacy is at greater and greater risk every day, but there's still time. There's still time to stop the worst of the privacy violations from happening, and you can be part of that change. At the very least, one day, when your brain is owned by a Facebook Reality Labs laboratory and sitting on a shelf somewhere, that brain can tell future generations that you gave it your best shot and tried to prevent that future from happening. So you can change the future, and that decision starts today. Thank you.

Alright, thank you, Tiffany. We have time for questions. Please come up front and line up by me.

Hi. So, obviously these are excellent steps, but what are your thoughts on the fact that a lot of big data companies like Facebook don't make their decisions to track these things, or to store this data, public until long, long after it's happened?

So there are a few issues in that. You asked about Facebook and other companies not making their data, or their decisions to store data, public. There's actually some legal potential there to fix things. For example, people have said that companies can't really tell you how they store or use data because they're afraid of liability.
We could create legal safe harbors that would let them do more transparency reporting, to tell us what they're doing with the data and how they access it. But a lot of this, I think, is on the companies themselves. Most companies, at the end of the day, are motivated by profit, and if we as consumers don't seem to care about privacy, they're not going to care about it either. So we either need consumer pushback or regulators enforcing something to make things change. Other questions? Come on up.

Thanks for your talk, really enjoyable. You mentioned a little bit about efforts or ideas around regulating data that doesn't necessarily belong to one person but is shared. That comes up in a lot of conversations I have with peers who are interested in this stuff: you and I are having a conversation right now, and which one of us owns it, when the unit is the pairing? Can you give a little context into what influences the popular thinking around this and what progress has been made?

Sure. So the question is about data that's not owned by you, and some of the privacy implications there. I bring this up, and I don't talk about it too much in the talk because it can get very technical, but there are a lot of privacy laws out there, and most of them give you rights specifically as a data subject. Very few laws talk about what happens to people when data that's collected about them, but not from them, impacts them. Some people are doing work on what they call inferences in algorithmic decision-making, for example, and how those inferences can impact someone regardless of how the data was collected. That's really important. I also think there are other categories we should be thinking about: for example, data that's not collected on you, or even about you, but that's related to you. If a company collects data on a set of DEF CON participants and you're not included in the data set, whatever predictions are made from that data set will likely still impact you, because you're part of the larger group. You have no rights to that. The law doesn't talk about it. These are the sort of edge cases I think we need to think about going forward.

Hi, thanks for the talk. When you were going over your questions about how to prevent a really terrible future with BCIs, I kept thinking that all of those questions apply today to virtual reality, and will soon apply to the rumored platform shift to augmented reality. Have you thought at all about how those same questions should be applied to virtual reality?

Definitely. So VR and AR tech right now is similar to a lot of this tech in that it's still pretty new, and it might be overhyped in some cases, but I think it's definitely coming, and I actually have a paper in progress right now specifically about VR and AR as a new type of tech platform. Right now we regulate intermediaries like Cloudflare, or platforms like Facebook, as a sort of platform for communication or for the transfer of data. In the future, if VR and AR tech becomes more commonplace, that raises a whole class of new issues, similar to how BCIs raise new issues. For example, in the really crazy future scenario in which we all live in the Matrix, what happens if Facebook owns the Matrix? If Facebook literally owns reality, then we have a lot more problems to deal with. But that's a small snippet of some of the issues with BCIs.
Hi. So, to your earlier point about data subject laws: to what extent do you think it's possible, practical, or reasonable to, say, have Uncle Bob require my consent before he leaks his DNA, or to have some kind of recourse or restrictions for me in those types of scenarios?

That's a good question. I think you honestly have very little ability to control data that you specifically don't contribute to a data set, so you have no rights to stop Uncle Bob from doing anything. That's why I think the law should look at those other classes; it should figure out some sort of rights around, for example, the algorithmic decisions that are made based on that data. We have to think about the harms that happen after data collection, because we can't know all the points of data collection in advance.

So I know that all thirty thousand or whatever of us here at this conference care about this stuff. But how do we get the normies to care? How do we get the people that look at me weird out in the casino because I have a mohawk to say, oh hey, yeah, I want to protect my privacy? Because I feel like if, let's say, all thirty thousand of us had Facebook accounts and we all deleted them, it'd be a drop in the bucket. A big drop, nonetheless, but still a drop. So how do we get them to care and see that, oh yeah, the government spying on me is probably not a good idea?

Yeah, so: how do we get people outside of DEF CON to care about privacy? That's a really big question. What's interesting is that a lot of research has shown that people do care about privacy. Kids care about privacy. Adults care about privacy. People from all different spectra care about privacy. Part of the issue, I think, is that they don't know what's going on. So when something like the Facebook and Cambridge Analytica scandal happens, people suddenly realize that their rights are being violated. Or when there's another big hack, like the recent Capital One breach, people realize that their data is potentially out there. So I think a lot of this comes down to informing the public. Education programs and journalists doing good tech reporting can help a lot. As individuals, what we can really do most is just talk to people. Your entire circle isn't made up of DEF CON people, right? You know people who maybe aren't as plugged in as you are, and explaining some of these concepts, or just bringing them up with the people around you, is often really helpful.

Alright, that's all the time we've got for questions. One more big hand of applause for our speaker.