Good afternoon, everyone. My name is Shobita Parthasarathy. I'm Professor and Director of the Science, Technology, and Public Policy Program here at the Ford School of Public Policy. STPP, as it's known, is an interdisciplinary, university-wide program dedicated to training students, conducting cutting-edge research, and informing the public and policymakers on issues at the intersection of technology, science, ethics, society, and public policy. We have a very vibrant graduate certificate program and an exciting lecture series. Before I introduce today's speaker, I want to let you know that next term, our speakers will explore the themes of health activism, prescription drug patents and pricing, and graduate STEM education. Our first talk, on January 22nd at 4 p.m., is by Layne Scherer, a Ford School alum who is now at the National Academies of Sciences, Engineering, and Medicine, and she'll be talking about graduate STEM education in the 21st century. If you're interested in learning more about our events, I encourage you to sign up for our listserv on the sheet just outside the auditorium, and even if you're already on our listserv, please do sign in there as well, because it gives us a sense of who's been able to come today. Today's talk, "Show Your Face: The Pros and Cons of Facial Recognition Technology for Our Civil Liberties," is co-sponsored by the Center for Ethics, Society and Computing and the science and technology policy student group Inspire, as part of their themed semester on Just Algorithms. Inspire is a Rackham interdisciplinary working group run by STPP students, but it is open to all graduate students around the university who are interested in science and technology policy. And now to today's speaker. Mr. Christopher Calabrese is the Vice President for Policy at the Center for Democracy and Technology. Before joining CDT, he served as legislative counsel at the American Civil Liberties Union's Washington Legislative Office. Don't try to say that 10 times fast. In that role, he led the office's advocacy efforts related to privacy, new technology, and identification systems. His key areas of focus included limiting location tracking by police, safeguarding electronic communications and individual users' internet surfing habits, and regulating new surveillance technologies such as unmanned drones. Mr. Calabrese has been a longtime advocate for privacy protections and limits on government surveillance, and for the responsible use of new and developing technologies such as facial recognition. This afternoon, he'll speak for about 15 minutes, giving us a lay of the land; then he and I will chat for about 15 minutes or so, and then we will open the floor for questions. Please submit your questions on the index cards that are being distributed now and that will be distributed throughout the talk. Sujin Kim, our student assistant at STPP, will circulate throughout the room to collect them. And if you're watching on our livestream, you can ask questions via the hashtag STPPtalks. Claire Galligan, our wonderful Ford School undergraduate research assistant, and Dr. Molly Kleinman, STPP's program manager, will then collate and ask the questions. I want to take the opportunity to thank all of them, and especially Molly and Sujin, for their hard work in putting this event together. And now, please join me in welcoming Mr. Calabrese.

Thank you. Thanks to all of you for coming.
This is obviously a topic that I care a great deal about, so it's really exciting to me to see so many people who are equally interested. Thanks to Shobita for having me, and thank you to the Ford School for hosting. I think these are really important topics. As we incorporate more and more technology into our lives, we need to spend more time thinking about the impact of that technology and what we want to do with it. And face recognition is a really great example. It's powerful, it's useful, and it's often dangerous, like many technologies. This is a technology that can do so many things. It can find a wanted fugitive in surveillance footage. It can identify everybody at a protest rally. It can find a missing child from social media posts. It can allow a potential stalker to identify an unknown woman on the street. This is really a technology that has the potential to impact, and is already impacting, a wide swath of our society. That's why it's gotten so much attention. We saw a ban on face recognition technology in San Francisco. We've seen a number of lawmakers really engaged, and we as a society really need to grapple with what we want to do with it.

Before I get too deep into this, just a word about definitions. I'm going to talk about something fairly specific: face recognition. That means taking a measurement of someone's face: how far apart are their eyes? How high or low are their ears? What's the shape of their mouth? And using that to create an individual template, essentially a number, that can be used to go back to another photo, do that same type of measurement, and see if there's a match. So it's literally a tool for identifying someone. It can be a tool for verifying the same person: if I bring my passport to the passport authority, they can ask, is the person in the passport photo the person standing in front of me? Or it can be used as a tool for identifying someone from a crowd: I could pick one of you, do a face recognition match, and see if I can identify particular people in this room based off a database of photos that the face recognition system runs against. That's face recognition, and that's what we're going to talk about.

There are a few other things I won't talk about. One of them is face detection, which is literally: is there a person standing in front of me? We might use that to count the number of people in a crowd, or to decide whether to show digital signage on a billboard. That's usually less problematic. There's another type of technology I won't talk about called face analysis. Face analysis is looking at someone's face and trying to make a determination about them. Are they lying? Are they going to be a good employee? This technology doesn't work. It's basically snake oil, which is part of the reason I won't talk about it. But you will see people trying to sell this concept that we can essentially take pictures of people and learn a lot about them. What I can tell you is that face recognition does work, and it's something we're seeing increasingly deployed in a wide variety of contexts. So, again, what exactly is face recognition? It's this measurement of people's faces, turning that measurement into a discrete number that I can store in a database and then compare against other photos, to see if I get that same measurement and whether I've identified the person.
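To make that measure-and-compare step concrete, here is a minimal sketch in Python. The embed() function is a hypothetical stand-in for a trained face-embedding model; nothing in the sketch comes from the talk itself, but the shape of the logic is standard.

```python
# A minimal sketch of the template-and-compare step described above.
# embed() is a hypothetical stand-in for a trained face-embedding model.
import numpy as np

def embed(photo) -> np.ndarray:
    """Hypothetical: map a face image to a fixed-length numeric template."""
    raise NotImplementedError  # stand-in for a real embedding model

def is_match(template_a: np.ndarray, template_b: np.ndarray,
             threshold: float = 0.6) -> bool:
    """Two photos 'match' when their templates are close enough.

    The threshold is a policy choice: raise it and you accept more false
    matches; lower it and you miss more true matches.
    """
    return float(np.linalg.norm(template_a - template_b)) < threshold

# Verification (one-to-one): is the passport photo the person at the counter?
#   is_match(embed(passport_photo), embed(live_photo))
# Identification (one-to-many): compare one probe template against a database.
```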
There are a couple of things you need to understand if you want to think about this technology, how it actually works, and whether it's going to work. The first is a concept we call binning. Binning is literally putting people in bins, putting them in groups. It turns out, and this is pretty intuitive, that if I want to identify someone, it's much easier if I know they're one of a hundred people in a group versus one in a million. It's just a much simpler exercise. So one thing to keep in mind as you hear about face recognition is to think not just about the technology taking that measurement of your face, but about the database being searched against. The size of that database matters hugely for the kinds of errors you see and for how accurate the system is.

A little bit of history for you. Face recognition has been used for a long time, even though it has really only started to be effective in the last couple of years. If you go all the way back to 2001, before 9/11, police tried out face recognition at the Super Bowl in Tampa. They did a face recognition survey of all the people who entered the Super Bowl, and it didn't work. The technology wasn't ready for prime time. It couldn't identify people; it was swamped by the number of different faces and the different angles those faces were captured at. For a long time, that was the beginning and the end of the conversation as far as I was concerned, because if a technology doesn't work, why should we use it? But I have a friend who works in the industry, and when we had lunch a couple of years ago, he said to me: it works now. This technology will actually match and identify people. That was kind of a Rubicon, and we've seen it confirmed in the last couple of years. NIST, the National Institute of Standards and Technology, which does standard setting for the federal government, said earlier this year that massive gains in accuracy have been achieved in the last five years, and that these far exceed the improvements made in the prior five. So we're seeing this technology used more and more, and it's more and more accurate. And we can understand why that is. We have more powerful computers. We have better AI doing this type of comparison. We also have better photo databases. If you look at the LinkedIn photo database, or the Facebook photo database, these are high-resolution photos, often many different photos of the same person to give you many different templates, all linked to someone's real identity. That's a perfect tool for creating a face recognition database.

So why do we care? What's the big deal about face recognition? There are a couple of things that we as advocates care about, and I hope I can convince you to care about them a little bit too. The first is that we make all kinds of assumptions about our privacy that are grounded in technical realities. We assume that while we might go out in public, and somebody who happens to know us might see us and identify us, that's about it. That's where you get this idea that we don't have privacy in public, right? You put yourself out there.
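The point about database size is, at bottom, arithmetic, and a toy model makes it vivid. The sketch below assumes a fixed per-comparison false-match rate, which is a simplification of my own, not a figure from the talk.

```python
# A toy illustration of why gallery size matters. Assume each comparison
# against a non-matching face has a small, fixed false-match rate (FMR).
# One probe searched against N faces gets N chances to produce a bogus hit.
def p_false_match(fmr: float, gallery_size: int) -> float:
    """P(at least one false match) = 1 - (1 - FMR)^N."""
    return 1.0 - (1.0 - fmr) ** gallery_size

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} faces -> {p_false_match(1e-5, n):.4f}")
# 100 faces       -> 0.0010  (a bin of a hundred: bogus hits are rare)
# 10,000 faces    -> 0.0952
# 1,000,000 faces -> 1.0000  (a false match is all but guaranteed)
```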
But the reality is that when you're out in public, you don't necessarily expect to be identified, especially by a stranger. You don't expect to be tracked across a series of cameras, and you don't expect that record to be kept indefinitely. That's a different use of the technology, and it really changes our assumptions about what privacy looks like, including what privacy looks like in public. And of course you can imagine the impact if you're doing face recognition on, for example, a protest rally. You can see how suddenly I have knowledge of who is worried about, say, border policy, and that allows me to take other kinds of punitive action. It also allows me to figure out who your friends are, who you're walking with, those associational pieces of information that we worry about.

It also changes the rules in other ways that we don't always think about, but I would encourage you to. We jaywalk every day. We cross the street when we're not supposed to. You are breaking the law when you jaywalk; everybody does it. But what if we could enforce the jaywalking laws a hundred percent of the time? What if I could do a face search, identify you, and send you a ticket every time you jaywalked? That would fundamentally change how the law was enforced, and it would change how you interact with society. We could do it; whether we would or should is a separate question. But these are laws on the books that could be enforced using this technology, and that's a concern. The second, related concern is this: if we stop enforcing a law against everybody and start enforcing it in a selective way, what kind of bias does that introduce into the system? Just sit with that for a minute.

In the private sector, we also see a lot of relationships changing. I already raised the stalker example. There is off-the-shelf technology sold by a variety of companies, Amazon Rekognition being one of the most well-known, that you can purchase and run against your own databases. And we've already noted that there are a lot of public databases of photos linked to identities. You can take those, run them through your own off-the-shelf face recognition software, and identify people. So suddenly that stalker can identify you. Suddenly that marketer can identify you. Suddenly that embarrassing photo of you from 2005, the one that still exists on the web but nobody sees, that isn't captioned, where nobody knows it's you, well, suddenly you can be identified. And if you're in a compromising position, or you were drunk, I mean, there are a lot of photos out there about all of us. Potentially that's revealed information that can embarrass you.

The other reason we might worry is that mistakes happen. This is a technology that's far from perfect, and in fact has a great deal of racial bias in it. When you create a face recognition system (we can get into this in the Q&A), you are essentially training the system to recognize faces. So if you only put in the faces you get from Silicon Valley, you may end up with a lot of white faces, a lot of faces that are not representative of the broader population. And as a result, your face recognition algorithm isn't going to do as good a job of recognizing non-white faces; literally, the error rate will be higher.
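That last claim, that the error rate will literally be higher, is measurable, and audits along these lines are roughly what NIST's demographic evaluations formalize. Here is a minimal sketch under assumptions of my own: a labeled test set of same-person photo pairs, plus the hypothetical embed() and is_match() from the earlier sketch.

```python
# A minimal sketch of a per-group accuracy audit (an illustration under my
# own assumptions, not NIST's actual methodology). Each record is a pair of
# photos of the SAME person plus a demographic label; a miss on a genuine
# pair is a false non-match. Reuses the hypothetical embed()/is_match().
from collections import defaultdict

def false_non_match_rate_by_group(pairs):
    """pairs: iterable of (photo_a, photo_b, group), all same-person pairs."""
    misses, totals = defaultdict(int), defaultdict(int)
    for photo_a, photo_b, group in pairs:
        totals[group] += 1
        if not is_match(embed(photo_a), embed(photo_b)):
            misses[group] += 1
    return {group: misses[group] / totals[group] for group in totals}

# If the training data skewed toward white faces, the audit would surface
# it as unequal error rates across groups (illustrative expectation only).
```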
So this is a bias problem, but there's also a broader mistake problem. As the technology gets used more broadly, people will rely on it, and they will be less likely to believe that the machine in fact made a mistake. People tend to trust the technology, and that can be problematic. Ultimately, I would give you this construct to sit with: social control. The more that someone knows about you, the more they can affect your decisions. If they know that you went to an abortion clinic, if they know you went to a gun show, if they know you went to church. None of those things is illegal in and of itself, but someone, especially if it's the government, may make decisions about you based on them.

I'll give you an example that's not face recognition related but is, I think, instructive. When I was at the ACLU, we had a series of clients who protested at the border in San Diego. The border wall runs right through San Diego. They all parked their cars at the border, and they went and had their protest. As they came out of the protest, they found people they didn't recognize writing down their license plate numbers, and they didn't know who that was. Afterward, many of them found themselves being harassed when they were crossing the border. These were, unsurprisingly, people who went back and forth a lot, and they found themselves more likely to be pulled into secondary screening and to face more intrusive questions. They believed, and this was something we were never able to prove, but I feel very confident about it, that this was because of that data collection: they had been identified as people who deserved further scrutiny. That's what happens as you deploy these technologies. You create information that can be used to affect your rights in a variety of ways, and face recognition is a really powerful way to do that.

So what should we do about this? There are some people who say we should ban this technology, that face recognition has no place in our society. That's a fair argument, but I think it discounts the potential benefits of face recognition. I was at Heathrow Airport, or maybe it was Gatwick, but I was in London and I was jet-lagged off a red-eye. It was about 6 a.m. I walked up to the checkpoint, looked up at a camera, literally just like this, kept walking, and realized 30 seconds later that I had just cleared customs. That was face recognition, and it completely eliminated the need for a manual customs check. Now, maybe it's not worth it, but that's a real benefit, right? If you've ever stood in one of those lines, you're saying, gosh, that sounds great. And that's a relatively trivial example compared to, say, somebody who has lost a child and thinks that child may have been abducted by someone they know, which is unfortunately frequently the case. Remember the binning idea: you can imagine that maybe there's a photo that might help somewhere in your social network, and if you could do face recognition on the people in your social network, you might find that child. These are real benefits, so we have to think about what we want to do whenever we talk about banning a technology.

So, the close cousin of the ban, and the one that I think is maybe more effective or useful in this context, is the moratorium.
That's the idea that we should flip the presumption: you should not be able to use face recognition unless there are rules around it, rules that govern it. That's a really effective idea, because it forces the people who want to use the technology to explain what they're going to use it for, what controls are going to be in place, and why they should be given the authorization to use something this powerful.

So if we did have a moratorium, or even if we didn't and we just wanted to regulate the technology, what would that regulation look like? And by the way, this regulation can happen at the federal level and at the state level. There is already at least one state, Illinois, that has very powerful controls on biometrics for commercial use. You cannot collect a biometric record in Illinois without consent. So these laws are possible; there's just no federal equivalent.

As we think about this, the first thing, especially in the commercial context, is consent. If the law says it's illegal to create a face print of my face for a service without my consent, that gives the power back to me. Right? I'm the one who decides whether I'm part of a face recognition system and what that looks like. Now, that can be a hard line to draw, because it's so easy to create this kind of face template from a photo without your permission. But it's a start, and it means responsible people who deploy face recognition technology will require consent. And after consent is obtained, you want transparency: you want people to know when face recognition is in use. That's the broad idea on the private side; we can talk more about it in the Q&A.

The government side is a little trickier. Government is going to do things, sometimes, without your consent. That's a fundamental reality of law enforcement, for example. So what do we do? In the government context, I think we fall back on some time-honored traditions that we find in the U.S. Constitution, in particular the concept of probable cause. Probable cause, which is embedded in the Fourth Amendment, is the idea that the government should be able to search for something if it is more likely than not that they will find evidence of a crime. To establish probable cause, they frequently have to go to a judge and say: I have reason to believe that going into this person's house will uncover drugs, and here's all the evidence that they were a drug dealer; and then they can search the house. We can deploy the same idea with face recognition. Remember that one fugitive I said I could go look for in surveillance camera footage? You might need to go to a judge first: your honor, we have probable cause to believe this person has committed a crime, they're likely to appear somewhere in this footage, and we believe we can arrest them if we find them. The judge can vet that evidence and sign off, and then the technology can be deployed.
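The probable-cause idea maps naturally onto system design. Here is a hypothetical sketch, entirely my own construction rather than anything Calabrese describes deploying: the search function refuses to run unless it is handed a judicially approved authorization or a documented emergency (the exception he turns to next), and every run leaves an audit record.

```python
# A hypothetical policy gate around a one-to-many search. The names and the
# SearchAuthorization shape are illustrative assumptions, not any real
# agency's system. Reuses the hypothetical embed()/is_match() from earlier.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SearchAuthorization:
    case_number: str
    approved_by_judge: bool      # probable-cause warrant signed off
    exigent_circumstances: bool  # documented emergency, reported after the fact
    expires: datetime

def log_search(case_number: str) -> None:
    """Stub audit log; transparency reports would be built from such records."""
    print(f"{datetime.now().isoformat()} face search under case {case_number}")

def run_search(probe_photo, gallery, auth: SearchAuthorization):
    """gallery: list of (person_id, photo) pairs."""
    if datetime.now() > auth.expires:
        raise PermissionError("authorization has expired")
    if not (auth.approved_by_judge or auth.exigent_circumstances):
        raise PermissionError("no warrant and no documented emergency")
    log_search(auth.case_number)
    probe = embed(probe_photo)
    return [pid for pid, photo in gallery if is_match(probe, embed(photo))]
```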
Similarly, there are exigent circumstances, and we have this in the law right now. Say I think there is an emergency: someone has been abducted, I believe they're still on, for example, the London Metro, which is blanketed with surveillance cameras, and I believe that child's life is in danger. There's a concept in the law called exigency, the idea that there's an emergency, I can prove there's an emergency, and I need to deploy the technology now. We can build those kinds of concepts into the law. I'm going into a lot of detail on this mostly because I think it's worth understanding that these are not binary choices. It is not: flip on face recognition and we're all identified all the time. I'm sure many of you are old enough to remember the movie Minority Report, which used a lot of biometric scanning; everybody was scanned, face recognition was happening all the time, and advertisements were being shown to them constantly. We don't have to live in that world. But we also don't have to say that we're never going to get any of the benefit of this technology, that we're never going to see it used for purposes that may in fact make our lives more convenient or more safe. So with that brief overview, I will stop, Shobita and I will chat, and then we'll take some questions and go from there.

Cool. So I've been thinking about this issue a lot, and I'm very interested in it, and I think I tend to agree with you in lots of ways, but I'm going to try my best to occasionally at least play devil's advocate. As my students know, I try to do that, although sometimes I'm more successful than others. Maybe first I'd be interested in your talking a little bit more about the accuracy issue. You said it's evolved over time; it's more accurate than it used to be; now NIST says it's accurate. First of all, what does that mean, and how is NIST determining that? Why don't we start there?

Sure, that's a wonderful place to start. Accuracy varies widely depending on how you're deploying the technology. Just to give you an example: if I am walking up in a well-lit customs office, even if it's not a one-to-one match where somebody's already holding my passport, if it's a well-lit situation and I'm looking right at the camera, you're much more likely to get a good face print, and one that's accurate. Especially if you have a database backing up that search, one that may have three or four or five or six images of me from different angles. That's a very optimal environment for a face print, and you're much more likely to get an accurate identification, especially if, as I mentioned before, you have a relatively narrow pool of people that you're running the search against. The reverse is true, obviously, if you have a side photo of somebody you only have a couple of photos of, and the photo quality isn't particularly good. You can see how the accuracy is going to go up and down depending on what the environment is. So part of the trick here, part of the thing we have to expect from policymakers, is to vet these kinds of deployments: how are you using it? What's your expectation once you find a match? How much accuracy are you going to attribute to it?
What's going to be your procedure for independently verifying that this person you just identified as the perpetrator of a crime actually committed that crime? Face recognition can't be the beginning and the end of it. As for NIST, they do exactly what you would expect: they have their own photo sets, they take the variety of algorithms that exist, they run those algorithms against their own data sets, and they see how good a job each one does, how accurate it is across a variety of different contexts.

And, to put a fine point on it, the accuracy doesn't just differ depending on whether the photo is straight on or from the side, right? One of the big issues is that accuracy differs across people: it's most accurate for white men, and then it degrades from there.

Thank you, I should have said that first, because that's really the most important thing. We are seeing a lot of racial disparity, mostly because of the training set data, though I don't know that we actually know enough yet to say whether it's 100 percent the training data, or whether there are other aspects of the machine learning that are also contributing. But we are seeing tremendous variation. And it's problematic not just because of the identification issues. Robert, you and I were talking about this earlier today: if the system does not recognize you as a person at all, that has all kinds of other potential negative consequences, because these are very automated systems. So it's a very big deal. It's also worth saying that I worry a little bit that people will say, well, once we fix that accuracy problem, then it's okay. I hope I've convinced you at least a little bit that the problem doesn't end even if the system isn't racially biased. Fixing the bias is the minimum bar we need to clear before we can even begin to talk about how we might deploy it.

So, linking to that: you mentioned a few cases of, I'll put it in my language, new forms of social control, or the reinforcement of existing forms of social control. Some of you in the audience may have heard about this, but I think it bears mentioning in this context. About a month ago, news broke that a contractor working for Google had been caught trying to improve the accuracy of the facial recognition algorithm for the Pixel 4 phone by going to Atlanta, where there is of course a large African American population, and asking homeless African American men to play with a phone, to play a selfie game. They were not consented, but their faces were scanned. That keeps ringing in my head whenever I'm thinking about this stuff, and I wanted to get your sense of it. What's interesting to me about it, and it ties to what you were talking about in terms of social control, is that the act of supposedly increasing the accuracy, arguably in order to better serve African American populations, actually ultimately serves to reinforce existing power dynamics and the discrimination that African Americans have historically experienced.
So I'm wondering: in the pursuit of this goal of accuracy, in the pursuit of this wonderful technology that's going to save our lives, these kinds of things are happening too.

Well, that is the funny thing about rights. Everybody needs to have their rights respected; everybody deserves equal rights. But the reality is that these are exactly the kinds of communities who most need their rights respected. They really need something like a consent framework, because they're the people most likely to have their images taken, because they have less power. They have less ability to say, I am not going to consent to this, and maybe less knowledge of how the system works. So when we're creating these rights, part of what we're doing is building on top of existing power structures and power imbalances, where I may have more power and you may have less. And hence it's even more important that people have the ability to actually exercise their rights and to know what they are.

Another piece of this, which I didn't mention in my talk, is that there are a number of already unfair systems that face recognition might be built on top of. I think one of the most illustrative examples is the terrorist watch list. There is an ever-changing list in the United States, maintained by a part of the FBI, on which you can be identified as a potential terrorist. That master list feeds into a wide variety of different parts of the federal government and affects things like whether you get secondary screening at the airport and, in rare cases, even whether you're allowed to fly at all. And this is a secret list. You don't know when you're on it, it's hard to know how to get off it, and the incentives are very bad. If I'm an FBI agent and I'm on the fence about whether to put you in the database, well, if I put you in and nothing happens, no harm, no foul; if I don't put you in and you do something bad, my career is over. So there's a lot of incentive to put people on lists. Now imagine putting somebody on a list and combining that with the power of face recognition. That creates an even greater imbalance, because now I've got a secret list and I've got a way to track you across society. That's an existing unfairness that has nothing to do with face recognition, but that face recognition can exacerbate.

So how would a consent framework work in that context, given that there are already places where this information exists, and given that we're in a society where our faces are being captured all the time? How would you envision it?

So, in a very technical way, you would consent to turning your face into a face print. You would consent to the creation of that piece of personal information about you. Literally the way your Social Security number is a number about you, this would be a number that encapsulates what your face looks like. That would be the point at which you would have to consent. And I think we might have to do some stuff around the lot of existing face recognition databases, either saying people need to opt in to those databases again, or something along those lines.
But the reality is that if you can catch it at that point, then at least, for the good actors, you're establishing that it's not okay to take somebody's face print without their permission. And then again, as we said, the government is a little different. And of course these are not magic wands: fixing the problems of face recognition doesn't fix all the other problems with society and how we use these technologies.

So, you mentioned going through customs, going through European immigration, and the ease of facial recognition there. That's the excitement of convenience, right? And you said maybe that's an acceptable use of it. When you said that, I thought, well, I'm not sure it is, because I worry a little bit that it normalizes the technology. Then people start wondering why it's a problem in other domains. Look, it worked when I went through immigration; why would there be a problem with using it for crime fighting, or in schools, or in hiring?

You know, it's always a balance. When I'm considering some of these new technologies, I tend to think about people's real-world expectations. In the context of a border stop, you expect to be identified. You expect that a photo is going to be looked at and that somebody is going to make sure that Chris Calabrese is Chris Calabrese. So that, to me, feels like a comfortable use of the technology, because it isn't really invading anybody's idea of what task is going to be performed. A less intuitive example, and this one is a little bit controversial, but one that I thought was okay: for a while, and they don't do it this way anymore, Facebook would create a face template, and that's how they recommended tags to you. When you got a tagged photo, they'd ask, is this your friend Chris Calabrese? That's face recognition. For a long time, they would only suggest people you were already friends with. The assumption was that you would be able to recognize your friends in real life, so it was okay to recognize and suggest them. Now, that's a little bit controversial; you're definitely not getting explicit consent. But maybe it feels okay because it doesn't violate a norm: you expect to identify your friends. They've since moved to a consent-based framework where you do have to opt in, but for a while they had that hybrid approach. So I think it's helpful to map to the real world.

I do think you have issues where you're potentially normalizing it. Another area I didn't bring up, but one that I think is going to be controversial, is face recognition in employment. Obviously, consent in the employment context is kind of a fraught concept; often you consent because you want to have a job. And you really do have potential there for the technology to creep. We're not going to do punch cards anymore; we're just going to do a face recognition scan to check you in. But then, of course, that same face recognition technology could be used to make sure you're cleaning hotel rooms fast enough, right? To track your movements across your day, to see how much time you're spending in the bathroom. These technologies can escalate quickly, especially in the employment context, which can be pretty coercive. So yes, there's a lot to this idea that we want to set norms for how we use the technology, because the creep can happen pretty fast and be pretty violative of your privacy and your rights.
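The consent framework Calabrese describes is easy to express as a design constraint. The sketch below is my own hypothetical illustration (Illinois's BIPA is a statute, not an API, and none of these names come from the talk): enrollment, the step that turns a photo into a stored face print, simply refuses to run without an affirmative, informed consent record.

```python
# A hypothetical sketch of consent-gated enrollment; names and the
# ConsentRecord shape are my own assumptions, not any real statute or vendor.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str
    informed_in_writing: bool   # told what is collected, why, and for how long
    opted_in: bool              # affirmative choice, not a buried default
    timestamp: datetime

def enroll_face(subject_id: str, photo, consent: Optional[ConsentRecord]):
    """Create a face template only when valid consent exists."""
    if consent is None or consent.subject_id != subject_id:
        raise PermissionError("no consent record for this person")
    if not (consent.informed_in_writing and consent.opted_in):
        raise PermissionError("consent must be informed and affirmative")
    return embed(photo)  # hypothetical embedding function from earlier sketch
```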
So I've been asking questions that are pretty critical, but I feel like I should ask the question that my mother would probably ask. My mother would say: I live a very pure, good life. I live on the straight and narrow. If I'm not guilty of anything, if I'm not doing anything strange, if I'm not protesting at the border, why should I be worried about this technology? Why should I care? It's fine, and it actually protects me from kidnapping and other things, and I'm getting older. It's a great public safety technology.

Sure. Yes, the old: if I did nothing wrong, what do I have to hide? I think the obvious first answer is just the mistake answer. Just because it isn't you doesn't mean that somebody may not think it's you, and that technology may be deployed against you, especially if you're part of a population that the system may not work as well on. So that's one piece of it. I also think: who are you hiding from? Maybe you're comfortable with the government, but are you really comfortable with the creepy guy down the street, who can now figure out who you are, and maybe, from there, where you live? That's legal in the United States right now, and it seems like exactly the kind of technology use we would really worry about. There were activists, and this isn't something CDT did, but activists from Fight for the Future put on big white decontamination suits, taped cameras to their foreheads, and just stood in the halls of Congress taking face recognition scans all day. They were looking for lobbyists for Amazon, using Amazon's own face recognition technology, and they actually identified a member of Congress. It's an interesting illustration of this idea that you are giving a lot of power to strangers to know who you are, and then potentially to use that for all kinds of things you don't have control over. We take for granted a lot of our functional anonymity in this country, and the reality is that face recognition, if unchecked, will do a really good job of stripping that functional anonymity away. Some people are always going to say, that's fine. But what I would say to them is: you don't have to lose the benefit of this technology in order to still have some rights to control how it's used. There are ways that we have done this in the past and gotten the benefit of these technologies without all of these harms. So why are you so quick to just give up and let somebody use these technologies in harmful ways when you don't have to?

So, I think in our earlier conversation this morning you may have mentioned this briefly, but I'm wondering, when you think about governance frameworks, how you think about the criteria for deciding what's a problematic technology and what is not. Is that the way to think about it, or are there other criteria? What kinds of experts should be making these kinds of decisions?
Is there a role, for example, for academic work, or research more generally, in assessing the ethical and social dimensions? And on what parameters, I guess?

It's a great question. I would say we want to start with having a process for getting public input into how we're deploying these technologies. The ACLU, and CDT has helped with this a little bit, has been running a pretty effective campaign of trying to get cities and towns to pass laws that say: any time you're going to deploy a new surveillance technology, you have to bring it before the city council. It has to get vetted. We have to understand how it's going to be used, so we can make decisions about whether this is the right technology. So, just creating a trigger mechanism where we're going to have a conversation first. It may sound strange to say this, but that actually doesn't happen all that often. Oftentimes what happens is a local police department gets a grant from the Department of Justice or DHS, uses that grant to buy a drone, maybe gets trained by DHS, and then flies that drone. They haven't appropriated any money from the city; they haven't put it in front of the city council. They just start to use it, and then it comes out. Sometimes the city council is really upset; sometimes the police pull it back, and sometimes they don't. But just having that public conversation is a really useful mechanism for controlling some of this technology. So I would say that's a beginning. Obviously state lawmakers can play a really important role. Federal lawmakers should be playing a role, but we're not doing quite as much governing in D.C. as maybe people would like. Without being too pejorative, we are at a bit of a loggerheads in terms of partisanship, and that makes it hard to pass things federally. But that's the wonder of the federalist system: there are lots of other places you can go. Academic researchers are tremendously important, because, as I said at the top, for a long time my answer to many of these technologies, this one specifically, was: it doesn't work. If an academic can say, this technology doesn't work, or, these are its limits, that's a tremendously powerful piece of information. It's really hard for your ordinary citizen to separate the snake oil from truly powerful and innovative new technologies, and I think technologists and academics play a really important role as a vetting mechanism, telling a policymaker who wants to know whether what they're being sold is true, yes or no. That kind of neutral third party is really important.

So, I don't know how much you know about this, but facial recognition has been particularly controversial in Michigan.
For over two years, Detroit was using facial recognition, through something called Project Green Light, without any of the kinds of transparency that you're recommending. It came to light with the help of activists, and the city sort of said, okay, fine. As far as we can tell, it was being used indiscriminately, and more recently the mayor came out and said, we promise we'll only use it in very, very narrow criminal justice contexts. But of course, Detroit is a majority African American city, one in which there is not great trust between the citizens and the government, so that kind of promise falls on deaf ears. And even though they're now using it, my sense is that one of the things still missing is transparency: understanding where the data comes from, how the technology is used, what kinds of algorithms are involved. There's no independent assessment of any of this. So I'm wondering if you know anything about this, or if you have recommendations for how, in those kinds of settings, you might try to influence that kind of decision making, because often these are proprietary algorithms that police departments are buying, and they're not even necessarily asking the right questions.

Right. It's a really compelling case study, because you're right: gosh, it's really hard to trust a system where they haven't bothered to be transparent or truthful with us for years, they get caught, and then they say, oh, I'm sorry, we'll put some protections in place. That's not an environment for building trust in a technology. It doesn't say citizens and government are partners in trying to do this right; it says, what can we get away with? So, in no particular order: clearly there should be transparency about who the vendor is and what the accuracy ratings for those products are. Without revealing anything proprietary, you should be able to answer the question of how accurate your algorithm is in a variety of tests. NIST tests these products; just Google "NIST face recognition test" and you can read the hundred-page report that evaluates all the algorithms. This isn't secret stuff. You should know when it's being deployed. You should be able to understand how often a search is run, what the factual predicate was that led to that search, what the result was, whether it identified someone, and whether that identification was accurate. These are fundamental questions that don't reveal secret information; they're just necessary transparency, and we see them in lots of other contexts. If you're a law enforcement officer, if you're the Department of Justice, and you want to get access to and read somebody's email in an emergency context, you say: it's an emergency, I can't wait to get that warrant. You then have to file a report. I won't bore you with the code section, but it's a legal requirement: you have to report why, and what the basis for it was. These kinds of basic transparency mechanisms exist for other technologies, and we kind of have to reinvent them every time we have a new one. The problems do not change that much; many of the same concerns exist. It's just that the law is often written for a particular technology, and so when we have a new technology, we have to go back and reinvent some of these protections and make sure they're broad enough to cover it.
So, in my field we would call this a socio-technical system. One of the things that you didn't say, but that I would think you would also want, relates to previous technologies, and there's a lot I would add here. I was just thinking about a recent lengthy investigative article in the New York Times about breathalyzers. That article talked about the calibration of the device, and ensuring that the device remains appropriately calibrated, but also about interpretation. It's a human-machine system, right? And in this case there may be a match, but it's a percentage match. You have humans in the system doing a lot of the interpretive work, who also need to be trained, and we don't have transparency about that either, do we?

No, we don't. And that's an incredibly important part of any system: understanding what you're going to do with a potential match once you find it. I'll use an example we talked about earlier. Probably about ten years ago, and I don't know if they still do it this way, I went to the big facility in West Virginia that handles the FBI's computer systems: the system that, when you get stopped for a traffic violation, the officer checks your driver's license against before getting out of the car, to make sure you're not a wanted fugitive. It's all headquartered there. One of the things they do in that facility is all of the fingerprint matches. So if I lift a print at a crime scene and I want to see if it matches against the FBI's data, this is where I send it. Now, what happens when they do a fingerprint match, at least as of ten years ago, with a technology that has been deployed for well over a hundred years? There's a big room, ten times the size of this one, filled with people sitting at desks with two monitors. On this monitor is a fingerprint; on that monitor are the five or six potential matches; and a human being goes through to see if the whorls of the fingerprint actually match the candidate print. So think about that: the technology is over a hundred years old, and we still have people making sure it's right. That gives you a sense of the gap between what automation can do and what the whole system has to do. Now imagine the protocol when I have a photo of my suspect and I've got six photos of people who look an awful lot like that person. How am I going to decide which is the right one? Maybe the answer is that you can't decide definitively; you just need to investigate each of those six people. And the reality is that with face recognition, it's often kicking out not six candidates but fifty. So there are real limitations. The technology is getting better, so I don't want to oversell those limitations, especially if you're doing other things, like narrowing the photos you're running against. But there are systems that will have to be built on top of the technology itself to make sure that we're optimizing both the results and the protections.
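The fingerprint-room workflow Calabrese describes, a ranked shortlist handed to a human examiner rather than a single answer, is straightforward to sketch. The code below is illustrative only, reusing the hypothetical embed() from the earlier sketch; the key design point is that its output is a list of leads, not an identification.

```python
# A minimal sketch of the candidate-list workflow: a one-to-many search that
# returns a ranked shortlist for human review instead of a single answer.
import numpy as np

def candidate_list(probe_photo, gallery, k: int = 50):
    """Return the k gallery entries closest to the probe, best match first.

    gallery: list of (person_id, photo) pairs. Every returned candidate is
    an investigative lead that still needs independent human verification;
    rank 1 is not an identification.
    """
    probe = embed(probe_photo)
    scored = sorted(
        (float(np.linalg.norm(probe - embed(photo))), person_id)
        for person_id, photo in gallery
    )
    return scored[:k]
```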
So, at STPP we've been doing a research project around this in our new technology assessment clinic, and one of the things we've noticed in our initial analysis of the political economy of this is that it is, of course, a global industry. So I'm wondering: what legal frameworks are evolving? What are the global dimensions of its use, and how are those interfacing with the legal frameworks? And does that have any implications for the way we think about it here in the US?

It has huge implications. There are a couple of things to think about globally. Maybe the first is that most developed, westernized countries have a comprehensive baseline privacy law that regulates the sharing and collection of personal information. If you were in the UK, for example, there would be rules for who could collect your personal information, what they could do with it, and how they get permission for it. And those rules, I believe, by and large do apply to face recognition. There may be some nuance there, but I think the expectation for people in those countries is that face recognition will be covered. That's important because it goes back to the idea I mentioned before: do we start with "justify why you're going to use the technology," or do we start with "go ahead and use the technology unless someone can prove there's a reason not to"? I think we want to be in the "don't use the technology unless you have a good reason" camp. But equally interesting, at least to me, is that as this technology diffuses, it becomes more global, and there are a number of countries that are real leaders in face recognition technology. Israel is one.
You may have a harder time controlling it if I can go online, go to an Israeli company, download face recognition software, scrape the LinkedIn database without your permission, and create a database of a hundred million people that I can then use for identification purposes. That's really hard to regulate. It may eventually be illegal in the United States, but from a regulatory point of view, it's a real enforcement nightmare to figure out when and how that system was created and how it might be used. So this globalization issue is a real problem: a US-based company may not do that, but there are certainly going to be places offshore where you can. I don't want to overstate the problem. There are lots of places where you can illegally torrent content, and lots of people do, but there are also lots of people who don't, because they don't want to do something illegal, or because they don't want to get a computer virus. But it is a real concern: with the internet and the diffusion of technology across the world, it can often be hard to regulate.

And it's being used in Israel, but also, I know, in China, right? For a variety of crowd control and disciplining purposes.

I'm always a little careful with China, because China is the boogeyman that allows us to feel better about ourselves sometimes: well, we're not China. So don't make China the only example of what you're worried about. But yes, China is a really good example of how you can use this technology. They are using it to identify ethnic minorities. They're using it in many cases to put those minorities in concentration camps, or at least to separate them from the general population. These are incredibly coercive uses of the technology. China is also becoming famous for its social credit scoring system. I think it's not yet as pervasive as it may be someday, but it's being used essentially to identify you and make decisions about you: whether you're a "good person" who should be allowed, for example, to take a long-distance train, or whether you should qualify for particular financial tools. Again, these are tools for social control: I can identify you, I know where you are, and I can make a decision about whether you should be allowed to travel and where you should be allowed to go. This, again, is part of, as you said, the socio-technical system that allows you to use technology to achieve other ends.

And at least perhaps a warning for us, right?

Yeah, it is a cautionary tale, but we have our own ways that we use this technology. Don't think that just because we're not quite as bad as China, we cannot be better in how we deploy these technologies.

Maybe we'll start by asking some questions from the audience. Do citizens have any recourse when facial recognition technology is used without their permission?

If you're in Illinois, you do.
Illinois has a very strong law. It has a private right of action: you can actually sue someone for taking your face print without your permission, and it's the basis for a number of lawsuits against big tech companies for doing exactly this kind of thing. I believe the practice is also illegal in Texas, but there is no private right of action there, so you hear less about it. Beyond that, the honest answer is probably no in most of the country. Although, if we were feeling kind of crazy, there are federal agencies that arguably could reach this. The Federal Trade Commission has authority over unfair and deceptive trade practices, so if they decided that taking a face print is unfair, they could potentially reach it. It's not something they've pursued before, though, and it would be a stretch from their current jurisprudence.

Another audience member asks: what led to the Illinois rule of consent, and what is the roadmap for getting new rules in place?

It's interesting, because in many ways Illinois happened really early in this debate. The Illinois law is not a new one; it's at least seven or eight years old. In a lot of cases, I think what happened was the Illinois legislature was pressing ahead of this technology, before there were tech companies lobbying against it, before it became embedded, and they just said: you can't do this. For a long time, the only people who were really that upset were, like, gyms, because you couldn't take people's fingerprints at the gym without going through more of a process. That, in some ways, is how we've had some success with regulating new technologies: get at them before they become really entrenched. We're kind of past that now, but as we see a broader push on commercial privacy, we're seeing a real focus on face recognition. People are particularly concerned about its deployment. We're seeing it in the debate over privacy legislation in Washington State, and it's come up a number of times in California, both at the municipal level and at the state level. Some of the other state privacy laws that have been proposed include face recognition bans. So I would say it's ripe to be regulated, certainly at the state level. And you've seen some federal action: a federal bill, fairly limited, but with some limits on how you could use face recognition, was introduced on a bipartisan basis by Senators Coons and Lee earlier this week. So there's interest across the board, but I would say right now the state level is the most fertile place.

Beyond policy advocacy, what actions can individuals take to slow the growth, or subvert the use, of this technology by companies or the government?
So, it's interesting. There are things you can do, right? You could actually put extensive makeup on your face to distort the print image; there are these privacy self-help kinds of things. By and large, as a society, we tend to look askance at somebody who covers their face. That's something we're maybe not comfortable with, but maybe we could become comfortable with it. And this is certainly an environment, an academic setting, where you could be a little different without suffering for it. If I tried to paint checks on my face and go to work tomorrow... well, I'm the boss, actually, so I can just do that. But if I wasn't the boss, people might look askance at me for doing it. Here, you could probably do it, and if somebody said, gosh, why is your face like that, you could explain: because we have face recognition deployed in our cities, and that's wrong, and this is my response. Maybe that's a little bit of citizen activism that can help push the issue forward. You can also try to stay out of the broader databases that fuel face recognition. Maybe you don't feel comfortable having a Facebook profile or a LinkedIn profile: anything that links a good, high-quality photo of you to your real identity is going to make face recognition much easier. Obviously, it's harder to stay out of the DMV database, and that's one that police are pulling from, so that's harder to escape.

What are the ethical and technical implications of the increased use of facial recognition for intelligence and military targeting purposes?

Oh, that's a hard one. The implications are very similar to the ones we laid out; the stakes are just higher. We're identifying people for the purposes of potentially targeting them for an attack. We've obviously seen drone strikes for at least the last seven or eight years, and you can imagine a face recognition enabled drone strike being particularly problematic, not just because drone strikes are really problematic in themselves, but because, going back to the whole argument about unfair systems, when you layer face recognition on top of them, you have greater potential for error. To be fair, and I'm loath to be fair here, because I think drone strikes are unjust for so many reasons, you could argue that it actually makes it more likely that you won't target the wrong person, that it's another safeguard you could put in place. That is as charitable as I can be with drone strikes.

Now, this audience member wants to know: what can we do when biometrics fail? For example, your facial measurements change as you age. What are the implications for facial recognition's validity and reliability over time?

There's a big impact, certainly, for children. As you grow up, your face print changes substantially; the prints become more stable as you grow older. As an adult, there is an impact, but if you have enough images and a robust enough template, the aging process has been shown to have less of an effect on accuracy. A lot depends on how many photos you're using to create that initial template you're working from. There's also an issue with transgender people. I haven't read it in detail, but I was just reading today that there are many DMVs that force a transgender person to wipe off their makeup and appear as the gender they were assigned at birth, and that's what's used for facial recognition.
There's also an issue for transgender people. I haven't read it in detail, but I was just reading today that there are many DMVs that force a transgender person to wipe off their makeup and appear as the gender they were assigned at birth, and that photo is then used for facial recognition.

I think one of the things that's interesting to me about what you've said is that, yes, this has very difficult implications in terms of criminal justice, but there are also these quieter kinds of social disciplining happening at the outset, in the process of data collection. Super interesting, and disturbing.

Well, we're interested in technology; that's part of why you get into this sort of thing. Technology is often a multiplier: it can multiply benefits in society, and it can multiply harms. That's true of many tools, and technology is a tool. So yes, there's no question that as these systems are deployed more broadly, you're going to see these kinds of impacts in all kinds of unexpected ways.

What kind of regulation should be put in place to protect data collected by big companies such as Apple?

That's a really good one. We haven't talked at all about data protection, but it is worth understanding that this is personal information, just as Social Security numbers are personal information. You should expect good cybersecurity protections for it. You should be able to access that information, find out how it's being held, and delete it if you want. Those are rights you would have in the EU, for example. We do not have them in the United States, by and large, except in California once the new California Consumer Privacy Act goes into effect in January. Apple does some interesting things that are illustrative here. Apple doesn't actually take the biometric off the device. When you take a face print through Face ID, or previously a fingerprint, it's stored in a physically separate place on the device, separated from the rest of the systems in the phone, to make it even harder to access. That's a really good privacy protection: if a hacker wants to get at that biometric, it's much harder to do. And it's illustrative of a broader concept we should all embrace, the idea of privacy by design. We can build some of these systems at the outset so they are more privacy-protective. We don't have to wait until we see the harms and then try to backfill protections; we can anticipate some of these problems and build systems that mitigate them from the beginning.
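As a rough illustration of that privacy-by-design pattern, here is one way to express "the template never leaves the device; only a match result does." The class and method names are hypothetical, not Apple's actual API:

```python
import numpy as np

class OnDeviceBiometricStore:
    """Illustrative sketch: the biometric template lives only inside this
    object, comparisons happen locally, and callers only ever receive a
    yes/no answer."""

    def __init__(self):
        self._template = None  # held locally; never transmitted

    def enroll(self, template: np.ndarray) -> None:
        self._template = template / np.linalg.norm(template)

    def verify(self, probe: np.ndarray, threshold: float = 0.6) -> bool:
        # Only this boolean crosses the trust boundary, not the template.
        if self._template is None:
            return False
        probe = probe / np.linalg.norm(probe)
        return float(np.dot(self._template, probe)) >= threshold

    def __getstate__(self):
        # Refuse to serialize: the raw template cannot be exported,
        # uploaded, or backed up out of this store.
        raise TypeError("biometric template is not exportable by design")
```

The point of the non-serializable store is that the protection is structural: even a later feature request to "sync templates to the cloud" would fail by construction rather than by policy.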
How can the government subject a technology like facial recognition to a moratorium when private companies are already using it?

That's a very good question, and the answer varies a lot depending on who's doing the regulating. For example, the city of San Francisco cannot regulate Amazon's use of face recognition; it can only regulate how the city of San Francisco itself chooses to deploy the technology. It just doesn't have the authority. But a state can impose a moratorium. A state could ban face recognition, it could require consent, or it could say, we're going to have a moratorium while we think about rules. States have that authority, and because there's no overriding federal law, that power devolves to them. And similarly, the federal government could do the same thing.

Would the increased accuracy of face recognition just lead to better surveillance of a group that's already disproportionately targeted by the criminal justice system?

Yeah, it could. I think that's certainly what we'd worry about. This is not a face recognition example, but we're starting to see artificial intelligence deployed to do things like pretrial bail determinations. When a judge decides whether I get released on bail or have to stay in jail, there are off-the-shelf technologies (COMPAS is one of them) that will output, essentially, red, yellow, or green. Nominally they're not making the determination, but they are making a judgment: red is definitely don't release, yellow is maybe, green is you should. And judges, by and large, are following those determinations very closely. I won't get into the details, but there are real concerns about racial bias in how those assessments are made, in the training data that's used, and in the way the factors are weighted. Now, the current system for making bail determinations is really bad too; judges don't actually turn out to be very good at this either, and they tend to rely on their own sets of biases. So it's not that automating this process is automatically bad. The trick is that you have to automate it in a way that's fair, and that's harder. It requires more understanding from policymakers about how the technology works, and it requires more deliberation about how these systems are built.
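To see why "fair" is the hard part, here is a deliberately simplified simulation, with invented numbers and no relation to COMPAS's actual model, showing how a score threshold that looks neutral can produce very different false-positive rates for two groups whose score distributions differ, for example because of skewed training data:

```python
import random

random.seed(0)

def risk_band(score: float) -> str:
    # Map a numeric risk score to the red/yellow/green bands described above.
    if score >= 0.7:
        return "red"
    if score >= 0.4:
        return "yellow"
    return "green"

# Hypothetical population of (group, score, actually_reoffended) records.
# Group B's scores are shifted upward, standing in for bias baked into
# the training data or feature weights; true outcome rates are identical.
population = []
for group, shift in (("A", 0.0), ("B", 0.15)):
    for _ in range(10_000):
        reoffended = random.random() < 0.3
        base = random.gauss(0.5 if reoffended else 0.3, 0.15)
        score = min(1.0, max(0.0, base + shift))
        population.append((group, score, reoffended))

for group in ("A", "B"):
    innocent = [s for g, s, r in population if g == group and not r]
    flagged = sum(1 for s in innocent if risk_band(s) == "red")
    print(f"group {group}: innocent people flagged red = {flagged / len(innocent):.1%}")
```

Running this, group B's innocent members are flagged "red" far more often than group A's, even though the threshold itself never mentions group membership. That is the structural concern behind the bias worries mentioned above.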
How often are facial recognition databases wiped? If I'm in one, am I in it for life?

That really depends on who created the database. In a lot of countries, in Western democracies, there may be data retention limits: for any kind of personal information, the expectation is that you delete it after a set period of time, or after the person hasn't used the service for a set period of time. But that's going to vary widely depending on the jurisdiction and who holds the data.

Is there a way to encourage tech companies to innovate and develop with consent in mind from the start, rather than retroactively putting protections in place after they've been caught?

There are a lot of ways, some more effective than others. Tech companies are, I think, becoming more sensitive to these questions. The tech backlash we've seen over the last couple of years is real: people are really worried about these technologies, and companies are really worried about people being worried, because they want people to use them. So we're seeing a lot of different ways to put pressure on. We're seeing it in state and federal laws. We're also seeing individual employees of those companies putting pressure on them to behave more responsibly. One of a tech company's most precious resources is its engineering talent, and if the engineers aren't happy, that can make real change in the company. Saying, we, the employees of a big tech company, want to deploy these technologies in a more responsible way, really is a way to make meaningful change. There are just a lot of ways. I think we're in a bit of a moment where people are paying attention to this technology, and that gives us a lot of opportunity to push changes across the board.

Is consent the right way to think about it? In an individualistic society like the U.S., individual consent seems like the straightforward frame, but this is a technology that implicates families and communities. I'm thinking about forensic DNA databases as an analog, for example, and in the conversations around DNA databases and biobanks there's been a lot of discussion about how consent is an inadequate way of thinking about this. So I'm wondering: are there alternative ways of thinking about it?

So, I'm not a big fan of consent as a solution to privacy problems. I think we all understand that checking that little box that says "I agree to your terms of service" isn't working for us; I don't think anybody feels like their rights have been protected by that process. So one of the things we've really been pushing is the idea that we need to put some of the responsibility back on the data holder, as opposed to the person who's consenting. But I do think we can do that in a way that's closely analogous to what we think of as true consent. To give you an example: when I use my iPhone... actually, I use an Android phone, because I'm a fuddy-duddy, and my kids are always asking why. I don't have Face ID, but if I had it and used it, I would understand what's happening. I understand that I am giving you my face template in exchange for the ability to open my phone. That's pretty close to the pure idea we have of consent: I get it, I know what the tradeoff is. So the trick, I believe, is to stop there and say: congratulations, you got consent to collect that face template for the purpose of letting somebody open their phone. You don't get to do anything else with it. That's it. We're going to make that a hard use limitation. If we do that, then you, as the data holder, are responsible for holding that line: you understood what the benefit was, and you don't get to use the data for anything else. That way we really do honor the individual's desire to allow more or less use of this kind of technology. So I do think there's a role for consent; it just can't be a get-out-of-jail-free card that says, once I've gotten consent, that's it, I'm good, I can do whatever I want.
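That kind of hard use limitation can even be expressed directly in code. The sketch below is hypothetical (`ConsentedTemplate` and `PurposeViolation` are invented names, not a real library's API): each template is bound at collection time to the one purpose the user consented to, and any other use fails loudly.

```python
from dataclasses import dataclass

class PurposeViolation(Exception):
    """Raised when data is accessed for a purpose consent never covered."""

@dataclass(frozen=True)
class ConsentedTemplate:
    # A face template bound, at collection time, to the single purpose
    # the user consented to (e.g., "unlock_device").
    template: bytes
    consented_purpose: str

    def use(self, purpose: str) -> bytes:
        # The hard use limitation: the stated purpose must match exactly.
        if purpose != self.consented_purpose:
            raise PurposeViolation(
                f"template consented for {self.consented_purpose!r}, "
                f"not {purpose!r}")
        return self.template

# Usage: unlocking works; repurposing for something else is refused.
record = ConsentedTemplate(template=b"\x01\x02", consented_purpose="unlock_device")
record.use("unlock_device")     # allowed
# record.use("ad_targeting")    # raises PurposeViolation
```

In a real system the purpose check would sit behind an audited access layer rather than a single class, but the idea is the same: consent is recorded as a constraint the data holder must enforce, not as a one-time waiver.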
Is transparency the right way to think about this issue, considering that transparency could mean opening up all the data to everybody? It's a question of definitions and values as we frame the issue.

Transparency is interesting in this area, because it doesn't work super well for what we're talking about. The fact of the matter is, if I put up a sign that says "face recognition in use in this facility," but I need to use that facility or I want to shop in that store, the transparency is worthless to me. It's not a useful method. I do think transparency can be useful in the way we described before: understanding, as part of a broader system, how the system is being used, how many searches are being run, who might be run against a face recognition database, how accurate the system is. There are ways we can use transparency to really drive accountability. But transparency itself is probably not an optimal tool here, for a lot of reasons: it's hard to escape the technology, and it's also hard to know, as a user, how it's being deployed. Being transparent about the fact that you're deploying it doesn't necessarily help me understand what's actually happening.

We've heard a lot about policies governing the use of facial recognition technology; what about the technology itself? For example, last week news reports described cameras being marketed in China with built-in minority detection.

Yeah, I think regulating the technology itself is really important. We're seeing more and more internet-connected cameras, whole networks of cameras that can take a variety of add-ons, so regulating when we're actually using the technology really matters. Here's a great example. Activists were for many years very excited about police body cameras, this idea that with a body camera we could really see what happened at a crime scene or during a confrontation with the police. As they've become more widely deployed, we've started to grapple with the real limitations of this technology: police turn the cameras off, often they're not pointed in the right direction, or police are allowed to look at the footage before they write their reports and then write a report that matches whatever the footage shows, no matter what, which lets them curate the record. Now imagine a company that makes many of these body cameras says, I'm going to put automatic face recognition on all of them; it's a great new technology to help everybody. What you've done is take a tool that was supposed to be a tool for social justice, that was supposed to protect people in their interactions with police, and turn it into a surveillance tool. Now, as a police officer, I get to identify everybody as I walk down the street, identify people on my patrol, potentially put them in a database and track where they are, know who everybody is, and rely on that identification in ways that may be problematic. We've flipped the presumption: it's gone from something that's supposed to benefit those communities to something that may actually harm them. So yes, we have to think, when we're deploying these technologies, about the context they'll be used in and who's actually going to benefit from them.
And the last question. We're in a public policy school, and a lot of the folks getting master's degrees or undergraduate degrees here will go off into policy or law; they'll be in a position of having to navigate this someday. I'll look for you someday! Well, yes, perhaps. Our conversation hasn't been too technical, but this is a technical issue, and people often say, oh, that's really technical, I don't understand it, and they black-box it and say, I can't deal with it. And yet it's incredibly consequential, as we've been discussing. So for students who are interested in this, or who are just generally pursuing policy careers, which, given the size of this issue, are likely to intersect with it, what kinds of training, classes, and expertise do you think are useful for navigating these technical issues? In your own career, you've come from law and had to navigate pretty technical questions. I'm wondering how you think about this.

So, I was purposely not making this too technical a conversation, because I don't think it needs to be. You can all understand the concepts I'm talking about; we don't need to get deep into the weeds of the technology to understand its policy implications. I think you do have to be willing to ask hard questions, be willing to explore what's under the hood, and be really skeptical about claims about the efficacy of technology. Technology is often treated by policymakers like some sort of magic fairy dust you can sprinkle on problems to fix them all, and it very rarely works that way. So anytime someone comes in and says, I've got a silver bullet that's going to solve everything, your antennae should go up: I'm about to be sold a bill of goods. You have to ask hard questions, and then you have to go to your own set of validators. You may not be a technical person, but your local university certainly has a neutral expert who can tell you whether the claims being made are real. Congress has also been pushing in recent years to add more technology policy fellows, so there are more people around with a background in technology policy. You don't have to be a technical expert; you just have to be willing not to accept any claim you're offered as unvarnished truth, to probe pretty deeply, and to look for outside experts to help you sort fact from fiction. If you do that, if you just get to the point where you can separate the stuff that works from the stuff that doesn't, you will be miles ahead of many policy discussions, because you'll at least be having a factual discussion about what technology can do, as opposed to a wishful discussion about what we'd love it to do in an imaginary society.

Well, I certainly endorse that as well. Thank you very much.

Thanks so much.