 I wouldn't want to miss the chance to introduce Margo, who has become a real force of nature in the field, such as it is, of cyber law. She has worked on intermediary liability, on intellectual property and trade, and on robots, where she has been part of a four-school consortium, oddly enough not including us, which is an outrage and should be remedied very soon. That's right. She'll be teaching on robots and the law over the course of the spring. And today she's going to be speaking about robotic surveillance, authorship or intrusion, drawing upon her work, I gather, at the Yale Information Society Project, our counterpart in New Haven, and on her work at EFF and in other such places. I should also say, Amra will kill me if I don't, that we are being intruded upon as we speak by microphones and cameras; we are being webcast, this is being recorded, your permanent records are being altered as we speak, so just be alert. But Margo, we're so pleased that you could make it here, and thank you so much for speaking today. Thank you. Oh, that'd be wonderful, yeah, and good luck with your flight. Oh, thank you. I guess I was already introduced, so half my work is done. So thank you all very much for having me and for introducing yourselves; I appreciate it. This piece on robotic surveillance is, and this is my disclaimer, in very, very, very early form. I've been spending a lot of time recently looking at surveillance issues as they arise in drone regulation, which is where some of these questions come from. This project is an attempt to take some of the issues that I've seen arise in the drone context and figure out how they translate to other kinds of robots. As Jonathan mentioned, I'm currently co-teaching a class on law and robotics, so some of what I'm going to talk about today comes out of the discussion we had only yesterday in my class about whether robots are, in fact, exceptional. 
So my thesis, insofar as there is one so far, is that it's a sort of soft exceptionalism for robots and privacy. I'm not saying that robots are going to change absolutely everything about privacy law, but I am saying that robots are going to stretch doctrine and stretch legal metaphors in some really interesting ways. And the second observation I came upon relatively recently, again from looking at what's happening in the drone space, is that robots are going to push an interaction that's been long coming between two different kinds of privacy regimes in the United States: the regime that we use to govern spatial privacy in real, embodied space, and the regime, mainly federal, some state, that we use to govern information privacy, which often takes place not so much in real space. In short, this is happening because robots are embodied software. They're software put into a physical being. That's not a complete definition, nor is it an exclusive definition, but it's the definition I'm working with for this particular project thus far. So robots as embodied software are going to raise questions that are coming up across different areas of legal privacy and speech doctrine. They're going to raise, or push, the question of what counts as an author: whether you have to be human to be an author, or whether software written by a human can be an author. What counts as speech? The central issue I've been dealing with in the drone context is the idea that there is a First Amendment right to record, that you have a First Amendment right to videotape things, not just an ability to videotape things. And the third question robots as embodied software are really going to push is what counts as privacy harm. We're seeing this happen right now in the NSA discussions: there's a question over whether automated recording counts as harm, or if recording only counts as a privacy harm when it's viewed by a human being. 
And the other what-counts-as-privacy-harm question that robots are really going to push forward comes from taking information privacy issues and putting them in real space. You see this in the case of US v. Jones, right? Traditionally, when you're in public, you haven't had a reasonable expectation of privacy. But if software is evaluating all the times that you've ever been in public, ever, you might have a reasonable expectation of privacy in that information put together with other information, which is known as the mosaic doctrine. I should probably also backtrack to a different definition that I'm relying on, which is the definition of privacy. Privacy is defined very differently in different areas, and this is not a uniform definition at all. But for the purposes of the robotic privacy problems of taking software, embodying it, and putting it in real space, I'm working from a social psychologist's understanding of privacy. So those of you who are coming from interdisciplinary backgrounds can definitely help me work with this. I was only recently exposed to it, but I think it's pretty valuable. His name is Irwin Altman, and he's coming out of the field of environmental psychology. The idea is that privacy is a process of boundary regulation by embodied subjects. If you are existing in a space, you use different kinds of tools to mediate the relationships that you have with other people in that space. That boundary management can be dependent on physical boundaries, such as walls, right? When we walk into our home, we tend to expect that things can't see through the walls, whether that's correct or not. Or boundary management can be based on social relationships. So if I invite you into my home, I assume, based on our existing social relationship, that you're not going to walk around taking photos of my most intimate effects. 
But the interesting thing is that this understanding of privacy is mainly located, in US privacy policy, in state privacy torts: in the kinds of injuries that involve people, usually a human actor, intruding on the boundary management tools that we as individuals use. And robots are going to start interacting with privacy torts in ways that information-generating tools such as software usually don't, right? A robot, which is code, is able to walk into my house, into a space that I didn't want to open up to it. My computer is not able to do that. So there are these ideas in privacy scholarship that there are existing practical limits to surveillance that have just been built into our understandings of the world around us, and this is part of how we deal with managing those boundaries in relationships. The example I most want to lean on is the idea of privacy in reading, right? If I have a book, a physical book, my default assumption is that the book is not tracking me as I read through its lines. And we didn't have law built in that said you shouldn't track where I read, because the assumption about my boundary management was built into the technology itself. When we moved from having the book to having ebooks, that changes, right? My default boundary management, because I grew up reading physical books, may stay the same, but what happens in terms of surveillance is going to change. And I think that robots are going to do something very, very similar. They're going to be like the transition from a physical book to an ebook. They're going to challenge these physical and social mechanisms that we use to manage boundaries, because when we have them in our home, they're not going to behave like a refrigerator. They're going to behave more like our cell phone or our computer or the digital book that tracks everything. 
So the reasons that robots are going to challenge that assumption of boundary management have to do with how an individual robot is constructed. One reason is that they have human-plus sensors, right? We may assume that a wall gives us privacy, but if you are a robot using non-visual surveillance tactics, that's not necessarily the case: with a heat sensor, you can look through blinds, et cetera. Robots will challenge these tools because they have human-plus memory. This again goes to the information privacy issues. We depend on the fact that when we walk outside in public and interact with other people, they're not going to remember exactly where they were when they met us, all the time, perfectly. But a robot or GPS can perfectly remember where we've been, all the time. They're going to challenge our use of boundary-managing tools because of their information-processing capabilities: they can correlate where we've been with what our political affiliations are, et cetera. And then they can challenge our use of those tools because they can actually, physically, walk around the wall. They have embodiment and mobility. Ryan Calo has brought up an interesting point, and this actually intersects nicely with Kate's work, that robots also may challenge our boundary management tools because they're anthropomorphic, right? This could go either way; we don't have empirical evidence on which direction it's going to go. Robots may challenge our management of boundaries because we look at them, they have faces, and we trust them. Or the fact that they have faces or other anthropomorphic characteristics might cause us to behave more carefully and not behave like ourselves around them. And the last one I've added, which I haven't really seen people say before, is the idea that robots are going to challenge our boundary management tactics because they're not human. So if I have a robot in my home and I'm talking to it, it differs from the person in my home that I'm talking to. 
Because I have, again, social norm control over the person in a way that I won't have over the robot. The robot's been programmed from an external location and doesn't necessarily have a sense of what's appropriate or not appropriate to share. So I want to go a little bit to the concrete challenges, rather than the theoretical leap, that robots bring into privacy problems. The concrete challenges are going to take place in two locations, again because this whole thing is really location- and embodiment-based. One location is in public, right? That gets us into the complicated questions of whether we have privacy in public. The way robots will change, the way drones are already changing, the way GPS is already changing privacy in public is that surveillance can be low cost, can be highly personalized, and can be correlated with a whole lot of other information about you. But it's not necessarily clear that robots end up stretching that much further beyond GPS in this case. The other place where there is a conflict, as I've mentioned, is privacy in private. And this idea that robots are partially information privacy and partially spatial privacy is going to come to a really interesting head where we have a robot whose EULA I have clicked, accepting its presence in my home, committing a privacy tort. The legal question is going to end up being: if I've consented to having the robot observe me in my home, does that contractual consent end up overruling the fact that the robot has committed a privacy tort in some way? There are a couple of cases going in either direction on this. So that is for much further research. Now I want to back up a little bit into the authorship-or-intrusion question, which is where all of my interest in this came from, and show how these tensions have played out in the drone space. 
In 2012, as a Valentine's Day gift to us, Obama signed Congress's FAA Reauthorization Act, which asked the Federal Aviation Administration to have civilian drones in civilian airspace by 2015. That timeline is now fairly delayed, so 2015 is no longer the date, but it's in process. In response to the FAA Reauthorization Act, a number of privacy advocacy groups pointed out the new privacy problems posed by drones. They pointed out also that the Federal Aviation Administration has a lot of experience regulating safety but doesn't have a lot of experience regulating privacy, and appeared to have no plan in place at all for the kinds of privacy problems that would be raised by drone use. So there were three or four types of privacy legislation proposed, some of which have passed. On the federal level, there was a bill proposed by Markey that regulated drones from an information privacy perspective. It basically said: if you are a drone operator, an unmanned aerial vehicle operator, you have to submit your code of information practices when you apply for your drone license, and the FTC will have the authority to keep you working within the bounds of that code, which should align with the usual ideas of fair information practices for information privacy. So this is the example of robots, drones, being treated as an information privacy problem, even though they're going to exist in real space. On the other hand, we had forty-plus states considering drone legislation. The states approached drones from an electronic privacy perspective, treating the use of drones to record things as wiretap-like behavior, asking the government to get warrants before they record, and also, much more interestingly, governing private drone use of recording. 
So the Texas example, I think, really highlights this authorship-versus-intrusion idea. When you have the federal government recording, there's no question over authorship, because the federal government doesn't have a First Amendment right. When you have a private party recording, the private party potentially has a First Amendment right, possibly depending on context. Texas has a drone law in place now that says you cannot use a drone to capture an image of a person or property without the express consent of the person who owns or lawfully occupies the property that's been captured in the image. There is a fact pattern that actually did occur that shows why this is potentially problematic from a First Amendment perspective. Prior to this law passing, a Texas drone photographer, who has remained nameless, so we don't know if they were press or a private citizen, flew a drone over a meatpacking plant, saw that there was a river of blood coming out of the plant into a public waterway, and sent the photo to the EPA. Enforcement actions ended up resulting from that. But with the Texas law in place requiring permission of the property owner, that photographer would be in violation of the law, because they didn't get consent from the meatpacking plant, which obviously would not be given. So that is the authorship-versus-intrusion problem: you have the authorship of the drone photographer running into the feeling of intrusion on behalf of the meatpacking plant owners. You can imagine this, and again this fact pattern actually occurred, in a scenario where it weighs more towards protecting from intrusion than towards protecting authorship. There was a woman in Seattle, sitting on the third floor of her building, who saw a drone hovering outside her window, came down, and saw that the drone operator was standing on the public street. So she called the cops. 
The cops said, we can't do anything about this; he was standing on the public street. The guy left, and she obviously felt that she was being intruded on, and his authorship claims probably were not as strong. The other example from drone legislation that I think illustrates some interesting things is the effort on the part of a Colorado town named Deer Trail to pass a drone hunting license. This is not treating drones as an information privacy problem, and it's not treating drones as a spatial privacy problem; it is figuring out defensive ways to combat intrusive technology. And, leaving aside how serious this is, the vote's currently been delayed until April 1st of this year. Also, there's a bounty on shooting down federal drones: if you shoot down a drone that has any federal government demarcation on it, you can receive anywhere from $25 to $100. The Deer Trail hunting ordinance example, I think, does connect to the Texas intuitions a little bit, in that it shows an intuition that privacy and property should overlap in the real world, right? If you're standing on your private property and you see this unmanned vehicle approaching, Deer Trail wants to give you the ability to defensively shoot it down. Okay, I'm going to skip over that because I do want to actually get to questions. So in this right-to-record debate that I brought up with the meatpacking plant example, the fact that the drone is a robot, let's say the drone is entirely autonomous and is not being operated by a remote operator, brings up a couple of interesting doctrinal questions. It brings up the question of what counts as an author. 
If we count the drone photographer who flies over a meatpacking plant as an author for purposes of First Amendment protection, are we going to count the drone that autonomously flies over the meatpacking plant as an author, even though its recording is just something that's happening automatically? There has been doctrinal debate over Google search results and whether those count as speech; this basically is going to push that debate into a new space. Then there will also be doctrinal movement over what counts as speech, right? There is an existing debate over whether code counts as speech, which comes up in hacking cases; when code causes a non-speech-related harm, a non-information-related harm, courts look at it as a sort of symbolic speech question: how much are you regulating the speech versus how much are you regulating its non-speech harm? With recording, they might end up going in that direction; it's not entirely clear. And then, as I brought up in the beginning, there's the question of what is going to count as the privacy harm to weigh against that speech. So we have a location-based privacy harm, potentially. You might have more, and this is the way privacy courts have traditionally treated this, and actually the way the Fourth Amendment up until Jones treated it, you might have more of a reasonable expectation of privacy if you're in private than if you're in public. But again, the Jones idea of information-based harm, of the mosaic doctrine, pushes against that, and we might end up finding that there's a reasonable expectation of privacy that can be violated in public. And then, as I said earlier, we also end up with: does automated watching count as harm? What happens in the NSA debates is going to end up interacting a lot with what we consider to be harm in the robotics context. 
All right, so in summary, and I want as much input as possible, please: robots, because they are embodied software, are going to end up challenging our ability to manage relationships through existing boundaries, whether physical, legal, or social, in the physical world. And because of these challenges, they're going to bring up a lot of interesting theoretical and doctrinal collisions between spatial privacy governance and information privacy governance. Doctrinally, they touch on a lot of the fast-developing debates happening in cyberspace. And then, because I'm at the Berkman Center, I think the question that I haven't asked that needs to get asked is the relationship between any attempts to regulate privacy and innovation in general. Will attempts to regulate privacy end up preventing the kind of innovation that we want to see in this space? Thank you. You mentioned in the beginning, briefly, the anthropomorphism question, and how making robots anthropomorphic might give people the incentive to reveal more personal information, or could go the other way. And I just wanted to push back on that a little and say that it's likely to go in the first direction, because it's a design problem that's very simply solvable. For instance, instead of making a human face, you can design robots to look like cute little animals, and that's immediately non-threatening in a way that will make people bond with them more. I think this totally ties into your general argument, by the way. But also, I don't know if you saw the movie Her. Not yet. Yeah, well, okay, I won't spoil it for anyone. But Scarlett Johansson is not threatening. She's not threatening. And the movie unfortunately doesn't look at any of these questions. But you could imagine the privacy implications of software creating that kind of bonding effect. I think that will be really easy to design and a huge problem. Yeah. 
I think the reason I was saying it could cut either way is because of the social science research showing that when you have a pair of eyes watching you, you behave differently. So it's the chilling effects idea. Your idea is that a non-threatening pair of eyes looking at you, Scarlett Johansson's eyes looking at you, would not cause the chilling effect, but would cause you instead to confide in her. There's been research about people putting their robotic dog in the closet when they change their clothes, but that's actually not the privacy I'm really interested in. I'm really interested in data and data collection, not people getting seen naked. So I think the latter is going to be more of an issue, and the eyes aren't really going to help with that. Yeah. About a month ago, Sherry Turkle was talking here at Harvard, and this is exactly what she's dealing with right now in her research, as many people know. It's interesting to try to understand what that means: how do we relate to our machines? How do we relate to our machines when we make them more like us in various ways? And one of the conversations that I've seen around Her, which I haven't seen, is the idea that people are beginning to imagine AI as different from human beings in that an AI can have multiple relationships at the same time, human-plus, so that her Scarlett Johansson can be in love with him, the person using her interface, while at the same time interacting with, and maybe being in love with, a whole variety of other users. So there's maybe another layer of privacy and intrusion as well: the idea that your default as a human is to think that the relationship is a one-directional relationship. Yeah, so I guess I thought about that a little bit. Robots scale differently. Yeah. I thought about that a little bit in terms of a third-party problem. 
But I think it's interesting to think that if you and your spouse both have the same model of robot and it's behaving with you, whether you would adjust your behavior with it accordingly. And do the robots talk among themselves? Behind your back. That's a confidentiality problem. Yeah. I have a question. I started working on drones a little bit too, and I thought it's very interesting, because people project stuff onto drones but they're not familiar with the object. It's still a new thing, a new flying object. And it's really odd when you observe the same privacy questions translating into cars, which are also somehow in the process of being roboticized, in a slower version, because people project stuff onto cars, right? They know their cars and know what they can do, so it's not a whole new question. It's the transformation of an object with specific legal regimes into a new robots question. And it's what we're seeing in the Jones case, right? Suddenly the car is behaving differently. It's not your car anymore. Maybe you're not driving it. Maybe it says things about you with the data. And I wanted to know how you're dealing with these objects that will also shape existing regimes, in this particular framework in which people think they know where this is going and are surprised by the transformation of the object. Yeah, no, that's great. So I do think there's going to be a spectrum of robot privacy problems, right? And you can look at the GPS problem or the car problem as being just on the edge of the spectrum from what we think of as normal cyber law privacy problems. I think the reason that cars aren't necessarily creating the same kind of reactions is because of the way they overlap with our intuitions of spatial privacy control. Cars exist mainly on the street, right? And one of the biggest fears that people have about drones is that they move off of the street and are able to look in through your window. 
So that may not be a satisfying explanation, because technically they're very similar, but the spatial occupation is different. Yeah. What kind of legal doctrine is there to determine who's at fault? I mean, taking your Texas example and the abattoir: if a drone flying overhead that's broadcasting sees that, and the law's in place, who is the action against? Is it the drone manufacturer, the person who flew it, the website that's broadcasting it? There are a lot of issues about who's on the other side of a tort when privacy and drones, when privacy and robots, come together. Excuse me, can I add something to that? If someone was walking by and took a photograph of it, the law would not apply in the same way, isn't that correct? Yes. Even with a tall ladder. Yes, which is why the Texas law is really special. And there was a great series of photographs published from Stanford that illustrates this, not intentionally speaking to the subject matter, but they ended up functionally speaking to it. Stanford had a photo slideshow that showed photos taken from tall construction ladders, from unmanned aerial vehicles, from helicopters, and from people at high vantage points in buildings with telephoto lenses, and they were absolutely indistinguishable. So there's a question that goes to privacy harm: is the state justified in regulating in a way that runs into the First Amendment when it's protecting from a privacy harm that's really no different from what's being created with other tools? It seems like a knee-jerk reaction of, we have to protect our immoral activities behind these walls. I mean, it really did feel like a 'the police can videotape, but we can't' type of law. Yeah, there's an enormous amount of Tea Party motivation behind a lot of this legislation also. So, going to the who's-responsible question: this is enormous. 
So at this point in the state of US First Amendment law, I think that the website that's broadcasting, unless it's the same person who's doing the observation, probably won't be liable, because they're distributing rather than capturing the image. And that presents all kinds of problems. I think that model of First Amendment protection, where you don't go after the distributors, is under a lot of strain right now. You can see it in situations such as revenge porn, and you can see it in situations such as the mugshot cases, where a reporter has the option through freedom of information law to go and get all the mugshots, but then these mugshot sites use the mugshots to basically extort people, getting paid to take the mugshots down. So that system's under strain, but right now I think the website probably would not be liable. Then the question about the drone manufacturer is really, really interesting. That also goes to the idea of regulating through technological standards, and whether we should have technological standards, enforceable by law, that say: drone, don't record all the time. I don't analyze that under the privacy framework, but I think we're heading in that direction with a lot of other robotics regulation. Right. Yeah, I mean, the hammer, or the motor that's in the drone. Yep. Well, this is going a bit far afield, but the idea of treating robots as just a products liability question is one that actually works fairly well until you end up having emergence, right? And when you have emergence, you start looking back to this idea of whether you should be legally responsible, as the manufacturer, for building in emergence-checking tools, such as the Isaac Asimov three-laws-of-robotics idea. 
I was curious as to where you draw the line between the areas where we understand how the law currently works and, as we move along that spectrum towards robots, how you identify where the problem areas come from. The example being: if I'm just walking down the street with a camera taking photos, we understand the torts there. But then you can start adding in a GoPro or Google Glass, and as you take humans and add in more cyborg-type components, at some point you get to RoboCop, you're more robot than human, and then some of the things you were talking about become bigger issues. At some point, it seems like you're saying the traditional torts stop making sense, the ones we sort of know about if I were just walking with a regular old film camera. So I was curious where you think that line is, when it starts breaking down. Yeah. So I think there are two ways of looking at that question. One is to ask where there are gaps in the law, where robots are going to create something that we think is harmful but the law won't cover it. The other is: if you're going to create law that regulates robot-specific harm, how do you define when it is a robot harm versus a harm that's just sort of what humans do anyway? And both of those are really difficult. For the where-there-are-spaces-for-new-kinds-of-harms question, the drone legislation is a really good example. Ryan Calo a number of years ago called drones a privacy law catalyst, because they would make people realize all of a sudden, oh shoot, we have no privacy protection in public. 
And what's interesting is that the drone privacy laws came into place around the same time as the Jones decision, which starts to roll back the idea that we don't have privacy expectations in public. So there are gaps, and I hope to identify more of them. As I pointed out, there's a potential regulatory gap in the interaction between information privacy and spatial privacy, when you have a privacy tort committed by something that's governed under an information privacy regime. But for the spectrum question of when a human privacy problem turns into a robot privacy problem, I think that question is common across a whole number of robotics and law areas. And I'm trying to think of the features of robots that make them special. As I mentioned earlier in the presentation: the perfect memory, the human-plus sensors, the embodiment, the non-contextual awareness. All of those things, I think, turn something into a robot privacy problem as opposed to a human privacy problem. Yes. Yeah. So you can have your little drone be pleasant, and it's not necessarily visible. Notice problems are huge too. Yeah. So, and maybe this is also an answer to that question, I think for the question of authorship it could be interesting to look at copyright law; I'll just bring it up today. There's this distinction, at least in Europe, between computer-assisted works, where the copyright is actually owned by the person that is using the device, the computer, the software, and computer-generated works, which in Europe, except for the UK, don't actually receive any copyright protection, which kind of makes sense. But then you could also say that when you have Google Glass, it's actually assisted: the user is actually deciding what to record and what to look at and whatever. When it becomes generated, in that case, then it actually becomes a robot. 
It's no longer the user that controls it. But then, in the UK, computer-generated works do obtain protection, which is limited (it's 50 years), and the copyright goes to the producer of the software. By analogy, if we take that to drones, that would mean the recording made by a drone will actually be owned by the producer of the drone, and not by the person who purchased it and actually makes it go and film something, which is kind of strange. Yeah, it would be very, yeah. So the drone manufacturer as opposed to the, yeah. The drone manufacturer will theoretically hold the copyright in every recording that is made by the drone. You should write that piece. Yeah. One thing that I found interesting: there's a great recent piece by Annemarie Bridy on the AI author in the copyright system. And one thing I found interesting was that there was this debate over photography and copyright back in the 1880s, about whether photography counted as authorship, right? That ended up getting resolved as authorship, in the copyright sense, because of the posing that the person did of the situation. It wasn't like documentary authorship. But I do think that authorship, at least in the US copyright regime, differs from authorship in the First Amendment sense, because the creativity element that's required in copyright is not required in First Amendment law. So you still end up in the First Amendment with complicated questions of conduct versus authorship, but they don't hinge on creativity the same way they do in copyright law. Yeah. In terms of potentially fruitful and also brutally offensive legal metaphors, are you looking at all at the last time that this country had independent agents? Yeah. Is there a lot of useful doctrinal stuff for you there, in how slaves and what they created, or privacy and such around slaves, that you can repurpose or rethink here?
I will definitely do that. I think this is one of robotics' biggest metaphorical challenges: that might end up proving to be the most useful legal analogy, and it's terribly bad press. Following on that, I'm just wondering, what are some past instances in history where human qualities have been attributed to non-human entities, and would it be worthwhile to look at those non-human entities? Yeah, well, right. And exactly how they've gained rights and challenged or stretched our concept of legal regimes, in particular in regards to privacy and information. I'm thinking of things like religions, corporations, nation-states, nonprofits, or even animals: how have they gained this, how has that conversation and discourse evolved, and, of course this is happening much more quickly, but would that be useful, and is anyone doing that? Yes. I'm not sure who would be doing it, if so, but those analogies do come up again in other areas of law and robotics, so it's helpful. One of the other things that made me think, especially with the religion comment, was that in that same copyright article she talks about automatic writing and how it would be treated by the copyright regime. So the idea is that you're not an author, you are a psychic, and you sit there and whoever it is, the spirit, writes through you. So can you have claims for authorship under US copyright? And again... That's my dead mother's work; I've lived for more years, so I own it. Yeah, and again, in the copyright system judges have rather respectfully said: you do own copyright in the work, even though you technically didn't write it. Yes. I was wondering, when you have a patchwork of state legislation, how does that actually apply, given that the reach, field of vision, and operation of drones aren't limited by space as much as we'd like? And another thing: I was wondering if you could talk about function creep.
You've talked mainly about civilian drones, but I was wondering about function creep when they become other things. Okay, so to the patchwork of state legislation: I looked a little bit at airplane law, and there's no express preemption, in normal airplane law, of the application of state torts to activities that take place on airplanes or when airplanes crash in a given state. So you may end up with a situation where, basically, which state's law applies has to do with which location you're in at the time the tort occurs. The biggest problem this presents for using state torts to govern drones is that it's not always easy to tell where the tort occurs, and it's not always easy to trace the ownership of the drone back to whoever it was that caused the harm. So this has prompted, and this is a whole different kind of drone privacy and authorship issue which I should have included in the presentation, discussion of drone license plates and registrations, with the idea that you get your drone only if you say who you are, and only if there is some sort of GPS record of where the drone has been. All of which we think is totally reasonable when it's applied to airplanes, mainly for safety reasons. But if you start looking at drones as recording devices or authorship enablers, then you get into anonymous speech questions, because having the police be able to tell whose drone is hovering over the protest, and where that drone has been, and where it comes from, might not be the greatest idea. So one of the, unsurprisingly, smartest things that I heard said in this area came from Ed Felten, who suggested some sort of system by which you keep the identity in escrow, and then you come back after a series of legal process and get the identity afterwards, but you still have the identity recorded somewhere. Then, going to function creep: this is the big problem for, like, all of privacy law.
If everybody saw the NSA Angry Birds hacking issue that was on the front page of the Times, I don't think this is a problem that's unique to robotics. I think this is something that US privacy law has to figure out in general. I just want to point out a paper that compares the status of robots to corporations legally, but I don't think it was written by a lawyer. I don't know if you know this; I forget who it was. I can look it up and send it. Yeah, that'd be awesome. Thank you. A couple of things came to mind in terms of potential parallels. As you were talking about the interaction between intrusion theories and First Amendment rights and disclosure of information gathered, it struck me as a parallel to some of the hacking cases that have been coming through lately, like US v. Auernheimer, where there was this double level of prosecution: first for accessing what was allegedly a protected computer, and then a separate enhancement of the penalties in the case for disclosure of the information gathered, with potentially different First Amendment issues arising at each stage. Also, since you were discussing the First Amendment right to record, I'm guessing you've probably done a lot of looking at state wiretap statutes. Yes, that's where that whole section comes from. What I find interesting about those cases is that, with all due respect to my many friends who are reporters' attorneys, I think there's a lot of overstatement of how broad the right to record actually is. These are really only recent cases; they start, I think, around 2008 or 2010. They split over whether there is a well-established First Amendment right to record. Two circuits found that there is a well-established right to record matters of the public interest, especially when it's a cop trying to arrest somebody. One circuit found that that right was not well established, and then there was a sweeping decision, I think it was Seventh Circuit.
First Circuit and then Seventh Circuit. But was ACLU v. Alvarez Seventh, or was it Eleventh? I don't remember. Alvarez was Seventh. Yeah, so in that one the court made some pretty sweeping statements about there just being a First Amendment right to record, but then there's a footnote that says we're not considering any wiretap laws that deal with private areas. So there's a big push by the press in general, or media attorneys in general, to have that right to record apply everywhere, and I don't think that's where the courts have taken it yet. Right, and those cases largely focus on recording the conduct of public officials in some aspect, not recording general public activity. Yeah. I'm curious about your normative beacon here: in this intriguing welter of rules and technologies, what the aspiration is, what you would hope to see. It seems to me there are at least two general possibilities, and I don't yet have a sense of which one you're leaning toward. One possibility would be to hypothesize that people are reasonably stable creatures who have a correspondingly stable need for a certain amount of privacy, and as the world changes we want to adjust legal rules so as to ensure that they keep or get that. Another possibility is to imagine people as much more malleable creatures, and that it's inevitable, and not so terrible, that they are going to change as the world changes. Then the goal is much more limited: it would be some mixture of transparency and speed, I guess, so that people aren't surprised and can adjust as the world adjusts around them. But the end state might be radically different, like a world in which people have more or less comfortably morphed into settings in which they expect to enjoy far less protection against certain kinds of intrusion. It would seem that choosing among those routes would make a big difference in terms of what legal regime you're arguing for. I don't know which one you're going to do.
Yeah, I think that was intentional; there's a little bit of hiding my hand. So I think the answer is: it will depend on what the particular privacy violation is. There are some things that people will need to adjust to, and there are some things that may be fundamental to the protection of self-development and to democratic development. I'm not at a stage in the project, in my literature gathering and reading, to say whether I'll try to identify those different categories, but I do think that normatively there are areas where my intuition would be different depending on how they relate to both the development of selfhood and the development of democratic values. Yeah, I have to pick up on that. I was having a conversation with a friend of mine who said, look, all that's happening with all of this is that we're back in the pre-industrial village, where everybody knew everything about your life all the time: when you were having sex and whom you were having it with, where you went and where you were expected to be, if you were going somewhere what route you were taking, and whom you spoke to. And I'm thinking, but it's different now, and I couldn't put my finger on how exactly. It's not just that people who have power over you, or the power to damage you, might have access to this knowledge, because that happened back then too; the priest might find out something, or your, you know, overlord might find out something that could then be used against you. So what is the difference between that zero expectation of privacy and now? Is it, you mentioned the progress of democracy, maybe that's it, I don't know. Yeah, so again, a very big question. I think this is one of the reasons why I came into this project with that sort of spatial definition of privacy attached to it, because I think that even in pre-industrial contexts, with my acknowledged limited knowledge of the history of pre-industrial privacy, there was
still boundary management; you just used different tools to manage boundaries, right? It's not as though you lived out in the public street and had all interactions in public. There were still tools that you were using to manage who saw what when, and there might be leaks between places where you managed it, but they weren't perfect leaks and they weren't perfectly porous. When you're thinking about privacy, I think, especially since we all live in a global village: in different cultures privacy means very different things. I come from a culture that doesn't even have a word for privacy, and when I speak to people it's much more common for me to stand much closer, where Americans become much more uncomfortable. So privacy is not necessarily fixed; since we live in a global village, it changes with access to other cultures as well. Right, so I need to determine whether I'm dealing with robotic privacy issues only from a US perspective, and I think, at least right now, the scope of the project is definitely a US scope. Just another shocking example: it's totally okay in my culture to ask how much money you make, but you may never ask where the bathroom is. Not only anything sexual, but where is the bathroom; if you come to somebody's house, you cannot say, where is the bathroom. Yeah. I want to take it back to robots, and to the idea that you said they're embodied software, right? I think what we're discussing is that their embodiment makes them go places, right, so it's moving eyes and ears. But the thing with surveillance and robots, it seems to me, is that their embodiment can also make them act upon people, right? In the surveillance debate that we're seeing, we're seeing collection, but we're also seeing back doors. The same way people are realizing that surveillance can have agencies activate their camera at a distance, I guess that surveillance and robots can also, you know, if we accept the idea that back doors, which we have legally treated in the past as
being, you know, normal and legitimate law enforcement tools, that would mean that someone can take control of the robot at a distance, mostly the government, and then this embodiment can act upon you in ways in which the harm goes beyond, you know, your cultural idea of privacy, right? If you're in a self-driving car and someone takes control of the car at a distance, it's not about, like, you know, well, the embodiment has a very complicated action, and I wanted to know if you were working on these ideas of back doors. Yeah, so the back door thing, I think, is interesting. I think about it even more from the perspective of, again, the third-party problem: I may be in an interaction with my household robot thinking that I'm interacting with just what's there, but actually what's happening is that I'm interacting with a whole, you know, commercial complex of actors, which probably in turn has a relationship with the government. So yeah, I think the back door idea plays into that too. Embodiment in the self-driving car context, again, I think ends up bringing up more traditional physical tort issues than privacy tort issues. But you can imagine, with a household robot, that embodiment plus, like, third-party or back door control can lead to the exploration of spaces, or the recording of spaces, where you otherwise would not give access to those entities. What about robots being hacked, and privacy harm coming from hacks and not from the established structure? Yeah, so there was an FTC consent decree, which was, like, the first of the internet of things, that concerns, I think it was a camera that people had in their home that was advertised as secure but then was hackable. But the liability there stems, again, from info privacy principles, and in the U.S.
our enforceability of the info privacy regime comes from that security camera making those promises to begin with. So the reason the FTC was able to go after them is that they advertised themselves as being secure and in their EULA said, you know, we're secure. But if you have the robot, again, whose terms say you consent to everything, including hacks by third parties, I don't really know how that's going to be enforceable. Yeah, two comments on privacy. It might be interesting to look at Japan; it's always interesting to look at Japan. My understanding of Japanese culture is that privacy is enforced by the language and the culture, not by physical space, because you have paper walls and you can hear everything that happens. And it's also my understanding that there's very little public space in Japanese culture: there are no town squares, for instance; there are marketplaces, but there are no town squares. If they need to do something, then, like a Japanese house, which can open up all the way so that you can see through it, they open up spaces in the public so that they can gather together for festivals and whatever. The second thing is, there's a book from the 70s called Should Trees Have Standing?, which is a legal argument from the environmental perspective. Just recently, I think it's Ecuador which has legally given some ecosystem services legal standing, and there is at least one country, I want to say India, but I'm probably wrong, which has just announced that they're allowing dolphins non-human person status. So there are some things that overlap into the environmental field which might be interesting. Now, my question: recently we've had at least two instances in the news of Google Glass, so the cyborg problem. You had somebody who was driving on the street in a car wearing Google Glass who got pulled over, right? And evidently this person was not using the Glass at the time,
and so it all went away, but there's going to be a time when it's not. There was another person who was seated in a movie theater, right, wearing Google Glass because they needed it in order to watch, and ICE and the, uh, MPAA came in and said, out of the theater, we think that you're filming the movie and we don't like that, and they put this person under surveillance, you know, questioning, for like a few hours; he or she was not doing anything. So what happens with the cyborg problem, where you amplify yourself with the technology that you're wearing? Steve Mann has done a lot of work on this, going back to the 80s even, when he was walking around as a cyborg here around Cambridge and going into places which were filming him, and then he would say, hey, you know, I've got my glasses on and I'm filming you; why can't I do that? So the cyborg problem I'm going to translate into an enhancement problem, and there has actually been a good amount of doctrinal development on it; whether that doctrine has landed in a good place or not, I'm not sure. Initially, I think there's a Second Circuit case that said that the use of telescopes to spy into the home could be prohibited, because you're using an enhancement that people don't normally have. Then you have the Supreme Court thermal imaging case, which mentioned at one point that thermal imaging is, you know, privacy-intrusive, but publicly adopted, regularly used technologies, such as telescopes, are not considered to be privacy-violative, because everybody expects, you know, that's what happens. And I think the same thing basically happened with the plane cases, where there are enough planes flying overhead that you can't really expect to have privacy from things above. So that's getting pushed in interesting ways, but I think not because the enhancement itself is changing; rather, the capability for how long you can store it, how you can correlate it with other information, and from where it takes place
gets more interesting. And I don't know whether we'll end up doctrinally reverting to protecting against enhancement, even if it's in regular public use, or if we'll stick with this idea that if everybody's using it, then there's no protection necessary. There's actually a really interesting thing that happened in New York recently. There was a photographer who was using a telephoto lens from his apartment building to take photos of people in a glass-walled apartment building across the way, I think on like the 13th or 14th story. His name's Arne Svenson, and he took these beautiful photographs, none of which individually identified the people: they show the back of their head, they show their children, they show, you know, their dogs. And they went up in a gallery in Chelsea; they're beautiful photographs. And he was actually sued for violation of privacy, and the New York courts found that he was protected by the First Amendment, because you have your giant glass walls and it's reasonable that people might be using telephoto lenses from the apartment building across the way to take photos of you. So that's an example of that doctrine in action right now. Yeah, to continue on this point: when you were presenting, you were saying that there are some situations in which a drone will not actually be infringing upon privacy, because there is no expectation of privacy in a public place, etc. Or, for instance, in the case of household robots, if I actually consent to the privacy policy of the producer or the seller, then as such there is no infringement on the privacy of the user. But as the data actually gets aggregated, etc., then maybe it could actually harm privacy. And so I was wondering, on which kind of body of law would you rely in order to actually identify those torts, if you think privacy should actually be extended, or can you rely on a different body of law? Right, so those
were two areas where I said I think the existing doctrine is not going far enough toward protecting where we think privacy harms are going to arise. You can see it starting to change, with the drones, with the Jones case, but you can also, I think, see it extending with self-regulation on the part of some robotics manufacturers. One thing I found really interesting is that there is a robotics company that makes a conference robot called the Beam, which is a way for you to remotely attend a conference, and they decided not to have the Beam record anything. If you're the user of the Beam, you can, you know, with your telepresence, take a photo, take a screenshot of your screen and your experience of it, but the robot itself is not recording anything. So what we might end up finding is that in places where there's no legal protection, companies realize that there will be a rejection of the technology unless they end up using technological protection of some kind. So there is actually no body of law that could be relied upon in order to argue that this could not be done, the aggregation, either where there is consent or where there is no expectation of privacy? There's a complicated answer to that. Basically, if I am in public and somebody takes a photo of me, and takes a series of photos of me, and ends up aggregating that information, then the only thing that suggests I might have a reasonable expectation of privacy under the Fourth Amendment is the concurrences in US v. Jones, and it's not clear how those are going to end up extending into the privacy tort space. So I'm working on a piece with Kevin Bankston, who's down in DC, on whether you could take the Fourth Amendment ideas of reasonable expectation of privacy and port them into a privacy tort context. But no, there is no data protection regime. Yes. You talked about robots as embodied software, but implicitly, in almost all of our conversation, it's been
embodied mobile software, and I wonder how important that is to you. I'm specifically thinking about smart home tech, and in particular the interesting challenges of smart home tech where companies get acqui-hired, like Nest just did, and suddenly, you know, you invited Nest into your home and now Google has bought their way into your home, and those kinds of questions. Or is it more important to you, the ability for the robots to go places? So I think that is a very important cyber privacy problem; it's not necessarily a robotic privacy problem, but the line between the two is not definite, it's a spectrum. If it can turn your lights on or open your doors, like, when does that home turn into a robot? Yeah, and what if Nest starts, like, responding to your affective reactions, right? So it sees you and it communicates with you, and it gets the anthropomorphic part of robotics. So, we had a conversation, again, Ryan Calo is writing an amazing piece that's sort of trying to do for robots and law what the early cyber law debates did for cyber law, to establish what makes robot law interesting. And in the conversation that we had on this piece a number of months ago, in a group of people who were at the forum, a number of people came up with the idea that you could have a certain number of characteristics that make something into a legal robotics problem, as opposed to one definition that creates a legal robotics problem. I think that's the direction that I'm leaning in here: like, here are 10 things, and if you hit seven of them, right, then you have a robotics problem. But what those things are is, again, a work in progress. Yes. What I think is that the notions of privacy in human relationships and the notions of privacy when we talk about robotics or drones are completely different, because, you know, we want to create the robots with high expectations, with high capabilities: what we cannot do, they have to achieve. And in that sense,
you know, but at the same time we want to attach the privacy notions of our own things, you know, to that, and that may not work. So we want a super-species, and at the same time we want to treat it as, you know, a subordinate species. So I think, you know, we need to look at different sets of rules that will be acceptable to human beings and at the same time allow for better performance of the robots. Are you taking up that kind of, you know, do's and don'ts, or what kind of alternative rules can we think of in this new technology age? That is one thing that, you know, I'd like to know from you. The second aspect is about the liability part: whether it is the manufacturer, whether it is the operator, whether it is the person who is, you know, shooting the photograph, who is the person liable? I think traditional tort law, you know, already has the answer; environmental law jurisprudence already has it, and tort law also has the rules: anybody who controls the instrument, anybody who instructs it, anybody who has the operation, who gives the instruction, is the person who is in charge of that, okay? So it's not the manufacturer, and it is not the person who is selling it. If I am having this drone and I'm giving instructions that it has to operate in a particular manner, then I'm liable. That is, unless your instrument is defective, in which case the manufacturer may be liable, but that's my relationship with the manufacturer; that's a different issue. But it could be somewhat shared: you start a program, you don't really know what it does, you know, you turn it on and hit go, and that way, who's really in charge of it? Is it the manufacturer, the programmer, some aftermarket vendor? You know, for example, the mobile phone. See, the mobile phone comes with several instruments: it has a recorder, it has a camera, and I can also talk, and
everything is involved in this, and somebody has enabled this, you know, instrument with all these features, but it is up to me to use them or not. That's one thing. And secondly, when you use it, rightly or not: if I violate your privacy by taking a photograph without your consent and publishing it, uploading it, then I'm liable, not the person who has, you know, put these features into this. But that's because the default for your mobile phone, most of the time, at least as we understand it, is that it's off, so they're giving you that decision-making capability. Part of the reason this gets so interesting is that if we have an always-on robot, then the liability would shift back to, and possibly the authorship would shift back to, the manufacturer, and not to the person who is operating it. And many robots are going to end up being always-on, because the sensing capabilities are what make them able to navigate in space. If it happens without the consent of the operator or the owner, is it so? So I have a robot, an always-on robot, and I know that it is always on. Yeah, then you get into the EULA, the end-user license agreement, issue again, of whether that ends up getting in the way of my responsibility, or my ability to claim a privacy harm. So I do think it's more complicated, as I understand it. Yes, but the first question, about the new legal standards that you are looking at, I'm not sure I fully understood. So, the idea that we want the super-species but we still want to attach our privacy notions, our intuitions with each other, to them: I completely agree. I think that the super-species that we're creating doesn't get to dictate what the privacy expectations are, and you have different groups of people, with different levels of social expectations, who are developing the super-species and expecting the privacy, so then we negotiate that through democratic fora. Thank you all; if there are no further questions, I really appreciate it.