Good afternoon. I'm Lisa Shay. I teach electrical engineering at West Point, and I'm here with my colleague Greg Conti, who teaches computer science at West Point, and our friend Woody Hartzog, who is a lawyer. He's an assistant professor at the Cumberland School of Law, which is part of Samford University. He's also an affiliate scholar at the Stanford Center for Internet and Society, and he's worked at the Electronic Privacy Information Center as well. We're going to talk about confronting automated law enforcement, which is any use of automation or computer analysis within the law enforcement process. These are our own ideas and our own thoughts, not those of our employers.

The whole idea is that we like living in a free society and we don't want to live in a police state. And if you think about it, it's always good to obey the law. That's part of our value system; that's part of what makes society work. But think about what happens if everyone obeys every law rigidly, all the time. This is a video that many of you may have seen, where a group of students at Georgia Tech tried an experiment. They got a bunch of friends together and drove around the beltway around Atlanta at exactly the speed limit, spread right across the entire highway. And you can see the traffic jam that's building up behind them. They were happy, too. And the people behind them, you can imagine, were just thrilled to be going exactly the speed limit. It could actually have been a very dangerous experiment; there were people driving on the shoulder trying to get around this.

So as I said, automated law enforcement is any computer-based system that uses input from the unattended sensors all around us to algorithmically determine whether or not a law has been broken, and then to take some kind of action. What we want you to take away from this talk are three things. One, the networked technologies exist right now.
Second, if we aren't careful in paying attention to how these systems are put in place, really disastrous consequences could ensue. We could end up living in that police state we showed earlier. And third, you in this audience are in a unique position to help prevent this. You have the technical knowledge, you have the networks, and you've got the skills and abilities to see what's going on and to ask the right questions of the people who are trying to implement these systems. So over to you, Greg.

So what leads us to this conclusion? Well, we argue the precursors are in place now, and this is a natural extension: we can look at what's going on now and project into the very near future to see where it's all heading, and hence the motivation for our talk, to try to generate some interest in deflecting the trajectory of this. Just imagine the sensors in your home, the sensors on your body, the sensors in your body, the sensors in your car, the sensors in your community. They're proliferating at a massive rate, and that creates enormous data flows, increasing views on our lives from every angle that can be sampled by a sensor. And we're seeing increased diversity and sensitivity of sensors. Here's an example of someone who had a medical test that injected a radioactive compound into his body for a medical reason; driving home, he sets off a radioactivity sensor in a police car and gets pulled over. This is actually true; we have the links in the slides. That's the type of thing. So we're seeing increased diversity of sensors. We're seeing sensors becoming mandatory, as in the United States trend toward mandatory black boxes in cars to record driving data. We see increased mobility of sensors, and alongside that we see the transfer of military technology, such as drones, into law enforcement roles. Probably one of the largest areas of concern is the mobile device that we carry in our pocket, which discloses our location.
It's replete with sensors and high-speed network connectivity. So what I'm trying to paint here is a portrait of where we are now. All these precursors are in place. If you examine them individually, okay, maybe you don't see so much, but if you put them all together, something larger and more concerning emerges. Take the idea of connected cars and OnStar; sometimes you can't get the car, particularly a rental, without it. It discloses your location over a full-time connection, and if they can start your car remotely, presumably they can turn it off remotely as well. We're losing control of our technology, and there's a great quote here from Cory Doctorow. The idea is that more and more we're getting closed-source technologies where it's illegal to lift the lid and look inside the firmware or the software. General-purpose computing is under attack, and Cory has a great talk on that. Highly recommended.

So we've got these data flows. But once you have this data, what's a key component? Identifying the people who are potentially the subjects of this law enforcement system. Obviously there are current advances in facial recognition systems, alongside mandatory biometric databases being constructed at the nation-state level, such as in India, where over a billion people are being enrolled in such a system. Now, I think we'll admit that facial recognition isn't there yet; it has its flaws. But hybrid systems are emerging to allow identification of individuals. This is from identifyrioters.com; it's a great time to be alive when there's a website called identifyrioters.com. This is from the Vancouver riots, where they are trying to solicit crowdsourced identification of people who were allegedly involved in the riots.
There are even business models where, say, convenience store camera monitoring is being crowdsourced, with small rewards given to the citizens doing the watching. So what can't be automated can be combined into a human-machine hybrid system. And as we look to the future, has anyone seen Google's Project Glass video? Even better, have you seen the parodies, where they're wearing the glasses, get a text message crossing the street, and get run over in the crosswalk? You should definitely go to YouTube and look up the parodies; they're better than the original. But the bottom line is, if this actually comes to pass, there will be millions of people wearing always-on sensors, collecting every facet of your life, or at least of life on the street. And it's then a matter of corporate America or law enforcement getting access to that data.

We also see a trend of analog systems being converted to digital systems. Analog systems and traditional law enforcement are moderated by their human-in-the-loop, labor-intensive nature. But as things become increasingly efficient, you can have automated law enforcement with unprecedented rigor. And bonus points: what is this license plate from? Back to the Future II, yes. That's why I love DEF CON, because you know someone will come up with it. We're seeing increased capability to analyze these data flows. Here's a system that claims to be capable of tracking 32 vehicles across four lanes. What we see emerging on all major highways is the ability to track every car in real time: identify the car, identify its speed and other activity. We also see other sensor technologies being developed, such as wide-area motion imagery systems, which can do long-term monitoring, over the course of hours, of the paths that individual objects, people, and cars are taking. And there's the very aptly named Persistent Stare Exploitation and Analysis System.
And then, of course, the cost of the technology is dropping. Location tracking, so you can track your spouse or children, is getting cheaper on a daily basis, to the point where it's almost disposable. And predictive policing, the ability to take the data you have now and project out into the future when and where crimes will occur, is actually reality. We've seen this before. Where have you seen it? Minority Report, yes. Okay, we're not saying there are precogs involved, but it's definitely out there, and we have links in our slide deck to where it's actually being done.

And of course there are lots of interested parties, because a lot of this comes down to who is incentivized to employ these systems, and who is incentivized to constrain them. There's lots of interest across the law enforcement spectrum and from industry, because there are benefits to this, there are certainly dangers, and there's certainly financial advantage. For those of you who live in New York City, and we live an hour north of New York, apparently in the near future you will no longer be able to purchase 32-ounce sodas, and that points to a trend of well-intentioned officials who might like the idea of broadly enforcing the law across the populace. And historically, if you look back, there are certainly law enforcement agencies with, shall we say, strict enforcement models. Some would call them speed traps, such that the American Automobile Association has erected billboards outside some of these towns to warn motorists. So there is clearly the opportunity for abuse in automating some of these. And you hear the term quotas, and most law enforcement officials will say, well, quotas don't exist; we never tell our officers to enforce a quota system to up the numbers for a given month.
Well, we argue that they do. I'm sure they're not supported officially, but here's an example: the New York City Patrolmen's Benevolent Association has actually created a report form for officers to use when they've been asked to enforce a quota. So these are all trends pushing this forward. Who's seen bait cars? Right, they leave a car on the side of the street, someone decides to get in, the keys happen to be in the car, they drive off, and a few blocks away they get locked in the car and arrested. So these systems can be used in many, many innovative ways, and as we look to the future, who knows what type of bait could be used: pleasure-model robots, or who knows what. And then there's out-and-out illegal enforcement. We don't want that to happen, but the power that these systems provide potentially allows for illegal enforcement. And I want a picture like that on top of my car, just for the record. But clearly certain regimes will abuse these systems if they have the power to do so.

And this is all in the context of citizens who want a free lunch. This is from SANS FIRE, where a bunch of security professionals were literally offered a free lunch in return for their personal information. Similarly, social media is part of this, with people disclosing so much information in a largely public way, and it has not gone unnoticed by law enforcement entities. Here's an example of college students who knew their local campus police were monitoring Facebook for underage drinking and decided to stage a party that they bragged about online. It turned out to be just cake and soda. When local law enforcement arrived, they were not pleased. But still, the idea here is that social media provides a data flow.
And this isn't all pie in the sky, something that might happen someday. We already see successful prototypes and business plans now. If any of you have, for example, driven I-95 from the Washington, D.C. area north toward New York City, there are sections of the highway that have cameras at regular intervals. So we're not making this up. And the law itself lags technology, which is progressing at such a rate that the law just isn't there yet, and probably never will be. So where's all this heading? Okay: we've got enabling technologies. Check. Sensors everywhere. Check. Promiscuous data sharing by citizens. Check. Security and financial incentives, well-intentioned law enforcement, and a complacent citizenry. Check, check, and check. And then the law lagging technology, and successful prototypes. So all the precursors are in place, and we have to look at where this is all going and make sure we deflect it onto a trajectory that makes sense, because otherwise, frankly, I don't think this is going to end all that well. That's what you get for having someone else's keyboard.

Okay. So how does the law become involved in all of this? Greg just talked about how the technology is in place; the sensors are there to record our activities. But that's not the only thing that can be automated. When we're talking about automated law enforcement, we're talking about a complete loop: not just sensors recording activity, but storing that activity, processing it, and making a decision whether or not to punish someone based on it. And so we've actually got three different levels here: the subject who is surveilled; the law enforcement agency, at various points of which you could automate decisions; and finally the judicial system, which decides whether or not to mete out some kind of punishment. I'm going to turn it over to Lisa to talk a little more about this diagram. Okay. Granted, this is a busy slide.
It's on the CD that you got, so you can look at it in more detail later. But briefly, I'm going to go over the areas where this could be automated. Looking at the upper left corner of the diagram, a subject going about his or her daily life is surveilled by an automated system, which, à la Minority Report, might have a predictive module in it that says: subject is about to commit a crime. Depending on the scenario, the system could potentially stop or prevent that crime, à la Minority Report, or in another scenario could warn the suspect: hey, what you're about to do is illegal. Then the suspect has to make a decision to commit the crime or not. And if the crime is in fact committed, there's some post-processing by the surveillance system, which determines whether a crime was committed or not. Somebody committed one, but does the system catch it? If the system doesn't catch it, that's a false negative. An error has occurred, and in our personal view a system that doesn't catch a crime has problems, but that's not the most serious kind of error. What's worse is if the person says, you're right, I'm not going to commit this crime, and then the post-processing algorithm decides that they did anyway. That's a much more serious error. We call that a false positive, and that's where some of the real danger lies. Then, if a crime was detected, the law enforcement system decides whether or not to prosecute. In a normal, everyday, human-based system, the police officer on the beat has discretion. But in an automated system, this becomes embedded in code, and code is a deterministic process: it will do the same thing every time, and there's not going to be an opportunity for discretion. The system then, if it decides to prosecute, hopefully will notify the person that they've been prosecuted.
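The deterministic, no-discretion decision process Lisa describes can be made concrete with a short sketch. This is purely illustrative Python, not code from any real enforcement system; the speed threshold stands in for whatever hard-coded legal test a vendor might embed. It shows how the same rule fires identically every time, and how the four outcomes she mentions, including the dangerous false positive, fall out of comparing what the sensor reports against what actually happened.

```python
# Hypothetical sketch of the automated-enforcement decision loop described
# above. The threshold rule stands in for any hard-coded legal test: it runs
# the same way every time, with no officer discretion in the loop.

def detect_violation(observed_speed, limit):
    """Deterministic rule: flag any sample over the limit."""
    return observed_speed > limit

def classify(observed_speed, actually_violated, limit):
    """Compare the system's decision against ground truth."""
    flagged = detect_violation(observed_speed, limit)
    if flagged and actually_violated:
        return "true positive"    # crime committed and caught
    if flagged and not actually_violated:
        return "false positive"   # the dangerous case: flagged though innocent
    if not flagged and actually_violated:
        return "false negative"   # crime missed -- bad, but less harmful
    return "true negative"

# Sensor error alone can turn an innocent subject into a "violator":
print(classify(observed_speed=57, actually_violated=True, limit=55))   # true positive
print(classify(observed_speed=56, actually_violated=False, limit=55))  # false positive
```

Note that the false positive in the last line comes entirely from sensor error: the subject did nothing wrong, but the deterministic rule has no mechanism for doubt.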
Again, one of the things we're looking for, if some sort of system is implemented, is transparency and notification, so that you're not automatically punished without even knowing what you did wrong. If the suspect is notified, that suspect then has the choice to contest it or not, and in the judicial process, if they contest, they can either be acquitted or not. So this whole system sets up a feedback loop. In the normal situation we have now, that's a fairly slow loop; there's time for reflection, time for consideration of all the facts. In an automated system, we fear that the loop gets so fast that the person is punished before they even realize what's been done to them.

There are lots of opportunities for automating the punishment side of things. The top row, and again, this is a small slide, but it's on your CD, shows things that are currently in place. The bottom row shows technologies that could be in place in a couple of years. The middle row shows what could be done now with the technology if there were the will. And the range of punishments runs the gamut from simple notice all the way to execution. We'll go through a couple of examples. In Virginia and many other places, there are websites that show you where cameras and enforcement systems are in place. There are also services that will send you notices or emails from warning systems. And I think we've all been driving and seen one of these radar speed signs. It's an automatic feedback system: you can see how fast the police think you're going, you can see what the speed limit is, and then you can make a decision. Now, this isn't Grand Theft Auto; I don't recommend trying to make that delta as large as possible for extra points. If any of you saw Bruce Schneier's talk yesterday at Black Hat, he talked about how one of the mechanisms that helps a law-abiding society stay functional is that people are concerned about their reputations.
And we all want to protect our good reputations. What some law enforcement agencies are starting to do is put pictures of people who have committed a variety of crimes on websites, and you can imagine how easy it is to take a police blotter and write some very simple scripts that put all that information up on a website daily. In a more detrimental way, there are automatic systems for issuing citations, points, and fines. I know lots of people who've gotten automatic tickets from red light cameras or speed cameras or things like that. And then, next to finally, there are systems that could arrest people. The bait cars are one example, or robotic systems with a variety of mechanisms on them. We can automate the prosecution process as well. Here's an example of some source code from an open-source camera security project; again, it's just code that makes a decision as to whether or not a crime has been committed. We have systems in place already for confinement: there's a whole variety of GPS tracking devices. Greg showed a couple of examples, and your cell phones are GPS tracking devices as well. There's a business model for outsourcing the enforcement of certain types of punishments. For those of you who are interested, the website says GPS tracking is now available for only $3 per day. So run out and get yours; track your family members for only $3 a day. Yeah, well, I can think of my children as a good use for this too. And then finally, the ultimate punishment is death, and there are already examples of automated systems that have lethal weapons attached. The ones we know about right now, by and large, have humans in the loop, but that technology doesn't necessarily require humans in the loop. All right. So clearly there are advantages to this, but there are certainly disadvantages as well.
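The open-source camera project Lisa mentioned a moment ago illustrates how "did a crime occur?" gets reduced to a comparison in code. We can't reproduce that project's actual source here, but a minimal sketch of the kind of decision logic in question, frame differencing with a hard-coded trigger threshold, looks roughly like this. Everything below is a hypothetical illustration, not the project's real code:

```python
# Illustrative frame-differencing trigger, the kind of logic camera projects
# use to decide "something happened here". The threshold value is arbitrary,
# and that arbitrariness is the point: a constant in somebody's code ends up
# deciding what counts as an incident.

def frame_delta(prev_frame, curr_frame):
    """Mean absolute pixel difference between two flattened grayscale frames."""
    total = sum(abs(a - b) for a, b in zip(prev_frame, curr_frame))
    return total / len(curr_frame)

def motion_detected(prev_frame, curr_frame, threshold=10.0):
    """The 'prosecution' decision: trip whenever the scene changes enough."""
    return frame_delta(prev_frame, curr_frame) > threshold

quiet = [100] * 64                  # 8x8 frame, flattened, nothing moving
busy = [100] * 32 + [180] * 32      # half the pixels changed

print(motion_detected(quiet, quiet))  # False
print(motion_detected(quiet, busy))   # True
```

A real system would add object recognition, plate reading, and so on, but the shape is the same: sensor input in, threshold comparison, binary verdict out.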
And it really depends on your perspective: are you the subject, the law enforcement agency, or the judicial system? So we're going to roll through some examples. Some would argue that these systems provide a more secure and safer society. They clearly have the potential, at least in theory, to offer increased efficiency, and for some there will be financial incentives. What really underlies this, I believe, is incentives: who is motivated to employ these systems. They have the potential to reduce bias, and depending on where you're coming from, that may be a good thing. For example, there's some great research, literally called "driving while black," that shows bias among police officers. There are also stop-and-frisk activities in certain parts of the country, and there's an ongoing debate about crack versus powder cocaine, really very similar substances, and the punishments associated with possession of each. There can be protection from abuse, or these systems can themselves be abused; it depends on your perspective, and if you go back and look at the history of various countries around the world, none is without its blemishes.

And certainly there are false positives. If any of you have seen ED-209 in the movie RoboCop: they're demonstrating the robot, an executive points a gun at ED-209 for the demonstration, and the robot says, please put down your weapon. He puts down the weapon. The robot keeps going: you have 15 seconds to put down your weapon. And after time runs out: I am now authorized to use physical force. Things don't end so well. There are also false negatives, and this is a classic example from Google Street View. Now, this individual could be doing exercise or could have lost his keys, but the point is that a crime could be occurring and these systems could miss it. We argue that's probably better than a false positive in most cases. A key component is the identification of the people in the pictures.
Well, historically we have had issues with improper or incorrect identification. A classic example is Senator Ted Kennedy getting held up at the airport for appearing on a terrorist watch list. The result of all this could be a less compliant populace, because we as citizens have to agree: it's a contract. We have to agree to support the law, to believe in the law on some level, and if you take that decision-making out of people's hands, there can be problems. And there's always the risk of a disproportionate response, the system responding in a way that's inappropriate. This is a Texas speed trap motivational poster. A chain gun. Clearly these systems have the ability to enforce social control on a large scale, particularly as we move forward, and it really depends on whether your local politicians want you to have 32-ounce sodas or not, or a variety of other activities they can enforce with automated means.

Some won't like the loss of power. Yes, that's Batman. It turns out Batman was pulled over for expired plates, but he was actually going to a children's hospital, so they let him go. It was a good picture, so I thought I'd include it. But law enforcement and others, and I would assume some in power, perhaps like the professional courtesy that the current law enforcement system provides to them. Well, they might not enjoy that loss of power if you have an unbiased automated law enforcement system. There's a nice example of a Montgomery County, Maryland police car photographed speeding past a camera with an extended middle finger out the window. The unions and other police-related organizations will certainly have something to say, because efficiency could very well mean lost jobs. So there are many questions necessary as we move forward in this area. I'll be followed by Woody. So what can we do? Chances are that we're not going to see full automation overnight.
It's going to happen piece by piece: a little bit of automation on the surveillance side and maybe a little bit on the decision-making side. And we think that the appropriate response is to start asking questions now, to start demanding answers, and to insist that if a system is going to be implemented, it be implemented responsibly. There are some things that need to be attended to if that's going to happen. For example, the method of implementation: are they going to use the sensors we're already carrying around on our bodies, or are they going to mandate that everyone install a government-brand sensor? Control: who gets control over the enforcement system? Is it going to be low-level administrators? Is it going to be third parties, perhaps the software vendors who create the code? And if so, what kind of influence are they going to have over the decision-making process? Because ultimately, if they're the ones writing the code, they are the final stop and they're interpreting the law, and there are some potential problems with that. Legal integration of algorithms: are we going to reach a point where there is incredible incentive to personalize the law? For example, if I'm a very good driver, perhaps I can drive ten miles an hour over the speed limit because I've proven myself trustworthy, whereas someone with a horrible driving record perhaps only gets five, and the system integrates all kinds of algorithms to determine that. Do you stop the violation before it happens, or do you wait until after it happens and then issue a fine? That may seem like a simple question, but I think the political pressure could be great, once these systems are implemented and entrenched, for someone to say: if you can stop the violation of the law, why wouldn't you stop the violation of the law?
But I think that there are significant problems with preemptive enforcement of the law. System error and malfunction: how much error are we willing to tolerate in a system? Because of course no system is without error, we've got to make the decision: if it's only got a five percent error rate, is that good? Or ten percent, or fifteen? And we need to determine who makes that call. And one of the big questions, I think the one that goes to the heart of our talk today, is whether we want perfect enforcement of the law. I would like to go ahead and say now that we need to dispel the notion that the goal of law should be to achieve perfect enforcement. I think that to be effective, laws need only be enforced to the extent that violations are kept to a socially acceptable level. We don't enforce jaywalking one hundred percent; we maybe enforce it point one percent of the time, and we're okay with that. We know that it's a rule, and as long as everyone more or less keeps it together, we're fine with that. So the goal shouldn't be perfect enforcement, and that's one thing we'd like to make clear in this talk.

Another question about perfect enforcement: many times, particularly with minor violations, we might violate the same law several different times. For example, if you're speeding and the speed limit is 55, you may go 57, then drop back down to 53, then go 60, all on the same trip. So if you violate the speed limit 17 different times in one trip, do you get 17 tickets or do you get one ticket? These are difficult decisions that have to be made, particularly if the goal is to perfectly enforce the law. Woody, I'd add that we will violate the law even if we try scrupulously not to; it's just a function of minutes, perhaps an hour, before you will in fact do something wrong. Absolutely, yeah. I mean, we're all violators.
Yeah, but I mean, even with the best intentions. Absolutely. Another problem that comes with automated enforcement is the loss of human discretion. While Greg talked about the fact that discretion can be bad, because it can lead to unjust results, discretion can also be very good. It allows us to be compassionate, it allows us to follow the spirit of the law instead of the letter of the law, and it allows law enforcement officers to prioritize enforcement: I'm not going to investigate the case of the stolen sneakers real hard, because we've got a murder over here, and so we prioritize where we want to spend our energies. When you take discretion out of it, I think there are some significant problems. It also leads to the phenomenon known as automation bias. There's a fair amount of research out there showing that we as humans tend to irrationally trust judgments made by computers, even when we have reason to doubt them. The idea is: well, that doesn't look like the guy, but the computer says that's the guy, so that's probably the guy. There's a fair amount of that in the literature, and if you're going to automate a system, you've got to find a way to combat automation bias.

It's one thing to exercise your opinion and your right to freedom of expression, and it's another thing to do it with a government camera right in your face. With the ubiquity of sensors and surveillance around, I fear that there will be some serious chilling effects on freedom of expression in the United States, and that's precisely the kind of thing the First Amendment was created to guard against. I think any automated system should take measures to make sure there are no undue chilling effects on speech and our First Amendment rights. Also, imagine if, say, the number of speeding violations issued increases seven hundred percent when you automate the system, and we all decide to appeal simultaneously. We would crash the system.
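Woody's seventeen-tickets question from a moment ago is, at bottom, a counting-policy decision that someone has to encode, and the policy chosen directly drives the violation volume, and hence the appeal volume, he's warning about here. A hypothetical sketch, assuming speed samples at regular intervals; nothing below comes from any deployed system:

```python
# "One ticket or seventeen?" as code: given a trip's speed samples, do you
# fine every over-limit sample, or group contiguous over-limit samples into
# discrete events? Either answer is a design constant buried in software.

def count_over_limit_samples(speeds, limit):
    """Strictest policy: every sample over the limit is a violation."""
    return sum(1 for s in speeds if s > limit)

def count_violation_events(speeds, limit):
    """Lenient policy: count maximal runs of consecutive over-limit samples."""
    events, in_violation = 0, False
    for s in speeds:
        over = s > limit
        if over and not in_violation:
            events += 1  # a new excursion above the limit begins
        in_violation = over
    return events

# One trip: speed drifts over and under a 55 mph limit.
trip = [53, 57, 58, 54, 60, 61, 59, 52, 56]
print(count_over_limit_samples(trip, 55))  # 6 tickets, or...
print(count_violation_events(trip, 55))    # ...3?
```

Neither answer is obviously right; the point is that a constant in somebody's code ends up deciding whether one trip is one offense or seventeen, and that choice multiplies through the whole enforcement and appeals pipeline.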
You have to make sure, before you implement any system, that the infrastructure can handle both the burden of the initial violations being issued and the appeals process that comes after. And finally, there's the issue of societal harm. Automating a law enforcement system to achieve perfect enforcement says one thing in particular: we don't trust you. We don't trust you to do what's right, and we're going to go ahead and enforce the law automatically, particularly when you engage in preemptive enforcement. That risks eroding the necessary trust between citizens and government that's critical for any kind of effective governance. There are also some moral implications of doing our best, through preemptive enforcement, to make sure that nobody can commit any crime. What does it say about a society that takes away all accountability for violations? Don't worry, you can do anything you want, because if it were bad for you, we wouldn't let you do it in the first place. Those are the significant questions that have to be answered in any automated law enforcement scheme.

So what can we do about it? Well, first of all, we can ensure that there are procedural safeguards; we can ensure that basic, fundamental due process rights are respected: the rights to notice and a hearing. We need to know when we have violated the law, and we need to have an opportunity to be heard.
Privacy rights: we need better Fourth Amendment jurisprudence, and we need to solve the problem of privacy in public, which I think we're headed toward a conflict over sooner rather than later. We need better electronic surveillance laws. The necessity defense: all of us probably understand that if you are headed to the emergency room, it's probably okay to speed; it's fine if you need to go 75 or 80 miles an hour to get there, and we'll let you pass on this one. There are many instances you can imagine where we need to go ahead and violate the law because the costs of not violating it are greater. Transparency: who sees the source code? Is it going to be a trade secret, or do we all get to see it? We can look to a lot of the e-voting disputes to learn from here, but I think that open source and transparency in the code are absolutely critical in any automated law enforcement system.

So what can we do about this? Obviously there are countermeasures available for all the different kinds of problems. Greg and I gave a talk at the HOPE Number Nine conference earlier this month about a taxonomy of countermeasure approaches, and in this community we love to defeat the device. We are all about reverse engineering the firmware and repurposing devices for our own needs, and that's great, and that's absolutely a way of providing countermeasures, as are man-in-the-middle attacks on the network, or defeating the processing.
How securely is that database recording all our data, and can we tamper with it to make it look different for us? Those are fun and exciting engineering challenges, but even more important, we assert, are the countermeasures that influence the actors: the decision makers, the people who decide to build these systems, to emplace these systems, and potentially to regulate these systems. If we can prevent a bad system from being put in place at all, so much the better; an ounce of prevention is worth a pound of cure. That takes us out of our comfort zone, because we're dealing with real people rather than inanimate objects, but it's a vital task to engage the media, policymakers, law enforcement officials, and the people who design, build, and test these systems. Once a system is in place, local leaders become addicted, particularly if it's profitable, to the financial resources it brings in, and dislodging it becomes far more difficult. Better to prevent it than to try to remove it after the fact. You also have to worry about competing sensors. When you have these different sensor mechanisms, how are they calibrated? How often are they maintained? If different sensors detect different things about your activities, which one is right? And then we have to look at how these laws are written and how they would be algorithmically implemented. This is a graph of data I took from my 2006 Prius, showing vehicle speed over a period of about five minutes. In this test I set my cruise control to 42 miles an hour, which is the pink line going straight across the graph. At the very beginning I was going slightly downhill and you can see the speed rise; then I went slightly uphill and the speed dropped below 42; and then I was on relatively flat terrain, yet the speed is still bouncing back and forth. Why is that? Speed is inherently an analog quantity with infinite variability, but the computers on board our cars are digital systems, so they're doing analog-to-digital conversion, and inherently there's some quantization error involved. It turns out the computer on board my Prius has a quantization window of about 0.6 miles an hour, so it will never read exactly 42 miles an hour; it's going to read 41.6, 42.2, 42.8, and so forth, even though the number it actually spits out has four digits after the decimal point, like 41.6374 miles an hour. So you think it's really accurate, but it's not. If you look at this little graph, and if the speed limit were exactly that red line, there are about 17 times within three minutes that I violated the speed limit even though my cruise control was set at the limit. Would I get 17 speeding tickets? I hope not, but the law has to take that into account. If any one of us were tasked with writing code to enforce the speed limit, how would you do it? Would you have a level-crossing scheme where every upward crossing of the limit counts as a violation? Or a sliding-window scheme where the limit has to be exceeded for, say, 300 or 500 milliseconds before it counts? If there are three violations within a certain period of time, is that three violations or one? The devil is in the details, and I should add that there's no current infrastructure in the law to respond to that, because of course these laws were not written with algorithmic precision in mind. So, for example, take trespassing. It's a violation of
the law to trespass, but suppose you were tasked with enforcing that automatically. A GPS device could probably tell whether you're on someone else's property or not, but how long do you have to be on the property before it's a trespass? A few seconds? What if you're walking along the boundary and one foot crosses over? How far into the property do you have to be before it's a trespass? These are all little decisions we make as judgment calls, using discretion in deciding whether to enforce the law, that would then have to be encoded, and if you make an error, you've suddenly systematized that error into the law. That opens up a wide range of research topics. This is an unsolved problem, and we're trying to prevent problems, so the community really has to engage in critical analysis. What are the metrics for deciding risk versus benefit? At what point is it worth implementing an automated system? How much benefit do you have to derive, at what cost? These systems need to be designed for transparency and accountability, and we submit that they should have manual overrides. A car that prevents me from violating the speed limit sounds great in theory, but what if I'm rushing to the emergency room? I'd like to be able to get there quickly. Or how many of you saw the video footage from the Japanese earthquake, when there was that huge tsunami wave and a little car racing down the road trying to outrun it? If my car couldn't go past 30 miles an hour on that road, that would not be good. So we want manual overrides, and we want to build in security now. You all are going to find the flaws, and hopefully you'll tell us, and hopefully we'll be able to do something about it, but we want to build in some minimal level of security. And the thing is, this isn't going to happen overnight. This sort of problem is similar to environmental problems: a little bit of pollution here and there, and then suddenly you wake up one morning and your river is on fire. It's the same kind of thing here. You get a little sensor here, a speed camera there, a new computer system in the police department, and the next thing you know we're living in a police state. So be careful what you build. In summary: these systems can be implemented; the sensor technology out there right now has the potential to automate a lot of the law enforcement process, and if it's not done well we could have some really serious unintended consequences. You all in the audience are in a unique position to help avert these catastrophes if you're interested. The PDF of the slides has all the references. We've done a talk at HOPE on countermeasures, and we've also written a paper for the We Robot conference that covers some of this in more detail. We'd really like to thank John Nelson and Dom Larkin, two colleagues who collaborated with us on the We Robot project but weren't able to be here with us today. As we said, if you've got questions, we'll be in Q&A room number one. Thank you very much.
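The quantization and violation-counting questions raised in the Prius example can be made concrete with a short sketch. This is our own illustration with made-up sample data, assuming the 0.6 mph quantization window described in the talk; it is not code or data from the actual experiment:

```python
# Illustrative sketch (not from the talk): how ADC quantization turns a
# cruise-controlled speed into apparent "violations", and how two
# different counting rules disagree about how many violations occurred.

QUANT_STEP = 0.6  # hypothetical quantization window, mph (per the Prius example)

def quantize(speed_mph):
    """Snap an analog speed to the nearest quantization step."""
    return round(speed_mph / QUANT_STEP) * QUANT_STEP

def count_level_crossings(samples, limit):
    """Strict rule: every upward crossing of the limit is one violation."""
    count = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev <= limit < cur:
            count += 1
    return count

def count_debounced(samples, limit, min_samples):
    """Sliding-window rule: a violation only if the limit is exceeded
    for at least min_samples consecutive samples."""
    count, run = 0, 0
    for s in samples:
        if s > limit:
            run += 1
            if run == min_samples:  # fires once per sustained excursion
                count += 1
        else:
            run = 0
    return count

# Made-up analog speeds hovering around a 42 mph limit (cruise set at 42).
analog = [41.5, 42.4, 42.2, 42.4, 42.1, 42.5, 42.4, 42.4, 42.2, 41.5]
digital = [quantize(s) for s in analog]  # bounces among 41.4 / 42.0 / 42.6

strict = count_level_crossings(digital, 42.0)
lenient = count_debounced(digital, 42.0, min_samples=3)
print(strict, lenient)  # → 3 1
```

The same noisy trace yields three violations under the strict rule and one under the debounced rule, which is exactly the kind of encoding decision the talk argues must be made deliberately and transparently.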