My name is Sandra Wachter. I'm a lawyer and a research fellow at the Oxford Internet Institute at the University of Oxford, and I focus on the legal and ethical implications of big data, machine learning, AI, and robotics.

I guess this is problematic with autonomous cars because the risk is just too high, right? You don't want to launch a car that hasn't been tested enough and then actually injures or kills somebody.

There are a lot of similarities with other digital devices that we have. It could be a normal IoT device like a Fitbit, or it could be a cell phone, and I think there are very similar privacy implications for autonomous cars as there are for cell phones. With autonomous cars it's a bit different, though, because the cars have to communicate with each other and with the infrastructure, so you collect vast amounts of data from different people. You not only collect my personal data but also the passenger's data, the pedestrian's, the other driver's, and the other car's, so we all share that personal data all of a sudden. It's very unclear how we should manage that and who should have rights over that data, because we all share it now, and I think that's something that is unique as opposed to traditional technologies. And I think the other thing is that usually you don't share your digital devices with other people, so it's mostly your own data: you can manage it with consent, you know what's going to happen to it, and you can request access. We might have to think about finding a very good balance, because there are rather more competing interests around access and managing your data — what companies want, for example trade secrets concerns or intellectual property concerns — but now you also have the privacy concerns of other people as well.
So I think one thing that is very important is that we have a better dialogue between different disciplines. We need to be a bit more proactive in engaging with academia and with people from the private sector, to understand how the technology actually works and where the risks lie, and then come up with sensible solutions, and if we find novel risks, that is where we might need to tweak things. So I think we need to figure out where the actual risks lie, see whether we really have emerging problems, and then move forward from there.