All right, this is better. I'm Donie, and I work in tech support for Caesars. We're not in Caesars, though; that's why the mic broke.

I'm Dr. Rumman Chowdhury. I lead ethics in AI at Accenture.

Hello, my name is Ariel Herbert-Voss. I'm a PhD student at Harvard, a former black hat hacker, and now on a redemption arc.

Getting better as we go. Second round is better.

Hi, I'm Sam. I'm a reporter at Motherboard, VICE's science and technology outlet, and my team and I were the first to discover deepfakes.

And my name's Dr. Britt Paris. I'm a researcher at the Data & Society Research Institute, studying the rhetoric and technical solutions around deepfakes, and I've been following this since Sam broke the story in 2017.

So yeah, I think that's enough. Britt, in fact, let's start with you. The first question I want to start with: Sam and her team have been across this story for a while, but it's only in the past six months or so that organizations like CNN, and I think lawmakers, have really been talking about it. Do you think it's being blown out of proportion? Are deepfakes the national security threat, and the risk to the 2020 election, that some people are saying?

Well, here's what I think. It may not be a popular opinion, but I will commend Sam's reporting when she broke the phenomenon back in 2017 for laying out more clearly the harms, and the harms to whom, that we should actually be worried about. A lot of the discourse around deepfakes at present, the policy being introduced around it, and the way major media outlets are glomming on to this panic and focusing on the threat to elections, the threat to capital-P Politics, problematically disregards the harms to already vulnerable populations like women, who are targeted disproportionately by fake imagery. Even going back to the beginning of the internet, people were making Photoshop fakes of women, and it hasn't stopped. This is just an expansion of that trajectory.

Yeah, I think that was really well said. I'm glad we're jumping right into this; I was thinking we'd ease into the "are deepfakes overblown?" question, but here we are. As Britt mentioned, this technology was first and foremost developed by men to harass women, or to use women's images without their consent. That's the reality of it. Deepfakes, the Reddit user, I spoke to him a little over Reddit DMs. I asked him, why did you do this? Why did you make this? And his response was: I'm just a programmer, I'm honing my craft. I had a hobby. I like porn, I like developing algorithms, so I put them together. And that's kind of the story going back for years before deepfakes.

So yeah, it's tough, because a lot of the reporting does focus on politics, on public figures being targeted by this. But, and correct me if I'm wrong or if I'm missing something, I haven't really seen any substantial harm done to anyone in politics or in a public position as a result of deepfakes. That's where the media is focused, though. And because it broke in December 2017, right after the elections and right before the midterms, it hit a really soft spot about fake news: what's going to happen to truth, and what are we going to do with this new technology that's going to destroy our image of truth? And the fact is that it's still being used to harass women.
There's been a ton of good reporting on that, but I'm not really sure why the focus sits where it does, other than that it's attention-grabby to say deepfakes are going to create a nuclear war or something. The end of reality. So yeah, that's kind of my too-long-didn't-read of what's happened. I hope you read.

So I think the concern with deepfakes actually resides more in how centralized a lot of the media infrastructure is these days. When I think back to when I used to run troll operations, the way people communicated was on forums that weren't Reddit, in various places on the internet. But these days most of the comment sections have consolidated onto YouTube or Reddit or Twitter, which means if you want to get content out to a large population, you can do that so much faster than you could ten years ago. To me that's more of a threat than the deepfakes themselves, because as my co-panelists have said, this technology has been around for a while, and we have seen some harms, but nothing that significant. I think social media is maybe a bit more harmful, and we should look more at that.

I'm going to take a slightly different opinion. Deepfake technology has actually improved quite a bit, and you can train on far fewer images; that's where DeepNude came from. It just became much easier to do over time. By background I'm a social scientist, but a quantitative social scientist, so I work in AI as well. And with both my political scientist hat on and my technologist hat on, I can tell you that confirmation bias is real. Deepfakes don't have to be good for people to believe them. People want to believe. People believed Pizzagate, right? Let us not forget Pizzagate. It is very easy to make something plausible enough that a population will believe it.

So when we think about elections, how might this be used? I think the way it's being framed is a little bit wrong. It's not that it's going to be used to make a video attacking a particular person. What will happen is that misinformation and disinformation, and those are two totally different things, will be spread to particular communities to misinform or disenfranchise them, very specifically targeting low-income minority communities to get them not to turn out to vote, to get them to think the issues aren't relevant to them, that politicians don't care about them. So what's the point of turning up, right? That's one way it gets used.

And I do agree about centralized media, but that's in the West. When we think about Myanmar and the massacre of the Rohingya that's happening right now, that can be directly attributed to certain social media outlets. Why? Because in developing nations and in smaller, poorer countries, with all due respect to CNN, they're not exactly reporting on the election results of Myanmar. So people in those nations turn to Facebook, Twitter, et cetera, to get their news. When we combine deepfakes with the proliferation of bot architecture and the ability to spread misinformation very directly, and we've seen impact assessments of how Facebook drove misinformation in Myanmar, then when we think about how elections might be impacted, we're not just talking about the US election.
In the US it would be something like finding vulnerable populations and disenfranchising them; in other parts of the world it very, very directly impacts the elections that are happening.

So Sam, maybe this is best for you, but it's open to all of you. As we discuss this in the political context, that is oftentimes when the social media corporations actually react, because when there's a pile-on from either the media or Congress, that's when we see action, whether it's on disinformation or deepfakes. It's only when it gets political, and they get calls about an anti-Pelosi video, that Zuckerberg and Sandberg and Jack Dorsey start really talking about it. But you mentioned the real problem, which is actually happening right now: the revenge porn type of material. I was wondering if you could speak a little about the victims of that you've talked to. And I know the impact is the same whether it's a deepfake or not, but have you spoken to victims who were specifically targeted with a deepfake?

Yeah. So there's a thing I hear quite a bit from people who have good intentions when they talk about revenge porn and algorithmic face-swapping, which is: well, if nothing is real and everything is fake, you can just say, oh, that's not me in that video, that's not me having sex in that video, and blow it off like it's nothing. But that's not really getting at the truth of what it feels like to have one of these made of you. I was talking to Danielle Citron, who was going to be here, and she put it very succinctly, and I think it was a very good way of putting it: when you see a video like that of yourself, it looks like you having sex on camera, because it does look like that. The images are low-res and grainy anyway, so it's believable. You see that and you think: hundreds of thousands of people have seen this. It feels like your body. It doesn't matter that it's not. What people see is you in that video.

So yeah, I think that's something a lot of people don't quite understand about deepfakes, especially when they're used against women. It's the same feeling as having revenge porn posted of you online. It is revenge porn; it's just made with an algorithm. That's the heart of it: it feels like it's you even though it's not, and it doesn't matter that it's not, because what people see is you.

And that gets to the other points that were made, with the Nancy Pelosi video. That wasn't a deepfake. That was slowed-down footage, edited to make her look drunk, posted on Facebook, and it got hundreds of thousands of shares. It was just bad editing, but people shared it anyway. Some thought it was funny: I know it's not real, but I don't like Nancy Pelosi, so I'm going to share it anyway. Others thought it was real: they saw it in their feed, scrolled by, and went, oh, crazy, share. It doesn't matter. People don't think before they share, a lot of the time. So I think that's one of the key points to remember.

Totally agree. With the Nancy Pelosi video, the thing to think about is the network effect. It's not just you sharing it; it's the second- or third-degree person who maybe heard somebody say there's a video of Nancy. It's like a giant game of telephone.
I may have seen the video and thought, oh, there's this terrible fake video of Nancy Pelosi looking like she's drunk, but she's really not. A next-degree person just scrolls through, as you mentioned, and sees Nancy Pelosi drunk on some video. And even if it gets debunked, that news may not filter down to the second- or third-degree person. So you're absolutely correct, there's this reputational effect.

And when it comes to deepfakes and fake nudes of women, it is akin to assault. It is a violation of your body and your being, even if it is not you, and the psychological impact it has on women is massive. I'm curious to see where legislation heads on this, because we've seen New York pass revenge porn laws, and deepfakes would be included in that. But I'm curious to see what protections are going to exist for women, and not just women, for anyone these videos are made about.

How receptive, are we okay? How receptive have the major porn websites like Pornhub and others been to removing this content and taking it down? Because obviously if you post content like this on Facebook or Twitter, it should be taken down because it's porn regardless, though sometimes that doesn't happen. Who's best placed there? Yeah? Oh, do you want to? Or no?

All I was going to say is: it's still there. Deepfakes are still on Pornhub, even though I've been on Pornhub's ass about this. They don't care. Pornhub is owned by a giant conglomerate called MindGeek, and they don't give a shit. Twitter and Facebook are better about this, but it's not until a journalist comes knocking that they care at all. If it's just you, and you're like, I'm in this video, what do I do?, it's not going to get taken down.

So I wanted to get at how deepfakes fit into the wider disinformation shitshow that has particularly risen over the past three or four years. How does one prepare, and what should researchers and the media do? As we talk more about deepfakes, particularly in this political context, I often think about something like the Access Hollywood tape of Trump that dropped in 2016. If that were to happen in 2020, he could easily just say, well, that's fake audio. The knowledge that this technology exists lets people say that what really happened didn't happen. And I think you have some experience of that wider space.

I'm of two minds on this. On one hand I'd like to appeal to personal responsibility: you should pay attention to what media you're consuming. But on the other hand, since all of these media companies are optimizing for your attention, you kind of don't have a choice. A lot of the campaigns for teaching kids and adults how to consume media better fall flat, because it's exhausting to audit every single piece of information you read online, and it's easier to fall into: well, my neighbor or my parents are sharing this stuff, so that's maybe a signal of trust, and I can trust it, even though it's actually some sort of fake news situation. I don't really have a solution. Sorry for the dystopic take.

So, piggybacking off Ariel.
Again, this goes back to how people form their ideas, opinions, and thoughts, and this is removing it from the technology: we cement our opinions, thoughts, and ideals about the world from our network and our community, and actually from childhood. A lot of the political science literature asks why people hold the beliefs they hold and what would make them change their minds, and the negative answer is that it's very, very hard to change someone's mind. When fake media gets made, people consume it in order to reinforce their beliefs. It's not that they're getting fooled; they actually want to consume things that reinforce what they already believe. So part of deepfakes is feeding into that very human desire.

So to your point about trying to teach people: there's an assumption being made, if we want to teach people to verify media, that they actually want to verify it, when they actually don't. I mean, people will follow weird "Dr. Mercola, you can cure cancer with goji berries" websites because they desperately want to believe these things. So there is something deeper here about humanity.

But then to your question of whether this will disrupt the ability to share any information at all: are you jumping to the "is this the end of reality?" question? There is some work going on from a technical perspective to try to understand the provenance of media. I am Accenture's representative to the Partnership on AI, which is the big industry group where everybody gets together and we try to solve these things. One of the workstreams, and it's actually going to be an XPRIZE, is about media provenance: how would you figure out a technical way to prove whether something is real or not? If you're familiar with XPRIZEs, they have quite a lot of money attached to them. So it's going to be a challenge put out to the community.

I guess, to again piggyback off what my colleagues on the panel have said, I think it's important to ground the rhetoric around deepfakes in these very social and political processes, not only of technical production but of belief, of the distribution of harms, and of holding those in power, the people in charge of distributing these things, accountable for what they're doing. And we don't have a whole lot of that now. I could go on and on about this, but there have been a lot of solutions people have put forward toward beginning to hold platforms accountable. None of them are perfect. They include actually supporting and promoting human content moderators, in addition to technical content detection systems, and hiring those moderators in meaningful ways, because they are really the only way we can catch the social, historical, and cultural processes at work in how different types of content are interpreted by different audiences to mean different things, and to interpellate different types of action, based on fake content, disinformation, misinformation, things like that.
Other people have said that we already have a lot of laws that could hold these platforms accountable for disseminating these types of videos, but they're not narrowly tailored enough and they're not enforced. That's one avenue to pursue, along with actually encouraging platform companies to decide what their values are and to moderate content accordingly. Taken together, these solutions show a very long road ahead. It's very hard to combat power, but it's something we have to do, and it is possible. We just need to work together, and broad coalitions in this area are one thing the media could work to foster.

So yeah, on the point you mentioned about verifying provenance, figuring out where something has actually come from: I've heard that even outside of deepfakes. When we think about videos from Myanmar or Syria, you might want to use a video or an image in a case at the International Criminal Court, but you can't be 100% certain it is what it says it is, or that it was taken where it claims. Obviously high-end cameras, most actual cameras, attach a lot of metadata to a picture, but we know that normally gets stripped out when you post it to any of the major social media platforms. I hear a lot of pitches from folks saying exactly this: we're building a system that will support provenance, basically, we're going to watermark every piece of content. To me that seems like such a mountain, because you'd have to embed it in every piece of hardware, get every technology platform to buy in, and then what about when you start editing videos and images? It may be technically possible, but do you think the will is there, and that it could ever happen?

Yeah, so there are a few ways you can figure out provenance. One would be, I suppose, the lo-fi way: this watermark. And I agree it's an insurmountable task. What's to stop someone from faking the watermark? Then you have to verify the verification, and it's turtles all the way down, right? The next would be finding a way to algorithmically determine whether pixels have been altered in an image, and I think that's where some of this is headed: determining provenance by having another algorithm verify whether something has been doctored. The third one, which I've been toying with and which I think is worth discussing, is: what if the parties that make these open source algorithms and put them out there purposely build in some sort of backdoor means of verification, some way it would fall apart, some way you could interrogate it? Because these algorithms don't fall from the sky; they're not made by God, right? They're made by companies. Somebody put these algorithms out there; they're open source. So is there a responsibility on the organization or company that makes them to provide some way to verify? I think that's a question worth asking.
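(For concreteness, here's a minimal sketch of the "lo-fi" watermarking idea discussed above, using least-significant-bit embedding in Python with numpy and Pillow. The MARK payload and file paths are hypothetical; this illustrates both the approach and its fragility, and is not any panelist's actual proposal.)

```python
import numpy as np
from PIL import Image

MARK = b"provenance:device-1234"  # hypothetical provenance tag

def embed_watermark(in_path, out_path, payload=MARK):
    # Hide `payload` in the least-significant bits of the red channel.
    img = np.array(Image.open(in_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = img[..., 0].flatten()  # copy of the red channel
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    img[..., 0] = flat.reshape(img.shape[:2])
    Image.fromarray(img).save(out_path, format="PNG")  # must stay lossless

def extract_watermark(path, n_bytes=len(MARK)):
    # Read back the first n_bytes hidden in the red channel's LSBs.
    img = np.array(Image.open(path).convert("RGB"))
    bits = img[..., 0].flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Usage:
#   embed_watermark("photo.png", "marked.png")
#   assert extract_watermark("marked.png") == MARK
# Nothing stops an adversary from stripping or re-embedding a mark like
# this, and any lossy re-encode (say, a platform's JPEG pipeline) destroys
# it: the "verify the verification" problem raised above.
```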
Just before I pass this down, on that point: when you say build an algorithm to detect the pixel manipulation, that gets back into the arms race thing, right? And I'm sure there are people in this room such that the second you build an algorithm, somebody here will build a better one to fuck with it. So how do you address that?

And actually you're pointing out one of the issues with adversarial networks: in doing so, you've created an adversary that is just as good as the thing you've made, right? You end up pretty much creating it yourself, because that's just how adversarial networks work. And I suppose, since this is a hacker community, that has been the classic hacker problem, right? It's a classic security problem: you only have to be wrong once. The people on the other side just have to find one flaw. So I don't think there's a good answer to that. Do you have thoughts?

You've covered what I had to say, mostly. It's very much a cat-and-mouse game, and I don't think there's a good way to win. I think you just have to keep playing the game. The only way to win is not to play, but unfortunately we can't do that anymore. I think the cat's out of the bag.
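(To make the adversarial point above concrete, here is a toy sketch in PyTorch of a generator and a detector trained against each other. The networks, data, and dimensions are invented for brevity; the only point it demonstrates is that every gradient step that improves the detector is immediately available as a training signal for the forger.)

```python
import torch
import torch.nn as nn

DIM = 32  # stand-in for "an image": a 32-dimensional vector, for brevity

# A forger and a fake-detector, deliberately tiny.
generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, DIM))
detector = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Placeholder for genuine media: samples from a fixed Gaussian.
    return torch.randn(n, DIM) * 0.5 + 2.0

for step in range(1000):
    # 1) Improve the detector: label real media 1, generated media 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()  # detached: don't train G here
    d_loss = bce(detector(real), torch.ones(64, 1)) + \
             bce(detector(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Improve the generator *using the detector itself* as the teacher:
    #    push generated samples toward the detector's "real" label.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(detector(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Every improvement to `detector` leaks straight back into `generator`:
# the cat-and-mouse dynamic the panel describes.
```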
Yeah, I mean, I also get a ton of pitches every day about the new greatest way to catch a deepfake. Like everyone has said, it's a fine goal. I don't want to say stop making those things, because it's important, but I don't know if that's the answer. I don't know if watermarking is, either. Like we've already discussed, bias already exists around what you want to believe; a watermark is not going to make you think something is or isn't real. Yeah, I don't know.

And I think something else to pay attention to, and I know Rumman touched on this in a couple of her comments, is that as much as we need to keep an eye on deepfakes, we need to keep an eye on the solutions: who is making them, and how power is being consolidated, economic power, social power, political power, discursive power, whenever we talk about these types of solutions that focus on verification of pixels, or of the people uploading videos, or on forensic detection of other things. Going back to the news coverage around this, it fomented a panic that is very urgent and palpable, in a lot of ways that Rumman has talked about and that Sam wrote about. But the message then gets translated into: we need a quick fix, an absolutely quick technological fix. And the social and political problems that absolutely need attention, the structural inequalities that are implicated in and grow from these technologies, are not being attended to in meaningful ways.

And I really like what Rumman was talking about earlier, the group she's working with, I can't remember the name. Yeah, the Partnership, working with various frontline communities to consider how we might build more just technologies. That's something we really need to highlight going forward: technologists, public interest technologists if we want to think of them that way, don't necessarily need to reinvent the wheel. They need to highlight the work of researchers and of the frontline communities that are already grappling with these issues, that already have sets of solutions and can already clearly identify the problems, to promote better solutions. And I think that's good enough from me.

Yeah, just to add to that real quick, sorry. I'm glad you said that, because I feel like a lot of these solutions come from a point of privilege. Maybe I don't have access to the verification methods these people are pitching me. I got one recently that was like: we'll make a camera, put it in iPhones, and it'll only take true images, watermarked that way. If you pitched me that and you're in this audience, please come talk to me. No offense. But that's tricky, because what if I can't afford the special iPhone that's just for journalists? A lot of the people being targeted by this stuff can't reach these fixes. So that's just my take.

I think the other part, and Britt touched on this, is who holds the responsibility. There's really great work by another researcher at Data & Society, Madeleine Clare Elish, who talks about how we're the liability sponge, right? We have these powers that be who create things that are bad for society, and then the negative externality gets borne by society as a whole. We actually need to question that; I think that's the paradigm Britt is getting at, the one we fundamentally need to question. Is the responsibility of dealing with deepfakes on all of us, because a few powers that be decided to open source an algorithm that can make these things? Is that really fair? Whose responsibility is it? And again, it is not evenly borne by society, as Britt's been saying. It is disproportionately borne by women, by people of color, by low-income folks, especially when you cross the three. These are also the people with the lowest access to resources. So even the liability sponge varies a lot by your demographics and your ability to address, or even understand, the problem. That's a fundamental question we have to ask ourselves.

Sure. So where does the responsibility lie, then? If it's, sorry, Britt. No, it's fine, I'll rant. As you mentioned, with the victims of this: this is an inherently sexist technology, and the solutions, at least, are in some respects elitist. The victims, as you mentioned, don't have the resources to tackle the problem. So who is responsible? It goes back to the elites, though, right? Yeah, absolutely.

And I think this is a common theme that's come up in a lot of the talks I've seen in the Ethics Village and the AI Village: who holds the responsibility, and who is accountable? For those of us who work in the ethics space, this is the number one thing we're all tackling. So I'm wheezing around your answer, by the way, if you haven't noticed. No, seriously, it is not a solved problem. It's a hot-potato situation. We all agree there's a giant turd and it's smelly, but, seriously, who's going to pick it up? We're all darting around each other saying, well, people should be responsible for consuming media that's good for them. And we had social media companies say for years, years: oh, we're just an intermediary, we're not media companies, we're not responsible. The companies that make these open source algorithms would say, well, we can't be responsible for how people use them. It is the ultimate distributed harm.
And even with notions of privacy and security, whenever you have distributed harm, it is very, very difficult to determine accountability and culpability. I feel like Britt has thoughts. Go ahead.

Plus one on all of that. Can I shout out your really great Medium article about responsible rollout? Because if you don't, I would like to ask you to talk about that, please.

Sure. It's a Medium article published on Data & Society: Points, and it covered OpenAI's GPT-2, the AI-driven text generator, a disinformation disseminator so to speak, and their somewhat botched rollout of that tool: the way they were secretive about it but really wanted to get it out in public and talk about how very powerful it was, while saying, oh no, no, we could not possibly release it to the public, we could not possibly develop a real security strategy around putting this fake-text generator out in the world. Yet they really wanted to talk about how good it was. In the end, this issue of panic around AI-driven technology is something I don't think I need to explain; we see it every day in the news. But this panic creates openings for people to slide in, like I was saying earlier, with very quick technical fixes, and those people stand to gain a lot of economic profit from developing these solutions and putting them out in public. And we're hoodwinked, in some ways, into believing we must avoid these negative externalities, and we must, right? They're admittedly bad. But we don't want the solution to be worse than the problem itself. That's what I wrote about in the Medium article. I don't know if there's anything else off the top of my head that's useful.
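(As a footnote to the GPT-2 discussion above: once model weights like these become public, generating fluent text takes only a few lines. A minimal sketch, assuming the Hugging Face transformers library and the later fully released "gpt2" checkpoint; the prompt is invented for illustration, and this is not OpenAI's original release tooling.)

```python
from transformers import pipeline

# Load the publicly released small GPT-2 checkpoint as a text generator.
generator = pipeline("text-generation", model="gpt2")

prompt = "Breaking news: officials confirmed today that"
outputs = generator(
    prompt,
    max_length=60,            # total length in tokens, prompt included
    num_return_sequences=3,   # three different continuations
    do_sample=True,           # sample, so each continuation differs
    temperature=0.9,
)
for out in outputs:
    print(out["generated_text"], "\n---")
```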
From a technologist's perspective, what I thought was really interesting in what you pointed out is that we do have precedents in other sciences for this: if somebody creates something that could potentially be dangerous, how do we roll it out? Who do we give it to? The difference between, say, the biomedical sciences and AI, the fields we're in, is that we don't really have an accountability structure. We don't have an FDA, some third-party verifier that we fundamentally believe and trust. If anything, in these communities it's sometimes the most powerful organizations that are the ones we don't trust. So say some small startup, it doesn't have to be OpenAI, it could be anybody, creates an algorithm and realizes it's multi-purpose, that it can do harm. What are the steps to take? To your point about accountability, these are the things people are trying to understand and address. What does responsible rollout look like? Does it mean we roll it out to the government first, or to some trusted company or organization that we know will use it responsibly, just to have people pressure-test it? I was actually talking to Jack Clark, who's the policy lead for OpenAI, and we're all sort of trying to figure this out together.

So, to get back to your question: there is no good answer. And to Britt's point, there are no quick fixes, and if anything is a quick fix, it's probably a lie, or it'll probably have a negative externality we haven't even thought of yet. I actually tweeted about something else the other day, and pretty much I said: TL;DR, if a founder tells you they're trying to save the world, don't trust them. Their product will not save the world. Nobody can save the world with a product.

Yeah. What I've seen a lot of in the pitches I get, particularly from people in the academic community and the nonprofit world who are working on this, is that there's a pressure there, right, to show your work, to get future funding. So there's a demand to put this stuff out in the wild to remind people you're doing something. I'm going to open it up to questions shortly, but just on the point of putting this stuff out in the wild: Sam, you've obviously done a lot of reporting on deepfakes, the original Reddit user. I wanted to get your gut sense of who you think they are, maybe things you haven't been able to write in an article because you couldn't prove them: who they are and what their motivations are.

Do you think we're going to dox deepfakes on this panel right now? Here? No. And no, I don't actually know, and even if I did, I would not be able to say. It's tough. We exchanged a few DMs, and then the first article came out and he did not like it and stopped talking to me. So that's fine. But like I said, I know he's a programmer of some kind, obviously. I don't know if it's at a research institution or if he's just a guy in his basement tinkering away. I can only go by what he told me, and he might have been lying. It might be that he was working somewhere really important and big and didn't want to get in trouble with his employer. I can't really say for sure who he is, other than that he told me he's just someone on the internet, which is factually true. Yeah.

Questions? Yeah? Where there is no public record? Yeah, okay. So the question, or the provocation, was that you'd like us to talk about the private sharing of deepfakes and fake videos on encrypted and secure platforms like WhatsApp, places that are less publicly accessible.

Okay. So this is something I've been thinking about a lot, something I call hidden virality. These platforms are created to foster communal values: you're supposed to know everybody you share with, you can only share within a small group of people, and those people can share within their own small groups, et cetera. And this is very useful and valuable in places where media structures are generally not to be trusted. Sharing information among friends, family members, and trusted parties is the assumption these platforms were built on, right? But what ends up happening is that they're hacked, and I say hacked, but this is basic social engineering, by nefarious parties who want to spread disinformation within trusted circles. And people, and the platforms themselves, don't find out about it until the problem is already so big that it has caused something terrible to happen.
We can look to India, we can look to Myanmar; these types of things are happening there. And there is no fix at present. I don't know how to fix it. I know a lot of people are working on it, including some of my colleagues at Harvard right now. But I think these very private sharing platforms carry a lot of probability that something dangerous will happen, based on the disinformation that spreads there.

Yep, go ahead. Could bounties for these technologies help address the issue? So the question was about the potential for bug bounties as a way to interrogate these technologies. There are two parts to it. I think you're right: things like PAI's XPRIZE on determining the provenance of information are a place where you could use that. But the problem with these algorithms is that they can be used for many different things. There's no way to interrogate the ability to make a deepfake that would stop anybody from making pornographic imagery, because that's just one person's application of the ability to generate synthetic media. That's one of the problems. But I do think you're absolutely correct; it's a really good way to start thinking about a lot of the malicious uses, and even the unintended consequences and ethical issues.

Her follow-up was that it's more about evaluating before release: could you identify the potential harms to determine whether or not something should be released? I would be genuinely surprised if any of the corporations that made some of these algorithms sat down and thought, oh, it could be used for this bad thing. And that's not to blame these companies; it's to say we don't actually have a framework for walking through potential threats to understand what the impact might be. This is the difficulty in the security space, the privacy space, and now the ethics space: what we sell people is nothing, right? When I go to clients, I will literally say: if I do my job, then nothing will happen. It's very hard to make that the value proposition, versus someone dangling a cool new fun shiny toy. Oh look, you can make a video of the Mona Lisa singing, ha ha, isn't that cool? And then I come in and say, someone might make porn with this, so you shouldn't release it. And they're like, oh, but then I can't make my Mona Lisa video.

The core of what you're getting at is that in the ethics community, we're going to have to figure out what it is we're offering people. When I think of my job, I think of incentives and actors. I would love to think everyone's a nice, kind, generous human being, but they're not, right? I may be talking to someone who frankly doesn't care that pornography is going to be made; it's not worth as much to them as the media frenzy around releasing the new algorithm. So it's balancing all those things, and I just don't know how effective it might be.

So, Ariel. I guess I have kind of a comment.
So when I think about how deepfakes are going to be used: yes, obviously, porn is going to be a big thing. But if we also look at Photoshop, yes, people make a lot of celebrity photoshops, and yes, there are a lot of Reddit groups about that, but there's also a lot of artistic potential in this kind of technology. And I think it's going to be a bifurcation, or I guess more of a bimodal thing: either we'll have extremely harmful uses, with women getting victimized, or we'll have really cool style-transfer videos, like the Lion King in some different animation style. One of those is very cool; one of them is maybe not as cool, obviously. But we need to figure out how we're going to weigh these two possibilities. Who are the threat actors? This also comes down to threat modeling: what is the likelihood that the harm is going to happen? There's probably more money, honestly, in making weird Lion King videos than there is in porn, depending on how we do regulation. So I think the regulation piece is pretty important. A lot of it is literally just weird art shit.

Yes, it's funny, yeah, there's a lot of funny stuff. The Mona Lisa singing thing is real, right? It was actually very interesting: you could train on, I think it was 33 frames, 33 still images, and generate a deepfake. And what somebody did was animate all these different paintings, so they were singing or talking, and it was fascinating. Very, very interesting.

Yeah, and there's style-transfer stuff where you can make the Lion King look noir, black and white with cool shadows. But in terms of an AI-for-good application, as in seriously impacting society or people, I don't know. I mean, one can say humor, especially now that everything's on fire. What about Snapchat filters? Oh yeah, what about Snapchat filters? What about when people forget to turn them off, right? When they're holding government meetings and they leave the kitten filter on. That made my day. And like I said, the world's on fire right now, so maybe that is AI for good. Yeah.

There was the pitch that if you just get the browser extension, this will never happen again, and nobody ever used it. It seems to me like your discussion is saying this is a problem, but it's maybe not inherently a technical problem; it's a problem enabled by technology. Is there something special about the XPRIZE solution that will help? And could you comment a little more on the ethics piece?

Right. So I don't want to put words in their mouth; the Partnership's project is not my thing. But the thing about the XPRIZE is that there will be millions of dollars behind it, so it will be a massive incentive. Again, I cannot speak for the project; I've been on one planning call about it, and that's all I know, but it is something they're exploring. And back to the bug bounty question, putting a lot of money behind it would certainly help, and having brands like XPRIZE and the Partnership on AI attached would really help whoever solves it, if that's where they go; it'll earn them a lot of credit. To your point, I hadn't even heard about that browser extension, honestly. But it's also a collective action problem, right?
You would have to get all the different media outlets, literally around the world, as well as all the platforms, to agree, and I don't even know how you would solve that. To your point about it being a human issue: that sounds to me like a massive collective action problem, not a technology problem.

Yeah, I think a lot of this gets back to human nature, right? Even with Facebook's fact-checking initiative, a lot of people just don't trust the fact-checkers. And I thought it was quite telling that after the New Zealand shooting, when lawmakers and everybody were giving Facebook a tough time about that video circulating, Facebook released the numbers of how many users tried to re-upload the video, and there were a few million instances of that. So yeah. Britt can speak much better to this, and Ariel can as well, I'm sure: as much as we focus on this as a technical and platform problem, how much is it a human problem?

Right. I feel like I'm being a broken record here a little bit, but the assumption that audio-visual evidence has ever spoken for itself is something we need to interrogate. It requires social work to determine what is and isn't evidence. It requires social processes to determine what is truth. The reason we agree on what truth is, is that it's a social process of determining who the experts will be and what the frame is in which their expertise gets disseminated, and people who have economic, social, and discursive power have a very vested interest in determining who the experts are and what they say. So that is to say, it's a complex problem, and it's one that people in political science, in sociology, in library and information science have been working on for hundreds of years without a clear answer on the best solution. But those communities are the places to start when figuring out how we might fix this problem. I know Rumman's very active in the AI ethics space, and a lot of people within that space, Mark Latonero is here in the audience, or he was, could speak to this as well. That is to say: it is complex, and truth is difficult.

Let's do one or two more. Yeah. I'm wondering, do you think there are organizations taking this research, and yeah, most of the tools are pretty bad, but going beyond that privately, in a more subtle way than just releasing everything, like organized malicious services run behind the scenes? Right, so the question is whether there's an underground group of actors taking these really shitty academic tools and trying to use them for bad, and the answer is complicated, because it's both yes and no.
Outside of the machine learning bubble, it's actually quite hard to penetrate the machine learning bubble. If you don't already have these skills, you're not likely to pick them up, because you can go through the Andrew Ng MOOC on machine learning, get three lessons in, and go, why the hell am I doing this? This isn't helping me make deepfakes. And then you quit and find something else to do. That said, there are still skilled actors out there, maybe bored grad students who network and do this stuff because they're bored, but the number of people who do that is very small, and it's not something I would consider a threat, honestly, at least at this point. Because, like you said, the tools are extremely shitty, and until there's better UI, we usually find that people don't really adopt tools very much.

Hey, Deb. Deb's question was about what legislation looks like in this space. That's an excellent question. There are a few bills out there: there's a deepfakes bill, there are things like the DETOUR Act, and other attempts to stop specific abuses. They're very focused on the harms, and I think what you're touching on is that there's a level of knowledge and understanding about this technology that just doesn't quite exist in the legislative space yet. One of the biggest challenges has been educating lawmakers. I love how you started this panel by asking what's hype and what's not; whenever I go to DC, that is the number one question lawmakers ask. Should I be worried about deepfakes? Should I worry about bots? Should I worry about killer drones? It seems to be everything.

So, two parts to my answer. Number one, legislation is going to be a very long and difficult process. But number two, I think where we've seen the most success is figuring out how existing law applies to these situations, looking at it from a harms perspective. There's a lot of really great work that comes out of the privacy space, because that's the space the ethics community is learning from; we're learning from the privacy and security spaces, to be honest. So what are the existing laws on the books? It doesn't have to be about regulating a technology; it's about the potential harms of the technology. New York City and then New York State passing revenge porn laws wasn't necessarily about deepfakes, it could have been just about videos and nudes that were real, but it can apply to a deepfake. Thinking about legislation that way might actually be more valuable, especially given that we're going into an election year, with infighting in government, et cetera. Yeah.

I shouldn't allow anyone a second question, but this guy's almost jumping out of his seat here, so.

It's like an OJ Simpson question. Anyone who's old enough to remember: this is like the OJ Simpson DNA evidence, it's the same question. It is literally the question about DNA evidence, right? And the Rodney King video, yes, the Rodney King video. Oh, well, okay, sorry, just to repeat the question: if there were a murder case, and the defense for the alleged murderer said that the video evidence was a deepfake of the person, would that hold up in court? How would you rule on it? How would you look into it? How would you adjudicate it, so to speak?
So I've actually learned a lot about how this works in the courts, and if Danielle Citron were on this panel today, she would say this is a classic example of the liar's dividend, in which someone claims that a real video of them doing something wrong is fake, in order to excuse themselves from culpability for the act. And this is a huge problem. However, the courts do have pretty strict forensic capacities when they look at videos, and I know this because I have a lot of friends who practice law. I think a deepfake at present would face a lot of scrutiny; we would be able to determine whether or not a video was a deepfake.

But I have an example I often use. Say somebody's in a child custody case, and character witnesses are being brought to the stand, and somebody makes, it doesn't even have to be a deepfake, it can be just a fake, a video of the person whose character they're supposed to be vouching for doing something inappropriate, and it's sent to those character witnesses. Would this hold up in court? It's not introduced into the court as evidence, but it gets distributed to the people who will talk about this person's character, and it could affect what they say. That's the sort of issue I would see.

Do we have one more question? Yeah. People are now suspicious of static images, so is it going to be any different with deepfakes? Given more time, will people catch on to deepfakes? So your question was: people are already pretty suspicious of static images, but do we think people will become similarly suspicious of video? I think people are already kind of suspicious of video, because Adobe After Effects has been around for a while; you can doctor videos. And I think we're still too early in the tech cycle for a lot of these deepfakes to be really convincing. One of my favorite deepfakes is one where somebody took Elon Musk's face and put it on a bunch of babies, so there are these weird-looking Elon Musk baby things. That is AI for good. Right, but the face mapping doesn't work quite well, because the babies, I guess, are too wiggly and Elon Musk is just too meh, so you can tell it's a deepfake. So I don't think we're quite at the point where video deserves that suspicion. I do think we will get there at some point, but it's hard for me to put a number on when that might be.

Actually, just as you mentioned Adobe, and, Paul, to your point on corporate responsibility and putting this stuff out in the wild: I don't know if anybody remembers, but just a few days, I think, before the 2016 election, Adobe demonstrated a new audio tool. What was it called? VoCo. Basically, you take a few minutes of somebody's voice, and then you can type to make them say anything. I've been asking Adobe about it; they haven't released it yet. And I've wondered if they saw what exploded after the election, how disinformation entered the public discourse, and said, ooh, maybe we shouldn't put this out. But they won't answer my question, so that's not very helpful.

But thank you so much to all our panelists, and thank you all for your great questions.