Hi, I'm Rebecca MacKinnon, coming to you live from my exciting dining room table with my industrial-strength headset, because we sometimes have some ambient noise around the apartment. I'm the founding director of New America's Ranking Digital Rights research program, which we refer to as RDR for short, so if you hear people say RDR, you know what that means. Today we're publishing part two of a two-part report series that builds on more than five years of research about company policies and practices that affect their users' human rights. You can learn more about RDR and how we evaluate companies in our Corporate Accountability Index on our website at rankingdigitalrights.org. So, the issue of social media content moderation is a little hot in the news these days, I would say: the social impact of platforms' business models, and the question of what companies are doing to moderate, fact-check, or otherwise deal with problematic content. Very hot in the past 24 hours, in fact. A few news stories we've been seeing: Twitter has famously started putting fact-check warnings on tweets about the election by the current occupant of the White House. The Wall Street Journal has a blockbuster scoop about how Facebook commissioned research that found that its algorithms cause social polarization, and then Mark Zuckerberg chose to do nothing about it. There's also a report you might not have noticed, but it points to what happens when everybody says companies should just use their automated technologies to take down bad stuff: YouTube's content moderation algorithm has been found to be deleting comments critical of the Chinese government's propagandists online, and the company said that was a mistake of its AI. Whoops. What are we going to do about all this? Today, we're going to drill down on the source of what is now widely known as the infodemic on top of the pandemic, and what needs to be done to address it as we move into an especially fraught election season.
We're going to be joined in our discussion by a distinguished expert on the internet and civil liberties, Gaurav Laroia, senior policy counsel at Free Press. But first we're going to hear from the lead author of this report series, Nathalie Maréchal, RDR's senior policy analyst, who's going to explain why and how the core driver of online misinformation and other dangerous, socially harmful content is not just the content itself: it's the companies' targeted advertising business model. Take it away, Nathalie. Tell us how that works. You're muted, Nathalie. Thanks so much for the reminder, Rebecca. So, as Rebecca said, we have a real problem with how social media, and the algorithms that drive the distribution and targeting of content, are changing our society. At this point, I think civil society researchers and academics have helped us reach a pretty sophisticated understanding of what's happening. But where we're still stuck is at the policy level: what should we do about it? Both in the US and in other countries around the world, the focus has really been on how to make sure that companies get rid of the bad content, and that's pretty much true regardless of how you define the bad content. One of the problems with this line of argument is that, first of all, you have to agree on what bad content is. Some kinds of bad content are relatively straightforward, in the sense that everyone agrees that images of children being sexually abused are unacceptable, and there's a fairly clear line between what that is and what it is not. But for a lot of other types of problematic and even dangerous content, there's much more debate: at what point does satire veer into hate speech?
At what point is hate speech political expression, and at what point is it a threat to somebody's safety or well-being? How can you tell the difference between somebody simply being wrong about a fact and somebody deliberately spreading disinformation? That's a really huge definitional problem, one that isn't really solvable at the level of a broad society, much less at the level of a planet. But just as difficult is the problem of how you actually detect whatever content is deemed to be bad under whatever definition you've agreed to, and what you then do about it. If we move on to the next slide, you'll see that the core thesis of this report series is that relying on revenue from targeted advertising incentivizes companies, whether it's Facebook, Twitter, YouTube, or a host of others, to design platforms that are addictive, because they want to keep you online as long as possible so they can show you as many ads as possible. It also incentivizes manufactured virality, because they want to be able to tell advertisers and other power users of their platforms that there are certain tips and tricks they can employ to make sure their messages reach as wide an audience as possible. And they want to maximize the information they can collect about users, because they know so much about us: what we do online, in what order we do those things, whether we engage in certain online activities more reliably at certain times of day, or when we're at home versus on the go, or on different types of devices. They also pair this with all kinds of offline information, buying up our credit card records so they can compare our purchase history with what we did online the week before, and try to figure out which types of ads are more effective at actually getting us to buy stuff.
They use all that data to justify to their advertisers the money they charge them. So you end up in a vicious cycle. When many companies are first founded, they aren't too worried about making money yet; they're just trying to get as many users as possible. That's when you notice companies really thinking about what's fun for a user, what's useful, what's engaging, what's going to delight you as the user. But once they have a captured user base, the incentive to make people want to use the platform fades, and instead they figure out how to squeeze as much money out of you as possible. And that is what leads to all of the social harms we're seeing today. Next slide, please. As a result of all this, social media platforms are showing us a very distorted view of the world, one that is increasingly shaped by corporate algorithms and by the people who know the tips and tricks to take advantage of them. Unlike in traditional media, you don't have a set of professionals, ideally as diverse a set as possible, who have training, judgment, and a certain ethical responsibility to truth, to accuracy, to fairness, and to promoting small-d democratic and civil-liberties-respecting values in news coverage. Instead, what you have is a mathematical measure of popularity that ends up surfacing whatever is most emotional: what gets people angry, what gets people upset, what makes people want to comment and get into flame wars on each other's pages and posts and Twitter feeds. None of that is good for democracy, and none of that is good for human rights. Next slide. So the reaction, as I was mentioning before, that a lot of policymakers have had around the world, but also right here in the US, is that what we need to do is find ways to make companies better at taking down the bad content.
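The "mathematical measure of popularity" just described can be made concrete with a toy sketch. This is purely illustrative, not any platform's actual ranking code; the weights and field names are my own assumptions. It shows how a feed that sorts purely by predicted engagement will reliably surface inflammatory content over calmer, more accurate posts:

```python
# Illustrative sketch (not any real platform's ranker): order posts purely
# by predicted engagement. Reactions that keep people commenting and
# sharing are weighted most heavily, since those keep users on-platform.

def predicted_engagement(post):
    # Toy scoring model; the weights are invented for illustration.
    return (3.0 * post["angry_reactions"]
            + 2.0 * post["comments"]
            + 2.0 * post["shares"]
            + 1.0 * post["likes"])

def rank_feed(posts):
    """Return posts sorted by predicted engagement, highest first."""
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    {"id": "calm-news",    "angry_reactions": 2,  "comments": 5,  "shares": 3,  "likes": 40},
    {"id": "outrage-bait", "angry_reactions": 50, "comments": 80, "shares": 60, "likes": 10},
]
ranked = rank_feed(posts)
```

Under this kind of objective, "outrage-bait" outranks "calm-news" by a wide margin even though far fewer people liked it, which is the distortion the speakers are describing.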
The problem with that is, first of all, in the US we have this little thing called the First Amendment, which means the government can't prohibit speech; the government cannot tell platforms what their rules should be, outside of some very narrow circumstances, and images of child abuse are a really clear example of those. And what happens in countries that don't have legislation or constitutional protections like the First Amendment is that imposing what's called intermediary liability, meaning holding platforms legally accountable for the behavior and speech of their users, ends up resulting in unacceptable censorship. In those cases you incentivize companies to be as cautious as possible and take down anything that anyone anywhere might dislike enough to sue over, because even if a lawsuit doesn't have merit, it's still a nuisance, so many companies will simply refuse to let anything remotely controversial onto their platforms. We're seeing examples of this in countries all around the world. Another challenge to this approach is that both government officials and tech executives have really been overselling the promise of algorithmic content moderation: using computers to detect and take down speech and other content that's against the rules. If you were to listen to some of the things Mark Zuckerberg and Jack Dorsey and other tech executives have said over the past few years, you would think that if we just nerd a little bit harder, we'll come up with some magic AI that's going to find all the bad stuff and take it down while leaving all the good stuff up. That's just not realistic, for a lot of different reasons we can certainly go into as part of the Q&A.
So, you know, we've been very concerned, along with many others in the human rights and civil rights community, about the risks that eliminating or drastically changing Section 230 of the Communications Decency Act would pose to freedom of the internet. As Rebecca said, we've spent the past several years thinking about what other interventions would help address these very real problems associated with dangerous and harmful content on the internet, but that strengthen freedom of expression and privacy rather than undermine them. Next slide. Actually, let's go on to the next slide. Thank you. I'm going to talk about a couple of our recommendations, and then we'll pause to get Gaurav's take on our suggestions before going back to Rebecca, who is a well-established expert on corporate governance when it comes to human rights and the public interest, and she's going to talk about that. The first thing we recommend is that we really need to understand the problems better at a quantitative level. One of the really striking things about the Wall Street Journal scoop Rebecca mentioned is that Facebook in particular knew years ago what all of us researchers on the outside have been saying for years: that the platform, and the way it's designed and optimized for virality and for targeted advertising, has been ripping our society apart at the seams. But the entire time they knew this from their own internal research, they were pooh-poohing all of us on the outside, saying we were just making things up, and that because we didn't have access to the same data they did, we couldn't possibly have it right. Well, it turns out we did have it right, and they knew it; they just didn't like that fact. So we need to level the playing field and make sure we're all working from the same empirical set of facts.
We really need a much better public understanding of how these platforms work, from the outside. I won't read through all the bullet points here; you can check out the first report, "It's Not Just the Content, It's the Business Model," for a longer discussion of these recommendations and exactly how they would help us diagnose the problem and figure out the right steps to take. Next slide. Our second recommendation also relates to transparency. Back in 2017, a bipartisan group of senators first introduced the Honest Ads Act, which has two main provisions. The first is to take the same transparency and disclosure requirements, the "this is a political ad paid for by this candidate or this political party" notices that have applied to print and broadcast ads for a long time, and apply those same rules to the internet. That, to me, seems like a no-brainer: there's nothing special about the internet that should exempt you from having to be transparent about the fact that something is a political message and not something else. The second provision, which again mimics something that already exists for print and broadcast, makes it possible for regulators, journalists, public interest advocates of all kinds, and ordinary citizens for that matter, to see exactly what ads are being run and who they're being targeted at, so that we can see if a politician is sending drastically different messages to different groups of voters, and also see who's paying for what, and ensure compliance with federal regulation, state regulation, and companies' own rules. So I think passing the Honest Ads Act is a total no-brainer; it really should have happened a long time ago and is well overdue. However, I also think it doesn't go far enough, for two reasons. The first reason has to do with self-declaration.
A number of the companies have these voluntary online political ad libraries, but the problem is that in order to be included, the advertiser has to voluntarily disclose that they're a political advertiser. If they claim they're not running a political ad when in fact they are, there's very little chance of them actually getting caught; anecdotally, I hear from people in the political ad-buying business all the time that there is absolutely no enforcement, that it's really just based on good faith. And that's just not enough. The second reason we need to expand this transparency database to all types of ads is that political ads, as defined by law, are not the only types of communication that, if abused, can be really harmful to the democratic process, to public health, and so on. You only have to look at the misinformation and purposeful disinformation being spread about the coronavirus pandemic to see that we really need more insight and transparency into the communications that are going on, at the very least so that existing laws can be enforced, and so that we can have a conversation and figure out what kinds of targeted public health communications need to be put in place to counter the harmful misinformation and disinformation that's out there. Next slide. The third thing I want to talk about is a strong federal privacy law. This may seem a little counterintuitive at first, but I actually think the best way to address the problem of misinformation, disinformation, hate speech, and all those other kinds of problematic and harmful content on the internet is, excuse me, a privacy law. This is not about censorship; this is about limiting the spread and reach of messages on the internet, and it's also about protecting privacy. Among major democracies, Americans actually have the fewest privacy rights of any on earth.
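To make the ad-transparency idea concrete, here is a minimal sketch of what one record in a universal, all-ads archive could look like. Every field name and the `is_microtargeted` helper are hypothetical, invented for illustration; they are not drawn from the Honest Ads Act text or from any company's existing ad library API:

```python
from dataclasses import dataclass, field

# Hypothetical record for a universal ad-transparency archive.
# All field names here are illustrative assumptions.

@dataclass
class AdArchiveEntry:
    ad_id: str
    sponsor: str              # who paid for the ad
    creative_text: str        # the message users actually saw
    spend_usd: float
    impressions: int
    targeting: dict = field(default_factory=dict)  # audience criteria used

    def is_microtargeted(self, threshold=3):
        # Flag ads stacking many narrow targeting criteria, the pattern
        # that lets different voter groups receive different messages.
        return len(self.targeting) >= threshold

entry = AdArchiveEntry(
    ad_id="ad-001",
    sponsor="Example PAC",
    creative_text="Vote on November 3rd!",
    spend_usd=1200.0,
    impressions=45000,
    targeting={"state": "PA", "age": "45-65", "interest": "hunting"},
)
```

The point of a mandatory, machine-readable archive like this is that regulators and journalists could query sponsor, spend, and targeting criteria directly, rather than relying on advertisers to self-declare.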
That's not something to be proud of; that's something for us to fight for. I think we need to limit the data that can be collected about people, give people strong control, access, and deletion rights, and prohibit abusive targeting, like the case last year where a bunch of advertisers were using Facebook's platform to target ads for housing and jobs in ways that excluded people based on their race, their gender, their ethnicity, and so on. That is completely unacceptable. It's something that's being litigated and discussed at various levels right now, but it's not acceptable, and a platform should not be helping advertisers discriminate in this way. Here again, I'm not going to go into all the details of what we're proposing, though I'm certainly happy to answer questions; I would direct you to our second report for that. I'll stop here and turn it back over to Rebecca. Thanks very much, Nathalie. Angela, if we could just move to the next slide as our placeholder for a moment. We're now going to go to Gaurav to respond to what Nathalie has said. You are a prominent expert on civil liberties, civil rights, and the internet. In terms of your own work, what is your perspective on how best to ensure that companies operate in a way that actually upholds citizens' rights? First, thanks, Rebecca, Nathalie, and the rest of the team at New America for having me on as part of this panel. Rebecca, the work I've been doing mostly involves thinking about how data feeds this ecosystem and what that means for the business model, for civil rights, for how our society is organized, and for how these platforms interact with the way we connect and communicate with each other. I think you're 100% right that nothing about how the internet ecosystem works is natural or inevitable.
These are policy choices that people have made, and you all are absolutely right in your report to examine those choices and evaluate whether they're producing the kinds of rights-respecting and humane outcomes that, I think, practically everyone on this call would like to see. As you've so clearly laid out, the evidence is that they haven't. This business of massive data collection to support behavioral and targeted advertising has given bad actors the ability to further divide us and spread hate and misinformation; it has led to a thriving ecosystem of data brokers and surveillance capitalists; and it has produced an online system that, in survey after survey, Americans routinely characterize with creepiness and mistrust. So, like Nathalie just mentioned, I actually do think a strong privacy law could change much of that, make a real dent in how this ecosystem works, and really address those negative externalities that you've so clearly laid out. I think a privacy law needs to have strong controls on permissible uses of personal information, strong civil and human rights safeguards, and effective enforcement. Last year, Free Press, along with the Lawyers' Committee for Civil Rights Under Law, published a model privacy bill that put these principles into effect. Our guiding idea was that privacy rights are civil rights, and that any privacy law must have anti-discrimination at its core. I think disparate impact analysis has largely been missing from the conversation on regulation of data and the internet ecosystem.
And finally, organizations like RDR and other members of the civil rights and human rights community have really spoken up and started educating people that facially neutral policies and programs, even in the tech space, can further the discrimination, disenfranchisement, and disadvantage of already marginalized groups, and that those effects ripple out throughout society. In order to protect those civil rights, people must have control over how their data is used. There have to be strong prohibitions: data can't be used to build systems that oppress, discriminate, and disenfranchise, and that further segregate us. As Nathalie mentioned, there are ongoing cases about exactly that. And these aren't abstract harms. There are definitely beneficial and harmless uses for information and for some of these targeting mechanisms. But as Rebecca just mentioned, there's a bombshell Wall Street Journal report, which I hope many of you have read, about how these are in fact deliberate choices that companies have made using the data that people really have no choice but to give up just to use their services, choices that turn around and have these huge negative impacts. So when you think about a civil or human rights framework for how data is used, it's really nothing new: brick-and-mortar businesses have had to respect these core civil rights laws for over 50 years. There is, I think, a pushback from many internet companies along the lines of "we exist in this new universe, so why do we have to respect those laws, or even think about how these systems we've created can have disparate impacts?" But really, there shouldn't be any difference between the rights-respecting structure that exists in the offline world and the one that exists online.
I think people should be able to make an understandable bargain with internet companies: they hand their personal information over for a specific service. That means limiting the kinds of data companies can collect; companies should only collect the information necessary to provide the service people have asked for. In the absence of these clear rules, this incredibly permissive business model has built a system that exploits people to maximize companies' own profits, with all the negative effects we're seeing now. Just to close that idea up: fortunately, to bring it back to the political situation, we've seen these kinds of rights-respecting proposals from members of both the House and the Senate. The best ones focus on protecting people's rights, ending exploitative business models, and creating an enforcement mechanism that can actually go after these bad practices and the exploitation of our information that feeds all of the hate, misinformation, and voter disenfranchisement that we actually don't have to live with. That's really great. We're going to come back with more questions for you in a moment. I'm going to talk just briefly about one of the things you really touched upon, which is: what's the point of a company? Is the purpose of capitalism an end in itself, or is it a means to an end for society? Even the Business Roundtable, a business lobby group, last summer
came out with a statement recognizing that the purpose of business is not just shareholder value; it's actually bringing value to all stakeholders. That means environmental sustainability, and it also means a sustainable society in which people's rights are respected and protected. That's part of adding value to all stakeholders: keeping us alive, and ensuring that our rights and freedoms are possible. Which gets to a theme in our second report, where we use an analogy that originally came from the oil industry and was then adopted by the environmental movement and more broadly: if you're going to deal with the downstream problems, you have to fix the upstream systems. We talk about that a lot with pollution, and it's the same with content. Everybody's focused on how to take down the bad content, the pollution, faster, and on putting the onus on companies to do that, yet we're not really talking to companies enough about what they need to change in their systems that is causing the spread, weaponization, and targeting of this type of content in the first place. If we could just go to the next slide, please.
One set of interventions that is a big focus of the broader business and human rights community outside of tech, and has been a big focus of the environmental movement for the past couple of decades, is getting companies to disclose information about their risks to the environment and to society. That way, the growing body of investors who want to invest in companies that reflect their values, and that are actually contributing to the environment and society rather than the opposite, have the data and information they need to make decisions, and to hold accountable the management of the companies they invest in. There's a worldwide movement to get companies to disclose more of what's known as environmental, social, and governance information. In the United States there are no legal requirements to do so; in Europe such requirements are starting to emerge. Laws are also being proposed in a number of countries, and have already been enacted in France, for something called due diligence: companies need to actually conduct impact assessments, not only on how their business affects the environment but also on its social impact, identifying the negative social impacts and then demonstrating that they're working to mitigate and prevent them. Now, from that Wall Street Journal article, we saw that Facebook did do a bit of an assessment around polarization, and then did nothing about it. That is a problem. Companies should be doing impact assessments on how their algorithms and their business model affect society; they should be demonstrating that they're doing these assessments in a credible way; and then they should be demonstrating what they're doing to address that impact. Next slide, please.
One final thing I want to talk about quickly: later today is Facebook's annual shareholder meeting, and there are a number of shareholder proposals related to issues of disinformation and social harms caused by the platform, including one calling on the board to set up a human rights committee. Last year, there was a shareholder resolution calling for Mark Zuckerberg to step down as chairman of the board of directors, remaining CEO but no longer also serving as board chair, and for the appointment of an independent chair, so that management could be held accountable and there would be more independent oversight of the company's social risks, among other risks. A majority of regular shareholders voted in favor of that proposal, but it didn't pass, because Zuckerberg and company insiders own a special class of shares that carry ten times the votes of regular shares, so they were able to vote it down despite the fact that a majority of outside shareholders voted in favor. This is just one example of how the Securities and Exchange Commission could change the rules: it could require that companies phase out these kinds of dual-class stock structures so that shareholders actually have the ability to hold the company accountable and require it to address its problems in a way that isn't disingenuous. Next slide, please. So, you know, that's one of our recommendations; you can see more details about it in the report. But there's also another SEC rule that the SEC has proposed changing.
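The dual-class arithmetic mentioned a moment ago is worth seeing worked out. The share counts below are invented for illustration, not Facebook's actual figures; the only assumption taken from the discussion is that one class of shares carries ten votes each while ordinary shares carry one:

```python
# Worked example of dual-class voting: insiders holding a small fraction
# of total shares can still command a voting majority when their shares
# carry ten votes each. Share counts are illustrative, not real data.

def voting_power(class_a_shares, class_b_shares, votes_per_b=10):
    """Total votes: Class A shares carry 1 vote, Class B carry votes_per_b."""
    return class_a_shares + votes_per_b * class_b_shares

insider_votes = voting_power(class_a_shares=0, class_b_shares=400)
public_votes = voting_power(class_a_shares=2400, class_b_shares=0)

total_votes = insider_votes + public_votes
insider_economic_stake = 400 / (400 + 2400)    # ~14% of all shares
insider_vote_share = insider_votes / total_votes  # 62.5% of all votes
```

With roughly 14% of the shares, the insiders in this sketch control 62.5% of the votes, which is why a resolution favored by a majority of outside shareholders can still fail.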
Again, I won't get into the details here; you can read about it in the report we just published today. But the SEC is proposing a number of changes to how shareholders can file proposals that would make it much harder for any of these proposals to be filed and voted on at shareholder meetings, which means it becomes even harder for shareholders to put pressure on corporate management to address their social impact. Next slide, please. And so, this is a picture of a trawler: the companies are trawling around the internet for people's information, which is then being used to target people. We're seeing how that causes content that might otherwise be obscure to travel far and wide across the internet and to connect with the people who are most likely to be susceptible to it, to be inflamed by it. We need to do something about that. It's clear that the companies understand the harm, or certainly Facebook does, as we know from that Wall Street Journal article, and they're not acting. It is reasonable for society and for shareholders to demand that companies operate in a manner that is actually sustainable for our society, that contributes to the kind of country we want to be living in, that ensures our freedoms are protected, and that leaves us not completely manipulated by opaque forces we can't see. So, next slide, please. That's it, and we're going to go to our discussion. The links to the reports are in the chat, along with our website, where you can find more information about our work. I'm going to hand it over now, back to our panelists. Do you think that in the civil rights community there's starting to be more focus on bringing shareholders in as allies to hold companies accountable?
Yeah, I think there is definitely space for and interest in that kind of activism, especially from the civil rights community, and Free Press has worked on projects to encourage shareholders to hold companies accountable, especially for creating this environment of hate and divisiveness, which, as we know now even more clearly than before, reflects deliberate policy choices that executives at those companies have made on purpose. So this disempowerment of shareholders, this institution of basically corporate dictatorship by CEOs, it just shows how problematic that is. It should be no surprise: it's problematic when it comes to our political governance, so why would it not be problematic when it comes to business and corporate governance? Right. And so, Nathalie, in terms of some of the news that's been coming out in the past couple of days, and all this controversy, not just about Facebook but about what Twitter is and isn't doing, and YouTube's algorithms: what do you think the companies should be doing right now, regardless of regulation? So, one key problem that I see feeding a lot of these controversies is that the companies have rules, rules they made in consultation with a number of stakeholders including civil society organizations, and that in the US they're entitled to make on their own under Section 230, but they've been applying them very inconsistently. One of the things that's come up in the past few days is Twitter's decision to put fact-check labels on some of Trump's tweets about the electoral process. These are rules that Twitter has had in place for some time now: that they will put a fact-check label on, and correct, information that is factually incorrect about how voting actually works.
The problem is that they have not chosen to actually enforce those rules when it comes to certain political leaders. I think the fact that they're now enforcing rules they've had on the books for some time is a positive thing, because if you're going to have rules, you should enforce them in a transparent and fair manner. You should also provide appeal mechanisms, because sometimes companies will make the wrong call, though I don't think that's the case in this particular instance. What we're seeing now is a kind of shock and dismay that politicians and powerful people are being held to the same standards as everybody else. Yeah, and the other thing, too, one of the things we point out in the report, is that the infodemic accompanying the pandemic has been deadly. If misinformation spreads about how to vote, where to vote, when to vote, what actually happened at polling places, that could kill democracy; it could result in a disaster for the democratic process. So one of the things we've called for in the report, given that it's pretty unlikely Congress will enact everything we're calling for in the next two months, in time to have an impact on the election, is that companies really need to step up and curb the mechanisms that are enabling misinformation to travel and to be so effective in its targeting, and also curb the way people can be targeted, the way data is collected, and so on. I wonder, Gaurav, if you might comment on that as well, in terms of what you think companies need to be doing now. Yeah, sure. Let's talk specifically about the election.
This shows how important activism is, outside of regulation. Projects that New America has worked on, and that Free Press and others have worked on, like the Change the Terms coalition, have addressed some of these issues too. The point is that the companies have it in their power to turn this switch on and off tomorrow. And I do think they have a social responsibility here, whether or not it's enshrined in regulation, though I do hope we get to a point, legally, where we can start talking about that. As participants in this society, and hopefully as stakeholders who believe in the democratic process, the companies should see that blocking misinformation about the time, place, and manner of the vote raises very few speech issues, while playing such an important part in protecting people's hard-fought right to vote, that it's really incumbent on them to embrace and accept the fact that they have a responsibility here to protect democracy. We're getting a lot of good questions coming in, so I'm going to use my moderator's privilege to start pulling in some of the questions we've been getting, and we can mix them up with some follow-ons. One question we got from Adam on Zoom, and maybe, Natalie, you could take the first crack at this because I know you've written about it both in part one of this report series and elsewhere, is: have you seen state actors like China and Russia and other perpetrators actively exploiting social media's business models to amplify their disinformation campaigns? Absolutely.
And that's something that's been well documented by a host of organizations and researchers from civil society and academia, as well as from government. It's been well documented that the Russian Internet Research Agency used not only targeted advertising, infamously paid for in rubles, to target ads discouraging African American voters in particular from voting in the 2016 election; they also displayed a very savvy understanding of how groups work, how recommendations for joining groups work, and what kinds of content get boosted by the News Feed and Twitter timeline algorithms. China has also been very active in this space, as has Iran, and a host of other state actors, so that's absolutely indisputable at this point. And to follow up on that, a number of people have pointed out, including, I believe, Samm Sacks, a fellow with New America, just this week, that a strong national data privacy law is actually a national security imperative, in part for that reason, among many others. I completely agree. If it weren't for the data that the platforms have access to, none of this would be possible. Even if they don't actually transfer the data, as did happen in the Cambridge Analytica scandal, among other instances, and even if we take them at their word that this is no longer happening at all, they still lend out, or rather rent out, the capabilities that data gives them: the ability to let advertisers and other kinds of influence operations reach precisely the people who have been mathematically calculated to be the most likely to be susceptible to those messages. So if you take away this huge security flaw, things change.
That means that in order to persuade people, you will actually have to be persuasive, in a way that's much more robust and open to scrutiny than the current opaque and unaccountable system we're living under. Well, as Natalie said, we don't have to look any further than the Senate Intelligence Committee report on interference in the 2016 election to see, laid out in very clear terms, that yes, these systems are open to exploitation of a kind that subverts democracy, and of course we should be incredibly worried about that. One of the other really problematic parts of this is that not only have a lot of these companies created systems that are so open to exploitation, but because tension and divisiveness get eyeballs, they are also profiting off what amounts to the erosion of all these norms in our democracy. That situation really just can't stand, and that's why I do think it's great that we're talking about how the business model can change, to disincentivize what exists right now: an incentive to create division for profit that undermines our society. Yeah, go ahead. Something else I'd like to add is that the companies have drawn this line, a line they invented, between advertising, which is okay and, if you take them at their word, even beneficial for all of us, and "coordinated inauthentic behavior." To me, that's a very subjective distinction, because what is an advertising campaign if not behavior that is both inauthentic and coordinated? That doesn't mean it's all harmful; I don't have a problem with traditional advertisers trying to increase the market share for their detergent or their sneakers or their vacation packages.
But the line between pure commercial advertising, issue advertising, political advertising, and out-and-out propaganda or disinformation is not that clear. And the companies are really invested in making us believe that there really is such a line, so that they are the right people to decide where it is and to enforce it. If you're going to optimize a platform for the "good" type of coordinated inauthentic behavior, namely marketing, there is no reason whatsoever to believe it won't be equally useful for the more nefarious types of disinformation that we're so concerned about. But if the companies, Twitter, Facebook, and YouTube anyway, all agreed to a moratorium on all targeted advertising between now and the election, and only allowed contextual advertising, or targeting by geography and nothing else, how would things be different? Other than a lot of revenue being lost, what might the negative consequences be? Yeah. Well, first of all, I'm not that convinced that much revenue would be lost. If you look at some of the major commercial brands and the companies that own multiple brands, the Unilevers, the Nestlés, and so forth, they've been increasingly pulling their marketing budgets out of targeted advertising, certainly out of microtargeted advertising, because they're just not finding it that useful. Certainly, some things are very important, like showing people ads in a language they actually understand. Broad geographic targeting, sometimes even down to the level of a city, is very useful: if you're running a sale in one area, or you're a local business, you're not going to advertise to people at the other end of the country. But most brands are not finding microtargeting all that useful for those kinds of products.
The campaigns that do find it useful are exactly the ones trying to exploit pre-existing divisions in society, or to turn people against each other by sending different messages to different people. And I do think a moratorium on that kind of targeting would be a really prudent thing to do in advance of the election. Then, based on the data collected during that time, seeing how the platforms' users change their behavior, we'll have a really useful basis for deciding how to move forward from there. Yeah, I think this is where your recommendations, and recommendations that emphasize transparency, are so important. Yeah, I mean, there are trade-offs when it comes to putting the brakes on targeted political advertising. If I'm trying to reach people who are interested in Black Lives Matter in a locality that is, let's say, overwhelmingly white, or that has attitudes antithetical to that movement, I may want some sort of more precise targeting. The only way to figure out whether that trade-off makes sense, and I think it may well make sense, is to have really robust disclosures of who is spending this money, where, and who is being targeted. I want to be able to go to these companies and say: we've looked at the evidence, and on balance this is actually a socially destructive force. I also want to go to allies in all the movements I'm a part of and say: there is a trade-off here, and we have to be aware of what it is. That's not possible unless we force some serious disclosures from these companies. Well, I want to thank everybody who's been posting questions, because we've got so many fantastic questions that I wish we could go for another hour, but we've got about nine minutes, so I'll get to as many as I can.
The next question, from Veronica, also on Zoom, asks: do you think increased oversight could adversely affect online social justice organizing work? By "increased oversight" she may mean content moderation; it can mean many things, and I'm guessing at interpreting her question, but, Gaurav, what do you think about that? Right, there's a strain of thinking that we can't ask for nice things because giant companies are never going to have our interests in mind, and while that's not necessarily what this person asked, I've heard that flavor of comment. I think we just have to ask that companies act in socially responsible ways. Obviously, they are reluctant to do so; we've talked about this article a million times already, and the evidence piles on and on that they do not want to act in socially responsible ways. But that does not mean at all that we should not ask that they do so, demand the kinds of reports and assessments from them showing why they've made the decisions they've made, and force socially beneficial outcomes. No, I'm not afraid of increased oversight. Now I'm going to move on to the next question, even though I know Natalie will also have useful things to say on this, just in the interest of respecting all these great questions we've got. I think both of you will have interesting things to say about this next one, which comes from Monica via Zoom: what are the odds that a federal privacy law makes it through this or the next Congress, and what needs to happen to change the odds? Do you want to go first, or should I? Sure, I can go first. I think it's extremely unlikely that the majority House Democrats and the majority Senate Republicans are going to come to an agreement on a federal privacy bill between now and November. And then add the odds that President Trump would also sign such a bill.
I think that puts us well into the realm of the unlikely. That being said, it doesn't mean this isn't worth talking about and discussing now. None of us knows what the next Congress will look like, or indeed who will be in the White House next year, and legislation doesn't happen overnight. Now is the time to be having these conversations with members of both parties, as well as with people who may not identify with either party, about what a federal privacy bill that would actually protect privacy and operate in the public interest would look like. Yeah, I think the likelihood of privacy legislation getting passed this Congress obviously isn't that high, but it's also not great for some very good reasons. The human rights and civil rights community has started demanding that civil rights protections be included in any privacy bill, and that there be robust enforcement, with forward-looking rulemaking power that can look at data practices and decide whether they're permissible or not. That is something a lot of people on the Hill disagree with. It is perhaps a different kind of emphasis from what privacy has looked like in preceding years, and that does make the boulder a little harder to roll up the hill, but I'm confident it's the right approach and that we will get there. We have a question from Sophie Pilgrim: is Facebook's Oversight Board a step in the right direction in terms of content governance? I will venture a quick response of my own, and then I'd like to know what our guest Gaurav thinks in particular. It may help with some things, but the remit of the Oversight Board is very narrow, in that its mandate is just to adjudicate specific decisions about whether specific content stays or goes.
So given that mandate, while the board is welcome to issue opinions on anything else, Mark Zuckerberg and Facebook management are under no obligation even to listen to or pay attention to any other questions the Oversight Board might raise around, say, their algorithms and the impact of those algorithms on social division, or the targeted advertising business model and its effect on content, and so on. And this is where oversight traditionally comes in with corporate governance: there is supposed to be a body that oversees management to make sure it's handling its risks, its social impact, its environmental impact, and all those things. It's called the board of directors, and Facebook's board of directors is completely failing, because the people who've been critical of Zuckerberg have been eased out lately, it's full of people who mainly agree with him and support his way of doing things, and shareholders can't force a change in that regard because of the voting structure. So now we have this Oversight Board that's kind of helping: if Facebook has to make a tough decision about whether a particular politician's statement on Facebook goes against the rules, and they take it down, the Oversight Board, I think, helps to reinforce and lend moral authority and legitimacy to the tough, and particularly the politically tough, content moderation decisions that Facebook might want to make but is kind of afraid to make. But it's not going to deal with any of these other broader issues as it's currently constructed. Gaurav, I'm curious about your thoughts. Yeah, I agree. From my perspective, there's been a very odd hype about what the Oversight Board is and what it's supposed to do; it has been thought of as an oversight body for how Facebook's content policies work.
That's not true at all. It is going to adjudicate, over a very long adjudication timeline, a very small slice of cases. The fact is, policy at Facebook is still going to reside at Facebook, and at the top, as Rebecca said. So I think it's worth looking at with an open mind, but I am skeptical it's going to live up to the hype that surrounds it. And personally, and I'm sure this goes for my colleagues at Free Press too, we will retain our laser focus on the directors, the board, and the policy team at Facebook, to try to get that company to follow the recommendations that all our organizations have been asking it to adopt. Yeah, the only thing I would add to all that is that I think it serves Facebook's interests very well to keep the conversation at the level of content and content governance. Because this is one of those wicked problems that is not fixable: we can keep arguing for decades and centuries and even millennia about where the lines for content are, about how to enforce them, about precedent and context and all of that, and never get anywhere. And Facebook is happy to throw resources at having this conversation, just as long as it keeps everybody else focused on it and not talking about the fundamental problems, and about changing the upstream causes of this infodemic, which is exactly what we're calling for in this report series. Okay, well, my clock says 12:30 on the dot, so we're about to turn into pumpkins here. Any final burning things that either of you needs to say that I failed to ask you about? Just jump in. I'll say once again: thanks for having me on, and I hugely agree that the business model here is a massive problem.
And the more the civil rights community, the human rights community, and people interested in this topic actually investigate the business model, which is the place where we can actually make legal and regulatory interventions, the more we can move the ball forward on getting an internet ecosystem that respects civil and human rights. Well, thank you very much. We're going to give you the last word there, as our guest, as is appropriate. Thanks so much to everybody who joined us today; I see we have well over 100 people at the moment, and there were probably more at the peak. This has been a great conversation. We look forward to having more of these on Twitter and everywhere else online, and maybe someday even in person again. We'll see. Thank you so much, everyone. Thanks, everyone.