Thank you, everyone, for joining us today. I'm going to give it a few more minutes while people pop in, but I figured I'd take the time now to introduce our panelists and again welcome all of you. As it seems every week, it's a crazy tech policy week and day, so I really appreciate all of you taking the time to chat about a really important topic. First, I'm Sam, Sam Sabin. I'm over at Morning Consult reporting on tech policy, and I'll be moderating today's discussion. We also have four brilliant panelists with us today: David Brody, who is Counsel and Senior Fellow for Privacy and Technology at the Lawyers' Committee for Civil Rights Under Law; Yosef Getachew, Director of the Media and Democracy Program at Common Cause; Spandana Singh, policy analyst at New America's Open Technology Institute; and Ian Vandewalker, Senior Counsel at the Brennan Center for Justice. Throughout today's program we're dedicating probably about 15 minutes or so at the end for any questions the audience might have, so do not be shy about hitting the Q&A button if you're on Zoom, or submitting a question elsewhere if you're part of the livestream audience today. The more questions the better. Very excited to dive in, and I think we can go ahead and get started.

First, and I'll open this up to the entire panel just to get things rolling: as we all know, 2016 was very much a time when a heavy spotlight was cast on the sheer amount of misinformation and disinformation seen on social media platforms related to the U.S. elections. With that in mind, I'd love to hear a little about the kinds of tactics we see when it comes to the spread of misinformation and disinformation online, and how that's changed between 2016 and 2020.
Sure, I can kick us off, and thank you for having us. I think one of the big shifts from 2016 to today is that today the calls are coming from inside the house, so to speak. 2016 was primarily about foreign interference. There were some domestic efforts, but it was what Russia did when it was targeting African American voters with voter suppression propaganda, and what other related actors did; it was primarily foreign-driven. What we're seeing today is primarily domestic activity. It's domestic actors, disproportionately President Trump and his supporters and echo chamber, pushing disinformation and misinformation about a variety of topics, and to the extent that foreign actors are engaging in the U.S. election, they're largely just amplifying false information that's already being pushed by domestic actors.

Yeah, I can jump in here. Thanks to New America for hosting this and to Sam for moderating. I want to comment on what David said in a second, but to pull us back a little, I think it's helpful to categorize some of the tactics around online voter suppression and election disinformation. One of the big categories is date and time. It's really important that people know what day the election is and what hours they can vote. Bad actors like to put out content on social media that says Democrats vote on Wednesday, or Republicans vote on Monday, or the week after, things like that, just to sow confusion. With COVID, I think date and time has become an even more important vector for this disinformation. You have bad actors saying things like seniors have special hours to vote because of COVID, or that because of social distancing rules there are differences in how you can vote and where you can vote. The other big category is deceptive practices, which give you false information about how you can vote. Bad actors sometimes say text to vote, or call this number to vote, as opposed to the legitimate ways to cast your ballot.
Because of COVID, again, bad actors are saying things like text to vote because of COVID, or vote from home because of COVID, just using the virus as a way to spread misinformation and disinformation. Another category is voter intimidation. This is something President Trump likes to amplify through his base by encouraging folks to monitor the polls or use intimidation tactics to scare people away from actually going to the polls. We've seen false claims saying things like ICE is out there, so don't vote because you may get deported. And then building off of this are voter suppression narratives. The president has continuously used social media to say that vote by mail will lead to massive voter fraud and election rigging. It's entirely untrue and has been discredited; it's a way we've been voting for years now. But this is one of the ways he likes to sow discord. So those are the big categories. To build off what David was saying, these tactics have only amplified since 2016. To the extent that it was foreign actors then, it's now domestic actors taking the playbook from foreign actors and amplifying it. And the goal is really just to spread as much disinformation as possible, to create confusion, and to make sure anyone who's undecided does not vote. That's really the biggest tactic of voter suppression: to take marginalized communities, like low-income folks, people of color, and immigrant communities, and make sure they don't cast their ballot, as a way to swing the election one way or the other.

Yeah, I think you all summarized it beautifully. Maybe the natural next question, when we're thinking about the kinds of content we're seeing and how it's changing, is that it would be ridiculous if we didn't also talk about the role of social media companies in this, and how they're adapting and learning, or filling in the gaps, or maybe not filling in the gaps in certain ways.
So maybe to turn it back to you, Yosef, and I'd also love to bring Spandi into this conversation, given New America's recent report on this topic, which, side note, I recommend if you all have not checked it out yet; spend a good amount of time sitting with it. It is filled with amazing detail and history about all of the various social media platforms and the work they've done on this topic, which is so hard to keep up with, so it's a beautiful resource. But for either Yosef or Spandi, I'd love to hear a little about how social media companies have adapted their policies and changed in the past four years.

Sure, I'm happy to jump in there. As Sam mentioned, we came out with a report yesterday that looks at how 10 different internet platforms have been addressing election-related misinformation and disinformation. We primarily look at these efforts in four categories. The first is how platforms are connecting users to authoritative information. The second is how they are moderating or curating misleading information. The third is looking at misleading information in advertising. And the fourth is how, and if, they're providing meaningful transparency and accountability around these efforts. I think that compared to the 2016 elections, platforms are definitely taking a more proactive approach when it comes to connecting users to authoritative information. For example, Facebook and Twitter both have dedicated spaces on their websites where users can access voting information, and a lot of companies have partnered with election authorities and other legitimate sources to promote verified information. In terms of content policies and advertising policies, platforms have also definitely fleshed out those approaches a lot more.
With content moderation, we're seeing that companies are also using what I like to call middle-ground moderation techniques, such as down-ranking and labeling, a lot more than they were during 2016. We're also seeing platforms really address the role that algorithms can play in promoting this kind of content, and intervening to try to ensure that conspiracy theories are not being promoted through recommendations. I will say, however, that one of the biggest challenges we're still seeing is that platforms do not provide adequate transparency around exactly what the impact of these policies is. For example, out of the 10 companies we looked at in the report, only one, Twitter, publishes any concrete data on how its content moderation efforts affect election-related content, and Twitter only started reporting that data very recently. We have a similar lack of transparency when it comes to advertising. So although we know platforms are being more proactive and doing more, it's really difficult to understand what the impact of these efforts is.

Yeah, building off of that, first, kudos to OTI for putting together that comprehensive report on platform policies on disinformation; it's super helpful. I think platforms are taking a mix of a proactive and a reactive approach. All the things that Spandi mentioned are things they've been doing for a while. But the challenge is that even when they're taking a proactive approach, there's still election disinformation running rampant on their platforms. Part of the challenge here is that the policies they're putting in place have holes in them that prevent them from actually taking enforcement actions when appropriate. The other challenge is that the policies themselves may simply not be adequate.
For example, Facebook still has a gaping hole in its political ad policies, where it allows politicians to lie, more or less. So it's been a challenge of really communicating with platforms to figure out, okay, are your policies actually strong, and can you consistently enforce them? For example, can a platform consistently put a warning label or a fact-check label on a piece of content that violates its policies, as opposed to doing it in some instances but not others? And where the policies just don't allow the platforms to take action, can they modify their policies in a way that lets them take proactive action?

Great. Something that came to mind while you all were speaking is that the timing of these policies is probably also so important. We're maybe roughly a month away from the election, and we're still getting new, updated policies from Facebook and from Google with regard to their political ads, how they'll restrict those, or what types of content they'll be cracking down on. So I'll open it up to the entire panel here to hear a little more about how timing plays into this, and whether there are any concerns that Facebook, or any of the other social media companies, might be doing something too late or too soon.

Yeah, I'll jump in just to say that it is kind of mind-boggling that we're four years away from 2016 and we're still getting updates from massive corporations that are in some way still responding to those problems.
In other ways there are new problems, new challenges, so they are of course in some sense trying to be adaptive to address those new things. But also, as you said, they're being reactive: they see something in the news, they think it's a bad PR issue for them, and so they come out with something intended to address it. That both makes it hard to see how the problem is being addressed, and it has negative consequences for legitimate organizing on these platforms. There are groups out there who are trying to play by the rules, who are not trying to engage in disinformation, but who are trying to engage in political organizing and get-out-the-vote efforts, and they get swept up in these rules and in the changes to these rules. So it's important to recognize in this conversation that, yes, there are tools to try to tamp down the disinformers, but other people who are trying to play by those rules may be negatively affected by some of the rules and by enforcement, which is not always across the board.

Yeah, the other thing I'd say is that waiting until the last minute to update their policies is a conscious choice. All of our organizations, and plenty of other organizations and people who are very interested in protecting elections, have been raising these alarms literally since 2016, or in some cases since before 2016.
We've talked to the companies, we've proposed ideas, there have been countless meetings, and the companies have deliberately chosen to wait to come up with a plan. They've chosen to call audibles at the last minute, and those choices have consequences, like Ian was saying. The other consequence is that the companies' own staff and contractors don't know how to enforce their own policies, because the policies changed at the last minute. Twitter just changed its civic integrity policy a few weeks ago, and it made really positive changes; it's great that they took those steps. But they're not enforcing their new rules. So we're in a situation where they're saying, we're not going to allow content that makes baseless allegations of rigged elections or voter fraud, and meanwhile the president and others tweet such things regularly, and there's no significant consequence, because I doubt their own enforcement staff has been properly trained on rules that came into effect just a week or two ago.
Similarly, the platforms are struggling to properly label content. Facebook, Twitter, and other platforms have made a big deal about how they're putting labels on political posts saying things like "voting by mail is safe and secure, click here for more information." But there are two fundamental flaws in how they're doing it. First, in the case of Facebook, they're putting that same label on every piece of political content, regardless of whether it's true or false, regardless of whether it violates Facebook's rules or doesn't. What that means is that the label effectively becomes just another piece of noise and clutter on the site that users will ignore. It doesn't signal anything significant to the user if it appears everywhere. Think about all the times you go to a website and some random thing pops up and you just click for it to go away because you don't want to see it and it doesn't matter; this is the same kind of filler. What they need to do instead is only put labels on content that violates the rules, and the label needs to be somewhat particularized to the rule that's being violated. It needs to say very clearly: this post is violating this rule or engaging in this type of activity, but we're leaving it up because we think it's really important for people to see this information, and they need to be properly informed, so we're labeling it. Furthermore, the content should be hidden behind an interstitial that carries that label, so that the user knows, before they see the content, that there's a problem here. The sites regularly do this for all kinds of other graphic content involving violence or other sensitive types of content. They're perfectly capable of doing it, and they've made a choice not to.
I would also add that when we talk about political ads, as David mentioned, the rules are constantly changing, but the definitions around political ads are also pretty fluid. Even for companies that have banned political ads outright, there are still ads that slip through the cracks, and companies aren't really able to, or just don't, share what does slip through, how often it slips through, or what the review process is like. Right now one of the only companies that does share that, in the context of political ads, is Reddit, but Reddit also accepts a much narrower scope of political ads: only U.S.-based ads, and only at the federal level. Without that kind of information, it's also difficult to know what the limitations of these approaches are.

Totally. I want to circle back to the ways in which content is moderated, especially labels versus complete removal, but before we do that, diving into the actual content itself and familiarizing people with it is probably a good stop in the flow. So I'm curious to hear a bit more about the challenges facing social media companies when it comes to misinformation that's shared through organic content, just the stuff your typical user is posting, with no paid promotion. What set of challenges does having so much organic content as the source of this pose for a social media company? Of course it varies by platform.

Yeah, one thing to note is that it's very hard; there's a lot of content, and that means you can't read everything. It's been true for many years that you couldn't read everything on Facebook, billions and billions of posts. And now that nobody's in the office, that's another reason to have fewer human beings reviewing
things. So Facebook makes a big deal about having 20,000 human reviewers or something, but in the context of billions of pieces of content, that's literally nothing; it's a drop in the bucket. So they're relying a lot on algorithms, and algorithms aren't that smart in this context. They tend to look for keywords. They don't know the difference between somebody saying "vote on Wednesday" because they're trying to trick voters, and somebody else saying "people are saying vote on Wednesday because they're trying to trick voters." One is a warning about voter suppression; the other is actual voter suppression; an algorithm thinks both of those are the same thing. So they get caught up in the same review process, whatever that is, whether that's a takedown or otherwise. Then, in theory, there's supposed to be a human being you can appeal to if you think you had a mistaken takedown or other negative consequence on your account, but people report that those processes aren't very responsive either. So in some sense, and it's not just Facebook, but Facebook is almost too large to police, at least at the level they're trying to. Really, if they actually wanted to do human review, it would take hundreds of thousands of employees. And then part of it is that groups like ours can flag things for them, and they're not always responsive to that either, even when a human being is telling them there's a problem. So it's definitely a problem of scale, and that leads to reliance on algorithms, which have systematic ways that they fail.

Yeah, and if I could just add one thing there to illustrate the situation a
little bit. Like Ian was saying, Facebook probably has one of the largest enforcement teams in the industry; it's 20 to 30 thousand people, something like that. Okay, but Facebook has a couple billion users. The last time I ran the numbers, what this shakes out to is about one enforcement person per 70,000 users. Imagine a small city of 70,000 people with one police officer for all of them. And that's the best-equipped, best-resourced company in the industry, so that tells you a little about the scale of the problem. Facebook is very eager to say, oh, we have 30,000 people working on this, and it's like, okay, that's a drop in the bucket.

Totally, yeah.

Just one quick point I want to make: I think the challenge with organic content is the way it can get amplified on the various platforms and seen quickly. We'll get more into this later, I believe, but the difference between an ad and organic content is that all of your followers or friends can see organic content and share it quickly. So if you're someone like Trump, who has tens of millions of followers on Twitter, you can put anything on your personal account or your campaign's account and it can get traction through retweets, likes, and other ways of sharing. That's something to moderate when you have potentially millions of people seeing a piece of disinformation quickly, reacting to it, and sharing it, as opposed to other types of disinformation spread through groups or ads, which pose other challenges.

I would also add that when we think about how platforms are addressing organic content, now that they're using more than just removal, it's important to think about what approaches are sufficient. In our report, one of the things we recommend is that if a user engages with misleading information around the election,
then platforms should notify them and direct them to authoritative information, so that they're closing the loop. They're not just removing the content; they're helping the user understand that the content was wrong and access more verified information. A lot of platforms were doing that in the context of COVID-19 misinformation, so I definitely think it is something they can clearly do; it's just a matter of whether or not they want to.

Yeah, and to build on what Spandi was saying, it's not just important for correcting the user in that moment. If you tell a user, oh, you were looking at news from xyzsite.com and that was misinformation, that user is then going to know: okay, maybe I can't trust this site, maybe I should look elsewhere for my information going forward. So there's a prospective inoculation that happens there that's really important.

Totally. And also, when we're talking about organic content, something I'm thinking about often is this push we're seeing for social media companies to be more interconnected. I keep picking on Facebook, and they had many a policy change in the last 24 hours, so maybe poor timing in that regard, but Facebook is making this huge push to promote groups, private messages, and things of that nature. I'm curious to hear a little about how that can also make it more difficult, or not, to moderate organic content that contains misinformation and disinformation. A lot of the QAnon content is spread through groups, and I'm just so curious about how that plays into this as well.
Yeah, so Facebook likes to say it's connecting people, and one of its responses to some of the issues we're talking about is that it's promoting more real content from real people that you know, right, your mom or your cousin or whoever. But groups sort of belie that, in that groups are typically formed around interests, and they blossom, they bloom in size very quickly. As you said, these pieces of disinformation are frequently shared there, as well as what you might call polarizing or extremist content, content that's not exactly misleading or false, but is about "our side is good, their side is bad." And one of the services Facebook provides is group recommendations. If you're in a group dedicated to a conservative viewpoint, Facebook will recommend other groups that are even farther to the right, and then groups farther right than those, and that pushes people through the group ecosystem toward the most extremist ones. That again is where a lot of the really bad content, the COVID misinformation, the voting misinformation, the QAnon conspiracies that are literally causing death threats and violence, is happening. And Facebook, I mean, they are trying to shut them down to some degree or another, but they're not acknowledging that their algorithms actively push people into those groups. There are similar problems on other platforms; people have talked about the YouTube recommendation algorithms, for example.

Yeah, the other very big problem we've been seeing with Facebook groups in particular is the way they're being used to organize militia activities in the real world. We saw in Kenosha that the Kenosha Guard Facebook group was used to mobilize armed militiamen to counter
racial justice demonstrators, and that resulted in three people being shot, two of them killed. Before the events in Kenosha happened, that Facebook group was reported to Facebook over 450 times for violating Facebook's rules. There were members of the group explicitly calling for violence, issuing calls to arms, and trying to organize an armed response, and Facebook took no action. Similarly, we are seeing in multiple parts of the country how Facebook groups are being used to organize folks who want to interfere with the right to vote, who want to unlawfully intimidate voters at the polls. We've seen militias setting up roadblock checkpoints and stopping drivers of color. Social media has not just enabled these individuals to find each other; it finds them and brings them together through the recommendation algorithms that Ian was talking about.

Yeah, just a few quick points. Everyone's talked about all the challenges with groups. I think one of the biggest challenges from a social media monitoring perspective is that the groups are closed off, so it could be potentially thousands of people sharing disinformation without any check from an outside voice asking, is this wrong, is this inaccurate? There isn't a built-in tool for a civil society group or a nonpartisan group to actually help spread accurate information. Another challenge, as Ian was saying with the recommended groups Facebook utilizes, is that it pushes you further and further toward extremist content. And there are situations where the group labels or titles aren't clear, so you could join something that seemed innocuous and then all of a sudden you're involved in something that's sharing disinformation and potentially suppressing your vote or distorting how you think. So I think platforms
need to figure out ways to create more transparency around how groups are actually operating, and to think about what the mitigating measures are when a group reaches a certain size, potentially thousands of people, where any disinformation could create a lot of harmful situations, whether that's making the group public or creating some sort of safeguards so disinformation isn't spread quickly. There are just a lot of dangers in how big a group can be and in the scale of disinformation in groups.

I would actually like to give an example that's not Facebook-related, since, Sam, you mentioned this interplay between the need for privacy and security and the need to address this kind of misleading and harmful content. I think a good example is how WhatsApp approaches this. WhatsApp offers end-to-end encrypted messaging services, which are critical for privacy and security, but that also means the company can't view or review the content users are sharing. WhatsApp has, however, introduced a number of measures; again, I will caveat, as always, that there is very little transparency, so we operate off the few data points we get to try to understand whether these are effective. WhatsApp has, for example, introduced limits on the number of times a message can be forwarded. If a message has already been forwarded five times, the receiving user can only share it with other chats one at a time, and the message will carry a label letting them know it is a forwarded message, so they know the person sending it is not the original source. WhatsApp also recently introduced a feature that allows users to fact-check messages they receive through their browsers, without compromising the encrypted nature of their messages. And then they've also been sort
of using different tactics to try to identify and remove accounts that engage in automated or spam-like behavior, which is often associated with misinformation. According to WhatsApp, these approaches have helped reduce the virality of misinformation and disinformation on the service. Of course, with greater transparency, organizations like ours could help corroborate that. But I do think it's important to recognize that there is a balance to strike between privacy and security and the need to address these kinds of harmful content, and it doesn't mean that just because you have privacy or security you can't address that kind of content.

Totally. And just a reminder for the audience: we will be taking questions from you all, so if anything is sparking your curiosity, feel free to send them our way, whether through the Q&A function in Zoom or other means if you're livestreaming. I want to pivot a bit from organic content to talk about paid political advertisements. As we all know, Facebook, Google, and others have been making changes to their political ad policies with regard to the election. We're going to see bans on new ads on Facebook the week of the election; Google, not a complete ban, but it is restricting political ads in the week after the election. We're also seeing several companies restrict political advertisements that claim who won the election, since results might take a bit longer with the uptick in mail-in ballots. So I'm curious to hear what you all make of these changes to political advertising policies, and whether or not they're enough to combat the problem.

Yeah, I'd say sure, they're positive developments, but they're very small positive developments given the scope and scale of the problem. You know,
to give another example: Google has one of the most problematic ad transparency libraries for its political ads. All the major platforms have these libraries of political ads, where you can go and see what different political actors are running on the site even if you're not a target of the ad. In Google's case, however, they only include ads that explicitly name a candidate, an incumbent, or a political party, which means the vast majority of political and election ads, the ones that don't explicitly name one of those things, don't go into the library, and we don't even know they exist. We've had some recent circumstances where political ads run by super PACs were spreading misinformation about voting by mail, but because the ads don't mention a party or a candidate, they don't go into the library, and the only way you learn about them is if someone happens to see the ad and, fortunately, reports it to someone else who can report it up. If no one sees it, how are we supposed to have any meaningful accounting of what's happening through Google's platforms? And to be clear, Google's platforms also include YouTube, which is a huge vector for disinformation and misinformation.

I would add that the data a lot of these ad libraries share is also pretty limited. It's helpful, it's a good first step, but if you're really trying to understand who saw these ads and what impact they had, most of these ad libraries lack the granular engagement and delivery data that could help us understand that kind of outcome.

Totally. I guess, as has been mentioned before, the president has definitely been someone who has tested many of the content moderation guidelines on
several platforms when it comes to spreading or sharing misinformation or misleading information, even about the election. With that in mind, I'm curious how politicians sharing misinformation about the election, particularly about mail-in voting or voter fraud, challenges or throws a wrench into social media companies' plans to take on this issue. Each company has done something different, right? And I'm curious whether you all think it's the role of a social media company to intervene in that instance.

I think that companies definitely have a responsibility to ensure that users have access to information, especially from prominent figures like politicians, but they also have an equal responsibility to ensure that the content on their services is not going to result in imminent offline harm. So when a politician or prominent figure is posting something misleading or false related to the elections, users have a right to know that a candidate or government official is essentially lying to the public. If companies have a clear public interest exception policy, that can help address this: if a politician posts content that is false, and it's fact-checked and debunked, the company can leave the content up but label it. That label should explain to users that the post has been debunked; it should explain that there is a public interest in keeping the post up, because users should know that that individual is spreading misleading or false information; and, importantly, it should also connect users to verified information on whatever the subject of the post is. But I will caveat this by saying that if a politician is posting misleading or false content that poses imminent harm, like calling for violence, then the company should remove it like it would any other user's post. I don't think they should get an exception in that kind of situation.

I think the companies need to hold politicians to an even higher standard than they hold regular users, for a variety of reasons. First off, these are our representatives, and they're supposed to be embodying the values that we share as a society, and that includes adhering to the rule of law and to norms about appropriate ways of conducting an election and ensuring that everyone has safe, secure, and equal access to the right to vote. But it's also simple harm prevention. Like Spandi was saying, these are users with gigantic megaphones, and if they say something that incites voter intimidation or spreads misinformation about how or when or where to vote, it's a lot different than a rank-and-file user saying the same thing. These are people who speak with voices of authority and have huge followings; the media is going to repeat what they say, their supporters are going to repeat what they say, and so the magnitude of the risk of harm is quite high. My feeling is that the platforms should have an even stricter standard of review for politicians, elected officials, and candidates for office, where they're held to stricter rules than regular users. When they share misinformation or disinformation or voter intimidation or calls to arms or incitement of violence, anything where we know real-world harms result, the content needs to come down. It's the borderline content that should get labeled and flagged and downranked; the actual violating content needs to come down.

Totally. I want to ask you all one last question before I turn it over to the questions we're getting from the audience, so just a reminder to those watching: feel free to pop in your questions. But we brought up the impact
that the pandemic has had on content moderation earlier: it's not really safe for people to be in an office unless many safety procedures are followed, and so a lot of these platforms have turned more to algorithms to moderate content. Of course, algorithms have many flaws. Humans aren't perfect either, but algorithms maybe aren't smart in this area, or, as we all know, carry many biases into regulating content. So I'd love to hear a little more, from anyone who wants to hop in, about how relying more on algorithms during this time, during a heated election, might be adding another challenge to this issue.

I would say that when we talk about the fact that platforms are increasing their reliance on automated tools, we shouldn't think of it as a binary, as if they are only using automated tools or not, and I think it really varies from platform to platform whether they're using those tools more for detection or for moderation. But as Ian was mentioning earlier, algorithmic tools definitely are flawed. They're unable to effectively detect and moderate content that doesn't have clear definitions, like disinformation, hate speech, and terrorist propaganda, and they can't make subjective, contextual decisions. So there are clear limitations. In the context of the election, though, honestly, especially from doing the research for our report, it's really unclear to what extent companies are using automated tools for moderating election content. Facebook, for example, has set up an elections operation center that they said would have human moderators reviewing content 72 hours before the election; because there's a lot of early voting, they've since expanded that period. There's a lot of focus, I think, on emphasizing, like David was saying, "we're hiring more people, we have a lot of people working on this," but there's not a similar level of transparency around how those people interface with the algorithmic systems, or how those systems are being used at all.

A couple of things to add on to that. I totally agree algorithms are not inherently good or bad; it's all about how they're used. For example, we've seen examples of different platforms setting the threshold differently for when an algorithm will flag something for further review or take it down. I believe there's some recent reporting that on YouTube, Google chose to be a little more aggressive about what its algorithms would automatically take down, whereas Facebook and Twitter were less aggressive. That's a calibration issue. But one positive thing algorithms can do: there's a proposal from the Center for American Progress to set up circuit breakers, where if certain content is going viral very quickly, the algorithm can spot that, intervene, and act as a circuit breaker to slow down or stop the virality while the human moderators have a chance to check it. That's an important thing an algorithm can do. What algorithms can't do, like Spandi was saying, is evaluate context in a nuanced and culturally competent way. So when we're talking about the accounts of major politicians and other major figures on these platforms, the people whose follower counts are in the hundreds of thousands to millions, those accounts should be in a separate bucket, with dedicated human review teams tasked with keeping an eye on those high-impact accounts in real time or near real time, to check for problems that could occur around the election.
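[Editor's note: the circuit-breaker mechanism described above can be sketched as a simple sliding-window rate check. This is an illustrative sketch of the general idea only, not the Center for American Progress's actual proposal or any platform's implementation; the class name, thresholds, and window length are all invented for the example.]

```python
from collections import defaultdict, deque
import time

class ViralityCircuitBreaker:
    """Sketch of a virality circuit breaker: if a post's share rate
    spikes past a threshold, pause its distribution until a human
    moderator reviews it. All numbers here are made-up defaults."""

    def __init__(self, max_shares=1000, window_seconds=600):
        self.max_shares = max_shares      # shares allowed per window
        self.window = window_seconds      # sliding window length (sec)
        self.shares = defaultdict(deque)  # post_id -> share timestamps
        self.held = set()                 # posts paused for human review

    def record_share(self, post_id, now=None):
        """Record one share; return False if distribution is paused."""
        now = time.time() if now is None else now
        if post_id in self.held:
            return False
        q = self.shares[post_id]
        q.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while q and q[0] < now - self.window:
            q.popleft()
        if len(q) > self.max_shares:
            self.held.add(post_id)        # trip the breaker
            return False
        return True

    def release(self, post_id):
        """A human moderator cleared the post; resume distribution."""
        self.held.discard(post_id)
        self.shares[post_id].clear()
```

With a small threshold (say, three shares per minute), a fourth share in quick succession trips the breaker and every subsequent share is suppressed until `release` is called; this matches the "slow it down while humans check it" behavior described on the panel.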
Gotcha. So I'm going to turn it over to our audience questions, starting with this one because I think it's a good transition. The first question from the audience is: should any political advertising on electronic or social media be forbidden outright? The more little rules you add, the more loopholes you create.

I'll just say I think a lot of groups in this space struggle with this idea. Banning political ads has the potential to throw the baby out with the bathwater. Social media ads are an extremely powerful tool, and they're extremely cheap, and they're available to candidates who don't have a lot of money and to small organizers who don't have a lot of resources and want to get their message out, and the idea of sweeping all that away causes a lot of concerns. It's definitely the case that creating rules creates loopholes; bad actors will try to find ways around the rules, will try to find the loopholes, will try to find holes in enforcement. But I tend to think that's just the situation we're in. We don't ban entire other media from carrying political messages; we try to make sure those messages attain some standard, and, imperfect as that is, I think it's better than shutting everything down.
Yeah, to build off what Ian was saying, I think there are two extremes you can look at. One is to leave political ads unchecked, which is more or less what Facebook is doing; the other, which is what Twitter is doing, is just not allowing political ads to run in pretty much all cases. My personal belief is that I'd rather have no political ads run than political ads with blatant lies getting targeted at marginalized communities around voting. But that being said, I think what we all want is an enforceable political ad system where there are rules of the road, and it's not just that anyone can say whatever they want without any type of moderation, or at least some limitations on how ads can be run, whether it's targeting or other ways that ads get out to certain audiences. There are just a lot of harmful ways that ads are being used. So if it's a policy approach that promotes labels, warnings, or fact-checks on ads, or limits how they're targeted or how they're seen, those are the types of things we should be striving for, as opposed to one extreme or the other, letting ads run unchecked or not at all.

I would also add, and I think both Joseph and Ian made really great points, that I'm not sure how I would approach a ban on political ads, but when we think about how companies are working through their policies, one of the biggest challenges I've noticed is that even to find a company's approach to political ads, or even to organic content, you have to dig through their websites to figure out, oh, you changed this three days ago, okay, but what about the change three months ago? One example I can give, for both the ads and organic context, is that Facebook has community standards, which are supposed to outline what's permissible, and it has its political ads policies, which are on the business side of things, but when they're talking about elections, all of these updates are in blog posts, they're in posts by Mark Zuckerberg on his personal Facebook page, they're in a million different places. So before we can even think about banning versus not banning, the question is: what is the actual policy on any given day? Because frankly, if I looked yesterday, what I thought three weeks ago would not be true, and there'd probably be nineteen more links for me to dig through to figure out what their approach is.

The one thing I'll add real quickly is, like Joseph was saying, Twitter doesn't allow political ads, and clearly the president has no problem spreading misinformation and disinformation on Twitter. The core of the problem is really the organic, unpaid content.

Gotcha. I think we'll have time for two more questions, that's my goal. The first one being: this problem is so massive, I'm curious what potential role there could be for government, whether it's Congress or federal agencies, in this space. We've seen rumblings of Section 230 bills tied to election-related content, and grumblings of bias in various parts of our government. So I would love to get your thoughts there.

I would say that, for one thing, voter intimidation is a federal crime and is a crime in many states, but the outlines of those laws are a little unclear, and the ways they're enforced are not entirely consistent. The easiest case is the deceptive practices, the "vote on Wednesday" messaging, and the voter intimidation: clarify in federal law that those are illegal and that the DOJ will enforce those laws. There's a bill in Congress to do just that, and I think that's the obvious, easy first step: to clarify that these practices, disinformation about voting specifically, attempts to trick people out of voting by saying vote by text or other things, are
federal crimes, and give the DOJ a clear mandate to enforce them. So that's enforcement. The other piece of the puzzle, as many have mentioned, is providing correct information to people. The government can do more, and this is maybe more of a state and local issue, since states and localities actually run elections, to push out correct information, to work with the platforms to push out correct information, and to make sure everybody knows who to ask if they don't know how to vote. Congress can certainly help with that, certainly provide financial and other resources, and there is some of that happening, including DHS funding for election security and cybersecurity, but I think there could be more.

I think a lot of what we're seeing in the election context on these platforms is symptomatic of a deeper problem, which is the fundamental business models of these platforms, in two respects. The first is how the companies collect and analyze massive amounts of personal information, and the second is how they run their engagement algorithms to optimize for the most outrageous types of content, which amplifies bad actors more than good actors. So if we want to get to systemic reforms and fixes, what we really need is federal privacy legislation that regulates what kinds of personal information a company can collect and how it can use it; things companies are forbidden from doing with it, like discriminating on the basis of race or sex; real, meaningful transparency requirements, not just handouts; and robust enforcement mechanisms at all levels of government, so that these companies can be held to account. Because, let me tell you, you're not going to be micro-targeting specific tiny communities of Black voters in certain cities, as Channel 4 News just reported the Trump campaign did in 2016, without a data profile that has tens of thousands of data points on a person. If you change the fundamental business model so that these companies aren't collecting and profiling and monetizing vast amounts of personal information, then you will change the way the companies operate. You will change their incentives, so that they're not incentivized to create an environment where the negative externalities of their business, the pollution of their business, are voter suppression and voter intimidation and hate and violence. Change the incentives, change the business models through systemic regulation, and you will address these other problems.

Just real quick, to build off of that: what Ian and David said is spot on. I think another issue that's tangentially related to voter suppression and content moderation is competition, and competition policy, which is currently nonexistent. A lot of the bad actors are the biggest platforms, simply because of the size and scale of their platforms and how quickly disinformation is able to spread on them, and when we look at the various products and features these platforms offer, it can be difficult to moderate at scale. We keep picking on Facebook, but look at Facebook: it has a public feed feature where folks can post organic content; it has a political ads feature where folks can run political ads micro-targeted to various communities; and it has a groups feature where you can be involved in closed groups of thousands of people spreading disinformation. So you have three different components of a platform where this information can spread from one to the other, quickly at times, and no ability to enforce at scale. So you have to look at the policies that can actually promote competition, allow new entrants into this space, and look at ways individuals can say, okay, I
don't want to be on Facebook anymore, or Twitter or whatever, and go to another platform with their data in hand. So there are various ways of looking at this: privacy is one, competition is one, and transparency through election regulations is another.

Totally. I know we're at time here, but I wanted to end with one more audience question, which is probably a good one to end with, because it's easy to listen to this conversation and feel really overwhelmed by all the ways in which platforms are being manipulated or used to spread false information about the election. So the question is: what brief advice would you give a friend who is nervous about falling for election misinformation but has a hard time identifying it?

I can jump in real quick. Anytime you see something on social media and your immediate response is, oh my god, I have to share this right away, that should be your signal to take a break and pause, because the content that is really designed to deceive is designed to trigger that visceral, emotional response. So when you see something and think, oh my god, that's outrageous, I have to share this: take a pause, think about it for a few minutes, maybe check and see if it's really true. If it is true and important, it will still be true and important in five minutes.

Yeah, absolutely. That's so important for anything that seems really shocking, really outrageous, or too good to be true given your own political biases; the things people share are the things where it's like, yeah, my side is really right and the other side is really wrong. Some of the things experts recommend to stop yourself from sharing misinformation are to check the source: who's saying this, are they a legitimate outlet, do they post propaganda all the time, is it a satire site that's really just jokes, like The Onion? And one of the key skills is lateral reading: you look at the thing, then you put it down and go look for the information somewhere else. Google it; half the time, if it's a hoax, there are already ten news articles out there saying it's a hoax, and you can find that without even getting to the end of the article you saw in the first place, or past the headline or the tweet if you're not even engaging with it. So look for other sources, look for legitimate, trusted sources, and in the case of voting-specific information, ultimately that means the people who run the election in your jurisdiction, your county election officials. They have a phone number; you can call them and ask how to vote. Check their website, and make sure it's their website and not an imposter.

Awesome, great. Well, I think that should pretty much cover it. Not everything, but with the time given, we covered a lot of ground here. I want to thank our panelists for taking the time to inform everyone, and for a lovely conversation about such an important topic. Thank you to New America for organizing this, and thank you to our audience for taking time out of your day to hop onto another video call six months into the pandemic; I really appreciate all of you. And of course, this is a New America event, so if you liked it, feel free to go to New America's website and check out their other events. Thank you, everyone.