Good afternoon. Welcome to The Future of Democracy, a show about the trends, ideas and disruptions changing the face of our democracy. I'm your host, Sam Gill. On this show, what we try to do is take a critical issue of debate or discussion happening in our democracy and open it up, try to get into the factors driving that issue that may not be readily apparent in the surface-level public debate. And one of the thorniest issues we're dealing with right now is the role of social media in our democracy. From COVID to the election, we're in a debate of unprecedented intensity about how these platforms, which connect billions of people, should be managed. What content and voices should they allow, and who, if anyone, should they block? At the white-hot center of this spotlight has been Twitter. In many ways, Twitter has adopted an aggressive stance. They have removed or corrected content from world leaders and pursued a hard line on authoritative health and election information. Recently, they even rolled out a "read before you share" feature to reduce the spread of potentially harmful or misleading content. Joining us today is one of the company's leads on this issue, Nick Pickles. Nick is the senior director for public policy development at Twitter. So please welcome to the show, Nick Pickles.

Hey, so how are you doing? Good. How are you? I think you're still missing your video. Apparently I can't share it because the host disabled it. That's a taste of your own medicine, I guess. We will work on that, but why don't we dive in while we get that fixed. There we go. There we go. You're back. See. So, you know, the place I love to start is really in this moment, which is that it certainly seems to me there has been a real acceleration in the pace of policy change on Twitter in the COVID era and during this election, particularly around forms of misinformation.
And so, tell us a little bit about some of the changes that you've made, why you made them, and what effects you've seen.

Well, thanks for the invitation to join you, and I can't think of a more critical time to be discussing this topic. You know, Jack Dorsey, our CEO, started talking about this concept of health, and rather than looking at individual metrics and individual problems in isolation, we took a much more holistic approach to how we tackle some of those issues. And then, faced with the COVID pandemic, the literal and metaphorical priority to protect health became something that really crystallized a lot of our work in this area. So, as you referenced, we'd already taken big decisions, things like banning political advertisements and disclosing information operations that we removed. But with COVID, we really had to reimagine, I think, some of the boundaries that had perhaps colored our thinking on these issues. And you see us now taking action to make sure that harmful misinformation about the COVID virus itself is addressed, providing extra context to our users, which is, I think, an increasingly critical part of how we make sure that tech companies balance the need to help their users navigate this information ecosystem themselves, while at the same time taking action to protect them from the most harmful content. So really, for us, it's been a culmination of several years' work combining people and technology. And I think there is a wide range of challenges still out there. COVID is changing, the conversation is changing, the challenges are changing. And so the big challenge for me and my colleagues is that we can't stop thinking about what comes next, and we have to keep thinking: how do we keep evolving to keep pace with everything that's happening around the world?

Is it working?
We're making progress. I'm never going to say the job's done, because where we are today, and where we're going to be in a month or in six months, depends on a whole range of factors, some of which are outside of our control. But certainly, taking a decision like banning political advertisements was not an easy one. It's not a simple one. We had to make sure that we protect the ability for people to run messages from, you know, nonpartisan official offices about voter registration, about telling people where their polling places are, and balance the ability to advocate for something like climate change. So really striking those nuanced rules around cause-based advertising, with careful regulation of who can place those advertisements, but then taking a much bigger step, not just as Twitter, to recognize that this was content that we didn't want on our service. And this is something that I'm really proud of in the company's work in this area: to think bigger than just the day-to-day problem. For us, micro-targeting, the way that AI and ML combine with ever-decreasing segmentation, is something that we felt was bigger than just our company, and so we made the decision in a societal framing, not just in a Twitter framing. That's how we're looking at it; we're making good progress. Bad actors keep changing their behavior, which is always something that we have to be aware of. And I think as the weeks and days proceed, both the US election, but also the Brazilian elections, the Indian elections that are happening this year — every election is different and we learn from them all.

One of the things you mentioned was rethinking the boundaries, and one of the boundaries that you all have been willing to cross that some of the other platforms haven't is that you have taken down or corrected content by elected world leaders, including in the United States.
And I'd love to hear more about how you came to that decision.

Sure. Well, there are several factors at play in a decision like this. The first thing, and this is something we said two years ago, is that we recognize that Twitter is a place for geopolitics. In many ways, what used to happen in a smoke-filled room now happens with a mobile phone and tweets, which is an incredible responsibility for us, to protect that conversation. But we also have to recognize that there's a very special nuance to communication between world leaders, the geopolitical saber-rattling that we do see. And that's different from the conversation we see elsewhere. So we're crafting rules which protect our users against actions that may cause harm, while protecting that transparent public geopolitical conversation. So I think we struck a balance where we felt — and this was informed in part by users and people on Twitter telling us — we don't want you to make decisions for us where the harm isn't pressing. We want you to give us more context, help us understand. And so that's why in some cases you might see a warning message over a tweet that says this tweet broke our rules. We've applied that to world leaders in a number of countries now. I think, actually, Brazil and Venezuela — we took action in those countries before we did in the United States — giving people the context that this breaks our rules, but we know it's a matter of public debate and public record, so we want to preserve its availability while limiting engagement: you can't retweet that tweet, you can't like that tweet. And then also, in situations where the harm is lower, putting in a "find out more" link to learn, for example, about COVID and its transmission. In some countries around the world, we saw public figures talking about 5G, for example.
So we had a dedicated information resource that we directed people to, to say: find out more about COVID and 5G, curating authoritative sources — tweets from researchers, government agencies, experts in the field — and getting our users that context through those information buttons.

It seems to me there are a couple of views about this. One would say this is a kind of global public square, and so you need to balance the rights of different speakers. But I think there's another version of the critique that you face, right, which is that Twitter is a new affordance. It's a new tool, and the tool can be used by autocrats, and the tool can be used by authoritative, legitimate sources of information and guidance. And it seems to me the argument that some people make, that you should be more aggressive in responding to world leaders in particular, is: don't allow yourself to be the tool of their autocracy, the tool of their misinformation. Don't become a megaphone, a weapon they didn't even have access to before. How do you respond to that critique?

Well, I think something we see every day around the world is that Twitter is one of the most powerful tools for people who are oppressed, for people who live in societies that in some cases don't even allow access to Twitter. Twitter is a tool for those people to challenge, to bring things to attention. At the beginning of the COVID crisis, video was being smuggled out of China and broadcast to the world through Twitter. So I think the nature of Twitter as a tool is something that is for politicians, but it's also for activists, for journalists, for citizens. I think, looking globally, the bigger challenge I see is that a lot of the policy conversations we have treat Twitter, or social media, as a silo, and don't look at how the wider information ecosystem plays in.
And so you do see differences between, for example, the role of state media and the role of corporate media that perhaps might have a franchise owner. You see the role of cultural institutions, and the whole web of the information ecosystem, depending on which lens you look at it through, has a very different problem. And so one of the things that we talk a lot about is that we need to look at policy solutions that protect the whole media ecosystem. So we introduced a policy: we do not allow you to post hacked materials to Twitter. But what do we do when those hacked materials are on the front page of an international newspaper? And so I think that's where this tension arises, between people looking to social media to solve problems that actually may exist far beyond our control and our boundaries.

I want to push on this a little bit, though, because it certainly seems to me to be at least partially an evolution in our whole thinking about social media. I mean, I had a chance a few weeks ago to talk to a former CEO of Twitter, Dick Costolo, about a quote that I think is attributed to him but actually belongs to your general counsel, which is: Twitter is the free speech wing of the free speech party. And that sentiment was espoused at the same time that your predecessors at Twitter were getting calls from the State Department saying, if you take the service offline to do maintenance, you will have a tangible impact on the Green Revolution happening in Iran. So there certainly was a real moment in which, I think, your point about the power of the tool to liberate people really validated this idea that we had this new space for expression and activism and ideas.
It seems to me, though, that the question we started with — COVID and the election — is raising real questions about how, in the real world, and to what extent, the tool is effective for that kind of discourse, and to what extent the tool is really enabling harmful content. Has your view changed about the real-life risks versus the theoretical rewards?

I mean, I think that quote has been attributed to so many people at this point. In the spirit of free speech, anyone is free to claim it, free of intellectual property rights, for sure. No, I think, certainly, looking back at the Arab Spring, one of the reflections that some people have shared is that we focused too much on the technology and not enough on the people — the activists who were putting their lives in danger every day. They were doing that work; the technology enabled it, the technology shared their message, but without those activists on the ground, the technology wouldn't have had the impact it had. And so I do think that sometimes it's easy to give the credit, or in some cases the blame, to the technology company rather than looking at the social conditions that exist. But I think your point is absolutely right: the way that we understand how public conversation happens now has evolved, in part because the companies have matured, and in part because of the research. You know, one of the things about Twitter from its beginning — the phrase was "the tweets must flow." Well, one of the things about the tweets must flow is that they've always flowed through an open API to researchers and to academics. And so we see studies around the world every day where people are looking at how Twitter is being used, whether it's in the context of religious or social issues.
And actually, particularly for COVID, we opened up a dedicated research endpoint, with no cost attached, for researchers wanting to specifically study tweets about COVID. So I think one of the challenges is this: there's the public open internet — Twitter being public and open by definition — and then there are walled gardens that exist, whether because of the actions of a company or the actions of a government. And so understanding what's happening between those two spaces is something I think is becoming increasingly hard. But our view is that, particularly with COVID, the risk of harm significantly increases when you have information telling you that masks, for example, cause health side effects, or that social distancing isn't required, when it's broadly recognized by the scientific community that both are essential. And so taking action there, whether it's to remove content or to provide context, is reflective of the world we're now living in. And so, you know, that's why we focused our policy on three key areas: COVID, civic integrity to protect elections, and also synthetic and manipulated media. Those three policy areas, having the greatest potential for harm, are where we focused our efforts. But I think the shift that has happened in thinking is also that the world isn't just about whether you take content down or leave content up. For me, that's one of the biggest shifts that's happened in the past year or so: now there is a range of interventions. The work we've done on QAnon, for example, to deamplify — we allow people to speak, but we're not going to allow the amplification through the product — versus, in some situations, adding a label to add context, while always maintaining that for issues like the promotion of terrorism, we take a zero-tolerance approach and remove that content. That, I think, is the biggest shift: the world of 10 years ago was just leave up or take down.
And now we have this range of interventions, each appropriate to different harms, different risks.

So I guess this one feels like a very profound shift. I mean, it feels like a shift, in less than five years, from a worldview that seemed to suggest that innovation and openness will always be at least net beneficial — that, in the language of tech, the affordances will somehow outweigh the vulnerabilities — to a view that says there's real harm and we have to be actively engaged in harm prevention and harm reduction, even if that has some innovation cost. But I guess the question would be: is that enough, right? I mean, even the story of this week is that, thanks to what technology can do, the way Trump responded to the question about repudiating white supremacists — however you want to characterize it — sort of seeded a viral campaign among the Proud Boys, this white supremacist group, and they were actually able to promote themselves. And I take your point that that's a mainstream media moment that gives birth to that — that's a real, live national broadcast television moment — but certainly the ensuing campaign took advantage of what technology allows people to do. Can you get ahead of the way that these actors, whether it's a foreign government, an extremist group, or someone just interested in creating fog around health information, move and adapt, even as you make policy?

I think this is exactly, as you say, the big shift that's happened — perhaps even since Dick was at Twitter — the shift from being reactive to proactive. So if you take something like state media: two or three years ago now, we took the decision for Russia Today and Sputnik to remove their ability to advertise, which we've now broadened to any state-controlled media.
That's a reactive change; it stops the advertising. We're also proactively now applying labels. So if you see a tweet from Russia Today or Sputnik, or from Chinese state-controlled media, you actually, in your timeline, in real time, are notified: this is coming from a government source. We've also taken that action for government accounts, because one of the things that we recognized, particularly looking at protests around the world, is that the interplay between state-controlled media and government is incredibly important. So by being proactive and applying those labels, we give people more context. I think that's the big shift that has happened: last year, about half of the content that Twitter took down was surfaced by our technology, reviewed by people, and removed, rather than waiting for a user report. So that to me is the big shift: rather than being reactive, waiting for the problem and then trying to deal with it on a case-by-case basis, it's how do you take much more systematic approaches, ideally leveraging technology, so that you can get ahead of these problems. But I think the challenge is that bad actors always evolve. So there's always been an element of reactivity, and the question is how you build resilience. And so one of the things that we decided to do — not to empower Twitter, but to empower the research community, governments, and the public — is this: every time we now take down an information operation that we attribute to a foreign state, that content would otherwise no longer be available to anybody. So what we are now doing is making that archive available to researchers, not just so they can look at how many tweets were sent on a certain day, but so they can look at the narratives, look at the tactics, which then allows wider social discussion. As you may have seen, yesterday we took down some Iranian accounts, thanks to a partnership with the FBI.
Educating people about the tactics adversaries use, and about the narratives they're using, is part of building resilience and empowering the public to be better protected. So yeah, I think industry is far more proactive than it has been, and for us, that use of technology and people going forward is going to be critical to how we protect the public conversation.

So, speaking of proactive, one of the questions we're getting from the audience is: how are you starting to plan for the aftermath of the US election? There's obviously a lot of anxiety about content calling the election early, or content claiming there was some election administration problem that should cast the results into doubt. Are you already thinking about how you'll respond to that?

Yeah, and to make my point, I was just looking at my mug — I have a very branded Twitter mug, but it's from the UK general election in 2015. Given the number of elections that have happened around the world, every year is an election year on Twitter, so we learn from the previous experiences. In some cases — take the Indian election — the polling process actually takes place over several weeks, so you're thinking about how you protect against those kinds of cascading effects of results from previous regions. So our approach on this is, again, a combination of things. We take down content where there's the highest risk of harm: at a simple level, telling someone to vote on the wrong day — we're going to remove that under our civic integrity policy. We're then going to take a look at content that perhaps might be confusing and risks being misleading, where there's no call to action and no specific issue there. That's where we can provide extra context. That might be linking people, through a label, to information that's coming from credible authorities at the state level. It might be that, if you're in a certain state, how do you find out what's happening?
Well, often those state election boards and those state attorneys general are tweeting the latest information in real time, so we want to make sure people can find that information quickly. We've already banned political adverts. And I think what you're now seeing is actually the recognition that political ads aren't just about campaigning; they're about setting a narrative and spreading a message far beyond organic reach. And so by limiting that advertising already, I think we've closed down the risk there. And then we're going to make sure that the credible information from the news organizations we partner with is prominent for all of our users, so that when people do start to make statements, we can provide context. And I'd urge everyone who is watching to go and read our civic integrity policy; we have updated it specifically to cover questions of undermining confidence in the election and also claiming victory early. Politics is always going to be fluid, so we've got to have flexibility in our policies. But this is something that we, with our partners in government and in civil society, are looking at very carefully: how we make sure we get the best and most accurate information to the largest number of people quickly.

One question that often comes up around policy for platforms of the scale of Twitter is the enforcement question. You know, can the computational tools, or human moderation, or some hybrid actually keep up with the level of content that might be violating the policy? How do you assess Twitter's ability, especially given the frenzy of content around some of these issues, to actually enforce at a level that you think is going to have a beneficial effect?

And this is something that we're thinking about carefully as well. So, the way that Twitter works:
Obviously, there are accounts with prominent followings, and there are small and new accounts. And we said, when we updated our policy, that we would be focusing on the most harmful content with the widest audience. So that's where our focus is. And again, this is, I think, a recognition — certainly when we talk to regulators globally — that all pieces of content are not the same. If you try to have a standard approach where every piece of content must be reviewed in the same period of time, what that risks is that you're not focusing your resources on the areas of highest impact. And so we're focusing on, whether it's the verified accounts that you see on Twitter or those accounts with the highest engagement, but also working with partners. And again, whether it's our partners in government, our partners within political parties, or our partners in civil society, trusting their expertise to say: this might be something that's building momentum. A really pressing problem for us is the idea that people are organizing on other platforms. So actually working with partners who are saying, hey, you might have seen this conversation happening, it's thinking of coming onto Twitter, be aware — those kinds of conversations, where we can be proactive and prepare for things being organized off Twitter, are also a big part of how we make sure we stay ahead of this challenge.

So one thing I want to be sure to ask you about before we wrap is an announcement you made in the last couple of weeks.
It's basically nudging people to actually read something before they share it, which I thought was sort of an interesting admission of something I think we all suspected: that the ease of sharing — which is obviously one of the great, fantastic elements of social media that those of us who use it take advantage of — can be an incentive to pass along content that an individual may not have fully digested, and obviously, by definition, leads to the proliferation of that content. Tell us a little bit about how this came about and how it's going.

Well, I think the simplicity of the intervention speaks to the benefits of taking an approach that isn't just content-moderation-led. So in the case of digital literacy, which is a societal need, it's going to come from schools, it's going to come from parents, it's going to come from nonprofits and civil society. But there are things that we can do, and this isn't a company expressing a view on the content. It's not a company trying to tell you to take one view or the other; it's just making sure that people are informed. And David Rand at MIT has done some great work looking at how you trigger a mindset of critical thinking, and I think that's what this intervention does. And so the data we saw was that by prompting people to say, you haven't read this, are you sure you want to share it, we actually saw a 40% reduction in the number of people who were retweeting content without having read it. So I think that's a really simple example. We have a great long-standing partnership with UNESCO to spread digital literacy skills. I think this intervention is a really good example of how, with content moderation, you're often focused on making a judgment of the content, and actually this is something where you can use behavioral signals — just by nudging somebody to say, would you like to read this?
We can improve critical thinking, improve digital literacy, and then hopefully improve the wider quality of information that's being shared across Twitter.

One of the generalized arguments about social media — particularly social media that earns revenue from advertising — is that the phenomenon you identified and are intervening in is exactly the thing the systems are designed to generate, right? I'm not saying Twitter specifically, but the general argument about social media is that the systems are designed to get you not to read, to get you to just engage with as much content as possible and share it with as many people as possible. That's the network effect that makes the platform valuable. So to what extent is this against interest, do you think? To what extent is this kind of intervention going to run aground on just the economic physics of the way a lot of social media works?

Well, I think this is a really good example of what Jack Dorsey, our CEO, spoke about when he testified to Congress: rethinking the fundamental incentives of services like Twitter. And so I think this is a good example of how, you know, people may focus on the business model that we have, but actually, in an intervention like this, by taking decisions to understand why people on Twitter behave the way they do, we can help improve the quality of the information on Twitter. This is a good example of how rethinking those incentives is something that we can make meaningful progress on and improve Twitter. And this goes to the question of how we improve the health of Twitter: our view is that improving health, improving critical thinking, is a support to our business model, and that the healthier Twitter is, the healthier that is for the people on Twitter.
It's healthy for the conversation, and it's healthy for our business. So I think the cynical assessment may be that those two things are in tension, but actually, from the point of view of looking at the health of Twitter, we think they're very complementary.

So, last question. You've used the word health a lot, and I know it refers in part to a specific way that you've defined what a healthy Twitter is, but let me ask you a bigger question: what is the healthy relationship between social media and democracy? If social media is making democracy healthier, what's it doing for democracy?

Well, I think that may be a whole other conversation — I'm not sure how much time we've got left — but for me, as someone who lives in a different country from my family and where I grew up, and who's deeply passionate about issues of politics and how society and technology interact, the transparency that a platform like Twitter brings is striking: things that used to be written in diaries, things that used to be shared in small circles of advisors and only years later brought to light, are now real-time, open, public conversation between elected officials across state lines, across national lines, across continental and political divides. That's something that I think is still transformative. Progress is never linear, and there are always going to be challenges, and we have to be deeply and acutely aware of the responsibility we have to make sure that conversation is healthy. But I think of the net benefit that we spoke about earlier — the value of being able to speak as an individual to people directly in public office and actually hear back from them, have conversations.
You know, there was some great research that the Knight Foundation actually published showing that people who used platforms like Twitter saw a broader range of information sources than people who were not digitally connected. So I think you're seeing people accessing more information and having more conversations with people from different backgrounds and different cultures. That to me is still an absolute underpinning benefit for democracy. And I think one of the challenges, as we evolve through elections around the world, is the role of social media in providing quality information and context to people on those platforms, but also the responsibility of policymakers, the candidates in elections, and the wider media ecosystem, each playing their part. And I think that's where we're now seeing an awareness of this — you know, the Washington Post recently publishing principles that would underpin how it would cover certain challenging issues during the election. And so, we're incredibly invested in making sure that the health of Twitter improves. We think that is a support to democracy. And actually, as a company, advocating for the open internet, which we believe drives societal and democratic value, is something that is far bigger than just Twitter. It speaks to the fact that we believe the open internet goes hand in hand with democracy. And in those places where the open internet isn't available — well, we think that by advocating for and protecting the open internet, we're also advocating for and protecting democracy.

Well, if you want to get deeper into these topics, you can follow Nick on Twitter at Nick Pickles. You can also follow Twitter's public policy team at Policy to learn more about some of these developments and decisions as they develop. Nick, thank you so much for joining us. Thanks very much.
All right, folks, we've got some incredible shows coming up in the weeks to follow. On October 8, for a very different view than I think you heard today, we'll hear from Rashad Robinson from Color of Change, who was a lead architect of the Facebook ad moratorium by a number of advertisers this summer, the Stop Hate for Profit campaign. On October 15, we'll hear from Stephen Hawkins, director of research at More in Common, which has been putting out field-leading research on polarization and division in United States politics. On October 22, we'll hear from Zeynep Tufekci, an associate professor at the University of North Carolina, who has emerged as a kind of contemporary Nostradamus on topics ranging from social media and its role in our democracy — what we talked about today — to the COVID crisis and how we should be responding. As a reminder, this episode will be up on the website later today; you can see this episode and any episode on demand at kf.org slash FD show. You can also subscribe to the Future of Democracy podcast on Apple, Google, Spotify, or wherever you go for podcasts. Email us at FD show at kf.org, or if you have questions for me, just send me a note on Twitter at The Sam Gill. Stick around for a few seconds after the show to take a two-question survey. As always, we'll end the show with the sounds of Miami singer-songwriter Nick County; you can check out his music and follow him on Spotify. Until next week, thank you for joining us and stay safe.