Okay, so good morning, everyone. My name is Barrie Sander. I'm a postdoctoral fellow at the FGV School of International Relations in Brazil. I am originally from London. I'm not Brazilian, as you may have guessed from my accent. The title of my talk today is Democratic Disruption in the Age of Social Media: Paradigms of Platform Responsibility for the Governance of Online Speech. I'd like to begin by thanking the organisers for kindly inviting me to speak today. It's a great privilege, and thank you all for coming to listen, more importantly, I guess. It's great to be back in Cambridge. I was an undergraduate here quite some time ago, and being back has really made me reflect on what the online environment was like back in the period 2003 to 2007 and how the general atmosphere has changed since then. That period when I was an undergrad here was really the startup phase for social media platforms, and there was a real innocence about them back then. The Arab Spring was still years away and, to put it in perspective, I think the greatest claim to fame for Mark Zuckerberg at the time was probably the creation of the poke function, which was still alive and kicking in those days. It's fair to say that the age of innocence has come to an end. The honeymoon period for social media platforms is firmly over, and the world seems to be in the grip of what has been called a global techlash. It's really that techlash which forms the context of my talk today.

So when we think about today's online environment, there are really three core types of concern that have been raised about the impact of social media platforms on democracy. The first is what I call the problem of domination. This is the concern that the economic and architectural power of the leading social media companies gives them an outsized influence over not only the political sphere but multiple sectors of society relevant to public discourse.
Secondly, there's the problem of content moderation. This is the concern that the standards and techniques developed by social media platforms to determine both the visibility and the permissibility of content are arbitrary in substance, applied in a discriminatory manner, lacking in transparency or local stakeholder input, and procedurally and remedially deficient in certain ways. And thirdly, there is the problem of data exploitation. This is the concern that social media companies have been exploiting the personal data of their users to maximize profits while neglecting the costs to society, including unprecedented forms of surveillance; the facilitation of micro-targeted political advertising, which can reduce the visibility of political communication and enable voter suppression; and the facilitation of disinformation and trolling campaigns that may undermine and distort political discourse.

So it's in the context of these challenges that the question has very much become not whether to regulate social media platforms but how to regulate them, or perhaps how to build or change existing regulatory landscapes. But before turning to what regulation might look like, a word of caution. To date, both the diagnosis of digital threats to democracy and the development of regulatory proposals in response have largely been based on hunches and outrage. Going forward, what I suggest is that we need to move away from hunches and outrage towards an evidence-driven approach. This is really important to guard against what Eliza Bechtold yesterday called threat inflation. For example, a range of recent empirical studies have begun to dispel some of the hype concerning the existence of filter bubbles and online echo chambers by examining the media habits of individuals across the entire media environment. These studies indicate that in certain societies at least, such as the UK, concerns over social media-driven filter bubbles may be overblown.
Going forward, what is needed are further studies that examine the precise effects of social media platforms in different societal contexts, because, as Yochai Benkler and his colleagues have noted in their recent book, the effects of social media platforms may vary significantly depending on the cultural and societal context in which they are used. Relatedly, amidst the rising practice of tech shaming, it is important not to scapegoat social media companies at the expense of examining longer-term structural factors behind democratic decay around the world. I'm not suggesting anyone in this room thinks this, but for the benefit of some politicians it is worth pointing out that social media platforms did not create, for example, the 2008 financial crisis. They are not responsible for the revolving door between lobbying firms and political institutions, for a global network of tax havens, for policy disasters such as the Iraq war, or for economic policy that exacerbates societal inequalities. Yes, social media platforms are political, but so is the discourse by which politicians and certain sections of the mass media try to explain away a host of undesirable societal trends they dislike, and we should be wary of that.

Now, importantly, of course, this is not to absolve Silicon Valley, but merely to caution that diagnosing the problem should be evidence-driven and set in the context of broader structural factors. Equally, while the precise effects of social media platforms on democratic processes remain, at least in certain respects, disputed, it is clear that sufficient evidence already exists to indicate that their governance of online content and data requires dramatic improvement in any case. And it's in this context that my paper explores three regulatory paradigms that may help us alleviate some of the content and data governance concerns that social media platforms have given rise to.
Now, given constraints of time, the following is just a broad sketch, but maybe we can discuss more in questions. The aim is really to critically examine these paradigms: what they offer, what tensions arise, and what their limits are.

The first is a human rights paradigm. It involves adopting a human rights-based approach to platform content moderation, an idea expressly put forward by David Kaye in his capacity as UN Special Rapporteur on Freedom of Expression. What does this mean? Substantively, it would mean aligning platform content moderation rules and guidelines with the tripartite test of legality, legitimacy, and necessity through which freedom of expression standards are defined under international human rights law. In terms of process, it would mean improving the transparency of community guidelines, algorithmic decision-making, and human moderation processes, enhancing the quality and consistency of engagement with local stakeholders, and enhancing the capacity of platforms to resist rights-violating pressures from states. And finally, procedurally, it would mean improving how platforms notify users of content removal, establishing robust internal and external appeals mechanisms and grievance processes, and ensuring adequate remedies for wrongful content removals.

A number of questions, tensions, and difficulties arise from this, and I'll just give a few. First, should social media platforms be required, when restricting content, to align their justifications with the narrow public interest grounds recognised under international human rights law? Is that something we think is a good thing? And relatedly, do those standards provide sufficiently clear and consistent guidance in any case? Secondly, to what extent should we expect platforms to resist rights-violating pressures from states in relation to their content moderation practices?
And thirdly, what does an effective appeals mechanism look like in this context, given the scale of some of these operations? These are some of the questions that I'm going to look at in the book.

The second paradigm involves data protection. There are essentially a number of different approaches to data protection around the world. One approach is to emphasise control over data. This is the idea that individuals should be properly notified of the reason, context, and purpose of their personal data processing, and that processing should only occur with their consent. In practice, however, notice-and-consent approaches to data protection have become a de facto payment model in which individuals trade their consent to personal data exploitation in exchange for access to online services. Indeed, without more, notice-and-consent models are a poor fit for the cyber context. We simply lack the time and interest to go through these extensive terms of service agreements. And notice-and-consent models also fail to take into account the weak bargaining position of individuals vis-à-vis social media companies.

So a different starting point is to recognise that certain companies hold power over individuals who are vulnerable to them, dependent on them, and have to essentially trust them. This relationship of dependency and trust forms the basis not only for more robust consent requirements but also for a range of substantive obligations to be placed on social media platforms. And we've seen around the world a range of ways of doing this. One is the GDPR, which places responsibilities on data controllers and processors in a variety of ways: there are data protection principles, you have to have legitimate grounds for processing personal and sensitive data, and there are a range of obligations such as impact assessments and notification of data breaches.
An alternative but related approach has been put forward by Jack Balkin and Jonathan Zittrain in the United States, which involves placing fiduciary obligations on social media companies so that, similar to doctors or lawyers acting in professional capacities, they owe duties of care, confidentiality, and loyalty to their users. Now, in practice, the effectiveness of trust-driven data protection regulation very much depends, of course, on the nature of the responsibilities it establishes. With respect to the GDPR, for example, a number of questions remain to be answered. Is freely given, specific, informed, and unambiguous consent possible for targeted advertising, given the power imbalance that exists between users and social media companies, particularly when it's offered on a take-it-or-leave-it basis? How broadly will the legitimate interests criterion be interpreted as a legal basis for processing personal data, and can online targeted advertising in particular be based on this criterion? And finally, how robust will enforcement be in practice? In this regard, it's notable that France's data protection authority has already fined Google 50 million euros for breaching the GDPR's transparency, information, and consent provisions, and it remains to be seen whether similar fines will be delivered in the coming years.

Now, a final paradigm discussed in my paper involves updating advertising regulations to respond to the corrosive effects of today's system of online advertising. There are essentially two approaches that I look at here. One approach that has become prominent in recent years entails improving the transparency of so-called political or issue advertising. This is something that Facebook has implemented in the US: it requires all political or issue ads to make clear who paid for them, to be stored for up to seven years, and for those running such ads to verify their identity and location.
However, in practice, a number of challenges have arisen. How do you define political and issue ads? Would it not be better simply to have greater transparency for all advertising? And secondly, there's the challenge of paid social media influencers. The nature of advertising is changing: formal advertising is declining, and you now have influencers who assume the role of advertisers, and regulators need to be aware of that. The second approach involves banning ads with certain types of content. We recently saw Facebook do this with vaccination disinformation advertising. But this raises the challenge of how to establish a structured process for determining which types of content should be subject to ad bans in the future, so as to move away from the largely reactive, outrage-driven approach that currently exists, which I was talking about earlier.

Okay, so by way of brief conclusion, I wish to make four points. Four points, I guess, not so brief actually. First, there is no single regulatory panacea. A mixture of paradigms will be needed to respond to the multi-faceted challenges raised by social media platforms. I've outlined three, but there are more. Second, some issues may be unresolvable absent more radical structural policies, for example, solutions that have been put forward like breaking up tech monopolies or requiring more radical changes to targeted ad-driven business models. Third, it is important not to focus solely on the responsibilities of social media companies. Political parties, states, governments, traditional mass media organizations, data brokers, and advertisers all play vital roles within the online ecosystem. The responsibilities of these actors would also benefit from greater attention, given the challenges of the social media age. And as already discussed earlier in my presentation, we mustn't forget the longer-term structural factors behind democratic decay.
Finally, it is important not to focus just on social media platforms. Other technologies, such as messaging services like WhatsApp, raise unique challenges that may require bespoke regulatory and technological responses. Ultimately, the final message I wish to convey is that while social media companies raise unique regulatory challenges, it is important to emphasize that they are very much regulable. As such, the challenge going forward is to identify which regulatory frameworks offer smart ways to alleviate the concerns raised by social media platforms without generating disproportionate collateral harms in the process. Thank you.