Welcome, everybody. We have a terrific panel today on advancing digital content safety. My name is Marty Baron. I'm the executive editor of the Washington Post. And our panelists are Mark Read, CEO of WPP; Susan Wojcicki, CEO of YouTube; and Marietje Schaake, former member of the European Parliament and now director of international policy at the Cyber Policy Center at Stanford University. Thank you all for joining us, and thank you to the panelists.

I want to start with a quick lightning round. The title of this session is advancing digital content safety. So let me ask each of the panelists, very quickly, and I hope very briefly, perhaps in less than a minute, to offer a definition of what digital content safety really means, and how we measure it. So Susan, do you want to start?

Sure. Thank you so much for having me. To me, it means protecting the community that we have of users, creators, and advertisers against egregious real-world harm. That's the way that we define a lot of our policies. And it means doing so in a rigorous, consistent, and scalable manner, so that it can scale across the United States and across the world, in all areas, and address a large number of issues.

Mark?

I'd say that therein lies the dilemma in all of this, and perhaps we'll get to discuss it. I think different people have different definitions, but I think there are some basics around harmful content, around bringing people together, and around platforms that I think we all agree with. And, you know, as someone with two kids, I want my kids, quite frankly, to be able to be on those platforms safely and not be concerned about it. At the end of the day, I think that's something everyone shares.

Marietje?

I think it's about keeping people, the community, and the public interest safe from harms boiling over from commercially led social media platforms into the real world, as we've seen on January 6, then around the world with election-related violence, and the harms to public health coming from disinformation about COVID. So I would say it's really about making sure that content shared online does not cause risks or harms to the safety of people anywhere in the world.

Okay. So we've set the table, and we've already heard reference to the January 6 attack on the Capitol in Washington, D.C. So, Susan, almost a week after that attack, YouTube suspended President Trump's channel for seven days. Tell us why you did that, and in particular what that has to do with content safety. And let us know if he's still suspended. One question is, why did YouTube move to suspend Donald Trump more slowly, and it seems less severely, than other social media companies like Twitter and Facebook? YouTube seems to move more slowly than other social media companies to take down videos that incite violence or hate and that spread lies like those about voter fraud. So is YouTube taking action fast enough?

I would say definitely, and I would definitely take issue with the statement that we do it more slowly than other platforms. Probably to answer the question best, it's good for me to take a step back and explain a little bit about how our systems work. YouTube from the very beginning has had a system where we have a number of policies, and we have a strike system that goes with that. So we issue strikes when there's a policy violation, and depending on the severity and the number of strikes, that either leads to a short-term suspension or ultimately a termination of the account.
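As an illustration of the kind of strike-and-suspension mechanism Susan describes, here is a minimal sketch. It is not YouTube's actual implementation; the thresholds, class names, and suspension lengths are hypothetical and chosen only to show the shape of such a system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

# Hypothetical thresholds for illustration; the real policy numbers
# are not taken from the panel and are not public in this form.
STRIKES_BEFORE_TERMINATION = 3
SUSPENSION_DAYS_PER_STRIKE = 7


@dataclass
class Channel:
    name: str
    strikes: List[datetime] = field(default_factory=list)
    suspended_until: Optional[datetime] = None
    terminated: bool = False

    def issue_strike(self, now: datetime) -> str:
        """Record a policy violation and apply the resulting penalty."""
        if self.terminated:
            return "already terminated"
        self.strikes.append(now)
        if len(self.strikes) >= STRIKES_BEFORE_TERMINATION:
            self.terminated = True
            return "terminated"
        # Each strike short of termination triggers a temporary suspension.
        self.suspended_until = now + timedelta(days=SUSPENSION_DAYS_PER_STRIKE)
        return f"suspended until {self.suspended_until:%Y-%m-%d}"

    def can_upload(self, now: datetime) -> bool:
        """Uploads are blocked while suspended or after termination."""
        if self.terminated:
            return False
        return self.suspended_until is None or now >= self.suspended_until


if __name__ == "__main__":
    ch = Channel("example-channel")
    print(ch.issue_strike(datetime(2021, 1, 7)))        # suspended until 2021-01-14
    print(ch.can_upload(datetime(2021, 1, 10)))         # False while suspended
```

The property the panel keeps returning to is that the same rules apply to every channel: nothing in a sketch like this branches on who owns the account.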
And that applies to everyone, from small creators to heads of state; it's not that special people get any kind of special exception. Everybody is treated in the same, consistent way. So on December 9th, which is when the states certified the election, we began removing all content alleging that the outcome of the 2020 presidential election was changed by widespread voter fraud or errors. And I believe we were the first platform to do that. That was a month before the Capitol attack that we started removing that content. We did so across a number of channels, a large number, from a variety of different backgrounds, including some videos from the Donald J. Trump channel. Now, given this was a new policy, as we do with all of our policies, there is a grace period. So what we do is we remove the videos, again to make sure that we're preventing egregious real-world harm, but we also want to give the creators a bit of a warning. So we remove them, but we don't issue a strike. After the Capitol attack, we accelerated that, and we started issuing strikes on all channels that uploaded content that violated that policy, including the video that Donald Trump uploaded that day; we removed that one. When he did upload additional videos, we issued a strike, we suspended that account, and it remained suspended.

So I would say, A, that we actually removed the videos a month before the Capitol attack, which is a time period that really mattered. And I would also say that different platforms are used differently. Twitter, for example, may have had a hundred tweets a day from the president, whereas a lot of times our platform was used in a variety of different ways; it might have had content uploaded from TV, C-SPAN, other channels. And so there was actually a couple-day period after the Capitol attack where he didn't upload anything. But when he did, and it was a clear violation of our policies, we did issue a strike. And I do think it's really important to do so in a principled way, not just say, hey, you know, we've made a decision, we're going to suspend it for whatever reason. We have a policy system. We enforce those policies. We enforce them consistently, regardless of who the person is. And whenever we see a violation, we take action very quickly to make sure that we're keeping our community safe.

So President Trump, former President Trump, is still suspended. When would you be lifting that? What does it take?

Well, I think in light of the situation, there are so many concerns around violence that it's something we're just going to have to continue to wait on and see how it evolves.

Okay. Mark, it was about two years ago, I think, that advertisers were boycotting YouTube when pedophiles were said to have infiltrated the comment sections. The videos were innocent enough and didn't violate YouTube's standards, but the comment sections were out of control with suggestive comments. And yet advertisers have really flocked back to YouTube, even though there appears to be a fair amount of content that violates YouTube's stated standards. So why have advertisers come back? What are your company's standards, and what are the standards of your clients for advertising on social media sites? And are those standards really enforced? Are they regularly enforced, or just when certain content generates a huge amount of bad publicity?
So look, I think the first thing to say is that our clients are very concerned about their messages appearing next to any harmful or dangerous content. Our job is to protect them from that, and they expect us to protect them from that and hold us to account. I think, as Susan said, many of these platforms work in many different ways; there is an almost infinite amount of content on them. And I think everyone's frustration has been that often these situations happen and are then brought to public attention, often by a news outlet, quite properly. And it's a revelation to people, and it shouldn't be a revelation to people. I think that's part of the problem.

Several years ago, it was more than two years ago, there were issues on YouTube. And I have to say, and I'm not just being nice to Susan, they were very responsive, first by raising the quality threshold of the videos that, if you like, our advertisers' messages could appear next to, so there was no chance they would appear next to this content, and then by taking other steps to root out and push the content down. Now, I think our clients' concerns increasingly go beyond that, more fundamentally, to the platform overall. Clients realize that their advertising is funding all of these social media platforms, which generate 95% of their revenue from advertising. And so the questions have become even more serious than just whether your message appears next to harmful content on the platform: do you believe the platform overall is doing the right thing or the wrong thing? And therefore, I think, they're forcing the platforms to take more concerted action. But I would say that for our clients and our people, and I think actually many of the people who work inside many of these companies, on several occasions things come to light, if you like, after the event. And, you know, increasingly, I think (sorry, I hope the challenge with my connection has been sorted out), increasingly, that needs to happen. So clients do take this extremely seriously. But there's no doubt that the platforms need to do more.

Marietje, what's your own sense as to whether the platforms are doing enough, and who should be deciding that? I mean, should social media companies be regulating themselves when it comes to hateful and violent content, or when it comes to disinformation? I think you've warned in the past about what you've called a threat of privatized power over the digital world and argued that there's a need for what you've called a global democratic alliance to set norms, rules, and guidelines for technology. So what does that mean in terms of the safety of digital content? Should governments be dictating what's safe and what's not safe, what's true and what's not true, what can be kept up online and what has to be taken down? What's your sense of that?

Yeah, before answering that, I just want to make a very short but important statement, which is to say that I want to express solidarity with the Russian population and Alexei Navalny, because Vladimir Putin was given a podium to speak here at Davos, and the idea is that dialogue is better than no dialogue. But of course, in Russia itself, there's no room for dialogue because there's no room for dissent. And so, as we talk about freedom of expression online, let's spend one brief moment calling for the freedom of critical voices in Russia, who are threatened, and of peaceful protesters, who are beaten. Now back to the question about the power of private platforms, which of course have global reach.
And I think it's tempting to focus mostly on illegal and harmful content, but the business model itself deserves more scrutiny. The amplification of content and the profit-driven decisions are not intended to optimize for, let's say, the resilience of democracy, or for making sure that public health concerns are protected. So just in the case of YouTube, because Susan is on the call: YouTube allowed President Trump to buy the front page of its platform exclusively on election day and also in the days before. That is not a matter of harmful or illegal content; it's a business decision that has profound political implications. So what I think the problem is, is that there's not enough independent oversight over these private companies when it comes to how their business models work, how data is collected and processed, and whether people's rights are respected. And we've had approximately a decade of self-regulation, or very, very light-touch regulation, and I think we've seen that that doesn't work. I think there's broad agreement around that. And when we ask who should intervene, it's important to note that governments differ in their legitimacy. So I would say democratic governments, in the context of the rule of law, should make sure that the definitions and processes for protecting rights online, and for protecting the public interest online, are clear. That is the direction we need to look in. And we can't just limit our view to the lens of freedom of expression and harmful or illegal content. The business models of these social media companies have a much broader impact.

Susan, since you're kind of in the hot seat here, I wanted to give you first an opportunity, lucky you, to respond to Marietje's comments, particularly about giving Donald Trump such prominence on election day, and her more general comments about what should be done with social media.

Sure. I'll just start by saying that for the last number of years we've had a huge focus on responsibility, and I've been really clear that that has been my number one focus. And there's been a lot of misinformation about our business model and whether it works with responsibility or against it. I think it's really important to point out that we are funded basically by our advertisers. And our advertisers have done a tremendous job of working together and identifying what their standards are. They've worked with the World Federation of Advertisers to come up with a number of different global standards. We report on those standards to them, and we are audited by third parties to make sure that we meet those numbers. I'm bringing that up because when we look at governments, a lot of times, as was pointed out, all governments have different standards, and in many cases it's not clear what those standards are for us. We don't want content out there that causes egregious real-world harm. There may be lots of content that is legal but harmful, and we feel a responsibility to remove it. If we don't remove it, it's bad for our company, it's bad for our reputation, it's bad for advertisers; they're going to pull the dollars associated with it. And so we really do our best to consult with experts, figure out where the right place is to draw those lines, define those policies consistently, and enforce them consistently.
But I do think the fact that the advertising agencies and advertisers were able to come together globally and agree on standards then enabled us to actually report on them, and to be audited by a third party, which is the Media Rating Council, the MRC, and to be able to say, yes, YouTube is meeting them, and is at over 99% on the metrics those advertisers want, across the board.

And I think health was also brought up. Health has been an incredible area, if you look at the pandemic, where we moved incredibly quickly. It wasn't as if there was government regulation about what we should do with COVID and 5G conspiracies; we had to move immediately. We implemented 10 different policies. We removed over 800,000 videos. We launched COVID news shelves with authoritative news in 30 different countries, and we served over 400 billion impressions from 85 different health organizations to make sure that people got the right information. And we're incented to do the right thing. I believe we played a really important role in getting accurate information out related to COVID.

Susan, obviously you need moderation. Mark talked earlier about the almost infinite amount of content on social media sites. As I understand it, for moderation you use a combination of AI and human moderators. But even with the best AI, even with thousands of moderators, violations are going to find their way through, and they'll be seen by millions of people and shared almost instantaneously. So can a platform like YouTube or Facebook or Twitter ever really be sufficiently well moderated before damage is done? There are probably a lot of people who would say it's impossible. What are your thoughts on that?

So you're right, we do use a combination of both humans and machine technology, and I believe we've made tremendous progress in this area. We will never be 100%, but our goal is to be over 99.9% in terms of making sure that we have removed the content that violates our policies. We publish a transparency report so that people, regulators, governments, and companies can understand what we're removing, and also how quickly we're removing that content, to your question about how quickly content is distributed on the internet. So if you look at Q3, we removed almost 8 million videos, and 90% of those were removed through automated flagging. That's important because it enables us to remove the content extremely quickly. And if you break that down, 76% of them had 10 views or less. So that shows that we're removing content very quickly, before it's able to have that broad distribution. So, just to be clear, we are going to keep getting better. Our classifiers will get better, and we'll define our policies more tightly. We'll never be 100%, but I'm hoping that we're at 99.9, and that the number of nines just continues to increase in terms of how well we find and remove the content that violates our policies.

And so, Mark, how are advertisers actually monitoring what happens on these sites? What's your system for doing that? And how do advertisers and your own companies see their own responsibility to monitor these sites, and how seriously is that taken? Are advertisers being aggressive enough? Are they being thorough enough? Are the standards high enough? And is there any genuine rethinking in the advertising industry regarding its relationship with social media sites?
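Taking Susan's Q3 figures at face value, roughly 0.90 x 8 million, about 7.2 million videos, would have been removed via automated flagging, and if the 76% figure applies to those automated removals, about 0.76 x 7.2 million, roughly 5.5 million, were caught before reaching more than 10 views. Below is a minimal sketch of how transparency-report style percentages like these could be computed from a removal log; the field names, structure, and sample numbers are hypothetical and are not drawn from YouTube's actual reporting pipeline.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Removal:
    """One removed video in a hypothetical removal log."""
    video_id: str
    views_at_removal: int
    auto_flagged: bool  # True if first detected by automated systems

def removal_metrics(removals: Iterable[Removal]) -> dict:
    """Compute transparency-report style percentages from a removal log."""
    removals = list(removals)
    total = len(removals)
    if total == 0:
        return {"total_removed": 0}
    auto = [r for r in removals if r.auto_flagged]
    low_view_auto = [r for r in auto if r.views_at_removal <= 10]
    return {
        "total_removed": total,
        "pct_auto_flagged": 100 * len(auto) / total,
        # Share of automated removals caught before broad distribution.
        "pct_auto_with_10_views_or_less": (
            100 * len(low_view_auto) / len(auto) if auto else 0.0
        ),
    }

# Tiny usage example with made-up data.
log = [
    Removal("a", 3, True),
    Removal("b", 250, True),
    Removal("c", 7, False),
    Removal("d", 0, True),
]
print(removal_metrics(log))
# -> total_removed: 4, pct_auto_flagged: 75.0, pct_auto_with_10_views_or_less: ~66.7
```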
Well, I mean, I think clearly there has been a lot of thinking about our relationship with the social media sites. And from our clients' perspective, you know, they would like Susan to say that 100% of violating content gets taken down. That would be everybody's objective. I think we all realize that may not be practical, given the volume of content that is there, but that would clearly be their goal, and I would imagine Susan's goal as well. And as you saw with the boycott of social media, which was perhaps focused on Facebook in the middle of last year but actually extended to the other platforms, advertisers take it seriously. I think they felt that their voices were not being heard by several of the platforms, and many of them went into that boycott. Now, whether a boycott is the right way or the wrong way to go, I think it certainly got the attention of the platforms. It did result in a more meaningful dialogue about what type of content was there, and in some common definitions of what they meant by harmful content and other things. Now, I think it's fair to say that all of these discussions should have happened a long time ago, and that these things should not still be up for debate. But they are complicated, and I don't envy their choices.

I think the other thing we have to realize is that these platforms are not universally bad things. You know, we're some 23 minutes into the discussion and we haven't talked about all of the small businesses that build their businesses on the back of Facebook or Instagram. We haven't talked about all of the creators on YouTube that make how-to videos. I don't think my son could have got through lockdown without YouTube, or indeed his parents. These platforms are not universally evil, nor indeed is advertising. Advertising funds this. And our clients do feel that responsibility, so much so that a number of them pulled their budgets from, I guess, the middle of the year until the end of the U.S. presidential election, feeling that they didn't want to be seen to fund this. So I think people do take it extremely seriously.

I think the challenge in the argument about regulation, and by the way, Facebook have called for government regulation because they want the decision taken out of their hands, the problem is that Donald Trump was the democratically elected president of the United States. And for those people who want to have Donald Trump taken off social media: he probably wouldn't have introduced a rule that took himself off it. And Chancellor Merkel, as I understand it, did criticize the decision at some point to take him off social media. So these things are not easy. And people from both ends of the political spectrum, is my feeling, use this battle to advance their cause one way or the other. There's no monopoly here: for every, let's say, left-of-center politician who's attacked on social media, the same is true of people on the right. You know, sadly, there's a lot of hate, if you like, on the internet, full stop. So we are just going to have to manage that. So I think it is a difficult case, and perhaps government regulation is the right way to go, to take the decisions out of people's hands. And maybe that is the way that things will end up.

Well, let's talk a little bit about that.
Marietje, here in the United States, there's a lot of discussion about Section 230 of the 1996 Communications Decency Act and whether it should be repealed or significantly revised. Section 230 effectively immunizes tech companies from liability for the content that's published on their platforms. And many would say, I think, that that's led to a lot of hateful, hurtful content, almost a wild west environment on social media, where lies proliferate and where violence can be promoted. So in your estimation, should Section 230 be repealed, making social media companies fully liable for the content that's published on their sites?

I think the question of who should be responsible for deciding what should stay up and what should come down should be much broader than just being in the hands of those private companies, which are often governed by a handful of people at the top making extraordinarily impactful decisions. So first of all, there needs to be more independent oversight and independent research into what happens, and, for example, access to information for journalists like yourself. But when it comes to the liability question in Section 230, the discussion often devolves into either no liability or all liability. And I believe that there is a road in between, one that does not focus so much on the content only, but also on the mechanisms of, as you mentioned, amplification. So not so much just the question of speech, but also the question of reach, which has a lot to do not only with how a message travels through the system, the algorithms and the infrastructure built by private companies, but also with the data collected about people using social media and search, who reveal a lot of their very, very private secrets, essentially, against which advertisements are matched. And the question is, how far-reaching should that data collection be? Are all categories of advertisement, for example political ads, or very sensitive personal information, all up for grabs for that matching? And so when you think about balancing the outsized power of private companies, I think you need different parts of the regulatory puzzle, from antitrust to data protection to intermediary liability to non-discrimination principles, and questions about when harmful but legal content becomes a public safety issue. And so I would like to take a much broader view of what needs to happen, because there is no magic solution. People have hoped that Section 230 would be that magic solution; people have hoped that antitrust would be; and I think neither is.

Susan, what would happen to YouTube if Section 230 were repealed, as former President Trump had advocated, particularly in his final days?

Yeah. Well, first of all, I just want to point out that Section 230 has two different parts. You mentioned the part that gives us some protections from liability, but there's also a second part that gives us the ability to remove content that is not necessarily illegal but could be seen as harmful. We, for example, remove adult content from our platform. So if you entirely repealed Section 230, A, it would take away our ability to remove content that we see as harmful. There are many different opinions on this, and that's actually been part of the problem, but I think a lot of people would see that as extremely problematic.
I think if you took away the protections in terms of liability, if you took away both parts of it, it would vastly change the internet. It's the underpinning of what the internet looks like. So I can talk about how YouTube would change, but as for how the internet as a whole would change, I think we could say goodbye to all reviews, to all comments, to all the user-generated content that's up there on platforms. YouTube and the internet would probably become a much, much smaller, highly curated set of content. It's similar to TV: when I grew up, there were like three channels that we were able to watch, whereas YouTube right now has millions. So I think we would go back to a smaller set of channels and a restricted set of information. And, as has been pointed out, there are many steps in between, and I'm hopeful that we can work with governments and try to come up with more sensible regulation, where we can keep the benefits that we have accrued from the internet, the useful information, the businesses. YouTube, just to be clear, is a platform for small media companies; we have media companies built on top of YouTube. And we actually just released a report showing that in the US we produced 345,000 creative jobs, and there are similar numbers in different European countries. So I think we just have to be really careful about the value that has been created, and work closely with governments to be able to achieve what we want without destroying the value that we have created with the internet.

Great. Well, I want to thank the panel. We've come to the end of the public portion of this discussion.