Hello, I'm Nicholas Thompson, the CEO of The Atlantic. It is my great pleasure to be here with Susan Wojcicki. She is the CEO of YouTube. We'll be talking about YouTube's crazy last year, we'll be talking about global governance, we'll be talking about misinformation, and we'll be talking about how Susan spent much of the pandemic with five children at home, which is as heroic as running YouTube at this moment. So, hello Susan, how are you doing?

Hello, how are you? Thank you, I am doing well. First of all, I just want to say thank you so much for having me, thank you to WEF for hosting this event, and thank you to the government of Japan for hosting these global technology summit conversations; I appreciate it so much. So I just wanted to say thanks to everyone for making it happen. It is great that we get to do this, even at this crazy time.

So let's jump in. I want to ask you a little bit about YouTube in the past year, because we've all been locked at home basically watching YouTube, right? We started watching videos on how to make hand sanitizer, and then videos of how to do arts and crafts, so we didn't go crazy. Tell me the most surprising thing that you've learned about how people watch YouTube during the pandemic.

Well, first of all, I'll just say I never thought that we would have so many hand-washing videos. One was featured on the Google homepage, and that's something I really could never have predicted. But I saw, and I felt, a huge responsibility for us with the pandemic: we were such an important link for people to all kinds of information. Whether people were at home and needed to connect to religious organizations or social groups, or we saw musicians who came out and did big concerts, we saw bands come and post historic coverage of concerts — it was just such an important way for people to connect and learn.
And, you know, one of the things that probably surprised me the most, which was really your question there, Nick, was how important we became in distributing COVID-19 information. We immediately saw the role that we played, and we had everyone working at full capacity. We served hundreds of billions of impressions of COVID-related information that came from different health organizations. We also made sure that we had playlists, and we had to implement a whole bunch of new policies. But we really saw the critical role that we played in health. We worked with over 85 different health organizations, and it was really the first time that we worked so closely in the health field with so many different organizations on something that was global in nature.

It's very interesting, and in fact it leads right into the news from today. As some people who are watching may know, YouTube made an announcement maybe three or four hours ago about violative content — basically measuring the amount of content that violates YouTube's standards. And one of the standards you can violate is misinformation about COVID-19. So the question I want to ask about this new report — it's out, there's transparency, that's wonderful — is this: the amount of content that people view that violates your policy is quite low. It's 17 views out of 10,000. Is that correct?

Yes, it's approximately somewhere between 16 to 18 per 10,000 views.

So that means, with my children, we probably see 16 to 18 a day. But the question I want to ask is: of the different categories of content that you are screening for, where you have policies that could be violated — hate speech, nudity, terrorism — what remains the hardest category to identify? They're all difficult in different ways. You've used machine learning to knock the numbers down. Which one does machine learning still have the hardest time with?
Well, first of all, I just want to say I think what we announced today was a really important milestone, because we have been asked many, many times — by governments, by the press, by advertisers, by the creator community — about this violative rate. And we were able to show exactly how good we are at enforcing our policies. We were able to show that we have a very high ability to find this content, and to show exactly what that number was. We were also able to show that we have reduced it significantly over time: if you look at where we were in 2017, at the same time of year, we've reduced this by more than 70 percent. And that is due to an incredible amount of hard work with machines and also improving our policies. So not only did we remove content that violates our policies at that significant rate, but we also created a lot more policies under which we had to remove content.

And I would say the machines are good; we can find content across the board. But something like hate speech, or anything that involves a lot of context, would be harder for a machine to detect. In the end, though, we've been able to really fine-tune our machines so that we can find a lot of this content. And it is flagged, but that doesn't necessarily mean that it's removed. What happens is the machines will flag it, and then it will be sent to human reviewers, who will determine whether or not it is, in fact, violative.

And the catch — one of the complexities here — is that this is content that violates your policies. It's not content that violates my policies, or that fits some government's definition of hate speech. So a critique that someone could make is: this is just what you think is bad content; it has nothing to do with what I think is bad content. How do you respond to that?

I'd say there are two different conversations.
So the first one is for you and I and governments and everyone else — everyone seems to have an opinion about what is the good content, what's the bad content, what should be up, what should be down. We engage with many different groups across many different topics, and I'd say that's one conversation. But then we post very clearly, and we say: this is the content that we have decided is violative on our platform. We post it in our community guidelines. And then there's a different question, which is: how good a job do you do at removing that content once you've identified it? This report that just came out showed exactly where we are, which is about 99.85 percent — we have a little confidence interval, which is why we give the 16-to-18 range per 10,000 views. So our goal is to break that into two different conversations: first, what should the policies be, and second, do we do a good job enforcing them once we have those policies?

Right, that makes a lot of sense. Let's shift to the question of good content. So there's all kinds of content that you treat in different ways, right? There's bad content, which you try to get rid of. There's borderline content — stuff that doesn't violate your policies — which you try to downrank. And then there's the content that we sort of want to see. And there's also a sort of fourth category: content we're really happy we saw. So if I spend an hour on YouTube, and I surf through YouTube and follow the recommendation algorithm, and I watch a lot of sports videos, and maybe I see the late-night comics at the end of the hour, I feel that was fine. If at the end of the hour I can solve a Rubik's cube, because the YouTube algorithm has pushed me in super interesting directions and figured out that I've always wanted to solve a Rubik's cube, then I'm thrilled.
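The violative view rate (VVR) arithmetic discussed above — a rate per 10,000 views, with a confidence interval because it is estimated from a sample of reviewed views — can be sketched in a few lines of Python. The sample counts below are invented for illustration and this is only a plausible reading of how such a metric could be computed, not YouTube's actual data or methodology.

```python
import math

def violative_view_rate(violative_views: int, sampled_views: int, z: float = 1.96):
    """Estimate a violative view rate per 10,000 views from a random sample
    of reviewed views, with a normal-approximation confidence interval.

    Returns (point_estimate, lower_bound, upper_bound), all per 10,000 views.
    """
    p = violative_views / sampled_views
    # Standard error of a sample proportion; z=1.96 gives ~95% confidence.
    half_width = z * math.sqrt(p * (1 - p) / sampled_views)
    lo = max(0.0, p - half_width)
    hi = min(1.0, p + half_width)
    return p * 10_000, lo * 10_000, hi * 10_000

# Hypothetical sample: 170 violative views found among 100,000 reviewed views
rate, lo, hi = violative_view_rate(170, 100_000)  # point estimate: 17 per 10,000
```

A wider sample narrows the interval, which is one reason a range like "16 to 18 per 10,000" rather than a single number gets reported.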
How do you think about incentivizing not just the run-of-the-mill sugar — the not-bad — but the really good stuff? What's the exact inverse of the violative content?

I think one of the things I've learned working in information — I've been at Google for over 20 years — is the broad range of interests that people have. Information is incredibly diverse. And what a lot of people love about YouTube is they can say: I went and found this specific video that I used to watch when I lived in a foreign country, you know, far away, 40 years ago, and I found it on YouTube; or I had to fix something very specific in my house and I could do that with YouTube. So first of all, it's really hard to say: this is the content that is really great. I think you started talking about educational content — you implied with the Rubik's cube that educational content carries this higher premium. I'd say educational content is incredibly important to YouTube, and almost everyone comes to YouTube to learn something. In fact, we just had an Ipsos study that said over 77% of people came to YouTube to learn something. And just anecdotally, everyone tells me how they fixed something in their house.

But I think what you're bringing up is one of these challenges: what is considered good content? When it comes to information, we do classify authoritative content. So if you're looking for COVID information, we actually can say: look, your local health authority, the CDC or the equivalent in whatever country you're in, or the World Health Organization — those are organizations that we can trust, as opposed to some channel that just showed up, about which we don't have any kind of authoritative information. So we definitely have a concept of authoritative sources for information, and we make sure that when people are looking for information that is sensitive, we show those authoritative sources.
But if you're in the entertainment area, or you're looking at how to fix something or how to learn something on an obscure topic, it's really hard to pass judgment about what is the best content out there.

Right. So on authoritative content, that makes sense, because you can just label it — if something is labeled as authoritative, you can boost it.

We don't just label it — we have a more sophisticated algorithm for how we identify that, and we're also working on a global component. But we definitely do raise it, and we've done a lot of work in the last year to identify sensitive areas and make sure that we're raising authoritative sources on that information.

Yeah, that makes sense. And by the way, when Susan speaks about fixing things: I was listening to a podcast of hers this morning where she talked about watching a YouTube video to learn how to fix her 3D printer, I believe, which I thought was a delightful story about how one can use YouTube. The last time we spoke in public, which was at South by Southwest, you introduced a new concept very much related to this, which is that when conspiracy information is shown, you would show a panel with authoritative information and lead people to Wikipedia. Tell me how this has evolved since then.

Sure. So yes, we met — what, almost three years ago? — in 2018, and I told you that we were going to label content. At the time that was a new idea; I don't think we had actually rolled it out, or we were just in the process of rolling it out. That became something that we now refer to as information panels, and those information panels have become incredibly important.
I'd say they're a serious workhorse in making sure that people have the right information, and something we can use to counter misinformation. I'll give you a few examples. We certainly have used them on all the coronavirus information: if you look at something COVID-related, we'll link to all different kinds of health information and health authorities, depending upon what country you're in, and we'll link them on all kinds of common conspiracies. We used them for election integrity, and with COVID we served hundreds of billions of impressions across the different information panels. So we've come a long way, Nick, since we first met. And I'm hopeful that our VVR is also the start of something really important, which is more transparency — an enhancement of our transparency report so people can understand how we think about violative views.

Well, I wish we'd been able to announce the transparency report on this. I'll do better next time. I'll work on that. Okay, thank you so much. It's very important to be able to bring these things live at the forum.

So, one of the things that to me is most interesting about the algorithm — we wrote a long piece about the way the algorithm has evolved at my previous job as the editor of Wired — is that every time you make a change to solve one problem, there's some kind of unintended consequence. There's something that you then have to catch up to, some way that behavior has changed, some new thing that is incentivized. Tell me how you think about the evolution of the algorithm right now, where it is right now. What are the key things you're prioritizing and trying to fix, and what are the things you're worried about?

Sure. I mean, I think we've come a long way with our algorithm. Ultimately we want to give information and suggest videos to our users that we think they're going to enjoy and want to see, related to their interests. But there are a lot of caveats to that, too.
So first of all, as I mentioned, when we deal with information, we want to make sure that the sources we're recommending are authoritative — news, medical, science, et cetera. We have also created a category of borderline content: sometimes we'll see people looking at content that is lower quality and borderline, and we want to be careful about not over-recommending it. So that's content that stays on the platform, but it's not something that we're going to recommend. Our algorithms have definitely evolved in terms of handling all these different content types. I'd say the plus is that our users see higher-quality content, and we're able to make sure that they're getting information from sources that are very reliable. But the con of some of these changes — because, as you pointed out, every change has some downside — is that it may be harder in some cases for channels that are just getting started, or that are smaller, to be visible when there is a major event, or when people are looking at something that is science- or news-related. But I would say that's a trade-off we've made, because we've realized that it's really, really important.

We learned this lesson the hard way. When we had the Las Vegas shooting, unfortunately, there were a lot of people uploading content that was not factual, that was not correct — and it's much easier to just make up content and post it from your basement than it is to actually go to the site, report, and produce high-quality journalistic coverage. That was just an example of what happens if you don't have that kind of ranking. So, sure, we want to enable citizen journalism, new channels, and other people to be able to report and to share information.
But when we're dealing with a sensitive topic, we have to have that information coming from authoritative sources, so that the right and accurate information is what our users see first.

And that's not an easy trade-off. I mean, your name is YouTube. The whole principle is that you — anyone — can have complete free speech and publish what you want. Or that was the founding principle. I would imagine that this is a trade-off that did not come easily.

Yeah, it is. I lost you for a second there — you broke up a little bit — but you're right. When YouTube first started, it was much more entertainment. It was much more focused on interesting things that you saw, funny videos. Music has always been really big on YouTube, and you definitely want to be able to break the latest artist. So that's something that we need to think about. We have so many artists who got started on YouTube — famous artists like Shawn Mendes or Justin Bieber got started on YouTube — and when the next one posts their video, we want to be able to enable those new artists to break. But breaking artists, or discovering the latest small musician, is very different from looking for something like cancer information. There, you don't want to see someone who is just posting information for the first time; you want to see it from established medical organizations. So what we've done is really fine-tune our algorithms to make sure that we are still giving new creators the ability to be found when it comes to music, or humor, or something funny — beauty, crafts, learning, how-to, all these different categories — but when we're dealing with sensitive areas, we really need to take a different approach.
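The trade-off described above — boost authoritative sources on sensitive topics, demote borderline content everywhere, and let plain relevance win for entertainment — can be illustrated with a toy scoring function. Every field name and weight here is invented for illustration; YouTube's actual ranking system is far more sophisticated and is not public.

```python
def rank_videos(videos, sensitive_topic):
    """Toy re-ranker: on sensitive topics (health, news), authoritative
    sources get a large boost; borderline content is always demoted in
    recommendations (it stays on the platform, it just isn't surfaced)."""
    def score(v):
        s = v["relevance"]                            # base relevance to the query
        if sensitive_topic and v.get("authoritative"):
            s += 2.0                                  # raise authoritative sources
        if v.get("borderline"):
            s -= 3.0                                  # demote, don't remove
        return s
    return sorted(videos, key=score, reverse=True)

videos = [
    {"id": "new-channel",      "relevance": 1.5},
    {"id": "health-authority", "relevance": 1.0, "authoritative": True},
    {"id": "conspiracy",       "relevance": 2.0, "borderline": True},
]
```

Under this sketch, a sensitive query like cancer information ranks the health authority first even though the new channel scores higher on raw relevance, while an entertainment query lets the new channel win — matching the "breaking artists versus cancer information" distinction in the interview.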
Let's move into the governance question, since that is a big part of the forum today. Clearly there's a lot of conversation in the United States, but elsewhere too — we've seen it in Australia — about regulating the big social platforms. You are, I guess, lucky or maybe unlucky that you haven't had to be subjected to a seven-hour grilling in front of Congress — congratulations on avoiding that. Tell me one idea that has traction for governing YouTube that you think is a terrible idea, and one idea that has traction that you think is a reasonable idea.

Oh, I mean, look — first of all, I want to say that I understand where governments are coming from, and we see so many different perspectives across governments. I'd say, generally, we're really aligned with the overall approach. Like governments, of course, we want to keep kids safe. We want to prevent violent extremism on our platform. We want to keep our community safe. So with all the laws around things like hate speech or child safety, those are areas where we're working incredibly hard to figure out how we can do everything we possibly can. I'd say the challenge comes when we get regulation that's very broad and not well defined. What is hate? What is harmful? Those are not things that are easily defined, and there are many, many different interpretations depending upon what you're handling. So the challenge we have is overly broad regulation that requires us to potentially remove a lot of content, which would not be good in the end for our users. And there's a lot of regulation happening right now — I mean, we had this issue with what was Article 13, now Article 17, the copyright regulation in Europe, and we were able to do a lot of work with policymakers in terms of improving it.
But that was a case where we were really, really concerned that if it had gone too far, the way it had originally been written, we really would not have been able to enable so many channels on YouTube. So I get really worried about any kind of regulation that causes us to potentially take down large amounts of content, because that would hurt so many different creators. Creators are small media companies. They represent a lot of diverse voices. They are storytellers whose stories need to be told. They're creating a service with educational content. They're creating jobs — we just came out with our GDP and jobs numbers, which are really impressive. So I always worry when I see regulation that would potentially cause us to hurt a lot of the growth that we've seen from the internet. I'd say we're aligned when it comes to keeping communities safe — we want to do everything we can — and we want the definition of the language to be tight enough that we can actually comply in a way that is clear. And then we also have to be really careful about the unintended consequences of some of the copyright rules, or even something like Section 230: what could go wrong that could cause us to have to remove a lot of content, which would be really devastating for the internet and for the creative economy.

Do you feel it would be possible to reform Section 230 in a way that would still give you the ability to filter content, and give you protection when somebody posts something offensive on your platform, but that would solve many of the problems that lawmakers have seen in that very antiquated piece of legislation?

I mean, one of the challenges I have is that there are a lot of lawmakers who want us to remove more content, and then a lot of lawmakers who want us to leave up more content. So it's not really clear what it is that lawmakers want to solve for in the first place, and that makes it really challenging to address.
And so I think there are many ways to address whatever the objectives are, and we'll certainly work closely with lawmakers to try to achieve those objectives. But right now it's not clear exactly what those objectives are — there seems to be a lot of disagreement about it. So until that's clarified, it's hard for us to figure out exactly what the right next steps are. I would say the next steps are to continue talking, to continue trying to define that more clearly, and to come up with solutions that will keep communities safe but at the same time enable the creative economy, the jobs, the education, and the huge amount of valuable media to continue to flourish and grow.

And what is an example of legislation that you've seen internationally that you think of as sensible, balanced, and within the proper scope?

Oh — maybe I'll start with NetzDG. The first version of it had some really clear language around how we handle hate content and the need to remove it, and we agreed that we want to remove that type of content. We wound up actually first complying with NetzDG and then later expanding our hate policies, and in many ways it was useful that we had done a lot of that legwork for NetzDG. So that would be an example of regulation that was useful for us and where we're aligned. There certainly is more policy coming there, and we are in the process of understanding it and working through it. But the first version was helpful for us — we don't want to have hate on our platform; we want to remove it, and we want to remove it quickly. So that was something where I think we were very aligned and were able to work together.
Let me ask you a big global question. One of the most discouraging things to me about the world right now is the technological split between East and West, particularly between the United States and China. Do you see any path to rapprochement? Do you see any way that ultimately the United States and China are able to figure out the issues that divide them on tech, and that YouTube is actually operating happily in China some number of years from now?

Oh, I don't know. I'm not sure I'm the best person to ask about that, because Google operated in China for only a really short time. So I'm not sure that I'm the right person to answer that question. What I see about YouTube is the humanitarian good that we do. I see us as a global public video library: we have a huge amount of content from which people can learn how to do anything — whether it's a skill, a language, a musical instrument — and you can research any kind of historical talk; you can see all the WEF talks here on YouTube, you can see all the TED talks. So a lot of times I just feel sad if there's a population or a group who can't access that. There could be many reasons for that — there could be policy reasons, but there could also be technological reasons: people don't have access, or they're not connected to the internet, or data is very expensive in their country. So I see the value of being able to offer this library, and hopefully in some ways there will be more bridges built in the future.

Do you have a set of criteria that, if satisfied, would tell you that it's time to go back into China? Or is it so far off in the distance, and so out of the question at the moment, that you don't even have that punch list?

It's not something I'm working on at all right now. There are so many other things that I am working on.
There are so many areas that I'm focused on: our product, our innovation — we launched our Shorts product, and I'm very focused on shopping, enabling more shopping on YouTube. I've also said that responsibility is my number one priority, and as you can see, we've made tremendous progress, but there's still a lot more work to do. So I'm very busy just making YouTube a better product.

And I am very interested in seeing the violative content report, and seeing whether you can get that number down from 0.16-to-0.18 percent down to 0.10. Last question — you know, you're in...

Oh, I certainly will. I'll certainly say that it's a goal of mine that we continue to lower that number, and our team will continue to work on it. Measuring it is always the first goal — being transparent, measuring it. We also break it down in our report in all the different ways: how it was flagged, whether it was flagged by machines, how quickly we took it down, what category it was removed for. So I do think that the transparency we have is a really big step. And what I like about this metric a lot is that it encompasses a lot of the questions that regulators had. A lot of times they would talk about virality: oh, you had a video, and it got a lot of views really quickly. All of that is encompassed in the violative view rate, the VVR, for people to be able to understand, and all the work that we do should bring that number down.

Well, I hope that you bring that number down by becoming better at finding bad videos, and not by lowering your standards, which would be another way to—

No, actually we're raising our standards. That's the thing to remember: we have significantly raised our standards. Just look at 2020 — we had 10 different policies on COVID, and a number of policies around civic and election integrity.
So we keep raising the bar, and we need to make sure that our enforcement gets even better while we're raising the bar — that's a challenge. But we're staffed now: we have the people, the policies, and the technology in place. So I do see the opportunity for us to really continue to improve over time.

I'm just saying that, as a new CEO, I know that KPIs can influence behavior in funny, funny ways. But I hear what you're saying, and that sounds like exactly, obviously, the right way to do it. Okay, last note — we have about 30 seconds left. Tell me something — actually, let me ask you this way: are we gonna be watching YouTube more on AR or VR a few years from now?

Oh, I'd say AR — I'd bet on AR. First of all, there's just so much potential, and I think there's a lot of opportunity with AR in terms of modifying video, modifying the creation of video, how we view videos. And I love VR, but it's been hard to get the headsets and the content and get that ecosystem started. Until there's a real breakthrough, where one of them becomes a lot easier or cheaper, it's gonna be hard for VR. But it will happen — there will be that breakthrough. In the meantime, I think there'll be a lot on AR, and AR can go a long way. I think we're gonna see a lot of improvements with video; we'll be able to improve our lives, have more tools with AR, and have more fun. So I'm optimistic about the future there.

All right, wonderful. Thank you so much, Susan Wojcicki. Let's all leave and go watch some high-quality content on YouTube. All right.