Good evening and a very warm welcome. I'm Matthias Kettemann, and I'm very happy to welcome you to the first edition of "Insights and Power." This is a conversation series organized by the Leibniz Institute for Media Research | Hans Bredow Institute and the Humboldt Institute for Internet and Society, together with the Global Network of Internet and Society Research Centers. Many thanks also go to our media partner, TIDE Hamburg.

Today we focus on an important topic: great expectations, or what research expects from platforms and platforms from research. We know that content governance is a very hard job. Big platforms like YouTube have to push back against state actors; they have to deplatform coordinated inauthentic behavior; they have to fight human rights abuses on their platforms. At the same time, they have to remain attractive communication spaces. The world is evolving. How do they evolve with it?

To answer these questions, I'm very pleased to have a high-level tech expert and an academic here to discuss them together. We have Susan Wojcicki, the CEO of YouTube, a well-known digital platform that all of us are using, used by two billion people across the globe to access information, to share videos, and to shape culture. Susan has been overseeing YouTube's content and business operations as well as engineering and product development. Prior to joining YouTube, she was influential in developing the ad business at Google. She graduated with honors from Harvard University and holds a number of additional degrees. In 2015 she was named to Time's list of the 100 most influential people in the world. A very warm welcome, Susan; very pleased to have you here with us today.

Thank you so much for having me here.
I'm looking forward to our conversation.

Thank you. And we also have Wolfgang Schulz. Professor Wolfgang Schulz is a director of the Leibniz Institute for Media Research | Hans Bredow Institute and professor for media law and public law at the University of Hamburg. He is also research director of the Humboldt Institute for Internet and Society. He has been working on freedom of communication and media law issues for a number of decades and has been instrumental in taking a global view on how the internet develops and how platforms develop their normative orders.

Before we start with the conversation, I would like to invite the audience to participate in the discussion with your questions. You can do that either directly next to the video in our live stream or via the Slido tool; you'll find the link below the video. But now let's jump right into the future by looking back. Susan, do you think that the pandemic, which is still raging all across the world, will be seen as a game changer
Regarding how platforms deal with content?

So for us, when the pandemic hit, it came after many, many years in which we had been working on all of our responsibility work. We had been investing across policy, technology, and enforcement, all areas, and I'm very thankful for that investment over many years, because it enabled us to take action very quickly when the pandemic hit. We had to implement over ten different policies, and we had to implement them incredibly quickly. For example, we saw that people were blaming telecom equipment for being the cause of COVID-19, and we had to say immediately: no, you can't claim that something other than the virus is causing COVID. We had to make those decisions and remove those videos right away.

So across the board, we had been investing for many, many years, and I'm very thankful, because that enabled us to make so many quick decisions and also to be very active in our enforcement, which we do with a combination of people and machines, to make sure that we're removing that content as quickly as possible.

But with the pandemic, we also saw a huge opportunity for us to help deliver authoritative information. So we worked with health organizations in over 80 different countries to make sure we had the latest information from them, and we put that at the top of our feed so people saw it. We worked with YouTube creators to talk about how important it was for people to stay home and to take COVID seriously, which was an issue at first; some people thought it was a chance to go out more. We made sure that creators really emphasized the need to take it seriously, and we used creators to reach a lot of different groups that public health officials otherwise wouldn't have been able to talk to. In the U.S., for example, we had Dr. Fauci meet with various rappers to talk about the importance of precautionary measures to prevent people from getting COVID.
So those are all different things that we did, and I'm extremely thankful for all the years of work ahead of time that made sure we were ready for the pandemic. Not that we knew it was coming, but just to be able to handle such a tough event.

Wolfgang, have you also met with rappers in your scientific analysis of how platforms deal with the virus?

Yes, in a way I have. But I would like to build on one aspect that you, Susan, mentioned a minute ago, because it has been discussed in academic circles as well as with NGOs in this field, and that is the element of automation in content moderation and how that developed during COVID. We all know that with this amount of content, it's impossible to do moderation without technical support. But on the other hand, at least what we heard was that on many platforms automation played a greater role during the pandemic, because the human content reviewers were simply not there; they could not gather in their offices, and so technology took on more. That, I think, has led to more experience in the companies about how to implement automation, but maybe also to seeing the limits of this kind of automation. I would love to hear a little bit about the experience you made with including technology here and maybe, if you want to share that, what the future plans are in this respect.

Sure, sure. Well, definitely, when the pandemic hit, we realized that we were going to have to send a lot of our human reviewers home. We have human reviewers around the globe, and we do that on purpose, to make sure that we can respond to any kind of event and we're not dependent on any one location going down. But with something like the pandemic, we had so many people who would be unable to do their work at home, or it just wouldn't be possible. So we acted within a couple of weeks.
We built a system to move one hundred percent to machines, because we saw the pandemic coming and knew we were going to lose our workers. In a couple of weeks we built this one hundred percent automated system to manage content moderation. Again, we had already been working on this for years and years.

Maybe I can give people a little bit of background about how our automation systems generally work. We basically use our machines to go out and cast a net and identify content that could be problematic, and then we use the human reviewers to say: yes, it is problematic, or no, it's not. Some content, let's say adult content, for example, is very easy to identify with machines. But something like hate and harassment has a lot more nuance and context and could be mixed with political speech, and that's where we would want humans to spend more time understanding the context to make the right decisions. But with something like the pandemic, we didn't have a choice; we had only a very small set of reviewers. So basically we built the system to work entirely with machines, and we used the small set of full-time employees to handle the appeals, when someone said: no, you made a mistake, can you check? So we used our full-time employees for that.

And I think what we saw is that, sure, we can do that, but we made mistakes, and that's not our intention at all. We realize that YouTube is used by so many people to communicate and to express their ideas; a lot of creators generate revenue, and we share half of our revenue with creators. There are many small businesses, and so when we're making a mistake and that's impacting their revenue, that's not good for anyone. So we have gone back to using a combination of machines and people. But we certainly learned during that process what the limits were of just having machines.

Thank you.
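The workflow described here, machines casting a wide net, human reviewers confirming, and a small appeals path correcting mistakes, can be sketched roughly as follows. This is a minimal illustration, not YouTube's actual system; all names, scores, and thresholds are hypothetical.

```python
# Illustrative sketch of a two-stage moderation pipeline: a classifier casts
# a wide net, human reviewers confirm borderline cases, and an appeals step
# can reverse automated removals. All thresholds and names are hypothetical.

from dataclasses import dataclass

REMOVE_THRESHOLD = 0.95   # high confidence: remove automatically
REVIEW_THRESHOLD = 0.60   # gray zone: normally queued for a human reviewer

@dataclass
class Video:
    video_id: str
    score: float            # hypothetical classifier score in [0, 1]
    status: str = "live"

def triage(video: Video, humans_available: bool) -> Video:
    """Route a video based on classifier confidence and reviewer capacity."""
    if video.score >= REMOVE_THRESHOLD:
        video.status = "removed"                 # machine acts alone
    elif video.score >= REVIEW_THRESHOLD:
        # With no human queue available (as during the pandemic), borderline
        # content is removed automatically, trading precision for recall.
        video.status = "pending_review" if humans_available else "removed"
    return video

def appeal(video: Video, reviewer_upholds_removal: bool) -> Video:
    """Full-time staff handle appeals and can reinstate mistaken removals."""
    if video.status == "removed" and not reviewer_upholds_removal:
        video.status = "live"
    return video

# Borderline video, no human reviewers: removed first, restored on appeal.
v = triage(Video("abc123", score=0.70), humans_available=False)
print(v.status)   # removed
v = appeal(v, reviewer_upholds_removal=False)
print(v.status)   # live
```

The point the sketch makes is the one from the conversation: the thresholds and the classifier are the easy part, while the availability of human review determines how many mistakes the system makes and how quickly they are corrected.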
Thank you, both of you. You've mentioned the important position that you, that YouTube, have in deciding how to deal with content, whether to downrank it or to demonetize it. Depending on the jurisdiction you're in, in some countries, like Germany, courts have taken a rather strong position on the limits of what platforms can do: that they must not act arbitrarily, or that they have to stick to their own standards and implement them. So how does that work in practice? Perhaps you can elaborate a bit on how, in light of the many different jurisdictions out there, you navigate this minefield between national jurisdictions, between keeping advertisers happy and keeping users interested.

Sure, sure. Great question. Well, first of all, I'll say that we work around the globe, and you're right, certainly there are many different laws and many different jurisdictions. We enforce the laws of the various jurisdictions around speech, or what's considered safe or not safe. That's true for democratically elected governments; it might get a little bit more complicated with non-democratically elected governments. But for the most part, we enforce those laws, and that actually hasn't been the controversial part. What has been controversial is when there is content that would be deemed harmful but is not illegal. An example of that would be COVID.
I'm not aware of governments having laws around COVID in terms of not being able to debate the efficacy of masks, or where the virus came from, or the right treatment or protocol. But yet there was a lot of pressure and concern about us distributing misinformation that went against what was considered the standard and accepted medical knowledge. And so this category of harmful but legal content has been, I think, where most of the discussion has been.

For us, we look at that content and we think about the role that we play in society. We want to be doing the right thing for our users and for our creators. We also generate revenue from advertisers, and if we are serving content that is seen by our advertising community as not benefiting society, no advertiser is going to want to appear on that, and they're certainly not going to want to appear even on different, positive content if they think the platform as a whole is not being responsible. So we are generally very aligned: responsibility is good for our business. And we have over two million creators on our platform that we share revenue with, so if we're not generating revenue for them, that's a problem for our creators. They create beautiful and incredible content, and we share the majority of revenue with them.

So basically, our recommendation, if governments want to have more control over online speech, is to pass laws and to have that be very cleanly and clearly defined, such that we can implement it. There are times when we see laws being implemented, or being suggested, that are not necessarily clean, or not possible for us to cleanly interpret.
And we've also seen that sometimes laws are passed just for the internet, as opposed to for all speech. I do think that's a dangerous area, when we start to say: sure, you could say something like this in a magazine or on TV, but you can't say it on the internet.

Thank you. I mean, the one group of people who is sometimes happy if the law is a bit vague is law professors, right? Because then they are called upon to write expensive expert opinions. Exactly. So, Wolfgang, what is your take on "harmful content" and the other slightly vague categories that some states, the UK I think among them, are actually considering using as normative concepts?

Yes, and perhaps surprisingly, I'm not so fond of all the regulatory approaches, even being a law professor, because I definitely see that when we enter into speech regulation, and even more so if we enter into this kind of indirect speech regulation, in terms of setting an incentive structure such that platforms are likely to take content down, then we are on a slippery slope. I see, on the other hand, that there is a lot going on, amplification of disinformation and things like that, and I see why governments feel they need to act on it. But having studied the German NetzDG and seen what has been done in other countries copying it: laws mean different things in different countries, and when a country does not have a system based on the rule of law and no free, independent courts, then something that might be a helpful tool in Germany can be something really dangerous for freedom of speech elsewhere.
So that's why, even being a law professor, I do not applaud every regulatory attempt that is made here.

And the second thing that is interesting from a regulatory perspective, and again I would be happy to hear Susan's thoughts on this, is that we lawyers and governance researchers are getting more and more interested in the internal rules, community standards, and terms of service of platforms, because they are a level of speech regulation, if you want to call it that, which has become immensely complex. We actually had the opportunity, Matthias and myself, to do research on that, though not with YouTube, and what we saw is that the mental model with which people in platform companies design those rules is actually the model that a lawmaker has: we have different interests, we have to reconcile them, we have to balance them, we have to come to a fair conclusion. And I would be interested, had we done the study within YouTube, in your thoughts: what is the model with which these internal laws, if I may call them that, these private norms, are construed within YouTube? Is it more the design of a service, or is it indeed done with the different rights in mind that are balanced when you take the decision or create the norms on which content moderation is based? So is it more about creating a community of values or an efficient communication space? Susan, do you want to come in on that?

Sure, sure. I mean, you gave two choices, and I do believe that YouTube is very efficient in communication and also provides a community around values. And I think you're right, there are certainly tensions between freedom of speech, making sure people can communicate their ideas, and the need for us to be responsible as a platform in terms of how we distribute information. Those are definitely two tensions that we deal with.
We actually have this framework called the four Rs, which I can go over really quickly; it's basically our framework of responsibility. First, we look at content that we would say is most egregious, that it doesn't make sense for us to distribute on our platform, and there is a lot of content that I think everyone agrees doesn't make sense: obviously violent extremism, making sure children are safe, hate and harassment. There could be various definitions of where you draw those lines, but there are certainly many categories that we can all agree don't belong on a platform for distribution. So we remove that content, and we publish how much content we remove every quarter.

Then there are a lot of information topics where we believe it's important for us to highlight the experts, the authorities on the subject, similar to Google Search. When you go to Google Search and type in "COVID," you're not going to see something from somebody who just had an opinion yesterday; you're going to see it from a national health organization that specializes in COVID. We want to make sure the same thing happens with recommendations: when you search for an information topic, we're giving you authoritative information about it. So that's raising up the experts on information topics. That might not be true for music; in music, maybe you want to know who's the latest artist that just became hot in the last month, and it could be someone new. But you don't want to do that with information; in general, you want to rely on the experts.

And then we've also had this issue where we have content that is borderline: it technically meets our standards, but it's considered lower-quality content. It's allowed.
It's not like we want to be removing it, but we don't necessarily want it to be recommended. We don't want you to come to YouTube and see a lot of content that is generally seen as lower quality, but it is possible to find it if you search for it or you go and look for that specific creator. That is basically our "reduce" category.

And then lastly, we have a higher standard for advertisers and for reward. There's a lot of content that might make sense to have on the platform but that advertisers don't want to be associated with, even though it could be very important. Even headline news about tragic events: advertisers aren't going to want to be on it, but it's essential for society.

So, in essence, what I'm trying to say is that we have a series of lines that we draw in our policies, but then we have a series of gradations and gray areas in how we handle things in our recommendation systems, to try to straddle those two tensions you're bringing up: responsibility, and also making sure that we're a platform that enables freedom of speech and different points of view.

Thank you. Thank you very much. Let's transition to another topic. Are you getting slightly annoyed by all of those new European rules? Not a leading question, of course. The European Union is getting more and more active in terms of digital regulation, starting with the GDPR, the General Data Protection Regulation, some years ago, which has had global effects; some researchers, like Anu Bradford, call it the "Brussels effect." And we now have four new big bodies of rules coming out of Brussels soon: the Digital Services Act, the Digital Markets Act, and there's a Data Act and an AI Act coming. So how do you feel, as, I know it's a global platform, but still, a U.S.-based company? How do you feel having to talk to your European team all of the time? Does Brussels have a legitimate role in developing rules that apply, in a way, basically globally?

We definitely understand the need for governments to want to regulate how platforms work and how they distribute information, and we recognize, too, the significant role that we play in societies and in how information is distributed. I'd say there is a lot of regulation we already have to be compliant with: copyright, child safety, hate speech, many different areas. But you're right, there is a lot of new regulation coming out, and we have been spending a lot of time making sure that we understand it and that we're working with it.

On the DSA, which is the one that has probably been top of mind most recently: first of all, we appreciate some parts of the current drafts, and we definitely appreciate there being one regulator to start out with, as opposed to 27, which is actually huge and very important. It helps us do a much better job of understanding the law and being compliant, and for our users, too, having one set of laws makes a lot of sense. We also appreciate the intermediary liability protections that we saw, and we think that enables the internet to continue to provide all the value it has provided.

But sometimes the laws are written in a way that is hard for us to understand how to implement or interpret. In the current drafts, for example, there has been a phrase about notification to creators when there is any kind of restriction of visibility. That's where, as a person who builds and manages these platforms, it's hard for us to understand what that even means, because there are always changes in the visibility of videos. We have a large set of new videos all the time, and we're ranking them differently based on our users' interests and what's happening in
the news. And so that could result in us, for example, sending hundreds of millions of emails to people saying there was a change in the ranking of your video, which happens all the time. There is other language, and I think that's the Commission's language, that talks not about the restriction of visibility but about the restriction of accessibility; if we were to change in some way whether content was accessible, that would make more sense.

There is also language around audits and the frequency of those audits that could be just an undue burden; it talks about auditing with any kind of product change, and that's really hard for us to know how to interpret, because there are so many product changes. Our products are constantly evolving; an internet service is always updating and changing and addressing what our users want and need. So something like an annual audit makes a lot of sense instead.

In general, I think we understand, but there are details in the implementation that we sometimes worry will have unintended implications. And that's generally how we think about it: how can we work closely with governments to make sure that what they want
from a protection standpoint is achieved, but that the unintended consequences that could hurt the platforms or the users or the creators don't happen.

Thanks. First of all, I admire your optimism that this will lead to a reduction in the number of regulators; my own take would be that we will have more in the end. But apart from that, without going too much into the details of the DSA, the broader concept behind it is interesting from our perspective, especially those elements that say that your terms of service and your community standards will, in a way, be regulated. So this kind of indirect regulation comes in, and there I'm still wondering whether that is a clever move, because it leaves platforms the room to create their product by designing these kinds of rules and community standards, and at the same time gives the opportunity to make sure that some basic legal requirements are met. On the other hand, it becomes extremely complex, and since you argued that it's important to see how such rules can be implemented in a company like YouTube, I think it would be extremely interesting to have that kind of view. We might appreciate it as a smart regulatory move, but if it is not really feasible in practice, then it looks good on paper but will not have the desired effect.

Yeah. One of the authors of the GDPR likes telling the story that whenever the companies would tell him "this can't be done," he would find some tech person who would tell him how it could be done. And I'm not sure they were always right. Taking up from here: how do you see the interaction between YouTube and Brussels working better in the future?
I mean, companies are reported to be spending more on lobbying in Brussels than ever. Are there acts that didn't quite work out the way you planned, or is that just a fact of today's regulatory landscape, that you have to deal with certain, let's say, less than desired details in the regulation?

I think I have accepted that there is going to be regulation and there are going to be updated laws, and that's going to be the future: that we will continue to work closely with governments, and, like I said, I definitely understand the need and why that's happening. A lot of the challenges I have seen are that the implementation doesn't always make sense. Maybe the intention is good, but the actual implementation isn't possible, or is difficult, or has these negative unintended consequences. So our view has just been the need for us to continue to discuss this and to understand each other. Certainly, having technical experts who are part of drafting these laws could help, because they would have a better understanding of what is possible and what's not possible, and could make sure we're working together on that.

There were certain cases, I mean, I can talk about Article 13 at some point. I think we got to a good place on that in the end; we're still working out some of the details, but at one point the law was written in a way that I was very seriously concerned was going to shut down, in a significant way, YouTube's distribution and content creation in Europe. So that's why we put a lot of energy into explaining what the implications were. And copyright, for example, is a very complex area; even after all this time of trying to understand all the details of it,
There are still a lot of nuances that lawyers and various experts in the field are explaining to me. So it's just an example of how something that was well intentioned can sometimes have these unintended consequences, and that's why we think it's important that we're able to explain it and to come back.

In many ways, people might look at YouTube as a big company, and we are big, and yes, an American company, but we are a platform that represents millions of small creators and businesses. If something happens that causes those creators to not generate revenue or to lose their distribution, that has all kinds of consequences. We paid out 30 billion dollars in the last three years to all of our creators; there are over two million of them, and they don't have any kind of lobbying arm. It's not like they're meeting with regulators; they're small businesses. Sallys Welt, for example, is a creator in Germany; she does cooking and baking, has almost two million subscribers and a hundred employees, and sells baking products. Or Kurzgesagt, which does educational content and has almost 17 million subscribers. They're not going to have a lobbying arm; they're not going to have someone who goes to Brussels and follows the laws and follows the GDPR and the DMA and the DSA. So we need to do that for them.
Otherwise it has various implications.

Thank you. I'm pretty sure that Kurzgesagt at some point would do a little video on the importance of good regulation. But then again, one of the ideas we had when we created this series, of which today is the first event, is to have a better exchange between platforms and academia. That's something which is also in the DSA, and also, by the way, in the Network Enforcement Act, a new clause of which came into force just a couple of days ago: better access for academics to platform data. Susan, do you think this is a step in the right direction? Is better access by scientists to platform data good, and will it lead to better research results, or should we be asking other kinds of questions?

Yeah, I think it is good, and I do think academics and experts play a key role in having more transparency and in being able to understand our platforms, how they work and how they change. There certainly have been various academic reports and analyses of YouTube, but we recognize we need to do more to give more tools to academics, so they can better understand how our systems are working, report on it with academic integrity, and publish what makes sense. So that is something we're working on: what is the right way for us to open up further so academics can have more insight into our business? Wolfgang?
Yeah, I'm happy to hear that, obviously, because we are interested in that. But first, a bit of self-critical reflection: what I very often hear from academic colleagues, and maybe from myself, is "we want more data," but we are not really specific about what data we want and what for, what questions we want to answer. I think there is, on our side, on the academic side, the need to be more concrete. So to say: if we want to learn about the distribution and effects of disinformation, we need data that shows what effect fact checking and labeling and things like that have; what do they actually mean in terms of sharing content and so on? So: be more specific, and then maybe enter into closer cooperation with YouTube and others, with specific demands. I would love to see more cooperation here, obviously, and to talk about what the obstacles are right now and what the mutual benefits could be. I know it's a complex thing; it's about data protection and data quality and a bunch of questions that come with that. But if one could develop a path to more cooperation here, I think it would be beneficial to all, and to society.

Yeah, I agree. Our goal is that the more regulators and academics understand YouTube, the more aligned we will be in how we handle regulation going forward. I actually always find that when I meet with a representative or a political person who has a YouTube channel, they can ask me a lot of questions, and we can have a really thorough conversation about where we both agree we could do better and what currently is working well.
So in general, the more we can be aligned, the more we can have a common understanding of the platform, the value we provide, and where we need to do more, the better we'll be working together.

And there is another aspect which I find extremely interesting: because you and the other platforms have to deal with a lot of problematic content, you know so much about communication, about extremism, and so on. I think some of the biggest think tanks on extremism anywhere in the world are within the platforms, because they have to know what's happening there, how the codes that specific groups use change, and so on. And I think there could be a lot of cooperation, and maybe there already is, between those experts and the people in academia working on these kinds of things.

Yeah. I would also say that what's been really helpful is when there are experts. Violent extremism, when I first started working at Google, I never thought that would be something any of my work would ever be involved in. There are organizations like GIFCT, the Global Internet Forum to Counter Terrorism, which is a really valuable way for both platforms and governments to come together, hire expertise, and come up with a lot of shared learnings. We see that the Tech Coalition, for example, with regard to child safety, is another important one. So I've always been really supportive of having third-party groups of experts that can hire a set of people who are very knowledgeable, work with tech companies, and come up together with the right approaches to regulation and the best shared practices.
That, to me, has been very successful.

That's actually a perfect segue to the segment where we look at some of the questions we've received from our audience across the world. You mentioned third parties and other groups. One question, from Philip, asks whether you have entertained the idea of somehow democratizing the content moderation process by implementing some sort of peer review or public review, especially regarding scientific content, as that gets more and more complicated. So perhaps, rephrasing: can there be a Wikipedia-ization of content review, or is that too difficult to entertain at that scale and speed?

Yeah, well, I think there are a lot of interesting ideas there. Let me answer on a few different levels. First of all, we enable anyone to flag information. We also have trusted flaggers, people who are experts in their area and flag information that is problematic. But I think here you are talking more about institutional or academic content and a peer-review type of system. Google's initial algorithm, PageRank, in many ways benefited from that same idea: it didn't actually have peer review, but it looked at who was linking to those articles and at the trust of the sources that were linking to them. So in many ways it was a technical implementation of this peer-review idea. And I do think there is a lot of value in having the discussion and letting other groups endorse documents; that's certainly something we could look at further. I've always wanted to do that in the comments. We have actually been experimenting, for example, with timed comments, so you can say, at this moment, right where they say X: do you agree or not?
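The PageRank idea described above, a page's score derived from who links to it and from the trust those linkers themselves carry, can be sketched in a few lines. This is only an illustrative sketch of the textbook algorithm: the damping factor, iteration count, and toy link graph are assumptions chosen for the example, not Google's or YouTube's actual implementation.

```python
# Textbook PageRank sketch: a page's score depends on who links to it
# and on the trust (score) of those linkers. Damping factor and graph
# are illustrative assumptions, not a production implementation.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling node: no endorsement to pass on
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share  # endorsement flows along links
        rank = new_rank
    return rank

# A toy "peer review" graph: both A and B cite C, so C earns the most trust.
graph = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
```

In this toy graph, C ends up with the highest score because it receives endorsements from two sources, while B, which nobody links to, keeps only the baseline score; that asymmetry is the "technical peer review" effect Susan alludes to.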
Timed comments enable more discussion and commentary, not about the video as a whole but about the specific items that were mentioned, and that is a feature we currently have. But anyway, that is a very rich idea for how to add more peer commentary and more discussion around specific talks or videos.

Thank you. Wolfgang, do you have a comment on that?

Yeah, there are a lot of things one could say from an academic perspective. I think one of the ideas behind the question is this: we see that these platforms have an immense role in private and public communication, so there is the idea of bringing people, the citizens, in, not just as users and content creators, but as having some say in how the platform is shaped and how the rules are implemented. There are a lot of ideas around; you can see the whole debate on oversight boards and things like that as moving in this direction. For us in academia, it is interesting to see what different models are conceivable here and what the right path forward could be. My take is that there is not just one path; there can be different ways to experiment with these kinds of things.

One thing I'll say is that we have been very supportive of working with different third parties to help think through some of the frameworks at a high level. And again, as I mentioned, we understand the need for regulation at a high level. But what we have seen is that when there is a crisis, the platform needs to be able to make the decision immediately; it cannot go to some kind of third party for that decision. When people were destroying telecom equipment because of COVID conspiracy theories, we had to decide within an hour: are we going to allow this or not? No, we're not. Decision done.
So there is a lot of important discussion, in general, about our platforms. But to be responsible day to day, the platforms need to be empowered to make those decisions as current events are happening.

Even though, I would assume, the science behind that particular question wasn't too difficult to parse.

No, it wasn't: no medical authority thought that 5G towers were the cause of COVID.

Yes. Thank you. Another question we received asks specifically: what do you think will be the next big thing when it comes to content moderation? Will some breakthrough in machine learning lead to semantically superior machines that will actually be able to do a very good job of moderation on their own, or will humans always have a certain role?

Well, I think humans will always play some role. We have seen the machines get better; there is no question that you can continue to train them and that the AI will continue to improve. But I do think humans will always play a role, in particular with really sensitive content. Content moderation has been complex for us, and realistically it is complex for everyone, to understand what our policies are. We do publish them, and we try to be transparent. So I am hopeful that going forward we can continue to have a good common understanding of what the guidelines are, and continue to work with experts and third parties to refine them. We have found it takes a lot of detailed work to define policies so clearly that they can be implemented consistently. Consistency is very important: when we have a policy, we need to make sure that no matter where in the world a video is submitted from, we reach consistent rulings on it.
And we need to be able to explain that to our teams. So certainly, using machines to help improve consistency, to find more of the potentially violative content, and to do so with higher accuracy, would all be good improvements for us in the future.

And with that you are very much in line with recent judgments in Germany calling for more coherence and more consistency across moderation practices. We are nearing the end of this first edition of Insights and Power. Perhaps one last wish: Susan, what can science do better, and Wolfgang, what can platforms do better? Susan, if you want to begin.

Again, we recognize it is a partnership, and we plan to continue to work closely together. I do think the more understanding there can be of YouTube, the better; a lot of what we do is simply working to help people understand what YouTube is, how it works, and the role that we play. And certainly the more we can have a shared, common understanding of the value we provide, of how our systems work, and of where we agree there has to be change going forward, and really see that as a partnership, the better. So my goal is to continue to work on both the partnership and that shared, common understanding.

Thank you.

Yeah, I can definitely catch this ball and throw it back into the court of the companies, and of YouTube, and I definitely see it as a partnership as well. We very often come to the conclusion that each stakeholder has its role, and that each one's responsibility has to be defined. Between companies and academia there are already some really fruitful collaborations, but if YouTube and others are open to doing more here, then we are very happy to do so as well. Thank you.
Thank you very much, both to Professor Wolfgang Schulz and to Susan Wojcicki, CEO of YouTube, for joining us today at this first edition of Insights and Power, the conversation series between platform decision makers and internet scientists. Thank you so much to our colleagues at the Hans-Bredow-Institut, to Christian Grauchberg at the Humboldt Institute for Internet and Society, and to our media partner TIDE Hamburg for making this possible. Thank you so much for joining us; it was a pleasure having both of you here online. I wish you a wonderful day, and to all of you listening: thank you so much.

Thank you so much for the time. Thank you all.