Hello, welcome everyone. This is our open seminar. My name is Jan Piasecki, and I'm very pleased to welcome Professor Stephan Lewandowsky from the University of Bristol, who will give a talk entitled "Demagoguery, Technology and Cognition: Addressing the Threats to Democracy". Professor Lewandowsky is a psychologist currently working at the University of Bristol in the United Kingdom, but he is from Australia and has also worked in the United States. His research currently focuses on misinformation, corrections, and the debiasing and debunking of misinformation. He currently co-authors articles on ethical dilemmas surrounding content moderation, and he was also part of the large team of researchers who developed a toolbox of interventions for countering misinformation. So in a second I will pass the microphone to Professor Lewandowsky, but after his speech every participant is of course invited into the discussion, and it is possible to take part in two different ways: you can raise your virtual hand so I can see that you want to ask a question, or you can send a question into the chat. I will be screening what appears in the chat and I will read your questions aloud to our guest. So if there are no technical issues, I think we can start. Thank you. Thanks very much for the introduction. I hope you can hear me and that you can see my slides. Okay, very good. So yes, what I want to talk about today is my core research, which deals with addressing threats to democracy and in particular the effects of misinformation. I should start out by noting that a very broad team is contributing to this research; here are some of the people involved, and I want to acknowledge them upfront together with my funders, because none of this would have been possible without them. So the departure point for my research is the observation that democracy is in retreat in many countries.
What I'm showing you here are data from an institute known as V-Dem, which has been looking at the state of democracy in many countries around the world. They have a measure that is a composite of a large number of indicators relating to electoral fairness, independence of the judiciary, and so on, and they publish annual reports that tell us how democracy is doing in various countries around the world. Now, on the graph that you can see, anything below the diagonal represents a decline between 2011 and 2021. And as you can see, there are quite a few countries below the diagonal, including well-established democracies like the US, and unfortunately you will also see Poland and Hungary there. Some European countries are considerably below this diagonal, suggesting that in those countries democracy is shrinking based on these indicators, and that is confirmed by other institutes. So, you know, I'm moderately confident to say: well, we have a real problem there. So the question is why? Why does that happen? Now, there is a multitude of answers to that question. I can't cover them all. All I can do is focus on three major players in this space and talk about how they interact in giving rise to this decline in democracy. What I've chosen to focus on are these three things: what I call demagogues, social media, and, importantly, human cognition — that is, all of us, how we think, and how that affects democracy. Now, I'm going to argue that demagogues and social media are in a sense in conflict with how we think as it relates to democracy: demagogues and social media can exploit the frailties of human cognition, with adverse consequences for democracy. And I'm also going to argue that there is a natural affinity between demagogues and social media, and that the two can reinforce each other in their negative effects on democracy.
So what I want to do is take up these three issues in turn and explore how they relate to each other, starting out with what I call demagogues. It's hard not to talk about Donald Trump when you're talking about misinformation or demagoguery, because he was, well, very good at it. If you count up his record of misleading claims, you find that according to fact checkers he made more than 30,000 false or misleading claims during his presidency. Now, we can debate the details of some of these claims, and maybe there's some truth in a few of them, but I think it's established beyond doubt that he was unique in his ability to say things that had little connection to reality. So the first question I want to ask is: well, what's the fallout? What happened? What are the consequences if a politician is in power who is lying roughly 20 times a day? Well, the first thing I want to examine is whether demagogues such as Donald Trump are able to set the agenda — the political agenda, the public agenda, and the agenda of the media. Now, this is an interesting question, because in political science the conventional wisdom — by that I mean research dating back to the 1970s, 80s and 90s — suggests that the media are the principal agents of agenda setting. And there is quite a bit of support for that: you can run experiments showing that the media are causally involved in setting the agenda of public discussion, as has been done by these researchers. But I would argue that that conventional wisdom may be out of date, because I think — and I will show over the next few minutes — that social media affords an opportunity for agenda setting by others. You don't need the mainstream media and their editors to set the agenda. Instead, you can be Donald Trump, and I will show you how he has done that, as one example of how demagogues can affect public discourse. Now, I want you to think back to November 2016.
This seems like a very long time ago now, with all the events the world has experienced since then. That was right after Donald Trump got elected, but before he had assumed office. And one evening in November 2016, Donald Trump started tweeting vociferously, vigorously, about an event that happened at a Broadway theater performance. He was very upset that his Vice President-elect had attended a play on Broadway, and that at the end of the play the cast addressed Vice President-elect Pence and pleaded for a diverse America, because they were concerned about the consequences of Donald Trump's election. Now, that caused quite a stir at the time, as you can see here from a Google Trends analysis. What this graph shows you is the number of times the American public searched for the terms Trump and Hamilton, and as you can see, around November 19th there was a major uptick in that search term — unsurprisingly, because Donald Trump tweeted about Hamilton extensively on that day. Now, what is interesting here, as a very suggestive hint, is that the date of Donald Trump's Twitter campaign coincided with the day on which he settled the lawsuit against his so-called Trump University. He settled for $25 million, including a $1 million penalty to the state of New York. So this was not something that was good for him politically. And if you now look at that Google Trends graph again, you will discover a blue line at the bottom, which reflects the number of times the public in the United States searched Google for "Trump University settlement". And as you can see, they didn't — not much, compared to the Trump-Hamilton search term. Now, we can't draw too strong a conclusion from this, but as a first approximation, as a hint that Donald Trump might be using Twitter to divert attention from something he doesn't like, it is informative, and I find it quite interesting.
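To make this pattern concrete in code, here is a minimal sketch of the kind of comparison behind a Google Trends graph. The daily numbers below are invented for illustration — they are not the real Trends data — but they mimic what the slide describes: a large spike for the Hamilton search term around November 19th, and barely any interest in the settlement term.

```python
from datetime import date, timedelta

# Hypothetical daily search-interest values (0-100, Google Trends style)
# for November 2016. These numbers are made up for illustration only.
hamilton = [2, 3, 2, 4, 3, 5, 4, 3, 6, 5, 4, 3, 5, 4, 6,
            5, 7, 12, 100, 68, 35, 18, 9, 6, 4, 3, 2, 3, 2, 2]
settlement = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
              1, 1, 2, 6, 4, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0]

start = date(2016, 11, 1)  # day 0 of both series

def peak_day(series):
    """Return the date on which a daily series reaches its maximum."""
    i = max(range(len(series)), key=lambda k: series[k])
    return start + timedelta(days=i)

print(peak_day(hamilton))               # both series peak on the same day...
print(max(hamilton), max(settlement))   # ...but at vastly different volumes
```

The point of the sketch is simply that the two series peak on the same day while differing enormously in magnitude — the "shiny object" dwarfs the damaging story.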
So what my colleagues and I set out to do was to explore this possibility a bit more systematically. Might Donald Trump be throwing out these shiny objects? Might he be tweeting about something over here to divert attention from something over there that is damaging to him politically? So what we did in this study was to analyze all of the New York Times coverage and all of the ABC News headlines during a two-year period right at the beginning of Donald Trump's presidency, during the Russia/Mueller investigation. You may recall that there was an investigation into possible collusion between the Trump campaign and the Russian government during the election. All the reports about that investigation were arguably not good news for Donald Trump; at the time, it was politically damaging for him to have that investigation ongoing. So what we then did was to relate the coverage in those two media outlets to Donald Trump's tweets. We wanted to know whether we could detect a relationship between media coverage and Donald Trump's responses, and conversely, how the media would then respond to Donald Trump. That is what we were interested in: the interplay between this leader — and demagogue — and the media. So what we did was to follow him. Now, I'll have to explain the graph here a little before I show you the data, because you have to understand what the graph is plotting. But once you know, it will be immediately obvious what it is showing. What we did was, throughout that two-year period, to look at all possible tweeted word pairs in Donald Trump's tweets. And there were thousands of them, obviously — there's a whole vocabulary that Donald Trump uses once you strip out all the function words and the irrelevant stuff. You can then take two words as a pair, which will capture the meaning of a tweet, because tweets are very short.
So if you take two words as a pair, you have a pretty good idea what the tweet is about. What we then did was to fit a regression model for each such possible pair, trying to predict how often that pair occurred as a function of the New York Times' coverage of the Russia/Mueller investigation. We then plot that regression coefficient on the x-axis. We then fit another regression model that looks at how the New York Times responds the next day, in terms of its Russia/Mueller coverage, as a function of Donald Trump's tweets from the day before. We plot that on the y-axis. Okay, what does that mean? Well, it means that each pair of words in Donald Trump's tweets has a position in this two-dimensional space. There's a pair x, whatever that may be, there's a pair y, there's a pair z. Each word pair sits in this space, and we can work out ahead of time what should happen in this space of tweets under various scenarios. Now, suppose it's all noise. Suppose nothing is happening — whatever Donald Trump is tweeting has nothing to do with the New York Times, and whatever the New York Times says the next day has nothing to do with Donald Trump. Then all the word pairs would form a blob in the middle of the space, centered on zero. That is nothing: our null distribution, and what we would in fact expect for items that are of no consequence to Donald Trump. Now, if Donald Trump tried to divert attention from Russia/Mueller, then what should happen is that in response to New York Times coverage he should start tweeting stuff — Trump should become more active, and there should be points to the right, because anything to the right means more of that pair in response to the New York Times covering Russia/Mueller more. All right, we don't know yet what these word pairs might be.
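Since I'm describing the analysis only verbally, here is a rough sketch in code of the two regressions that place one word pair in this space. Everything below is simulated — the coverage counts, the word-pair counts, and the built-in effect size are assumptions for illustration, not our actual data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 200

# Simulated daily counts of New York Times Russia/Mueller articles.
nyt_coverage = rng.poisson(3, n_days).astype(float)

# Simulated daily counts of one tweeted word pair, built so that tweeting
# of this pair rises with same-day NYT coverage (a "diversionary" pair).
pair_count = rng.poisson(1, n_days) + 0.8 * nyt_coverage

# x-axis: regression of the pair's frequency on same-day NYT coverage.
beta_x = np.polyfit(nyt_coverage, pair_count, 1)[0]

# y-axis: regression of next-day NYT coverage on today's pair frequency.
beta_y = np.polyfit(pair_count[:-1], nyt_coverage[1:], 1)[0]

# (beta_x, beta_y) is this word pair's position in the two-dimensional
# space: right of zero means Trump tweets the pair more when coverage is
# high; below zero would mean coverage drops the next day. Only the
# x-effect was built into this simulation, so beta_y hovers near zero.
print(beta_x, beta_y)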
I have no idea, but there should be a whole bunch of points over to the right if Donald Trump is responding systematically to the New York Times coverage. If that is successful, and the New York Times is then diverted and suppresses its coverage of Russia/Mueller the next day, then we should find those points in the lower right quadrant, because being below zero on the y-axis means reduced coverage as a consequence of those tweets the day before. So that gives us a guide for interpreting what I'm now showing you, which is the result of an analysis looking first at neutral words; then I'll turn to the interesting part, which is the Russia/Mueller investigation. So here we have four topics: the economy, football, gardening, and skiing. Now, the economy — we don't know, that could be relevant to Donald Trump — but football, gardening, skiing, you would think those are pretty neutral and nothing much should happen. And that is precisely what you can see, because if you look at the top panel, all the light blue dots sitting there in the middle are within this red circle, which happens to be the 95% confidence contour of this analysis. Basically nothing happens, which is as you would expect. The word clouds at the bottom, by the way — well, they represent the New York Times coverage that we analyzed, and they just confirm that when we look for articles on skiing, yes, they're actually about skiing at the Olympics and mountains and so on. So that confirms that what we're talking about here is, as intended, something that — with the exception of the economy — is pretty apolitical. Okay, and Donald Trump, you know, does nothing. At most you can say that gardening puts him to sleep, because there are a lot of points on the left, which means fewer tweets in response to gardening coverage.
I suspect that's because the New York Times mainly talks about gardening on weekends, and maybe Donald Trump didn't tweet as much on weekends. All right, so that tells us that when we expect nothing, we get nothing. But what about Russia/Mueller? Remember, anything about Russia/Mueller would have been bad for Donald Trump — certainly in the New York Times and on ABC News, which reported on it without at the same time trying to support Donald Trump. Now look what happens. When the New York Times reports on Russia/Mueller, Donald Trump starts tweeting a hell of a lot more — a lot of points to the right. He gets very active, agitated one could say, in response to New York Times coverage. And the next day, the New York Times, in response to those tweets, reduces its coverage of Russia/Mueller. They respond to Donald Trump by printing less about Russia/Mueller. That's what this point cloud in the bottom right is about. It sits outside the significance boundaries, so something significant is going on. And it's not just the New York Times; it is also the ABC headline news. So it's not just the print media — the TV shows precisely the same effect. What's very interesting here is that under any other circumstance, the coverage in the New York Times and on ABC News is actually uncorrelated. We looked at that: interestingly, they do not tend to talk about the same things. But the moment Donald Trump tweets about Russia/Mueller, their coverage becomes synchronized — there's a correlation there — and one aspect of that correlation is that coverage of Russia/Mueller is depressed. So arguably, what is happening is that when the New York Times talks about Russia/Mueller — and this is the word cloud on the left that tells you what the coverage is about — Donald Trump starts tweeting like mad. What is he tweeting about? He is tweeting about the words you can see on the right.
Those are the words in the pairs that were in this bottom right quadrant. That is what Donald Trump is talking about: jobs, jobs, jobs, China, tax, North Korea. Those were all items of political strength for him, and he tweets more of that the more the New York Times covers Russia/Mueller. And in consequence, the next day, the New York Times talks less about Russia/Mueller. Now, the word clouds here exaggerate the effect size; the whole point of this animation is just to hammer home what we are observing. And what we are observing is that social media permits agenda setting by demagogues. The media could be diverted from issues that Donald Trump didn't like. So the question is: was it just Donald Trump who did this? Well, we looked at this very recently, my team and I, by looking at the tweets of all members of the US Congress between 2016 and 2022 — 1.6 million tweets. What we did was to extract all the links being tweeted, and we checked whether the domains in those links were trustworthy or not. We used something called NewsGuard, a commercial product that employs journalists to go through systematic checklists to rate the quality of each domain out there. So it doesn't rate individual articles — they're not fact checkers, they are credibility assessors. But nonetheless, if you constantly link to low-quality sources, then you're probably sharing misinformation sooner or later; if you share high-quality sources, well then, not so much. And what we find is that the share of poor-quality domains shared by members of Congress goes up the more they are on the ideological right. In the United States, red is Republicans, blue is Democrats. And what you can see here is that there are more red points above that horizontal line, which means those are the people who share a large number of low-quality sources.
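To illustrate the domain-level approach — rating the source rather than the individual article — here is a toy sketch. The URLs, the scores, and the cutoff of 60 are all assumptions made up for illustration; NewsGuard's actual ratings and threshold conventions may differ.

```python
import re
from urllib.parse import urlparse

# Hypothetical NewsGuard-style domain scores (0-100); invented for
# illustration, not real NewsGuard ratings.
domain_scores = {
    "nytimes.com": 87.5,
    "reuters.com": 95.0,
    "shadyexample.net": 20.0,
    "clickbait-example.org": 35.0,
}

# Hypothetical tweets containing links.
tweets = [
    "Great reporting https://www.nytimes.com/2021/01/01/some-story.html",
    "Read this! http://shadyexample.net/truth",
    "Markets up https://reuters.com/markets/story",
]

def untrustworthy_share(tweets, scores, threshold=60):
    """Fraction of tweeted links that point at low-quality domains."""
    links = [u for t in tweets for u in re.findall(r"https?://\S+", t)]
    domains = [urlparse(u).netloc.removeprefix("www.") for u in links]
    rated = [scores[d] for d in domains if d in scores]
    low = sum(s < threshold for s in rated)
    return low / len(rated)

print(untrustworthy_share(tweets, domain_scores))  # 1 of 3 rated links is low quality
```

Computing this share per legislator and plotting it against an ideology score gives the kind of scatter described above.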
Among Democrats on the left, or even Republicans who are not that far to the ideological right, you find that most of the points are below the horizontal line, which means they share essentially no low-quality information. So this goes to show — you can see it here again — that over time there clearly is a trend within one party, the Republican Party, to share increasingly low-quality information. So it's not just Donald Trump; it is also other members of his party who, since he got elected, have sort of drifted away from quality media more and more over time. Now, this is interesting to know, because previous research by lots of people — Brendan Nyhan, Andy Guess, Jason Reifler — has shown repeatedly that conservative voters in the United States consume and share more misinformation than liberals. It was never clear, though, why that was. It was just a statistical observation — and there's no doubt that it is the case, because it's been shown over and over again; it doesn't matter how you look at it, you always find it. Now, one way you could explain it is by saying that political elites are significant vectors of misinformation, as we have shown in our study, and that the partisans are basically emulating what their leaders are doing. That, I think, is an important conclusion to draw, because it's our first indication that demagogues have an affinity with social media: it permits them to set the agenda in ways that, prior to social media, arguably would not have been possible. So that is, I guess, the first conclusion I want to draw — that there is this affinity. Now I want to examine, putting social media aside: how do demagogues affect how we think? What's the fallout of Donald Trump on our minds, on our cognition? How do people think in response to demagogues?
My departure point for this analysis is this rather surprising finding: throughout his presidency, most Republicans — three-quarters of Republicans — considered Donald Trump to be honest. Now, bear in mind that the fact checkers identified more than 30,000 false and misleading claims. So at first glance you might wonder: wait a minute, what's going on here? How can somebody be honest who is wrong that often, and who sometimes is clearly lying, because he knows he's saying something that's not true? It's very easy to show. If he says, "I was in the White House all weekend," but in actual fact he was out golfing — well, come on, that's a lie, right? There's no way to camouflage that. It is just wrong to say it. It's dishonest. But the perception of honesty — where does this come from? Well, let's go back to day one of Donald Trump's presidency, when his press secretary claimed that Trump's inauguration had "the largest audience to ever witness an inauguration." And he was clearly talking about people attending the inauguration in person. Now, that was not true, and there's so much evidence of it not being true that we don't really have to discuss it. Here are photos taken at the same time of day at Donald Trump's inauguration and at Barack Obama's second inauguration — Obama on the right, Trump on the left. There's no question: there's hardly anybody attending Trump's, while at Obama's you can barely see the ground through the crowd. No question that this claim was false. Now, why would somebody make a false claim that is so easily disproven, as in this particular case? Well, I would argue, together with some other people, that this so-called shock and chaos disinformation reflects an altered notion of truth: what matters is no longer what really happened, but what people think should have happened, or what they feel is happening.
And, you know, here's Donald Trump himself saying to his audience at a rally: "Just remember, what you're seeing and what you're reading is not what's happening." Don't believe your lying eyes; just let me tell you what you should be seeing — that's the subtext to this. Of course, he didn't say that out loud, but that is the subtext, and people who follow Trump are okay with it; they participate in it. Now, here are data from an experiment done a day or two after Donald Trump's inauguration, where the experimenters presented participants with the two photos I've shown you from the inaugurations. Their task was to pick the photo with more people in it. Now, it couldn't be simpler: there are lots of people here and a few people there — which one has more people in it? Guess what: that one. And what the data here show are the error rates. If you look at people who didn't vote, or people who voted for Clinton, pretty much nobody picked the wrong picture. Those error rates are just, you know, keystroke errors — somebody slipped; they're not real errors in the sense that people got it wrong. But now look at Trump voters: 11% and 26%. And the higher number, 26%, is for highly educated Trump voters. The highly educated picked the wrong photo 26% of the time — more than a quarter of the time. Why? Well, I would argue that they knew there were fewer people attending Trump than Obama, so they knew which picture they should have picked. However, they also knew that it was politically contentious, that Trump had made a claim at odds with reality — and they support Trump. So they picked the wrong picture to express their support for Trump, and hence engaged in what is called participatory propaganda: partisans pick up a message set by a demagogue and run with it, to express support for their favorite politician, irrespective of its accuracy.
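The error rates just described are simple proportions, so the comparison across voter groups can be sketched with binomial confidence intervals. The counts below are invented to match the percentages mentioned (roughly 1%, 11%, 26%); the actual study's group sizes were different.

```python
from math import sqrt

# Hypothetical counts in the spirit of the inauguration-photo study:
# (number who picked the emptier photo, group size). Illustrative only.
groups = {
    "Clinton voters": (3, 300),
    "Trump voters (less educated)": (33, 300),
    "Trump voters (highly educated)": (78, 300),
}

def error_rate_ci(wrong, n, z=1.96):
    """Error rate with a normal-approximation 95% confidence interval."""
    p = wrong / n
    half = z * sqrt(p * (1 - p) / n)
    return p, (p - half, p + half)

for name, (wrong, n) in groups.items():
    p, (lo, hi) = error_rate_ci(wrong, n)
    print(f"{name}: {p:.1%} [{lo:.1%}, {hi:.1%}]")
```

With counts like these, the interval for the highly educated Trump voters sits far above the near-zero rate for Clinton voters, which is the pattern the slide shows.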
And here we can see that a quarter of highly educated Trump voters were perfectly happy to pick the wrong picture — knowingly. Don't tell me their eyesight wasn't good enough; they knew what they were doing. They were expressing support for Donald Trump, and that outweighed the evidence in front of them. That's not altogether surprising when you dig into this more deeply and look at the literature in political science and philosophy on the relationship between populism, as expressed by Donald Trump, and the truth. Now, populism — just to make sure we're on the same page — is a political ideology that relies on one thing: dividing the population into the people and the elite. The elite is corrupt; the people are virtuous. Now, that's an entirely artificial construction, because, of course, depending on what country you're in, different populists will come up with completely different divisions. There is no such thing as a monolithic people and a monolithic elite pitted against each other. It's ridiculous. But that is what populism relies on, and it is inherently anti-democratic because it negates pluralism. It does not acknowledge the possibility that different parts of the population may have very different opinions. No, for populists it's just the good people versus the bad elites. Now, the corollary of this is that populists invariably accuse the elites of lying, and they appeal to the common sense of the people. That means they are denying truth seeking as a shared goal of society, and instead they are appealing to authenticity as a replacement for factual accuracy. What's authenticity? Well, authenticity is simply me saying what I feel in the moment, speaking my mind, being honest about how I feel. That is authenticity, and in a populist worldview it's more important than accuracy. So if Donald Trump says, damn it, I have the largest audience, then that's the way he feels about it.
And he is authentically speaking that belief, and it is that authentic speaking of belief that his followers then consider honest. We can see this all the time. Here is just a bit more explanation of it: Trump states falsehoods that are very easily disproven — it's just ridiculous. And what he and others — it's not just Donald Trump, it is populist leaders all around the world — are doing when they tell these obvious lies is flouting norms of truth telling, norms that rhetorically can be associated with the elite. If you do that all the time, you're signaling contempt for the elites, for the establishment, and you're signaling that you're authentically speaking your beliefs and that you are an authentic champion of the real people. So within a populist logic, dishonesty becomes a marker of authenticity and of being a champion of the real people. And for the followers, the lies become an opportunity to express their loyalty to a leader by buying into them, because then they too are in on the negation of establishment norms — and participating in that loyalty matters more to them than factual accuracy. There is evidence for this: here is a very nice paper by Hahl et al. — Oliver Hahl and colleagues — from a few years ago, where they induced this in an experiment; they got people to accept lies by a politician under certain circumstances. The circumstances are shown on this slide: if people feel that the system is corrupt or that they are excluded from it, and if they perceive a candidate as acting on their behalf, then they will accept that candidate lying to them. That's okay to them. But only under these circumstances; if neither of these conditions is satisfied, lying becomes unacceptable again. So that tells us how demagogues interact with human cognition: all of a sudden, they create a new conception of truth. And that is what we are observing, I think, in many places in the world.
So now I want to turn briefly to the relationship between social media and human cognition, because that too, I think, is important to understand, and it is something that has adverse consequences for democracy. So what is the relationship between social media and cognition? Well, if you really want to know, I can recommend the report that I and others recently wrote for the European Commission. It's at that link, sks.to slash tech dem; it's publicly available, you can download it, and it covers a lot of this. Now, I can't talk about all of it today, because I only have about 15 or 20 minutes left. So, very briefly, in this report and in other work we identified four pressure points between human cognition and social media technology that are relevant to democracy. They're shown here: the attention economy, choice architectures, algorithmic content curation, and of course misinformation and disinformation. All I want to talk about briefly are the attention economy and algorithmic content curation, because at most I will have time to explain those two things. Now, the first thing we need to realize is that human attention — your attention, our attention — is the main product on social media. As a rule of thumb, if you're using something online that's free, that you're not paying for, well, that means you're being sold. You are the product. Why? Because the longer users dwell on a platform, the more ads the platforms can sell to advertisers. So they want to keep you there — not because it's in your interest, but because it's in their interest to sell you ads. Just as an example, YouTube by default has this autoplay feature, which means it never turns itself off. It will keep playing videos for you basically forever, just to keep you there, so you don't have to make a choice. It just pumps information out. Now, that's not all bad in principle, because it has enabled many free services.
I use "free" services, quote unquote, all day: Google Maps, Google Mail, Google this, Google that. That's all free, and I'm using it. Lots of other things are free, and I'm using them. Wonderful stuff, but it comes with a cost. And the cost is this: the attention economy, as it is commonly called, has produced a fundamental misalignment between producers and consumers. This was a point made by Plunkett, and let me tell you what he said, which I fully endorse. In traditional markets, manufacturers who produce, you know, cars, toasters, skis, whatever, make a profit by satisfying their consumers. And if the consumers don't like the product, the producers receive a signal. Now, that sort of works — not always, but it sort of works — because if a manufacturer produces a car that blows up every time it's bumped from behind, then consumers are going to stop buying that car. And that actually happened with the Ford Pinto in the 1970s. That thing blew up on impact and killed quite a few people; consumers learned that and stopped buying it. The signal was received: Ford pretty much withdrew the product and tried to fix it. So in a conventional market economy, I would argue that there is an imperfect but real alignment between consumer preferences and producers' interests. In an attention economy, that is no longer the case, because the profit for the platforms — the producers in this case — has nothing to do with the consumers, the users. The platforms only care about the users to the extent that they keep them engaged, so that they can sell advertising to third parties. What that means is that the market relationship in an attention economy is between the platforms and the advertisers; it's not primarily between the platforms and the users. So what happens if the users are dissatisfied? And, more importantly, what happens if the advertisers' interests are opposed to, or conflict with, the well-being of users?
What happens in this new economy, where everything is fundamentally misaligned compared to conventional markets? To my mind, that is a real problem. Here are some of the key factors that Peter Pomerantsev and I reviewed recently and that we have to be concerned with. The first is that human attention is captured by negative emotion. There's a lot of literature on this; we don't have to debate it. Humans pay attention to negative things: we can't help but look at a car crash when we drive past it, even though we don't want to — we're still drawn to look at this tragic event. So what follows is that disinformation or misinformation — which is by definition novel, because it's made up, and which is often designed to be outrage-provoking — all of a sudden has attributes that maximize platform profits. In other words, the very structure of online platforms incentivizes poor-quality information. And that is a problem. Now, we have confirmed this. One of my postdocs, Almog Simchon, has done this analysis. We looked at 160,000 news media articles that were shared by members of the United States Congress. We determined the emotional slant of those articles, and we again used NewsGuard, which I explained before, as an assessment of the quality of the domain. And what we find is this: the more negative emotion in those articles, the lower the quality of the domain they were shared from. The more disgust, the lower the quality; the more fear in those articles, the lower the quality; the more anger, the lower the quality, et cetera. All the negative emotions have this inverse relationship with the quality of the news media. And now consider the fact that the money is where the negative emotion is. That is where the platforms make money, because it captures people's attention.
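Again purely as a sketch of the shape of that analysis — simulated data, not the real corpus or our code — one can regress domain quality on an article's negative-emotion score and look at the sign of the slope. The emotion scores, the quality scale, and the effect size below are all assumptions built in to mimic the direction of the result described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_articles = 1000

# Hypothetical per-article data: fraction of negative-emotion words
# (e.g. from a sentiment dictionary) and a NewsGuard-style domain score
# (0-100). Simulated so that more negative emotion goes with lower quality.
neg_emotion = rng.uniform(0, 0.2, n_articles)
quality = 90 - 150 * neg_emotion + rng.normal(0, 10, n_articles)

# Slope of domain quality on negative emotion: negative if the pattern
# holds, i.e. the more negative the emotion, the lower the quality.
slope = np.polyfit(neg_emotion, quality, 1)[0]
print(slope)  # substantially below zero in this simulation
```

Fitting one such regression per emotion category (disgust, fear, anger, and so on) gives the family of inverse relationships described above.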
So the money is being made with low-quality information, and this analysis is pointing in that direction. All right, so much for the attention economy and the problems it generates. Let's talk about the second thing I have time to talk about, namely algorithmic content curation. What do I mean by that? I mean that any time you look at anything on the internet, chances are that some artificial intelligence, some algorithm, is shaping that for you. It's curating it. On Facebook or Twitter, what you see is ranked by an algorithm that is trying to show you things you're likely to like. Why? Well, because if you like stuff, you stick around, and if you stick around, the platform makes more money because they can show you more ads. It's as simple as that. They will show you stuff you like; they will try to satisfy your preferences. YouTube will show you videos that it thinks you will like. That's great, sort of. I don't object to that in principle, but it also means it might show you increasingly extreme content, because it thinks that would satisfy you more. And that is concerning, because we have evidence from experiments that search-engine ordering and newsfeed rankings causally influence our preferences. It makes a difference what is being shown to us, and we can show the effects of that. And that is concerning because 30% of views on YouTube, for example, just to illustrate, were determined by an algorithm, not consumer choice. 30% of the time, people watch something that they never wanted to watch. They never typed it in, never clicked, never searched. It was just shown to them spontaneously. So of course we have to be concerned about that. The other thing that follows from this curation and the availability of algorithms is that advertisers can engage in something known as microtargeting. What do I mean by microtargeting?
Well, it's based on the idea that the best ad in the world is the one you show only to the people who then buy your product. That's the most efficient use of advertising: you never show it to anybody who wouldn't buy it, only to people who will. That's the dream world of advertisers. And that's fair enough, because if you show an ad to millions of people who don't want your product, all that does is waste money, and that means prices go up. No one benefits, really. It's actually a good thing if advertising is effective, because then it lowers the cost of advertising, at least in principle. And advertisers and marketers have been doing that since, yeah, I don't know, certainly since the 1950s and quite possibly even before then. They segment their audience in ways that make a lot of sense. You're not going to find many lipstick ads in motorcycle magazines. You're not going to find toys advertised on TV after bedtime, not nearly as much as during the day when all the kids are watching after school. It makes a lot of sense to do this sort of audience segmentation. But there comes a point at which the segmentation turns into manipulation, into microtargeted manipulation. Let me show you an extreme case that was reported some time ago, where it was revealed that Facebook's algorithms could identify the moment when young people were feeling worthless or insecure. They had a real, live tracking system of teenagers' emotions, which could then, in principle, in theory, be used by advertisers to target their ads. Now, Facebook did not dispute the existence of the technology, but they said it was never made available to advertisers. And I have no position on that; I have no reason to say that they were not telling the truth. But that capability exists.
And somewhere along the line from audience segmentation, which we can all endorse, to this extreme case of moment-to-moment knowledge of somebody's emotions, we cross a boundary that, to my mind, is not acceptable. And it isn't just me; that opinion is shared widely around the world. Here are results from a representative survey with samples of 1,000 people in the US, the UK, and Germany, where we probed people's awareness of artificial intelligence and, more importantly in this context, their attitudes towards personalization, towards microtargeting. Let me show you the data just very briefly, to give you a glimpse of what we found. What we find, in a nutshell, is that in all three countries the public rejects political personalization. Anything in red and orange means "no, I don't like it"; anything in purple means "yeah, I'm okay with that." Now, if you look at the bottom, recommending local events, movies, music, restaurants, shops: everybody is happy with that. If an algorithm tells you where you should go and what movie you should see based on your preferences, people don't mind, and I don't mind either. That's great. But now look at the top: political campaign messages, newspaper front pages. That is where the public draws the line and says, whoa, no, it is not acceptable for me to be targeted by politicians on the basis of my personal characteristics. And this is strongest in Europe, in Germany and the UK, and attenuated in the US, but the pattern is the same. People do not like to be manipulated for political purposes. And they also don't like personal tragedies and other private attributes being used to target them, such as their sexual orientation, political views, or other personal events. People don't like it if that is being used. They're okay with age and gender, but anything beyond that they do not want used in microtargeting.
And very importantly, this is not politically polarized. In other words, Republicans and Democrats, or Conservative and Labour voters in the UK, et cetera, all agree on this, which is quite remarkable: there is no polarization. So that has told us something about the relationship between social media and human cognition, how it can have adverse consequences, and how people feel about it. They don't like it. So the last topic I want to discuss, and I believe I have about 10 minutes left because we started 10 minutes after the hour, is countermeasures. How can we defang the demagogues? Well, at the heart of this, I think, is populism. As I've noted before, that is problematic because it is anti-democratic: even though populism claims to speak for "the people," it negates pluralism, and it is therefore anti-democratic in our common understanding of democracy. So we have a problem, but where is it coming from? Well, here are some data that are just a signpost in a possible direction. I'm not saying this is it, but it's interesting to look at. What I'm showing you here are data about productivity and workers' wages in the United States between 1950 and about 1979 or 1980. What you can see is two lines, and they're indistinguishable, and that is Economics 101: as productivity goes up, as a worker produces more for the same amount of labor, their wages go up. Of course they do, because more money is made from the same amount of labor, and that is shared with the worker and, of course, the boss and the shareholders and all that. But nonetheless, for decades there was this coupling between increases in productivity and wages. Now look what happened in the last 40 years in the United States. Productivity has continued to increase. Wages have been largely flat.
And that difference between the two curves you can see there is a squintillion dollars, and it's in the pockets of shareholders and the owners of companies; it is no longer being shared with workers. And guess what: people notice that. They may not be able to describe it in data, but they know what's going on. They're being left behind, and somebody is making buckets of money. And then, if a demagogue comes along and rails against the elites on behalf of, you know, "the real people," that's very attractive all of a sudden. And indeed there's evidence that inequality, which of course results from this, is a driver of populism, and I mentioned earlier the study by Hahl and colleagues showing that people accept lying demagogues when they feel left behind. Well, look at the graph: they should feel left behind, because they are left behind, or certainly have been in the United States. Europe looks a little better, but in the United States it's very clear. So that is something we have to deal with. Easier said than done, but nonetheless we must do it. What else can we do? Well, we can protect deliberation. What do I mean by that? I mean that if you protect people against social media and demagoguery and all the shouting and yelling that is going on out there, then people can make extremely good, well-considered, and balanced decisions. We know that from research on deliberative assemblies, or so-called mini-publics, where people get together in a room, literally a room, usually not online, and discuss controversial issues with the aid of experts and moderators. Let me give you a specific example: the Republic of Ireland. They had two referenda in the last 10 years or so, on abortion and on marriage equality. Now, these are extremely emotive, hot-button issues. If you had a referendum on those issues in the UK or the United States, it would be just insanely polarized and probably would end up being violent.
None of that happened in the Republic of Ireland. And one of the reasons it didn't is because they had a deliberative assembly of randomly chosen citizens who spent a whole year discussing these issues ahead of the referendum, with expert input and public submissions, moderated by a respected Supreme Court judge. And ultimately they made their recommendations public. They didn't tell people how to vote, but they explained the issues as they perceived them. And that was widely recognized as being representative of what Irish people felt, and so they could go to the polls without this toxic polarization. Here is a summary of this; I think it was the abortion referendum. Yes, it's the abortion referendum. That single page of recommendations and issues is what the assembly came up with, and it focused people on the issues and not the demagoguery. So there's nothing wrong with people: if you protect them against demagogues, they can come up with amazing stuff. And luckily Ireland has been largely spared populism, more or less, and this is one of the manifestations. Finally, we need to think about how to recreate social media for democracy. There are some people, and Shoshana Zuboff is one of them, who are pretty radical. Shoshana says you can have Facebook or democracy, but not both. That's an extreme position to take, but honestly, from what I've seen and the analysis I presented today, there is at least some truth in it. Now, we can avoid that outcome by doing a number of things. Here are some recommendations, again from the paper with Peter Pomerantsev, where we talk about transparency and how important it is to be transparent, to tell people what algorithms do and why they're being targeted, et cetera, et cetera. And finally, we have to support human cognition. Can I take another five minutes? Is that okay? Okay, I'm happy with that.
Then I will tell you a bit more about human cognition and how we can help people right now, without redesigning the system, to survive online and do better. There are two things I want to talk about. There's plenty of other stuff, but I only have time for two: something called boosting and something called inoculation. They're related but different. Boosting means that you teach people something to enhance their competence. Inoculation does a similar thing, but it does it by warning people how they might be misled. Okay, so these are slightly different aspects of what is effectively an intervention to assist people's cognition. Now, here's one example, where we boosted people's ability to detect microtargeting, microtargeting being the customization of political messages to people's personalities. In that experiment, people took a personality quiz, so we knew what their personality was. We then told them what their personality was, so they became aware of it themselves. And we then asked them to detect when an ad was targeting their personality. That's all we did, just detection: did this ad target you or not? And we did this with pictures, with ads for cosmetics. There are two ads here; you can see several people on the left and one person on the right. And guess which one of those is appealing to extroverts? Yes, the one on the left; introverts, the one on the right. If you don't know what extroversion and introversion are, well, that's what we explained to participants in the experiment. Extroverts are people who are outgoing, who love loud parties, and who want to be in company. Introverts tend to stay at home and read books. That's a thumbnail sketch of the two endpoints of the personality dimension. And when you tell people what their personality is, they all of a sudden become far better at detecting the ads. Their performance goes from just barely above chance to about 90%, simply by telling them, "you're an introvert."
"And this is what introversion means." And bam, they can identify ads targeting introverts. So it works extremely well. Now, let me turn to inoculation, the final thing I want to talk about. What we did there was to present people with brief videos, between 30 and 90 seconds, that explain to them how they might be misled by demagogues. After that treatment with brief videos, people become much better at discerning between high- and low-quality information. In this paper we ran an experiment in the field on YouTube. And I'll skip this; I don't have time. I could have shown you a video; you can watch it later on your own time. More recently, just a few months ago, Google did a study in the field in Poland, the Czech Republic, and Slovakia, where they showed videos to the general public that inoculated them against misinformation relating to Ukrainian refugees. You obviously know there are lots of Ukrainian refugees, in particular in Poland. Those videos were viewed 38 million times overall. And Google did a test as a follow-up and found increased discernment, meaning that people in Poland became better at identifying misleading information that was trying to scapegoat Ukrainian refugees after they had been exposed to those videos, in comparison to a control group that hadn't. So this is now beyond being an experiment. This is actually being rolled out by Google in the field, on the basis of our research on inoculation, and it has again been shown to be effective in this particular case. So I think I went over a little bit, but let me conclude. We're facing threats; there's no question. Democracy is at risk. Demagogues are very good at exploiting social media. We have problems online, as I've reviewed, but it's not hopeless. There are also opportunities. Populism is not inevitable or invincible. In fact, I would argue that it has probably peaked in many countries.
I mean, you know, it's very hard to tell, but they're slowly running out of lines that people buy into. We can envisage an Internet for democracy, and we can boost people and protect them against misinformation through inoculation. Each of these topics is worth another two hours of discussion, but I think I'll stop here, and I thank you very much for your attention. And I'm happy to take questions, of course. Thank you. Thank you very much for a very interesting lecture. And yes, we are waiting for your questions. Hello. Thank you very much for a super interesting talk. My name is Jonas, from the University of Oslo. I had a question regarding your recent research looking at how the mainstream media responds to demagogues. I was curious why you think the mainstream media responds the way it does and caves in to the distraction attempts of these demagogues, and what you suggest they should do instead: what would be a sustainable strategy to counter these attempts? Excellent question. Well, first of all, as we point out in the paper, I'm convinced that the New York Times isn't doing this because they want to or because they like Donald Trump. Quite on the contrary, their stated position was to hold him accountable. They were quite strong in their statements about Donald Trump and the importance of holding him accountable. So I don't for a moment think that this was an editorial decision. What I do think is that editors and journalists are people, just like the rest of us, and it takes very little for people to be influenced, even when they don't know it. And I think that is what is going on. Now, one of the problems the media do have, notwithstanding what I just said about them resisting Donald Trump: I don't think the media, at least early on, at the time when this research was done, really understood what was happening.
I don't think they understood that Donald Trump was one of the world's best manipulators, that he was playing them like a piano, and that he was out to undermine democracy, as I think is now entirely obvious after the events of January 6th. I don't think the media were prepared to see him for what he was, and that gave him a window. And they continue to have problems along those lines relating to this notion of journalistic balance, which predisposes journalists in a democracy to present both sides of the argument. Which is terrific, unless one side is always lying; then it's no longer terrific, then you're facilitating their corrosion of democracy. And that shift from presenting balanced opinions to saying, whoa, here's a guy who's just making these crazy claims, that took the media a long time. It is only now, after January 6th, that the quality media in the United States will explicitly refer to Donald Trump's baseless claims about the election, which is the only way to go, because they were baseless. They were completely, I mean, just crazy town, right? And they served a political purpose, and they almost toppled democracy. The US came incredibly close, and it's not over yet. So it took the media a long time to get there, and they're still suffering a deficiency there. The same with trying to balance scientific opinion against denial on climate change. The media still sometimes fall into that trap. Even the BBC, which has an explicit policy against it, still falls into this trap, because journalists like to be balanced and are susceptible to criticism that they're not being balanced. Which is, I think, a slightly different story, but it's seriously problematic. We have one question from our virtual audience, so maybe I will read it out, but it's also available in the chat section.
So Laura Stonehoff is asking: was there any analysis done in the microtargeting research to see if there were significant differences between the German participants and the US participants? And is there more research being done on this subject in non-English-speaking countries? Yeah, very good question. Well, the first part was the differences between the German participants and the US. Well, yes and no. Yes, in the sense that Germans overall were most concerned about their privacy and far less permissive than Americans. The British were more like the Germans, but not quite as extreme. So there were clear differences between countries. However, the pattern was identical. By that I mean both Americans and Germans liked political targeting the least, and both were happiest with getting dinner recommendations or movies or something. And that pattern across different topics was identical across the three countries. So depending on how you look at it, you can say they were the same or different. Your point about research being done in non-English-speaking countries is extremely important, of course. And it is a serious problem how much of this research focuses on the United States or the UK, and far less elsewhere. Now, it is changing. For example, the Google study I just presented took place in Slovakia, the Czech Republic, and Poland. So that's very different. We also now have an increasing number of articles that report data from places like India and Kyrgyzstan; there was an inoculation study done just in Kyrgyzstan, or somewhere, sorry, I don't even know the name, et cetera. So it's broadening out. There are now papers appearing that did research in Nigeria, in Africa, in Asia. But I totally agree with you: there's not enough of it, and we have to overcome that. In Europe, it's not quite as bad. There are other inoculation studies done by colleagues at Cambridge who looked again at Poland, Germany, and the UK.
And so there's too much English-language stuff happening, but it is beginning to broaden out. And by and large, you find the same effects wherever you go, which doesn't surprise me. People think the same; it doesn't matter whether you're Polish or British, your cognition is going to be very similar in many ways. We have another question. Hello, this is my question. I have a serious question and a non-serious question. The non-serious question goes as follows. You picture Trump as some kind of social media genius; the picture presents him as a very cunning operator of social media, someone who's really very good at it. And that does not align with the public impression of this figure. It aligns with his self-reporting as one of the most intelligent people on the planet, but that's not commonly shared among, for instance, us. So would you care to share your own opinion of that particular individual? Do you think that it is a pose, that he's an actor? Is he more intelligent than he seems? So that was the non-serious question. And the serious question is: do you see any way to align the incentives of social media platforms with the interests of democracy? How would you go about that? Okay, great questions. The first one is, okay: I think Donald Trump is extremely cunning. He has the ability to read a room and manipulate it to his advantage. There's no question about that; I think he is extremely good at that. He's also extremely good at capturing people's darkest feelings and exploiting them to his advantage. That is a skill, there's no question. You cannot be this good at doing bad by accident. Now, is that conventional intelligence? No, I don't think so. I don't think he's particularly intelligent; I see no evidence for that. But that doesn't mean he can't be very manipulative and cunning. That's a different dimension from formal intelligence. So that's what I think is going on there with him.
And he's also completely ruthless and has no morals, and that gives you a lot of mileage in politics. And that is one of the problems: the American media just didn't realize early on quite what they were dealing with. Second question: yes, that is the million-dollar question, isn't it? Well, I think one way to do this is through regulation. And in Europe, the EU has taken steps in that direction that I think are very encouraging, with the Digital Services Act and the Code of Conduct, because they're now holding the platforms accountable. They have to justify what they're doing. And also, and to me this is much more important than anything else, the EU is creating a new centre called ECAT, the European Centre for Algorithmic Transparency, in Seville, in Spain. And that centre is tasked, as the name says, with looking at platform data and auditing their algorithms, so that we understand how they operate and what is going on. That requirement for transparency by the platforms about their algorithms is one of the things in the Digital Services Act that is extremely important. And to me, that is the most promising angle to take, because at the moment we just don't know what's going on. If we have independent researchers who act in the public interest, investigate the algorithms, and audit them, then that'll change everything. Because once the public knows for sure that Facebook is showing them stuff that it knows to be false just to make money, I don't think that's going to go over terribly well. I think that'll then generate political momentum to fix it and to mandate attributes of the algorithms that are more compatible with democracy. That is what I think is the most promising aspect of this.
Now, I'm putting a very optimistic spin on it, in the sense that there's a lot of power out there that is aligned against this, and politicians are susceptible to power. So you don't know whether they might not fall over; you never know until it's over. But certainly there are steps in the right direction. And the key, to me, is to have independent audits of algorithms, so that we know what's going on, and then to design counter-algorithms that do better. And there is literature on this out there. There's a recent paper by Bhadani, Nyhan, and colleagues in Nature Human Behaviour, where they showed what would happen if you redesigned the recommender system in a news feed to take into account, I think it was, the audience diversity of the sources. The moment you do that, the algorithm will recommend things to people that are far higher in overall accuracy than they otherwise would be. So we know what to do to make the algorithms more compatible with democracy, and we just have to get there. It's an arms race, and it's a race against time, but it can be done. I see a question also from our audience; Sarah Fisher asks: I think I might have missed this during the talk, in which case, sorry. My question is about the data showing that the New York Times reduced the Russia-Mueller coverage following Donald Trump's tweets. Do you have a way to compare this against the natural degradation of stories over time, especially now that news cycles are so short? I guess my worry is that the New York Times might have reduced their coverage the day after a story broke regardless of what Trump was up to. Thanks for the talk, really fascinating. Thanks. Yes, excellent question, and we do have at least partial answers. The first is that we had so-called autocorrelations in the model, so we were modelling statistically this sort of natural drift that occurs over time. We also looked at, I mean, we had 100 or so control variables in the analyses.
And so we were pretty confident that that was not the problem. The other thing is we had all these neutral, placebo analyses, with gardening and skiing, which didn't show the same effect. So you would then have to argue, well, why would it show up here but not there? And finally, and to me this is perhaps most important, we also did an analysis using Brexit as the topic. Now, Brexit was going on throughout; well, it's still going on. It never ends; Brexit will never end until we rejoin. And it went on during the first two years of Trump's presidency, and there was a lot of coverage in the New York Times. And it was not politically damaging to Trump; on the contrary, he of course supported Brexit, because it was his mates who were running it. And we didn't find the effect I showed you for Russia-Mueller, even though Brexit was very comparable in terms of coverage and duration. Everything was very similar, except that it wasn't damaging to Trump, and the pattern was completely different: it was a blob in the middle, there was no effect. So putting all that together makes us reasonably confident that we picked up a real effect here. But your question is excellent, because it's an observational study, so there can always be something else we didn't account for. But we tried, and we had lots of placebo analyses and 100 control variables, and it was still robust. Do we have any other questions here from the room? I can't see anything in the chat. So I also have a question, because I see a tension, or maybe even a contradiction. On the one hand, political changes are required to reform the landscape of regulations and the way media, and especially social media, function, and also to protect the democratic mechanisms that have already been somehow corrupted by the populists in power. And this political change also requires huge political mobilization.
So most people who oppose demagogues and populists also want to somehow exploit social media and its weaknesses, including human emotion, but for good, let's say. They want to mobilize their voters to go to the polls and vote, just as the bad actors do. So have you thought about how we can navigate this particular contradiction? Yes, it's a very good point, and I think you're absolutely right: it is very difficult to deal with that. So let me give you a few partial answers. First of all, I think there is an inherent asymmetry between the information that demagogues pump out and the information that people who oppose them are using. By definition, almost anything that is being pumped out by demagogues, by populists, will sooner or later be false. Because it has to be: you cannot be Donald Trump and tell the truth in a conventional sense, because then you would have no case, you would have nothing to stand on. You have to demonize immigrants, and the only way you can demonize immigrants is by making stuff up, because if you looked at the data, you'd find that immigration in most cases is a bonus for the country that immigrants come to. There's a lot of evidence for that. Immigrants are a positive contribution to a country because they tend to work hard; they're young, so they don't have as many medical expenses; they pay tax; they hire other people. Especially in the UK, this country needs immigrants. So those are the data, and the demagogues therefore have to whip up fear, basically by telling lies or exaggerating isolated incidents. And I don't think the people who mobilize against demagogues have to do that. I'm not saying they don't also sometimes lie, but on average they don't have to. And if you look at all the fact-checking during the election campaigns in the United States over time, there's always an asymmetry in terms of what fact-checkers pick up. In a nutshell, Democrats lie less.
It's as simple as that. I showed you the data from the Congress analysis: there's a huge discrepancy between the parties. So it's not the case that there's complete symmetry, which would make it hopeless to differentiate. I think if you found a way to cut down on highly emotive and false information with a magic wand, you would penalize the bad guys more than the good guys. And I do believe that populism is bad, because it is toxic to democracy. So I think that's one thing to put on the table. The same is true for inoculation. People ask me this all the time: well, you inoculate people against being misled, but how do you know that you're not also inoculating them against good information? And the answer is that if you use good information, you're not going to be scapegoating people. You're not going to be incoherent, you're not going to be emotive, you're not going to cherry-pick, you're not going to jump to generalizations, all these misleading rhetorical techniques. Scientists, for example, do not engage in that, by and large, right? So we can tell which information is of poor quality and which is better. We have a history of 300 years of looking at argumentation, and we know what bad arguments look like. So we can teach people to detect them, and that means, on average, they will only reject things that are of poor quality. So I think the key to the issue is that there is an informational asymmetry, not a symmetry. Specifically, how you would do that in the case of social media, and how you would revise algorithms without also penalizing people who want to defend democracy, that's very tough. I don't have a good answer to that. But I do know there's an asymmetry in the quality of information; there's no question. And so if the algorithm favors quality, it's going to favor people who support democracy, and it will work against people who are trying to undermine it.
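The quality-favoring ranking described in this answer can be sketched as a toy re-ranker. Everything here is hypothetical: the posts, the engagement and quality scores, and the blending weight `alpha` are invented for illustration; a real system would use signals like audience diversity or NewsGuard-style ratings.

```python
# Toy sketch: re-rank a feed so source quality, not just engagement,
# determines what gets promoted. All posts and scores are hypothetical.

posts = [
    {"id": "outrage-clip", "engagement": 0.95, "quality": 0.20},
    {"id": "news-report",  "engagement": 0.60, "quality": 0.90},
    {"id": "cat-video",    "engagement": 0.80, "quality": 0.70},
]

def engagement_rank(posts):
    # Status quo: pure engagement optimization.
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def quality_weighted_rank(posts, alpha=0.6):
    # Blend engagement with a source-quality signal; the weight alpha
    # is an assumed tuning parameter, not from any published system.
    score = lambda p: (1 - alpha) * p["engagement"] + alpha * p["quality"]
    return sorted(posts, key=score, reverse=True)

print([p["id"] for p in engagement_rank(posts)])
print([p["id"] for p in quality_weighted_rank(posts)])
```

With these made-up numbers, the engagement-only ranking puts the low-quality outrage clip first, while the quality-weighted ranking demotes it below the reliable report, which is the qualitative effect the answer describes.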
Do we have another question? Hi. Hello. This is Rafa Figuo. Thank you for a fantastic talk. I have one very brief question. You mentioned the study by Manjkovic from 2017 about Facebook targeting people feeling worthless and insecure, right? I found the mention of this study on the internet just now. I was wondering what your opinion is about a kind of boosting against manipulation. First, do we know whether people who experience such negative emotional states are more vulnerable to demagoguery or misinformation? And following up, do you think it would be possible to build an intervention based on making people aware of being in such a state, and then use that against misinformation? And a second, very short question: what's your opinion about these processes in China, where social media are completely different and the kind of democracy they have is also a little bit different? This could be a fantastic field for exploration and comparison with our world, I would say. Yes, thank you. Excellent points. Okay, I understood your first main point to be: can you boost people's ability to detect when they themselves are particularly vulnerable, and then protect them against manipulation? And I don't know. I suspect that is going to be quite difficult because, you know, if you're vulnerable, you're vulnerable because certain things have broken down. Let's say you're already depressed; well, you got there because something has failed. So it's unclear to me how you can then say to people, oh, by the way, when you're depressed, do this. If it were that easy, they would probably do it anyway and not be depressed, if you know what I mean. It's very difficult to do that.
However, it is of course possible in a therapeutic context, over the longer term, to give people techniques for dealing with moments when they are feeling vulnerable. Part of therapy is to let people recognize cues and give them things to do so they don't then fall into depression. But that's not boosting in the sense of something quick you can do online. Now, the second point about China is a very interesting one. One of the fascinating things about China is, well, two things. Number one, they don't exercise much censorship. Let me qualify that: they do, but perhaps not in the way you think. Very little content in China is removed, relatively speaking. The censors do not yank things off the internet. What they do instead is two things. Number one, they flood social media with distracting information. They do what Donald Trump does. And there is quite a bit of evidence for that. There is something called the Fifty Cent Army, which is lots of people in China employed by the government for minimal pay to put posts on social media that distract the public from other issues. There are classic cases, for example, involving an earthquake some time ago, where there was a lot of material on social media about the earthquake, how many people were killed, and how inadequate the government response was. The government obviously didn't like that. So what they did was mobilize thousands of people to post about some sex scandal of some actress, something that had happened a year earlier that no one cared about anymore. But they managed to drown out coverage of the earthquake by pumping out the stuff about the sex scandal. And enough people got into it and let go of the earthquake. So that's called flooding. They do that a great deal, similar to what Donald Trump does, incidentally.
And the other thing the Chinese government does a lot of is what's called friction. They don't make it impossible, but they make it harder, to get at information they don't like or to spread information they don't like. So, for example, they banned the BBC. You can't get to the BBC from China. Well, except you can: you can get a VPN and then you can go to the BBC, right? So it is actually very easy to go to the BBC or CNN or whatever from China. All you need is a VPN. But the number of people who (a) know what that is and (b) can be bothered is so small that the Chinese government doesn't have to prevent access completely. They don't have to. All they have to do is make it harder, and then 98% of the people just don't bother, and their purpose is achieved. And the same with censorship. They don't totally censor things the way it used to be done. You know, Stalin had his people who would just rip apart the books and ban content. These days autocracies don't do that as much, because they have other tools at their disposal that are much more effective. They just drown out the good stuff with their own nonsense and achieve the same purpose. But no one can cry censorship. So we can learn from that, but I don't consider China to be a model of internet governance. Let's put it that way. It's what we want to avoid as well. Thanks. I can't see any questions from the audience, so maybe I will ask the last question. Quite recently I have seen Elon Musk encouraging government officials and leaders to speak for themselves. So do you think this would be a positive trend, that politicians would have this kind of contact with their constituencies without any intermediaries, without media, without people who can somehow critically approach what they're talking about?
And what do you think the nearest future of our communication will be, especially when we take into consideration the crisis of traditional media outlets? Yes. Well, I'm not against politicians tweeting or being in touch with their constituents. I just think they should be bound by conventions and norms that are democratic. I don't mind Joe Biden's Twitter account; I'm perfectly happy for him to tweet and for people then to reply to that. So it's not that I'm in principle opposed to politicians having a direct line to the public. Quite the contrary, I think it's terrific. But that shouldn't give politicians a license to do what Donald Trump has done, which is to lie, to undermine democracy, and effectively to try to overthrow it. That doesn't follow. They're two different things. Now, of course, the difficulty is how you enforce the line, who draws the boundary, and all that. That's a separate issue. In principle I don't have a problem, of course not. In the United Kingdom here, every MP has what's called a clinic with his or her constituents, a sort of office hour, I don't know whether on a weekly basis or once a month. But you just knock on the door, you walk in, you take a seat, and then you talk to your MP. That's a tradition that goes back hundreds of years, and they're still doing it. So I think it's potentially a good thing to have that contact, of course. And I also had in mind that we then have to take into consideration content moderation policies for politicians and public officials, who for instance can be censored or suspended for breaking the rules of a given social platform. And some argue that having access to your audience is a kind of right in our time.
So social media platforms should not be allowed, for instance, to suspend the accounts of public officials who were elected in democratic elections. And of course we are coming back to Donald Trump and his Twitter and Facebook accounts, which were suspended after January 6. Okay. Yes. Well, first of all, the big issue we have to address is to what extent we want to have private corporations governing public spaces. To me that is a fundamental question, and I personally think that the events surrounding Elon Musk and Twitter have shown us that we cannot afford to have private corporations in control of public spaces. We have to have public control over public spaces. No one would have let Twitter do what it has done over time had it been planned at the time, had a formal proposal been made: oh, this is what I'm going to do. The only reason we have what we have now is that it evolved over time, and no one knew what was happening, because it was new technology and it was all exciting and interesting and no one really knew. And then all of a sudden, oh my God, we woke up and here's this monster. So I think that's a fundamental question to ask, and I have an opinion on it, which is that we need public ownership for public spaces. Beyond that, decisions about banning and moderating and all that kind of stuff are extremely tricky and nuanced. In this paper, which I think you mentioned when you introduced me, published with colleagues about a week ago, we looked at a very large, representative sample of Americans to poll their views on moderation, banning, and so on. And it turns out that many people are in favor of content moderation, and even banning, if the consequences of misinformation are severe and if the person is a repeat offender. So the public is actually very comfortable with the idea of moderation in extreme cases.
But I guess what we didn't ask them was whether they want a private company to do this or a public body. We didn't ask them that, so I don't know how people feel about it. I suspect there would be a difference along party lines on that one. Yeah, so I think that's a very subtle question. Yeah, we have to talk about that some more. Thank you very much for your response. I can't see any other questions, and we don't have any questions in the chat either, but that was a very long and very interesting lecture and seminar. So thank you again. Well, thank you for having me. Yeah. And of course, everything will be available later on our website. So once again, thank you, and bye bye. Bye bye. Thanks so much. Bye.