I would like to talk a little bit about our work on big data and democracy. I should mention that this work received funding a couple of weeks ago from the Data Science Center Tilburg, so now you see what you get for that money. This is joint work with Fried van Hils, who is a PhD student right now and whose position was partly funded by this grant, and my dear colleague Wieland Müller at the University of Vienna and also here at Tilburg. So: democracy, big data, social media. Since 2016 at the latest, that is, since the US presidential election that led to the election of Trump, and since Brexit, there have been accusations that social media foster the transmission of disinformation to voters, such that voters make election decisions that are not in their own interest. The accusation, in other words, is that social media and the kind of information they provide can manipulate democratic elections, which can lead to several problems. And this did not only happen in 2016. Just last week, Frances Haugen, a whistleblower who had worked at Facebook for a long time, gave testimony to the US Congress and revealed many internal Facebook documents. One of the things she brought up is the claim that the algorithms Facebook uses to distribute news, including political news, damage democracy. These are serious accusations. Now, where does the problem start? It starts with the fact that we now know that more than 50% of online news is consumed via algorithm-driven platforms such as Facebook, Twitter, and Google News, which serve as a main source of political information. So it is no longer TV, and it is no longer newspapers; it is basically social media that are the main source of political information for many people, and not only for news about their friends and family.
Now, we also know that some ideologically motivated information suppliers, which we will call interest groups, disseminate disinformation on news platforms for various purposes. The two key problems mentioned in the literature, and which we also worked on theoretically in previous work, are the following. On the one hand, there is the obfuscation of the origin of news items. This basically means that if you get a news item on social media, very often you only know "I saw it on Facebook", but you do not recall who the original sender was, because that is not made very prominent. The second major problem is the technological ability of micro-targeting, which implies that you get a different ad, or different news, than your neighbor, because your neighbor has different characteristics and has made different choices online. So where is the real problem? The real problem is this: if it is correct that there is disinformation, and that disinformation can affect people's voting behavior, then in the end this could undermine trust in democracy. Because if we think that the person who just officially won an election does not actually represent the will of the people, then why trust the results of elections? That is the main problem. Now, based on research by several people, some reform proposals have been put forward, for instance by the European Commission in what it calls the European Democracy Action Plan. The two key reform proposals are, on the one hand, to introduce mandatory disclosure rules for social media, such that the origin of political content must be made more transparent, and on the other hand, that micro-targeting technology should be restricted. So that is where the political discussion stands.
But now, and this is very interesting, here are two quotes from political scientists, and Josh Tucker is a pretty prominent one. They write: "We are concerned that reliance on untested conventional wisdom based on folk theories of technology's impact on democracy is leading to misguided reform proposals that may even worsen the problems they attempt to solve." So we just saw on the previous slide that policymakers are taking action, but they do not really know whether the action is warranted, or whether it goes in the right direction, because there is not much empirical evidence. And why is there not so much evidence? That is where the second quotation comes in. Persily and Tucker write that "it remains the case that the employees of the platforms are the only ones who really know the scale of the problems" that might be attributed to them. Those of us on the outside, we poor researchers, "must make do with the glimpses provided through publicly available data, which may or may not paint an accurate picture." So that is the problem: we need to get some empirics about what is actually going on, and this is where we try to contribute. The research question we try to answer is this: can a ban on micro-targeting, and/or mandatory disclosure of the interests of senders of political messages, avoid election rigging and improve voters' welfare? That is the overall question. And how do we operationalize it? We go to the lab, so we create our own data. But to make sure the experiment is well informed, we first construct a game-theoretic model that makes clear who has which incentives. So why would we then do this in the lab?
Well, first, to get data about human decisions in online environments around political elections in the first place. But in particular, and that is of course the nice thing about lab experiments, you can control everything. So we do not have the external validity that you would have with Facebook data, but we can control everything, so at least we have complete internal validity, and then we can discuss external validity afterwards. Now, I will not go into too much detail about how exactly it looks, but I want to give you a glimpse of these experiments. Basically, we have subjects, usually students, in matching groups of four. In each group, one person takes over the role of the interest group, which sends a political message, and three people receive such a message and take on the role of voters. The message concerns the state of the world, which in the model can be one or two. Think of state of the world one as meaning that a more leftist policy is more appropriate, and state of the world two as meaning that a more right-wing policy is more appropriate; the exact labels do not matter. The assumption is that the interest group sees what is actually best, but it has its own incentives, because it belongs to one of two types: what we call majority, or what we call minority. Depending on the state of the world and its type, it prefers one of two parties, X or Y. The interest group then sends a message, "the state of the world is this or that", and this is cheap talk: it can be a pure lie or it can be truthful. It sends this to the voters, and the voters see it. The voters also have a type, two of them are majority and one is minority, and then they make a voting decision.
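To make the game structure concrete, here is a minimal sketch in Python of one round as just described: one interest-group sender who observes the state, three voters (two majority, one minority), a cheap-talk message, and majority rule. The payoff mapping, the "always truthful" sender strategy, and the "trust the message" voter rule are placeholder assumptions for illustration, not the actual experimental parameters.

```python
import random

STATES = (1, 2)        # 1: "leftist policy appropriate", 2: "right-wing policy appropriate"
PARTIES = ("X", "Y")   # the two parties voters can choose between

def play_round():
    # One matching group of four: one interest-group sender, three voters.
    state = random.choice(STATES)  # true state, observed only by the sender
    # Two of three possible sender roles are majority type (see the talk: 2/3 prior).
    sender_type = random.choices(["majority", "minority"], weights=[2, 1])[0]
    voter_types = ["majority", "majority", "minority"]

    # Cheap talk: the sender reports a state; it may lie or tell the truth.
    # Placeholder strategy here: always report truthfully.
    message = state

    # Placeholder voter rule: believe the message and vote for the matching party.
    votes = ["X" if message == 1 else "Y" for _ in voter_types]

    # Majority rule decides the election.
    winner = max(set(votes), key=votes.count)
    return state, sender_type, winner
```

In the real experiment, of course, both the sender's reporting strategy and the voters' responses are chosen by the subjects themselves, which is exactly what the data record.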
And the party that collects the most votes wins, as it is supposed to be in a democracy. Now, the interesting part is this. On the one hand, we study games with public communication; think of a newspaper or TV. In this case, the interest group can choose only one message, and all voters see the same message, and they know that they see the same message. You read the morning newspaper and you know that your neighbor reads exactly the same piece. Alternatively, there is micro-targeted news dissemination, which implies that it is possible that you see something different than your neighbor, because you are of the majority type and your neighbor is of the minority type, or the other way around, and you know that. Secondly, the question is whether you know the identity of the sender. For instance: "this message came from a majority interest group; I have some guess about what they want me to do, so I can update my beliefs accordingly." That is the case with disclosed sender identity. Otherwise, we also check what happens if voters simply do not know, as on social media. All of this was implemented in Vienna together with my co-author, and due to Corona it took a pretty long time, from October 2020 until July 2021. We ran a total of 36 sessions with 432 subjects. Importantly, I should mention in this interdisciplinary environment that this was an economics experiment, which implies that decisions were incentivized: the students made 40 euros on average for two and a half hours, but if they made smarter decisions they made more money, and if they made less smart decisions they made less. So that is the game. We have outcome measures for individual and aggregate efficiency; the details are not so important here.
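The design just described varies two things: whether communication is public or micro-targeted, and whether the sender's interest is disclosed. A small sketch of this 2x2 treatment structure, with field and function names that are mine rather than the paper's:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Treatment:
    microtargeting: bool  # can the sender tailor a separate message per voter?
    disclosure: bool      # do voters learn the sender's type (majority/minority)?

# The full 2x2 design: public/micro-targeted crossed with undisclosed/disclosed.
TREATMENTS = [Treatment(m, d) for m, d in product([False, True], [False, True])]

def messages_seen(treatment, public_msg, targeted_msgs):
    """Which message each of the three voters receives under a treatment."""
    if treatment.microtargeting:
        return targeted_msgs         # possibly a different message per voter
    return [public_msg] * 3          # everyone sees the same message, and knows it
```

The key structural point the sketch captures is that in the public treatments the sender has only one message slot, whereas micro-targeting allows a separate message per voter.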
But then what we actually did is start from one game, which we call MU, which stands for micro-targeting and undisclosed. Think of current social media: micro-targeting exists, and as a voter you do not know who is sending you the message. Starting from here, we study what happens if we introduce a ban on micro-targeting, so that it no longer occurs; what happens if we mandatorily disclose the interests of the sender; and what happens if we do both. That is basically part one of the design. In part two, we change perspective and basically retrace the development of the media industry. We start in the good old times when we had newspapers, that is, the public game with disclosure of interests, and then we check what happens if micro-targeting becomes possible, and if, additionally, the interest groups' types are not revealed. Now, what do we find? We have two major findings. The first is that as long as games are public, again, think newspaper, there is no major difference in the average individual efficiency of minority and majority voters. So that is basically fine. However, as soon as micro-targeting is introduced, so that messages can be customized to the type of the receiver, this becomes a problem for minority voters, but not for majority voters. What is the intuition? The intuition is that with two-thirds probability the sender is of majority type; by definition, that is the majority. Majority interest groups have an incentive to report truthfully when they send a message to majority voters, which is why they report truthfully in the first place. And then, despite the fact that they are not really keen on it, with public communication they thereby also report truthfully to minority voters.
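The disciplining intuition can be illustrated with a back-of-the-envelope calculation. The two-thirds prior on the sender being majority type is from the talk; the truth-telling rates of each sender type in the function below are illustrative assumptions, not estimates from the data.

```python
P_MAJORITY_SENDER = 2 / 3  # stated in the talk: two-thirds of senders are majority type

def prob_public_message_truthful(p_truth_majority=1.0, p_truth_minority=0.0):
    """Probability that a public message is truthful, given each sender type's
    truth-telling rate (the rates are illustrative assumptions)."""
    return (P_MAJORITY_SENDER * p_truth_majority
            + (1 - P_MAJORITY_SENDER) * p_truth_minority)
```

Even in the worst case sketched here, where minority senders always lie, a public message is truthful two-thirds of the time, because majority senders tell the truth to majority voters and a public message cannot be tailored. That is why minority voters can still profitably update on public news, and why this protection vanishes once micro-targeting lets the sender address them separately.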
So with a newspaper, minority voters also know "I get correct news", and then they can update their own beliefs, think about what is correct and what is in their own interest, and make a voting decision. However, as soon as micro-targeting is possible, this disciplining effect of public communication falls away. That is what we predicted theoretically, and that is also what we see in the data. And that is a pretty clear result: micro-targeting is bad for minorities. The second major finding is that if we ban micro-targeting, this alone has no significant effect for either majority or minority voters. However, disclosure is the crucial lever: if we force social media to disclose the interests of the message sender, this has positive effects, in particular for majority voters. And if we combine this disclosure with a ban on micro-targeting, it has positive effects for both majority and minority voters. So that is basically the second major message: if we combine a prohibition of micro-targeting with mandatory disclosure of the identity of senders, the effect is positive. To conclude: what I presented here is basically the first systematic evaluation of the effects of a micro-targeting ban and mandatory disclosure of interests on voting behavior. Our experiment suggests that disclosure of interests is crucial to enhance the efficiency of voter decision making, and that a micro-targeting ban enhances the efficiency of voting decisions by minority voters, but only in combination with mandatory disclosure. Thank you very much.