This is joint work with Dongkoo Chan, who is also in the audience. The paper is about how online misinformation leads to conflicts offline, what society does about it, and the perverse consequences that can follow. A brief motivation first. It is well documented that there is a lot of misinformation on social media platforms and that it often incites or inflames conflicts offline. To fix ideas, one example from before the pandemic, which seems long ago now, comes from Paris. Back in 2019, a fake social media video circulated claiming that Roma people in Paris had kidnapped children. Once the video went viral, groups of young people took up weapons, attacked Roma people, caused injuries, and set their cars on fire. This is obviously not pleasant, and society has responded. In particular, different countries have made various efforts to make social media platforms internalize the conflict costs driven by this misinformation. These include awareness campaigns; a famous recent example is the Wall Street Journal's series, the Facebook Files, an extensive investigation into how Facebook's algorithms fuel polarization, conflict, and violence. There are also congressional hearings: the CEOs of Facebook, Twitter, and Google regularly have to go to Congress and explain what they have been doing, as a matter of social responsibility, to help fight misinformation. There is also a growing research program on ethical algorithms in computer science: when we develop algorithms, we had better make sure they are ethical in the sense that they internalize the costs they generate. Governments also impose regulation, legislation on misinformation, for instance. On its face, these societal efforts appear desirable and effective. What I want to argue today is a cautionary observation concerning them. The main result of the paper is essentially that a platform which internalizes the conflict costs it generates, when it designs its algorithm to filter misinformation, could actually, perversely, aggravate conflicts. The main intuition is that citizens who are aware that the platform internalizes these conflict costs become too confident in the personalized content they read and, in turn, too hostile toward disagreeing opinions. The policy implication we draw is that these societal efforts are effective if and only if they are sufficiently aggressive. I am going to frame the talk around demonstrating formally what I mean by this slide. This is an applied theory paper, so the plan is to first give you a model and the corresponding equilibrium analysis. We will do two versions of the model. In the baseline version, I consider a platform that is self-interested: all the platform cares about is maximizing profit.
Then I move on to an alternative version of the model, in which the platform faces an ethical concern to internalize conflict costs. I solve for the equilibrium in each version, compare the two equilibria, and arrive at our main result, namely that in the alternative version the equilibrium conflict cost can perversely be larger. Finally, I discuss the related literature. So let's begin with the baseline model. This is a very simple setup: a one-shot game. There is a hidden state theta, distributed normally with mean normalized to zero and some prior precision p. You can think of theta as, for instance, the change in effectiveness of a COVID vaccine against a new variant relative to an older version of the virus. There are two kinds of players: a platform and a unit continuum of citizens. Citizen i is characterized by a bias b_i, which is a real number. The citizens want to learn the value of the true state by using the platform, and the bias represents the value that each citizen would like the true state to be. For simplicity, in this talk I assume the biases are commonly known. This is obviously a strong assumption, but it is just for convenience, so that I can avoid defining distributions over beliefs; in fact, for our results, the biases can be citizen i's private information, or it can be that nobody knows the biases at all. So that is the setup. Before I describe how the game unfolds in detail, here is a brief timeline. What we have in mind is a model where the platform is a news intermediary. Before it receives any news reports or contributions, the platform develops an algorithm that filters misinformation. Then nature draws the true state of the world. The platform then receives news reports about theta from, for instance, external sources that we do not model; these reports contain informative ones as well as fake ones, which we call misinformation. Given the reports and the chosen algorithm, the platform generates personalized content for each citizen about the state theta. The citizens read this personalized content and individually infer the state. In the baseline model we do not talk about conflicts, but once we get to the alternative version, after the citizens do their inference there will be disagreements, and those disagreements will lead to conflicts. So what do we mean by the algorithm? What we have in mind is simply a real number: at the very beginning, before theta is realized, the platform chooses a filter F, a non-negative real number. This filter is hidden from the citizens; I do not know, for instance, what exactly Facebook's algorithm is. We interpret a higher filter F as more aggressive filtering by the platform. In the model, the citizens do not take any actions; all they do is receive information and form beliefs.
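Collected in symbols (this is just a restatement of the primitives described so far):
\[
\theta \sim \mathcal{N}\!\left(0,\ \tfrac{1}{p}\right), \qquad \text{citizens } i \in [0,1] \text{ with commonly known biases } b_i \in \mathbb{R}, \qquad \text{filter } F \ge 0 \text{ chosen by the platform and hidden from the citizens.}
\]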
Next, let's think about how we model the personalized content that the citizens receive. Once the platform chooses the filter F, each citizen receives an individual, idiosyncratic signal, and that signal is what we model as the personalized content. The signal y_i consists of three components. First, the signal is informative about the state, so theta appears in it. Second, the signal is muddled with misinformation, which we model as random noise epsilon_i; this part is personalized, and epsilon_i is normally distributed with mean normalized to zero and precision q + F. So this noise describes the misinformation that escapes the filter: if the platform chooses a high F, meaning it filters misinformation more aggressively, epsilon_i is more concentrated around zero, and the signal y_i is more informative about theta. The parameter q is exogenous and describes the default precision of the noise if the platform does not filter at all. The noise is personalized because we want to capture the fact that each citizen's information acquisition on the platform depends on, for instance, her individual subscriptions to news sources on the platform. Finally, the third component, s_i, is an exogenous parameter capturing the slant in the citizen's personalized content: as I said, the content a citizen sees may well depend on her individual subscriptions to news sources on the platform, and those news sources may well be biased, so this slant simply captures the media bias. That being said, for today I will set s_i to zero to simplify the exposition; this does not affect the main result of the paper, and I will say more about it later. The signal y_i is assumed to be private to the citizen, so I do not see what is on your feed, for example. Now, after receiving the signal, the citizen has to do her inference. How? She forms a state estimate, which we model as the posterior mean of the state given the signal y_i. Of course, to do this inference the citizen has to know the data-generating process, and because the filter F that determines the precision of the signal is hidden from the citizens, when a citizen computes her state estimate she has to compute the expectation E_i based on the filter she expects the platform to have chosen, rather than the actual filter. We denote citizen i's expectation of the filter by F*_i.
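Before unpacking the platform's payoff, here is the information structure in symbols, reconstructed from the description above; the posterior follows from standard normal updating under citizen i's conjectured filter:
\[
y_i = \theta + s_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}\!\left(0,\ \tfrac{1}{q+F}\right) \ \text{independently across citizens},
\]
and, setting the slant \(s_i = 0\) as in the talk,
\[
\mathbb{E}_i[\theta \mid y_i] \;=\; \frac{q+F_i^{*}}{p+q+F_i^{*}}\; y_i, \qquad \operatorname{Var}_i(\theta \mid y_i) \;=\; \frac{1}{p+q+F_i^{*}}.
\]
Note that the weight and the posterior variance depend on the conjectured filter \(F_i^{*}\), while the distribution of the realized signal depends on the actual filter \(F\).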
With that, we can define the platform's payoff. Once the platform chooses a filter F, and given the citizens' expectations of the filter it has chosen, which we collect as F*, the platform's expected payoff is a revenue term minus a cost term, where beta, tau, and c are exogenous parameters. The cost term is the cost of developing the filter, and it is quadratic. The revenue term is modeled as follows. The dominant revenue source of social media platforms is advertising, and the platform earns more advertising revenue if it delivers content that attracts citizens to spend as much time as possible on the platform. When do citizens spend as much time as possible on the platform? When they enjoy the content they see there. How do we model enjoyment? We assume citizens enjoy content in two ways: content that conforms to their bias, and content that makes them feel they are being informed, learning something useful and closer to the truth. So the first component of the revenue is the bias-conforming component: if the citizen's state estimate is closer to her individual bias, the citizen is happier and this quadratic loss is small. The parameter beta measures the platform's benefit from providing bias-conforming content. The second component is the truth-learning component: if citizen i thinks she is learning a lot about theta, her posterior variance is smaller and she is happier. Again, tau is an exogenous parameter measuring the platform's benefit from providing truth-learning content. Now, just to make sure we are on the same page, recall that these posterior inferences are computed based on the citizens' expectations, because the citizens do not see the true filter chosen by the platform. In contrast, the outer expectation, as well as the cost, depends on the actual filter chosen by the platform. Why does the outer expectation depend on the true filter? Because the true filter affects the actual signals received by the citizens. This distinction between the actual filter and the citizens' expectation is going to be key when we derive the results; of course, in equilibrium they have to coincide, and we will get to that in a few slides. The solution concept for this one-shot game is Bayesian Nash equilibrium in pure strategies. Pure strategies are a genuine restriction; they allow us to have tractable belief updating. Nonetheless, we allow the platform to contemplate deviations to arbitrary strategies. Notice that I have not defined payoffs for the citizens: they do not take actions, so we do not need them in the model. So what happens in equilibrium? In equilibrium, the platform chooses the filter as a best reply to the citizens' expectation F*, and this best reply is precisely the citizens' expectation. In other words, in equilibrium the citizens' expectation has to be correct. Here I am abusing notation a bit, because on the last slide I wrote F* for the collection of all the citizens' expectations; but in equilibrium they have to be correct, and hence they all have to be the same.
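To summarize the platform's problem in symbols before the equilibrium analysis, under the functional forms reconstructed here (in particular, the quadratic filtering cost \(\tfrac{c}{2}F^2\) is an assumed specification rather than a verbatim quote from the paper), the baseline payoff given the citizens' conjectures \(F^{*}=(F_i^{*})_i\) is
\[
\Pi(F;F^{*}) \;=\; \mathbb{E}_{F}\!\left[\int_{0}^{1}\Big(-\beta\,\big(\mathbb{E}_i[\theta\mid y_i;F_i^{*}]-b_i\big)^{2}\;-\;\tau\,\operatorname{Var}_i(\theta\mid y_i;F_i^{*})\Big)\,di\right]\;-\;\frac{c}{2}\,F^{2},
\]
where the inner posterior moments are computed under the conjectured filters \(F_i^{*}\), while the outer expectation over realized signals and the filtering cost depend on the actual filter \(F\). An equilibrium is then a filter \(F\) that best replies to \(F^{*}\), together with correct conjectures \(F_i^{*}=F\) for all \(i\).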
So I am just going to abuse notation and say that F* is the single filter that the citizens expect. Yes, I see a question; let me read it out. The question points out that the platform's objective has two components, the bias-conforming one and the truth-learning one, but the platform has only one instrument, the filter: have we thought about letting the platform also choose the level of slant, and does that do anything interesting? Let me respond in two parts. The first reason I assume the slant is exogenous here is that I want to focus on misinformation, not on how the platform might send or recommend particular information and hide other information. The second reason is that if I let the platform choose slants, then, given that all the citizens here are rational, they can perfectly undo the slant chosen by the platform when they do their inference, unless we allow for more complicated slanting strategies, or strategies that affect the information the citizens perceive. That being said, the most important reason is that I want to focus on one single instrument, namely the platform's incentive to filter misinformation. But it is a good point, and I will come back to it when I do the summary in the talk. Thanks for the question. All right, so that is the model; let's look at the equilibrium in the baseline model. In this model there is a unique equilibrium, and in it the platform chooses the filter characterized by an equation that sets marginal benefit equal to marginal cost: the left side is the marginal benefit and the right side is the marginal cost. The key observation about this equation is that the equilibrium filter is strictly increasing in beta, the benefit from providing bias-conforming content, and is independent of tau, the benefit from providing truth-learning content. The interpretation is that the platform in equilibrium filters only to better provide bias-conforming content; the platform could not care less about helping citizens learn the truth. Why is this? Look at the truth-learning component of the platform's payoff, given the citizens' expectation F*: it is the expected posterior variance. With a bit of algebra, Bayesian updating simplifies this posterior variance into an object in which nothing is random, so we can remove the expectation operator, and this object is independent of the actual filter chosen by the platform.
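Written out (again a reconstruction under the forms above), the truth-learning term is
\[
\mathbb{E}_{F}\!\big[\operatorname{Var}_i(\theta\mid y_i;F_i^{*})\big]\;=\;\frac{1}{p+q+F_i^{*}},
\]
which depends neither on the realized signal nor on the actual filter \(F\), only on the conjectured filter. The equilibrium marginal-benefit-equals-marginal-cost condition therefore comes entirely from the bias-conforming term and, under the assumed quadratic cost, takes the form
\[
\frac{\beta}{(p+q+F)^{2}}\;=\;c\,F,
\]
whose unique solution is strictly increasing in beta and independent of tau, as stated.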
And the key, of course, is that the citizens do their posterior inference based on the filter they expect the platform to have chosen, not the actual filter. Hence, when the platform chooses its filter given the citizens' expectation, it really could not care less about this component of the revenue. Now, this observation relies on normality, which gives a posterior variance independent of the signal received by the citizens; I will provide more discussion later. What I will say for now is that the fact that this posterior variance is not random is not what drives our main result, the perverse-effect result. The question for the platform is then how to better provide bias-conforming content, given the citizens' expectation. The bias-conforming component involves the citizen's state estimate, and again a bit of algebra simplifies that estimate: it is just a weighted average of the signal received by the citizen and the prior mean, which is normalized to zero. So the state estimate can be reinterpreted as a weighted signal. Putting this back into the bias-conforming component, what is really happening is that the platform wants to choose the filter to minimize the expected quadratic loss of this weighted signal from the citizens' biases. How can the platform minimize this expected quadratic loss? By minimizing the dispersion of the signals. And how does the platform reduce signal dispersion? It filters. The platform has a stronger incentive to filter when the benefit beta from providing bias-conforming content is higher. That gives us the first result. In the interest of time, I will skip the other two comparative statics, with respect to the prior precision p and the default precision q, but I am happy to talk about them later if people are interested. Now let's move on to the ethical concern. In equilibrium, the citizens' state estimates typically disagree, as you can probably imagine by now, because they receive individual signals. We are going to assume that these disagreements lead to conflicts that are costly for a platform that faces an ethical concern, and I will define this term formally on this slide and the next. The first thing is to define disagreement. We measure any two citizens' disagreement by the distance between their state estimates. You can imagine that the government is designing a policy that affects the whole society's welfare, the optimal such policy is the one that matches the true state, and the citizens disagree about what the optimal policy is. The conflict cost, given the realized signals of the citizens and what they expect the platform to have chosen as its filter, is then defined as follows. The first piece is literally the disagreement between citizens i and j once they see their own signals, y_i and y_j, and the cost induced by this disagreement is again quadratic.
So we have the square. And of course, in the society it is not just citizens i and j; there is a unit continuum of them, so we take the double integral, and because the double integral double-counts the citizens, we normalize by one half. That is our definition of conflict cost. Now let's define what a platform facing an ethical concern is. In this alternative version of the model, the platform's payoff is exactly the same as before, with one additional term: the expected conflict cost that it induces. The first line is what we have seen; the second line is new, and it is the expected conflict cost. Here h is an exogenous parameter: if the platform generates more conflict cost, it gets a smaller payoff. So we say that this platform faces an ethical concern to mitigate conflicts, and the parameter h measures the strength of the platform's ethical concern. The model is otherwise identical; the only difference between the alternative version and the baseline is this additional term capturing the conflict cost. With this in mind, let's look at what happens to the equilibrium once we introduce the ethical concern. Again there is a unique equilibrium, and in it the platform chooses the filter characterized by an equation very similar to the one we saw a few slides ago. The only change is a new term, plus h, that increases the platform's marginal benefit of filtering. In particular, if the strength of the ethical concern is larger, the platform's marginal benefit of filtering is larger, which implies that the filter under ethical concern is strictly larger than the filter absent ethical concern. The intuition is very simple: the platform now has an additional incentive to filter so as to reduce the dispersion of the citizens' signals, and thereby reduce the disagreement between the citizens. With all this in mind, we are now ready for the main result. We look at the equilibrium conflict cost, which is the same object we have just seen, except that now we no longer distinguish between the platform's actual filter and the citizens' expectation of it: we are looking at equilibrium, where the citizens' expectation has to be correct, so we use the same single filter. With this definition, here is the main result. There exists a cutoff, or threshold, whatever you like to call it, h-bar, for the strength of the ethical concern, such that the equilibrium conflict cost associated with the self-interested platform is larger than the equilibrium conflict cost associated with the platform with ethical concern if and only if the strength of the ethical concern is large enough.
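In symbols, and again under the reconstructed functional forms (the exact normalizations here are assumptions), the expected conflict cost and the ethical platform's problem are
\[
C(F^{*},F)\;=\;\mathbb{E}_{F}\!\left[\frac{1}{2}\int_{0}^{1}\!\!\int_{0}^{1}\big(\mathbb{E}_i[\theta\mid y_i;F^{*}]-\mathbb{E}_j[\theta\mid y_j;F^{*}]\big)^{2}\,di\,dj\right],
\qquad
\max_{F\ge 0}\;\Pi(F;F^{*})\;-\;h\,C(F^{*},F),
\]
and the equilibrium condition becomes
\[
\frac{\beta+h}{(p+q+F)^{2}}\;=\;c\,F,
\]
so the equilibrium filter is strictly increasing in \(h\) and strictly exceeds the baseline filter whenever \(h>0\). The main result can then be stated as: there exists a threshold \(\bar h\ge 0\) such that the equilibrium conflict cost under ethical concern is weakly below the self-interested benchmark if and only if \(h\ge\bar h\).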
In other words, an immediate corollary of the main result is that if the platform has an ethical concern but that concern is not strong enough, the ethical concern can aggravate equilibrium conflicts and increase the equilibrium conflict cost. This is what we want to shed light on. What is the intuition? If we simplify the equilibrium conflict cost, we can write it in a form whose first key property is that it is single-peaked in F, the filter chosen by the platform. Why? If we increase the equilibrium filter, there are two consequences. The first is a learning effect: when the platform filters more aggressively, there is smaller dispersion in the signals across citizens, so the citizens learn the common state better, and that mitigates conflicts between them. On the other hand, there is a confidence effect: the citizens know that the platform is now filtering more aggressively, so they put a higher weight on the individual signals they receive, and that aggravates their disagreements with other citizens. With this observation in mind, the ethical concern mitigates equilibrium conflicts if and only if the learning effect dominates the confidence effect, which is the case whenever the filter under ethical concern is sufficiently larger than the filter absent ethical concern. And when does that happen? When h is sufficiently large, meaning the ethical concern has to be sufficiently strong. That is the roadmap of the proof. You can take a step back and ask why this happens: why does the ethical concern fail to mitigate conflicts? Of course, the platform understands that choosing a higher equilibrium filter could aggravate conflicts, because of the single-peaked structure and the learning and confidence effects. But once the citizens have formed their expectation of the filter, the platform with ethical concern internalizes the conflict cost incorrectly: it internalizes the object in which the citizens' expectation F* enters, instead of the one from the previous slide where F* equals F. Why? Because the first term is the weight the citizens put on their own signals, which captures the confidence, and the citizens do not observe the actual filter: that weight is pinned down by the citizens' expectation, not by the platform's actual choice. So once the platform with ethical concern internalizes this object, its actual filter does not affect the confidence term; to reduce the conflict cost, the platform wants to act on the other term, boosting the filter F to help the citizens learn the true state. And so the platform always wants to filter more, even once the citizens have formed their expectation of what the platform is doing.
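To make the two effects concrete under these reconstructed expressions: holding the citizens' conjecture \(F^{*}\) fixed, the expected conflict cost the platform internalizes is
\[
C(F^{*},F)\;=\;\left(\frac{q+F^{*}}{p+q+F^{*}}\right)^{2}\cdot\frac{1}{q+F},
\]
where the first factor is the weight citizens put on their own signals (the confidence effect, pinned down by \(F^{*}\)) and the second factor is the dispersion of the signals (the learning effect, moved by the actual \(F\)). With \(F^{*}\) fixed, raising \(F\) only shrinks the learning factor, so the platform always wants to filter more. In equilibrium, where \(F^{*}=F\), the same object becomes \((q+F)/(p+q+F)^{2}\), which is single-peaked in \(F\) with its peak at \(F = p - q\) when \(p > q\): below the peak the confidence effect dominates and more filtering raises conflict; beyond it the learning effect dominates.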
Of course, in equilibrium the citizens correctly anticipate this incentive of the platform, so they adjust their expectation F* upward to a higher filter, and that is what ultimately produces the perverse outcome. One question you may ask is: how large is this cutoff threshold? In the proposition we say that there is no perverse outcome if the ethical concern is strong enough, namely larger than this threshold; equivalently, if the threshold is larger, a perverse outcome is more likely. So we want to understand the structure of this threshold. To do that, recall that p denotes the prior precision of the state theta. We can characterize the structure of the threshold as follows. If the prior precision is sufficiently large, then the threshold is positive and strictly increasing in p: when there is more prior consensus about the state, it is harder to preempt the perverse outcome. On the other hand, if the prior precision is small, the threshold is zero, so there is literally no perverse outcome. The reason is that when the prior precision p is large, the citizens put a smaller weight on the signals they receive from the platform, hence the learning effect is limited and more likely to be dominated by the confidence effect. All right, I think I only have a few minutes left, so let me comment on the setup. Clearly our results rely on the quadratic-normal setup, and the reason for using it, as in many signaling papers in the literature, is tractability: it affords us a sharp characterization of the equilibrium implications of ethical concern. Nonetheless, we conjecture that our main insight, the perverse effect of ethics, extends to more general settings, because qualitatively the key to our perverse-ethics result is the fact that the citizens form an expectation of the hidden filter, and once this expectation is in place, the platform with ethical concern always wants to filter more in order to reduce the dispersion of the signals. So, in the interest of time, let me just advertise a bit. I will not talk about them in this talk, but in the paper we apply these results to look at government efforts and some proposed regulations: legislation regarding misinformation, arresting people who spread misinformation, and forcing platforms to be transparent about their algorithms. We also look at credulous citizens, who are non-Bayesian and simply take whatever signals they receive at face value; there, the news-source slants do matter. We then study the conflicts between the two groups, the credulous citizens and the rational citizens we have considered today, and look at the effect of a media-literacy campaign that turns credulous citizens into rational citizens, studying its effect on offline conflicts. Well, I think I have 30 minutes left. I mean, 30 seconds. Let me skip the literature and just say thank you.
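A small numerical illustration of the threshold structure just described, using the reconstructed equilibrium condition \((\beta+h)/(p+q+F)^2 = cF\) and equilibrium conflict cost \((q+F)/(p+q+F)^2\); all functional forms and parameter values here are illustrative assumptions rather than numbers taken from the paper:

```python
# Illustrative only: equilibrium condition and conflict cost as reconstructed
# above; parameter values (p, q, beta, c) are arbitrary choices, not from the paper.

def equilibrium_filter(h, p=3.0, q=1.0, beta=0.1, c=0.01):
    """Solve (beta + h) / (p + q + F)^2 = c * F for F >= 0 by bisection."""
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (beta + h) / (p + q + mid) ** 2 - c * mid > 0:
            lo = mid   # marginal benefit still exceeds marginal cost
        else:
            hi = mid
    return 0.5 * (lo + hi)

def conflict_cost(F, p=3.0, q=1.0):
    """Equilibrium conflict cost (q + F) / (p + q + F)^2, single-peaked at F = p - q."""
    return (q + F) / (p + q + F) ** 2

F0 = equilibrium_filter(0.0)
baseline = conflict_cost(F0)
print(f"h =  0.0   F = {F0:5.2f}   conflict = {baseline:.4f}   (self-interested benchmark)")
for h in (1.0, 2.0, 4.0, 8.0):
    F = equilibrium_filter(h)
    c_h = conflict_cost(F)
    flag = "perverse" if c_h > baseline else "mitigating"
    print(f"h = {h:4.1f}   F = {F:5.2f}   conflict = {c_h:.4f}   ({flag})")

# With these illustrative parameters, small h pushes the equilibrium conflict cost
# above the h = 0 benchmark (the perverse outcome); only beyond a threshold (just
# under h = 4 with these numbers) does the ethical concern actually mitigate conflict.
```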
Thank you very much, Alan, and also for sticking so well to time, even finishing a few seconds early. Perhaps now we can go to Sunni for a five-minute discussion. Yeah, thanks for the opportunity to discuss this paper; I enjoyed reading it. I am not super familiar with this exact topic, although I normally deal with related information topics. Although I have no slides, the paper reminds me of a figure, I do not remember from which paper, that ranks news outlets from more credible to less credible. In my mind, the model here is like a news aggregator drawing from many news sources, some of them credible and some of them not so credible. When we talk about the filter, it is like drawing a line on that figure and simply not letting the less credible news appear or be shared on, say, Twitter or Facebook. So the filter operates along this credibility, or misinformation, dimension. But there is also another dimension, which is about bias: left wing, right wing, or other kinds of bias. As in the question asked by Professor Cabo, there is also slant or censorship; Twitter and Facebook are often said to lean to one side, so that some information is not allowed to be shared or is discriminated against by the algorithm. In practice, what platforms usually do is actually both. Of course, some misinformation is filtered; on the other hand, there is also bias in the algorithm about what can be shared. For example, with the COVID-19 vaccine, you can freely talk about how good the vaccine is and that everybody should take it; but if you say the vaccine is fake, ineffective, or extremely dangerous, I guess there will be some sort of censorship or a label on that. So what happens in reality is that these two things go together, especially since your model already has an element of bias, so it is very natural for people to ask how these two different things play different roles. I am not an expert on this kind of model, but I think the model is very elegant, and the weighting part, how people weight different pieces of information, is a great way to capture this; the objective function is well formulated and captures the right things. One piece of advice I can give to make the paper more attractive is to stick with one example throughout the paper; it is very important to revisit the example. At the beginning you talk about the fake video that caused conflict, and then you talk about the vaccine-effectiveness example. I think the vaccine-effectiveness example is more attractive, so suppose you stick to that. First, is there really a misinformation-causes-conflict story like this for vaccines? The anti-vaccine protests we see in many places, do they have anything to do with the filtering on social media? What have the social media platforms actually been doing?
Because when you go to Twitter and open such a link, you will see that they put labels on whether information about COVID-19 is trustworthy, and sometimes they will just delete the message. So what do platforms do in practice for this particular example? What do they do in filtering information about the vaccine, and maybe the pandemic? It is then important to revisit the example when you get to the result. I know this is a great economics result, a perverse effect; we always like perverse results, and that is something we like to write in a paper, but it would be really valuable if you could give us one or two examples where it actually happens in reality in this vaccine case. Does it happen that people see information about the vaccine on the internet, believe that what they read is true because the misinformation has been filtered out, and then their beliefs become more extreme? The government pushes harder and harder on vaccination, for instance with vaccine passports, some people strongly support it and some people strongly oppose it, and that causes anti-vaccine protests. Does this really happen, and does it have anything to do with misinformation and filtering on social media? Also, you passed over that part in the talk, but I liked the part of the paper about the legislation, the arrests, and the transparency requirements. You mention these different efforts to design ethical algorithms in practice, but when you are reviewing these government policies it would also be good to revisit the example: for the vaccine, or for some violent events, what did the platforms do? I think you mention these examples at the beginning of the paper and then never revisit them, so that is one piece of advice I can provide. And yeah, I enjoyed reading the paper, and that is all I want to say.