Welcome to this new episode of Robustly Beneficial. Today we're going to discuss a paper published in the Proceedings of the National Academy of Sciences, called "Exposure to opposing views on social media can increase political polarization." I really, really enjoyed this paper, especially its finding, because it's counterintuitive and it shows just how difficult being robustly beneficial is, especially when it comes to human interactions, and particularly here with political polarization, a topic where you can easily get backfire effects. It's easy to come up with ideas that sound intuitively amazing, but when you test them, you realize that things are actually a lot trickier than that.

The basic idea of the intervention in this study comes from the fact that, because of recommender systems on social media, we mostly see the same type of content we are already used to, because the algorithms have figured out that this is what we are most likely to click or like on. This creates the phenomenon called echo chambers: if you are a Democrat, you will mostly see pro-Democrat arguments in your newsfeed, and if you are a Republican, you will mostly see pro-Republican arguments in your newsfeed. This amplifies how politically polarized people are on social media. And the first idea that comes up to fight this is simply to take content from another echo chamber and show it to you in your own echo chamber. This is what the study in this paper tests.

Yeah, I think the idea of breaking the echo chamber, of exposing people to content from echo chambers they don't belong to, is something that has been promoted a lot. It's called diversity in recommendations. In many discussions and many papers about how to design more beneficial recommender systems, there's this recurring suggestion that we should increase diversity. And it's often not tested; it's more of an intuitively good idea that people have, with the assumption that it will reduce political polarization. This paper is very interesting because it shows that it's not that simple.
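Before getting to the experiment, it may help to see how little it takes for a click-optimizing recommender to produce an echo chamber. Here is a toy simulation, purely illustrative and not from the paper: the content categories, the user's click rates, and the greedy policy are all made up for the sake of the example.

```python
# Toy simulation (illustrative only) of the echo-chamber feedback loop:
# a recommender that greedily shows the content category with the highest
# observed click rate ends up showing almost nothing else.
import random

categories = ["pro_A", "pro_B", "neutral"]
clicks = {c: 1 for c in categories}   # smoothed click counts
shows = {c: 2 for c in categories}    # smoothed impression counts

# A hypothetical user who clicks pro_A content most often.
true_click_rate = {"pro_A": 0.7, "neutral": 0.4, "pro_B": 0.1}

feed = []
for _ in range(1000):
    # Greedy choice: show whatever has the best empirical click rate so far.
    best = max(categories, key=lambda c: clicks[c] / shows[c])
    shows[best] += 1
    if random.random() < true_click_rate[best]:
        clicks[best] += 1
    feed.append(best)

# The tail of the feed is almost entirely pro_A: an echo chamber.
print({c: feed[-100:].count(c) for c in categories})
```

Because the greedy policy stops showing categories it estimates as unpopular, it also stops updating its estimates for them, so the feed collapses onto one category even though the user would have clicked on some of the others.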
Here's the way they ran the experiment. They recruited about 1,600 participants. At the beginning of the experiment, participants were tested to measure where they fall on the political spectrum, so they had approximately 50-50 participants in each category: a category of Republicans and a category of Democrats. Then they took half of each category and subjected them to the treatment protocol, which consisted in simply following a Twitter bot and seeing the messages it posted. The bot retweeted 24 messages per day, picked either out of a cluster of Republican accounts or out of a cluster of Democrat accounts. Participants had been selected based on how much they use Twitter; only participants using Twitter more than three times a week on average were considered, so the participants were expected to really interact with the bot. Every week, participants in the treatment condition had to answer a small survey, a small test, to make sure they were actually engaging with and seeing the messages from the Twitter bot. Among these messages, some were not even political but were simply attention checks: the bot would post an image of an animal, and participants were asked which animal had been posted, so that the researchers could measure how much participants were really interacting with the bot. This lasted for between four and six weeks, and then participants' positions on the political spectrum were measured again.

And what they found is that Republicans who interacted with the Twitter bot retweeting Democrat content became about half a standard deviation more polarized toward their own side. They did not find a statistically significant result for Democrats, but the estimate seemed to go in the same direction, toward more polarization.

Yeah, this is very bad news. It shows that a simple intervention like this one is, at the very least, not going to be effective, and can even be counterproductive in terms of political polarization. And if you think about it a little more, it kind of makes sense: it's not clear that being exposed to more content from politicians of the party you don't like will get you to enjoy their ideas more. This is not specific to this study, either. Other research in psychology shows this backfire effect, sometimes called reactance: if you are exposed to an idea that's too remote from what you believe, and especially if this idea is expressed in a very clashing, very harsh way, you can feel that the message is threatening what you believe. Then you go into a defensive mode, a soldier mode, where you are trying to defend your ideas because you feel you are under attack. This can build up the habit of defending your ideas all the time, which can increase polarization and prevent you from better understanding the ideas of the other side, for instance. So there's a lot of psychological groundwork here, and based on it, I guess it could have been predicted that the result would go this way. When I read this paper, I was maybe not thinking about this psychological background enough; in any case, I was quite surprised by the results, and I think this is something to take into account when designing more robustly beneficial algorithms.

I wanted to ask a question. Suppose you had to design a recommender system, you have these findings in mind, and you know that naively diversifying the feed is not necessarily something that will decrease polarization. What would be a less naive way to diversify the feed? I have a proposition, and maybe you have another one. Say there's echo chamber A and echo chamber B. If I show someone from B a post from A, he or she gets angry, on average; that's what happens if you do it naively. But could we use this finding to spot posts where someone from B uses the vocabulary of B to promote an idea from A, and then spread those posts within B? How easy is it to spot the vocabulary of echo chamber B? So, someone from B promoting, in the vocabulary of B, an idea coming from A. Say someone who is completely for the non-intervention of the state: he or she uses vocabulary from Reaganite politics to promote public healthcare. If we can spot that, we can show this kind of post to echo chamber B. And of course, symmetrically on the other side: someone who uses the vocabulary of public intervention, a strong state, and public services to promote ideas that sound like they come from echo chamber B.
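To make this proposal a bit more concrete, here is a minimal sketch of such a "B vocabulary, A idea" detector. Nothing here comes from the paper: the corpora are toy examples, the stance label is assumed to come from elsewhere (a separate classifier or human labeling), and the threshold is arbitrary.

```python
# Minimal sketch (toy data, hypothetical labels): learn each chamber's
# vocabulary, then flag posts whose wording sounds like chamber B while
# their stance label says the idea comes from chamber A.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpora standing in for posts collected from each echo chamber.
posts_a = ["public healthcare is a right", "strong public services matter"]
posts_b = ["free markets drive growth", "less government, more liberty"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts_a + posts_b)
y = [0] * len(posts_a) + [1] * len(posts_b)  # 0 = chamber A, 1 = chamber B

vocab_clf = LogisticRegression().fit(X, y)

def sounds_like_b(text):
    """Probability that the wording matches echo chamber B's vocabulary."""
    return vocab_clf.predict_proba(vectorizer.transform([text]))[0, 1]

# A post promoting an A-side idea (public healthcare) in B-style vocabulary.
candidate = ("less government waste: markets alone cannot insure everyone, "
             "a lean public healthcare option maximizes liberty")
stance = "A"  # assumed given; stance detection is a separate, harder problem

if stance == "A" and sounds_like_b(candidate) > 0.5:
    print("bridge candidate: show this post to echo chamber B")
```

In practice, the stance label is the hard part, and real communities share far more vocabulary than this toy example suggests, but the overall shape of the pipeline would be similar.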
Yeah, I think this is an interesting proposal. But I guess, in the end, we need more data. One thing this paper really shows is that all of these phenomena are very complex, and we need a lot more data to better understand the impact of this or that kind of recommendation. And maybe we should also stress that, while the study initially included about 1,600 people, not all of the users followed through, so we only have partial data, and there's a lot of variability from one person to another.

I agree that this is complex; I just want to add further speculation. Imagine we run the experiment and we find an even less intuitive result: showing someone from echo chamber B an idea from echo chamber A, promoted with the vocabulary of B by someone from B, doesn't make them like the idea; instead, they split off into an echo chamber B prime. Then you'd have B, B prime, A, and A prime: more echo chambers, instead of a solution to the polarization problem. I don't know, this is just speculation.

Yeah, definitely. I'm sure the phenomenon is complex enough that running the experiment could lead to yet another non-intuitive finding. As Lê said, we need more data, and while collecting this data, we also want the algorithms to be beneficial. A way to do this is something we already discussed several weeks ago: using a multi-armed bandit algorithm. The idea is simply that an algorithm runs the experiments to collect data, but does it in a way that reduces its own uncertainty about which interventions are good and which are not. Maybe the intervention you propose turns out to be a good one; in that case, the algorithm will try it, observe that it is doing what we want it to do, and select it more and more as time goes by, unless some other intervention turns out to be even better. If instead we get the unintuitive result that your intervention is a harmful one, then an algorithm using multi-armed bandit exploration would lower the number of times it actually applies this intervention, because it sees that it's not beneficial.
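Here is a minimal sketch of that multi-armed bandit idea, using Thompson sampling with binary rewards. Everything in it is hypothetical: the candidate interventions, the reward signal (some proxy for depolarization), and the simulated success rates are placeholders, not anything measured in the study.

```python
# Minimal sketch of multi-armed bandit exploration (Thompson sampling).
# Each "arm" is a candidate intervention; the reward is a hypothetical
# binary signal for "this exposure had the desired depolarizing effect".
import random

interventions = ["naive_diversity", "reframed_diversity", "status_quo"]

# Beta(successes + 1, failures + 1) posterior over each arm's success rate.
stats = {name: {"success": 0, "failure": 0} for name in interventions}

def choose():
    """Sample a plausible success rate for each arm; pick the best sample."""
    draws = {
        name: random.betavariate(s["success"] + 1, s["failure"] + 1)
        for name, s in stats.items()
    }
    return max(draws, key=draws.get)

def update(name, worked):
    """Record whether the intervention had the desired effect."""
    stats[name]["success" if worked else "failure"] += 1

# Stand-in for the real (slow, noisy) measurement of polarization change.
TRUE_RATES = {"naive_diversity": 0.3, "reframed_diversity": 0.6, "status_quo": 0.5}

for _ in range(2000):
    arm = choose()
    update(arm, random.random() < TRUE_RATES[arm])

# Over time, most trials concentrate on the best-performing intervention,
# while clearly harmful arms are tried less and less.
print({name: s["success"] + s["failure"] for name, s in stats.items()})
```

The appeal is exactly what was just described: arms that look harmful get sampled less and less, while the algorithm keeps reducing its uncertainty about the promising ones. The next point, though, is where this picture runs into trouble.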
Yeah, and these are things we already discussed in other episodes more dedicated to those topics. But another thing we discussed in a much earlier episode is the problem of long-term effects. This is particularly striking for phenomena like political polarization, which unfold over months. It's not like you watch some content and right after that you become polarized; it's more of a subtle, step-by-step process that only becomes important after months. And A/B testing and multi-armed bandits have their limits when it comes to long-term effects. So it's very complicated, unfortunately, but I think it's very, very important; that's the thing, it's both very hard and very important. We really need to put a lot more means into data collection on these questions if we want to make progress. And I think this is a research area that requires a lot of interdisciplinarity. Of course it has to do with politics and sociology, but it also has a lot to do with psychology, with the way we interpret different things, and with data and social networks. All of this expertise has to be combined, and unfortunately, I'd say, it's not an area of research that's investigated enough so far.

Yes. To take Mahdi's example, for instance, another risk I could imagine is that even when the two sides agree on the same ideas, because they're using different vocabularies they may still feel they are in contradiction with the other side, and they would keep clashing even though they actually agree; just the difference in vocabulary can make things very hard at times.

Another type of intervention we discussed was simply pushing people to think more in terms of nuance and uncertainty, because it's quite dramatic that people on two opposite sides are overconfidently convinced that they are right and that the others are wrong. Surely the right way to think about these kinds of issues is to be somewhere in the middle: to understand that there is a difficult trade-off between the positive and negative arguments, and that it's a genuinely difficult debate, instead of focusing only on the arguments one likes to hear, indulging one's confirmation bias, and being confidently convinced of one's own ideas.

Yeah, one thing we discussed was this idea that when it comes to politics, it's very easy to fall into a soldier mode, where instead of trying to understand problems better, you defend your ideas and try to stick to them. There's this book about the soldier mindset versus the scout mindset, and there are apparently connections with neurobiology: the amygdala, which is involved in fear, is strongly activated when we talk about politics. You can really picture yourself as ready to fight and to defend your ground. So when it comes to politics, there's this political line, and as soon as a topic becomes political, we cling to where we are on that line; we defend our position. And that's not the ideal framework for exploration, for understanding, for sharing different ideas.

So instead of putting people in this mode, before trying to get them to slide along this line, a prior step should probably be to get everybody to sort of stand up, to rise above their position, to be willing to move a little bit. And the way to do this would probably not be by talking directly about politics, but maybe by proposing content that is more engaging in terms of curiosity, content that is intriguing, that makes people think, maybe content that is more meta as well, that gets people to think about how they think and how others think. Maybe the best way to fight polarization would be something along these lines. But yeah, again, further research and more data collection are needed to understand all of this better.

So thank you for listening to our podcast. Next week we will discuss the recently added entry on AI ethics in the Stanford Encyclopedia of Philosophy. Thank you for listening, bye bye.