Okay, very good. So for this final project of today, which is called Beyond Rational Herding, Samyak and Yuchun are going to give us a summary of what they did. This was supervised by myself and by Mateo Marci. So Samyak, please go ahead. Okay. So we will first introduce our research, including the background of the issue and the aim of our project. We will then present the baseline model and its results, and then the extended model. The presentation will close with a summary and recommendations for future research. The issue we are interested in is the information bubble, which refers to the phenomenon that the information people receive is filtered to fit their pre-existing attitudes. The information bubble is often linked to rapid societal changes, such as the polarization of society, and it has been found to harm healthy civic discourse and open-minded deliberation. There are several possible causes of information bubbles. The first is selective exposure, or homophily, which suggests that people tend to accept information that aligns with their attitudes and to gather with people who are similar to them. The second is social influence, which suggests that people's attitudes may be affected by their environment, such as social norms, so that people become more similar to others. Nowadays, the effects of these two factors are reinforced by algorithms: social platforms use algorithms to improve the online experience by filtering information judged relevant to each user. So the aim of our project is to propose a model that explains the systematic formation of information bubbles. We are particularly interested in whether homophily alone can result in an information bubble. Our model is built on the social climbing game. In this game, agents can use their links to contact more influential members of the society.
Individuals can optimize their position in the network by becoming as central as possible through links to people who are in the center. So it is assumed that utility depends on how central the agent and its neighbors are. Here k denotes the number of links an agent has, and a_ij indicates whether agents i and j are connected: one means they are connected and zero means they are not. The first term captures the centrality of the neighbors, and the second term captures the centrality of the agent itself. The dynamics of the social network can be summarized as follows. At each step, we randomly pick an agent i and one of its neighbors, agent j. We also randomly pick one of agent j's neighbors, agent l, with the constraint that agent l is not agent i. If agent l is already connected to agent i, nothing happens. If they are not connected, then with some probability the link to agent j is replaced by a link to agent l. That probability is determined by the change in utility and by the parameter beta. Beta represents the relative weight between the observed and the unobserved part of the utility. A large beta implies that a decrease in utility is not acceptable, that is, social status is valued so highly by the agents that everything else is unimportant. A beta of zero means that social status is not important: each proposed rewiring is then accepted with a 50% chance, so utility may just as well decrease. In a word, beta captures how strongly individuals pursue social status. Here is a figure that shows the effect of beta. The left panel shows a small beta, meaning individuals put little effort into pursuing social status by linking to influential people; the social hierarchy is not very pronounced, and you can see that the sizes of the nodes are distributed quite evenly. But as beta grows large, a social hierarchy appears: you can see that almost all agents link to the central one.
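The rewiring rule just described can be sketched in code. This is a minimal illustration, not the authors' implementation: the utility (own degree plus neighbors' degrees) follows the verbal description in the talk, and the logistic acceptance probability is an assumption consistent with the remark that at beta zero a utility decrease is accepted half the time.

```python
import math
import random

def utility(adj, i):
    """Utility of agent i: own degree plus the degrees of its neighbors
    (terms reconstructed from the talk's verbal description)."""
    return len(adj[i]) + sum(len(adj[j]) for j in adj[i])

def rewiring_step(adj, beta, rng=random):
    """One step of the social climbing dynamics: pick agent i, a neighbor j,
    and j's neighbor l != i; if i and l are not yet linked, replace the i-j
    link with i-l with a probability depending on the utility change and beta
    (logistic acceptance is an assumption, not confirmed by the talk)."""
    i = rng.choice(list(adj))
    if not adj[i]:
        return
    j = rng.choice(sorted(adj[i]))
    candidates = [l for l in adj[j] if l != i]
    if not candidates:
        return
    l = rng.choice(candidates)
    if l in adj[i]:
        return  # already connected: nothing happens
    before = utility(adj, i)
    # tentatively rewire i-j into i-l
    adj[i].discard(j); adj[j].discard(i)
    adj[i].add(l);     adj[l].add(i)
    delta = utility(adj, i) - before
    if rng.random() >= 1.0 / (1.0 + math.exp(-beta * delta)):
        # rejected: undo the rewiring
        adj[i].discard(l); adj[l].discard(i)
        adj[i].add(j);     adj[j].add(i)
```

Note that the move preserves agent i's degree and the total number of links, so only the link structure, not the link count, evolves.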
Here is another graph showing the effect of beta. The y-axis is the largest degree divided by the number of agents, which reflects the degree of social hierarchy, and the x-axis is beta. You can see that as beta increases, the degree of social hierarchy increases. Now I'm going to talk about our model, which is built on the model I've just introduced; we add some new elements to it. The first is ideology. We assume all agents have their own ideology, either left or right, and they can choose whether or not to disclose it. Utility now depends on centrality and on the social norm. Centrality is the same as before, while the social norm indicates how comfortable the agents are in their social circle. Here is the mathematical presentation of our model. Suppose there are n agents connected through an undirected network, with a fixed total number of links. There are two ideologies, left and right, coded as minus one and one. A fraction theta of the agents are left leaning and a fraction one minus theta are right leaning. Likewise, a fraction lambda of the agents reveal their ideology and a fraction one minus lambda do not. Theta and lambda are independent of each other. We assume that all agents infer the social norm by observing their neighbors' attitudes. Here is an example. For a left-leaning agent, the social norm counts the number of neighbors who disclose that they are left leaning, plus the number of neighbors who do not disclose their attitude times theta, the fraction of left-leaning agents, which gives the expected number of left-leaning agents among those who do not disclose. In total we get the expected number of left-leaning neighbors, and we divide it by the total number of neighbors. And here is the utility function we have in our model.
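Before moving on to the utility, the social-norm computation just described can be sketched as follows; function and argument names are illustrative, and ideologies are coded as minus one (left) and one (right) as in the talk.

```python
def social_norm(i, adj, ideology, discloses, theta):
    """Perceived social norm of agent i: the expected fraction of i's
    neighbors sharing i's ideology. Disclosed neighbors are counted
    directly; undisclosed neighbors contribute at the population rate
    (theta for a left-leaning agent, 1 - theta for a right-leaning one)."""
    k = len(adj[i])
    if k == 0:
        return 0.0
    base = theta if ideology[i] == -1 else 1.0 - theta
    same = sum(1 for j in adj[i]
               if discloses[j] and ideology[j] == ideology[i])
    hidden = sum(1 for j in adj[i] if not discloses[j])
    return (same + base * hidden) / k
```

For example, a left-leaning agent with four neighbors, one disclosed left, one disclosed right, and two undisclosed, gets (1 + 0.5 * 2) / 4 = 0.5 when theta is 0.5.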
The first two terms are the same as in the social climbing game; they capture the centrality, or we could say popularity, of the agent. The last term is the social norm, and we have a gamma that controls its weight. If gamma is zero, the agent doesn't care about the social norm at all; if gamma is extremely large, the agent cares a great deal about the social norm relative to popularity. The dynamics are exactly the same as in the social climbing game, which means that in our model the attitudes, that is, the ideology and the strategy of whether to disclose or not, are fixed during the dynamics; they do not change. Thank you, Yuchun. I hope the model features are clear, but if you have any question, please feel free to interrupt. I'll now show you some of our results and a comparison with what the social climbing game produces. As you can see on the right side, if we assume that people do not care about ideology and only care about popularity, which means gamma is zero, then we do get a hierarchy in the society, but we do not see agents with left ideology connecting only to left-leaning agents. There is no clustering: it is a well-mixed network, with hierarchy because people care about popularity and try to climb it, which is exactly what we get from the social climbing game. But if we increase gamma, people care about the ideology part as well. For example, in a social media context with a political issue, people care about what they are publicly advocating for and what their friends or connections are publicly advocating for. When people care about ideology as well, we see that the society evolves into a segregated one, with clusters.
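Putting the pieces together, the extended utility can be sketched as below. This is an illustrative reconstruction under the same assumptions as before (degree-based centrality terms, population-rate treatment of undisclosed neighbors); setting gamma to zero recovers the baseline social climbing utility.

```python
def extended_utility(i, adj, ideology, discloses, theta, gamma):
    """Utility of agent i in the extended model (a sketch): the two
    centrality terms of the social climbing game plus gamma times the
    perceived social norm. Names and the exact functional form are
    assumptions based on the talk's verbal description."""
    k = len(adj[i])
    popularity = k + sum(len(adj[j]) for j in adj[i])
    if k == 0:
        return float(popularity)
    base = theta if ideology[i] == -1 else 1.0 - theta
    same = sum(1 for j in adj[i]
               if discloses[j] and ideology[j] == ideology[i])
    hidden = sum(1 for j in adj[i] if not discloses[j])
    norm = (same + base * hidden) / k
    return popularity + gamma * norm
```

On a triangle where agent 0 is left leaning with one disclosed left neighbor and one disclosed right neighbor, the popularity term is 6 and the norm is 0.5, so gamma directly scales how much that 0.5 matters.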
As you can see, the red nodes are people with right ideology and the blue nodes are people with left ideology, while the colorless nodes are people who do not publicly reveal their ideology. Here we can see that people who reveal publicly tend to connect with people of similar ideology, or at least with people who advocate for it publicly, because you can only see what others publicly advocate. So the network ends up with clusters that follow the ideological split. Here we consider the extreme cases. If we take gamma equal to zero, we are basically back to the social climbing game, and indeed we see similar results, which serves as a verification that at least the code and the model are correct. But as we increase gamma, that is, when people start caring about the ideology part, and care about it more relative to popularity, the maximum degree actually decreases. That means the most popular agent in the social network no longer connects to everyone. In the baseline, the maximum degree ratio converges close to 0.9, meaning there is a central node connected to most of the population; here the hierarchy is reduced. Also, the higher gamma is, the lower the curve: at the point where gamma is close to 300, the highest degree ratio is close to 0.5, that is, half of the agents, while for gamma equal to zero it converges close to 0.9. Essentially this reflects that if we have clusters and people connect only with people of the same ideology, we lose half of the population, which is why even the most central, most popular person is connected to only half of the population.
That was one comparison. If we look at how the maximum degree depends on beta and gamma, where beta is the intensity of the effort with which agents climb the hierarchy and change their social connections, and gamma is the relative importance of ideology or the social norm, we see that the highest degree ratio reaches about 0.9 only when gamma is close to 0. That is the region where we see the same sort of hierarchy in the society. But as gamma increases, the maximum degree decreases even for high beta. To see it more clearly, here is the graph: the black curve is the highest beta, 0.09. When gamma was zero, the central node had close to 0.9 of all connections, but this decreases as gamma increases, and this happens for almost all betas; it converges to 0.5. This shows that irrespective of how big a system or network we pick, if we increase gamma, it will segregate into clusters, and even the most popular agent will be connected to only half of the population, which means there will be clusters. Another interesting thing concerns the people who do not reveal. Until now we were talking about how the overall society forms, but what happens to people who do not reveal their ideology publicly, versus the people who publicly advocate for it? This graph shows that for people who do not publicly disclose their ideology, popularity decreases significantly. When gamma was zero, in the social climbing game, you would have high centrality irrespective of whether or not you publicly disclosed your ideology.
But as people start caring about each other's ideology, the popularity of people who do not disclose their personal ideology decreases. We can interpret it the other way: people who try to take a neutral stance on a political issue tend to lose popularity in the social network, which we do not see for people who publicly reveal their ideology. Essentially the overall maximum-degree curve comes from those who publicly reveal, because for people who do not reveal, the maximum degree ratio is very low, close to 0.1. Now we want to see what happens to the social norm, and what other measures we can use to understand what happens to the society when the network is too large to inspect visually. Here we can see that when gamma is zero, the overall social norm is 0.5, so we have a balanced society, because we started from a balanced, unbiased society. For these results we took theta and lambda to be 0.5, precisely to make sure there is no bias. So even after the evolution, when agents climb the hierarchy and end up in a very hierarchical society, the overall social norm is still 0.5. But as people start caring about each other's ideology, the social norm increases, and the higher beta is, the higher the rate of increase. Over time the social norm converges towards one, which means that irrespective of how big a system we pick, people tend to connect with people of similar ideology. The society segregates into clusters, and that is why everyone's social norm increases to close to 100%; the overall social norm actually increases for everyone. So now, until this point, as Yuchun mentioned, we did not change the dynamics of the social climbing game.
That means the strategy, whether you disclose your personal ideology publicly or not, is static: it is assumed from the beginning and randomly assigned. But what if we make the strategy dynamic? That means over time you can choose whether to disclose or not. If yesterday I was disclosing my personal ideology publicly, or advocating for a political issue, tomorrow I can choose not to disclose. In real life, if all our friends advocate for the opposite ideology, if all the connections around us advocate for the opposite ideology, we tend to become sad because of conflicts or social pressure. On the contrary, if all our friends advocate for the same ideology as us, we tend to become more comfortable publicly advocating for our ideology, or publicly sharing on social media. Inspired by these real-life situations, we changed the dynamics in the following way. When we pick agent i, its neighbor agent j, and j's neighbor agent l, we also check agent i's social norm. If the social norm of agent i is zero, meaning the friends or connections of agent i all advocate for the opposite ideology, then agent i chooses to become silent: if its disclosure flag was one, it becomes zero. On the other hand, if the social norm of agent i is one, meaning all my friends are advocating for the same ideology, then if I was silent yesterday, I will start advocating, because I am more comfortable. When we make the strategy dynamic in this way, here is what happens. On the left-hand side, when the strategy was static, we saw that two clusters emerge over time. But when we make the strategy dynamic in the way I described, what we see is the dominance of one political side.
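The dynamic disclosure rule just described can be sketched as below. The flip conditions (go silent at norm zero, start disclosing at norm one) are stated in the talk; the embedded norm computation and all names are illustrative assumptions.

```python
def update_disclosure(i, adj, ideology, discloses, theta):
    """Dynamic strategy rule from the talk: if agent i's social norm is 0
    (all informative neighbors oppose i's ideology), i becomes silent;
    if it is 1 (unanimous support), i starts disclosing. Otherwise the
    strategy is left unchanged."""
    k = len(adj[i])
    if k == 0:
        return
    base = theta if ideology[i] == -1 else 1.0 - theta
    same = sum(1 for j in adj[i]
               if discloses[j] and ideology[j] == ideology[i])
    hidden = sum(1 for j in adj[i] if not discloses[j])
    norm = (same + base * hidden) / k
    if norm == 0.0:
        discloses[i] = False
    elif norm == 1.0:
        discloses[i] = True
```

Note that with 0 < theta < 1, an undisclosed neighbor contributes a value strictly between zero and one, so the flips trigger only when every neighbor has disclosed and is unanimous, matching the all-or-nothing framing in the talk.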
So if I choose to disclose or not, to publicly advocate or not, based on my connections or my friends, then over time the society converges to the dominance of one political side. It can be right or left, depending on the dynamics and the initial condition, but it will end up with the dominance of one. Essentially the red nodes, the people who were advocating for the right-side ideology, become silent and lose their color on the right. And the blue ones, the people with left ideology who were not disclosing publicly, start choosing to disclose because their social norm increases. That is why almost all the people with left ideology start advocating publicly: there is a dominance of one side. Just to summarize: in this way we provide a new potential model to analytically understand the formation of information bubbles, the segregation of society into clusters, and the dominance of one political side. We see that the simple desire of individuals to connect with people of similar ideology may segregate the society into clusters, and that leads to information bubbles. The relative importance we give to comfort with our social norm, compared with popularity or influence in society, moderates the intensity of the information bubbles as well as the social hierarchies within them. And the possible extension showed that not revealing our ideology publicly may actually reduce our social influence: as we saw in the graph, the popularity of people who do not publicly reveal their ideology decreases if the society cares about ideology, so they actually lose social influence.
The last point is that if we change our decision to publicly advocate or not depending on whether our friends advocate the same ideology, that may lead to the dominance of one political side. Now, recommendations for future research. First, we encourage adding more psychological foundations to the assumptions about individuals' ideologies and strategies. Second, because our current model only tests the role of homophily, we encourage testing the role of social influence, whereby people form and adjust their ideologies according to social norms, which means that people's ideologies may change during the dynamics. Third, we recommend exploring the dynamic strategy of revealing ideology for a more comprehensive and deeper understanding. And last, we encourage more empirical work to test our model. That's all. Thank you.

Excellent. Thank you, Yuchun and Samyak, for this interesting presentation. We have time for questions. Anybody? I think Tuan has a hand up. Go ahead, Tuan.

Yes, I have a question. Thank you for the talk, it's very interesting. For the future directions, do you think you could incorporate different types of interactions? If I understand correctly, in the model you consider only positive interactions between agents, right? What could happen if you include, for example, negative interactions, in the sense of a signed social network in which you have positive and negative links: positive links for friendship and negative links for enmity or dislike?

Yeah, that's actually a very interesting recommendation, and it relates to another idea we were considering: what if we look not only at political ideology but also at other factors, such as gender, social class, citizenship. So yes, I think that would be very interesting.
And it would be worth exploring what happens if there are multiple factors that people care about, and how those factors interact. It's something we have not explored here, but it's something we would definitely like to know.

Yes, thank you. Because I think one of the key ingredients in your model is homophily, and homophily only accounts for people who are similar, right? If people are much more dissimilar from each other, there should also be a way to account for their possibly negative interactions. Okay, thank you.

Thank you for the question. Charles also raised his hand. Charles, go ahead.

I think Samyak addressed this to some extent already. My question was: you have one dimension, ideology, around which people are potentially divided; would it make sense to introduce other dimensions? Depending on the relative strength of these dimensions compared with ideology, they could moderate the clustering effect of ideology alone, because they add further division lines along which people can differ, so maybe that would lead to less division along ideology.

Sure. I think the problem we face when we introduce other factors, that is, a multi-factor utility, is how to measure the changes happening over time. The network is too big, so we cannot interpret it visually. The difficult part is basically how to measure the differences emerging in the social network. If you can find a good measure that actually captures those dynamics, I think that could be a good extension. Thanks.

Thank you. More questions? I had a question, actually. I think this is a very interesting talk; it deals with very topical issues. I'm just trying to think in terms of policy here.
Generally speaking, people would think that bubble formation is a bad thing, because it reduces the diversity of allowable opinions. On the other hand, I'm not 100% sure that's true, because going too far the other way can lead to bad outcomes, like allowing climate change deniers to have an equal footing in the debate, so we have to be a little bit careful. But I was wondering: are there any types of network structure, or any types of interaction mechanism, that you can think of that either prohibit or reduce the amount of bubble formation that can happen in a network?

Yeah. One thing I can think of right now is this. We know that algorithms on social media platforms essentially fast-track these dynamics. What might help avoid going to one extreme is to incorporate into those algorithms a mechanism that sometimes shows people information from the other side of the coin, from a different ideology, instead of always optimizing the recommendations based on their existing interests. If you add that part, so that I always get some information from the opposite ideology or the opposite position on an issue, I think that could reduce the severity of the bubble. But that's something we need to think about more.

Good. Any other questions?

Yeah, I had a question. Maybe it was the third-to-last slide, where you talk about the dynamic behavior of people and one particular side becoming dominant. What are the factors that influence which side will rise in the end? Because we begin with a random distribution, so what decides that blue will eventually come out as dominant?

Yeah, thank you for your question. To be honest, one initial condition is theta, which defines what fraction of the population holds which side of the ideology.
If one side has a larger proportion of the population, that could be one driving force. But here in our simulations we took 0.5, so that's not why we get this result. If we run the simulation again and again, we sometimes get red and sometimes blue. So it essentially depends on the simulation dynamics: which agent is picked at which time, and then which neighbor of that agent. But I think that's not too unrealistic; it shows that we cannot always predict which side will dominate if we start from an unbiased society, because humans interact in a very unpredictable way. So it can end up with a dominance of either right or left, and that's something we cannot predict, at least with this existing model. What we can be certain about is that there will be a dominance of one side.

Okay, one more question, maybe. If not, no more questions? Okay, so let us thank Samyak and Yuchun for this fantastic talk. Thank you very much, guys. And actually, I'd like to apologize, because I forgot to thank the previous speakers. So now a round of applause for the presenters of the six projects today. Thank you guys for this very fantastic work. Thank you. Very good. So let me stop there.