Greetings, and thank you for attending this month's Science Seminar presented by the NSF's National Ecological Observatory Network (NEON), which is operated by Battelle. Our goal with this monthly series of talks is to build community among researchers at the intersection of ecology, environmental science, and NEON. We're very excited to have Kate Landgren here to present today. Before we turn it over to the speaker, a few logistics. We have enabled optional automated closed captioning for today's talk. If you would like it, please find the CC button in your Zoom menu bar. This webinar will consist of a presentation followed by a question and answer session. As you think of questions, please add them to the Q&A box. We also have a meeting chat, which you should feel free to use to share links or other items of interest with the group, but please add your speaker questions to the Q&A. We will facilitate discussion at the end, after the presentation, and there will also be an opportunity to ask questions over audio if you prefer. NEON welcomes contributions from everyone who shares our values of unity, creativity, collaboration, excellence, and appreciation. This applies to NEON staff as well as anyone participating in a NEON event. The full code of conduct is available via a link that I will share in the chat in a moment, and it is also available at the bottom of our NEON Science Seminars webpage. I will just show you: here's our list of talks, and then there's the code of conduct. I will also drop the link to the Science Seminars webpage in the chat in a moment. This talk will be recorded and made available for later viewing on the NEON Science Seminars webpage. To complement our monthly Science Seminars, we host related data skills webinars on how to access and use NEON data. Registration for those is available on the same Science Seminars webpage.
We do not have any data skills webinars scheduled for the remainder of this year, but we did have five great webinars over the last few months. All of them are recorded, and we strongly encourage you to go check those out, and do check back soon, because next year we will be having more data skills webinars. And then lastly, if you have ideas for a talk for this seminar series, nominate yourself or a colleague today by filling in the form on our Science Seminars webpage, here near the top. We would love to see what you're up to and have you present to us. So now I'm going to turn it over to Bobby Hensley to introduce today's speaker. Hi. Ekaterina "Kate" Landgren is a postdoctoral visiting fellow at the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado Boulder. Prior to joining CIRES, she received a PhD in applied mathematics from Cornell University. She uses dynamical systems models to study a wide range of phenomena, from voter turnout to planets beyond our solar system. Currently, she is bringing these interests together by investigating social aspects of climate change. This work combines two fundamental questions in her research: How do we move society toward large-scale solutions to the climate crisis? And how does one's position within a system influence their view of it? She is passionate about complex systems, open science, and interdisciplinary research. And so with that, I will turn it over to Kate. Thanks so much for the introduction. I'm really excited to be here today. So I'm going to share my screen in a moment. Going back here and share. Can you see the slideshow? Yes, that looks great. Great. OK. So today, I'll be talking about modeling misperception of public support for climate policy. This is joint work with my collaborators: Jeremiah Osborne-Gowey and Matt Burgess at CU Boulder, and Joshua Garland at the University of Arizona.
So first, let's talk about why we might even talk about misperception of public support for climate policy. What is really behind it? I'm going to begin with a rare piece of good news on climate change, which is that overall support for climate policy in the US is actually quite high. This is a map from the Yale Climate Opinion Maps. There's a lot going on, but I will highlight the most important things here. Depending on the policy, overall support is around 66% or higher. So this is the bottom bar over here: for requiring fossil fuel companies to pay a carbon tax, 66% of Americans support this policy. And you can go and play around with the maps on the website, but most of them look kind of like this. They look mostly orange, which means that even in pretty conservative districts, the majority of Americans support most climate policies. There is some variation (taxes tend to be less popular than things that are not taxes), but overall, you get widespread support for climate policy. However, misperception levels are also really high. Here are two papers. The first one, by Sparkman et al., came out in 2021, and it has this really striking title saying that it's a false social reality: we underestimate popular climate policy support by nearly half. And recently, just in January, a paper came out suggesting that this might actually be a global phenomenon that extends beyond the US. So if we look at support levels for climate policy, when you ask people, do you support a particular climate policy, you get responses in the 66% to 80% range of actual support. However, if you instead ask people, what percent of Americans do you think support climate policy, you're going to get much lower estimates, on average in the 37% to 43% range. So that's a pretty significant gap. And this is a figure from that false social reality paper by Sparkman et al.
Again, there is a lot going on, but what I want you to take away is that this entire map is red, meaning that in every state there is underestimation of public support for climate policy. There isn't anywhere that is overestimating. The numbers differ, but overall this misperception level is above 20%. So this is a phenomenon that is robust across different states. Naturally, looking at this, you might ask: what gives rise to such widespread misperception? To tackle this, we're going to borrow two terms from the field of social psychology: false consensus and pluralistic ignorance. The false consensus phenomenon describes a tendency to assume that one's own opinions, beliefs, or attributes are more widely shared than is actually the case. Or, more simply, minority opinion holders think that they're in the majority. For example, if I love pistachio ice cream and I think that most people prefer pistachio ice cream to every other kind of ice cream, that would be false consensus. Pluralistic ignorance is the state of affairs in which virtually every member of a group privately disagrees with what are considered to be the prevailing attitudes and beliefs of the group as a whole. Or, more simply put, majority opinion holders think that they're in the minority. The paradigm example of pluralistic ignorance is binge drinking on college campuses. College students would report that they believed binge drinking was the most widespread behavior and attitude on campus, while privately thinking, well, I don't actually like it, but everybody else does. And what I want to highlight here is that both of these phenomena lead to underestimation of majority opinion.
So both minority opinion holders thinking that they are in the majority and majority opinion holders thinking that they are in the minority would lead to the kind of underestimation that we're hoping to eventually capture with our model. And again, for most climate policies, the majority are supporters and the minority are opponents. Before we dive into modeling, let's talk a little bit about the survey data from the Sparkman et al. paper. In the survey, people were asked to estimate the number of Americans who are worried about climate change, who support a carbon tax for fossil fuel companies, who support siting renewable energy on public land, who support a transition to 100% renewable energy by 2035, or who support the Green New Deal. So there was one issue and four policies. And we can look at the structure of the responses, the structure of these estimates. Here on the x-axis are the guesses, the estimates that people gave in response to this question, and on the y-axis is the number of respondents. So we get a simple distribution of these estimates, and this is what the distribution looks like. At 66% I've highlighted the actual mean, and at 45% the estimated mean. The estimated mean is the center of mass of the histogram. Everybody to the right of the actual mean on this plot is overestimating, and everybody to the left is underestimating. I made the arrow red to match the underestimation from that map figure that we've seen. Something interesting that we observe here is that there are actually two peaks: there is this apparent bimodality. In most of these histograms, you tend to observe a peak at around 50% and another peak at around 20 to 30%. So that's also kind of intriguing. We can also compare being worried about climate change to other issues. Here on the left is being worried about climate change, and on the right is support for a carbon tax.
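As a small illustration of how these summary numbers relate to the raw guesses, here is a minimal sketch. The individual response values are made up for illustration; only the 66% actual-support figure comes from the survey described above.

```python
import numpy as np

# Hypothetical guesses: each entry is one respondent's estimate of the
# percent of Americans who support the policy. Illustrative values only.
guesses = np.array([20, 25, 30, 30, 35, 45, 50, 50, 55, 60, 70, 80])

actual_support = 66.0  # actual percent support reported in the survey

estimated_mean = guesses.mean()               # center of mass of the histogram
gap = actual_support - estimated_mean         # size of the misperception
share_underestimating = np.mean(guesses < actual_support)

print(f"estimated mean: {estimated_mean:.1f}%")
print(f"misperception gap: {gap:.1f} points")
print(f"share underestimating: {share_underestimating:.0%}")
```

With these toy numbers, the estimated mean lands near 45%, mirroring the gap in the actual survey histogram.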
So again, these are responses to the question: what percent of Americans do you think are worried about climate change, or support a carbon tax for fossil fuel companies? Comparing these, we see that the carbon tax histogram is a little bit more right-skewed, so there is more misperception, because the gap between the actual mean and the estimated mean is wider here. The estimated mean is somewhere around 35%. And we see that while there are still these two peaks, one at 20 to 30% and another at around 50%, the 20 to 30% peak is actually higher. So, using multiple metrics, there is more underestimation of support for a carbon tax compared to being worried about climate change. We can further break it down by demographics. If we split it into Democrats and Republicans, we see something that is perhaps not too surprising: the histogram for Republicans is even more right-skewed than the one for the carbon tax in general. Republicans tend to have more misperception, because the gap between the estimated mean and the actual mean is larger. Democrats tend to have pretty significant misperception, but less so. And for Republicans, this 50 to 60% peak is much lower compared to the overall carbon tax estimates or compared to the Democrats. But I'm about to show you something that is perhaps unexpected, which is Democrats who watch Fox News. If we look at Democrats who watch Fox News, here in the middle, we see that they have an even more skewed distribution of guesses than all Republicans. The 50% peak is even lower, and there is about the same amount of misperception as the Republicans, and much more misperception compared to all Democrats. This was Democrats who watch Fox News at least once a week. So that gives us a clue that people's media habits are going to be really important for disentangling the relationship between their support and their misperception of public support for climate policy.
And now we can look at media habits in a little bit more detail: the correlation between media consumption and underestimation. Here on the x-axis are different news outlets, arranged from left to right along the political spectrum. On the vertical axis, we have the same five issues that people were asked about: worried about climate, carbon tax, renewable energy on public land, transition to renewable energy, and the Green New Deal. Red will mean a positive correlation with underestimation, and blue will mean a negative correlation with underestimation. So here is the plot, and there are three specific patterns that I really want to highlight. The first one that usually jumps out is that once we look at Fox News and other conservative outlets, which here is everything to the right of Fox News, things like Breitbart News and so on, we suddenly see a jump in correlation. We see positive correlation with more underestimation of public support, and that is a pretty stark contrast as we transition from NPR to Fox News. Another thing we notice is that the absolute values tend to be higher for things like the Green New Deal and being worried about climate change. That also makes sense, because these things are more prominent and probably get highlighted more in the news compared to something like siting renewable energy on public land, which doesn't get talked about as much. We'll get back to that in the third part of the talk. And finally, something that I think is kind of interesting: if we look at just this top row and move from right to left, from the conservative outlets toward the left, we leave the red region and move into the blue region.
But then, once you go far enough left, to other liberal outlets like Mother Jones, you actually get back into the red region, meaning that there is a positive correlation between tuning into these outlets and underestimating the percent of Americans worried about climate change. So that's also quite interesting. From here, we can formulate some research questions. I want to know if widespread underestimation of public support can be explained using a social network model. If so, what features of a network model would replicate this behavior? What explains the apparent bimodality of these perception distributions? And to what degree does media consumption affect misperception of public opinion? So with that, we're going to dive deeper into the modeling. We're going to use a network model, and I'm interpreting the word network very generally. A network is just a set of nodes connected by edges, and networks can be used in different models to represent different entities. For example, they can be used to model power grids, where the nodes are buses and the edges are the wires and connections between different places. They can be used to model food webs, where the nodes are individual species and the edges represent the predator-prey relationships between species. They can be used to represent trade networks, where the nodes are people, companies, or countries and the edges represent economic relationships between them. And finally, perhaps most famously, networks can be used to represent social networks. The nodes will be individual people and the edges will be the social connections that those people make. And that is indeed the kind of network that we're going to deal with today: the nodes will represent individual people, and an edge will represent that those two people know each other's opinion about climate and climate policy.
Here we'll have some number of majority nodes and a smaller number of minority nodes, and the edges will represent their connections in this climate-related realm. I want to note that I'm interpreting this quite broadly. What I mean is that here I replaced a person with a megaphone to highlight that people have different kinds of platforms. It's not so much a discrete distinction as a spectrum, going from an individual person, to an individual person with a large platform, to an influencer, to somebody who is maybe the host of a media show, to maybe a media outlet. We're going to abstract away from these distinctions and interpret our model very broadly. So in this example network, we can ask each agent to make a guess of what percent of people support climate policy based on their own knowledge. This person would look at their nearest neighbors and the connections that they have and say, well, 60% of the people I know seem to support climate policy, so that's going to be my guess. And again, it's not necessarily your friends; it's the people whose opinion you know. We can ask everybody in our model to make the same kind of guess, and then we can aggregate their guesses. Here, the way I've constructed this example, the actual split is very close to the estimated split: if I average out the estimates, I get 71%, and the actual proportion is 70%. However, we can modify the network to get different results. Here I'm going to take these two nodes, and all I'm going to do is swap their positions. By swapping their positions, I can generate some amount of underestimation, or misperception, on my network: here we went from 71% to 55%. This shows, as a proof of concept, that we can manipulate network structure to generate different scenarios. The trick is to investigate this systematically, in a way that allows us to say something about the real world.
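The local-estimate idea described above can be sketched in a few lines: each agent guesses the level of support by looking only at the opinions of its network neighbors, and we then average those guesses. The toy network and opinions here are illustrative, not the ones from the slides.

```python
# True = supports climate policy; a tiny made-up network of five agents.
opinions = {"A": True, "B": True, "C": True, "D": False, "E": False}
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
         ("C", "E"), ("D", "E")]

# Build an adjacency list from the edge list
neighbors = {node: set() for node in opinions}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

def local_estimate(node):
    """Percent of this node's neighbors who are supporters."""
    nbrs = neighbors[node]
    return 100 * sum(opinions[n] for n in nbrs) / len(nbrs)

estimates = {node: local_estimate(node) for node in opinions}
mean_estimate = sum(estimates.values()) / len(estimates)
actual = 100 * sum(opinions.values()) / len(opinions)

print(f"actual support: {actual:.0f}%")   # 60%
print(f"mean estimate:  {mean_estimate:.0f}%")
```

In this particular toy graph the aggregate guess overshoots the actual 60%; as the talk notes, which direction the error goes depends entirely on where supporters and opponents sit in the network.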
So how can we simulate a social network with a majority-minority split? Well, one property of real-life social networks is that they have what is called a heavy-tailed degree distribution. Here are the plots of degree distributions for YouTube and Twitter. Degree is the number of connections that somebody has. The upshot of both of these plots is that there are many, many people on these networks who don't have a lot of connections, and there are fewer and fewer people who have a lot of connections. If you want to select a user with a hundred followers, they're going to be much less frequent than users with zero or one followers, and so on. So there are a few people with a lot of connections, but the vast majority of users don't have that many connections. The classic way to generate these networks is the Barabási-Albert algorithm, developed in 1999. But in 2018, Karimi et al. updated this algorithm to generate a majority-minority split in a systematic way that lets us look at different possible network configurations. I'm going to go through the classical variant and then through the Karimi et al. variant and how it is constructed. In the classical variant, you grow your network organically, in order to end up with something that looks like real-life social networks. You start with some number of nodes, and then at each time step, you add a new node and connect it to M existing nodes. So for example, this person arrived here and made this new connection. And then there's something called preferential attachment, which means that the probability of connecting to an existing node depends on the degree of that node. Imagine you're showing up to a party and you see that there is a lively conversation going on versus maybe some people standing awkwardly in the corner.
Preferential attachment captures this feeling that you should join the lively conversation: you are more likely to connect to those who already have a lot of connections. And so this person chooses to connect to someone who already has some connections, rather than to the person who doesn't have any connections yet. Now we want to incorporate the majority-minority structure into this, so we're going to modify the algorithm a little bit. The nodes are assigned the minority or majority property according to a predetermined fraction; for most climate policies, it will be 66% majority. You start with some number of nodes and grow your network: at each time step, you add a new node and connect it to M existing nodes. However, the probability of connecting depends not only on how many connections that node already has, but also on something called homophily, which is whether or not we share an opinion. If I show up to a party and there is a lively conversation, but it is about how much people hate pistachio ice cream, all of a sudden I'm less likely to want to join that conversation in particular, and I'm going to try to seek out people who agree with me. So we make this parameter H, a scalar between zero and one, where for in-group connections H signifies the propensity to seek out those we agree with, and for out-group connections the homophily is one minus H, the corresponding propensity to avoid those we do not agree with. With this scalar parameter, we can now systematically investigate how homophily on these social networks influences misperception. So first, let's just look at a few examples of preferential attachment networks with homophily. Here on the left, we have a maximally heterophilic network, where people only want to connect to those who disagree with them. The size of a node is proportional to its degree, the number of connections it has.
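The growth rule just described can be sketched roughly as follows. This is a simplified sketch in the spirit of the Karimi et al. homophilic preferential-attachment model, not their published implementation: attachment weights combine degree with the homophily factor H for in-group pairs and 1 - H for out-group pairs, and the seed configuration and tie-breaking choices here are assumptions.

```python
import random

def homophilic_ba(n, m, minority_frac, h, seed=0):
    """Grow a network of n nodes; each new node attaches to m existing
    nodes with probability proportional to (degree + 1) times the
    homophily weight (h for same-group, 1 - h for cross-group)."""
    rng = random.Random(seed)
    group = [rng.random() < minority_frac for _ in range(n)]  # True = minority
    edges = []
    degree = [0] * n
    # Seed: a small fully connected clique of m + 1 nodes
    for i in range(m + 1):
        for j in range(i):
            edges.append((i, j))
            degree[i] += 1
            degree[j] += 1
    # Growth with homophily-weighted preferential attachment
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            weights = []
            for v in range(new):
                hom = h if group[v] == group[new] else 1 - h
                weights.append((degree[v] + 1) * hom)
            total = sum(weights)
            if total == 0:  # e.g. h = 1 with no same-group node yet
                weights = [degree[v] + 1 for v in range(new)]
                total = sum(weights)
            r = rng.random() * total
            acc = 0.0
            for v, w in enumerate(weights):
                acc += w
                if r <= acc:
                    targets.add(v)
                    break
        for v in targets:
            edges.append((new, v))
            degree[new] += 1
            degree[v] += 1
    return group, edges

# Example: 66/34 majority-minority split, fairly homophilic network
group, edges = homophilic_ba(n=100, m=3, minority_frac=0.34, h=0.8)
print(len(group), len(edges))
```

Setting h near 0 reproduces the maximally heterophilic extreme, h near 1 the echo chamber extreme, and h = 0.5 recovers a plain Barabási-Albert network with opinions assigned at random.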
And so here minority nodes are actually more popular, because majority nodes want to connect to minority nodes. On the other extreme end, we have what you could call an extreme echo chamber scenario: the maximally homophilic network actually separates into two distinct components, because the majority nodes only want to hang out with each other, and likewise the minority nodes only want to hang out with each other. Here the majority and minority components just do not talk to each other in any way. And in the middle, we have homophily equal to 0.5, which corresponds to just randomly assigning opinions on a standard Barabási-Albert network, without any interaction between opinion and structure. We can make the same kinds of plots that we made for the survey data, in order to see whether we get pluralistic ignorance and false consensus in this model. So again, on the x-axis we have the estimated support percent, and on the y-axis we have the number of agents, and I've colored the contributions from minority and majority nodes accordingly. Here we have a completely heterophilic network, where people only connect to those they disagree with. All of the minority agents think that they are in the minority: they estimate the proportion of majority nodes to be around 100%. But it's different for the majority nodes, which only connected to minority nodes, so they think that everybody on this network is actually blue. And so we have pluralistic ignorance, because the majority nodes think that they're in the minority, but we do not have false consensus. Compare that to a completely homophilic network, where we have two disconnected components. Everybody, regardless of whether they are in the majority or in the minority, thinks that everyone agrees with them. And so here, what we see is that it averages out to being pretty close to the actual mean. We don't even have widespread misperception over here.
And we do have false consensus, because the minority nodes think that everybody agrees with them, but we don't have pluralistic ignorance. So these extremes don't let us capture the kind of misperception that we see in our survey data. And if we look in the middle here, for homophily equal to 0.5, we don't have widespread misperception, and we don't really have any distinction between being a minority node or a majority node. So that also doesn't give us the kind of misperception that we want to see. Echo chamber effects alone don't appear to be sufficient to generate the kind of misperception that we see in the data. So from here, where do we go next? Well, we can look at our results and pay more granular attention to what is going on. False consensus is achieved through high homophily: minority nodes need to have enough connections to other minority nodes in order to generate false consensus. But pluralistic ignorance is achieved through centrality of minority nodes, meaning minority nodes need to have enough connections to be visible enough to the majority nodes that the majority nodes can be tricked. Well, we probably don't live in a world where we seek out those we disagree with on purpose, but we might live in a world where minority opinions on climate specifically are overrepresented among central nodes. So next we will explore what happens when minority nodes are overrepresented among central nodes. I'm going to propose a fairly ad hoc mechanism to achieve this: we're going to naively swap the highest-degree majority node and the lowest-degree minority node. I call it the prince and pauper strategy. We're going to take the least connected minority node and promote it into a position of great influence on our network. This will help us explore the effects that we're interested in. It's a little bit ad hoc, but sometimes it does happen in real life.
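The prince and pauper swap described above is simple enough to sketch directly: exchange the group labels of the highest-degree majority node and the lowest-degree minority node, promoting a minority opinion into a highly connected position. The list-based representation here is an illustrative assumption.

```python
def prince_and_pauper_swap(group, degree):
    """group[v] is True for minority nodes; degree[v] is node v's degree.
    Swaps the labels of the best-connected majority node (the prince)
    and the least-connected minority node (the pauper), in place.
    Returns the (prince, pauper) pair of node indices."""
    majority = [v for v in range(len(group)) if not group[v]]
    minority = [v for v in range(len(group)) if group[v]]
    prince = max(majority, key=lambda v: degree[v])  # best-connected majority
    pauper = min(minority, key=lambda v: degree[v])  # least-connected minority
    group[prince], group[pauper] = group[pauper], group[prince]
    return prince, pauper

# Toy example: node 2 is the hub, node 4 is a low-degree minority node
group = [False, False, False, True, True]
degree = [2, 3, 7, 2, 1]
print(prince_and_pauper_swap(group, degree))  # (2, 4)
print(group)  # [False, False, True, True, False]
```

Repeating this swap k times and recomputing every agent's local estimate gives the "mean estimate versus number of swaps" curves discussed next.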
For example, think about the 2016 election: before then, nobody really had to care about Trump's opinion on climate policy, but overnight he was promoted into a very influential position on this issue specifically. Here we're going to look at a different kind of plot that we haven't seen yet. On the x-axis is the number of swaps, the number of minority nodes that are promoted into having a lot of connections. On the y-axis, we have the mean opinion estimate. Again, the actual split is 66 to 34%, so here at 66% is the actual percentage of supporters of climate policy. The different shades of grayscale indicate homophily: the pale curves are heterophilic networks and the darker curves are homophilic networks. So let's see what happens as we start making these swaps. The homophilic networks start out with guesses pretty close to the actual percentage, and the heterophilic networks start out with much lower guesses, but both of them go down and eventually asymptote as we make more swaps. That is not necessarily surprising: we're intervening in our network, and we see that we can in fact promote minority nodes into positions of high visibility, which affects the opinion estimate. But what is interesting is that the empirical estimate range, the 37 to 43%, is right here. Even for really homophilic networks, it doesn't take that many swaps to get us into the empirical estimate range that we see in the survey data. So yes, this intervention is maybe not inherently surprising, but it only takes five to ten swaps on a network of size 100 to get us into the ranges that we see in real life. Similarly, we can now make the same kind of histogram plot that we've seen before. This is for a larger network. Here we have a few of the things that we want to see and a couple of things that we don't want to see.
So we have widespread misperception: the difference between the actual mean and the estimated mean is quite pronounced. We have a 50% peak; our 20 to 30% peak has shifted over to 30 to 40%, but we'll take it. But now we also get these spurious extra peaks, and we might wonder what that's about. Well, that's about this parameter M, which is how many nodes a new node connects to as you grow the network, so it controls network density. Here, when we make just two connections, there are a lot of nodes that take really extreme opinion estimates, zero or 100. However, if we increase M, we actually get rid of those spurious peaks, and now we have everything that we wanted to see. Furthermore, we see that qualitatively we're capturing a lot of the things that we see in the survey data. We have widespread misperception, because there's a gap between the actual mean and the estimated mean. We see this apparent bimodality, and we are no longer seeing those spurious peaks. Unlike with the survey data, we can also break it down into majority nodes and minority nodes, and what we see is that we have both pluralistic ignorance and false consensus, which we couldn't achieve without this intervention, this artificial promotion of minority nodes. So to recap: homophily can give us either pluralistic ignorance or false consensus, but not both. However, once we add oversampling of minority opinion among central nodes, we can get both pluralistic ignorance and false consensus, just like in the survey data. And so from here you might say, okay, we have this model that does something qualitatively matching the survey data, but do we actually live in a world where a minority opinion is overrepresented among the best-connected nodes? The next part is diving deeper into this. This part of the project is ongoing right now, and we're looking into media coverage of climate policy. Some aspects of media coverage are really well studied.
For example, there is a paper from 2004 called Balance as Bias, which documented that US prestige press coverage of global warming contributed to a significant divergence of popular discourse from scientific discourse. And this is actually the second piece of good news on climate change that I'm going to leave you with today, which is that scientifically accurate coverage of climate change is improving over time. McAllister et al. was a follow-up paper from the same group, Boykoff's group, where they looked at a larger data set covering more countries and more years, and it seems like that phenomenon has been declining over time. We no longer see that every time a climate expert is brought on, a climate skeptic must also be brought on. To talk about this, we're going to introduce the phrase false balance, which is disproportionate representation of minority opinion. The same group, Max Boykoff's group, runs the Media and Climate Change Observatory here at CU. What they do is monitor the number of mentions of the words climate change or global warming on television and in the press. Here is US television coverage: time series data going back all the way to the year 2000, which is pretty amazing. I want to point out a couple of things. First, it's an interesting data set to just stare at. For example, here in June 2020, you see that coverage went down to basically zero for everybody, because the dominant news story at the time was the Black Lives Matter protests. So climate change was not getting any coverage, and you can see patterns like this throughout the time series. Another thing is that here on the screenshot I've highlighted February 2024, and Fox News is in second place, behind only CNN, by the number of mentions of climate change and global warming. So that is a clue that Fox News talks about climate a lot, or a lot more than you might expect. However, the coverage of climate policy has not been explored in the literature.
And you can easily find examples both for and against climate policy. Here is a quote saying we should be in an all-out effort to move to renewable energy and to save energy so that we don't have to use as much of it. But of course you can also find quotes that contradict that. For example, here is a quote that casts the Green New Deal as just a ploy to take advantage of people so that donors get paid, donors like ExxonMobil that usually donate to Republicans. So again, we wanted to investigate this more systematically, to be able to draw conclusions about how climate policy is covered in the media. What we're doing is annotating news transcripts. We have a corpus of TV news transcripts that mention the words climate change or global warming, and for each of these transcripts, with our team of human annotators, we are answering three questions. Does the segment acknowledge or deny that anthropogenic climate change is happening? Does the segment express climate concern or opposition to climate concern? Does the segment support or oppose climate policy? I'm going to present preliminary results; we're more than halfway through the data set, so I don't have any reason to believe that the conclusions will change, but these are technically still under construction. Here we have the science of climate change, attitude toward climate concern, and attitude toward climate policy. The bars correspond to acknowledges, neutral, denies, and debate; concerned, neutral, opposition, and debate; and supports, neutral, opposes, and debate. I've highlighted the majority opinion in yellow and the minority opinion in blue on each one. What we see is that there is very little explicit denial of the science of climate change, consistent with McAllister et al.'s results. When we look at attitude toward climate concern, there's a little more opposition than actual denial, but overall we get more concerned or neutral statements.
And then for attitude toward climate policy, there's again more opposition than toward either climate concern or the science, and we get more debate about climate policy than about those other aspects, but overwhelmingly we get support for climate policy. Except I've hidden something on this rightmost plot, which is this giant bar that corresponds to the transcripts that do not mention climate policy at all. In our preliminary data set, 88% of transcripts do not mention policy solutions at all, and that really dwarfs the supports and opposes bars for attitude toward climate policy. That was kind of surprising for us to see, and somewhat worrying: a lot of transcripts would say things like, oh, with climate change you should expect mosquitoes to get worse, and then they would move on to another topic and not really discuss policy. Another thing that we can do is split it by outlet. So here the four different plots correspond to different news outlets, and what I wanted to highlight is that even without the titles for the subplots, you can pick out that one of the outlets really stands out. Here in the bottom left, we see there's explicit denial, way less acknowledgement, mostly neutral. If we shift toward attitude toward climate concern, again we see that the news outlet in the bottom left is qualitatively different from the ones that surround it. And for attitude toward climate policy, again, we see no support and all opposition. So you can privately guess to yourself what that outlet might be, and I'm sure many of you have guessed already: that outlet is Fox News. It is really an outlier. There are seven total news outlets in the dataset; I put only four here on the slides, but this is a consistent pattern we see, that Fox News really diverges from the rest of the news outlets.
Another thing that we did is connect survey data on media habits to the news coverage of climate policy, to see if, weighting by views, we would see something closer to false balance, because so far it looks like there isn't really overrepresentation of minority opinion on climate policy. But even weighting by views, we find no false balance. The colors here are different because we had to group some media outlets together in order to tie this to the survey responses. But what we see is that even if we weight by the number of views that Fox News gets, on average we don't really get false balance. We see that there is more representation of support than opposition to climate policy. And that's kind of surprising, especially if we think about the Green New Deal or things that become really prominent or notorious. But once we look at the data and connect our annotations to the estimated views, we really see that the proportions seem to reflect what the general public's opinion is. So that leaves us with this kind of unsatisfying question: if it's not false balance in the news, then what mechanism could it be? Well, something I'm looking into right now is inspired by this paper about the complexity of pluralistic ignorance in Republican climate change policy support. What they describe is that some Republican supporters of climate policy engage in self-silencing, where there is a feedback loop: they perceive that their information environment has an unrepresentative share of climate policy opponents, and that makes them more likely to self-silence, which of course influences their information environment to represent even more opponents. By information environment, here we mean the whole ecosystem of news and social media and in-person interactions, just their overall information environment. And so I'm going to leave you with three takeaways.
If you only remember one thing from this whole talk, it is that most people underestimate public support for climate policy. There are really more people who care about climate policy than people who don't, and that's a thought that I find really comforting on a particularly sad day or a particularly warm day. The second thing is that echo chambers alone do not explain widespread misperception. Homophily effects, or echo chamber effects, by themselves do not really produce the kind of phenomena that we would expect to see from the data. And finally, an unrepresentative information environment can help us explain this phenomenon. We have not necessarily gotten at it through looking at news media, but at least Republican supporters of climate policy report that they do see this unrepresentative information environment. In my future work, I'm going to look into different aspects of this to see where this widespread misperception is rooted. And for future work, there are three directions that I'm really excited about. One is that we can use large language models and AI tools to annotate more transcripts. Naively, before the human annotators, I tested ChatGPT, and it was really bad at answering these questions. But now that we have a pretty large corpus of human-annotated transcripts, I hope that I can train a large language model to annotate more transcripts and maybe look at time series data of how climate policy is covered. Another direction is that we're going to explore opinion on climate policy on social networks. I am going to collaborate with Marylena Hoffman, who has an algorithm for assessing polarization in the context of social networks. Since you can't really use Twitter data anymore, we are going to use Reddit data and try to see which climate policies are more polarized. And finally, I'm going to investigate this potential self-silencing asymmetry: if majority nodes self-silence at higher rates than minority nodes, what does that do to the model?
So that is the modeling aspect of this. I want to thank my project collaborators. Jeremiah has been amazing to work with on the transcript annotation bit, Joshua Garland has helped a lot with the model development, and Matthew Burgess is my PI and a wonderful mentor. And I am so grateful for the annotation team. We have a few undergrads and one master's student, and I thought that I was saddling them with something tedious and repetitive, but they come back and say that they really like being able to see exactly how climate change denial gets put into the news, and they really like looking at the different arguments. We have weekly meetings to ensure that we are annotating transcripts consistently. These undergrads have been such a delight, and I'm really grateful for their contributions. I'm going to leave the takeaways up, but otherwise that's it for my talk. Thank you so much, everyone. Does anybody have any questions? You can either type them into the chat, or if you prefer to talk, raise your hand and I can unmute you. Can you hear me? Yep. Yep. Hi, I came in very late into the meeting, and I kind of jumped in when you were discussing the nodes and then the graphs with the statistics, and I was very, very confused. Could you just briefly go over what the nodes represent? I was having a hard time understanding how the nodes were connected to public misperception and your studies. Yeah, absolutely. And could you also go over echo chambers? That was one of the takeaways I didn't really understand as well. Yes, of course. Thank you for the question. So the premise here is that people form their opinion, and their estimate of what percent of Americans support or oppose climate policy, based on their information environment, which includes the people whose opinions they know, maybe from social media, maybe from in-person interactions.
And what we're doing with the modeling is leveraging the structure of real-life social networks in order to capture what that information environment might look like. A very robust property is that they tend to have the kind of degree distributions where most people do not have a lot of connections and a few people have a lot of connections. That's the kind of structure we want to replicate, and that is how we are simulating things. Each node on the network represents a person, each person has some connections to other people in their information environment, and then each person makes a guess. You can think of the really well-connected nodes as somewhere in between a regular person and a whole media outlet. If you look at, say, Tucker Carlson's Twitter account, it's still ostensibly one person, but it's somebody with a really large platform, somebody who has a lot of followers. So by modeling what happens with the really well-connected nodes on the network, we can see how changes to the information environment in real life connect to misperception overall. Does that help? Yes, it does, thank you. So we got it. Sorry, could you also go over echo chambers for me? Oh yeah, I'm sorry. Yes, one of the things that we wanted to test is this echo chamber effect. When we create these networks, we can have an opinion distribution, and we have a parameter, let me pull up something helpful, we have a parameter in our model that lets us tweak networks to have more or less of an echo chamber effect. Here on the right is sort of a maximum echo chamber, where all the minority nodes only talk to each other and all the majority nodes only talk to each other.
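For readers who want a concrete picture, the node-and-guess setup just described, a heavy-tailed degree distribution where each person estimates support from their own neighbors, can be sketched in a few lines. This is my own toy version, not the authors' actual model code, and all parameter values here are illustrative assumptions.

```python
# Toy sketch: preferential-attachment network + node-level guesses.
# Not the authors' model; all parameters are illustrative.
import random

random.seed(0)

def preferential_attachment(n, m):
    """Grow an undirected network where each new node attaches to up to m
    existing nodes, preferring high-degree ones, so most nodes end up
    with few connections while a few become hubs."""
    edges = {i: set() for i in range(n)}
    repeated = []                # nodes repeated in proportion to degree
    targets = list(range(m))     # seed nodes
    for new in range(m, n):
        for t in set(targets):
            edges[new].add(t)
            edges[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = random.sample(repeated, m)
    return edges

n = 500
net = preferential_attachment(n, 3)
# assume 60% of nodes support the policy (the true majority)
supports = {i: random.random() < 0.6 for i in range(n)}

# each node guesses the level of public support from its own neighbors
guesses = [sum(supports[j] for j in net[i]) / len(net[i]) for i in range(n)]
mean_guess = sum(guesses) / n
```

In this toy version, with opinions assigned independently of network position, the average guess lands near the true 60%, which is one way to see why heavy-tailed structure by itself need not produce systematic misperception.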
And as we decrease it, that relaxes, and you can get networks where you mostly talk to the people you agree with but also sometimes talk to people you disagree with. And then all the way here on the left, you only talk to the people you disagree with. So it creates a knob for our parameter sweep to investigate that. Everything between H = 0.5 and H = 1 is more on the echo chamber side. One of the things that we expected to see is that maybe this misperception arises because people don't talk to those they disagree with. But from the parameter sweep, what we can take away is that that's actually not a sufficient condition on this type of network, so something else must be happening. Thank you so much. Thanks for the question. So we got another question in the chat: how do you think issues with accurate perception by the public affect policy implementation at various levels of government? Yeah, so that is an interesting question. I'm not entirely sure, but I really believe that this is an important thing to understand. There are many levels of complexity at play here, but I believe that at some point it does eventually lead to change. With climate, we're looking at a kind of complexity where there are companies actively advocating against climate policy, but the majority of Americans support it. You can look at the history of something like tobacco products, where there were also a lot of tobacco companies advocating against banning smoking, but you can't really smoke in restaurants anymore. So I think that even though there is a lot of complexity and a lot of pushback, it is important to have this accurate perception. I'm not sure about the explicit mechanisms, but I think it really matters in the end. Yeah, I think there's a parallel with reproductive rights, where when reproductive rights go up for a referendum, people typically tend to vote in favor of reproductive rights.
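For readers who want to play with such a knob themselves, here is a toy version of a homophily parameter like the one described above. Again, this is my own illustrative sketch under simplified assumptions (fixed number of contacts per node, contacts drawn independently), not the actual model.

```python
# Toy homophily knob: with probability h a contact comes from a node's
# own opinion group. Illustrative sketch only, not the authors' model.
import random

random.seed(1)

def mean_perceived_minority(n, k, minority_frac, h):
    """Each of n nodes lists k contacts; with probability h a contact is
    drawn from the node's own opinion group (h = 1: full echo chamber,
    h = 0: only cross-group ties). Returns the population-average
    perceived minority share."""
    minority = set(random.sample(range(n), int(n * minority_frac)))
    group = {True: sorted(minority),
             False: [i for i in range(n) if i not in minority]}
    total = 0.0
    for i in range(n):
        mine = i in minority
        seen_minority = 0
        for _ in range(k):
            pool = group[mine] if random.random() < h else group[not mine]
            seen_minority += random.choice(pool) in minority
        total += seen_minority / k
    return total / n

# sweep the knob: average perceived minority share across the population
for h in (0.0, 0.5, 1.0):
    print(h, round(mean_perceived_minority(2000, 10, 0.3, h), 3))
```

In this toy version, cranking up h polarizes who sees what (minority nodes see mostly minority, majority nodes see mostly majority) without biasing the whole population in one direction, which is loosely the intuition behind homophily alone not being a sufficient condition.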
And I think perhaps vaccine uptake is another issue that is similar. So I don't have a very nuanced answer for you about the various governmental levels, but I think that climate isn't the only issue affected by this misperception, and I think correcting it would actually help us. Great, I think we have time for maybe one more question. I could ask one. So, Kath, that's interesting. You said you're an applied mathematician, and many of us on this call are probably natural scientists or physical scientists, but it's interesting that you're bridging over into the social sciences and working on a topic that's very politically or culturally relevant, maybe even charged. So I'm curious, and it's more of a personal question, but how did you decide as an applied mathematician that this is what you wanted to work on? And have there been interesting challenges for you working in this kind of system versus some of your other projects, like working in astrophysics, modeling planets or something? Do you want to speak to that? Yeah, thanks so much for the question. I went into applied mathematics because of its versatility. I always had a really hard time choosing, and so much of the crux of applied mathematics is saying things like, look, this biological system behaves a little bit like this physical system. And I think that looking at social systems through this lens is also really liberating. It is definitely challenging to be in complex social systems where you don't have Newton's laws to fall back on. All you have are these mathematical thought experiments where you can rigorously examine your assumptions, and that's something that I really like. I think a lot of social science is built on very rigorous theoretical arguments, but bringing the power of applied mathematics to it means saying, well, if we assume this self-silencing, what does this actually yield?
If we assume the echo chambers, what does this actually yield? That can help us examine things in a new way. I have not really encountered any pushback. I've loved my time at CIRES; it's been really wonderful. But I think I'm still figuring out where my spiritual home really is on the spectrum from applied math to the social sciences. Great, thank you. Samantha, do you want to wrap up, or how do we finish these? Sure, yeah, unless anyone wants to raise their hand and ask a final question, we do have time, maybe for one more, but if not, just give it a second. Oh, I just saw one. Let's do one more then. It would be interesting to know if there is a similar degree of misperception with regard to other topics, like regulation of water pollutants, or whether climate change has a unique level of misperception. Yeah, we're currently in the stages of compiling different specific policies and climate-related questions to investigate in our social media study. So I will note water pollutants, because there are definitely some really interesting niche questions, especially things related to engineering, that have a lot of complexity to the misperception, the things that get mixed into those conversations. So I will look into that. It would be interesting to see, because things like water pollution are maybe less politically charged, and I wonder if that would have an effect. Great, well, this was so interesting. I thought this was a wonderful talk. Thank you so much, Kath, for coming and sharing your expertise and your research with us. I, for one, learned a lot. And thank you everyone for attending. We do have one more science seminar coming up for this academic year. It's going to be May 14th. Colleen Iversen is going to be here with us, talking about the hidden half of ecosystem response to climate change, and I think we are back into the roots and soil. So yeah, thank you so much again for a wonderful talk.
We'll hope to see everyone again next time and take care. Bye everyone.