All right, I think we will get started. Thank you all so much for being here on a Friday afternoon when it's a little snowy out. My name is Chris Bavitz. I am one of the faculty co-directors of the Berkman Klein Center for Internet and Society. And we are thrilled to have you here today and thrilled to have Professor Sunstein here for a book talk. Cass Sunstein is the Robert Walmsley University Professor at Harvard Law School. And he's going to be talking to us today about his new book, which is called #Republic: Divided Democracy in the Age of Social Media. And I confirmed that we say the word hashtag when we actually say the title of the book, which is important. A couple of administrative, just logistical things. We are live streaming and recording this. We're going to have plenty of time for Q&A, but just keep that in mind if you'd like to ask any questions. We also have a book sale table in the back of the room. You can purchase the book there through the Coop. And Professor Sunstein's happy to stick around and sign that for you if you'd like. I'm going to turn things over to Cass Sunstein. Thank you. Okay, great. So this project has, in the short run, taken about a year and a half. And I've actually been working on these subjects generally for well over a decade. What I'm going to do is focus on a very particular piece of the puzzle, which involves how people learn about politics and law and how new information changes or doesn't change their views. So in economic theory, the way to think of this is in terms of Bayesian updating, which means how you update rationally when you receive new information. And here's a little tale of Bayesian updating, which doesn't involve politics, but seems to me to bear directly on #Republic. What you're seeing is the Waldorf Towers, where I lived with my wife for several years when she was United States ambassador to the United Nations. And her name is Samantha Power. And for a significant amount of time at that place that you're seeing, everyone who worked there, when they saw me in the morning or the evening, would say, hello, Mr. Power. Good morning, Mr. Power. Good evening, Mr. Power. And that didn't bother me, except that there was one person who was becoming a friend and who kept calling me Mr. Power. And I thought, if we were to be friends, you should know my name. So I said to him one day, you know, it's actually Cass, Cass Sunstein, and you can call me first name, last name, whatever you're comfortable with, but my name is Cass Sunstein. And he looked at me and he said, that's amazing. That's incredible. You look exactly like Mr. Power. And on reflection, he was incorporating new information based on his prior beliefs in a way that was fully rational. His prior belief was there's a person, a little bald, kind of tall, whose name is Mr. Power. There's another person who looks the same who is saying his name is Sunstein. They must be twins. The idea that Mr. Power was actually Mr. Sunstein, given his prior convictions, was not believable. He was rationally updating on the basis of his priors. Here's another story of Bayesianism, maybe, about updating. And it involves a discussion between two of the great screenwriters of the last period of human history, meaning since there have been Homo sapiens, two of the great screenwriters, Lawrence Kasdan, and the sainted George Lucas, who were responsible for some of the best episodes of Star Wars.
And as they were discussing Return of the Jedi, the third in release order, Kasdan said, you know what, you gotta kill Luke. Luke has to die. And Lucas, whose name is kind of like Luke, that's not a coincidence, says, Luke isn't gonna die. And Kasdan says, well then kill someone, kill Yoda. And Lucas says, you don't go around killing people. It's not nice. And then Kasdan, and this was recorded in real time, started to get very serious about the nature of art. And he said, I'm saying that the movie has more emotional weight if someone you love is lost along the way. The journey has more impact. And Lucas responded, I think, with a kind of speed that's revealing, I don't like that and I don't believe that. That's a very revealing phrase because of the sequence. I don't like that and I don't believe that. Disliking the proposition preceded disbelieving the proposition. On one view, Lucas is being like my friend at the Waldorf Towers: given his prior convictions, he's updating through disbelief. But there's maybe something else going on about his emotional reaction to Kasdan's proposition and how his emotional reaction affected his judgment about facts. Okay, so some political and legal examples. You can think of how social media, Twitter, Facebook, not quite Instagram, not very often, but maybe sometimes, bears on climate change, gun control, the Affordable Care Act, and immigration and the criminality associated with immigration. So these are four live issues where there is a lot of information out there and people are updating. Okay, we're gonna get to law and policy really soon, but let's start with something a bit different. We're gonna do a little experiment. Everyone who's listening to these words, please participate wherever you are. How good looking do you think you are, on a bounded scale of one to 10? You don't have to say it, but think it if you would. I have some news for you, so here's some information. That's you, you're amazing looking. Now, having seen that claim of fact, that is, that you are amazing looking, how good looking do you think you are? Now let's try the experiment a little different way. Do it again: imagine you didn't see that picture of those terrific looking people, and think, how good looking do you think you are? Same as the first time, but now I have some news for you: that's you. I'm sorry, it's really bad news. Dogs are mostly beautiful, and that one in its own way surely is, but still, you don't wanna look like that one. Or maybe you kinda do. Okay, now what do you think? Okay, here's how human beings are. They are asymmetrical updaters with respect to looks. That is, when people hear that they are better looking than they thought, they update in a way that is cheerful, and they do it. If they hear that they are worse looking than they thought, they are resistant to the news. They are less likely to update and they are averse to the new information. Okay, what's extremely interesting about that, I think, is that there's a lot of talk about confirmation bias, where people like information and are affected by information that fits with their prior convictions and they disregard information that doesn't. This isn't about that. This is confirmation bias on its own having no effect; the updating is driven by direction. So people who think they're terrible looking are more likely to update when they see the good news than the bad news. It's not about confirmation bias, it's a good news, bad news effect.
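[A minimal sketch of the direction-dependent updating just described. This is not the model from the studies mentioned in the talk; the function and the weights are invented purely for illustration. The only point is that when favorable news gets more weight than unfavorable news, equally informative messages move beliefs by different amounts.]

```python
# Illustrative sketch of asymmetric (good news / bad news) updating.
# Weights are made up for illustration, not estimated from any study.

def update(belief, evidence, weight_good, weight_bad):
    """Move `belief` toward `evidence`, weighting the move differently
    depending on direction: evidence below the current belief counts as
    good news (e.g. 'your risk is lower than you thought'), evidence
    above it counts as bad news."""
    weight = weight_good if evidence < belief else weight_bad
    return belief + weight * (evidence - belief)

# Someone who rates their risk of some bad outcome at 50 on a 0-100 scale,
# and who weights good news more heavily than bad news:
risk = 50.0
after_good_news = update(risk, 30.0, weight_good=0.6, weight_bad=0.2)  # 38.0
after_bad_news = update(risk, 70.0, weight_good=0.6, weight_bad=0.2)   # 54.0

print(after_good_news, after_bad_news)
# The evidence shifts by 20 points in either direction, but the belief
# moves 12 points down and only 4 points up: a good news, bad news effect
# that owes nothing to confirmation bias.
```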
Now we might think that this is motivated reasoning, that people are made happy by the good news and therefore it affects them, and made sad by the bad news, and that's probably right. Another view is that, at least for many people, they are updating based on their prior convictions. They think, I'm pretty good looking; they hear they look like a dog; they think, that's ridiculous, I don't have to pay attention to that foolishness. And on that view they are like my friend at the Waldorf Towers. Okay, now the good news, bad news effect has been studied in multiple domains and it's very robust with respect to personal information. So people are asked how likely they think it is that they will be vandalized, die before 80, suffer from insomnia or infertility or heart failure or cancer or diabetes or obesity. If they give a prediction and then they're given new information that suggests life is actually not as risky as they thought, they will update. If they get bad news suggesting their life is actually more dangerous than they thought, they're less likely to update. Those are kind of grim things, most of those. Being trapped in an elevator or having a mouse or rat in your house, not that terrible, but exactly the same phenomenon. Good news, that it's highly unlikely you're gonna have a rat in your basement, is something to which you will be highly responsive. Bad news, that, you know what, in Cambridge there are a lot of rats in basements, you're less likely to be moved by. So we know that there is some neurological foundation for this. There's a part of the brain that is reluctant to let the bad news in, this is technical neuroscience talk, as you can tell, a part of the brain that is blocking incorporation of bad news, and there's another part of the brain that incorporates good news, and it is very able to do that. In fact, the data suggest that separable processes govern learning from favorable and unfavorable information, different processes in our brains. So much so that if we want to get people to incorporate bad news, that is doable by zapping the part of the brain that blocks incorporation of bad news, and then the good news, bad news effect goes away. So selective disruption of regional human brain function paradoxically enhances the ability to incorporate unfavorable information into beliefs of vulnerability. Okay, the words I just said were written by a tremendous neuroscientist, Tali Sharot, and inspired by her work and the work of others on updating based on personal information, I've been intrigued by these issues in policy and law. So a hypothesis is that with respect to many issues of policy and law, we'll observe the same phenomenon. So if you hear on the internet that terrorism's not such a terrible problem, that's good news, you will update, and if you receive news suggesting, you know what, terrorism's a horrible problem, you will be more resistant to the updating. Okay, so a hypothesis is that we're gonna observe in policy and law exactly the same thing and there will be asymmetric updating on social media that conforms to the general finding about personal information. Okay, I don't believe that's likely to be true. And so an intuition is that for many people, learning that terrorism is less of a problem than appears will be less impactful than learning that terrorism is a more serious problem than appears. The mechanism is unclear, but the intuition, I hope, is not unclear.
So for some people, if you hear Obamacare is ruining everything, that will be influential and you'll update, whereas if you hear Obamacare isn't doing much damage, you will not be influenced by that, even though it's good news. And the claim, the intuition at least, is that good news and bad news will produce asymmetrical updating, but it's going to be very different depending on the prior convictions and emotions of the recipients. So you will see a good news, bad news effect for part of the population, but you'll see the opposite, where bad news crushes good news in its impact, for another part of the population. And in terms of social division, if that happens, it's completely devastating. Okay, to bring this to earth, I have some data for you. It's in the book and I'm gonna spell out the core of it here. Okay, so a hypothesis is that with respect to climate change, strong believers are gonna update more with bad news than good news and skeptics will update more with good news. That's a kind of very crude hypothesis. I think it's very intuitive for skeptics: if you think climate change isn't a serious problem and you read stuff suggesting it gets cold, it gets hot, it has nothing to do with what people are doing, that's very impactful. If you read stuff suggesting we're all gonna burn up, you think, that's crazy environmentalist pseudoscience. That's intuitive. The idea that believers will update more with bad news than good news is less intuitive, and I'll have a few things to say about whether it's true and why it might happen. Okay, so here's the experiment. We got hundreds of people, now more than 300, and basically asked them three simple questions. Do you think the Paris Agreement is a good idea? Do you believe that man-made climate change is occurring? And are you an environmentalist? The new head of the Environmental Protection Agency, we know his answer to the first two questions is not very positive. I'm not sure what he'd say to the third question, but if that's right, the head of the Environmental Protection Agency would emerge as a weak climate change believer. What we did was to create three terciles just based on where people fall within the distribution of the population, where the top is called strong climate change believers and the bottom is called weak climate change believers, not because they deny the existence of climate change but because they're in the bottom third of the distribution, got it? Yeah, yeah, it was asked at a time when it was salient, so they've got it. Okay, so in the first question, we told them that many scientists think we'll get up to six degrees Fahrenheit or more. Now notice that in this part of the experiment, and I'm gonna give you another version, we gave people an anchor, meaning a number to help organize their judgments. That was a controversial choice we made and it might not have been right, so I'm gonna give you data soon from when we didn't give them an anchor, but we didn't want the numbers to be all over the lot, so we just said that. And the results we got once we asked people what they thought were pleasingly orderly. So the weak climate change believers think we're gonna be at 3.6 degrees, the strong think we're gonna be at 6.3, and the moderates at 5.9, this is by 2100. I say that's orderly because we're getting from the strong believers a higher number than the moderates, and that's higher than the weak. So that's, no one's gonna publish the paper we just described, it's basically too straightforward and intuitive.
Here comes the experiment. Okay, so the participants are sorted into one of two conditions. In the good news condition, all three groups, the weak, strong, and moderates, and in this version of the experiment that's about 160 people, are told to assume that, you know what, the climate change problem isn't so bad, it's gonna be one to five degrees Fahrenheit by 2100. That's what they were told. And keep in mind this is a lot like what people are seeing in the media and on the internet; basically every week they're seeing stuff like that. The other half was not given the good news, they were given bad news. They were told, uh-oh, the scientists have reassessed the data and they think things are much worse than previously thought, we're going to go up to seven to 11 degrees Fahrenheit by 2100. Got the setup? Okay, now here's what happened. Weak believers in climate change were moved by the good news; their average estimate fell by about one degree. Now given that they started at a baseline of 3.6, which is pretty low, that's a very significant change. The good news basically drove their numbers way down and they seemed to think, okay, it's gonna be all right, even more all right than we thought. For the weak believers, and mind you these are not climate denialists, these are 3.6 people, the bad news had zero effect on their estimates. None. That's, I think, like my guy at the Waldorf Towers saying you look exactly like Mr. Power. There was no movement in his original belief, which was that the ambassador had a spouse whose name was the same as hers. They moved not at all. Now this finding, in my view, is political dynamite though it's intellectual water. It's political dynamite because it tells us something very dramatic about updating. It's intellectual water, meaning it's not that big a deal. Water isn't that big a deal unless you're really thirsty. Have I just made you thirsty? Water generally isn't that big a deal in life, thank goodness, because we have plenty of it. And the reason is that in this area we have plenty of stuff showing that people are more responsive to good news than bad news. And all our study does so far is to show that's true for the bottom tercile with respect to a political issue as well. Okay, here's what is, I think, political dynamite and intellectually something stronger than water. Strong believers in climate change were far more moved by the bad news. Their average estimate jumped by about two degrees, and when they got the good news it fell by 0.9 degrees, and those of you who are alert to statistics will notice that if you're at 6.3, falling by 0.9 may be statistically significant, but it's not that big a deal. So what we're observing is asymmetrical updating of a diametrically opposed sort, where the weak believers are more responsive to the good news and the strong believers are more responsive to the bad news. The moderate climate change believers, by the way, were equally moved in both cases. No asymmetrical updating, no difference between how they updated. Okay, so with respect to political affiliation, I think the simple thing to say is that of course Democrats had a higher climate change belief score, but updating behavior across good and bad news didn't interact with party affiliation. That's significant. It means neither Republicans nor Democrats as such are more prone to the effect. The effect really depends on your prior conviction, not on your party affiliation.
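[The pattern just described can be laid out with the approximate figures quoted in the talk. The snippet below is only a way of organizing those round numbers; it is not the study's dataset or its actual analysis, and the moderates' changes are placeholders assumed here to make the stated symmetry visible.]

```python
# Approximate figures quoted in the talk (degrees Fahrenheit by 2100),
# arranged to show the diametrically opposed asymmetry. Illustrative only.

baseline = {"weak": 3.6, "moderate": 5.9, "strong": 6.3}

# Mean change in the estimate after each condition, as described in the talk.
# The moderates were said to move about equally in both conditions; +/- 1.0
# is an assumed placeholder, not a reported number.
change_after_good_news = {"weak": -1.0, "moderate": -1.0, "strong": -0.9}
change_after_bad_news = {"weak": 0.0, "moderate": 1.0, "strong": 2.0}

for group in baseline:
    # Simple asymmetry score: how much more the group moves on bad news
    # than on good news (positive means more responsive to bad news).
    asymmetry = abs(change_after_bad_news[group]) - abs(change_after_good_news[group])
    print(f"{group:>8}: baseline {baseline[group]:.1f}F, "
          f"good news {change_after_good_news[group]:+.1f}, "
          f"bad news {change_after_bad_news[group]:+.1f}, "
          f"asymmetry {asymmetry:+.1f}")

# Weak believers respond only to good news, strong believers mostly to bad
# news, and moderates respond symmetrically, so repeated exposure to the
# same mixed stream of reports pushes the two tails further apart.
```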
Okay, I told you that the co-authors, of whom I was one, were a little uncertain whether giving the anchor of six degrees was the right choice. So we ran the experiment again more recently without any anchor. The overall average was five degrees Fahrenheit, pretty close to what we observed with the anchor. That's comforting. The three groups were at 5.3, 5.6, and 4.2. That's a puzzle. First, and I think the less difficult puzzle: why is the bottom tercile at 4.2, whereas in the other condition it was at 3.6? It might be that people in the bottom tercile adjust from the anchor. They say, if it's six, I'm way below that, and they end up at 3.6. Or it might just be that it's a different population and not much can be made of it. The slightly more difficult puzzle is that the strong believers have a slightly lower prediction than the moderate believers. That's weird; strong believers shouldn't. I think that's explicable on the ground that if you don't give people an anchor, the numbers don't mean much. If you're a strong believer, you think it's gonna be a lot: five. If you're a moderate believer, you believe it's gonna be moderate: five. You don't know what the numbers mean. So I wouldn't make a lot of the numbers given. The real question is what's gonna happen in the no anchor condition once we do what we did before, which is divide them into groups that are exposed to good news and bad news. And basically I can tell you the basic findings were replicated. So the moderates showed no asymmetry. The strong were more responsive to the bad news and the weak were more responsive to the good news, just as in the anchor condition. One notation here: in this version the weak showed some movement in response to the bad news. They weren't utterly impervious, but I wouldn't take that as a major finding. It's fair enough for our experiment to say, if you're a weak climate change believer, bad news will move you less than good news, and bad news will move you somewhere between a little and not at all. That's what we've got. Okay, so these are climate change findings. We now have a bunch of very preliminary findings in other areas. Gun control, in terms of the number of deaths associated with not controlling guns, whether if you have gun control you save lives, basically looks a lot like the pattern you've just seen. For immigration, our basic finding seems to be that everyone updates more in response to good news than bad news. I'm not sure how to understand that exactly. It may be that our population doesn't have an entrenched commitment to thinking anything in particular about immigration. So when they get the good news they say, I like that and I believe that. And when they get the bad news they say, guess what, I don't like that and I don't believe that. So what's going on here? I think the more intuitive explanation, let's call it the George Lucas explanation, the "I don't like that and I don't believe that" explanation, is that people with high and low climate change belief scores are invested in their attitudes and they update accordingly. So if you have a low belief score, you think it's not so bad, the climate change problem is just not that much to worry about. The good news is doubly welcome. It's positive for the world: not such a terrible situation. And it's also affirming of your prior convictions. So it leads to a big update. It's pleasing news personally, I'm right to be in the bottom tercile, and it's pleasing news in terms of public policy, the climate change problem isn't so bad.
Now for the other group, that is the group with the high belief scores, they are crediting the bad news. The reason, on this account, is that that news is affirming, because it confirms their belief that they've been right to be worried about climate change. In that sense they're motivated to accept bad news. Okay, what is noteworthy about this admitted speculation is that for some people, to learn that a social problem, whether it's immigration or the Affordable Care Act or poverty, is much less serious than they think is actually disturbing, because it undermines their understanding of the world. And people don't love that. Maybe the most tendentious way to put it, which I think might well be true, is that strong climate change believers would rather believe the planet is gonna burn up than that they've been fundamentally wrong. Okay, there's a second explanation which might be right, which says it's more like the Waldorf employee than it is like George Lucas. It's not about motivated reasoning. It's just about updating based on your prior convictions. And I think here I'm gonna give a very simple account without any arithmetic. The simple account is, suppose you believe that the world is not gonna get much hotter by 2100. It's gonna get a little hotter, not much. Then you read some scientists have said it's actually gonna be a very modest problem. Given your prior conviction, that's credible. Given what you think you know, that tells you something new. Now suppose by contrast you read something suggesting it's gonna be up to 11 degrees warmer. You think, given your beliefs, that's crazy talk. That's environmentalist stuff. That isn't science. That's like reading that the Holocaust didn't happen or that dropped objects don't fall. It has no weight. And on that view you don't have to talk about motivations. It's just rational updating given prior convictions. And on that account, the climate change believers, the strong ones, are also rational updaters. That is, they believe the climate change problem is really serious. Once they see it's even worse than they thought, they update; that's consistent with the general direction of their belief, that's credible. When they read it's gonna be tiny, one to five degrees Fahrenheit by 2100, they think that's gotta be paid for by the oil companies. They'll dismiss it. Not because of emotions and motivation on this account, but because they're just rationally updating. Now, we're trying to figure out which of the two accounts is real, and we haven't succeeded yet. But whichever one is real doesn't matter for present purposes. What matters for present purposes is that across a wide range of political and legal issues, you are going to get asymmetrical updating where people in the bottom and top terciles are going in quite different directions as they receive good and bad news, because one bit is more persuasive than the other, and they're going to be going in very different directions. That is Facebook every day. That's what people are seeing on Facebook every day. Okay, so the broader idea is that good news for the country will, for many millions of people, have far more weight than bad news. So we can find groups for whom the claim that the Affordable Care Act isn't producing increases in insurance premiums, or that the $12 minimum wage is not increasing unemployment, is much more convincing than the opposite tale. So that is the good news, bad news effect, as in the looks studies, basically turned political.
But for some groups, the possibility is that apparently good news of exactly these kinds will trigger a negative reaction, because of their desire to be vindicated or because of their prior convictions. So to learn that the Affordable Care Act is producing increases in insurance premiums, or that the $12 minimum wage produces big increases in unemployment, that's pleasing. Okay, we have a little data from judges now, and the data from judges suggests that in the United States, if a Republican appointee is sitting with two Republican appointees, he or she is going to show very conservative voting patterns, much more so than if that Republican appointee is sitting with at least one Democratic appointee. Now, what I just said, I hope, is striking many of you as strange, because if there are two Republican appointees on a panel, they have the votes, they can go the direction they want, and yet they are far more moderate with one Democrat there. Democrats on a Democrat, Democrat, Democrat panel show extremely liberal voting patterns, far left of center. Much more so than if a Democratic appointee has at least one Republican appointee on the panel. It's completely symmetrical. So this is a way of saying that on a DDD panel, if you are a company challenging an environmental regulation, good luck. Your chances are much lower than if you're before an RRR panel. And if you're a labor union challenging a decision by the National Labor Relations Board that went against you, you're much better off with the DDD panel than an RRR panel. Now, one way to put this is that in some areas of the law, the political party of the appointing president is a pretty good predictor of how the judge is gonna vote in an ideologically contested case. But a better predictor of how that judge is gonna vote is the political party of the president who appointed the two other judges on the panel. I didn't expect that finding. Now, what's the explanation here? The explanation seems to be, I think, very similar to what we've discussed so far, either about motivated reasoning or about rational updating. So on an RRR panel, three Republican judges, you're hearing a lot of arguments about why the Environmental Protection Agency overreached, not many the other way, just by definition. If there's a Democratic appointee there, he or she is gonna say a few things about why the EPA was okay. And then the information exchange is going to make you more likely to vote in a moderate direction. That's consistent with what the data show. Now, what I wanna suggest with respect to social media is that the power of personalization is creating a lot of DDD and RRR panels. Facebook's creed, and they are, to their credit, rethinking this, is that your news feed should fit your interests, what you in particular find meaningful. And either through the algorithm, which knows that, or through your own online behavior, we should see the construction of the equivalent of RRR and DDD panels. Okay, so the implication is that a lot of polarization in politics and law, division in many nations in the world, is created and fueled in this way. So every day, you can encounter competing and plausible predictions, suggesting that your own current estimates are too optimistic or too pessimistic. If the evidence involves your own future, good news is gonna have special weight. If the evidence involves politics and law, this isn't necessarily so.
For some populations, it will go that way, and if you're with me, for some populations it'll go exactly the opposite way. More particularly, for some people, objectively good news will qualify as such, and it will get particular attention and produce updating. For others, the same news could contradict convictions to which they are deeply committed, and it will have less weight. So if that's true, when that's true, the circumstances are really ripe for polarization, driven by information baths, and heightened or produced by asymmetrical updating of diametrically opposite kinds. I don't like that, but I believe it. Thanks. Comments, questions, Star Wars references? Now that you've outlined a problem, do you have any suggestions about how to deal with it? Yes. There are a million and one things that can be done, and I'll give a few ideas that are tentative. The ingenuity of America is now being brought to bear on this problem, and I trust that ingenuity much more than my own little ideas, but I'll give a few. Facebook could redesign its newsfeed, this is just one example, so that you encounter stuff that doesn't particularly fit, either with your ideology or with your preferred topics. Once it redesigns the newsfeed, it could do a couple of different things. It could, by default, give you a newsfeed that, let's say, expands your horizons, and allow you to opt out of that particular newsfeed in favor of, you know, I want to see me, me, me. Or it could say, here's your newsfeed, which is our standard newsfeed. Do you want to see opposing viewpoints? Do you want an opposing viewpoint button? Or do you want a serendipity button? And the serendipity button could be of multiple different kinds. It could be topics, it could be points of view, and you could maybe specify the degree of distance from your own interests that you are willing to bear. Technologies are now arising, some of them with names like Escape Your Echo Chamber, where you can work with them and your social media to get stuff that's very different from what either the preexisting algorithm or your own particular choices would get you. So social media can do a lot of stuff on this. We can imagine what I hope will, in the fullness of time, be successful business models: social media companies that are trying to provide people with serendipity or diversity of view. And we could also imagine a kind of cultural movement, even a small-scale one, in which a lot of people insist on seeing stuff that doesn't fit with their preexisting convictions. I have an observation that I thought you might comment on if it strikes you as such. People are always saying, read what you don't agree with. But of course, that's more preached than practiced. I mean, I've noticed that if I sit down and read a column that I really agree with, I feel very pleasant. And if I read a column that I don't agree with, I feel very unpleasant. So it seems to me it's human nature that we're always gonna seek out those columns and opinions that we agree with, because they just please us. They make us feel better than if we read something we don't agree with. Do you have any comment on that? That's a completely fair point. And there's a lot of data consistent with what you say. So I'm pondering a few things. It may be that many people undoubtedly have a short-term aversive reaction to reading stuff that is inconsistent with what they think, but they also have an aspiration to be exposed to that stuff.
And it's their own aspiration, and their aspiration at least sometimes trumps their short-term aversion. So there's a great old paper by a philosopher named Harry Frankfurt that talks about how people have desires and they also have desires about their desires, second-order desires. And when you seek out things that make you feel uncomfortable, you might be vindicating something that is part of what you care about. So there's that. It's also the case, I think, that both one's immediate culture and the larger culture have an impact on what kind of displeasure you feel reading stuff that you disagree with. So surely some of us, on some days, read something that is politically jarring, and some four-letter word comes through our minds, and it's horrible. But some of us on some days will read that and there'll be something like, I think, delight in saying, oh, there's a point there, I hadn't thought of that. And either individual personalities or cultures can cultivate that. And there's something, I think, that's quite relaxing actually about reading something that's extremely jarring and disagreeable and finding some part of you nodding. That's a great feeling. Yes, I'll tell you something a little personal that maybe will be a little more concrete than the abstractions. When I was nominated for my government job I had to be confirmed by the Senate, and it was a really difficult process, and several Republican senators not only voted against me, they made it so I couldn't receive a vote. And one of them was Senator Saxby Chambliss from Georgia, who was extremely negative about yours truly. And we got a meeting with him, and I loved him. I thought he was fantastic. His negativity about me was: here's a law professor, he seems to like animals, and I represent farmers, and they are really suffering, and is he gonna pour stuff on them that's gonna hurt them in an economically very challenging time. Now, not only did I like him personally and feel his concern for the economic situation of his constituents was, do you say honorable? That's too weak; it was fantastic. But also I learned a ton from him, because he knows stuff about farmers and what they're going through and what government regulations are doing to them. He hears that. And to hear him, this was the hour I spent with him, that was tough in a way, but it was fantastic. And he became a great friend, and I can't say that there's a member of the U.S. Senate in the last 10 years, and I like a lot of them, that I like more than him. And if my reaction had been: he's all focused on farmers, what about clean air and clean water, he's all focused on economic development but that's only part of the picture, it would have made me feel terrible. That would have been a terrible way to interact with him. It would have been how a terrible person interacts with a U.S. Senator who actually knows some stuff. And that's just one story. But if you read something, let's say you're a Trump supporter and you read something about how his healthcare plan isn't a very good idea, there's something, I think, in opening oneself to thinking, oh, maybe there's a problem there. And for a Senator Sanders supporter to read something suggesting there's some part of the Affordable Care Act that's a terrible mistake, and to say, maybe it is a terrible mistake. I think there's something to that; it makes people less tense in a way, right? There's a tension in defending your own thing. It's sometimes great to give that up. First.
The previous question on social media. I wonder if you could manage to have a conversation with Zuckerberg before he comes to give a commencement speech in front of millions of people. Well, you never know, life is full of surprises. I greatly admire the people at Facebook and from the public accounts it looks like they are attentive to this question and working on it. I do think that Facebook is A, a great thing for countless people. And B, it's been insufficiently attentive to these concerns and thinking about its own newsfeed. So the book is not very excited about what at one point Facebook described as its aspirations through its newsfeed. But the people there are tremendous and there's reason to hope that we all learn. And I certainly have learned a ton from what happens at Silicon Valley. And there's every reason I think to expect that social media will be engaging these problems. Thanks so much for the great talk. I'm curious how much of what you said applies to people with deeply held prior convictions versus not. So there's one interpretation of what you said, which is that tweaking the Facebook feed for those with strong prior convictions won't matter because they're disregarding the contrarian messages that you're feeding to them. But that might be very different for moderate people that have no convictions at all. That's a great question. So if we spell it out, if it's the case that the people with the most intensely felt convictions are impervious to competing views, then where the problem is most acute, the solution will be least effective. That would be to spell it out. Now, if you have people who think dropped objects fall or that the earth goes around the sun and then they read things saying actually dropped objects only sometimes fall and the sun goes around the earth, at least most people will be completely unmoved because the commitment is so bedrock that the competing argument is just ridiculousness. So the question is whether on political issues people are like that and sometimes they are but the hope is that even on issues where people feel very deeply today, if there's a counter argument that's based on evidence and reasons, they may be movable tomorrow. And the other point comes from some old data about group decision making. And there's so much we continue to learn about this. So we're talking about a snapshot of data which in 20 years will seem hopelessly primitive but there's some old stuff that suggests in a group where there's a majority and a minority, the dissident view usually doesn't move the majority but it gets in their heads. So they won't make a different decision on the day the decision has to be made but they heard it and a month or a year later the dissident view turns out to matter and that fits with experience, doesn't it? And so the thought is even if you have an entrenched view and you can think of the Democratic and Republican parties which have moved on at least one issue very significantly in a relatively short amount of time, it has some of that feature. Now there's also political self-interest but... So before social media came along I think a lot of what you were discussing in your talk still applied so people would watch certain news channels or read certain newspapers and I suppose actually when you think of Facebook it largely reflects the community around you and what they're saying. So I was just wondering if you could elaborate on what is particularly novel or new about social media specifically. Yes. 
So the claim is not that, with respect to the forms of social division, social media have demonstrably created more than what preceded them. That's an empirical question, and I'm a lawyer; even to study it empirically would be very tough. The claim is instead that social media, by virtue of the ease of personalization, either on the producer side or the consumer side, let's say, put these problems in very stark relief. Now, that is agnostic on the historical question. Because it's so simple for an algorithm to figure out what your point of view is, probably, if you're on Facebook enough, they can come close to nailing it, and on Twitter they can know what kind of topics you like. Then you can be algorithmed into something for which there was no equivalent before, and that puts the issue of personalization in bright colors. And so that's the only point about social media. Before Facebook and Twitter, et cetera, there was a larger role for receiving information at nytimes.com or wallstreetjournal.com, not that that's better, but it involves much less in the way of personalization, except insofar as those websites are personalizing too. Thank you, Professor. The name is Pranak. I wanted to ask whether in your framework you see a place for the question of the authority of the person who spreads the news. Because I can see how what is being said matters, but I think what is different between social media and a panel of judges is that in a panel of judges they are sort of all equal. While on social media there are some friends whom, on a particular subject, I would maybe treat as more reliable, some as less, and when it comes to media sites there are even some sites where, when you see someone saying something, you have an even stronger conviction against it. So I was wondering, especially on the normative side of we-should-do-something-about-these-algorithms, how do we fit this in? That's undoubtedly true, and there's some data supportive of it, suggesting that if you're a certain type Fox News will be very credible, and for another type it's very dismissible. The New York Times has weight among some and none among others. There is evidence supporting what you say that sometimes a correction of a falsehood can actually strengthen the commitment to the original belief, for reasons that are not clear but probably connected with these things: either, why would they say it wasn't true if it isn't true? The very denial is supportive of the fact. Or, they denied it, I'm mad, so I'm more clearly committed to the thing than before; that's motivated. So clearly source credibility really matters. A hunch is that in my little, very simple data, the strong climate change believers were categorizing the good news scientists as unreliable sources, and the weak climate change believers were doing the same with the bad news scientists; they probably had a picture in their mind of who the scientists were. I think you're entirely right. Now on social media there are a couple of different things. One is that some of the sources of information are like Fox News or the New York Times, meaning they're instantly either self-refuting or clearly convincing. And there are also just people spreading stuff. And I think what's happening when people spread stuff is that there isn't sufficient discounting in the human mind for bias or agenda or just lack of knowledge on the part of the person spreading things. So here's a way to put it. If you go up to someone on the street and ask what time it is, you will assume, won't you, that they will give you the time.
When we interact with members of our species, if they say something to us, we typically assume it's true. And at least normally that's so. Is it gonna rain tomorrow? Why would people say something crazy about that? And I think that's a form of rational Bayesianism, when other people tell you about time and weather. But we are probably credulous Bayesians, meaning we don't discount enough, on social media perhaps in particular, for agenda, bias, or lack of knowledge. So in the earliest days of the internet, a friend of mine who's like a super amazing professor who studies rationality received an email saying, I'm writing from Nigeria, we have a zillion dollars, your name's on it, can you contact me and you can get the zillion dollars. Remember those scams? Some people just fell for the scams, and there wasn't anything wrong with at least some of the people who fell for them. You receive a note, you're told you inherited a zillion dollars; it's worth a discussion, surely. Or not. And that receptivity to the scam is probably similar to what happens with political news. Since I worked in the Obama administration, I'm gonna go after my own side: you hear something about Republicans that's horrifying, if you're a Democrat, and it's like people telling you it's gonna rain tomorrow; you believe it. But the people who are saying the things about Republicans are all too frequently trying to promote themselves or foment rage or get more popular themselves, and there isn't the discounting going on. Now in terms of remedies, you're onto something important, which is that the credibility of the source can really matter. And so if you wanna change someone's view on some climate change issue, and we haven't run this experimentally, but if the climate change scientists who say it's gonna be really horrible turn out to be people who'd previously been not much concerned, my guess is that the low believers will move more. And if the high believers hear that a study came from an MIT-Harvard consortium of people who'd actually been extremely alarmed about the climate change problem and who scaled back their estimates given the new data, probably the high believers will move a bit more. Coming back to your answer to the first question, you suggested, if I understood you right, that Facebook could adjust its algorithm so that it feeds people a variety of points of view, including ones with which they don't agree. Then you gave yourself as an example, which was inspiring, but you're probably not typical. So assuming that most people feel more comfortable when reading what they already agree with, it's really two questions. How would you get those good people at Facebook to do something that they can't expect the majority of their readers really to prefer? That's the sort of surface question. And more deeply, what is the role of platforms then? Is there some kind of a civic role or responsibility or governing role that is a bit different from that of a traditional business? The answer to the second question is yes. So to see Facebook as equivalent to, let's say, a seller of socks whose only responsibility within the confines of the law is to maximize profit is inconsistent with Facebook's massive social impact, including on democratic processes. And my sense from the public statements is that Facebook is in agreement with what I just said. Of course, they have obligations to their shareholders and their employees, so to become an economic loser would not be a very good idea. But for Facebook to see itself as having civic obligations, that's a step forward.
And I think they took that step a long time ago. And the question is, what does that entail exactly? Here's another way to put it, maybe. Suppose you're a large soft drink company. You can think of your favorite one. It has very large effects on human health. And a large soft drink company can think of itself as, we're like someone who sells socks, we just want to maximize profits. Or it can think, well, we actually have some obligations to people. And Pepsi has been focused on the complexity of its social role and has been quite interested in human health. Good for them. In terms of what Facebook should do exactly: what it should not do, given what you said, is turn itself into something whose newsfeed is a nightmare. They shouldn't do that. Here's another thing it shouldn't do: pick favorites. There's no reason to think it has any interest in this. That's great. It shouldn't think, we like one political side and we're gonna promote that. That would be a disservice to the diversity of its customers. But what it might think is, what can we do, consistent with our economic model, that appeals to some people's desire to encounter stuff that's new, or that appeals to some people's desire to see a breadth of views? They probably know who those people are a little bit and can work with them. And they probably know something about the extent to which a large set of their users actually are interested in seeing more than one thing. And so long as users ultimately have control and can say, I just want Bernie Sanders stuff all the time, and in some sense they do, because they don't have to click on or read anything that's not consistent with their views, so long as ultimately the user has control, experimentation with multiple things is in everybody's interest. It would be astounding if the Facebook presentation of its core values in some month of some year, which, let's say, maybe unfairly, is consistent with echo chamber stuff, were the final word. It would be amazing if that was the final word, if that was kind of perfect even in terms of Facebook's economic self-interest. And the fact that we see now multiple services turning up that are providing people with new stuff or very different stuff is suggestive that there's some money to be made. Hi, my name is Martin. So I read Nudge, and on my way over here I was reading Thinking, Fast and Slow, and there's a Cass Sunstein who's referenced in the book a number of times; I hope it's you. My name's Cass Sunstein, so that can't be me. Oh, Sunstein. So there's one reference to, while you were in government, adding gallons per mile in addition to miles per gallon to automobile stickers as a means of providing information that's more informative to auto buyers. And the author, in the paragraph where he mentions that, says that unfortunately this is in small type. And so I guess this was added in 2013; I haven't actually seen it. But would you comment on why the decision was to make it in small type if it is a good idea? And then maybe segue that into: if DDD on a panel comes up differently than RRR, which comes up differently than RDR, do you see a time in the future when we'll ask ourselves, why would we let humans make these decisions? Okay, great. In terms of the label when you buy a car, there are two things that are relevant here, and on the label they're big. One is you can see the annual fuel cost in a big panel.
The other is you can see how much this car is gonna cost you compared to the average vehicle over five years, or how much you're gonna save. So if you look at the thing, there's miles per gallon, which by law must be identified. And then your eyes go right to: this car is gonna cost me $5,000 compared to the average car, or I'm gonna save $5,000. And then there's a big thing that says annual fuel cost, really large, and that had at some point been tiny. Gallons per mile is there. Now ask yourself, and I can't speak for the Obama administration though I was a participant in that decision, ask yourself, are consumers gonna be massively helped by seeing gallons per mile in huge letters? I hope the answer to that is pretty clear. Suppose the EPA in the next year said, we're gonna have a new label when people buy cars, and it's just gonna say gallons per mile, nothing else. That would not be a very good idea. The reason is that what people care about mostly, and I can talk about the environment also, but what they care about mostly is, how much is this gonna cost me compared to a fuel-efficient car? And the two panels I described to you give you that in big letters. People also care about the environment, some people do, and we have a panel on that. Gallons per mile has a big advantage over miles per gallon, which is that it doesn't fall prey to what's called the miles per gallon illusion: if you go from 10 to 11 miles per gallon, you're gonna save a lot of money, but if you go from 50 to 57, you don't save much at all, and people think it's linear. But on reflection, and I'm just speaking for myself now, gallons per mile has an advantage over miles per gallon, but is that what America needs thrown in its face, gallons per mile? No, that's not the best thing you can tell people. The best thing is to tell them about dollars or about the environment, I think. Now, I'm sure better can be done than the current label, but don't go big on gallons per mile. Whatever country you're from, that would not be the best idea. In Europe they do something like that; it's better than miles per gallon, okay. In terms of replacing RRR with non-humans, do you mean dolphins or algorithms? Okay, so there's some excellent work being done on exactly this question. Algorithms are good at making predictive judgments, if they're good algorithms. So take the question whether a patient who needs a knee replaced is gonna do well with the knee replacement, given certain background facts. Evidently algorithms are great at that and they outperform doctors. There are some legally relevant judgments where there's new data suggesting algorithms outperform judges, in terms of whether you should release someone on bail. The algorithms are quite good. What I'm puzzling over is which of two possible answers to your question is right. One answer is we're never gonna get to an algorithm that's good at making the kinds of decisions those panels have to make, because there's something intrinsic to the question which means it's not a question for which algorithms are well suited. The other is that we're just a long way away from getting to algorithms that can do as well as judges. I'm not sure whether the first answer or the second answer is the right one. If the first answer is the right one, it's because the question, let's say whether someone discriminated in violation of the civil rights laws on the basis of sex, is that susceptible to an algorithmic answer? What kinds of questions is the algorithm suited to resolve there?
You could imagine a specification of the case such that an imagined algorithm would do great, but you could probably imagine ones for which it wouldn't. And the safer answer is that the algorithms are gonna get there, but not in a hurry. We have time for one last question. Thank you, professor. Let's think about a smaller group, not the whole republic, but for example the legal institutions and the insiders in these legal institutions. How do you package good news from the progressive side, send it to the conservative side, and try to move them to a less conservative position? How can I make a good package and convince them? Okay, so the question is how can progressives in an elite institution move conservatives? But insiders, the members of these institutions, like George Thornes, the defenders. Okay, so at first glance, there's no particular reason in the abstract to think that the progressives are right, but let's suppose they are. So let's suppose there's some issue where the progressives are right; here's one where I feel pretty confident: the earned income tax credit's a really good thing. And I think a lot of conservatives agree, but let's suppose there's an elite institution where the conservatives don't. The best bet is to invoke facts and evidence. So the question would be, why are people skeptical about policy X? If it's because they think it has some consequence that's terrible, then the question is, does it in fact have that terrible consequence? And the task would be to explain why it doesn't. And I'm thinking a little bit that in at least some elite institutions, where they're working well, evidence is the coin of the realm, and shame or reputational concerns or who's part of the tribe, that's utterly irrelevant. But I'm thinking of this great book called How to Win Friends and Influence People, which says, you can't win an argument, don't even try. Because if you win an argument, you won't get the person to change their mind; they'll just think what they thought before and they'll be angry. Now this is undoubtedly too strong, but it has some human insight. He's not talking about elite institutions, he's talking about, like, at dinner parties or with your friends. So you can't win an argument, but you can sometimes get people to want to agree with you. And he has some ways he thinks are effective at that. Now I wouldn't encourage anyone in government or a company or an educational institution to rely on that. Use evidence. So I referred to my Republican friend; as it happened, he moved me, but he moved me by telling me facts. He told me some stuff about farmers and about some things that might have adverse effects on them. He didn't, by the way, tell personal stories or make me cry. He told me facts. And that was behind closed doors; certainly that was influential. In elite institutions, like a company that's deciding on a policy, whether it's some place in Silicon Valley or a startup that's making some product, move them by telling them some truths. Thank y'all. Thank you.