Good morning, everyone. Well, good morning, everyone in Australia. Good evening, I think, to Sam in the Bay Area. I assume it's sort of pre-dinner time, so we're getting everyone at very different ends of the day for this session. Can I please welcome everyone to the breakfast session of day two of the ANU Crawford Leadership Forum. Thank you so much for joining us. Before we proceed, I want to acknowledge and thank the traditional owners of Ngunnawal Country, from where I join you today. Before we enter the discussion, I'd like to go through a few housekeeping things. I'm sorry, I'm sure you've heard these so many times by now. If you do want to raise a question, please put it in the chat, or raise your hand either on screen or via Zoom. Samantha and I are going to have a chat for about 15 minutes, depending on how much we can eke out of our disagreements, and then we will open up to questions from the floor. So if you have something you want to interject with, keep that in mind. There'll be plenty of time to question both of us, but particularly Samantha, because she is very much the expert here.

So I will introduce Samantha. She is a very well published and very well cited expert in disinformation and misinformation, based at the Stanford Internet Observatory and the Digital Civil Society Lab. She writes on technology in society, media, foreign influence, and, I guess, how we wrangle with the internet and social media in international relations and in domestic politics. I'm going to start by asking Sam a question that I think gets to a core difference in how researchers view social media. Myself, I'm very much a domestic politics person: I look at elections, I look at how we can change voters' minds. Samantha comes to this very much from the perspective of foreign influence.
So from my perspective, social media doesn't matter, right? People's political views and preferences form so young that jumping onto Twitter or Facebook shouldn't matter. That's the very conservative domestic politics view. But Samantha, convince me why social media matters.

Yeah, and Jill, thank you so much for that introduction, and good morning, everyone in Australia. It is a little before dinner here, so hopefully you don't hear my tummy grumbling through the microphone. That's such a great question to kick us off: what is actually new and important here about social media? Because, as Jill pointed out, people consume a wide variety of information from so many different sources. We're influenced by our friends, we're influenced by our families, and our political ideologies aren't something where we read one piece of news and all of a sudden we've changed our mind on an entire contentious issue; it's something that happens slowly over time, if at all. So this really is the million-dollar question: what's actually new about social media, and has it disrupted our information landscape and our interpersonal relations enough to drive these growing trends in polarization?

There are a number of theories and arguments out there about the effect social media can have on polarization. The first, which some of you might have heard of before, has to do with filter bubbles. That's the idea that social media algorithms themselves reinforce a particular point of view, because algorithms personalize content, and they personalize it based on people's data. Platforms collect all kinds of information about our political preferences and our various interests, and make big inferences about who we are as people and who we are as voters.
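The personalization loop described here, where inferred preferences shape the feed and each interaction sharpens the inference, can be sketched as a purely illustrative toy model. None of this is any platform's actual algorithm; all names, numbers, and the update rule are invented for illustration.

```python
# Toy model of the filter-bubble mechanism: a feed that ranks posts by how
# closely they match a user's inferred leaning, where each click nudges the
# inferred leaning toward the clicked content. Entirely hypothetical.

def rank_feed(posts, user_leaning):
    """Sort posts so those closest to the inferred leaning come first.

    posts: list of (title, leaning), leaning in -1.0 (left) .. +1.0 (right)
    user_leaning: the platform's current inference about this user
    """
    return sorted(posts, key=lambda p: abs(p[1] - user_leaning))

def update_leaning(user_leaning, clicked_leaning, rate=0.3):
    """Each click moves the inferred leaning a step toward the clicked item."""
    return user_leaning + rate * (clicked_leaning - user_leaning)

posts = [("op-ed A", -0.8), ("wire report", 0.0), ("op-ed B", 0.8)]
leaning = 0.5  # inferred as somewhat right-leaning to start
for _ in range(5):
    top = rank_feed(posts, leaning)[0]      # user sees and clicks the top item
    leaning = update_leaning(leaning, top[1])
print(round(leaning, 2))  # the loop drifts the inference toward +0.8
```

The point of the sketch is the feedback: the feed shows what fits the inference, the click confirms the inference, and exposure to the other side shrinks on every pass.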
The algorithms then feed content that already fits our preferences into our news feeds, so we're continuously seeing either right-leaning or left-leaning content. That's the first theory about how social media might influence polarization: if you're only seeing content from the political left or the political right, you lose the diversity of viewpoints that helps make democracy work and that might challenge your own biases.

The second theory is that it's not actually the algorithms but the way social media structures group interactions. This is the theory around echo chambers: I already have a group of friends who lean one way or another politically, I follow those people, I interact with them, and I stay in an echo chamber where I'm not necessarily exposed to other kinds of people with diverse views around politics. Society polarizes that way because people self-select into groups.

The third theory around social media and polarization is again related to the algorithms, but to the way they promote certain kinds of content over others, not necessarily political content. This really connects to the different kinds of polarization we talk about, because there's ideological polarization, which is about not being able to agree on political ideologies, and then there's affective polarization, which is the idea that I, as someone with one political ideology, don't like you as a person because you have a different political ideology.
And this is where attention economics comes into play, because algorithms might promote negative content, conspiratorial content, content that is rumorous or slanderous, and that stuff tends to go viral and travel very far on social media. So we're constantly being exposed to the other side, to people who do not share our political values, as negative and bad, and affective polarization might therefore increase. So those are the three theories. And to be honest, Jill, the literature is mixed: there are studies showing both sides for all of these things. We don't have good answers that say social media is for sure promoting echo chambers, or for sure promoting filter bubbles, or for sure feeding us negative content through attention economics in a way that leads directly to polarization. But these are the three theories people tend to point to when thinking about what's actually new here about social media, and how it could be changing how we view other political ideologies and how we view each other.

Let's talk a little bit more about affective polarization, as I didn't know you were going to bring that up. And I love the theory of affective polarization (well, I hate it from a democratic sense), but it's probably something a lot of the audience hasn't really come across, because it's still a little bit inside baseball. It's this idea that we have a sense of identity about the party that we like, or the social group we identify with. I'm an academic, I'm a cricket fan, and I live in Canberra, and all of these things make up my identity. But affective polarization is a little more embedded, a little more angry. It taps into this idea that we don't just like our in-group; we often hate the out-group.
And while this can sound really aggressive and confronting, it's not always the case. It might just be that as much as I like to see my party win at an election, I also kind of like to see the other party lose. In conventional political science, that's something we thought didn't matter so much. The classic study of negative and emotional advertising in political science and election campaigns suggests that when people see too much of this kind of angry emotion from candidates or from political parties, they tune out. It might influence our vote, but mostly it makes us less likely to care generally. But something about algorithms and social media feeds off this, right? You mentioned this in your last point, and I think this is one area where we can instinctively feel that social media does matter: we do see angrier content, we do see this more affectively loaded content, more than we see boring long-form explainers or think pieces about Afghanistan, for instance. So what role do the platforms have here? Are they really to blame in pushing this viral, click-baity, affective kind of content?

Yeah, so I think we can definitely think about the new role of technology here and the role of the algorithms, because for sure, one hundred percent, they are playing a role. At the end of the day, social media companies are businesses, and they are businesses built on attention economics and surveillance capitalism: you take a whole bunch of data about people and feed it into your algorithms so that people's attention stays on the screen longer, so you're constantly scrolling on your phone.
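The engagement-maximizing ranking just described can be sketched as a toy model. Everything below (the post attributes, the weights, the scores) is invented for illustration; it is not any platform's real ranking function, only the shape of the incentive.

```python
# Toy sketch of attention-economics ranking: if the objective is predicted
# engagement, and emotionally charged posts reliably earn more clicks and
# shares, they float to the top regardless of how informative they are.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    emotional_intensity: float  # 0..1, e.g. from a hypothetical sentiment model
    informativeness: float      # 0..1

def predicted_engagement(post, w_emotion=0.8, w_info=0.2):
    # An attention-driven ranker weights emotion heavily because it predicts
    # clicks; nothing in the objective rewards being informative.
    return w_emotion * post.emotional_intensity + w_info * post.informativeness

feed = [
    Post("Long-form explainer on Afghanistan", 0.1, 0.9),
    Post("Outrage take on the other party", 0.9, 0.2),
    Post("Cute cat video", 0.8, 0.0),
]
for post in sorted(feed, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.2f}  {post.title}")
```

With these invented weights, the outrage post ranks first, the cat video second, and the explainer last, which mirrors the discussion: both positive and negative emotional content outcompete plain facts, with negative emotion scoring highest.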
And in that time you can deliver advertisements to them, or sell users' data to other parties who want to advertise to various users across the platforms. So there is an actual economic incentive for this kind of content to go viral. That being said, it's also because we as people tend to like this content. There are lots of studies out there showing that emotional content tends to travel much farther on the internet than plain old facts. And this doesn't always have to be negative emotion: think of cat videos, or those cute animal videos. They tend to get shared online a lot because they elicit a positive emotional response, and then people go on and share that kind of content. But negative emotions tend to have a much stronger effect on people, and so they'll often travel farther than some of the happier content we see online. So part of it is for sure the social media platforms, and I don't absolve them of their role in promoting this kind of content, but you also can't take the human factor out of it.

So you mentioned facts there, and that gets us to our next point. Can you explain the difference between disinformation and misinformation? Because again, this is very inside baseball, but it's something we get hung up on in the discipline. And again, bring it back to the role of the platforms in, I guess, helping us parse disinformation and misinformation as humble audiences.

So typically, in the literature, or in general, when we're thinking about misinformation, we're thinking about content that might be spread online that people don't necessarily know is false.
So there's a question of intent behind the definitions of misinformation and disinformation. With misinformation there's no ill intent to misinform people by sharing the content, whereas with disinformation there is some intent to misinform. We tend to talk about disinformation in the context of foreign influence operations online, because you have a state-backed actor that is purposefully injecting false information into the social media ecosystem and trying to get other people to pick it up and share it, perhaps as misinformation, because they might not realize it is actually false. So the category can change depending on the actual intent of the person, which makes it really tricky definitionally, because intent is something that's really hard to measure. But usually when we're using those two terms, we're thinking about the intent behind the sender of the message.

So you mentioned state actors there, right? We're escalating very quickly from me sending around a meme that is a little bit wrong to Russia trying to meddle with our elections. And I guess this is again a huge definitional problem, because we are talking about an enormous range of actors, and while attention may focus on Russia and Trump and whatever, a lot of this is happening down low. And that's, I guess, the feeling for a lot of us: that the horse has bolted. How can I stop my auntie on Facebook from sharing anti-vax memes, and at the same time worry about election integrity, or some kind of state-versus-state conflict that's happening via Facebook? How does Facebook grapple with that as a company?
You know, I think this is such a hard question, and we constantly see the platforms struggling to implement any kind of meaningful change to their algorithms, their business models, their systems, because it is hard. Whenever we're dealing with anything that has to do with speech, there's a lot of interpretation, a lot of gray area. We're not only working in one language: Facebook is a global company with billions of users who are not only speaking English but a lot of very localized dialects that will have different terms for hate and so on, and moderating all of that at scale is a huge undertaking, full of all kinds of trade-offs. So in terms of what the platforms can actually do about it, it's a very difficult question. And when we're thinking about misinformation in particular, which so often feeds into polarization and this affective polarization, the platforms play a role here, but we also have to think about citizens, and we have to think about government responses. The fight for democracy is really something we have to fight at all levels.

On government responses, and this is a big "if", but if there are state actors around the world looking at Russia and, I guess, being impressed by its internet activities, by these operations it has pumping out disinformation, is there a way to put that genie back in the bottle?

Probably not, I think, especially given how ubiquitous and how important social media is for news consumption, for entertainment, for interpersonal connections, for life, for everybody really.
I think it's going to be really hard to put that genie back in the box, and this is really one of the newest fronts for this kind of interference. Compared to traditional media, traditional methods for spreading propaganda, it's so much cheaper. Creating fake accounts is such a low-cost activity: you just need a SIM card and a cheap mobile phone. And think about the data you can collect about how well your messages are spreading and reaching certain audiences. That kind of information is incredibly valuable to propagandists, because you can do A/B testing; you can see which messages work with which audiences. You couldn't do that in the past if you were dropping leaflets out of a plane during World War Two: who picked up your leaflet, and why did the message resonate with that person? With social media we can do that, and we can target very specific communities of people with very fine-tuned interests because of all the data there is about us. It's not just general sociodemographic information anymore; it's does this person like guns, and do they live in the south, and are they between certain ages, and so on. You can pull in so many different factors, and because we have so much data about people, you can do big-data analyses looking for trends across these communities to really tailor the message. All this to say: people who are purposefully spreading disinformation and using social media to do it are a lot more empowered now, because of this technology, than they were in the past.

So the optimist in me says, well, the barriers to entry are lower for propagandists, and they can more easily hit their KPIs because they have all these amazing metrics.
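The A/B-testing loop Samantha describes, try message variants, read per-segment metrics, keep the better performer for each audience, can be sketched in a few lines. The segments, framings, and share rates below are all invented for illustration; no real data or platform API is involved.

```python
# Purely illustrative sketch of propagandist A/B testing: compare observed
# share rates for two message framings within each audience segment, and
# keep the winning framing per segment. All figures are hypothetical.

def pick_winner(metrics):
    """Return the variant with the highest observed share rate."""
    return max(metrics, key=metrics.get)

# Hypothetical per-segment share rates, as platform metrics might report them:
observed = {
    "rural, likes guns, 35-55": {"fear framing": 0.08, "hope framing": 0.04},
    "urban, 18-30":             {"fear framing": 0.03, "hope framing": 0.06},
}

tailored = {segment: pick_winner(metrics) for segment, metrics in observed.items()}
for segment, winner in tailored.items():
    print(f"{segment}: send '{winner}'")
```

The contrast with leaflets from a plane is the feedback: here the metrics come back broken down by segment, so each audience can be sent the framing that measurably works on it.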
But again, something we know from studying voters is that it's a garbage-in, garbage-out thing: if you're not really putting time and effort into persuading us, then the benefits are low. So I guess, and maybe it's a little Pollyanna-ish of me, if it's not changing voters' minds, then why do we care?

Yeah, I think one of the effects it's really having is not necessarily changing people's minds. First of all, for a lot of ideas at the fringe of the political spectrum that people do hold, it's giving them a megaphone and making them seem a lot louder than they actually are. That loudness then allows other people who might share those political values to join in and bandwagon onto various causes. So it's connecting people at the fringes a lot more than in the past, because now they have a Facebook page or a group they can join; they have an audience on Twitter, with various hashtags they can use. And not only is it allowing these people to come together at the fringes, it's then amplifying that back into the mainstream. Mainstream media and others will often pick up these narratives and share them, and people will share them too, because, as we talked about earlier, if something is conspiratorial or rumor-like, people like that kind of content. So it's very much a vicious cycle, I think. Even though it might not be changing people's minds immediately, the slow day-to-day effect of seeing this kind of content over and over, and increasingly having these fringe ideas amplified, can slowly start to widen the divide between people.
I have a question in the chat that touches on that idea of how social media brings people together but can also drive a wedge between different groups, and that gets us back, I think, to this idea of affective polarization, because in-groups and out-groups do matter. The question is from Sandra, and she asks what you think about linking polarization to the long-observed reduction in social cohesion, or what we might call social capital, in the Putnam-style conceptualization of Bowling Alone, and the way we're atomizing as a society. In other words, social media provides a platform for social cohesion when social cohesion expressed through active involvement in community groups has reduced or broken down. But it also permits an intensity and focus, because of the way it surfaces certain content, which is exactly the point you were just making. And we'd be remiss to talk about social media today and not touch on the fact that we're all here via Zoom: Samantha's here from San Francisco, and the rest of us are here from our lockdown houses in Australia. There is something about social media and these platforms and the way they do bring people together.

Yeah, definitely, and I usually like to cite the Putnam article, Bowling Alone, on exactly this, and also to think about these trends more broadly than just social media, because polarization is something that was happening before the internet and before mobile phones. We've had these growing trends, but social media can amplify a lot of these features just because of its various affordances, the way it networks people and can bring like-minded people together to opt into their groups.
Okay, so you just mentioned affordances, which is something I only really learned about in the last couple of years. It's this idea that for some of us, social media is easy: we have the skills, we can jump online, we know how to download an app, we know how to speak in the kind of way that social media fosters, and we can enter debates online in a way that can create social capital or social cohesion. Can you touch on affordances, how these differ across platforms, and I guess how the different platforms lend themselves to different types of social cohesion?

Yeah, definitely. I'll take the three biggest ones: I'll do Twitter, we can do Facebook and YouTube, and we'll throw TikTok in there too. They all overlap in various ways. When we're thinking about Twitter and social cohesion and affordances, consider the way the news feed and hashtags are structured. You can join certain group conversations using various hashtags, which organize topics and conversations happening on Twitter, and that's one way we tend to see some social cohesion happening, because hashtags are very open and can be read and viewed by everybody. I'd say you don't get as strong an effect as you would in, say, a private Facebook group, where you're inviting members to join, or where you might get vetted before you can actually participate in that group on the Facebook platform. Twitter is pretty open, so anybody can join; you might get some diverging opinions and things like that too, and it's much more conversational.
With Facebook, when we're thinking about social cohesion and the way it brings people together, what is unique again is this group feature, and the pages as well. Pages are essentially run by one person who pushes content to users, and groups are things anybody can participate in, and people will join pages or groups based on their interests. They can be about cats, or about politics; they can be about vaccine misinformation or climate disinformation. There are all kinds of topics people will join in and out of.

TikTok is a really interesting platform. I don't know how many people on this call are on TikTok, but we tend to think about it as one of the new fronts for disinformation, because it reaches a lot of younger generations, the content is very viral, and it's the new kid on the block, so the company doesn't have a lot of the same practices and capacity that other social media companies have. The TikTok algorithm essentially has you scrolling through, constantly seeing different videos. It's organized by hashtags the same way Twitter is, but it's less group-based and more about tailoring content to particular users. So you get social cohesion in the sense that you are targeted as a person who, say, might like dog content, and therefore all you're going to get is dog content. We actually see a lot of news organizations, and even state-backed media, like Russian state-backed media, creating accounts on TikTok and creating this kind of fun video content that will introduce children or teens to, say, what's going on in Syria. So there can be political angles on TikTok content as well.
So it's a very interesting, fascinating space. I'm kind of rambling right now, but that's the point.

It's really terrifying. Whenever I wander onto TikTok it feels like the Wild West, and part of me feels relieved that it's not just me approaching 40, but on the other hand, it feels like it's going to foment a lot of potential polarization in kids.

Yeah, you know, I was just scrolling through some of that content earlier, and it's just so fascinating, because it is this very fun, approachable content, done in the style of other content that appeals to younger generations, very catchy. It's really interesting how they're framing a lot of this disinformation for newer generations, and this lands when people are young and still forming their political identities in various ways. So it's a fascinating phenomenon and an area that needs a lot more research.

This is great; as the parent of a 14-year-old I'm not at all, not at all relieved by learning about this. And now I would be a very terrible employee if I didn't read my boss's comments: the Vice-Chancellor is here in the chat. Brian says that from his vantage point, the internet has disintermediated the curation of information and knowledge, where we used to act as the gatekeepers, and information provision is becoming a lot more democratic. This is something I'm reminded of whenever I tweet out something completely stupid and Brian chimes in to gently ask if maybe I'm not correct, and thank you, Brian, for keeping me on my toes. And now we have an ecosystem where information is ubiquitous and requires each citizen to make individual decisions about how they interact with it.
How do we help citizens connect to information in a way that supports valid information and expertise? And no, I don't think you're being too elite for thinking this, Brian. I think what we find in a lot of media curation is that people are time-poor, and curation can help us parse information, particularly in this ubiquitous information environment where we're drowning in it. Anything that helps us parse it and sort the good from the bad is useful, and at the moment that role, at least on social media, is really passing from journalists, editors and producers to platforms and algorithms. It sounds very much like the robots are coming to get us, but is that the case, Sam, or would you portray it differently?

Yeah, for me, I think some theories from psychology can help us a lot here, when we think of the difference between system one and system two processing. I always get them backwards; I should Google this. I'm pretty sure system two is the slow one and system one is the fast one. So when we're in system one, fast processing, we tend not to pay very much attention to the facts; we have an emotional reaction to the content. And a lot of Facebook content, or social media content in general, is really designed to feed into that system one processing, because of attention economics: it's the stuff that just elicits an emotional reaction from us, so we make decisions about things based on how we feel about them. System two processing is the slower processing we do. That's when we stop, take a second, think rationally about what we're seeing, process it, and come to more logical decisions about what we're reading and how it relates to us and our values.
And social media does not promote that kind of slow, engaged, rational processing, because it's constantly just: scroll more and more, spend more time, read things, watch more videos. So if we're thinking about solutions to get people to be more rational about what they're consuming online, the solutions really come down to giving people content that will force them to engage in this slower, rational processing, as opposed to the quick emotional processing that happens now. And for me, it's not about slapping labels onto content, which is something we see a lot of the platforms doing: labeling mis- and disinformation online, labeling election-related content and linking users to voting information centers, labeling things as fact-checked. A lot of the studies actually show that these practices have a lot of unintended consequences and are not actually making people more rational readers of news. Instead, for me the problem again comes down to the business model of these platforms and the way the algorithms promote a lot of this fast-thinking, emotional content. If we can get out of that, then I think we're actually going to see people be more critical about what they're reading online, and help develop a lot of the skills people need to navigate the wealth of information now available to us.

Can you elaborate on the kinds of interventions you're talking about, the kinds of things we could insert into the platforms or into the ecosystem that might reduce polarization, or might stimulate critical responses, what we call cognitive reflection?
A participant in the chat has also asked about the viability of getting platforms to tell consumers how reliable the information, or the source of the information, is, which is something else we talk about. We talk about fact-checking. We talk about these cognitive reflection tests, where every now and then you're asked to reflect on what you've just read. What are the things that break that attention-economics cycle? Can you give us a quick overview of the different possible interventions and what we know about them so far?

Yeah, so generally speaking, there are many different kinds of interventions that platforms have introduced to combat these problems. You mentioned labeling and fact-checking initiatives. We can talk about investments in news journalism, which is something we've seen a lot of the platforms do recently. There have been some algorithmic changes to downgrade certain kinds of content over others. I guess we could start with labeling, because that's one of the big ones I see a lot of platforms do, especially in the lead-up to elections, introducing a lot of these election-related labels. Related to this is third-party fact-checking, because platforms will partner with third-party fact-checkers and then label content that the fact-checkers have reviewed as being false or misleading, or whatever the appropriate label is. When we're thinking about labeling as a strategy to reduce mis- and disinformation, and therefore combat polarization that way, labeling tends to have a lot of unintended consequences. One of the studies on labeling has shown that if you label content, it tends to decrease people's overall trust in all news.
So it's not just that we're making people critical readers of news from fringe or far-right or far-left websites; we're actually making people skeptical of professional news as well. That might not be good either, because we want people to have high trust in professional media and professional journalism, to an extent. Labeling can also backfire if everything is labeled but a piece of mis- or disinformation makes it through unlabeled. The researchers who did the study call this the implied truth effect: content that is false but unlabeled is trusted as being true. So if platforms miss a piece of content going through their automated filters, people might interpret that mis- or disinformation as actually being true, because it's not labeled. One area where labeling does help: if users are given a prompt that something has been fact-checked as false or misleading, they might not share it. That's one of the positive findings of these labeling studies, that we can actually reduce the sharing of mis- and disinformation by labeling content. But we have to think about the unintended consequences as well when it comes to labeling practices.

And fact-checking goes hand in hand with a lot of those responses. For me, the most promising things we've seen platforms do, and do relatively well, and that I think they could do more of, are the investments in news and journalism. For example, removing paywalls for people who want to access local news. I think that's a really great initiative, because one thing about the internet is that it's made information free, and nobody wants to pay for news anymore if you can just find something else online for free.
But newsrooms need those subscriptions; they need that paywall to help fund a lot of their work, and we can't just completely cut off their modes of financing. So when platforms cover the paywall, or some of the paywall costs, for users to access that content, that's something positive that can help educate users, because if they click on an article they want to read, they don't have to worry about the paywall anymore; they can actually read and access that professionally produced information. The other area where I think they can do more again has to do with the algorithms. They've made several attempts at tweaking their algorithms, for example to not share as much political content in news feeds in the lead-up to elections, but then the question is how you define political content. And there's just so much around the way this negative viral content is spread that I think platforms really need to do more about, but they're disincentivized from doing so because it's their business model.

Thank you so much, Samantha, for being here. Thank you for coming, for our breakfast and your pre-dinner. Thank you so much to everyone who's attended, and thank you to the Crawford School and to the ANU for putting this on. Any more questions you want to ask Samantha, I'm sure you can get in touch with her either on Twitter or via email. Or stalk her on Facebook. No, don't stalk her on Facebook. If you want to get in touch with me, I don't know why you would, but I'm very easy to find. We will let everyone go and have their breakfast now. Thank you so much, and we'll see you for the final panels in the leadership forum this morning.