Hey everybody, we are at the top of the hour here, and it's really exciting to be able to talk with this community, although I really miss the yellow house. It is always such an exciting experience to be able to hang out, shoot the shit, and get into these issues in a real way. I feel quite a bit of distance from my communities of researchers because of the pandemic, but luckily I've been able to work with an incredible team at Shorenstein, with a little bit of crossover at Berkman, in order to sustain my intellectual life. One of the things we've been spending, I would say, the last two years of my life on, though different folks have been engaged with it for different amounts of time, is this Media Manipulation Casebook. What we're really trying to do is present a theory-and-methods package for studying media manipulation and disinformation campaigns. Over the last several years I've been really engaged in this specific topic because, ultimately, the question of media manipulation and disinformation for me is really about whether we have a right to the truth, a right to accurate information. As we've watched the last two decades or so of technology develop, we've seen the closure of some of our other trusted institutions: the drying up of local journalism, universities coming to rely on Google services for most of their infrastructure, social media taking hold of the public imagination. None of these things are really designed to deal with what we're going through in this moment, which is profound isolation coupled with immense and warranted paranoia about our political system and economic collapse.
And, you know, I spend a lot of time in my room, just like the rest of you, trying to figure out some of these big questions. So today I really want to talk about the price we're paying for unchecked misinformation in our media ecosystem, to use a turn of phrase from Berkman Klein's illustrious history. What are we doing with this media ecosystem in the midst of a pandemic, knowing that the cost is very high, when people get medical misinformation before they get access to timely, local, relevant, and accurate information? Let me share my screen; we've got some really great graphics that we developed for the Media Manipulation Casebook at mediamanipulation.org. These were developed by Jeb Riley, who is an amazing illustrator and has been helping our team over the past few months get our stuff together. As for who I am: I am currently the research director of the Shorenstein Center and the director of the Technology and Social Change project there. But apparently the Shorenstein Center is just my house at this point. It's really hard to think about what a return to the university is going to feel like, given that we've spent several months working from home. Today I'm going to present on the true costs of misinformation: producing moral and technical order in a time of pandemonium. I chose that word really intentionally, and I'll tell you why, but first a few key definitions. What am I talking about when I talk about media manipulation and disinformation online? This is basic comms 101: media is an artifact of communication, so notes, books, memes, what have you. Artifacts, the things that are the leave-behinds of public conversation, are for the most part what we study. When I talk about news or news outlets, I will use that terminology, news and news outlets.
But when I say media, we're referring to this kind of ephemera. As for manipulation, my team has debated the term quite a bit, and if Gabrielle Lim is listening, she's been at the forefront of getting our definitions in order. Manipulation, to us, is to change by artful or unfair means so as to serve one's purpose. We leave the term "artful" in there because, as I was doing a history of media manipulation on the web, I was drawn into the work of the Yes Men, media activists who figured out early on that apparently anybody can own a domain: they bought the domain associated with the WTO, and they bought a domain associated with George Bush. Instead of lambasting these groups, what they did was impersonate them in order to draw in other folks. There have been several documentaries made about the Yes Men and their media manipulation hoaxes. But for them, the point isn't to keep the hoax going; the point is to reveal something about the nature of power. So manipulation in that case, for me, is a sort of artful hack, and we can talk a bit about white-hat and gray-hat hacking if time allows. But when we call something disinformation, we actually try to apply a very strict set of criteria, and we define it as the creation and distribution of intentionally false information for political ends. And intentionality: for anybody on the line here who is a lawyer, you are just cringing right now, I can kind of feel it through the webinar. Don't worry, I'm not going to talk about the "freeze peach" stuff that just sets your hair on fire. But intention is hard. I'm not going to lie: who can know a man's heart, right?
The issue with intentionality is that for disinformation campaigns to set off a cascade of misinformation, manipulating algorithms in particular, these groups generally have to recruit from more open spaces online. We've seen, for instance, Reddit groups be repurposed to talk openly about the intention to spread disinformation and how to get one over on certain journalists, and so we are able to discover intention when we can discover where a media manipulation campaign is being planned. It's a really hard thing to assess without direct evidence. Nevertheless, when we talk about disinformation, it's because we have some direct evidence that points us to the intention of the campaign. So, I want to recognize that we're in a moment of extreme emotional deprivation and social isolation. The word "pandemic" is something I was really drawn to thinking about, and its etymology: pan, of all the people, public, common, plus demos, the people, together describing a disease that is widespread. These are the kinds of ills that spread in situations that are in some respects completely unpredictable and, hopefully at least for my lifetime, once in a lifetime. I prefer to call this moment pandemonium, and I'll tell you why: pan meaning all, and daimon meaning evil spirit, evil divine power, fury, or divine being. The reason pandemonium is a better descriptor of this moment for me is that I think back to the very beginning, when most people were like, okay, we can handle this, we'll get through this, this isn't going to be a problem; everybody go home from school, we're just going to get on Zoom. And it created quite a chaotic environment.
When we were thinking about how to write about the phenomenon of Zoom-bombing, my co-authors Brian Friedberg and Gabrielle Lim and I were talking a lot about where this opportunity arises from: you have a bunch of people adopting a brand-new technology very quickly, you have institutions buying into it at massive volume, and then you have students who are not really bought into wanting to be on Zoom all day. Some of the early instances of Zoom-bombing that we saw were not what it ended up becoming, which was a kind of political and ideological war of racism and all kinds of other phobias. At the beginning it was a lot of pranking: students dropping links to their classes into chat apps, saying hey, we're going to be talking about this thing, anybody want to come in and pretend to be a student? Students were using their phones to record themselves invading classrooms and then uploading the videos to YouTube, and it was pretty funny, I'm not going to lie. My favorite was this video of a student just interrupting the professor and saying, hey, can I just pay you? My other professors are letting me pay them. I'll give you three grand and we'll call it a day. And the professor is just like, what is going on? Are you even in this class? It's kind of jokey and hoaxy, but it was a way of people coping with the moment, trying to assess what was going on. Then you saw more vicious use cases of Zoom-bombing, where women, and Black women professors in particular, were being targeted; you had instances where LGBT groups were being targeted, and Alcoholics Anonymous. It went from pranking and hoaxing into something much more.
The only word I can describe it with is disgusting. What it prompted, though, was a rapid change in the technology itself. Zoom didn't just change their settings; they really had to interrogate the entire system, even thinking through where their servers are based and what kind of privacy protections they would need to put into place. What's interesting is that because Zoom had a closed, business-to-business model, and wasn't, like social media, just out in the world for all to use, they were able to install these changes without an immense amount of blowback from the public. When we see social media companies try to deal with some of the more terrible use cases, racist use cases, transphobic use cases, things grind to a halt. Over the course of this last summer, even where social media companies were trying to clean up medical misinformation in order to prevent poisonings and people taking unnecessary risks, there was pushback against that as well. So it really has a lot to do with who the customers are and who the technology company thinks they're serving, in terms of how they envision what is possible for the design of their systems. In moments of pandemonium, or what Foucault might call epistemological ruptures, or paradigm shifts, we see technology become much more flexible and malleable to the situation than it may have been in other situations that didn't feel as critical as this one does right now. So what you're living through in this moment is a really rapid succession of technological changes, where many of us wake up every day and go, wait, what happened? They did what? Didn't they say they were going to do this other thing? And so we as a research team have to try to reckon with this.
We also have to think about how, methodologically, we capture this, and, theoretically, how we know what to look for. I always turn back to the work of Chris Kelty, who was my postdoc supervisor and is an anthropologist and information studies scholar at UCLA. He wrote a book in 2008 called Two Bits, which is also indebted to the work of Elinor Ostrom on governing the commons. He talks a lot about how to produce moral and technical order, and he is studying free software. Thinking with his framework, I'm really drawn in by this quote: "Geeks fashion together both technology, principally software, hardware, networks, and protocols, and an imagination of the proper order of collective political and commercial action; that is, of how economy and society should be ordered collectively." What he's really trying to say is that the way our technology is built encodes a vision of society and the economy, and in building software in this way, we end up, recursively, with a society that in some ways mirrors that technology, while in other ways the technology really is distorted by the conditions of its production. Thinking through that, I wrote a paper with Anthony Nadler and Matt Crain, "Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech," about a year and a half ago. We came up with this concept of the digital influence machine, which is the infrastructure of data collection and targeting capacities developed by ad platforms, web publishers, and other intermediaries, and which includes consumer monitoring, audience targeting, and automated technologies that enhance its reach and ultimately its power to influence.
So instead of thinking about just social media, we're thinking about the architecture of advertising that spreads across the web and social media as a way to understand how the web, or the internet, reflects its vision of society. How is that infrastructure, specifically the reciprocity between data collection and the circulation of information through targeting, encoded, and what forms of power are then able to leverage that digital influence machine in order to produce, let's just call it, social change? Power is something that sociologists, comms scholars, every one of us on this webinar today, I think, are really interested in understanding, because I'm not making the case that every instance of misuse of technology is the fault of any single actor. What we're actually trying to understand is: as this technology scales, as it develops, what kind of social imagination is animating design decisions, and who can either purchase power in this system or wield it by virtue of having very, very large social networks? We're not necessarily interested in all misinformation, or all instances of bad behavior online. What we're interested in is how certain kinds of behaviors scale: how do people learn about this kind of power, how do they adopt it, and how do they wield it against a society that is using the internet by and large for entertainment, to learn about things, to read the news, for their education? A lot of things now pass through the internet as a kind of obligatory passage point. But in digitizing most of our lives, and now even most of our daily lives during the pandemic, what kinds of differences in power are manifesting themselves, and to what ends are we as a collective asked to shoulder the burden, to pay the price, for the production of this particular kind of moral and technical order?
So I asked myself: if we're in this situation, and it's now, I would say, easier than ever to conduct propaganda campaigns, to hoax the public, to perform different kinds of grifts, what does that cost us? I'm drawn to thinking about how, at the beginning of the pandemic, there were over 100,000 new domains registered with COVID-19 or coronavirus as part of the domain address, part of the URL. In what ways are we paying for this kind of media ecosystem, this information environment, that ultimately doesn't seem to be serving our broadest public interest, which for me, at this stage of the pandemic at least, is being able to access accurate information? I'm thinking a bit through the lens of Siva Vaidhyanathan's book Antisocial Media about who pays for social media. The adage, of course, is that if the product is free, you are the product. But the product actually isn't free: advertisers are the ones that pay for it, and then you are the consumer of advertising through social media. Zuckerberg said this really interesting thing during his Georgetown speech: "I don't think it's right for a private company to censor politicians in a democracy." And I thought, yeah, I can agree with that; private companies shouldn't be censoring politicians in a democracy. But it also seems like a platitude. It resonates, but it hits different when you start to think about what they mean by censoring politicians, and what you do when you create the conditions by which politicians, or any old person, can speak to millions at scale. What happens when you are not accounting for the fact that you have built a broadcast technology that is allowing for misinformation at scale?
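As an aside on that domain statistic: researchers often surface pandemic-themed registrations with a simple keyword filter over a feed of newly registered domain names. Here is a minimal sketch of that idea; the feed contents, keyword list, and function name are illustrative assumptions, not anything described in the talk.

```python
# Hypothetical sketch: flag pandemic-themed names in a daily feed of
# newly registered domains. Real research pipelines use commercial
# zone-file feeds; this list is a stand-in for illustration.

KEYWORDS = ("covid", "covid19", "covid-19", "coronavirus")

def flag_pandemic_domains(new_domains):
    """Return the subset of domains whose name contains a pandemic keyword."""
    flagged = []
    for domain in new_domains:
        name = domain.lower()
        if any(keyword in name for keyword in KEYWORDS):
            flagged.append(domain)
    return flagged

feed = [
    "covid19-test-kits.example",
    "local-bakery.example",
    "coronavirus-cures.example",
]
print(flag_pandemic_domains(feed))
# → ['covid19-test-kits.example', 'coronavirus-cures.example']
```

A filter like this only surfaces candidates for human review; it says nothing about whether a flagged domain is benign or part of a scam or disinformation campaign.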
That was my initial thought, and one of the things I was drawn into in early January 2020 was a bit of a reversal in Facebook policy, where they wrote: "In the absence of regulation, Facebook and other companies are left to design their own policies. We have based ours on the principle that people should be able to hear from those who wish to lead them, warts and all." That phrase, "warts and all," made me wonder how they were really going to reckon with the way different politicians are using their system. Not just the "organic," quote unquote (we could get into a whole discussion about why that metaphor is wrong). What are they actually trying to get at when they say "warts and all" about politicians who are using both their advertising systems and other forms of social media marketing to essentially delude the public? Not just to put forward a political position, but to gin up all kinds of suspicion. Of course, this is before the pandemic really takes root, but this seemed to be Facebook's reaction to the situation we're in: basically, well, if you don't regulate us, we're just going to have to let it happen. Then the pandemic hits, and Facebook realizes they have become a central figure not only in medical misinformation campaigns, but also in the effort, as Yochai Benkler and Rob Faris's group have shown, to make people believe that mail-in voting is insanely corrupt. Eventually Facebook does have to change their policies on political advertising, because they realize that at scale, it's different. Scale is different. You know, Clay Shirky often talks about how more is different, and in sociology we don't understand ourselves as psychologists, because we know that more is different: society is actually different.
So when you're dealing with misinformation at scale, the people who pay the price don't tend to be the companies at all, but really end up being the information consumers, let's say. How do we study misinformation at scale? How do we make sense of it? The new website is up now, and what we do is put together a theory of the media manipulation life cycle: if you want to study these things, we recommend that you look for essentially five points of action. First, where is the manipulation campaign being planned; what are its origins? Second, how does it spread across social platforms and the web? Third, what are the responses by industry, activists, politicians, and journalists? This is crucial: if nobody responds to your misinformation campaign, not much happens. In 2016, 2017, and even earlier, Whitney Phillips's work pointed us to the fact that this kind of media hoaxing, performed by trolls and other folks through 4chan, had become a bit of a game: try to get journalists to say the wrong thing, try to get politicians or other folks to chime in, to quote unquote "trigger the libs." So when we're thinking about how a media manipulation campaign is going to succeed, we're actually trying to understand who is going to respond, because there are going to be so many, let's just call them shenanigans, online that it would be impossible to moderate them all at scale. Fourth, we look very closely at mitigations. In 2016 and 2017, the big tools of social media companies were content moderation, including takedowns and the removal of certain accounts. They weren't really in the business then of demoting content; there was some removal of monetization, but over the last year in particular we've seen a number of different ways in which platform companies are willing to do moderation and curation of content online. So stage four has expanded, but without any transparency.
We're often backing into problems of content moderation: as we try to understand the scale and scope of a media manipulation campaign, we often run into moderation that has not been recorded in any public way. The last thing we look at, fifth, is the adjustments by the manipulators to the new information environment: if some action is taken, or if they enjoy some success, we will see that campaign happen again and again and again. If we can answer the question of who pays for social media, and we know that advertisers are pumping money into it and that the public is by and large the crop being harvested, then we have to think about this category of misinformation a little differently. We have to ask who is actually called into service to mitigate misinformation at scale. You can suffer through the open hearing that I was part of; it is on YouTube, which, not ironically, is where they air the Select Committee on Intelligence hearings. It was a hearing on misinformation, conspiracy theories, and infodemics, and I'm going to talk a little bit about what I presented there and how I think about it in relation to the Media Manipulation Casebook. When we're thinking about who pays for misinformation, I'm really thinking about several categories of professionals who have now started to build their livelihoods and careers around handling misinformation at scale. Journalists: we know the story, and many of us are probably involved in it, where our role is one of finding misinformation online, or disinformation campaigns, and then ringing the alarm bell and trying to get platform companies to go on record.
This is a new beat for journalists. Over the last several years, journalists have honed skills that are unlike any other, using open-source intelligence as well as other forms of online investigation, even digital ethnography, to get to the bottom of misinformation campaigns. Public health professionals: I have talked to more public health professionals over the last few months than in the rest of my lifetime, and they are just deluged by people who are scared, people who are confused, and people who want to know more about COVID-19 because they saw online that 5G is causing coronavirus and therefore no vaccine will work, or they saw online that Bill Gates actually unleashed this on the world, so we just need to put Bill Gates in jail and the rest of this will end. Public health professionals have been called in to do quite a bit of misinformation wrangling. Civil society leaders: I've been working with a group of folks through the Disinformation Defense League for the past several months who are really focused on get-out-the-vote campaigns. Their campaigns aren't just about letting people know when, where, and how to vote, but about trying to get people to understand that voting is not imperiled, that mail-in voting and other forms of voting are not imperiled, and that the machines aren't rigged. This is the role civil society has had to take on as a result of misinformation at scale. And lastly, law enforcement, public servants, and election officials, probably best exemplified by the recent controversy around the website operated by CISA, which Chris Krebs was at the helm of until he was fired via Twitter.
These are the kinds of folks who are fielding questions about voter fraud and misinformation, and they don't have budgets for that. Nobody said, oh yeah, you know what, we also need a huge misinformation budget so that election officials can let people know that their votes were counted, this is how the machines they used work, and this is how we verified signatures. Some of us anticipated it, but for the most part we didn't anticipate the scale at which this would be occurring. We talk about this on our website, where we have 14 or so case studies up, with about another nine going up by end of year. One case in particular is about the Ukraine whistleblower. We were looking at the different kinds of media manipulation tactics that were used, and journalists by and large had to navigate a very twisted terrain, trying to cover the Ukraine story without saying the name of the whistleblower, which was traveling at very high volumes throughout the right-wing media ecosystem. There were a lot of attempts to get center and left media to say this person's name, and for the most part they resisted: they didn't cover the disinformation campaign, because of the role and status that whistleblowers historically have in our society, which is that we should be protecting their anonymity. Then there's the Plandemic documentary and the way it was planted online. They knew it was going to violate the terms of service on every platform, so they set up a website where you could engage with the content, watch the content, but then also reupload it, and they had a set of instructions that basically said: download this and reupload it to your own social media.
This happened thousands and thousands of times, so it was really difficult to get that video removed from the internet, or at least from prominent platforms; it's still on the internet. This tactic of distributed amplification is something we've seen before, but we still don't have a lot of great solutions to it, particularly when it comes to dealing with medical misinformation. Then there are viral slogans and the ways civil society has had to deal with white supremacist and extremist speech online. There's a case study about the slogan "it's okay to be white" and the way it moved from flyers planted on college campuses with a very plain message. There was no indication of who was doing this, other than, if you knew where to look on 4chan, you knew it was a campaign by white supremacists and trolls that basically asked people to put up this flyer, which just says "it's okay to be white," in public places. In Massachusetts, someone actually put up a banner over a highway, trying to get the media and other folks to take pictures of it. This kind of viral sloganeering is something civil society organizations have really had to reckon with, and call attention to, so that people understand that every instance of this slogan is meant to create the conditions by which people would discuss racism and race, but through the lens of whiteness, and to try to normalize discourse about white identity. Another of the case studies we have up there is about a forged Maxine Waters letter. There's a lot in the digital forensics that makes you realize very quickly that this is a forgery, but it was a letter written as if by Maxine Waters, quote unquote, to a bank, basically saying: if you donate a million dollars to me, I will bring, I think the figure was, 38,000 immigrants who are all going to need mortgages to this area, so if I win, then we all win, kind of thing. The letter was planted by her opponents and then promoted online through the use of bots and other kinds of automated technologies, but it actually ended up with the FBI having to get involved. We've seen numerous instances now, especially during the pandemic, where law enforcement is being called up with these rumors and asked, will you step in and deal with antifa setting fires in my neighborhood? And law enforcement are like, where is this coming from? Why are we now being asked to deal with misinformation at scale and these kinds of rumors? And of course election officials, as I mentioned earlier, are also being called into the fray. I wrote a piece in MIT Tech Review on October 5 of this year (still this year; yeah, it's December 1, rabbit rabbit) called "Thank You for Posting," and I make this argument: like secondhand smoke, misinformation damages the quality of public life. Every conspiracy theory, every propaganda or disinformation campaign, affects people, and the expense of not responding can grow exponentially over time. Since the 2016 US election, newsrooms, technology companies, civil society organizations, politicians, educators, and researchers have been working to quarantine the viral spread of misinformation. The true costs have been passed on to them and to everyday folks who rely on social media to get news and information. So, to try to restore moral and technical order: I've got lists all over the place of things I think we could do, but I'd love to discuss some of these with you. I think we need a really good plan for content creation, coupled with transparency in content moderation.
I've argued in the past for hiring 10,000 librarians to help Google and Facebook and Twitter. They can help sort out the curation problem so that when people are looking for accurate information, and not for opinion, they can find it. If you think about Google search results, the things that become popular are the things that are free; anything behind a paywall is not something people are going to continue to return to. As a result, Google search takes on the quality of a free bin outside a record store: every once in a while there's a gem at the top, but not usually. We also need a distribution plan for the truth that supports public media, where social media companies must deliver timely, local, relevant, and accurate information. This happened with pandemic misinformation, where there are lots of these yellow banners showing up on websites. That's not a plan; that's a sticky note. So we need something else. We need to develop a policy on strategic amplification that mirrors the public interest obligations of other broadcast companies. When we think about broadcast and radio, if something is reaching 50,000 people, we have rules for that. So when something is routinely reaching a certain number of views, or clicks, or people, especially if it's coming from a certain influencer,
there has to be some kind of measure that helps us understand, when misinformation or hate speech or incitement to violence is circulating at epic volumes, what protocol should exist across social media sites. And then, something that fell out of view but is still very important: technology companies, including large infrastructure services, must fund independent civil rights audits, where auditors are able to access the data needed to perform investigations, including a record of decisions to remove, monetize, or amplify content. We need much more transparency, and this might even come in the form of an agency that can deal with it. So these are just four of the ideas that I came up with off the top of my head, wrote on the back of a napkin, and didn't give much thought to. I'm kidding. I've spent my whole life steeped in this nonsense. The only thing that sustains me through it, I think, is knowing that there are people like the good folks at Berkman Klein who want to deal with these problems, and want to deal with them responsibly, but who also understand that these problems affect different groups of people disproportionately. We've got a really great white paper up on mediamanipulation.org from Brandi Collins-Dexter, specifically about how COVID-19 misinformation manifests in Black communities online. So, as we deal with the pandemic, and as we deal with questions of moral and technical order, we're really striving to answer: do we have a right to the truth? And if so, how do we get there? That's the question that's been my pain for many, many years now. But I'm hoping that through the next several years of an administration that is potentially, I don't even know, sympathetic to dealing with the harassment of women, with the ways in which certain communities are underserved online, particularly Black communities, and with the kinds of misinformation that are pervading Latino communities as well,
So we will get somewhere on some of these issues, but it's going to be a very, very long process. I just want to thank my team and the folks who help me think through these problems every day. And we're now open for questions. Thank you so much, Joan. We definitely have a bunch of questions coming in, and we'll start with the ones getting additional thumbs up, as those seem to be the most compelling to people. One is from Madeline Miller: as a student currently doing a professional degree in library and information science, what can I do to be part of a team of 10,000 librarians working on community misinformation? Yeah, I would love for the ALA to step in and really create a program, at conferences and elsewhere, that allows this kind of thing to develop. As well, there have been different efforts to build a digital public library, but I do think we need more librarians' voices embedded in industry. And it pains me to say that, because ideally, as the utopian that I am, we would build a broad public infrastructure that deals with these problems, and I'm very excited for Ethan Zuckerman's new lab at UMass to take on some of these issues, but for now, we have what we have. And we do need folks to start to think about: if we were going to take, let's say, 20 consistent disinformation trends and deal with them specifically, how would we reformat search so that the first three to five things people see are actually facts that have been vetted? Francesca Tripodi has this really great report at Data & Society, Searching for Alternative Facts. And one of the things she discovered in her ethnography was that a lot of people believe that the thing they see first on Google has been fact-checked in some way, or vetted in some way, or is the truest thing, and it's not.
And so, I think there's a lot of work to be done by librarians to sort out our information ecosystem and to build good models for how we would advance a knowledge-based infrastructure, rather than the popularity or pay-to-play infrastructure we have now. All right, thank you. So this is a question that came up early, and one that often comes up in this realm: how does your definition of disinformation differ from propaganda? And I might add to that: how is this challenging, given that those are probably, for lack of a better word, unstable categories across the many different entities we're looking at here? Yeah, yeah. Well, the reason disinformation is actually an interesting category to work with is because of how it shows up in American discourse. Prior to 2016, it's treated as something uniquely Russian, in the sense that these are the kinds of campaigns associated with Russian political tactics and information warfare. But as the US starts to figure out that there's disinformation happening here, you get this discourse of fake news, and Claire Wardle did a lot of work to try to tell people not to use that word because it was playing into political divides. I'll give you one anecdote about why fake news was so treacherous to work with: when you would talk with designers or technologists at these social media companies, they would just say, well, fake news, real news, what's the difference? It's information. They didn't understand what we were actually talking about, which is to say they were using these popular definitions, and Trump had obviously made an enemy out of the New York Times and CNN by then by saying they were fake news.
But what we were actually talking about was something Craig Silverman had looked into when he was, I think, a Nieman fellow: these cheap knockoff websites that were made to look like news but were really just about clickbait, and would come up with any old headline that would make you want to click through to the content. So fake news, for us, was a technological problem brought about by the monetization of advertising, where you have these fake websites; but then the politicization of that term made it seem like, when you're talking about fake news, you're talking about a political category. And so, when disinformation started to become something you could talk about without it being aligned only with discussions of Russia, it felt like a better fit than the fake news discourse in particular. But then on top of that, when Network Propaganda came out, propaganda got put back on the map in a serious way, as a lens on the phenomenon of media elites who were using both the online and the broadcast environment to create a zone of information that was politically motivated. And so when I think about propaganda, I'm thinking specifically about the way that book positions networked propaganda as a kind of tool of media elites; but with disinformation, as a research team we're really trying to look at the incentivization, and we deal a lot with fringe groups, not necessarily elites.
Of course, we're avid watchers of Tucker Carlson, but insofar as he's like the shit filter: if things make it as far as Tucker Carlson, then there's probably much more of a trail we can look at online. So sometimes he'll start talking about something and we don't really understand where it came from, and then when we go back online we find there's quite a bit of discourse along the lines of, wouldn't it be funny if people believed this about antifa. So yeah, I know it's not a clean answer, but that's how we arrived at where we are. Awesome, thank you. To follow on that, particularly as you start to talk about the media and those distinctions, the next question is: have any social media companies expressed any willingness to work with groups like yours on squashing this misinformation on their platforms? I'm thinking about that particularly as you were describing the fringe groups, how they take advantage of the platforms, and how the media companies are often profiting from that. Yeah, I mean, we'll take meetings with anybody; we just won't take their money or data. Right? My team especially has a pretty strict rule about how we get our data and the way in which we engage with platform companies, or any company. We will take meetings literally with anyone, because for us it's not about them getting us to see it their way. Education, when done well, means we all arrive at a shared definition of the problem. And so for us, engagements with different companies really are about showing them something they couldn't see, because they were stuck in a specific mode of thinking.
So, for example, if we think about all the different ways research could have been done about disinformation and the elections, I think the approach of the Berkman Klein team, looking very deeply at mail-in voter fraud claims using Media Cloud, was the right approach. They didn't go out and solicit a bunch of information and funding from platform companies that would mire them in questions about their allegiances; they just did what they knew how to do. And they, like many of us who remain fairly independent, were able to see that the question wasn't really about how many misinformation campaigns we were going to see, big or small. It was really about having the best knowledge we could of the ones that looked likely to have the most impact, because they had a couple of signature aspects: media elites were picking them up, political elites were pushing them, and they were forcing a public conversation that wouldn't have happened if not for the design of our media ecosystem operating this way. One of the things I don't think any of us could have predicted, though, was the reaction of large-scale media corporations like the New York Times or the Washington Post, or even the Wall Street Journal, in choosing not to pick up that political propaganda and parrot it back out, especially during the Biden laptop story. And I think that when we as research teams engage with industry, it's really crucially important, and I can't underscore this enough, to go with the method that you know, the mode of analysis that is most authentic to what you're going to study. For us that's mostly qualitative digital ethnography: we watch the content, we understand who the players are, we understand the scene, and we take that pretty seriously.
And so, as a result, we don't get stuck in questions about whether this had any impact or that did something else. What we can show very cleanly is the progression of the facts. We can show empirically how these things scaled, and then we can look at the kinds of reception and mitigation attempts that platform companies have made, and then we can evaluate those based on whether manipulators choose to abandon that project or choose another route. And that's different from other research houses that do very similar kinds of things. I think journalists face a very similar challenge, which is that none of us want to get stuck in the role of becoming an arm of the industry, just doing content moderation. That's not the interesting piece of the puzzle for me. The interesting piece of the puzzle, and this is probably just because I'm a nerd, is: how do we make sure people can access accurate information and make decisions about their lives based on that information? Also, I want to have a little bit of fun; I enjoy a good prank and I enjoy a good hoax. But I think that, by and large, when we're dealing with disinformation and misinformation at scale, it's a feature of these industry practices, and therefore we can't assume the industry is going to be able to see itself for what it is. I think we have time for maybe one more question. Exactly what I was thinking. So this one ties well with the last one and is a good wrap-up question, from Charmian White: can you elaborate on the concept of a distribution plan for the truth? How is it possible for social media companies to deliver timely and accurate information when communications on them are instantaneous, in real time, and the number of contributors to these networks is ever expanding?
It's a hard problem, and it needs more research. That's why I really value the work of librarians in thinking through the kinds of taxonomies we're going to need and the ways we might want to hold out certain categories. Here I think of Deirdre Mulligan's work on rescripting search; she's got this beautiful paper about whether we have a right to the truth. What happens when you Google "did the Holocaust happen"? Back when she was writing her paper, terrible things happened: when you Googled "did the Holocaust happen," you were brought directly into antisemitic groups who would post Holocaust-denial and antisemitic content literally every single day. And this is an experience I had as a researcher looking at white supremacist use of DNA ancestry tests. It wasn't the case that when you looked up certain kinds of white supremacist claims, you were given information about white supremacist groups and why they were bad. The SPLC does a really good job of tracking that; it just wasn't rising to the top of Google. What was rising were white supremacist groups like Stormfront. And so, for me, it was really important to think through those questions of what you get when you search for X, what you get when you search for Y, and how those algorithms reinforce that. danah boyd and I wrote about self-harm: for a while, if you were to look up how to injure yourself, you wouldn't just get tutorials, you would also be reminded that you had searched for that, over and over and over again, on YouTube and Instagram and other places that didn't really have great restrictions on that kind of content. And this goes back to early discussions about Tumblr and pro-anorexia blogs.
And so, now is the point to understand that we've reached a kind of critical mass with social media, and dealing with information at scale is just profoundly different from dealing with rumors or hoaxes that stay local. Because social media companies have focused for so many years on increasing scale and increasing information velocity, we now have a bunch of different professionals dealing with it in really slapdash ways: journalists have had to take up the problem of media manipulation because they have been targeted by it so much; election officials have had to deal with it this year. It's beyond a quick technological fix. We actually need a pretty robust program to deal with the curation problem, so that when you do search for how to vote, there's a lot of information about what's particular to your area. And it's only recently, I cannot express to you enough how recent it is, that these companies have been willing to make those changes. If you had asked me in 2016 whether we were going to get any traction on dealing with these white supremacists who were organizing online, rallying at Trump rallies, meeting one another, growing their ranks, expanding their podcast networks, I would have said no, absolutely not; these companies are completely unable to face what they have built, because they didn't think about the negative use cases, they didn't think about how fringe groups would rise to the top and have an incredibly outsized impact on our culture. The question of propaganda then comes into full view as Trump, pardon the meme, assumes his final form as the president who is trying to defend himself from a lost election.
And this has thrown everybody in this field back on their heels, trying to map and understand what the real problem is. I was talking with Jane Lytvynenko at BuzzFeed, and she was saying: I did 44 debunks in two weeks, and maybe it's not working anymore; you can try to debunk everything, but it's just not going to hold. The gates are broken. And so we need more thinking, of course, and, as a nerd, I'd say more research. But I do think the possibilities for solving this problem lie between these different professional barriers, and that's why I believe in a multidisciplinary approach, where we try to get everybody's concerns on the table, try to understand how to navigate that, and then, with a little bit of help from groups like Jonathan Zittrain's Assembly project, we can start to have those deeper conversations that ask: how do we get beyond misinformation at scale? Whose responsibility is it, and what is the cost? How do we end up with the internet that we want, rather than the thing that we have, which currently isn't working? The public health implications are where my mind is at right now, because ultimately this kind of misinformation at scale politicizes medical information, around masks and more. This is gory stuff, because we're going to look back on this moment historically, and every one of us is going to wonder: did I do enough, and how could I have done better?
And that's why I think it falls to all of us to try as much as we can to get involved and to think through these problems. Our method is thinking with case studies, because we want to think about things in depth and then abstract higher-order definitions and principles from them. So that's where my mind's at, but I appreciate everybody tuning in. We're going to have misinformation trainings every month, starting in January, and we're going to have lots of opportunities for people to write for us as well, so I'm really looking forward to that as the next iteration of this project.