Great, so we saw several really interesting case studies. And I think one of the things we see across them is, again, this idea of what do people know? People often spread misinformation not because they want to; it's just that they mistakenly believe something. And a hypothesis for why the corrections don't spread as much: one, a correction is not as exciting, right? And two, it's a little embarrassing to admit that you got it wrong the first time. So are you going to correct something that you tweeted earlier that turns out to be wrong? I don't know. Maybe you don't want to call attention to that. Anyway, I think part of what ties these cases together is this idea, again, back to what I like to think about as skill: what do people really know? And in particular, I think, this idea that I heard from Ted Street specifically about algorithm literacy, right? Because some of the issue, for example, about trending topics and why something trends, is very much about algorithms and understanding these systems at a deeper level, which most people don't. So yet again we have this issue of lack of skill in an internet-related domain. But how do we get people to be more skilled? Partly we can build better tools, but partly we also need to recognize that they need training. And one of my big challenges, and just a question I want to throw out there, though obviously we welcome all sorts of comments, is: where do we actually teach people how to be better at using the internet? And what's a good point of intervention? Because I think that's a huge challenge, and I'd love to hear people's thoughts on that, either in the group or later, if you want to come chat with me about it. But I think at this point we'll open it up and hear reactions to the talks.

Yeah, my name is David Skok. I'm a Nieman Fellow here.
Just to touch on your point, and it really speaks to something that's been resonating with me throughout all of this: oftentimes what we're seeing, especially in Gilad's work, is that journalists are the ones amplifying the message, whether it's true or false. And that's what then feeds people to re-amplify and spread it even further. So it's not glamorous, but I think one of the things we need to do as a collective is provide journalists with the training they need, at a basic level, to understand how the internet is different, and also to provide them with the tools that I'm sure the next panel will get into.

Hi, George Coe, Harvard College and politiscape.com. So just to touch on your point, in our ES21 class, one thing we looked at was how the design of the tool you create can help people explore intuitively. If you log on to our alpha site, dev.politiscape.com, you can see a really simple tool we created using the same kind of data clustering that Professor Resnick did. And we found that just handing people our development website on an iPad, the simple act of touching and seeing the bars move apart, gets people excited about exploration from a design standpoint. So maybe one of the things to tackle on the hack day tomorrow is the design aspect, maybe with the IDEO Human-Centered Design Toolkit, to figure out how we can use an intuitive, easy-to-use design to motivate people to explore.

Carla. Carla Engle from Progressive Strategies. I wanted to thank you guys for changing the conversation this afternoon, at least a little bit, about truthiness: not just discrediting misinformation, which is obviously a big problem, that there's a lot of misinformation out there, but also building a positive truth platform. That's what the internet can be used for.
So, it was one of the things I was going to say before we started this section, and I'm really excited that we moved in that direction. And I wanted to expand on that, as one of the things to throw to the hacks for tomorrow to think about: how do we build that truth platform? How do we amplify it? And how do we reach outside of our sectors? It was identified today that only certain types of people go to factcheck.org and other fact-checking sites, and that only certain people want to learn whether they're right or wrong. And those aren't the people in this room. Those are the people who are, and I'm sorry, I'm going to take it political, that's what I do, watching Fox News and taking that information, sucking it in as golden, and then repeating it, retweeting it, putting it on Facebook and all of those things. So one of those ideas, and I'd like to hear the hacks look at this, is how do we create that librarian for the internet? Christian, I think that was you, or Esther, who brought that up. Is it a Yelp for the internet, where people are rating truthfulness? When they Google something, is it something like that? That would be a tool for using the internet more effectively, for discrediting information before you read too much of it. And it's a first stab at the self-selecting problem, where we look at our Google and our Bing and all the other search engine options, and we choose what we read. Well, if we saw that some things had already been fact-checked for us, or rated in some way, that might change how we do that selection.

Can I respond to that, actually? Yeah, and then I'd like to too. Sure, yeah. I just wanted to tie that to the last commenter, whose name I've lost now, who drew our attention to the media as a very important intermediary, and your comment really also proposes that we should give more tools to people, in part this Yelp idea that you have.
It seems like there's another intermediary that we haven't talked about that much, and Gilad's talk makes me think about Twitter itself as an intermediary, along with Esther's point about algorithm literacy. Could we see tools like the ones we just saw on this panel directed not necessarily at the media, although that is important, but also at, say, search engine intermediaries, to try to empower users in that regard? Because it doesn't seem like that's where this conversation has really gone, and yet it's very important. I'm thinking here of Ben Edelman's work. A couple weeks ago, he found that in order to comply with the Federal Trade Commission rule that ads be labeled, real estate search engines had decided to use the label "best match" for their ads. So, I mean, it's not a great label for an ad, right? So, okay.

I just also wanted to comment. One thing I find fascinating, for example, with the MartinLutherKing.org example, is that after all these years, Google still, I mean, we don't know their secret sauce, right? But it seems that they still continue to treat links as more or less equal recommendations. The only reason, it seems, that MartinLutherKing.org still comes up fairly high is that now a lot of websites point to it as an example of a bad site that's misinforming people, right? So why has that code not evolved to understand that not every link is a recommendation? It seems like there's lots of room for improvement at the coding level. I completely appreciate that it's a very complex problem, but it's interesting that it hasn't changed a lot.

PolitiFact. Just wanted to follow up on a couple things that were said. First of all, getting to the idea of things that are true: one flaw, I think, in a lot of fact-checking is that we focus too much on just debunking falsehoods, when I think we should be focusing on answering people's curiosity.
And so one of the things we do at PolitiFact is, if we're curious about something, or our readers suggest something to us, we'll fact-check it, and if we find it's true, we still publish a fact check about it. Some of our colleagues, whom we respect a lot in our business, don't take that same approach. And I think it's important to do, because otherwise you give this sort of distorted picture that everything's wrong. I think our mission as journalists is to answer people's curiosity: if people hear a political claim and wonder, is that true, we should be answering that. And if it is true, it is. The other thing I wanted to follow up on, and I think this was Paul's point, is trying to speed up the way we get suggestions for fact checks. We're working on that, and there are some tricks to it. The New York Times has a pretty cool tool they use on debate nights, where you can tweet a suggestion to them and then readers vote it up or down. And that's cool, except, as I found at one debate recently, there was one claim that had been voted up by a bunch of New York Times readers, and I thought, man, we've got to fact-check that one. That's great. And I kept searching and searching and searching, and it had never been said in the debate. So there is still the need for a journalist to come in and really assess these things. But while we do want to speed up those tools, we find that the most effective one is actually email. We check our email constantly throughout the day, readers give us a lot of suggestions, and about one third of the fact checks we publish on PolitiFact come from reader suggestions.

Thank you, Phil. So I'm excited about the call for building tools that aid users in various ways. And since we have here two representatives of fact-checking organizations, if I may make a nerd comment, maybe this is for tomorrow's session; I'm sorry, I can't be there.
But what if, for the fact-checking news organizations, experts, reporters, et cetera, there were some kind of API that allowed us to build tools on top of the information they publish? Some of us would like to build tools that latch onto PolitiFact and many others, so we could use them to spread that information. We could use them to match it against misinformation that we detect on social media, on Twitter. We could use them to make crowdsourcing easier, so that people could match things they observe to things that have been checked. So it seems like a bunch of opportunities would open up, and this is a call to action to make it easy to programmatically access fact-checking information.

Yeah, sorry, yeah, go ahead. Sorry, John Dunbar, the Center for Public Integrity. I just had a couple of broad thoughts I wanted to get out before this was over. First, speed kills accuracy. I've given up on being first. Working at a wire service, you used to be first. Now Twitter's first. I can't fact-check Twitter. I can't fact-check somebody's tweet. And then the media picks up on the tweets. So what are you going to do about that? Check out the tweet, and then still not get credit for breaking the story? The other thing that has struck me is that policing truth, and that may not be what we're trying to do here, but policing truth is like holding back the ocean. We're surrounded by misinformation, and it's not just the internet, it's everywhere. And that's not to mention the 40% of households in the US that don't even have an internet connection. The other thing that struck me is that a lot of the fact-checking movement has been reactive by nature. Somebody puts a lie out there and you feel the urge to correct it. I don't feel that urge so much anymore.
I think that over the years of being spun, I look more at the Rick Bermans of the world, and this guy has absolutely been outed for years; a lot of people know about him. There's the issue of intent, and the issue of triage. It's like fact triage. If somebody gets a report wrong that the NYPD had declared a no-fly zone over a demonstration, I'm not going to put a lot of time into that. But if somebody pushes a propaganda campaign on an issue that's going to affect every man, woman, and child in America, that's probably a high priority. And some of it is what I would call pre-propaganda: finding out what somebody's agenda is, educating yourself on it, and getting the facts out on the issue before somebody forces the issue. And that's all probably pie in the sky. I just wanted to get those out there. Thank you.

Thank you. I'm taking mental note of who's raised their hand, so we have at least four or five people. Yes.

Hi, I'm Nick Diakopoulos. So in terms of tools, I just kind of want to go back to Urs's brilliant typology from earlier today and talk about the psychological-factors approach to tools. I'm not a cognitive psychologist, I'm a computer scientist by training, but in some of my reading I've learned about the elaboration likelihood model of cognitive processing: the idea that you can have central-route processing, very deep processing of media information, or more peripheral-route processing, which relies on surface credibility cues. And what we know from that research is that the more deep processing someone does, the more carefully someone reads into something, the less likely they are to be influenced by credibility cues. So I'm wondering, from a tools point of view, what could be done to engender more central-route processing? How do you get people to actually be interested and find something relevant?
Because if you can do that, then I think they'll approach it in such a way as to be more critical of it, and perhaps to have a better understanding of the material and its quality. I hope someone's keeping track of all the great questions people are bringing up. Gilad?

Yeah, I just wanted to respond to the algorithmic stuff that was said before. I think it's really interesting, because the public sort of thinks that algorithms are neutral: it's these classification systems, it's math, it's not somebody's opinion. But the fact is that engineers build specific algorithms into our systems that then recommend what we should see, right? So it's the power that goes into these algorithms that's interesting. And the question is: instead of optimizing for hot new sizzling content that'll grab our attention, which is what the current state-of-the-art algorithms do, can we optimize for an informed public? What would that algorithm look like, right? And can we sit together and actually figure out what that would mean? Because I have no idea what that would mean. I think.

We mapped the deceptions across the 2004 and 2008 campaigns from factcheck.org onto a national survey, then looked at Fox-reliant as opposed to other-reliant viewers to determine whether they had accepted the deceptions on their own ideological side and on the other ideological side. And when facts were contested, we found it was equally likely for those enclaved in Fox to believe the deceptions on the Republican side as it was for those in the CNN enclave in 2004 and the MSNBC enclave in 2008. We didn't see a significant difference in the likelihood with which they would question the materials being disseminated from the other side. So I don't know, objectively, how much more is out there in one media environment than the other, but when you look at the audience response to what's there, as mapped against the fact-checking, there isn't a difference.
Fox-reliant versus CNN-reliant in 2004, versus MSNBC-reliant in 2008. Thank you.

I'm curious whether the topic of Google bombing has been studied by anybody in the room, or talked about. That's actually something I've been fairly involved in on the progressive side of things, and I imagine that the Martin Luther King site you talked about may have originally risen to the top because of some campaign by conservatives to make it rise to the top. For those who aren't familiar, the idea of Google bombing is for bloggers or other folks with the ability to move big numbers to Google bomb certain sites so that they rise to the top over others. And it's at this point a fairly common campaign technique; we do it a lot with candidates, for example. If somebody's lying about something and we want something more factual to rank higher, you get people to Google the thing that's more factual. It's a fairly common technique, and I'm wondering whether it's been studied by folks who...

So I actually edited a special issue of the Journal of Computer-Mediated Communication in 2007 that did have some work in it on that. I think Google has responded to that much more in the last five years, so I don't know if it's as effective. I would also add that I don't know if it's completely fair to conservatives in general to suggest that they would Google bomb the MartinLutherKing.org site. I think that's really a white supremacist, really far end of the spectrum community that would be promoting that site; that's what I've seen just from looking at their linking.

Aaron, you had your hand up. Did you still... You had your hand up. Yeah, so in trying to think about tools to deal with this stuff, I think it would actually be useful to think about the difference between, say, me and Rick Berman, or between Kai over at Colorlines and Fox News, because I've been called a propagandist.
People look at Streetsblog and they say, I mean, literally in legal documents I'm referred to as "the radical experimental bike lane lobbyist" in a lawsuit over a bike lane. And I think there's a substantial difference between... For a while I would try to argue that: I'm nothing like a propagandist. But in recent years I've come around to the notion that, well, maybe I am doing propaganda, and maybe we all are to some extent, and the tools of propaganda are just so widely available and cost so little now. It's just so easy for all of us to do propaganda. So what is the fundamental difference between someone who's doing advocacy for livable streets and someone who's doing advocacy for the Republican Party? Because that's what Streetsblog is doing: it's doing advocacy for livable streets, and Fox News is essentially doing it for the Republican Party. What's the difference in the way we do that advocacy? I think in that question lies some sort of answer about the tools, or the kind of code, you'd want. And I'll give you one little example. You can look and see at Streetsblog who's paying for stuff pretty easily. You can see that the people who are writing have a long history. So there's a level of authenticity, whereas, you know, Rick Berman, from Melanie's example, has never done humane animal stuff before. I think letting people see these sources a little better, in context, could really help determine: is this good propaganda or bad?

Thank you. Did you have, I thought you had your hand up. You, yeah, you didn't have your hand up? Yeah, okay.

Yeah, it's very closely related to that. I don't think we're thinking enough about the reality of the situation, which is that values shape the facts, all right? And you have to think about that primarily.
So left and right have different sets of values, and we know what they are; it's been cited extensively. So if any tool is going to be created, I would think it would be great to create algorithms that actually find ways of reframing information. Like: you're perpetrating this lie and you think it supports your beliefs, but actually it's completely contrary to everything you believe in. That's the way to get through: to say that this information you find so threatening isn't, or this information you think you must believe, you don't, because it doesn't really help make you who you are. That's the only way to actually change minds, and I think that's what all the psychology research says. In this room, I've heard liberal values expressed many times. Whenever people say "big money," "who's funding this," "special interests," those are all egalitarian values; a lot of people in here are egalitarian. So you could do algorithms: if you find egalitarian values being tied to a bit of misinformation, as in the vaccine-autism issue, where people think big pharma is poisoning kids, it actually isn't, but it's their liberal values that make them more inclined to believe that, because they distrust big pharma. You could then reframe and say: no, this doesn't support your values. Saving the lives of children is more important, and vaccines save those lives. And then you would actually start to go through all the psychological steps you need to get someone to stop believing something wrong.

Sasha, did you have your hand up earlier? Yeah, so I'm not going to pose this as a solution, but I think it's interesting to have a conversation that somehow imagines a bright-line separation between what happens online and what happens in other media outlets, right?
So we know, we have a lot of good information now, about the way information flows across channels. And to talk about how we could counter online misinformation, or hate speech, or however we're framing it, without talking about what strategies there would be in the broadcast space, would be a big mistake. We have good metrics that look at how something can be generated and spread via a broadcast outlet, and then get picked up and constantly repeated in online spaces. And in that context, what's come up so far is the potential of a boycott threat. But in the context of talking about MartinLutherKing.org, we should also remember the possibility, in other times and spaces, of license challenges, another strategy that has been extremely effective in the past, one that very successfully restructured the entire employment practices of an industry, although the license challenge itself ultimately failed. So we'd be able to look at the history of the 1964 Lamar Broadcasting and United Church of Christ case, a key moment in the civil rights struggle. And then just to raise the question of what the limits are: presumably there is some limit beyond which the lies or hate speech or disinformation would get so bad that we would agree we would have to either revoke the license or take it offline. The classic example there is genocidal incitement, in the case of Rwandan hate radio. So again, I'm not proposing that the solution is censorship or taking away licenses, but I am proposing that it's difficult, for me at least, to have a conversation that doesn't look across multiple platforms and doesn't include the full range of political possibility.

So we still have a lot of people with their hands up, but we've run out of time, so I think we're going to have to move on, just so we stay on schedule. But thank you very much for all the great comments.
Please forgive us.
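Tying back to the earlier call for programmatic access to fact checks: a minimal sketch of what a client-side matcher over such an API's records might look like. The `FactCheck` fields (`claim`, `ruling`, `url`) and the word-overlap heuristic are hypothetical illustrations, not any existing organization's schema; a real service would fetch records over HTTP and use far more sophisticated claim matching.

```python
from dataclasses import dataclass

@dataclass
class FactCheck:
    claim: str   # the statement that was checked
    ruling: str  # e.g. "true", "false", "half-true"
    url: str     # link to the full write-up

def _tokens(text: str) -> set:
    """Crude tokenizer: lowercase words longer than 3 characters."""
    words = (w.strip(".,!?\"'").lower() for w in text.split())
    return {w for w in words if len(w) > 3}

def match_post(post: str, checks: list, threshold: float = 0.5) -> list:
    """Return fact checks whose claim shares enough words with the post,
    ranked by overlap. Stands in for the matching a real tool would do
    against misinformation spotted on social media."""
    post_words = _tokens(post)
    scored = []
    for check in checks:
        claim_words = _tokens(check.claim)
        if not claim_words:
            continue
        overlap = len(post_words & claim_words) / len(claim_words)
        if overlap >= threshold:
            scored.append((overlap, check))
    return [c for _, c in sorted(scored, key=lambda s: -s[0])]
```

For example, a tweet repeating the NYPD no-fly-zone report mentioned above could be matched against a published check of that claim and annotated with the ruling and a link, which is the kind of tool the speaker suggests an open API would enable.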