I'm going to turn it over to my co-moderator of this session and our co-host from the Center for Civic Media, their executive director, Ethan Zuckerman. As he's setting up, just one note of thanks to Urs in particular, which is: one thing I've learned from you over the years has been this way of seeing the question of information quality or credibility or truth not so much as a static thing, but as a matter of process. A process not just of evaluating the information, but of creating it and creating the architectures around it. It seems to me you've really pulled together a lot of the learning from today into that framework and expanded it extremely helpfully. Over to you, Ethan.

Sure, well, I wanted to take advantage of the fact that the time at the front of this room is so scarce. So I was going to seize a couple of minutes of it and offer my reflection on what's going on, and then do my best to open this up and get more people around the table. It's an interesting moment for me because this is one of the first events that I'm helping organize with Berkman from the MIT perspective. And I have to say, since moving over to MIT after eight years at the Berkman Center, everybody wants to know about culture shock, moving between institutions. Everybody, I think, is convinced that everyone at Harvard is wearing a suit 24-7 and that we move into rope sandals and t-shirts as soon as we take two subway stops further down; everybody wants to know the cultural difference. And I actually am starting to figure out what the cultural differences are. I think one of them is that when we take on the sorts of questions we're dealing with here, the approach of an organization like the Berkman Center tends to be to ask: can we find a systemic way to think about this problem?
And I think Urs has just put up a really helpful system that we can start using to think about the various problems that come up around this general sense of truth and truthiness. What's great about these large systems is that they can inform the way we end up designing tools; we can try to get a very thorough view of an issue and then figure out how to intervene. That, as I'm finding out, is not a very MIT way of doing things. The MIT way of doing things, near as I can tell, is to say the system will come to us eventually; what we really need to start with are small experiments. We should try an experiment that we can run over the course of a week or a month, or certainly within the context of a master's thesis, and see whether that gets us anywhere. And if it gets us anywhere, we should keep hammering on it, and eventually we'll get to the place we want to be. The trick with this method, as I'm finding, is that you have to figure out which problems are tractable. So when you're looking at something as big as these questions of verifiability, truth, truthiness, disinformation, and so on, I find myself trying to pick apart the questions we talked about this morning from the perspective of tractability. So let me use that to frame a couple of the conversations we've had, and then a couple of things that haven't come up, and see if I can push us forward a little into where we go this afternoon. A few of the people around the table here today were at a gathering at the New America Foundation a few months back; it was a conversation about fact checking. It was a great conference. It was really a good deal of fun. Now, it was under the Chatham House rule, so I can't actually credit any of the bright people who said anything there. I can just tell you, wonderful things were said by wonderful people.
But one of the wonderful things said by a wonderful person was about fact checking, using a particularly gory metaphor. The metaphor was that misinformation getting out there is like being shot. And so far what we know how to do is come in three or four days after the fact and try to bandage up someone's wounds. So we go through a political speech, we go through a debate, someone says something that's blatantly untrue. It circulates in the media for a while. Three or four days later someone comes in with a fact check, and we're basically trying to staunch the bleeding of all those various harms out there from the disinformation, which allows us, through motivated reasoning, through cherry-picking, to pick the facts we want to make our arguments. And the thought was: maybe we could do slightly better. Maybe we could get to the point where we had a bulletproof vest. This isn't actually bulletproof, it isn't why I wear it, but you can imagine a bulletproof vest for truth that would sit there, perhaps in your web browser, perhaps on your TV, and when the bad information came out, it would jump up and block you. You'd still get hit, you might get a bruise, but you wouldn't bleed. The information would somehow get countered very early on. What I found really interesting about this conversation is that no one took the metaphor any further and said maybe we could dissuade people from shooting in the first place. Could we get people to stop saying obvious mistruths? And that appears to be, in the current political climate and current media climate, well within the realm of the intractable. The notion is that calling out disinformation and shaming doesn't have the power we once thought it did. And that may change our sense of what the possible interventions within this space are.
Those interventions may then be trying to figure out how to counter misinformation before it does the damage it otherwise would, but then we need to ask ourselves: do we want to give up on the function of shaming that early on? When I talk about tractability, it's really about questions like that. It's trying to figure out where in that line we would like to go. I've been a little surprised, given the international experience of some of the people in this room, how US-centric and particularly how left-right a lot of this conversation has been today. And I want to go back to a comment made by my friend and colleague Ivan Sigal, who pointed out that it can be a very different circumstance to think through the process of fact checking when you actually have facts to start with. When you have reporters on the ground, when you have events that are fairly easily deciphered. For people like me who do a lot of work in the citizen media space, we are still trying to figure out the implications of the Amina Arraf story. Who knows what I'm talking about when I say Amina Arraf? Better than most rooms, but few enough that I should actually tell you the story really quickly. Amina Arraf was an incredibly popular blogger in Syria. An amazing story: an incredibly brave young woman, an out lesbian in Damascus, writing in English about her experiences living in that city, about the early stages of the Syrian resistance. An amazing figure; major newspapers came and did interviews with her. Everyone started reading this blog. It rose to prominence quite quickly. The only problem was that it was written by a 40-year-old dude from Georgia named Tom MacMaster. And he had carefully constructed this online identity over the course of years because it allowed him to have his voice heard, since as a middle-aged white guy, no one ever took him seriously and obviously he wasn't very well represented in the media.
So by becoming a Syrian lesbian, he would have the chance to be heard in a way he wouldn't be otherwise. And as people started looking at this, it turned out that most of the people who had met Amina Arraf also turned out to be white dudes pretending to be lesbians, leading one commentator on the situation to look at this and say it's fake lesbians all the way down. And the problem with this was not just the construct of fake lesbians reinforcing this guy's attempt to speak on behalf of the Syrian people. It was that we were so desperate for perspectives from the ground in Syria at that particular point in time that media organizations that should have known better were extremely receptive to this particular voice. Now again, when we deal with the realm of intractability, we deal with the difficulty that you have a genocidal regime trying to systematically kill off its people, which has figured out that killing off journalists is a really good way to keep this going. That's probably not a problem we can solve within this room. But figuring out how we cross-source, and how we spot identities that rise surprisingly quickly and that we should have certain suspicions about, is a place where we might find ourselves able to design, develop, and deploy some tools. So for me, the bad news of this morning is how many of these problems probably fall onto that intractable side of things. A lot of what we're talking about, like the influence of money in politics: as much as I love my friend Larry Lessig, I am not tremendously confident that we're gonna strike at the root, certainly not by 2012. But it also strikes me that what we've gotten this morning is an incredibly helpful set of tractable questions that have come out of all of this: little experiments that we can actually try in the world and try to get a sense of whether or not they have an impact.
I look at Kathleen Hall Jamieson, whose experiment with FlackCheck literally asks: can we take on something that seems totally intractable, which is basically false speech and fear-mongering in political advertising, and can we try a clever point of leverage and actually see if we might have an effect on how many of these ads air or don't air? We should have some notion by the end of the 2012 cycle of the extent to which that works or doesn't. So what I'm really hoping we can start doing this afternoon is shift a little from the big questions, and particularly from the intractables, and start putting forward questions that we might be able to test. And when I say test, I mean test experimentally, which is where we're hoping to go tomorrow on this hack day, which we'll talk a little more about in the conclusion. It's not really a traditional hack day. A traditional hack day is a lot of guys like Gilad Lotan, who write code in their sleep, sitting down with a data set and trying to build some new tools around it, and we just don't have enough Gilads to go around for tomorrow. What we do have, which is an amazing asset, is a whole bunch of people coming from some very, very different perspectives, thinking really hard and deeply about these issues, who can come together and think through some of these design challenges. What is a question that we want to test in this space of truth and truthiness? How would we set up an experiment to test it, either by organizing and trying to conduct something in the real world, like an email and Twitter campaign, or by trying to build some tools that take us there? So the way we're going to start that conversation, and the direction I'm hoping we can shift the frame of this discussion, is to think through those small, tractable questions. Just to give you a couple of examples of ones that have come up this morning.
Fil Menczer is basically asking: are there network signatures that can tell us when someone is a bot and when someone is human? Great questions come out of this. Does it matter if you're a paid political activist and you say the same thing time after time? Is that actually any different from being a bot? But it is the sort of question that we can put out and test. When Susan Crawford opens with this amazing story about being able to figure out who killed the newspaper seller in the London riots, it's an open question whether there are certain factual questions where we can immediately open things up to crowdsourcing and try to find information that doesn't exist yet. So the challenge I want to put forth is: let's take the big ideas, get them down to smaller questions, and start working through the frame of what we're going to do tomorrow, which is figuring out how we actually test out those questions. So in particular, if you have some way of taking the amazing material that's been put in front of us and putting it in the realm of questions that we want to see answered, this is a great time to come and grab the precious mic time at this conference. So hands up if you want to jump in, please. And introduce yourself first. Lean in towards it. Should I shut up? Will that help? I'm sorry about that.

So I want to underscore the shaming point that you were making, and just say that we've probably under-emphasized the role of elites here. It's much easier, potentially at least, to stop these things from starting than it is to undo the damage once it's been done. And in particular, this shaming may have a second-order effect where the elites anticipate the shaming and are then less likely to produce the misinformation in the first place. Now, the question is to think about what smaller-scale versions of shaming could be implemented in a context like tomorrow. So that's what I would be interested in people's thoughts on.
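[Editor's note: as a concrete illustration of the network-signature question raised above, here is a minimal sketch of what an automated bot-versus-human signal might look like. This is not the actual method used by any of the researchers mentioned; the function, its weighting, and its two signals (posting-interval regularity and content repetition) are illustrative assumptions only.]

```python
from statistics import mean, stdev

def bot_likeness(timestamps, messages):
    """Crude bot-likeness score in [0, 1] built from two toy signals:
    regularity of posting intervals and repetition of content.
    Both signals and the 50/50 weighting are illustrative assumptions."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(intervals) < 2 or mean(intervals) == 0:
        return 0.0
    # Bots tend to post at near-constant intervals: low coefficient of variation.
    cv = stdev(intervals) / mean(intervals)
    regularity = 1.0 / (1.0 + cv)  # approaches 1 as intervals become uniform
    # Bots tend to repeat themselves: high share of duplicate messages.
    repetition = 1.0 - len(set(messages)) / len(messages)
    return round(0.5 * regularity + 0.5 * repetition, 3)

# A metronomic, repetitive account versus a bursty, varied one.
bot = bot_likeness([0, 60, 120, 180, 240], ["buy now"] * 5)
human = bot_likeness([0, 12, 300, 310, 900],
                     ["hi", "lunch?", "wow", "ok", "later"])
```

Even a toy score like this separates the two accounts; a real system would of course need many more features and labeled data.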
So there's a great potential experiment: micro-shaming. Is there some sort of social intervention we can try where we can figure out whether shaming is effective even if it doesn't make it to the front page of the New York Times or onto PolitiFact? I suspect it's happening, but anybody putting that in 140 characters and putting it on the truth economy may get some answers from the crowd as well. Please, go ahead.

On FlackCheck.org, we're giving stinkweeds to reporters who air ads uncorrected, and we're giving orchids to those who hold consultants and candidates accountable for their misleading statements and ads. We know that people search their own names on the web. We assume that they're going to find that they've gotten stinkweeds or orchids. We think, as a result, that they're less likely to air ads uncorrected and more likely to hold people accountable; that's the test of the hypothesis. Go look at our stinkweeds and orchids after you've emailed your stations.

And testability on this may have to do with whether you start getting hate mail from reporters saying, how do I get rid of that stinkweed next to my name? Which is often a sign that you're headed in the right direction. Yes, please.

Melanie Sloan from CREW. I have to say I'm really skeptical of this whole concept of shaming in general. I think shaming has really lost its power, and you can see that by the fact that everybody in America gets a second act no matter what terrible thing they've done, including a New York Times reporter who plagiarizes everything. So I find it hard to imagine it working on people involved in PR; Berman's been exposed before for stuff and he's not ashamed in any way, shape, or form. He just does it again. So unless there are some studies that show this really works, given how our society has moved to a place where shame seems far more ephemeral, that doesn't seem that useful to me. So you'll remember.
I put shame on the table as an intractable rather than a tractable, but we can certainly go back and forth on this one. Mike?

A couple of things. One is I agree with Melanie, in terms of at least my world, which is politics, political consulting. Shaming doesn't work except on a mass scale. You have to get to critical mass, to a certain level of intensity, before shaming works, because just exposing people for being frauds doesn't do anything to them in the world of politics unless it's become so big that they start to be affected by it politically and economically. Rush Limbaugh's the classic example: Rush Limbaugh has been doing his shtick for years, saying horrible things about all kinds of people. It didn't get to critical mass until this last week. And when it got to critical mass, the advertisers started going away and he had to start backtracking, but it took that level of intensity before it happened. One other comment I wanted to make that I think is important in all of this, whether it's shaming or anything else. In my view, blatant mistruth is less of a problem. I worry less about blatant mistruth because most political consultants, most PR people, try to avoid being blatantly wrong on the facts; they know they'll be exposed fairly fast, either through crowds or through fact checkers or whatever. It's the folks who throw out facts that are completely out of context: they may have one fact while 20 facts contradict it, but it doesn't matter. That's a much trickier, and maybe close to intractable, problem to solve.

So I should point out once again that my point wasn't really meant to be an advocacy of shaming. For all the shaming comments, I would mention that ShameCon is actually three weeks from now.
If you've been invited to that one, you should be terribly, terribly embarrassed, so you don't wanna tell anybody about it. But my question was really more this question of trying to figure out what levers we can work with. And I actually think the previous comment, pointing out that shame has been pretty ineffective, probably puts this more on the intractable side; I think there may be some agreement on that. I'm gonna call on someone without her hand up, but who is well known to many of us in the room: our Dean, Martha Minow, has actually just walked in and was gonna say a word of welcome and thanks. Dean Minow, thank you for coming over.

You're wonderful to host us.

Martha Minow is the Dean of Problem Solving. She has introduced a mandatory problem-solving course for all lawyers, and so she's delighted that we're in that mode. Sorry, carry on. Becky?

I was thinking about levers. I'm conscious this is what they call in the security export world a dual-use tool. But what about advertising? We had a situation in the UK where an incredibly homophobic op-ed piece was published in the Daily Mail by a columnist called Jan Moir, and Twitter mobilized to contact the people who advertised with that newspaper. Shame doesn't work, but the bottom line might.

So another testable intervention, and possibly a tested intervention, particularly as people look at the Rush Limbaugh reaction, where a great deal of pressure is coming onto advertisers via Twitter. Perhaps this is one of those circumstances where the ability to talk back is something we can test as a method of response. Other questions, comments, please. And introduce yourself, please.

Ari Rabin-Havt from Media Matters. When we look at the world of misinformation, I like to phrase it as: misinformation is most dangerous when it metastasizes. So if there's a bubble of untruth, let's just use Fox News as the example; I'm from Media Matters.
They lie willingly, they lie knowingly, and they lie forthrightly as part of a strategy, and that's outlined in internal memos and other documentation. Where the misinformation we see becomes truly dangerous is when it seeps outside: when it goes from the right-wing echo chamber through Fox News and then gets beyond there. So imagine a test to figure out how to put a finger in the funnel, a way to stop the misinformation from seeping out of the right-wing swamps. To take an example away from Fox: there's a very popular radio host named Alex Jones who spews all sorts of garbage on a day-to-day basis, 9/11 truther stuff, that kind of stuff, but his stuff stays within his large audience, so it doesn't have a broad cultural impact. So the question is: is there a way to stop the cultural impact at the funnel point?

So two questions there. One is whether we could figure out when information is crossing from one echo chamber into a broader space, and another, not an easy one given all of our conversations about fact-checking, is whether there's a possible intervention that one could put into play when it looks like something's leaving one conversation and entering a broader one. Kai?

To pick up on what Mike was saying: I wonder if there is in fact some sort of tool to build that is not a fact-check but a context-check. You know, to hijack our Joe Arpaio example from earlier: he is an absurd person, an absurd figure, who just two weeks prior, well, more than two weeks, a month or so prior to that, was in a Politico headline saying Joe Arpaio racially profiles, per the Justice Department, which was a variation on the one we saw about Joe Arpaio saying Obama's birth certificate doesn't exist. The only reason Joe Arpaio is saying Obama's birth certificate doesn't exist right now is because he's being investigated by the Justice Department; he's trying to change the subject.
So I share Mike's concern: there's the issue about specific facts, but I think we sometimes get lost in the debate over a given set of facts to the detriment of the debate over the untruthful context. Is there a way to check the context? Maybe that goes beyond technology, I don't know, but that I think is actually the greater concern.

But I think it fits well within this theme of questions that we could try out and test, which is to figure out if there's some way we could put context into a story, so that when Joe Arpaio comes up we get some context of where it's coming from. Even though we're at time, just to note, we've got like four hands up. Should we maybe take four more quick? Yeah, let's take four quick comments and not react. Let's go Judith, let's go Ellen, let's go Dan, and then the gentleman here, and that's all we're gonna get in on this one. Mic, mic, mic. Judith?

Your comment about shaming made me think about studies that have been done on lie detector testing, because a lie detector doesn't test whether you're telling a lie. It really tests your own feeling about telling the lie. So if someone is a sociopath who really has no guilt and no qualms, it doesn't show up at all. What they really can test is: are you stressed? Are you feeling guilty? And so I think an interesting path off your shaming piece is this notion of trying to come up with some type of typology or classification of the types of people who are promulgating these lies. Because there's the group that's sort of like the sociopaths of politics, who deeply believe what they're saying or have no compunction about it. There are the politicians who may indeed have some guilty feeling, for whom the shaming would work, because they realize they may be doing something wrong but they have their eyes on the prize. There are the ones who are motivated by money, et cetera.
So understanding those sets of underlying motivations may be the key to understanding the different types of useful reactions. Thanks, Judith. Ellen?

In response to the notion of whether there are tools that can be built that would look at the provenance of language and where and how it spreads: we've been working with the Media Standards Trust in the UK. I don't know how many people know their Churnalism site, but it's a site that has a database of press releases and a database of news stories, and it can track how many journalists are churning the press releases. They've just open-sourced their code thanks to a grant from Sunlight. We're developing the same site for the US, but more importantly we're actually using their code to look at regulatory comments at the moment, to see how many comments on an EPA rule actually came from a single source or a double source, or how many sources they came from. So these tools: you guys can help figure out how to improve the kind of stuff we're building, but it is possible to do this. And I'm speaking to the guys who are building these things.

Dan Schultz, I'm at the MIT Media Lab, Center for Civic Media. So a couple of questions that I have. Priming with self-affirmation has been shown to be effective in helping combat motivated reasoning, and I'm curious how we can implement self-affirmation techniques in the real world, and whether there are other forms of priming that might make people a little more receptive to fact checks. I'm also curious, in general, how much people value truth and honesty to begin with, and whether that value can be leveraged to change the dialogue; this is kind of like shaming flipped on its head, so instead of trying to shame people away from lying, you appeal to the value they place on honesty.
And then third, I just wanted to note that I'm working on Truth Goggles, and I've tried to split it up. It's a credibility layer for internet content, trying to connect the dots between the content you're looking at and fact checks, and I've found sort of three tractable or semi-tractable problems. The first is what the interface looks like, which gets at the self-affirmation and priming questions. The second is where these facts come from and how you scale collecting facts. And the third is how you find instances of facts in the news in an automated way. I'm not answering all three as part of the thesis, but I think those are three questions worth asking.

Hi, I'm Aaron Naparstek, I'm a Loeb Fellow at Harvard, and in 2006 I started a blog called Streetsblog. I guess my point is, in my experience, it's not necessarily that difficult to counter this stuff. I am and was part of a movement in New York City that was really oriented towards reforming the New York City Department of Transportation, making things better for pedestrians and cyclists and transit riders, so a very specific niche issue, and we were up against 80 years of culture and policy aimed at moving motor vehicles through New York City. And it didn't take that much to put a new perspective out there. Really, it took two journalists working five days a week, full time, to help create an entirely new perspective on what streets could be in New York City: that streets could be public spaces, and places for bikes and buses that move quickly. That ultimately helped to create substantial policy change. So I have kind of a hopeful sense of this, because of my experience with this niche issue: when you really start focusing on a niche, and kind of professionalize it, and move yourself outside of that mainstream media world, I think you can have a lot of impact.
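[Editor's note: the Churnalism-style matching Ellen described, comparing news stories against a database of press releases, and the automated claim-spotting Dan Schultz mentions, can both be sketched with simple word n-gram overlap. This is only an illustration under that assumption, not the Media Standards Trust's actual open-sourced algorithm; the texts and function names are made up.]

```python
def ngrams(text, n=3):
    """Set of word n-grams in a text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def churn_score(press_release, article, n=3):
    """Fraction of the article's word trigrams that also appear verbatim
    in the press release: a rough cut-and-paste indicator."""
    a, p = ngrams(article, n), ngrams(press_release, n)
    return len(a & p) / len(a) if a else 0.0

release = "acme corp today announced record profits driven by strong demand"
copied = ("acme corp today announced record profits driven by strong demand "
          "analysts said")
fresh = "independent reporting found the profit claims rest on one-off asset sales"
```

Here `churn_score(release, copied)` is high because the article reuses the release almost verbatim, while `churn_score(release, fresh)` is zero; the same overlap measure could flag near-identical regulatory comments from a single source.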
So that's a wonderfully helpful intervention, I think: having that sense that tractability may have something to do with the scale of the issues, whether you're going after the fundamental left-right splits in the United States polity, or whether you're going after issues where left and right might actually come together and say fewer dead cyclists in New York would be a good thing. These might be places where we have the possibility of making some progress. So back over to you, John, to introduce our next two moderators.

That's excellent, thank you. Ethan and Urs and others, thank you for this synthesis section. Thank you.