My name is Ed Bice, I work at Meedan; we work on fact-checking software and translation software. This panel is going to be rather ad hoc. I was asked to moderate the panel about 36 hours ago and thought, said to Dan, well, everything in moderation, including moderating. So I agreed to do it. And I think we've got a great set of panelists. We'll kick the panel off with five to ten minutes from each of the panelists, and then we're going to open it up, take questions, and hope to engage a discussion around the general theme of annotation, fact-checking, the future of journalism, and the aspects of media literacy that Michael brings into the discussion. So I'm going to ask each of the panelists to introduce themselves, and then we'll go to Stefan, who will introduce himself and give the opening remarks. Then we'll go back to the other panelists, who will offer their opening remarks, and then we'll open up the discussion. I have to note, and apologize for, the fact that this is a manel, and so I disclaim any... well, I just note that. So I'll pass the mic here and then we'll come around to Stefan.

Hi, I'm Emmanuel Vincent. I'm a climate scientist by training, and I started a project called Climate Feedback. When I arrived in the U.S. I realized the state of the news coverage of climate change, and the goal of Climate Feedback is to bring scientists who are experts on the topic to provide feedback on the credibility of news coverage of climate change.

Hi, my name is Mike Caulfield. I work at Washington State University and I'm running a cross-institutional project to teach students basic fact-checking skills.

Hi, I'm Wes Lindamood, and I'm a designer on the visuals team at NPR. NPR is National Public Radio, but our team is focused on the visual expression of that in the newsroom.

Okay, great. I think at this point we'll pass it over to Stefan. So Stefan, if you can introduce yourself and provide us with some insights into your work.

Sure. Hey, thanks for having me here. I hear my voice with an echo, but I will try to ignore that. So, I am a Romanian investigative journalist. I have lived in Germany for a while now, and I co-founded and now coordinate an investigative journalism network together with Der Spiegel, called European Investigative Collaborations (EIC). But I have a long track record in investigative journalism out of Romania, in the Eastern European context, in the non-profit investigative journalism world. My focus is investigating organized crime, and recently I also started to be active in the research field: I'm enrolled as a PhD researcher doing work on cross-border networks for investigative journalism. As far as I understand, I should start my presentation directly, so bear with me for a second and I will show you.

Okay, so I'll try to be fast on this, and I don't have much feedback, being alone in a small room here. So again, my name is Stefan Candea, and I will speak now about how our network works with annotations. I am the network guide, so to speak. My work consists of daily coordination related to content and investigations, but also advising on what tools we should use as a network. I will talk a bit about what this EIC network is, who the partners are, a big project that we did recently, and something about our workflow and tools and how we experiment with annotations, namely with Hypothesis.
What we are trying to do now, in a separate context, is putting search and annotations in a box, an ARM box, and I'll share my considerations related to issues, tech issues especially, in cross-border investigative journalism.

So, about EIC: we have one year of activity. We focus on Europe, on the reporting and publication of investigative journalism in Europe, with topics that affect European communities. We partner on in-depth reporting at the source, meaning that we have partners who are knowledgeable and actively involved in the different countries in Europe. These are the partners for now. It's a really broad mix: small nonprofits like the Romanian Center for Investigative Journalism and the platform called The Black Sea, really big partners like Der Spiegel in Germany, and also new, digital players like Mediapart in France. And we take project-based partners, so from time to time there are other partners involved. This is the example of The Black Sea, which is a platform in English for in-depth reporting about the region.

Now, briefly, about Football Leaks, which was our biggest project recently. It was an almost year-long investigation into European professional football, and it was based on a leak: almost two terabytes of information, a lot of diverse files, and more than 60 journalists involved in 12 countries working across borders.

From that I want to give you an overview of how we work, project by project. For each project we have a legacy network and a tool stack ready from past projects. During pre-publication, when we know what we are working on, we go through refining the network, the tools we are using, and the workflow, so we are in a constant dynamic flow of testing and experimenting with new tools. We then go into publication mode, and that involves a lot of legal issues, confrontation across different countries, embargoes, publication dates. So we switch between pre-publication and publication phases, which involves some secret work and then public work, and we end up building a sort of investigative platform out of that; you will understand why when I go into the details. We have a search tool, one of the platforms, that runs a search engine over all the secret data sets we have, and we use a communication platform which currently, and during Football Leaks, is based on Sandstorm. You should know about Sandstorm because it's in your backyard. On Sandstorm we have the apps that we use: Rocket.Chat for daily communication, wikis for creating a knowledge base, and a few other things like filing findings, for which we use the issue system of a GitLab instance. In between these apps on Sandstorm and Hoover, where the documents reside, we are experimenting with a bridge through annotations. Of course we try to engage directly with the source documents in the search engine: marking findings, translating, because in our group every partner speaks a different language. And we try to bridge between the platforms by using a bot for the Rocket.Chat instance in Sandstorm, basically letting people know when an annotation was made on a source document. We use Hypothesis, again in an experimental way, because there are a lot of problems that come with having a small tech team.
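To make that annotation-to-chat bridge a little more concrete, here is a minimal sketch of the kind of bot Stefan describes: it polls the Hypothesis search API for new annotations in a private group and posts a summary to a Rocket.Chat channel via an incoming webhook. The API token, group ID, webhook URL, and instance URLs below are placeholders, and the polling approach is an assumption for illustration; EIC's actual bot may be wired up quite differently.

```python
"""
Sketch of an annotation-to-chat bridge (illustrative only): poll the
Hypothesis API for new annotations in a group, post them to Rocket.Chat.
All credentials and URLs are placeholders.
"""
import time
import requests

HYPOTHESIS_API = "https://hypothes.is/api/search"      # or a self-hosted instance
API_TOKEN = "YOUR_HYPOTHESIS_API_TOKEN"                 # placeholder
GROUP_ID = "YOUR_PRIVATE_GROUP_ID"                      # placeholder
WEBHOOK_URL = "https://chat.example.org/hooks/TOKEN"    # Rocket.Chat incoming webhook (placeholder)


def fetch_annotations(search_after):
    """Return annotations in the group updated after the given timestamp."""
    resp = requests.get(
        HYPOTHESIS_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"group": GROUP_ID, "sort": "updated", "order": "asc",
                "search_after": search_after, "limit": 50},
    )
    resp.raise_for_status()
    return resp.json()["rows"]


def notify(annotation):
    """Post a short summary of one annotation to the Rocket.Chat channel."""
    message = (f"New annotation by {annotation['user']} on {annotation['uri']}:\n"
               f"> {annotation.get('text', '(highlight only)')}")
    requests.post(WEBHOOK_URL, json={"text": message}).raise_for_status()


def run(poll_seconds=60):
    """Poll in a loop and forward anything new to the chat channel."""
    last_seen = "2017-01-01T00:00:00+00:00"
    while True:
        for ann in fetch_annotations(last_seen):
            notify(ann)
            last_seen = ann["updated"]
        time.sleep(poll_seconds)


if __name__ == "__main__":
    run()
```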
We also test an extension built by Jon Udell of Hypothesis to automate some of the tasks that we have during an investigation: re-running searches on new terms, adding findings to a timeline, automatically creating wiki pages from findings, and tagging findings and sending them into the footnotes or timeline of wiki pages.

Another thing we are doing with annotation is trying to package this bundle of tools in a more stable way. My colleagues — we work together, but it was my developer colleagues who worked in the last few months on putting together in one box, one ARM box for now, search and annotations: our search tool Hoover and annotations from Hypothesis. This is part of a project funded by the Google Innovation Fund, which runs a contest in Europe, and we try to make these boxes talk to each other and use both search and annotations, so at some point we can skip some of the applications when letting the group know about findings. You have a URL here, and because we don't have too much time I will skip to the last slide.

Basically, the problems I have as a coordinator, and the other journalists have, are these: growing networks and the growing data we have and exchange, and the question of what type of data is gathered in the different networks or different tools — there is other data being gathered, saved, and analyzed in these collaborative projects. I have to consider what the responsibilities and obligations are for the journalists and for the platform owners or operators; what the tech limitations are; and whether you want to enhance the user experience or keep people's search history safe by not saving any search logs. I have these issues between centralization and distributed systems, and different threat scenarios, because we do a lot of the work in secret but we also want, at publication, to have it exposed as much as possible and have people contribute and comment through annotations on our stories — that's what we will try to do in the future. And of course the biggest issue is that these tools that work in an experimental way are great, but we don't have the manpower now to keep them up to date, maintain them, develop them further, and look at all the different consequences, especially security issues. Here you have my contacts and some of the URLs we've used, and I will wait and listen in for further questions. Thanks.

Great. Thanks, Stefan. Great work. Next we're going to hear from Mike Caulfield. Mike is a blogger and academic whose writings have crossed my desktop over the last couple of weeks, and before this conference I was actually wondering how I was going to get in touch with this guy. So I'm a huge fan of his work; everyone should read his blog.

This is a site called MinimumWage.com. It talks about Denmark's $41 menu and how, if you raise the minimum wage, you're going to increase the prices of all these different meals, et cetera, and there are going to be fewer jobs because fewer people buy, et cetera, et cetera. I don't need to go into that piece of it. But the Stanford History Education Group recently ran a study where they took fact checkers, Stanford students, and history faculty — history professors — and they gave them this page and said, hey, what do you think about the authority and source of this page? Now let me see if I can break out just for a minute. I'll show you how you actually deal with this page here.
I don't have that specific article up there, but it doesn't matter; this is sort of your garden-path solution for "do I trust this page." You hit the About page, and you find that it's put out by this group, the Employment Policies Institute (EPI). You search Google for that, and when you look at that you see that there's SourceWatch here. Take SourceWatch — you could take something else, you can find multiple routes into this. You note that it's associated with this guy Rick Berman, and then when you search for Rick Berman and MinimumWage.com you get a list of exposés about what is essentially an industry-fronted PR group, including ones that say his nickname in the industry is "Dr. Evil," so that's kind of interesting, right? It's a pretty simple solution. There's nothing really complex about this.

Now, how do you think people do when they're presented with this problem? Well, here is an exciting yet thoroughly depressing graphic. We see the three groups here: we see checkers, we see historians, and we see students. The checkers do great — the fact checkers do great. The median time to find this EPI group, moving from MinimumWage.com to EPI, is about, what is that, 50 seconds. By about 200 seconds all the checkers have associated this group with Berman, the front group, and understand it's PR. At the point that all the checkers have done this, the average historian hasn't even gotten to EPI. And as a matter of fact, only 60% of historians — history faculty — will get to the point where they understand that this is a front group run by this guy Berman. Students are of course even slower. You see the students are five minutes into this and they're finally reaching EPI, and only 40% of students will make this connection within the time allotted. And again, this is unpublished. I did check that — I know this says "draft, do not distribute" on the actual slide, but I did get permission this morning, I swear. It was presented last week at the big educational research conference, or whatever you call it.

So how does this relate to what we're trying to do with students? Well, this appeared in my stream yesterday: HPV vaccines, like, kill a bunch of people. And this is the traditional thing our students deal with, right? They come to a page like this. And what we've traditionally told students is: read deeply, right? Close reading. Look at the page — is there data, is it supported, are there experts? What we find is that actually none of that advice works. In fact a lot of it is counterproductive, right? Is there data on this page? Yeah, look at this, there's a chart, there are numbers, right? There are numbers of dead people on there. You can't get more data than that, right? We have an expert here, Dr. Diane Harper, right? And we can have the students go through this and look at it. It's not only linked, it has footnotes, right? And they're kind of like, I don't know what format footnote that is — it's some mishmash of something or other — but it looks kind of sciency, right? And the more they apply our traditional methods to these documents, the worse they actually do. And if we go back to that other slide and we say, you know, these are Stanford students, by the way: only 40% of Stanford students are getting to that moment. That's Stanford, right? I forget what the acceptance rate is at Stanford, but it's pretty low; it's kind of a selective school. Okay, so when we look at this and we ask why this is happening — a lot of it is our fault.
It's our fault because we tell students: read deeply, look at the data, look for fallacies, try to match this up with all this stuff. Now, what happens — and this is why, and this is the end of my intro, this is why I'm interested in annotation — what happens if, instead of telling students "look at the page, think deeply about the page," you tell students, hey, annotate the page? Well, the first thing that happens is this. I'll grab Dr. Diane Harper here, we'll throw her up here, we're going to annotate Dr. Diane Harper, you know? And what we'll find is that the first autosuggestion is "Diane Harper Snopes": urgent warning about Gardasil — false. Now, you don't have to take that at face value, right? You don't have to believe everything Snopes tells you. But by the time you get to the Snopes page, they've done a lot of that work for you. And it turns out, if you read through it, it's a little more nuanced than you might think. Diane Harper actually did work on some of the testing, and she does actually believe that promotion of Pap smears over a lifetime is probably a better approach to cervical cancer than the vaccine. Does it have anything to do with 32 dead people and 9,000 people injured by Gardasil? No, it has zero to do with that. You can read through that.

The thing about annotation is it gets our students to think about the web the way I think the web was intentionally designed: it's a networked set of linked information, which allows us — you know, we're not just handed a page, a printout of a website in the middle of a desert, and told, take a look at this, does this look good to you? I mean, we have the web at our disposal. And so annotation, in my mind, is a way to kind of re-webify the web, right? Get our students to think like the web and start to think about: where are the sources for this? How can we connect those sources? And it builds in students a better set of reflexes for how to approach information on the web than our traditional, very publication-based, very print-based methods. And that's what excites me about annotation, and I'll pass it on to the next person.

So, as I mentioned earlier, I'm a designer on the visuals team at NPR, and the visuals team is a small team embedded in the newsroom that is focused on new approaches to telling stories online. By that I mean web-native documentaries, data visualizations, and annotation work. I should say by way of introduction that I'm pretty new to the world of annotation, but I hope that, being new, I can provide a unique perspective that's valuable to the conversation. Some of our earliest annotation work started during the election season, where we were doing live fact checks and analysis of the presidential debates. From there we moved on to annotating inaugural addresses and Obama's farewell address. We've worked with member stations through the NPR network to annotate statehouse presentations — the State of the State address by the governor in Illinois — and we've also worked with stations in the network to embed our annotation work on other member station sites. So this whole ecosystem of different annotation approaches has kind of cropped up out of our initial work on the live debate annotation. What annotation means for us is a pretty specific use case: we're using it inside the newsroom, as opposed to creating an annotation tool that users can use.
But this is a unique set of users that I think have some interesting requirements and some interesting needs that we can serve. One of the things that I think annotation does for us that's really valuable is it breaks us out of an article mentality. We can think about structured journalism that goes to a specific point. A reporter doesn't have to write a whole article just to comment on a specific thing that Trump, or another public figure, said. So it teaches us to write — it teaches us to think about reporting in a new way. The other thing it does for us is it allows reporters from different beats and different desks to collaborate in new ways. Instead of having a science reporter write a science article about what a public figure said and having an education reporter do the same thing, they can collaborate on the same source document. So it allows opportunities for editors and reporters to collaborate in a new way.

All of these projects share some common editorial and design goals. For our annotations, it wasn't just fact-check work. We also thought that, as NPR, one of the things we can bring to the conversation is context and analysis: instead of just reporting on what's happened, how can we take a step back and provide analysis and space to think about the news? That is true for all of our annotation work. In the case of the debates — I'll just use that as a specific example — we took that general goal and had some targeted goals around the election: identifying new issues, verifying claims, calling out general campaign themes. These are things that our editors were thinking about. And the editors, as I'll talk about more in a second, play a really critical role in helping to guide all the reporters.

In thinking about our design goals, one of the things that was really important was making our annotations the focal point of this experience, which is a little bit different from some of the other examples we've seen today, where annotation is a layer on top. For us, we felt the annotations were the value we were bringing to the conversation, rather than just sharing the transcript and having users take an extra step to access them. The challenge that presented from a design perspective was, as you have multiple annotations — we had 15 reporters working on debate night — how do you aid the scannability of the transcript? How do you allow users to easily move from annotation to annotation? And how do you alert users to the presence of new annotations? So these were some of the things we were thinking about from a design perspective, along with making sure that this was designed in such a way that it could be flexible and work as an embed on a member station site, or in several different contexts.

As far as establishing the expertise behind what we were commenting on, bylines were an important part of that — speaking to: this is how you can find out more information about this reporter, and why they are qualified to comment or offer an opinion on this passage in a transcript. Annotation is writing in a specific way: it's not like writing a tweet, it's not like writing an article. So we thought about the training of reporters and helping them think about how to craft an annotation in a meaningful way, and about making sure that the sourcing was a critical part of the annotation.
Then there was the editorial curation to bring multiple perspectives to a specific line in the transcript — to have both the politics reporter and the senior business editor commenting on the same point, bringing different perspectives or different contexts to different statements. And that ties into: how did we establish trust in the annotation work that we were doing? That's where I think the editor was so important to this process — being there in the beginning, doing the up-front planning of identifying the right reporters to comment on a news event and identifying common themes; doing a little bit of up-front planning to say, these are the points we want to hit, this is what we expect this public figure to be speaking about; teaching and training around establishing norms for how an annotation works, how it's different from an article, how it's different from a tweet; line editing, being a backstop for the reporter as they're frantically finding and commenting on points, with the support of an editor behind them saying, this has been cleaned up, this is good to go; and then having an editorial director who was thinking about the totality of annotations — instead of just looking at the atomic unit, what do all these annotations collectively say? Are we being balanced in our annotating work? Are we all just diving onto a single point, or is there value in understanding all of the annotations put together? So that, I think, is a really important point: the editor played such a critical role in the annotation work.

In the interest of time, I won't go into the full workflow — I'll share these slides via the annotation hashtag on Twitter — but the short answer for how we did all of this: we used Google Docs. We had a transcription service being fed into a live Google Doc, and a Google Apps Script that could go in and create a template that could then be interpreted by the code we wrote, translating the information in the Google Doc into the presentation I shared a few slides ago. So that's the basic workflow. I'll leave it at that, but I'm happy to talk about the workflow in more detail later. Thank you.

Right, okay. So what we are working on is the coverage of climate change in mainstream media online. As you can see in this quick example — but I guess you know — there is a lot of contradictory information online, and we thought we could benefit readers and journalists by having experts provide feedback on the credibility of the content. That is the concept of Climate Feedback. We now have about 200 scientists in our network who volunteer to contribute when we analyze articles. And one thing that is important to note is that the group we brought together is motivated by a goal: they are coming because they want to help the public know which news they can trust. So that's one of the goals we have — and mostly that was the first one, but we realized that our interaction is actually more with editors. We can provide feedback about the credibility of content, what the scientists think of their work, and the editors are usually the ones who can make modifications or decide on improving the coverage. And now we are also starting to interact with people at Google and Facebook, to have them be able to read what we provide in terms of feedback on the credibility of content.
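As a brief aside on the Google Docs pipeline Wes described, here is a minimal sketch of the kind of transform that workflow implies: taking the text of a live transcript document in which reporters have inserted annotations using an inline convention, and turning it into structured records a front end can render. The `[[Reporter Name: comment]]` marker format and the function names are invented for illustration; NPR's actual Apps Script template and parsing code are surely different.

```python
"""
Illustrative sketch only: turn a transcript with inline annotation markers
into structured JSON for a front end. The [[Reporter Name: comment]]
convention is a stand-in for whatever template the newsroom's script produces.
"""
import json
import re

ANNOTATION = re.compile(r"\[\[(?P<author>[^:\]]+):\s*(?P<comment>.+?)\]\]", re.S)


def parse_transcript(raw_text):
    """Split the doc into transcript paragraphs plus any attached annotations."""
    blocks = []
    for para in raw_text.split("\n\n"):
        annotations = [
            {"author": m.group("author").strip(), "comment": m.group("comment").strip()}
            for m in ANNOTATION.finditer(para)
        ]
        text = ANNOTATION.sub("", para).strip()
        if text or annotations:
            blocks.append({"text": text, "annotations": annotations})
    return blocks


if __name__ == "__main__":
    sample = (
        "MODERATOR: The first question is about the economy.\n\n"
        "CANDIDATE: We have created millions of jobs. "
        "[[Jane Reporter: Official statistics put the figure lower; see BLS data.]]"
    )
    print(json.dumps(parse_transcript(sample), indent=2))
```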
To give a quick overview of the way we work: we select only influential articles that can be checked for accuracy — that's the first step — and then we invite scientists to annotate and analyze the article. The good thing here is that they can collaborate, and we can chop the work into small bites and ask scientists to comment on only one piece, so that they can each contribute a small amount of work and build together something that is complete. One of the goals of their annotating is to provide an overall credibility assessment of the article: we ask people to rate the credibility on a scale from minus 2 to plus 2, from "very low" to "very high." What we have understood from our two years of practice is that there are essentially six dimensions scientists comment on when they justify the ratings they provide: whether the facts are accurate; whether the science is understood — I think there is a clear distinction between getting the facts and understanding what they mean; whether the argument is logical; and, without going into all the details, one thing that is quite important: the sources — what the reporting is supported by, whether it is anchored in reality, with real evidence and experts to give credibility to the reporting. The last step of our process is to provide feedback to the editor, but also to provide this feedback publicly.

I think at this point you all know what this can look like: this is a screenshot of a web browser with a piece of text that has been annotated. In this case you have a year of temperature data, and the scientists here provided a longer span of the data set to show that the piece that was shown was cherry-picking — just showing a little bit of the data. So that's one illustration. Once we have all these comments from the scientists, we present them on our own website, climatefeedback.org, and we publish an analysis where we display the tags that people have mostly used in commenting on the content, and the ratings. In this case, for this Daily Mail article, most people gave it a "very low" rating and one person a "low," and you can see who the reviewers are. And I think that's interesting — that's a little bit like what you were describing: we have this process of collecting annotation comments, but then the way we display them on our website is that we organize them into key takeaways. We summarize what the annotations talk about, and then we have a list of the pieces of text being commented on, with the scientist and what he or she says. It's also important, as you were saying, to justify the expertise of the person: we link to the professional page of these people, showing who they are and what they know about the topic.

One of the things that is important for scientists is to know that what they do is actually useful and has some impact — they don't just comment for the pleasure of it. So one thing we make sure of is that at least the editors hear about what the scientists are doing. On social media you can also publicly call out the person, and in some cases, like here — oops — that's still where the discussion happens. On Twitter, for instance, once we have done all this work of annotating and analyzing, people usually point to our analysis because it's easier to read and digest. In some cases journalists also use what we do to build their own reporting, and politicians too — like recently Congressman Don Beyer, who used our annotation of the transcript of a House Science Committee climate hearing and had it added to the official record.
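As a rough illustration of how per-reviewer ratings on that minus-2-to-plus-2 scale can be rolled up into a published verdict like the "very low" on the Daily Mail example, here is a small sketch. The plain averaging and the category thresholds are assumptions made for illustration, not necessarily Climate Feedback's exact methodology.

```python
"""
Illustrative sketch only: average reviewer credibility ratings on the
-2..+2 scale and map the mean to a verdict label. The thresholds and the
simple averaging are assumptions, not Climate Feedback's documented method.
"""
from statistics import mean

# anchor points of the five-step scale described in the talk
LABELS = [
    (-2.0, "very low"),
    (-1.0, "low"),
    (0.0, "neutral"),
    (1.0, "high"),
    (2.0, "very high"),
]


def overall_rating(reviewer_scores):
    """Return (mean score, nearest verdict label) for a list of -2..+2 ratings."""
    if not reviewer_scores:
        raise ValueError("need at least one reviewer rating")
    avg = mean(reviewer_scores)
    # pick the label whose anchor point is closest to the mean
    label = min(LABELS, key=lambda anchor: abs(anchor[0] - avg))[1]
    return avg, label


if __name__ == "__main__":
    # e.g. most reviewers rate "very low" (-2), one rates "low" (-1)
    print(overall_rating([-2, -2, -2, -2, -1]))  # -> (-1.8, 'very low')
```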
So that's another way we make sure that what the scientists produce is being used. One more avenue for the future is that what we are doing can also be used by the main points where people consume the news, which are still Facebook, Google, and Twitter in some ways. They have all said that they want to provide more information about the credibility of information. For Facebook, what they are working on is showing, in the related posts or related information, the fact check of the target article, or of articles that contain the claim being checked — that's still in development. One thing that is already operational is that they show a little pop-up before you share something, if it has been fact-checked by one of the fact-checkers of the International Fact-Checking Network. And Google recently announced that they would feature fact checks first, or high, in the search results. So these are the general directions we are working in now.

But to open up the discussion, maybe: I think one of the important things is that what we do with annotation is really a working phase where we analyze, and that set of annotations then has to be promoted and publicized. So there is really a two-step process, where we work with the content, ask questions, have the scientists answer them, and then present what we found in another format — and I think that brings up considerations about what we should do with annotation to respect that use. Another thing that could be done is to automatically go and find information about sources, or about the links being pointed to, so you can surface information that helps either the students or the scientists. So if anyone wants to work on automated ways of tracing claims back to where they originated, or on sources — who these people are — or on links — what you are really going to get when you click — and bring in content that can help people analyze and go faster in their fact-checking: thanks.

So, raise your hand if you have a question for the panel. I will try to keep order of who's asking questions, and then we'll deliver a mic to you. I know you guys are hungry; we're going to break for lunch right after this. There, in the back.

Thanks. I'm TS Waterman. This is a question generally for the panel: do you think there's any way to aggregate annotations from multiple people — scientists, experts, the general public, whoever — in order to get a kind of general score of the trustworthiness of a source or a particular article? A lot of the techniques and strategies you've talked about, and that you've talked about teaching people, are sort of personal: following things around and trying to make a decision for yourself. Do you think there's any way to automatically aggregate the votes or scores that have been put on by the annotators?

I would say that getting from the annotations to an overall credibility score — for us at least, because we have the two: we have the comments that scientists make, and then we ask them to give a rating — it's really hard to jump from one to the other. I don't think you can say, comment on this and this and that, and then say, oh, based on that it's of low credibility. It's quite hard, because the scientists are always going to criticize something; they're going to say, oh, there is a problem here, but that doesn't always translate to low credibility — it can be a small problem. So I think it's going to be extremely hard to make anything automatic based on that.
Maybe some natural-language detection of sentiment — like, "I'm feeling very negative about this content" — but I would say it's probably not something machines can do.

Going back to something that was said this morning, to speak to this point: the idea of filters is interesting to me. I think it would be hard to automatically aggregate all of that into the same stack of annotations, but the idea that a source document could be a venue for multiple viewpoints to explore it is interesting. And I also see the design challenges of aggregation — the sheer mass of annotation if you're bringing in multiple viewpoints — so some way of providing hierarchy or order to that would matter.

Sure, thanks for the mic. In the same way that Google has done an amazing job of ranking web search — they started by looking at the link graph, but then actually looked at user behavior — maybe, and I'm just riffing on this, there's obviously no answer yet, one could use the aggregate annotations, the number of them, and, as you were talking about, the sentiment and words involved in them, in order to at least get a feel for what people are generally saying about this. Any thoughts?

Two quick remarks on that. One is that as we think about a credibility score, we probably want to expand it — Claire Wardle at First Draft has done some really good work charting the different types of misinformation out there — otherwise you get a scenario where a parody or a satire is swept in with malicious information, or where a parody or satire tag is employed by a disreputable outfit as a way of getting past certain filters. And then there's the notion that an aggregate is valid until it becomes a very important indicator, and then it gets gamed; we have to acknowledge that.

I like the way you say it's going to get gamed, because I think language is so flexible that you can always find a way to get your message across. If you want to spread misinformation, you can say it in a way that looks like satire, maybe, but it's still false information you're propagating. But I think there is probably a set of criteria that can be measured objectively — around sources, around the quality of what's produced, around the sources being cited — and that can give you an indication of the a priori trust you can have. And then, maybe, I would say you need a human to finish the work.

A question — Aviv? Yeah, so this is for you, Emmanuel: can you ask the scientists to mark how important the claim they're responding to is, or are you doing that already?
Because if you do that — if you ask them how important the claim is and how credible it is — then you might be able to aggregate that into a final score. I would say, have them do all of this and then see if there is some function from it. I'm curious whether you've explored that at all. And I have one other follow-up.

So, we do ask, but in a way that is soft, if you want: we just tell them, focus on what matters, or things like that. We don't have a way to encode that systematically, but I think that's probably one thing we would need. I'm not sure whether the argumentation structure of the text could help here, because maybe you could detect that this is the central claim, or something like that — by someone else, not the scientist, maybe someone who studies rhetoric — and build these things together.

Yeah, and the other thing — just to clarify the previous question, where we talked about a credibility score and parody: I feel like there is a sense of "how much should I trust this information to be accurate," which is what I think of, and I'm curious how that ties into your model of a credibility score. Because if I'm going to buy a stock, whether it's a parody site or not, I want to know, because then I'll know whether I should buy that stock based on the information on that site. So I think there is a global sense of a credibility score that does incorporate parody — it's just not something I should trust — and I'm curious how that ties into your models of credibility, and whether that's an accurate statement. The question I'm trying to get at is really how you think about credibility: if you were to create some sort of outcome that is a rating, is the model "how much should I trust this in order to take actions in the world"? Is that the model you use, or do you have a different model for the general credibility of a source? If you could reduce it to a single thing, is that something you think is possible?

In our wiki work we need a common definition, and what we say is that a fact is something that is generally agreed on by people in a position to know who are inclined to tell the truth. Those are our criteria. Everybody gets hung up on that "inclined to tell the truth" piece — like, oh, it's the New York Times, they're liberal, they have bias, or this other outlet has a different bias. What we actually find is that for any given fact there's a very small number of people in the world who are in a position to actually know. So we try to get the students to focus on that first: understand who the people in a position to know the truth of this fact are, look for a consensus among them, and then, if they find a division in the consensus among those people in a position to know, start to address questions of bias and inclination.

To speak to the credibility score from a design perspective for a second: I would say this is where user-needs analysis could come in really handy. If you're presenting a credibility score to an end user, how are they equipped to interpret it? An expert can interpret a credibility score with a background that an end user may not have. I think of an analogy like Google Analytics: if you just share Google Analytics — here are your top-line dashboard numbers — then depending on how familiar you are with how analytics systems work, you could interpret those numbers very differently,
in a positive way or in a negative way. So I think any time you think about automating something like credibility, you need to think about the context of the user and how they're going to come to make sense of it, and I think that's a big role for us as designers: to create systems, to create structure, to help extract meaning from something like that.

Great. I want to bring Stefan into the conversation. One of the themes we've been hearing is expert networks — they're a fantastic filter, a great way to solve a lot of the problems of open annotation. I'm curious, Stefan: to what extent has your group experimented with opening up contributions — around data cleaning or entity structuring — to users generally, or to a broader community? Or are the work and security considerations such that everything is closed?

Right. You get the idea: where everybody is going in the direction of fact-checking, communities, and the public using annotation information, we are kind of running in the opposite direction, where we keep annotation either as a tool to enrich source documents that are secret during the pre-publication phase, or as a bridge between applications to streamline our investigative process. But nevertheless we are talking about a big community of journalists working together, so you can see there's a pretty big group collaborating on documents that needs some sorts of tools that are not there yet. We didn't have any meaningful experience yet after publication. It's also an issue — a technical issue for now — that you cannot toggle between pre-publication and post-publication states with the same source documents, or decide to publish the batch of your annotations when you feel it's safe and you go publish the stories. We have the same issue with other tools we are using, and indeed it's a shame, but it's a technical issue. We are trying to work out this movement between secrecy and public work.

Talking about open networks and users, a couple of questions came in here: generally, how do you bring the deniers into the discussion, or how do you put your work in front of the deniers?

First we need to define who the deniers are — what do we usually mean? Within the world of scientists we might say there are some contrarians who reject the majority of evidence that every other scientist agrees on, and there are very, very few of them, and we haven't faced the question of getting them into the discussion because they did not join the discussion. So I don't think for us it's a big problem — maybe you are talking more about bloggers or that kind of people, and it's not really our goal to bring them into the discussion at this stage. Now, was there another part to the question — how do we bring...? What I do want to say is that we strive to bring in a diversity of voices within climate science; you don't have everyone seeing the same part of the problem, so we have rules about who we accept, and those who do propagate false information we try not to engage with at all. You have people who are maybe not the most alarmist, who say, okay, it's not as much of a problem as other people say, and these people we try to bring in — a diversity of voices — so we don't have just one subgroup dominating everything we produce.

I'll ask one of the questions from the notepad. Some of you want to get your information in front of the people who are reading
the original article — like that Gardasil article: you want people to see that, but it's not your article, so you have to get your information in front of them via a third-party layer. Others, like NPR or maybe some of the other commentary, publish on their own sites: if your commentary goes up on your investigative journalism site, you don't want people coming in and overlaying it with something that claims to be just as authoritative, on your own website. How do you deal with that tension? I mean, who controls it? You want your work to be seen, but you don't want your own content to be disputed, or something, on your own website.

So, first I'll make a clarification. From my perspective, while it is nice for students to do public work that actually is seen by others and may help others reach their own decisions, the most important part to me, again, is that the process of annotation gets our students thinking in a way that is about sourced information, about the network of the web. So even if that information doesn't get in front of the right people, I think the process of doing it is one that's really beneficial to our students. That said, my big worry about annotation in terms of things like these junk articles is — I forget what it's called, it's like Campbell's Law or Goodhart's Law or something — that as soon as you have any metric which is influential and has real-world impact, it gets gamed and hence becomes meaningless. So the tension, from my perspective, is that as we start scaling this up, in all likelihood, as soon as it becomes influential — the same way spam happened, the same way fake and satirical news learned how to game the Facebook system — people will start to game annotations. And so what's the answer to that? I think from my perspective as an educator, the answer is that we try to inculcate in our students the right habits of mind and the right reflexes to deal with that world, and then we leave it to these guys to solve all the hard problems.

Okay. Unless anyone has any closing remarks, we should consider moving on to lunch. So thanks — thanks to the panel. And thanks, everybody. Thank you, Stefan. Thanks.