I'm moderating; my name is Kathleen Hall Jamieson. I'm director of the Annenberg Public Policy Center at the University of Pennsylvania. Our topic today is covering science: assessing and representing uncertainty, credibility and reform. We're going to be focusing largely on COVID-19 and the challenges involved in presenting, and in covering, particularly emergent science in the context of a pandemic. We have three distinguished panelists. I know that you know them by reputation. We have Christie Aschwanden, we have Monya Baker and we have Richard Harris. I'm going to start by telling you a little bit about Christie. She's then going to give a brief presentation of the things she wants you to focus on, the things that she thinks are most important. Then we're going to go to Monya, and then we're going to go to Richard. Christie Aschwanden is an award-winning journalist and author of the New York Times bestseller Good to Go: What the Athlete in All of Us Can Learn from the Strange Science of Recovery. It begins with a study about beer and running to examine the ways that science can come to erroneous conclusions. Previously, she was the lead science writer at FiveThirtyEight. She's written about COVID-19 for Wired, The Washington Post, Scientific American, Elemental, Kaiser Health News and Nature. Christie, take it away.

Thanks Kathleen, I appreciate it. COVID-19 was a really interesting experience, I think, for journalists and probably for scientists too. This was a moment that really forced the public to reckon with uncertainty in a really explicit way, and in a way that I think they don't normally; the process of science is not normally so front and center in the public's eye. If you had told me a year and a half ago that R-naught or herd immunity or flatten the curve (does everyone remember that?), these terms that at one point seemed very jargony and scientific, would enter the public lexicon, I would have laughed. Of course, we've learned in the last few years that a lot of things that used to seem unthinkable have come to pass. But anyway, COVID really brought uncertainty to the forefront, and uncertainty is something that I have thought a lot about as a journalist, even before COVID. In fact, I'm currently noodling around with a writing project about uncertainty in science, so I would love to hear from any people in attendance here who have ideas around this, because I'm collecting string for a new project. But I think that what COVID really did was show that society's expectations of science really framed the way a lot of things about COVID were talked about. There was a lot of fear driving the messaging. People really wanted to know what to do, and there was just this demand for black and white answers. And this really creates a problem for scientists, but also for journalists like me who need to present this really pretty nuanced and complex information to the public. You want to maintain your credibility. This is something that happened to scientists, but I think also to journalists. I wrote early in the pandemic, at a time when masks were not being recommended. Given the best evidence that we had, and putting together all of the information about what was going on at that point in time, masks didn't seem like the thing that we wanted people to go out and rush to get, particularly when we were concerned about supplies and things like that.
Now, that answer changed over time, and this creates confusion among the public, who are feeling like, wait a second, I thought science was really certain. I thought that science created black and white answers. Yeah, I think there's this idea among the public that things are either true or they're false, and that a study proves something. I think everyone in attendance here today probably knows that science doesn't work that way. Science is a process. It's not an answer. And every answer that science does produce is provisional. This is really important to grasp, and I think we've just really done a poor job of explaining it to the public. In part, it's just a really hard lesson to grasp, I think. So often the way that science gets presented to the general public, I mean, there are even memes that say this, is science says, or because science. And so people get the idea that science is very certain and that it's final. And so when things change, and these provisional answers get adjusted and your priors are updated and all of that, it can feel like a betrayal, and it can feel like, oh wait, science isn't as credible as I thought it was. So one thing I've really been aiming to do, and I'm not sure that I've succeeded, frankly, because this is a really hard problem, is to get into my stories this idea that this uncertainty exists, that it's an important part of the process, and that whatever I'm telling you now, and whatever we know right now, is subject to new information and new studies and new data. Early in the pandemic, there was this rush to figure out what to do, and everyone wanted answers, and there was a lot of fear. And so it was like, oh my God, what do I do? And the problem here is that the scientific answers to all of the questions that people had almost always contained some version of it depends, or we're not sure. And this is hard. At the same time, you had these doubt purveyors and people with a vested interest who were offering certainty. They were offering these hard and fast answers, and that's really hard to compete against. And I think it's hard to explain to people who are scared, who don't know what to do, who want to protect themselves, protect their families. I'm saying, well, this is really complicated and we're really not sure right now. I can't tell you exactly what to do. Here are some options. Meanwhile, there's this other guy saying, this is exactly what you need to know. Here are the hard and fast answers. Trust me. And I think what really happened here is we saw a lapse in trust, both in scientists and in journalists. Some of this obviously was pushed by politicians. But there was also a big rush in the early days to do fast science and to get quick answers, and here again, we got some answers that were very clearly wrong. I'm thinking, for instance, of some of those early antibody studies. I think Stephanie Lee at BuzzFeed did some really good reporting showing some of the shortcomings of those studies. But you sort of had this herky-jerky situation where even credentialed experts were saying very different things, and the public really didn't know what to believe, especially when a lot of it seemed to be coming from credible sources. And this just made my job really hard, because we were being flooded with crap. And I worry that this undermines the scientific enterprise. What the public sees isn't that science is hard, and this problem especially so.
What they see is that there are all these conflicting studies and scientists are fighting. And so I've been doing a lot of thinking about how I convey to the public that the fact that scientists are arguing about this, or discussing it, or discussing possible shortcomings, is actually good; that's science working as it should. And in the meantime, we need to give you some provisional answers, but just understand that they're up for change. That's been hard to do. Back in April of 2020, I wrote a piece for Wired arguing that during the pandemic, rather than lowering the standards for science, we actually needed to raise the bar. We were getting a lot of these cases of fast, sloppy science, when what we really needed was more deliberate and careful science. We had all hands on deck, so presumably this is possible, and there are instances where people really did come together and do some pretty extraordinary, large projects. I think the vaccines themselves are a fantastic example of this. But at the same time, you had a lot of fast studies coming out that were getting criticized a lot, and that's good, but it makes things hard from a public messaging perspective. When getting at the truth is at a premium, I think we really need to employ our best tools and methodologies, not relax our standards and let anything go. And yet we were seeing that. And so my job as a journalist was really made hard, because all of a sudden I'm talking all nuance all the time, while meanwhile I'm competing with these folks who are putting out messages that everything's really simple, it's black and white, here are the answers. It's just been an ongoing issue. I wrote another story, for Nature, about the problems with herd immunity and the herd immunity approach that at that time was being championed by some people within the Trump administration. Here again, it was a really complicated story. I think a lot of people in the public had been hearing about herd immunity in the context of vaccines, and it can be difficult to explain the difference between herd immunity via vaccine versus actually contracting COVID. Just this morning I got a text from a friend asking me to explain this and saying, well, wait, I was just at my running club last night and someone was saying that everyone's gonna get this, and aren't we just gonna get it, and isn't herd immunity how we're gonna get out of it? And I said, I don't have an answer that I can text you with my thumbs. I can send you some articles I've written, but the nuance here is really complex. And this makes it really difficult to explain to the public. The last story that I wanna tell you about really quickly is one I wrote for Scientific American debunking these false claims, which continue to this day, unbelievably, that COVID death counts are really inflated, that people aren't really dying of COVID, that this is all invented and people are just saying this to make money, things like that. This was something that I thought was kind of an interesting exercise, because really what we have here is something that we cannot be absolutely certain about. We will probably never know the exact number of people who died of COVID. At the same time, we have three separate lines of evidence that point to the same answer.
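One of those lines of evidence, excess mortality, comes down to simple arithmetic: compare all-cause deaths during the pandemic against a pre-pandemic baseline. A minimal sketch of that calculation, using invented numbers purely for illustration (real analyses adjust for seasonality, population change and reporting lags):

```python
# Toy excess-mortality estimate. All figures are invented for
# illustration; they are not real data.

# Hypothetical baseline: average weekly all-cause deaths for the
# same calendar weeks in the pre-pandemic years.
baseline_weekly = [52_000, 53_100, 51_800, 52_400]

# Hypothetical observed weekly all-cause deaths during a pandemic wave.
observed_weekly = [61_500, 64_200, 66_800, 63_900]

excess = [obs - base for obs, base in zip(observed_weekly, baseline_weekly)]
total_excess = sum(excess)
pct_above = 100 * total_excess / sum(baseline_weekly)

print(f"Weekly excess deaths: {excess}")
print(f"Total excess over the period: {total_excess:,}")
print(f"Deaths ran {pct_above:.0f}% above baseline")
```

Even if individual death certificates are debatable, a sustained gap between observed and expected deaths constrains the order of magnitude.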
So we can't know the exact number, and yet at the same time we can be quite certain of the order of magnitude, and that in fact many, many people are dying of this dreaded disease. This was a story where I took the approach of trying to show the certainty within the uncertainty. And again, it felt sometimes like I was preaching to the choir. I actually have some members of my extended family, and my dad had sent this story to them because they were of the mind that the death counts weren't real, and it didn't convince them. And up until then they had considered me credible. So I don't know what the answer is, except that it's clear to me that uncertainty is really difficult for all of us to deal with. I mean, I find it difficult in my personal life. I think all of us who've been living through the pandemic have had to deal with a lot of uncertainty. It makes it hard for us to interpret things. It makes us very prone to wanting to jump to comforting answers, or answers that seem very certain, even if we know that that certainty isn't there. And then I'll just really quickly close out and hand it over to Monya. But I wanted to point to a story that I wrote for FiveThirtyEight, long before the pandemic, with the title There's No Such Thing As Sound Science. This was a story about how some of the tools and language of open science are being used and sort of weaponized against science. If you're interested in this, it's something that we've really seen in COVID, where people with vested interests are trying to undermine good science by saying that the data aren't rigorous, or that the data aren't all open, things like this, but really just nitpicking and using the language of open science and of the open science, meta-science movement. So anyway, I feel like I've been talking for a while, so I will hand it over to Monya. And I'm in charge of doing an introduction for Monya, which is my pleasure. Monya Baker commissions and edits articles on improving science for Nature, where she's worked since 2007. Her work has appeared in Nature, Science, Wired, The Economist, Slate, New Scientist and a lot of other places. Monya, it's your turn.

Thank you so much. I'm really, really excited to be here. And I'd just like to say that I'm really hoping to leave this session smarter than I started, so I'm hoping there will be a lot of debate and challenge and good questions. Since I really don't do much journalism these days, my focus is gonna be different from what Christie and Richard talk about. I bring in articles that I commission from scientists, and I'm gonna talk about how my authors can represent themselves more credibly and convey uncertainty. I think the one thing that really surprised me when I started the job was how careless researchers could be about facts outside their discipline. They might be super meticulous about their own research or their own field, and then they'll get things like leading causes of disease, or when an NIH institute started, these simple, common facts, wrong. And that undermines credibility. We save them because we have a lot of fact-checking at Nature, but it definitely makes me less willing to commission someone again when they get simple facts wrong, and that happens a lot. Something else that boosts credibility is to acknowledge critics and limitations.
I am almost always having to ask my authors to add a paragraph acknowledging and explaining their critics' points. That's really important, because when you see that someone understands the criticisms of their work, you trust their judgment and their critical reasoning more thoroughly. Something that's maybe more important, but also harder to explain, is picking the right frame for the argument. It's important to pick not an absolutist or semantic argument but a practical argument, the argument that is going to be what people make decisions on. So instead of arguing vaccines are safe, a better argument is, it is safer to be vaccinated than unvaccinated. If you try to argue vaccines are safe, you're going to get into this useless whirlpool of defining safe and open yourself up to petty arguments, whereas it is safer to be vaccinated than unvaccinated keeps you focused on the evidence that really matters practically. And then for credibility, the last thing I'll say is show your human side. I can't believe I'm still saying this 25 years after Carl Sagan died. I remember when I was pretty new as a journalist, I was reporting about dodgy stem cell clinics that were advertising therapies that really had no evidence that they worked, and didn't have great safety oversight either. One scientist I interviewed said to me, oh, I think this is really scary. And then we talked a long time about more technical things. The quote I used was, honestly, I think this is really scary. And she called me, super angry that I had quoted her, as a scientist, expressing emotion. I spent a lot of time on the phone with her explaining that I could probably get at the technical things as well as she could, but this kind of overall expert assessment that elicited a visceral reaction, that was the kind of thing that people could connect to; that was the indication that this kind of thing mattered. But I still have to ask multiple times for my authors to say, this frustrated me, or this gives me hope, those kinds of things, because it's the emotions, the things that matter to us, that make us read an article and care about what it says. I think it's also really important to think about where your audience is coming from, and to show where you are coming from. Before I went into science editing and journalism, I was a high school teacher at a low-income school in a low-income community of color. I had met this black geology professor and invited her to come to my class. I didn't tell my students that she was a woman, and I didn't tell them that she was black. I just said, Professor White is coming to talk to you, and Professor White knows how to talk to college students, and Professor White needs to be impressed with you, even though you're high school students. I never said she. And she walks into the room and somebody says, oh, someone's mom's here. Somebody actually asked her whose mom she was. And she walks up to the front of my classroom and I'm like, this is Professor White, and all their jaws are dropping, dropping, dropping. So then they're ready to hear her. And she gives this slide presentation about how she lives for weeks at a time on a research vessel, in this teeny tiny room, because she cares about the secrets that you can learn from sea sediments. Now, my students don't really care about sea sediments.
They don't really care about geology, but because this professor who looks like them cares about it, they start caring about it. So if that's the kind of thing that's in your background, if you can connect with your audience in some way, use that. When somebody can explain, I am interested in this question because it touches me in this way or that way, or, I care about this so much that I live on a boat, those kinds of things do bring credibility, because your audience knows that you've made a personal investment. Even if your audience is fellow researchers, the people who read Nature. I want to go really quickly through ways to convey certainty as authors. I would say that unless you have been touched by the gods with explanatory powers, or you're working with a really, really great infographics person who gets statistics, I'd stay away from the more technical descriptions of statistics. I would look for comparisons. I would look for: we are as certain of this as we are about evolution. We are as certain of this as we are that this microbe causes that disease. We are as certain of this as we are that bacon causes colon cancer, right? That last one is less certain. So try to tie certainty to more practical considerations. And also, if there's a behavioral change that you've made because of your research, again, show your human side: because of this, I am eating outside at restaurants. Because of this, I made the first appointment I could to get vaccinated. Whatever it is, if your research has led to a behavioral change, that's one way to convey how certain you are and how you're analyzing risk. And it shows that you're a human being trying to figure out a question. I think that when science is represented as humans trying to figure things out, people are much more understanding when the knowledge changes. So those are my thoughts. I'd love to hear feedback, and I'm really interested in the rest of the panel.

Thank you, Monya. We're now going to turn to Richard Harris, who covered science, medicine and the environment at National Public Radio for 35 years. In June of this year, he stepped away from that role, and he's now on an indefinite break from science journalism. He's the author of the 2017 book Rigor Mortis, which explores the issues of rigor and reproducibility in biomedical research. Richard, your turn.

Thanks so much. I think Christie and I shared a lot of common experiences in dealing with uncertainty around this topic of COVID-19, so I'll just fill in a couple of other elements that she touched on but maybe didn't underline. In general, when there's a big disaster, a public health disaster, we turn to the scientists at the NIH and the CDC and the other federal scientists, and we expect them to be sort of the voice of great knowledge and reason. One reason they've developed credibility over the years, in my view, is that they have generally been very good about saying: here's what we know, here's what we don't know, here's where we're going. Now, as Christie mentioned, very often news organizations may cut short the discussion of what's unknown, because a lot of editors say, what we really need to do is give people practical, actionable advice right now. So forget about what's not known. Let's treat things with certainty. And that is part of the human nature that gets us into trouble here.
But those scientists are generally aware when they're in that situation, Tony Fauci being an excellent example of this, and take care to underscore: here are things we don't know. Of course, with COVID-19, that whole system got pretty badly messed up by politics, because it wasn't just the scientists; it was the White House, it was the Secretary of Health and Human Services, Alex Azar, who were basically trying to spin things for political reasons. And of course they were trying to deal with President Trump, who had a very free-flowing connection with reality. I spent many, many evenings covering live White House press conferences where part of our job was simply to try to keep track of all the things being said that were blatantly wrong, and to bring at least some of those to the attention of my audience and say, well, you heard that, but that's not so, and so on. But obviously the scientists themselves were in a pickle, because they were often standing up there next to the president when he was saying, well, maybe we should inject people with bleach. And they sometimes felt like they weren't in a position to say, that's ludicrous. So we were dealing with not only uncertainty; this gets down to the issue of credibility, because the scientists themselves had to choose: if I go too far out there, I'll be cut out of this whole picture, but on the other hand, I'm losing my credibility by standing here next to somebody who's spouting things that are simply not true. So it was a really interesting and often painful dynamic to witness and to participate in. I will say that ultimately NPR realized we were not serving our public by broadcasting these crazy events live, and so we stopped covering them live. We obviously felt an obligation to report what was said afterwards, but that way we could contextualize it much better. But many other people just got these live streams through cable TV and so on, and so there was no really good way for them to sort fact from fancy on COVID-19. And that was a crazy situation. I remember when Tony Fauci showed up at the White House in the first briefing for the new administration, he basically, figuratively speaking, wiped his brow and said, it's such a relief that I can now just speak my mind. I mean, he always did fairly well, but you could tell he had been self-censoring a fair amount as well. So I think it's important to remember the context in which the science is flowing. In this case, it was a very difficult dynamic for journalists as well as for the public to try to suffer through. I guess that touches on not only uncertainty but also, to a certain degree, credibility, which is one of the buzzwords in our title. Let me take a few minutes to talk about one issue that really surfaced as a result of COVID-19. Coincidentally, preprints were just starting to take off as COVID hit the stage, and we as journalists had been thinking quite a bit about how we were gonna deal with preprints. They're not peer reviewed. On the other hand, peer review is not always that great. Anyway, what are we gonna do? How are we gonna deal with this flow of information? And it quickly turned from something theoretical, once in a while we'll reference a preprint, to a situation where we were dealing with them all the time. And I think that's an unfinished story for sure.
And I'm interested in talking with Christie and Monya and Kathleen and others in this session about what journalists can and should do around the issue of preprints. But let me give just one or two quick examples. One of the stories that I had been tracking as a journalist was the story about this drug called ivermectin, which is a drug made by Merck for the human treatment of parasitic worms. Fairly early on in the epidemic, somebody thought of tossing it into a Petri dish with the coronavirus that causes COVID-19, and they discovered that at sufficiently high doses, and these were actually very high doses, this anti-worm medication did kill the virus. And this got a bunch of scientists interested, adding it to the actually quite long list of drugs that they thought could be repurposed to treat COVID-19. It never struck me as being a particularly likely one to succeed, and there were many others like it. But I kept an eye on it, and there was a preprint at the beginning of this year, I guess, that did a meta-analysis of the first five human studies of this and basically said, ah, it doesn't really seem to work very much. So I had decided I wasn't gonna cover it. There's always a risk for journalists if you cover a story like this: even if you say, ah, this drug is not likely to work, most people are likely to agree with that, but some people will say, oh, wait a minute, a new drug that might work. Why not try it? And of course, people are trying it, these days in the veterinary form, because Merck, the manufacturer, was sure enough that it was unlikely to be successful against COVID-19 that they basically restricted access to the human form of the drug. So people are picking up the horse form and diluting it or whatever, guessing at the correct dosage. But at any rate, it was one of these things that started to snowball this year, because people who chose not to believe the mainstream science around COVID, alternative medicine people and people with political motivations or views, started really using this and promoting it and pushing it, and claiming that it was being suppressed and this and that and the other. So at some point, this is also a story that reminds us that journalists are no longer the great gatekeepers we once were. If I decide not to cover it, and my pals at The New York Times and The Washington Post and everybody else in mainstream science journalism decide not to give it any ink, it will still take off, because people are talking about it on Twitter. People themselves are reading the preprints, or their friends are, or scientists who are advocating these points of view are promoting these things and spreading them on Twitter. So it's a real conundrum how to deal with this information, and what role journalists can and should be playing these days in trying to manage the flow of information that we believe to be wrong.
We don't wanna give air time unnecessarily to promulgating bad ideas, but on the other hand, at some point we have to decide to step in, if it does take off as this one did, and say, okay, here's the deal, here's what the evidence is, and it's not that strong. My take on it really was that I was waiting for much more definitive studies. People kept pestering me to cover it, and I kept saying, well, if I see a really good study that's convincing, that shows this has an effect, I of course will report on it. But right now it's just in that big hopper of dozens of other drugs that people are fiddling around with and hoping are gonna be the silver bullet, or at least an effective medication against COVID. So this is partly a function of preprints and how fast information was flowing out, and how readily somebody's small and interesting idea, you know, let's try this in the test tube and see what happens, was able to mushroom and go into the public eye. I think I had another anecdote, but I'm also running out of time, so maybe we can come back to that as the discussion progresses. I guess maybe this is also best left for the discussion: the issue of reform. I know people have been thinking a lot about preprints, but I'm not convinced that dealing in any substantive way with preprints is going to solve the kind of problems that I was just discussing. For me it's a point of interesting discussion, not that I've made up my mind around that topic. So Kathleen, I will turn this back to you to open our broader conversation.

Thank you all. Our format is going to be to spend about the next 28 minutes asking the panelists to engage each other, and then we're going to turn to Q&A. Those of you who'd like to can start putting questions in the question box; I see they're beginning to aggregate, and I'll turn to them at approximately four o'clock. Let me start with a big question. We are now in an environment in which there are, in some quarters, sustained attacks on the credibility of public health agencies, hence on what we used to think of as custodians of scientific knowledge, and sustained attacks on spokespersons for public health science; that would be Anthony Fauci. And we are beginning to see in our social science research that among those who are reliant on media channels that have advanced those attacks, the credibility of Anthony Fauci has been dropping. In the public overall, we still have high trust and confidence that we're getting good information from the FDA, the CDC and Anthony Fauci, but you're starting to see erosion in some places. As you are writing, and thinking about helping people to write about the science, are you concerned at all that some of the sources that we have taken for granted as certifiers of knowledge are no longer trusted as certifiers of knowledge among some of those who read you, or who pay attention to the knowledge you're disseminating?

I would just say absolutely. I think I mentioned this earlier, but I wrote a story about how many people have died of COVID, and I couldn't even convince people in my own family with it. And I'm someone they were otherwise inclined to believe; up until then they had always praised what a great journalist I was and how proud they were of my work, until all of a sudden it was clashing with their belief systems. And I think it's just really difficult.
And I think about this a lot, because the fact is, when people are deciding what to believe, I think those of us who've worked in this area know that facts aren't the thing that convince people. People think in stories, they think in narratives. But there's also this whole question of what constitutes credible evidence. For me, a scientific study is much more credible than something I read on the internet from, say, a homeopathic doctor, but for some other people, that other person and the story they're peddling, or the story they're hearing, is much more credible. I think some of the most powerful sources of information that people pay attention to are personal anecdotes from people that they know. And so you hear a lot of stories of a friend of a friend, or my friend's uncle, and things like this. I encountered someone the other day who had decided not to get a vaccine because his friend's uncle had supposedly gone deaf after getting the vaccine. How do I counteract that with facts? I really can't. It's really difficult. And we're in a situation now where we have entire media enterprises that are built on, you know, not the kind of journalism that the three of us here have come up doing, traditional journalism, which is based on facts. My job as a journalist is to go gather the information, try to put it together in a coherent story, and separate what's true from what's not. It's not to give both sides. Sometimes there aren't both sides, and it's really important that we don't present the idea that there are two sides when sometimes one side is just factually incorrect. But once that erroneous information has been judged credible, or it's coming from a source that a particular group considers credible, it's incredibly hard to overturn. And I think in those instances, really the only way forward that I have seen work has to come from within that community. It has to come from someone who has been deemed a credible authority, and lots of times that's not me.

And Christie, I thought I heard you say something else in your presentation, which was that you drew on three different strains of evidence in order to make your case. Richard, are you seeing people look to alternative ways to certify, including drawing in forms of convergent evidence from multiple sources, as a means of establishing the credibility of things that might be challenged now by people who doubt the traditional authorities? And are there other ways in which you are responding to that challenge?

Yeah, well, this is a tough issue, of course, because we talk about the public, but there is no public. I mean, it is so fractionated these days. If you ask me what the NPR audience is doing, that's very different. I think they are overwhelmingly in support of the ideas and the authority that Tony Fauci carries. People who don't like him are not listening to NPR; they're getting completely other sources of information. So this is a further conundrum for me. What I tell my audience doesn't necessarily have anything to do with that other group of individuals. Now, of course, there are NPR listeners who don't believe in vaccination, childhood vaccines.
I mean, nothing is monolithic. But I think part of the real issue here is that there are so many different channels of information, and people don't necessarily go from Fox News to NPR that much. You're gonna choose one or the other. So when people do select multiple sources of information, it's more likely to be The New York Times plus NPR, or Fox News plus some Facebook friends group that is full of conspiracy theories. So I think it's a deeper problem than just thinking about how I as a journalist can put out what I consider to be the best information.

And Monya, Christie talked about turning to sources that people can trust. I see part of what you do as helping us build trust in the people who write for you, and you gave us some preliminary advice about the ways in which you do that, by certifying personal experience and personal credibility. Are there other ways in which you can help the scholars who are writing for you build the credibility that will overcome the reservations people might have about other forms of authority, and lead to trust in them as a voice?

Actually, before I answer that question, I just wanted to remember to say that I now have a new job when I'm working on my articles, and that is reading for any individual sentence that might be taken out of context. I used to have to do that routinely if I was covering something about embryonic stem cells, or something that touched on evolution. Now I just do it routinely, and I insert little ugly bits so that I don't have anyone tweeting Nature says, and then something that is out of context. It often makes for an article that does not read as well in its entirety, but it is de-risked. So it's kind of painful. I think one thing that I find myself doing a lot is just trying to get my authors to really drill down to why it matters. You know, academics are used to writing for an almost captive audience. They put out a paper and they know that everybody in their field needs to read it to stay current in the field, and so if they put it out, someone will read it. But that's not the case for the opinion articles in Nature, because our readership is too diverse. So I'm always trying to talk to them about, how do you make it so that somebody wants to read this article? How do you make it so that it's relevant? I'm really pushing them to talk about why they study the questions, rather than the intricacies of, say, the technical definition of gain of function, that kind of thing.

Richard, let's think about your statement about preprints. Obviously this is emergent science in the middle of a crisis, with high levels of public anxiety, and the knowledge isn't really there to answer some of the key questions. There's a whole new set of constraints on journalists as they try to decide: when do we move this into the public discussion and when don't we? Do you have tests of strength of the evidence that you could articulate, that would help guide listeners, readers and others to a sense of what constitutes strength of evidence? Because you said you were gonna wait for the quality study, but you didn't tell us how you would know it was a quality study. Right.
One thing I did do, as you mentioned in the introduction, I'm no longer practicing as a daily journalist, but one reason I would actually report on preprints even before COVID was if I saw several reports that were all pointing in the same direction, from different labs coming to similar conclusions. Then I would figure, okay, this is not just some idea that is likely to evaporate when somebody else takes a look at it. So that's one thing I really look at: trying to decide whether there is supporting evidence that makes sense, either in the published literature that I hadn't noticed, or in other preprints, preferably in the published literature. That's certainly one test that I have often applied. Sometimes, if it's just a huge finding that seems really important, even if it's questionable, I think journalists do feel like, we need to report this. This is a big thing, it's all over our Twitter, it's getting a lot of attention, and we have to do our best job to report it, to bring in the uncertainties and so on, and try to convince our audiences that this is not the last word on the topic. But people read selectively; people retain what they want to. So you can have all the caveats in the world in your article, and if people are totally convinced of one side or the other, they'll skip the caveats. So that's an issue. In terms of the ivermectin study, somebody has posted a publication in our comment section that I actually have not read yet, to see whether that is the definitive article that I had been waiting for. I can judge by the coverage of other journalists who follow the same kinds of practices that I follow, and no one I've seen has reported on it as really strong evidence that it actually works. There have been plenty of studies of other medications that seemed really good at first blush. There were even some positive results around the antimalarial drugs for COVID-19 that got people really excited. But single studies generally don't last. I think the other really important issue here is that once people start to become true believers, or really believe that what they're testing is gonna work, they're gonna interpret their results in ways that produce positive results. And if they get negative results, they're not gonna publish. This is familiar to the world of meta-science, for sure: if you do a meta-analysis, you're missing all the studies that found it didn't work, because nobody bothered to write them up, or few people posted preprints saying this didn't work. The bar was maybe lower for COVID-19, because there was a fair amount in the preprint literature around negative studies as well. But I think that's another real pitfall that journalists have to be as aware of as scientists are: how to interpret a study or a meta-analysis.

I think the other thing that we were really seeing here is the creation of factions within the scientific community, and this, I think, made it a little more challenging to report as well, particularly in the early days. I mean, there's still a lot we don't know, but a year ago there was a lot more that we didn't know, and there were groups forming around particular ideas. Scientists are humans too. And I think preprints can be really interesting in a lot of ways.
They're not a substitute for peer review, but I think we all know that peer review is very imperfect too. One thing that I will sometimes do when I'm assessing a study and I'm not sure about it is run it by someone who's a statistician, or someone who does study design, and ask them specifically: is this statistical analysis appropriate in this case? Are these methods appropriate? Because I'm not an expert in all of this stuff, and although I've learned a lot over the years, there's way more I don't know, and I don't trust myself to be the final arbiter here. But I can go to someone, and I will specifically look for someone who doesn't have a dog in the fight, and say, is this an appropriate way of studying this question? And in particular, is the answer that this study is presenting really applicable in the way that it's being applied? Is this something that can be generalized? Because the problem here is that I wanna know whether I should wear a mask to the grocery store, right? And maybe there's a study that looked at masks in a lab. That's not the same situation. It doesn't mean the findings don't apply, but the study can only answer questions about that specific set of circumstances, and then we generalize out from it. In many cases, that's pretty appropriate, and we can feel fairly confident. But in other cases, there have been studies during COVID that have just been wildly misapplied and misused to make arguments. And I think this is where it's been confusing. The other thing that I think has been really interesting is that it's been hard to assess. I think we all have scientists that we look to as being credible, but in this case, a lot of them were disagreeing, and in some cases disagreeing very vehemently. So I tried pretty hard not to use that as the only badge of credibility, whether I think someone's done good work in the past. One of the things that I personally try to do is always challenge whatever conclusion I'm instinctually wanting to come to on something. Because I think the hardest thing, but also the most important thing, is to maintain an open mind, to make sure that you really are always open to new evidence and really looking at it with clear eyes. It's very easy to be dismissive of things if they're challenging your preconceived conclusion. And I think it's just human nature that the more you look at a question, the more certain you become, and that's okay; that's what evidence does. But we still have to be open to that overturning and changing our priors, and that can be difficult to do. I think it's especially challenging with COVID, where you saw a lot of good scientists really coming down on very different sides of various debates. The other issue I think we should keep in mind is that we have science, and we have data showing certain things, but at the end of the day, a lot of the decisions that need to be made, policies, are not just about science; they're also about other factors. So often this comes down to value judgments, and those aren't things that science can answer. I think it's helpful to be very clear on what part of this is a scientific question, and what part of this is a values judgment, a determination that's being made based on what's important to you, whether you value having kids in school versus, you know, protecting immunocompromised people from getting a disease.
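Richard's earlier point about meta-analyses missing unpublished negative results can be made concrete with a toy simulation: assume a drug with zero true effect, run many small, noisy trials, and let only the positive-looking ones get written up. The study size and publication threshold below are arbitrary choices for illustration:

```python
# Toy simulation of publication bias: a drug with NO true effect,
# where only studies that happen to look positive get written up.
# A naive average of the published studies then "finds" a benefit.
import random
import statistics

random.seed(42)

def run_study(n=50, true_effect=0.0):
    """Simulate one small trial; return the observed mean effect."""
    outcomes = [random.gauss(true_effect, 1.0) for _ in range(n)]
    return statistics.mean(outcomes)

all_studies = [run_study() for _ in range(200)]

# Crude publication filter: only "positive-looking" results get posted.
published = [effect for effect in all_studies if effect > 0.15]

print(f"True effect:                    0.000")
print(f"Mean of all 200 studies:       {statistics.mean(all_studies):+.3f}")
print(f"Mean of {len(published):3d} published studies: {statistics.mean(published):+.3f}")
```

The pooled average of only the published studies shows a benefit that does not exist, which is why the provenance of a meta-analysis's inputs matters as much as its math.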
I wonder if the journalistic community should be telling us more about how it determines whether a study is worthy of being shared, when it shares the study, so that the public would come to understand, when we have the kind of competing, selective uses of the science that Christie is talking about, where I have my hydroxychloroquine study and you have your hydroxychloroquine study and they're reaching opposite conclusions. Perhaps the journalistic community, and the science community talking to the journalistic community, should begin to say such things as: there's a difference between an in vitro study, an observational study, and a randomized controlled clinical trial that has placebo controls and is double-blinded; this one falls here on the spectrum and that one falls there, though they could all be well or poorly done. There's a different strength of knowing, a different way of knowing, as a way of explaining that this is not just, I picked the study I agree with ideologically, trust me, I'm certifying the science.

You know, I think the journalistic community is definitely adapting and changing and improving, but I'm not sure that they're super explicit about the process. I see much more commonly, and I ask for myself, paragraphs about what future studies need to be done, what other studies need to be done, what studies people are waiting for. That's a really nice way both to show that science is a process and to prepare people for uncertainty, to give readers a sense of uncertainty. And, I'm really curious what Christie and Richard have observed, but I think the way that journalists cover individual articles has really changed. I think individual papers are coming off the pedestal and being seen as part of a body of research. I mean, if you look at some of the how-to-be-a-science-writer books from 10 years ago, there was the one-study story. This was the first kind of news article you would learn how to write, and they would say: interview the author, interview an outside source, maybe interview a second outside source, big bam boom, this is the article that you assign to the intern on their first day. It was a canonical type of science news story, and I don't see that as much. I don't see editors handing off a single study as the definitive word to a relatively inexperienced journalist. They want journalists who know the topic a bit, who know, oh, if I wanna understand the importance of this paper, I need to make sure I bring in that paper and that paper, and these are the three people that I need to talk to. So, and I would love to know what Christie and Richard think, I do think there's an increased sophistication. I don't think that increased sophistication has been communicated to the general public.
Yeah, I think it's spotty. For good old science papers about black holes or whatever, Monya, I think you're right; I think we see a lot more of that. But every day I see single-study reports in The New York Times, and they will throw in a line like, oh, this study hasn't been peer reviewed, and they figure that sort of absolves them of any further explanation. It's like, well, I said it wasn't peer reviewed, so it hasn't been published, or whatever. And I think that's lame, and I see way too much of it, even in my beloved New York Times, from journalists I know and respect. I think they're wrong on this one. But I also think that you can get too far into the weeds explaining to people the level of evidence. I think you'll lose your audience if you spend too much time saying, well, this is a case-control study, which carries less evidence than a randomized controlled trial. Sometimes that is necessary and helpful, but it's also easy to give people so much information that they don't quite know how to handle it, if they don't really have the background to understand all that. I'm not sure it's serving the public to spend a lot of time talking about the methodology. But I think it's very important for us as journalists to be exercising that judgment and saying, oh, well, this is just an in vitro study, we shouldn't get too excited about this, or, this is in mice. There's certainly a long history of cancer cures in mice that have gotten a lot of ink undeservedly; I think we're a step away from that, but still. I think it's about knowing how much your audience wants to know. Christie probably thinks about this differently depending upon which of her publications she writes for, because her publications have very different audiences, and I think they have different appetites for that kind of stuff, right?
That's absolutely correct. But I also wanna go back to what Monya was saying. I think it's absolutely true that a lot of journalists have gotten a lot better at this. The culture of what's okay and what's good science journalism has changed during my career. Early in my career, I was writing a lot of those stories that she was talking about, where it's one study, you get an outside voice, maybe two, boom, bam, bam, you're done. I think there's a lot less of that. That's not to say it's not done, but among people who self-identify as science journalists, who are writing for maybe a more sophisticated audience than a general daily news audience, I think the bar has been raised. At the same time, and this may be something that a lot of scientists aren't as aware of, I think the economics of what's gone on in the news business over the last 20 years has played a big role here. It takes a lot more time, a lot more bandwidth, more resources to do those better stories, and a lot of newsrooms just don't have those resources. Reporters are overstressed; they have to do more with less. I've been a freelancer for most of my career, and I can tell you that as a freelancer I make a lot less money when I do that really careful, slow journalism than when I can churn things out. And that creates an environment where it's in a lot of cases in the journalist's interest to do a quicker job, and maybe to think, I'm not gonna make those five extra calls, particularly if some of them won't end up in the story, because I have a deadline, and I'm not getting paid by the hour, I'm getting paid by the piece. And I'll just mention, I've been doing this a long time. The first story I ever published in The New York Times was 2006, 2007; I was paid a dollar a word. I have a story coming out in The New York Times tomorrow or Monday; I'm getting paid a dollar a word. That's a long stretch of no pay raises, and meanwhile everything else has gotten more expensive. So I think the economics here are one reason why it's difficult for good journalism to happen. Even among staffers, a lot of places have lost their copy editors; people are having to do more with less.

I wonder if there's a way for journalists to conventionalize some language, to avoid the problem that Richard's concerned about, which is that too much methodological detail is gonna lose audiences and become an incomprehensible story. Which is to say: study in mice, may not apply to humans, just as a bottom line; observational study, there may be other factors accounting for the outcome. We could begin to try to teach people this kind of thing the way we have before, because people can recite back to us, correlation is not causation. We've basically given them a bottom-line caution. So when we're covering things that seem worthy of coverage, but may not yet provide answers conclusive enough to justify, for example, humans taking a drug and an FDA approval, we're starting to signal that science has some rules that govern how these processes are able to draw inferences, and when I pick one rather than another, that little rule ought to say, well, this one may be stronger than that one, all things being equal, if they're of comparable quality. Let me ask one last question before we turn to the questions
in the chat. So what do we do when we get the big retraction? There was a major hydroxychloroquine study, big retraction; a major ivermectin study, big retraction. How do we cover it when science de-certifies knowledge that had been offered to the public with a lot of exposition, big headlines and major outlets? Lots of people believed it, scientists thought they knew something, and now science is saying, whoa, something was wrong here. And Richard, you wrote a book that dealt with this, so I'm gonna start with you.

Yeah, it's a big issue. Just the way scientists are reluctant to admit their mistakes and retract unless their feet are really held to the fire, I think the attitude of journalists is also pretty much, well, people have forgotten about that, that was months ago, so I don't need to make as big a deal of it. And we always see that, by and large, corrections are buried inside the papers, regardless of where the article itself ran, with some notable exceptions. But in the case of hydroxychloroquine, clearly plenty of coverage has been, oh, this stuff doesn't work. So even if people didn't focus on the retraction per se, I think the net result is that people who pay attention to mainstream news publications and organizations know that hydroxychloroquine doesn't work. So you don't necessarily need a four-and-a-half-minute piece on All Things Considered that says, remember that four-and-a-half-minute piece we had six months ago? Well, now we're gonna say that's not true. But you definitely want to make sure that the message accumulates, that every time you talk about hydroxychloroquine you say, an unproven treatment, or one that didn't work, or whatever. I think it's more important to convey the information the public really cares about, which is, should I take this or not, as opposed to talking about the more mechanical part of this, that this was a paper that had a retraction, and that the scientific enterprise had corrected itself, had stepped in to correct it. But I think you raise a good point, Kathleen, which is that we maybe don't spend enough time talking about the extent to which the scientific community is successful in self-correcting. Retractions are a useful tool, but they're obviously very underused. So we should maybe talk more about how the scientific enterprise goes about truth finding, not just individual scientists, but publication and these other methods that help us tilt towards the truth in the long run.

Yeah, I agree. And I think part of this is really about showing more of the scientific process and how it works.
I think if the lay public had a better understanding of how science works, the actual process of science, this would be a lot easier for people to understand. But I think it's important for us to cover retractions and to show that this is science actually working as it should: when we find mistakes, we correct them. Journalists do this all the time; it's the standard in our profession. If I make an error in a story, it doesn't matter how small or insignificant it might seem, we run a correction and we make sure it's corrected on the record of that story. And I think science and scientific papers should be the same way. As journalists, I think we can really do a service to our readers by walking them through this and explaining to them what happened. I also think it's important for the public to understand that science is done by people, and human beings are fallible. There are people who cheat and lie in science, just like there are people who cheat and lie in any other profession. One of the things that distinguishes science journalism from science communication is that we're taking a critical eye to science. It doesn't mean that we're coming to tear it down; it doesn't mean that we don't think science is great. But it's our job to look for the flaws and the problems, to identify them and to cover them, just like we would do if we were covering politics or some other subject. It's not our job to be science boosters, and I think this is something that a lot of scientists don't often understand. Monya, would you like to add something?

Yeah, I was mainly agreeing with what's been said before, but I do want to give a call out for Retraction Watch, which I think is an incredible service. Because, you know, when we're working on articles, as soon as one publishes, it no longer exists in our brains. It's not something we're actively thinking about; we're thinking about the next story. So sometimes we don't automatically find out that something we've written about or covered has been retracted. There aren't those automatic processes where you're scanning tables of contents to look for things that you're interested in. And so Retraction Watch is a nice way to alert people that, oh, this has been retracted, because the authors are probably not going to get in touch with you, though sometimes it happens, and say, hey, that thing you wrote about? Not working. So these kinds of alert systems are really important. It can be difficult to get your editor to commission a story on a retraction, because there is a bit of competition for space and resources; I think people are more willing to add a correction to a story. I've also seen both scientists and journalists say, hey, this article that I wrote about, or that I put out, has been retracted, and you just see it on Twitter, which I think is another way of getting the word out. And I think those should be used more.
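One way to build the kind of automatic alert Monya describes is to periodically check DOIs you've covered against Crossref, which registers retractions and corrections as editorial updates. A minimal sketch, assuming Crossref's public /works endpoint and its updates filter behave as documented; the DOI below is a placeholder, and real use would add error handling and polite rate limiting:

```python
# Sketch: ask Crossref whether any registered work declares itself an
# editorial update (retraction, correction, erratum) to a given DOI.
# Assumes the public Crossref REST API; the DOI used is a placeholder.
import requests

def find_updates(doi: str) -> list[tuple[str, str]]:
    """Return (update_type, notice_doi) pairs for updates to `doi`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}"},
        timeout=30,
    )
    resp.raise_for_status()
    notices = []
    for item in resp.json()["message"]["items"]:
        for upd in item.get("update-to", []):
            if upd.get("DOI", "").lower() == doi.lower():
                notices.append((upd.get("type", "unknown"), item.get("DOI", "")))
    return notices

# Hypothetical usage with a placeholder DOI:
for update_type, notice_doi in find_updates("10.1000/example.doi"):
    print(f"{update_type} notice: https://doi.org/{notice_doi}")
```

Run on a schedule over a list of covered DOIs, this is a crude stand-in for the table-of-contents scanning that no longer happens once a story ships.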
Yeah, I think Twitter has become a really useful reporting tool. I try to be very mindful, though, that only a very small subset of scientists is on it, so it's important not to misrepresent it as the scientific consensus as a whole. But I find it extremely useful for listening in on some of the chatter that's going on, and very often it's the first place I will hear about a particular study, or a retraction, or a discussion of shortcomings in a particular study.

I'm going to turn to the chat queue, to try to retain credibility having promised that I would do so, and I'm just going to pick the questions most interesting to me. Is the single-study approach to journalism a historical relic, from a time when single papers were indeed incredibly impactful and opened new paths, thinking of Einstein's papers on relativity, for example? And it would be consistent with the still-persistent myth of the lone-genius scientist. Jump ball; anybody take it.

I think it's a relic of science journalism more than of science. In the earlier days, and I don't know, Richard's been doing this longer than me, I'm not calling you old, Richard, I promise, but particularly with things like space there was a lot of science boosterism, and there was just more trust, like, oh, this is all great and we can just trust scientists. I'm not saying I mistrust them, but I think there was maybe less reporting that didn't start from the assumption that everything is perfect to begin with.

I would also chalk this up to marketing on the part of the scientific journals like Nature and Science, which weekly put out a press digest saying, here are all the really exciting studies we have to report on. It's low-hanging fruit for a science journalist who says, what am I gonna do this week? And with the embargo system, you basically know you're not gonna get scooped, because no one can go with it until the embargo is lifted. So that whole system, which benefited the journals and benefited the journalists, also encouraged that kind of approach to science journalism. I think it has waned somewhat, but it's still active; you can certainly still read stories of that nature on a regular basis. But I think that was a very powerful driving force as well.

I agree, yeah. I think it's journalism and promotion that have changed more than the scientific paper. I'm not saying the scientific paper hasn't changed, but in terms of coverage, I don't think it's because of the nature of the paper.

Do any of the three of you have in mind a single study that's the equivalent of Einstein's? Would anyone like to say, wait a minute, there are in fact momentous studies that are that important?

I'd love to know the coverage of Einstein. I know that one was rejected, right? If I recall, it had some trouble getting published because there wasn't a lot of experimental evidence. My bad, I don't know my science history well enough.

Yeah, I think occasionally the New England Journal or JAMA will publish a study that's been many years in the making, a very large analysis of some important treatment that really will change the course of treatment.
And while that's not Einsteinian impact, there are some examples you can look to where a single study, if it involved huge numbers of people and has clinical relevance, may change practice immediately. Those kinds of studies I think really still do deserve individual attention.

Yeah, and some tools, like CRISPR and RNAi.

Let me pick up another question. There's some evidence that scientists themselves are one of the sources of misinformation in science journalism. Do you think the role of journalists is to report what scientists are saying, or do journalists have a responsibility to do independent vetting of what scientists say before conveying their messages to the public?

I absolutely believe it's my job to do some vetting, and I assume I'm not alone in this. I have very often had scientists misrepresent their findings. I think we've all seen this: if you read the abstract of a paper, the conclusions heralded there often aren't held up. They're the just-so stories being told about the data. And this goes back to my earlier point about being not a science booster but a science journalist; it is our job to interrogate the evidence and look at what's there.

I have an example of this from COVID. There was a preprint that got a fair amount of attention because it was a group in Italy trying to figure out the incidence of long COVID among children. They did a questionnaire of parents, about 130 of them, and they came to the conclusion, or the way their findings were interpreted was, basically, that 40% of all kids who get COVID end up with long COVID. It started to be picked up, particularly by alternative practitioners and so on. I actually went and looked at the original paper, or preprint, rather, and the authors themselves were actually pretty badly misrepresenting their own findings. They essentially didn't mention what I thought was the most important finding. It was a questionnaire asking parents about the health of their kids, and a fair number of kids had runny noses after COVID that parents didn't remember from before, and that was a significant part of the conclusion that these children were symptomatic from COVID lo these many months later. But they also asked the parents, overall, on a scale from zero to 100, how do you rate your children's health? And pre-COVID, I have the numbers here, the average of this health index was 96.3, which is pretty high, and after COVID it was 92.9. So it was a very small decrement, with no statistics to know whether there was even a significant difference at all. And they basically buried that finding in their paper, which ultimately was published, by the way, in an obscure pediatrics journal. At any rate, I actually read that paper when people started talking about it, and I saw those conclusions and realized, these scientists are not being very direct about their own findings, and they're making a lot out of pretty thin gruel. So absolutely, that's an imperative part of good science journalism.

Next question. We scientists are very concerned and very aware of hierarchies of expertise. There are MDs at the bottom, then PhDs in a subject (and I think there are MDs objecting to this one), then people who spend an entire career studying something.
And then some who are at the very top of their field. But the public seems to believe that everyone with a PhD is exactly the same, and that MDs are more knowledgeable than that. How do we convince the public that quote-unquote expert is a multi-level concept? And do you think this is even a problem?

I actually think it's worse than that. I think we're in a situation right now where expertise in general has been very much downplayed, and there's been a lot of doubt about whether you can trust academics at all. For people who are dismissive in that way, it doesn't matter to them whether it's an MD or a PhD; those sorts of titles are not really things the general public weighs. But at the same time, I think it's important for us to show how we know, why this person is the appropriate person to be speaking on this. And I do think it's absolutely a problem. The thing I see the most is people with an MD going around talking about things they have no training in, although we also see people with PhDs in an entirely different field going on about a field they know nothing about.

Let me just add one small point to this, which is that the Associated Press decided decades ago that if you're an MD, you get the title doctor in a story, and if you have a PhD, you don't, the rationale being that people take the word doctor to mean somebody who is capable of issuing medical advice. So doctors and dentists are called Dr. So-and-so and people with PhDs are not. Any news organization that follows AP style, which is many, including NPR, will not call a PhD doctor and will call an MD doctor; many other news organizations, I think, are inconsistent about how they deal with that. And there are always a few exceptions, like Dr. Martin Luther King Jr. and Dr. Jill Biden. But at any rate, that further muddies the waters the questioner has raised.

How do you certify the expertise of someone? When you're quoting someone, or you're building up that person's credential because you're putting forward a commentary by that person, what do you see as the things that certify that this person is worthy of acceptance, that is, that this person is able to offer you knowledge that has some presumption of acceptance-worthiness? What do you tell them?

Tell who, the individual or the reader?

When you're telling someone that this is an expert on a topic by virtue of quoting them, are there certain basic lines of argument that you use to certify their credibility?

In my own mind, yes. Honestly, I figure that if people trust me, they trust me to choose people who know what they're talking about, and they're gonna trust that I'm not gonna pull somebody out of left field. So I don't spend a lot of time building up people's bona fides in an article; I expect that if people are listening to what I have to say, it's because they're likely to trust me. But I do take care to make sure that I'm interviewing people who have credibility. I look at any variety of things, but often it's how I get to them: referrals from people I know and trust in the field who say, oh, here's a good person to talk to. I think that's a very important thing. And obviously, whether they're authors of major papers or review papers on a topic, I look at that as well, or various other markers of scientific credibility.
There's no one way to do it, but yeah, that's kind of the approach.

I actually look to see that the person has published in that field or has some sort of visible expertise in it. And sometimes I would rather quote someone from a smaller university that doesn't have a big name, but who has a lot of publications and seems to have real expertise, than someone at a big-name school who's speaking out of their lane a little bit. That's something we've seen a lot in COVID: a lot of people speaking outside their expertise and getting a platform because of the name they have. I won't name any universities, but yeah.

I remember we had a conversation about this years ago. Anyway, to me it feels like less of an issue than the public just feeling that experts in general are not credible. The people who are resistant to a lot of the stories we're putting out are resistant to expertise writ large, and the difference between MD and PhD is maybe beyond them. Although they will take the MD, or the... there are a lot of alternative practitioners who will call themselves doctors too, and who do not have MDs.

One reason for asking the question is that you saw throughout COVID people appearing, largely on cable, for whom the MD was the certification, and then some institution was attached to that MD: this is an MD and they are tied to X hospital, X university, et cetera, as if it were a two-pronged certification. They have the degree and they're tied to someplace reputable. And occasionally we found that the tie to the place they were supposedly attached to was so loose that you wouldn't have considered it a tie by journalistic standards. But the implication to the public seemed to be that if you had the degree, and you were tied to something whose name you recognized, that should be sufficient to qualify you to speak about COVID; not that there needed to be something else, namely that you had studied it, or had some kind of primary expertise you were bringing to bear.

And I thought I heard Kristi say that one of the things you do to certify someone is to make sure they're not involved in the study as you're vetting the study to determine how and whether you're going to cover it. I've noticed that in some cases journalists tell us that: they quote someone and say this person had no role in the study. That's a way of telling the public that you have an independent critique, an outsider perspective. So I'm interested in the ways in which, in a time when expertise is being challenged and institutionalized authorities are being challenged, we're using the resources of journalism to help put in place some legitimate basis for inference about the ability of people to speak knowledgeably about knowledge.

Let me jump to another question. How can you convey the intrinsic, irreducible uncertainty of science to the public without eroding trust in the scientific process?

That's the million-dollar question, right? That's the thing I've been spending a lot of time thinking about: how do we help the public understand this without losing their trust in science?
And one conclusion I've come to is that the credibility of science can actually be increased if we help people understand the process a little bit more. Because I think this binary view the public has, of science as a thing that identifies truth in black and white, is actually what corrodes trust in science. Each time those provisional answers are overturned, people say, oh, science isn't credible, instead of saying, oh, here's science working as it should. So I guess that's the approach I've come to. At the same time, this approach requires a lot of nuance that very often we just don't have the room for, or editors don't have the appetite for.

And frankly, here's another issue that hasn't been raised here: how many people actually read our articles? I hate to say this, but this happens all the time. It's not just people tweeting things they've clearly never read, making a point that was already addressed in the story, which they'd know if they had read the damn story. Even putting nuanced language in the story, it may not get through. People remember the headlines and the big takeaways; they may never get past the lede. And that makes it very challenging, if that's the environment we're working in. A 5,000-word story is gonna get very few readers compared to a 500-word story; the number of people with the attention span to read it is gonna be much smaller. So the longer story will get into a lot more nuance, but it will also reach fewer eyes. How do you negotiate this? I don't know. I would love to know the answer, because nothing is apparent to me.

But I think this idea that scientists making mistakes undermines credibility is to some degree overblown. Somebody else will know more details than I do, but I know that Brian Nosek's group has looked at this a little bit, and I seem to recall that a couple of years ago Pew, P-E-W, in their surveys about trust in science, added a question along the lines of, do you trust scientists less if they admit to making mistakes? And they found that trust went up. So I don't think it's a big concern that admitting mistakes will undermine scientists' credibility. I do share Kristi's concern about diminished respect for expertise, but I don't think that admitting mistakes is eroding trust in expertise.

I also think it's not a simple binary. If you ask people, would you get on an airplane, are you worried that the laws of physics will stop working while you're in the air, almost nobody will be concerned about that. Or whether they trust antibiotics: the numbers are very high. At the same time, those same people who trust antibiotics and a lot of other things about modern medicine may also be people who don't accept the theory of evolution. They can hold these ideas separately in their minds; it's not either-or. It's not people who believe in science and people who don't. People pretty much pick and choose, and unfortunately, in this hyper-politicized environment, there are more opportunities for people to pick and choose.
And in some ways, the problem rests less in people's perception of science than in these politically driven influences that are pushing people toward conclusions. That, to my mind, is a bigger problem.

Let me stay on the topic of uncertainty. What is your sense of whether readers are resistant to uncertainty? Are there strategies that you use to get readers excited about uncertainty? For example, I wonder whether the public would have fun reading about scientific studies in a registered-report format, learning about the study first, with the results as the big reveal.

I would love to think that, because I'm a nerd about this kind of stuff, but I have personally had trouble convincing editors of it, and editors are only a small proxy for the reading public. I think uncertainty is really exciting; that's where scientific discoveries are made. You look at the circumstances under which a finding doesn't hold, and that's how you find the boundaries. This is really the frontier of scientific discovery. But that's a hard lesson to learn, I think. I do think there's potential there to bring that excitement to readers, so maybe this is an opportunity we're missing a little bit.

I'm sorry, I wanted to jump in. Building on what Kathleen said about formalized language, like, this is a study in mice, or this is a study that has this specific limitation, I think there need to be routine caveats: these are the studies that other scientists want to see, you see it a lot, these are the studies the authors themselves hope to do. When you can wrap uncertainty into next steps, I think that's a good way to make things clear: these are the alternatives, these are the explanations that still need to be ruled out, those kinds of things.

Next question. Every few months there's a debate about whether journalists should run their articles by scientists before publishing. On one hand, people will argue... whoops, I just lost my place in the form; I'm gonna lose that here. On one hand, people will argue that journalists should be independent and free to write articles how they wish. On the other hand, people will argue that there can be high costs to publishing an article that misunderstands the science. Both of these seem to be valid points. How do you navigate this tension in your own work?

Oh, go ahead, Kristi. Oh, okay, I'm sorry, I didn't mean to dominate.

I solve this with fact checking. The scientist doesn't get to write the story, because again, I'm a journalist; I'm not taking dictation. Science communication is a different thing. The person working at a university or institution to publicize the finding is tasked with telling that scientist's version of the study, their research, whatever. But as a journalist, that's not my mandate. And most of the places I work for absolutely do not allow me to run a story by sources beforehand; that's just not allowed. I'll tell you one of the reasons: what happens is I get a nice quote, and people want to change it because they sound like a human being, and editors really hate that. But that said, what I can do, and do, is run particular facts or particular ideas by researchers. So I may in fact run a quote by them to say, here's what you said.
I just wanna be sure this is correct, because the other thing is that scientists are human, and this happens sometimes: people will misstate things in a story. I've actually had it happen multiple times that I had to run a correction to a story because I perfectly quoted someone who misspoke. So it is important; I understand the importance of running things by people. But the problem is that we're not allowed to give the subject a say in how the story is done.

Right. The other issue for me, as a daily or hourly journalist, is deadlines. I may be covering something live, and of course you can't tell the host, hold on while I go call so-and-so and fact check. So there's a real practical problem in daily journalism about fact checking. And sometimes scientists say, I insist that you fact check, and you can't publish your article until I've had a chance to review it. That's a non-starter, because the article can't be held hostage to a scientist who may or may not actually make himself available for follow-up. When NPR has more time, when we're not on those super-tight deadlines, there is extensive fact checking. But again, you also wanna make sure that a scientist who doesn't come out looking good in an article isn't the one who gets the chance to say, oh, I disagree with my critics, you didn't give me enough time, blah, blah, blah; you don't wanna engage in those conversations. At least what I can do on a daily basis is, if I have any questions or uncertainties, reach out to somebody and say, just double-checking this. So more of that goes on, perhaps, than meets the eye. And I agree, in a perfect world there'd be even more, but we're under tremendous time pressure as well.

I think calling and focusing a fact check on the parts of the article where a scientist has expertise makes a lot of sense, but sending an entire article to a scientist is much more likely to corrupt the process than to bring in meaningful corrections. I'm not saying that journalists don't make mistakes; journalists definitely, definitely make mistakes. But imagine sending your entire article to, who's a politician that's generally trusted? If a political reporter did that, you wouldn't trust that political reporter, right? It's on the journalist to get the story right, and I think that's best done by focused fact checking, focusing on the areas where you trust the researcher. Often what I'll do is call people back and talk through their specific bits, and then I'll say, okay, here are some other ideas that I've encountered, just to get a sense of the consensus. But an entire article, there's too much scope for manipulation.

We're in our final minutes now, and we had the word reform someplace in the title of our panel. So I'd like to ask the panelists to give us, in one or two sentences, the one thing that, if you were controlling the universe and in charge of everything, you would change to improve science communication about issues of uncertainty, faced with emerging science and an anxious public in the middle of a pandemic. And I'm going to start with Richard.

Oh, great.
I wish there were a quick fix, because we've surfaced a whole lot of problems, and there are other problems we haven't surfaced in this conversation. But I think we need to redouble our efforts to convey what's uncertain. I very much like, agree with, and try to take on your suggestion that if a study is in animals, we say it's in animals. In fact, the journals that promote these studies are now making sure that if a study is in an animal, their press pitches say so in the first sentence, so people don't get too carried away. That's incredibly important context. We do that already; we can and should do more. And I think we need to keep reminding people that science is a process, not answers falling from the heavens.

Thank you. Mania?

If I had a very, very powerful magic wand, it would be that individual readers paid attention to the sources they were reading.

Thank you. Kristi?

I would just make sure everyone reads the damn article.

And, moderator's prerogative: I would make journalists and science communicators more self-conscious about the language they use. I wish we had not conventionalized herd immunity as the way to describe what should be called community immunity, because it doesn't much matter what the whole herd has. If the herd is the state of Pennsylvania and you're inside a community that has low vaccination, you've got a real problem. Community immunity makes more sense because it asks the question, what is the community? You can move from one community to another, in some cases in the course of the same day, and the immunity level inside that community is what's going to affect you.

With thanks to the organizers of this panel and to our three distinguished journalists, we wish you a very good day navigating the uncertainties of science and of our daily lives. Thank you. Thanks so much. Thank you. Thank you, everyone. Bye. Bye.