teacher. Teacher, yes. It works with small children and grown adults. Okay, so I'm excited to introduce the next segment here, titled Bringing Change to the Mainstream with Visibility and Values: Making It Normative. So emphasis there on making it normative. Another way to think about this is: why open science? The answer is, because all the cool kids are doing it. We've got three excellent speakers today, and then we've got some time for Q&A at the end. First up we've got Wajin Wang, who is a Director of Programs at the Center for Open Science, where she leads the development, implementation, and evaluation of programmatic services for COS. Next we've got Christie Aschwanden, who is the author of Good to Go: What the Athlete in All of Us Can Learn from the Strange Science of Recovery. She's also a co-host of the Emerging Form podcast, which focuses on the creative process. She's a former lead science writer at FiveThirtyEight and has been a contributor to a number of major national outlets, such as the Washington Post and the New York Times. And third, we've got Katie Corker, who is an associate professor of quantitative, social, and personality psychology at Grand Valley State University. Katie is a past president and the current executive officer of the Society for the Improvement of Psychological Science, also known as SIPS. And she is the chair of the American Psychological Association's Open Science and Methodology Expert Panel. All right, so on that note, I'll go ahead and turn things over to Wajin.

All right, thank you, George. Good morning. So the organizational theorist Geoffrey Moore identified the most likely place that products fail: transitioning from the early adopters into the mainstream. To emphasize the magnitude of that challenge, he called it "crossing the chasm." To scale open scholarship into the mainstream, we face the challenges of the technology and workflow aspects of culture reform.
But we must add the challenges of social norms, rewards, and policies as barriers to address. Behaviors that are not supported and rewarded by the system carry risks for adoption because of the potential costs to researchers' career development. Innovators and early adopters are motivated by the vision and the potential of the new behaviors, regardless of their fit with the prevailing culture. But to enter the mainstream, the new behaviors have to appeal to the other motivations that researchers have. That includes aligning their behaviors with what is expected and valued by others in the community. So in a decentralized system like science, a critical intervention for scaling into the mainstream is to alter the social norms. Norms are shared expectations for how people behave in the system. There are very few official rules or policies about how to do good science, and most of that understanding is communicated as norms that emerge and become self-sustaining within scholarly communities. So to understand how to be a good biologist, chemist, or sociologist, we observe what other biologists, chemists, or sociologists are doing. And if we observe that they keep their data secret or strive to publish in high-prestige journals, then we conclude, sometimes without even realizing that we're doing so, that data secrecy and pursuing prestige publishing are what we should do too. So Robert Merton described four norms that he thought distinguished scholarship from other ways of knowing about the world. The first is communality: the open sharing of information, gaining credibility by showing evidence in support of the claim, as opposed to the counter-norm of secrecy: "you'll just have to trust me." The second norm is universalism: evaluating research on its own merits, versus the counter-norm of "it was done by a famous person, so it must be true." And the third is disinterestedness.
Researchers are motivated by knowledge and discovery, versus self-interestedness: trying to get ahead of others in order to advance one's own career. And the fourth norm is organized skepticism: considering all the new evidence, even if it goes against one's own prior work, as opposed to organized dogmatism: spending one's whole career defending the original claims. What Merton didn't articulate specifically, but is commonly added, is the norm of quality of research, as opposed to the counter-norm of quantity of scholarly output. So these norms may be recognizable as abstract observations about scholarship. But an obvious question is whether these norms actually operate among researchers. That is, do researchers themselves endorse them? So Anderson and her colleagues surveyed researchers to find out. They asked about 3,300 researchers who were NIH grant awardees, including early-career and mid-career researchers, whether they endorse each norm or its counter-norm: how do they think science should operate? This summary plot shows the responses. You can see that, on average, more than 90% of the sample endorsed the norms over the counter-norms (in gray here), and only a tiny minority endorsed the counter-norms over the norms (the black bars); the gray diagonal lines indicate those who supported the norms and counter-norms about equally. Next, they asked not about their support but about their actions: whether they behave according to the norms or the counter-norms. Still, you can see that most respondents report behaving by the norms over the counter-norms, but now a sizable minority acknowledge that norms and counter-norms have a similar influence on their behavior. Even so, very few reported behaving by the counter-norms over the norms. And finally, the surveyors asked about their perceptions: how do they believe others in their community behave? Do others follow the norms or the counter-norms?
So now, when describing the research culture, not their own behavior or their support, researchers perceive that the counter-norms dominate over the norms. These data illustrate a very dysfunctional research culture: almost everyone endorses the norms, most people report behaving according to those norms themselves, and yet most people perceive their research community as not behaving according to these norms. As discouraging as this might seem, it also shows a couple of very promising opportunities for change. The first is researchers' attitudes. They do not need to be convinced of the core values of science; they are already aligned with the mission to increase openness and integrity, because they already endorse those values. The second lies in their perceptions. The respondents may be significantly underestimating the extent to which their peers actually feel the same way they do. Social psychologists coined the term pluralistic ignorance to describe this gap between what people believe others value and what others actually value. And this creates an opportunity for intervention: to shift behaviors by revealing that one's values are actually more popular than one thought. So we administered our Open Scholarship Survey to sample researchers from many disciplinary communities. The data I'm showing here are aggregated from thousands of researchers across 14 samples spanning disciplines. We asked the participants about their attitudes, behavior, and perceptions of open scholarship practices. First, we asked researchers whether they themselves are in favor of open scholarship. On the left, you see substantial support: blue is strong support or support, gray is neutral, and yellow or orange is opposition or strong opposition. But next, when you ask these researchers to estimate others' opinions, on the right, you see that they perceive support for the behaviors to be lower than it is in reality.
And they perceive more substantial opposition to the behaviors than exists in reality. The gap is evident for every open scholarship practice that we have examined and in every discipline we have tested. Following the research literature on social norms, the perception gap is likely to produce an action gap: people do the behaviors that they support less often than they would if the perceived norms were aligned with the actual support for the behavior. So for all the behaviors surveyed, as you can see on the right here, respondents actually performed the behavior much less often in their recent publications, despite the strong support. The difference between the actual belief and the actual behavior is the action gap. Now, if we make it apparent that others in the community actually value open scholarship, thereby closing the perception gap, then we might also start closing the action gap, by making it easier for those who value open scholarship to do it, because they recognize that those actions are actually supported by the community. Eric discussed previously how idealists will pursue lifecycle open science despite the counter-norms and the lack of rewards for doing so. For some, particularly those who hold the values but are reluctant to act because of the perceived norms against it, the visibility of the idealists' actions is evidence that the norm is shifting, and it can spur them to act on their values. The increased visibility of these behaviors then creates a positive feedback loop: the more people visibly do these actions, the more others notice that the norm is shifting, and the more likely they are to do the actions themselves. Badges are a very simple visibility mechanism to highlight open scholarship practices.
So when a paper is published in a participating journal, badges are issued to recognize that the work has shared data or materials, or has registered its research plan, and this provides a visible signal that open practices happened when they occur. In an observational study, we observed evidence for the positive feedback loop with badging. We examined the proportion of articles that shared data and materials in the journal Psychological Science, the black line here, before and after it adopted badging in January 2014, the dotted red line, as compared to other journals from the same field, in gray, that did not adopt badges. We observed a steady increase in the proportion of articles that shared data, but this did not happen in the comparison journals. Notably, after this study was completed, we continued to observe this increase, until it stabilized at around 80% the last time we checked. So badging is a minimalist intervention for changing norms, and it's not really an incentive, because it's not yet tied to a true reward like getting a job, getting funded, or getting published. Badges involve no active engagement for changing hearts, minds, and actions; they're just a signaling mechanism. So this type of intervention is unlikely to be sufficient on its own. In fact, a lot of the work of changing norms involves providing the why and the how of doing open scholarship. In the academic research context, the primary means of engaging communities on the why is to write scholarly papers and engage in scholarly discussion. And a major part of our organization's activities has been participating in that scholarly discussion, with empirical research documenting the nature of the challenges that Tim summarized before.
And with theoretical and methodological pieces that articulate the value of open scholarship behaviors such as replication, preregistration, open data, preprints, and reproducible research practices. Writing scholarly papers is a way for champions to introduce to their communities the principles and practices of open scholarship, and this figure represents the rise of papers discussing open scholarship over time, across domains. You can see there are new papers now every day in discipline-specific journals making the case for open scholarship. Beyond the scholarly literature, coverage of transparency, rigor, and reproducibility in science in general news outlets also plays a critical role in fostering that scholarly discussion, within the research community and in its interactions with stakeholders. Christie will present about this experience later. In many cases, the normative work must start with the why of open scholarship, because the behaviors are unfamiliar, or their relevance and adaptability to one's own research context may not be clear. So we use this framework to assess readiness for change and to guide how we approach engagement on each open scholarship practice. For example, with the influenza community, we partnered with the funder Flu Lab to focus on educating early adopters, giving them the knowledge and the tools to adopt open science, and we go further to empower them to train their peers and recruit more early adopters. Whereas with the education research community, we focus on engagement through webinars, conferences, and shared infrastructure. Once there is a baseline alignment among communities of champions, the formation of grassroots communities can organize and align champions for action on those shared values. Like the growth of papers expounding the value of open scholarship, the champions for change have formed many dozens of grassroots communities, and these groups have many different priorities and activities.
Some are regional, such as the reproducibility networks that started in the UK with the UKRN, which now has over 60 local networks and 30 institutional members that provide peer and administrative support for advancing rigor and reproducibility at their institutions. This model has expanded rapidly since, especially across the EU, and is also reaching more broadly. Some of these communities are more disciplinary in focus, like SIPS for psychology, SORTEE for ecology and evolutionary biology, STORK for kinesiology, BITSS for the social sciences, and CERA for educational research. And later in this session you will hear Katie talk about the SIPS example. These grassroots communities are also part of a much larger network of advocacy and action organizations that are providing vision, solutions, and support for change, and some of their leaders are sitting in the audience today. Grassroots communities are an excellent way to organize highly motivated reformers. But that's not enough, and additional tools are needed to engage the mainstream. One approach is to form a community around technology solutions for the new action, where the technology defines the behavior objective, and the branding and the visible supporters identify its relevance for the community and legitimize the behavior. The Open Science Framework is designed to support this type of community building, with branded interfaces that engage researchers at specific points in the lifecycle, with guided workflows for doing the relevant actions. So OSF Preprints enables sharing and discovery of preprints, working papers, and published papers. OSF Registries enables customized solutions for research planning, preregistration, and sharing of outputs, and will soon support reporting results in relation to the preregistered plans.
And OSF Collections enables community building around customized repositories for research projects, data, materials, code, or any other combination of research content. OSF Institutions supports researchers at member institutions in sharing and managing their research and collaborating across the research lifecycle, and offers the capability of integrating with their institutional repositories. These community-led interfaces are operated by insiders, for insiders to their domains. And because they are built on the shared OSF infrastructure, they create doorways to incrementally engage those communities across the lifecycle of science. While the positive case for normative interventions is strong, I want to close by acknowledging that they come with substantial risks of backfiring, as Jessica asked about earlier. If a visibility intervention is adopted and the positive feedback loop does not occur, then the lack of adoption becomes more visible, reinforcing the existing perception that the behavior is not valued. This can entrench inaction by creating a negative feedback loop, because instead of there being no evidence about others' values and actions, evidence of absence is now visible. So we should be careful to mitigate risks like this when implementing a normative intervention, and make sure we really work closely with the communities. Some other risks are especially pronounced when idealists are the driving force of the change. We love idealists, but idealists have unique motivations that are distinct from the mainstream's, and failure to appreciate those distinctions can inadvertently interfere with engaging the mainstream in reform. For example, for idealists it's common to adopt open science as part of one's own identity. The behaviors are so central to the person that they are an important part of how they think of themselves as researchers.
But for the mainstream, open science behaviors are tools to help them do their research, and their social identity as researchers is rooted more in their discipline or their topic of study. So to engage the mainstream, the messaging about open scholarship should focus on good practice and how it improves research, not on an expectation that one needs to form an identity around open science in order to practice it. Also, idealists will always be ahead of the curve. As soon as data sharing becomes common practice, idealists will point out that the sharing isn't FAIR and needs to be improved. The high standards of the idealists are wonderful for continuously raising the bar for open scholarship, but for the mainstream, these high standards can be intimidating barriers to entry. So effectively engaging the mainstream requires recognizing where they are and helping them build skills and quality over time, as they gain more experience and confidence. Relatedly, idealists want all of open scholarship applied to all the things. But this is a huge barrier to entry, because "all the things" becomes so daunting and overwhelming that it seems like one needs to spend months retooling just to get started. So engaging the mainstream effectively requires incrementalism: giving researchers a low-cost, low-burden way to start, and creating pathways to adopt more little by little. To conclude: observing others following the norms increases our likelihood of following the norms ourselves, and establishing open scholarship norms increases readiness for the changes to incentives and policies that will finish the scaling into the mainstream and convert open scholarship behaviors into the new standards. So next we have Katie, to talk about the SIPS example of engaging grassroots communities. Over to you.

Thank you, Wajin, and thanks for that really nice welcome. Yes, this is a fun talk to give, because it's a little bit different than your typical scientific talk.
I get to spend a few minutes telling you about an organization that I helped to build, and some of the work we've done as an organization that I'm really proud of. So SIPS got its start back in 2016, in Charlottesville of course, and was founded by Simine Vazire, who is actually not here (she's out watching my child), and Brian Nosek. We had about a hundred people who gathered at COS for a couple of days to start an action-oriented movement within psychology. A lot of the people in this picture had been in many meetings that simply talked over the same issues again and again, and they were ready to do something about it, to actually bring about change in our field. At the conclusion of this meeting, it was decided that we would formalize our group and found the Society for the Improvement of Psychological Science. We incorporated formally as a nonprofit in 2017, and it's been a tale of growth from there. We had only a hundred people at that first meeting, about double that, 200 people, at the second meeting, also in Charlottesville, closer to 300 in 2018, which we held near me in Grand Rapids, Michigan, and over 500 people at our largest meeting, in Rotterdam in the Netherlands. And you can see that in 2017 we started officially accepting members, and the membership numbers track as well. Here's a similar picture from our opening plenary in Rotterdam. It was so large that we couldn't fit into one room, so we had the overflow room up top. So a really large, passionate group of dedicated individuals. We had a bit of a setback with the pandemic, and membership has stabilized; it's hovering now around 500 people. We had a lot of interest in our online meetings during the pandemic. If there's anybody else here who works with societies, I think this is a pattern that lots of societies have observed over this time period.
We started having a hybrid meeting last year, and we still had quite a few online registrants, but we were back down to about 100 in-person registrants. This year we're meeting in Padova, Italy, and we have much more interest again in the in-person meeting, but interest in the online meeting has tapered off. It remains to be seen whether that will hold. So there's our 100 dedicated folks rebuilding, starting last year in Victoria. So what is it exactly that SIPS is up to? What is it that we do? We now do a lot of activities that you might associate with a traditional scholarly society. In addition to the conference that we host, we have a preprint service, which is actually the largest preprint service on OSF Preprints with the exception of the OSF-branded one. About 25,000-plus preprints have been shared in the years since it was created, and there are volunteer moderators and all kinds of activities associated with hosting the preprint service. We give out awards to recognize exceptional work in improving psychological science, for the people who are actually out there doing the work of implementing these norms in our field. We also have a mini-grant scheme: we give out small pots of money to enable people to do the kinds of projects that will advance our mission. And we have an official journal, Collabra: Psychology, published by the nonprofit University of California Press, which has also been very successful. So, lots of activities, and it's probably not a coincidence that they're all oriented around the kinds of things that traditional scholarly societies do. But the point wasn't to make a society; the point was to make one focused on advancing this mission of improving open scholarship. Here's some evidence of our success, in teeny, teeny tiny print.
We've tried to get folks who have come to our meetings or have been involved in our activities to share their successes with us, both to broadcast those back to the community and to demonstrate the things that we've built. As Wajin mentioned, one of the things that academics do when they get together is write papers, and we've done a fair bit of that. All of these papers are not the work of the society per se; they're the work of individuals who have put a lot of effort into demonstrating the applicability of these practices to their specific subdisciplines and into building tools that are helpful for their particular research. So there's all kinds of things; I'm sure we're sharing the slides later, so you can poke around and see all of the different things. In addition to the papers, many resources have been created. I mentioned PsyArXiv, which was one of the resources that came out of one of our early meetings. But we've also had some other successes, like the Psychological Science Accelerator, which is a large, distributed, international group of people collaborating to do all kinds of research, original research and replication research, in a crowdsourced, distributed way. So we've been really successful by almost any metric, in the number of people we've gotten involved with these initiatives, the things we've produced, and so on. But the big question for the movement more broadly is: are we able to reproduce this success in other fields? That's the million-dollar question. I would argue maybe not. Maybe this was a one-off circumstance, a result of the fact that we had two charismatic leaders who were influential within our field and able to exert influence in ways that perhaps they didn't even recognize at the time. But we do see little pockets of similar activity bubbling up in other areas.
And I do think there are conditions we can create that make it more likely that we would see the kind of success we have seen with SIPS in these other areas, in spite of the doom-and-gloom version of this story, which is that we've built this thing, it's been great, it's been very helpful for our field, and we can't do it again. I have a few lessons learned for anyone who is working in a grassroots capacity and hoping to bring about this kind of change in other disciplines. I think these are applicable to pretty much any kind of grassroots organizing effort, not just this kind of change. These are probably the top five things I've learned that have been the most important as we've been doing this. The first one maybe seems really obvious, but it's really important: get super clear about your mission and your values. This was one of the first things we did when we got started; we actually wrote a formal mission statement. I've presented a shortened version of it here. We defined ourselves very clearly as a service organization. It's not necessarily a research-focused organization, and it's not focused specifically on teaching. It's a service organization, and we're aiming to improve psychological science by improving training, improving policies, doing research, and doing outreach, with a focus on diversity, equity, and inclusion. This makes it very clear to people who might join our movement and participate in our society what it is, what we're about, and what we want to do. It might seem like this gets in the way of actually doing the work that you want to do, but it's been really, really helpful over the years to be able to go back and say: this is who we are, this is what we're here to do, and this is what we're all about.
We also identified five core values, which are broadly applicable to the open science movement, but it was important for us to express them, and again, to provide a little bit of scope or constraint, to be able to say: this is what we're focused on, this is what we're trying to do. Getting clear about the mission, that's the first step. The second step is an equally boring one, unless you're like me and a geek about this kind of stuff: formalize your governance model. Governance is about much more than just writing bylaws and these kinds of formal things. It's about saying who's responsible for the work, who's in charge of the work, how long that power lasts, when that person gets replaced, and so on. It's a communication exercise as much as anything, in terms of writing down exactly how you're going to operate. We have existed now for almost eight years, and governance is a big part of that. Being able to say we have defined points of transition, and having that set out from the beginning, has been extremely helpful. For any kind of grassroots movement, even if it's not a full society like ours, it's useful to have some shared understanding of who's going to be in charge, how long that's going to last, and so on. The third point: it's extremely, extremely important to prioritize diversity, equity, and inclusion. This might seem like it goes without saying, but if we are building a movement to improve psychological science, or biology, or whatever discipline, whatever area it is, it needs to be a movement that benefits everyone. Not just the people who are doing the leading, not just the people who are in the room running the initiatives of the society. We are going to fail at the task overall if we develop solutions or initiatives that don't benefit everyone. That's an almost impossible goal, to find something that's one-size-fits-all and meets everyone's needs, but we need to have at least a diversity of perspectives.
We need to have that input, we need to listen, and we need to make sure we're serving as many people as possible. Now, this one's been fun. It's important to bring everyone along, and I want to especially emphasize career-stage diversity. The early-career contributors to our movement have been some of the most valuable and crucial parts of what we've done. We absolutely could not have done it if it were not for the early-career folks. That's not to say we don't need the later-career folks as well. Later-career people have access to positions, roles, resources, all kinds of things that enable you to get into spaces you otherwise would not be able to. But you need the energy, the passion, and the enthusiasm that the early-career researchers bring as well, and their valuable perspectives. I just can't overstate how important the early-career people have been. They're really smart, and we need to listen to them. My last point is a related one. There's a temptation in this kind of work and in this movement to work fast and get out ahead of everybody else: I know the way this work should be done, and I want to just plow through, full steam ahead. But really, what I've learned more than anything else is that that style of working doesn't work. You really can't go out on your own. You can't go it alone. You need a huge community of people behind you to actually achieve the goals you are trying to achieve. And so with that, I will leave you with this picture of 42 individuals who have been absolutely central to the work of our society. The top row there is our past presidents, and then of course we also have the heads of committees, conference organizers, and executive board members. And this is only the people who have been in leadership positions. You really can't go out and do this kind of thing on your own. It takes a lot of resources and a lot of time, energy, and investment.
Every single one of these people has been dedicated and done a lot of valuable work to advance the work of the society. And I think Christie is next; I don't know if I should introduce her.

All right. I've been covering this issue of reproducibility in science, and open science, pretty much from the start. This was one of my early stories. I don't know if it was my first story, but it was about the first big replication study in psychology, published in Science. And one thing that's really interesting, and I was thinking about this as I was putting this talk together, is that I've really witnessed an evolution of the thinking in this field. When I was reporting this story, I remember there was a very prominent psychologist at Harvard who gave me this delicious quote where he said, "I learned nothing from this study." And I was sort of like, whoa. There was this real knee-jerk reaction at that point, some real outright hostility to this kind of thing in the beginning. And what I've seen since is a culture change going on, where it's no longer quite as acceptable to tell a reporter something quite so extreme. There's still pushback, but it's no longer okay to just dismiss this movement out of hand. So I have written some pieces and done some stories about individual studies, but most of our reporting is a little more wide-ranging. This top story, "Failure Is Moving Science Forward," was really looking at the issue of replication. What should we make of this? What do we know when a study fails to replicate? Do we believe the first study or the second study, and how do we navigate this? And I think the takeaway here, and it's a recurring theme, is moving forward by failure: this is a process, not an answer. And, you know, when people ask me, what do I want the public to know about science?
I want them to understand that science is not a magic wand that touches everything and turns it to truth. It's a process, a process of becoming less wrong over time. I think that's what we're seeing. This second story is about that science collider that Katie was just talking about. I really went deep down the p-value rabbit hole, and I think I annoyed a lot of people along the way. For this bottom piece, I went to a metrics meeting, put my little video recorder, my phone, to work, and tried to get people to explain p-values to me. You can probably guess how that went. That was kind of fun. Maybe it was a little mean, but I got a lot of fun mileage out of it. But this piece that I did in 2015 continues to get a lot of traction. I still get notes from college professors, and sometimes even from high school teachers. I do have one regret about this piece. These high school teachers tell me: why did you have to drop an F-bomb? Now I can't use it with my students. Because this piece was looking at a question that was really prominent at the time: is science broken? I took a deep look at it, and my conclusion was that no, it's not broken. It's just a lot f-ing harder than anyone gives it credit for. And we really need to make space for this difficulty and recognize it. Now, I'm speaking here today to a bunch of science nerds, but my audience is the general public. And if you think you're having trouble making progress with this in your field, it's really hard to get this message across to the public, and I guess I've become a bit of a crusader for it. So here I am trying to write about this stuff, trying to write about things like p-hacking. I really wanted to write about p-hacking because I think it's an important problem, something people were talking about. But how do you explain that?
I mean, most scientists don't really understand p-values or this issue. And I'm going to explain it to a general public that probably never took a stats course and probably doesn't even remember high school math. So I gave it a lot of thought, and I decided the only way to get this across was not just to make people understand p-hacking, but to make them understand that science is hard, and here's how it's hard. So with my colleague Ritchie King, who's fabulous (he now works at Netflix), we created an interactive tool where people could actually try out p-hacking. This was our scenario, using actual real data: you think that the U.S. economy is affected by which political party is in office, and we're going to use real data to look at this. I played around with what the question would be. I wanted something where people had some kind of automatic or preferred answer. And immediately (I know this slide is hard to see) you're faced with a problem that I think is fundamental to science, and this is why science is hard: first you have to decide how to define your problem, and then how to measure it. It doesn't matter what the question is; these questions never have easy answers. So here, how are you going to define which party is in power? Is it presidents? Governors? Congress? How do you define the economy? What we showed is that people could play around with this. There were 1,800 possible combinations, and more than 1,000 of them yielded p-values of 0.05 or less. So basically, it was possible to support any outcome you wanted. And there's a lesson here for science beyond the actual question.
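The specification-search problem she describes can be sketched in a quick simulation. This is not the actual FiveThirtyEight tool or its real economic data; it's a minimal illustration, with made-up numbers, of why trying many analyst choices against the same yes/no question produces "significant" results even when nothing is there.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_years = 70    # annual observations (assumed)
n_specs = 180   # hypothetical analyst choices: party measure x economy measure

# One fake "party in power" indicator, independent of everything else.
party = rng.integers(0, 2, size=n_years)

# Each "specification" is a different made-up economy metric, pure noise,
# so any association with `party` is spurious by construction.
p_values = []
for _ in range(n_specs):
    economy = rng.normal(size=n_years)
    _, p = stats.ttest_ind(economy[party == 0], economy[party == 1])
    p_values.append(p)

p_values = np.array(p_values)
print(f"specs with p < 0.05: {(p_values < 0.05).sum()} of {n_specs}")
```

Even with zero true effect, roughly one spec in twenty clears p < 0.05, so an analyst free to pick among many specifications will almost always find one that "works."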
But what I was really trying to get across is not that all of this is bogus and scientists are cheaters, but that this is actually really hard. Even if you're making an honest effort, these are difficult problems, and it's not straightforward. There are reasons you need to scrutinize science and be careful with it. Not because scientists are bad or everything's bullshit, but because these issues exist, and it's why you can't just take one study and say, aha, now we know everything. I think p-hacking is something that certain fields, psychology and medicine are two I'd point to, have a lot of understanding about now. It's no longer acceptable to do it openly; even if people are doing it, they have a feeling that it's wrong. But at the time I was working on that piece, I was also working on my book, which I published in 2019, about the science of exercise recovery. And I got so frustrated, because all of the problems I was writing about in psychology existed in sports and exercise science, except they were a thousand times worse, and no one was talking about them. And when I asked about them, I was kind of shown the door. It was really frustrating. I'll just say that one of the top negative comments on Amazon about my book is: this book sucks, all she says is that none of this works and science is really hard. And I said, yes, I'm so excited! You understood! Thank you for reading my book. So anyway, here's a field that's looking for small effect sizes using very small samples; an N of 12 is completely standard. And there are reasons for this. I also want to say that there are reasons different cultures and different fields use different techniques and find different ideas acceptable. It doesn't mean they're right, but it's not for nothing.
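The "small effects, N of 12" problem is easy to make concrete with a power simulation. The numbers here (a standardized effect of 0.3, unit variance) are assumptions chosen to be typical of the situation she describes, not figures from any particular study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 12          # per-group sample size, standard in the field per the talk
effect = 0.3    # small standardized effect size (Cohen's d), assumed
n_sims = 5000

hits = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)       # high innate variability: sd = 1
    treated = rng.normal(effect, 1.0, n)    # true effect really exists here
    _, p = stats.ttest_ind(control, treated)
    hits += p < 0.05

power = hits / n_sims
print(f"estimated power at n={n}, d={effect}: {power:.2f}")
```

Power comes out around ten percent: even when the effect is real, a study this size almost never detects it, and the "significant" results that do get published are the lucky overestimates.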
And I want to be careful not to just say, oh, those sports scientists are idiots, because that was not the case at all. Like most scientists, they didn't have much good training in statistics. As a journalist, I've found that's true for scientists in most fields, so that wasn't unique. What was unique is that they were trying to find small differences in small samples, and most of the variables they were measuring had a lot of innate variability. So you can imagine what a mess this was. But as I was going through the literature, I encountered a methodology created by a guy who was very prominent in the field, Will Hopkins, called magnitude-based inference. He used some very appealing terms, but he was selling it as a way to get more results, more publishable results. And of course, the reason it worked is that it was bogus: it was actually doing the opposite of what it claimed. When I first started showing this to statisticians, most of them said, this is tin-hat-level conspiracy stuff, you shouldn't even write about it, don't give it attention. But (I know this slide is hard to see) here's a meta-analysis of foam rolling, which is a thing a lot of athletes do. Most of the study estimates are crossing zero, but there are a few outliers, and those are the ones that used magnitude-based inference. So this stuff is actually polluting the literature; it has consequences. I finally found someone, Kristin Sainani, a statistician at Stanford, who's fantastic. She took me up on wanting to work on this, and she helped me. And again, hardly anyone understands statistics. How am I going to explain this?
But we created some visual graphics that showed how this worked and why it was actually increasing the false positive rate. As a result, she went on to write a paper in one of the top journals in the field, and the method ended up getting banned from that journal. So that was kind of a win. But I want to be really clear. As a journalist, one of my roles, and one of the important things we can do, is hold people accountable. It's not my job to fix the field, and I'm not taking credit for what happened. That was really Kristin, and she's someone within the field. But the role journalists can play is illuminating these problems and showing where the issues lie. I think this example also shows that I found myself at times being a kind of conduit: hey, sports scientists, you're doing this thing; did you know that in psychology they used to do that too? I think I did help facilitate a conversation a little bit. And one role the media plays is that when I publish stories, they may be for the general public, but a lot of scientists are part of the public, right? So they see them too. A lot of people have told me those stories were influential in getting them to start thinking about this. Again, that's all on them. Going back to sport science: one thing that was really exciting to me is that after my book was already in process to be published, STORK formed, the society Katie just mentioned. It's basically a SIPS for sports and exercise science, and it's being driven by early career people. It's fantastic, and it's really great to see.
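The false-positive inflation can be illustrated with a stylized simulation. To be clear, this is not Hopkins' actual implementation of magnitude-based inference; it's a simplified MBI-type decision rule (treating the sampling distribution as a posterior and using assumed "possibly beneficial" thresholds of 25% benefit and 5% harm, with an assumed smallest worthwhile change of 0.2), run under the null to compare against an ordinary t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n = 10            # per-group sample size (assumed, typical of the field)
swc = 0.2         # "smallest worthwhile change" threshold (assumed)
n_sims = 20000

mbi_flags = 0
ttest_flags = 0
for _ in range(n_sims):
    a = rng.normal(0, 1, n)   # control: the true effect is exactly zero
    b = rng.normal(0, 1, n)   # "treatment"
    diff = b.mean() - a.mean()
    se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)

    # Stylized MBI-type rule: call the effect "possibly beneficial" when
    # P(effect > swc) > 25% and P(effect < -swc) < 5%, reading the
    # sampling distribution as if it were a posterior.
    p_benefit = 1 - stats.norm.cdf((swc - diff) / se)
    p_harm = stats.norm.cdf((-swc - diff) / se)
    mbi_flags += (p_benefit > 0.25) and (p_harm < 0.05)

    # Conventional two-sided t-test, counting only "significant and positive".
    t, p = stats.ttest_ind(b, a)
    ttest_flags += (p < 0.05) and (t > 0)

print(f"MBI-style 'beneficial' rate under the null: {mbi_flags / n_sims:.3f}")
print(f"t-test 'significant & positive' rate:      {ttest_flags / n_sims:.3f}")
```

With no true effect at all, the looser rule declares a benefit several times more often than the t-test does, which is the sense in which a method "sold as a way to get more publishable results" inflates false positives.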
But this is really, I think, an example of what we're facing here: a culture change. There are cultural norms that are set, and this is a societal problem, a sociology problem, as much as it is about science. I'm just going to end with this piece I wrote about the idea of "sound science." This is a term that was invented by the tobacco industry. The idea being that if you set an impossible standard for science, it can never be met, so "more research is needed" and we can never make a decision. We saw this (there was a companion piece to this) during the Trump administration, when Scott Pruitt's EPA was using this sort of terminology to try to dismantle a lot of EPA regulations. And I think we saw it a lot during the pandemic as well. This previous piece was actually making the case that bad data is worse than no data at all. We had a lot of instances where science and data were being used not as vehicles for understanding and scientific discovery, but as talking points, basically ammunition for arguments. And that's a real problem. So I'll just end with where I've settled on this. One of the things that's become really apparent to me is that the general public does not understand the role of uncertainty in science, and this makes them very vulnerable to misinformation. We saw this so much in the pandemic: they said masks were good, and then they said they were bad. If you understand that science is a process, and that it's okay to change your mind, and that this is a normal part of it, you're not going to fall into that trap. So now I'm really trying to bring a certain humility to my reporting. Simine Vazire wrote a really nice piece about this for scientists, but I think it's true for the public too. One of the things I aim to do with my reporting now is to help the public readjust their expectations of science.
I think that we have given the public an unrealistic expectation of what science can do. There's a sense that it's either infallible or it's false. And that dichotomy is dangerous, because no science is infallible, so from the get-go nothing seems trustworthy. We need the public to understand this. So for my current project, I'm working on a limited-run podcast series for Scientific American about the role of uncertainty in science. It's really about all these things I've been talking about, trying to help people understand them. I'll be here for the next two days, so if any of you have thoughts or ideas, please come find me. I'd love to hear from all of you about this. And I guess that's it. Here's my email, christy at nasw.org. Like I said, I'll be here.