All right, I think we're live. As everyone is used to by now, wait just a second for these settings to filter across. All right, we're on air. Yeah, I think everybody has been pulled over into this session from the other one. So welcome back, everyone. Third talk of the day here. I am very, very excited to present one of the two open science talks of the day. Of course, we also saw an open science flavored talk yesterday. Scheduling reasons meant that we had to spread these out a little bit on the program. But if this is a subject that interests you, do definitely hang around for the last talk in the meeting today, which will also be on open science. So I'm very excited to introduce Dr. Tabea Cornel and Brandon Heil, both from New College of Florida. And they're going to be talking to us about a lovely title, Crisis, Causes, and Cures: A Digital Analysis of Psychology's Recent Crises and the Allure of Open Science. So please take it away. Thanks.

Thank you so much, Charles, for the introduction. We're truly grateful to the organizers for the opportunity to present our work at this meeting, and thank you for joining. This is an excellent conference format considering the circumstances, and we want to encourage everyone to take full advantage of the interactive features of the Crowdcast platform. We know that our talk contains a range of potentially unfamiliar terms, but we didn't want to spend large parts of our presentation on defining statistical concepts, et cetera. If we mention a term with which you're unfamiliar, feel free to use the chat to get clarification. And if none of your fellow audience members can provide an explanation, we'll be more than happy to do so during Q&A.

For the next 20-something minutes, we want to talk about a crisis in psychology that has concerned scientists and the public for at least a decade. One infamous early moment in this crisis was the publication of a paper on extrasensory perception in 2011. Social psychologist Daryl Bem of Cornell University had authored the paper. In a Slate cover story six years later, Bem was quoted explaining the phenomenon as follows: "Extrasensory perception is when you can perceive things that are not immediately available in space or time. For example, when you can perceive something on the other side of the world or in a different room, or something that hasn't happened yet," end of quote. Simply put, Bem claimed to have proven scientifically that people were able to anticipate the future, which sounds like a pseudo-scientific concept. This came as a shock to the scientific community and the public. As the author of the Slate cover story put it, science is broken. This was one of the first big milestones in what is now called the replication crisis, or replicability crisis, or reproducibility crisis, or irreproducibility crisis. We use these terms interchangeably, but I believe Daniel Hicks will explain later today that there are important nuances in meaning to each of those terms. Bem's study is often referred to as one of the triggers of the replication crisis, but there were other controversies well before 2011. Michael Pettit from York University has created an interactive digital timeline featuring numerous events and controversies surrounding replication in psychology since 1784. The crisis as we know it gained momentum in the late 2000s and early 2010s. Here you see the timeline entry for a publication from 2005 titled "Why Most Published Research Findings Are False."
This paper was authored by medical scientist John Ioannidis, whose name will come up again in Brandon's network analysis. Ioannidis argued that most published results of scientific studies were likely false positives. The article has become a replication crisis classic, with almost 10,000 citations listed on Google Scholar. There is a crucial difference between Bem's publication on extrasensory perception and Ioannidis's publication on false positives. Bem's publication caused such a stir because it used well-established scientific methods to prove and replicate apparently pseudo-scientific findings. But usually when we talk about the replication crisis, we talk about the difficulty of replicating experimental findings that seemed probable at first. Ioannidis's publication grapples with this latter concern, the irreproducibility of what we thought were scientific facts. Some examples of studies that could not be replicated include those on power posing, social priming, and the so-called cuddle hormone oxytocin. Here you see a graph from a Nature News article from 2015. It visualizes the outcome of a four-year-long crowd-sourced effort to replicate 100 psychological studies. This replication attempt was led by the Center for Open Science in Charlottesville, Virginia, where psychologist Brian Nosek is based. Nosek is one of the main players in the open science community, and his name will come up a few more times today. As you can see, only 39 out of the selected 100 psychological studies could be replicated. Other scientists' attempts to replicate studies in their own fields also failed. And this is a serious problem. The notion that science does not necessarily accumulate reliable data, let alone coherent theories, has significantly damaged the public reputation of science and contributed to a worrisome proliferation of anti-science movements.

Now that we know what the replication crisis is and why it poses such a threat to science, it's time to move on to our digital analysis. First, I will share some results of text mining analysis, and then Brandon will present some of his bibliometric work. Our findings lead us to question whether methodological changes are the most promising countermeasures to the replication crisis. We are also left wondering whether the preoccupation with the replication crisis might possibly distract from other crises in psychology that are far more damaging than a lack of replicability.

Let me give you a quick overview of my methods. I used the reference management software Citavi to search PsycINFO and MEDLINE for key terms. I used Citavi's full-text search to get access to as many PDFs as possible and ABBYY FineReader to convert the PDFs into text files. This allowed me to work with these papers in R. My code is based on examples in Matthew Jockers's Text Analysis with R for Students of Literature, as well as Julia Silge and David Robinson's Text Mining with R. The first set of analyses I want to share is based on my RC corpus, RC as in replication crisis. I searched for papers that contain "replication crisis" or "replicability crisis" or "reproducibility crisis" in any field, including title, abstract, and keywords. This provided me with 483 references, and I was able to obtain full texts for 375 of those. I did some topic modeling and analyzed topics year by year. I must say that I was surprised that I did not see any significant changes in topics over the years.
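To make that workflow a bit more concrete, here is a minimal sketch of a year-by-year topic-modeling pipeline in R, in the spirit of the tidytext approach by Silge and Robinson that the talk references. The directory name, the year-in-filename convention, and the choice of three topics are illustrative assumptions, not the speakers' actual code.

```r
library(dplyr)
library(stringr)
library(tidytext)
library(topicmodels)

# One plain-text file per paper; the publication year is assumed here to be
# encoded somewhere in the file name (e.g. "smith_2016_priming.txt"), which is
# an invented convention for the sake of the example.
files <- list.files("rc_corpus_txt", pattern = "\\.txt$", full.names = TRUE)
corpus <- tibble(
  doc  = basename(files),
  year = str_extract(basename(files), "\\d{4}"),
  text = vapply(files, function(f) paste(readLines(f, warn = FALSE), collapse = " "),
                character(1))
)

# Tokenize, drop stop words, and build one document-term matrix per year.
dtms_by_year <- corpus %>%
  unnest_tokens(word, text) %>%
  anti_join(stop_words, by = "word") %>%
  count(year, doc, word) %>%
  group_by(year) %>%
  group_split() %>%
  lapply(function(d) cast_dtm(d, doc, word, n))

# Fit a small LDA model per year and pull each topic's ten highest-weighted terms.
models    <- lapply(dtms_by_year, LDA, k = 3, control = list(seed = 1234))
top_terms <- lapply(models, terms, 10)
```

Each element of top_terms then lists the top words per topic for one year's sub-corpus, which is the kind of output that can be compared across years.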
It seems to me that the discourse for the past 10 years has been relatively consistently concerned with the reliability of data, effect sizes, and statistical methods, not with more social or structural factors that might contribute to the crisis. I also performed sentiment analysis but couldn't really find anything of interest, most likely because the sentiment lexicon doesn't quite speak the language of science. I would appreciate feedback on that if anybody has any thoughts on using sentiment analysis on scientific corpora.

Homing in on the question of causes of the crisis, I performed keyword-in-context analysis, or KWIC for short. A KWIC algorithm allows me to find any mention of a keyword of my choice across the corpus, plus the words that appear just before and after the keyword. I searched the corpus for "cause" and "causes" and summarized the hypothesized causes of the crisis by year. The proposed causes range from statistical and experimental methods through conceptual problems and human error to fraud, problems with peer review, and questionable research practices. The latter are often said to be caused by the reward mechanisms in many scientific fields, most notably the publish-or-perish culture. Note how the focus on inappropriate statistical methods (ISM) and questionable research practices (QRP) has been very strong since 2015, which you can see in green. Interestingly enough, inappropriate experimental methods (IEM) and conceptual problems (CP) seemed to be less of a concern until 2019, in orange. Conceptual problems were strongly on the rise between 2017 and 2019, in yellow, gray, and orange, but then they disappeared completely from the picture by 2020, in dark blue.

These KWIC analyses are tedious. I found that a more convenient way of getting at hypothesized causes of and proposed solutions to the crisis is bigram analysis. Bigrams are sequences of two words, and I found a way to visualize the most frequent bigrams with arrows for directionality. If the arrow points from life to satisfaction, for example, it means that life is the first word, followed by satisfaction. You can tell by the color of the arrow how frequently the bigram occurs: the brighter the blue, the more frequent the bigram. I've highlighted some hypothesized causes of the crisis in red and proposed solutions to it in green. As far as the causes go, you see references to post hoc analysis or post hoc theorizing, multiple comparisons, the file drawer problem, measurement error, publication bias, false beliefs, and questionable research practices. These bigrams overlap with what I found in the KWIC analysis. Now here's something new, the proposed remedies. I highlighted sample sizes and peer review in green, but both of them can be causes if done poorly, as well as remedies if done well. Other proposed cures for the crisis include Bayesian statistics, meta-analysis, preregistration, and Creative Commons licensing. I see a disconnect here between the reported causes and the proposed solutions. Bayesian statistics might alleviate some problems related to frequentist statistics, but none of the proposed solutions would affect the more conceptual, ethical, and structural issues of false beliefs, publication bias, and the publish-or-perish culture.
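The bigram-and-arrows visualization described above can likewise be approximated with standard tidytext tooling. The sketch below reuses the hypothetical corpus tibble from the previous example; the frequency threshold of 25 is arbitrary, and the quanteda::kwic call mentioned in the first comment is one possible way to get a keyword-in-context table, not necessarily the one used in the talk.

```r
library(dplyr)
library(tidyr)
library(tidytext)
library(igraph)
library(ggraph)

# A keyword-in-context table like the one described above could be pulled with,
# e.g., quanteda::kwic(quanteda::tokens(corpus$text), pattern = "cause*", window = 10).

# Count bigrams across the corpus, dropping pairs in which either word is a stop word.
bigram_counts <- corpus %>%
  unnest_tokens(bigram, text, token = "ngrams", n = 2) %>%
  separate(bigram, into = c("word1", "word2"), sep = " ") %>%
  filter(!is.na(word1),
         !word1 %in% stop_words$word,
         !word2 %in% stop_words$word) %>%
  count(word1, word2, sort = TRUE)

# Keep only frequent bigrams and draw them as a directed graph: the arrow runs
# from the first word to the second, and its colour encodes how often the
# bigram occurs. The threshold of 25 is an arbitrary placeholder.
bigram_graph <- bigram_counts %>%
  filter(n >= 25) %>%
  graph_from_data_frame()

ggraph(bigram_graph, layout = "fr") +
  geom_edge_link(aes(edge_colour = n),
                 arrow = grid::arrow(length = grid::unit(3, "mm")),
                 end_cap = circle(2, "mm")) +
  geom_node_text(aes(label = name), size = 3) +
  theme_void()
```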
Still, calls for preregistered studies, open access, and meta-analyses seem to dominate the conversation. These proposed solutions mirror the goals of the open science movement that has gained traction over the past few decades. According to UNESCO, open science has many facets, including, as you can see in the top left portion of the slide, open data, open access publications, and open source software. These aspects of open science are definitely in line with psychologists' calls for preregistration and meta-analysis. However, shown towards the right of the graph are open engagement of social actors and openness to diversity of knowledge. These UNESCO criteria did not surface in my analysis.

I dug more deeply into the open science literature to see if the seeming lack of attention to social and conceptual inclusivity was indicative of a larger pattern. My OS corpus, OS as in open science, consists of 1,418 full texts obtained through a search for open science in all fields. Like with the RC corpus, I conducted a year-by-year topic analysis. The focus on data and statistical analysis in the OS corpus is even more pronounced than in the RC corpus. Even terms related to concepts, models, or experimental design that are prominent in the replication crisis discourse seem less prevalent in the open science discourse. I also have a bigram analysis for you. The purple circles mark terms that directly relate to the principles of open science. For example, intellectual property, public accessibility, Creative Commons licenses, selective reporting, which might happen less frequently if open data were to catch on, and a big cluster surrounding the word data: data sharing, data management, data curation, et cetera. Again, note the conspicuous absence of social factors and conceptual inclusivity. This methodological focus within the OS corpus does not mirror the breadth of UNESCO's definition of open science.

My observations are in line with arguments of some critics of open science who have adopted the hashtag #bropenscience. The term bropenscience, pun intended, gestures to the predominance of well-connected straight white males in the open science movement, so-called bros. In a publication in The Psychologist titled "Bropenscience is broken science," Kirstie Whitaker and Olivia Guest made clear that not all bros in the open science movement are men, and not all men in the open science movement are bros. Nonetheless, the term seems to capture the disillusionment of some researchers who feel minoritized within the scientific community and who don't experience open science as inviting or inclusive. #bropenscience seems to suggest that open science isn't open to all. Instead, the movement seems to perpetuate or even amplify psychology's deeply rooted structural problems of bias against certain individuals, methods, and conceptual approaches. I can imagine that such an atmosphere might contribute to a worsening of the replication crisis. And I would certainly argue that the experience of aggression and gatekeeping in open science, to which some on Twitter have referred, is in itself worthy of being considered a crisis. This concludes my portion of the talk, and now Brandon will speak to the networks of crisis talk in open science. The results of his network analysis resonate with some of the critiques of #bropenscience. They suggest that most of the publications on open science emerge from relatively closed networks that exclude, among others, the critics who coined the term bropenscience.

All right, thank you, Tabea, for those wonderful analyses. I'm gonna go ahead and take you through the network analysis now.
So essentially network analysis is a great way to understand how information gets circulated and by whom. Tabea's text mining analysis showed us how crisis talk is really centered around data reliability and methodology, so we wanted to get a more person-, institution-, and geography-based sense of the discourse and make our understanding of these concepts more comprehensive overall. So again, the questions that I'm asking are: how closed are these networks? How open are they? And we know, based on some critiques of open science, that open science, or bropenscience, may be inaccessible to people with different perspectives, backgrounds, and affiliations.

So how did I do my network analysis? I used VOSviewer, which is software developed by Nees Jan van Eck and Ludo Waltman at Leiden University. It's a tool that helps construct and visualize bibliometric networks, including bibliographic coupling, co-citation, co-authorship, and more, and it works with the Web of Science database. I focused on co-citation and co-authorship for this project, but let me clarify the difference between bibliographic coupling and co-citation. Bibliographic coupling is when two sources have one or more citations in common; then they're bibliographically coupled. Co-citation is when two sources are cited together in a paper; then they are co-cited. So I utilized Web of Science to generate three corpora by doing a term search, which combs through publications for each phrase in the abstract and/or the title. For example, I searched open science to generate a corpus of about 2,300 sources, which I could then import into VOSviewer for network analysis. I repeated this process with the other two corpora, and I did encounter some logistical issues with having such a small corpus: the open science plus replication crisis corpus is only about 32 publications. So I welcome any and all feedback on how to conduct meaningful analysis with a corpus as small as this one.

So let me get into the graphs. Here's a beautiful VOSviewer graph. I love this program. Let me use this to give you guys an understanding of how these visualizations work. The sizes of the nodes correspond to their strength in terms of number of publications, and the distance between the nodes indicates relatedness: the closer they are, the more related they are. Colors are determined by cluster, which I'll get into in the next graph. So let's take a look at this. What do we see? Here we can see that the US, England, and Germany are the most prominent in terms of co-authorship, based on their size and position on the map. The pink line coming off to the side is Norway, and I had to crop it out to fit in the slide. So I'm very sorry, Norway. And the reason I picked co-authorship is because I thought it would be a really good way to visualize where collaboration is taking place and which perspectives are most prominent in this discourse. So we can see that replication crisis talk seems to be focused in Western countries. Another important question I have is to what extent is this network a product of the Web of Science database? I did an English term search, so this is only gathering sources from countries where English is the primary language of the scientific literature. I would really like to look into this more, and it's important that we keep this in mind as we go further.
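To make the distinction between bibliographic coupling and co-citation concrete, here is a small sketch in R: given a binary paper-by-reference citation matrix, both counts fall out of two matrix products. The toy matrix is invented for illustration, and this shows only the textbook definitions, not how VOSviewer computes its maps internally.

```r
# Toy citation data: rows are citing papers, columns are cited references,
# and a 1 means "paper i cites reference j". The values are entirely made up.
A <- matrix(c(1, 1, 0, 0,
              1, 0, 1, 0,
              0, 1, 1, 1),
            nrow = 3, byrow = TRUE,
            dimnames = list(paste0("paper", 1:3), paste0("ref", 1:4)))

# Bibliographic coupling: two citing papers are coupled when they cite the
# same references; entry (i, k) counts how many references papers i and k share.
coupling <- A %*% t(A)

# Co-citation: two references are co-cited when some paper cites both of them;
# entry (j, l) counts how many papers cite references j and l together.
cocitation <- t(A) %*% A

coupling    # 3 x 3 matrix over the citing papers
cocitation  # 4 x 4 matrix over the cited references
```

In this toy example, papers 1 and 2 are coupled through the one reference they share, while references 2 and 3 are co-cited once, by paper 3.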
Here is the replication crisis co-citation by journal. And here we can see which journals are the most prominent by the size and relative position of the nodes. Here it's a lot easier to see the clusters because they're defined by color, and you can see that they are grouped by discipline, including neurology, cognitive science, and statistics, with statistics in red and, we believe, neuroscience in blue, as an example. The focal point here is Perspectives on Psychological Science, and this makes sense considering the conceptual nature of discussions around the replication crisis. It's also important to recognize that this prominence could partially be informed by its special issue on replicability in 2012. And the most interesting thing to me about this graph specifically is the presence of all the statistics journals, because it mirrors Tabea's findings that in discussions of the replication crisis, most attention is directed at statistical methods as the probable cause. We'll come back to this shortly when we look at the corresponding graph for open science.

Here is the replication crisis co-citation by author. I've taken a slightly different approach. I'm still using VOSviewer, but this is what's called a density visualization. I picked it because it conveys similar information, where the warmer colors are more prominent within the corpus, and the distance does also indicate relatedness. It's also more readable, and especially since we're dealing with names, you'll see a few here that are familiar, including Ioannidis, Nosek, and some more that come up later, including Wagenmakers, Simmons, Cohen, and Cumming. So essentially, this is the network of authors that are major players within discussions of the replication crisis. We will look at co-cited authors for the open science corpus and the combined corpus later on, and it will become clear that open science is suggested to be the solution for the replication crisis. So here is the co-citation by author for that combined corpus. It consists of many of the prominent authors from both the replication crisis corpus and the open science corpus. So again, you can see those names, Ioannidis, Simmons, Nosek, and more.

Returning to a geographic look at these data, here's the open science co-authorship by country. We can see that the most prominent perspectives mirror the co-authorship of the replication crisis corpus, just on a larger scale. The US is still the focal point of open science discussions, with strong contributions from England, Germany, and Canada. And I would like to reiterate the importance of the database we're using and how it may be shifting the results to represent a more English or Western outlook. And here's the open science co-citation by journal. As you can see in the center here in red, Nature and Science are the most prominent nodes. They're also very central on the graph, which indicates their importance in this discussion. And again, if you look at the bottom right, you'll notice that Perspectives on Psychological Science is rather large, and that area kind of mirrors the psychology cluster of the replication crisis corpus from earlier. It's also rather sizable. So it's interesting that the statistics journals made an appearance in the replication crisis co-citation network, but they're not present in the open science network. And open science co-citation by author: this is our second-to-last visualization, co-citation by author for open science.
Since the open science corpus is our biggest corpus, the author network is quite expansive, but I want to hone in on the right side of the graph. I really like co-citation by author because it's a great way to see who is talking about what. There are a number of familiar names that appeared in the earlier corpora. For example, you can see here in green, at the largest node, Brian Nosek is the most co-cited author for the open science corpus. This visualization is cool, but it's rather complex, so I decided to break it down into another format, which might make things a little bit easier to understand. So here are the same most co-cited authors within the open science movement. I'll give you a moment to look this over, see if you recognize any faces, and see if you can notice the outlier present in the visualization. And if you found Robert K. Merton, the black-and-white portrait in the bottom right, you were correct. So my question is, why is Merton, a sociologist who passed away in 2003, one of the most co-cited authors for such a modern movement? You may have heard of the Matthew effect of accumulated advantage, which originally referenced the way that researchers are recognized for their work in an unfair way. More or less, the rich get richer and the poor get poorer. So essentially the Matthew effect states that more credit is given to scientists who are already established, and it creates a difficult environment for new or unknown researchers to get a foothold. A theory that's analogous to Merton's, but much less cited, was proposed by Margaret Rossiter. She called it the Matilda effect, which acknowledges that there is a gender disparity in science, specifically that women scientists often go unrecognized for their work in favor of their male colleagues' work of the same or worse quality. There's ample evidence for this effect, which brings me to one of my final questions. Why Merton, and why not Rossiter as well? Is open science as open and inclusive as it claims to be, or are these networks relatively closed off and inaccessible to scientists of different backgrounds?

So, excuse me, for our next steps, we are planning to release a survey focusing on the extent to which scientists in psychology and allied fields feel connected to the open science community. And that concludes our talk. We particularly welcome feedback on productive uses of sentiment analysis for scientific literature, meaningful analysis of small corpora, an appropriate balance between quantitative and qualitative data (as we mentioned, we're going to send out that survey), and the limitations of using Web of Science and similar databases. Thank you very much.

Fantastic. Thank you guys. This is a neat idea for a data set to pull. I'm really interested. This is cool stuff. I already have one question coming in. Oh, okay, from Stefan Hespelrigan, who writes: does Web of Science provide any data about its geographical coverage? Because it seems like the absence of a country in Web of Science doesn't imply that there's not any work being done in that country. He says he's located in Russia, but he's not even referring primarily to Russia. The absence of France is also kind of conspicuous. And Eugenio Petrovich writes with a related point that I'll go ahead and add now for you guys to combine: I think that the co-authorship counts should probably be normalized by the publication productivity of each country.
Otherwise a big country like the US is always gonna wind up central in the network, right? Okay, well, awesome. Thank you guys both for those questions. As for the latter point, I do think it's important to remember that the sizes of the nodes do correspond to the number of publications. So that is a pretty important part of how VOSviewer creates its graphs. But I do think that's a very interesting point about normalizing the co-authorship. That's something that I will look into in the future. As for Web of Science's geographic data, I couldn't find anything specifically in my digging, but from my understanding, I do think that my results may have been influenced by the fact that I did an English term search. So for countries that primarily use English, that's probably why it was returning mostly English publications. But yeah, I think that there's definitely some work to be done to look into exactly how that works.

Yeah, from some personal experience of my own, I know Web of Science can be tough; it's a surprisingly opaque data set. They sort of feel as though we should be honored to be using it at all, I think. So, next question coming in from Eugenio Petrovich, who says: one problem with author co-citation is that frequently the raw data are rather dirty because of homonyms or similar issues, and also, in Web of Science, only first authors are recorded in cited references. So how much advanced data cleaning did you do for this data set?

The point about first authors, that's super important. I definitely would like to look into that later, because I do understand that it only returns the first authors. As far as data cleaning goes, before VOSviewer creates the output for the graph, it will return a list of every name, so I did go through and uncheck or check which ones I thought needed to be included. It was definitely more or less surface level; I could have gone deeper with that. I would like to add that VOSviewer explicitly works with Web of Science, and it was just a very convenient tool that we could find. We were also considering Gephi, but there would have been so much more work involved, particularly when working with a student and it has to be done within a semester. It didn't seem worth the hassle. But the feedback is very valid, because obviously we wanna take this further and publish, so we might look into alternatives there. Yeah, and thank you for that feedback. That means a lot. Material constraints of research are very real. Yeah, at public state colleges, yeah.

Absolutely, yeah, I've talked with some colleagues, and some of the other folks in the meeting in fact, who have done some Web of Science research. And I know the cost of just connecting yourself to the full raw Web of Science database is so astronomical as to make you feel physically uncomfortable. It is a huge number. So yeah, VOSviewer as a tool, given that it's a way to work with the data, it makes perfect sense that you would have picked it. That's entirely plausible. I wanted to turn really quickly to the chat actually, where there's just a few, oh, sorry, just one more comment that's come in. One thing that could be helpful as well: Eugenio Petrovich adds that some of the new versions of VOSviewer are also now working with the Scopus, Crossref, and Dimensions.ai datasets. So there might be some cool opportunities for cross-comparison to see if the trends are robust. That's really neat. I wasn't aware of that.
I hadn't seen that the new versions added it. That's something that I've seen, but I haven't really had a minute to sit down and kind of get into it. So yeah. Nice.

Another question now coming in from Christophe Maltaire, who writes: this is a very interesting connection between the replicability crisis and open science. A methodological question on the text mining side. First of all, if I understood correctly, you did a one-topic model. Did you try topic models with a higher number of topics to bring out more details? And what's behind your description of the KWIC analysis as being particularly tedious?

Okay, yes, I did try higher numbers of topics, and that's correct, I limited myself to just one topic per corpus for the slide. When I did three topics, for example, then I essentially had the same thing every time. One was the disciplinary content of the journals: brain, connectome, also cells. At that point, I should mention, I'm gonna drop PubMed because it just introduces too much noise. If we want to look at the replication crisis in psychology and allied fields, I thought that PubMed would give us psychiatric data, neurological data, but it really is mostly focused on biomedical data and genetics. So if I did three topics per corpus, per sub-corpus, that's usually what would pop up: one topic was the cells and the brains, one topic was research, data, methods, and one topic was psychology, replication crisis, replicability, maybe error, definitely false positives or false. So that's what I saw. I didn't go higher than three because it just didn't seem really useful at that point. And I used LDA, so that's why the word clouds have varying word sizes, because they show you the prominence. It's a weighted topic; it's not just a static bag of words. And then, as far as, what was the other question? The KWIC analyses, I said they are very tedious because, when I have 1,500 papers and then I search for cause and causes, I get spreadsheets with hundreds and hundreds of lines, and I take 10 or 20 words before the search term and 10 or 20 words after the search term, and then I as a human open each of the spreadsheets and look at what is there. And then I classify every mention of a cause into one of the categories I showed, inappropriate experimental methods, inappropriate statistical methods, fraud, and so on and so forth. And then I try to visualize it, and that just takes days. If I do a bigram analysis, I already get it; the visualization is the output. And then I can just look at what bigrams are frequent and what that tells me.

Nice, nice, thanks. Next question comes from David Devonis, who writes: do earlier critics of the false positive problem show up in any of the current discourse? He lists some examples, Paul Meehl or David Lykken, et cetera. Yes, definitely. Meehl, and Ioannidis is also one, while he's of the current generation, one of the very first who said it. Meehl is older, and some of the reformers really, really like Meehl. He also showed up in our co-citation analysis. There are several cognitive scientists, I believe, who strongly draw on Meehl's work. Other people don't seem to really understand the obsession with Meehl's work and go in a different direction. I don't know, maybe Meehl is too conceptual, not methodological enough for many of the reformers.
And Ioannidis just seems to me always cited, just constantly cited, even though Ioannidis put a lot of emphasis on conceptual problems and biases. But even though he's cited a lot in the open science literature, it doesn't seem from our analysis that that's such a big focus, in terms of social or conceptual issues, in the open science movement. The open science movement, as it appears through our analysis, seems to suggest that if only we forced everyone to share their data, if only we were able to conduct enough meta-analyses, we would neutralize all of our biases by force, so to speak.

Let me actually use that question to drop in a question of my own, because we have plenty of time. I wonder what your plans are for the diachronic aspect; in particular, I guess, the text mining data seemed to be more diachronic than the mapping data. Because there's a sort of double-edged sword kind of question here, right? On the one hand, it would be really cool to be able to start to tell a little micro-history of the way that some of these changes unfolded. You mentioned the timeline; I need to go find that timeline. It looks really cool, by the way. At the same time, once you start taking a small data set and then carving it up into years, the numbers get even smaller. And so I don't know whether or not you feel confident in the kinds of inferences that you'd be able to make. So what are your plans for that kind of time-development story?

What I'll do, I'll say something, and then I'll post a link to the chat, and maybe then you can post it to Crowdcast, to the timeline. Excellent question. We're asking ourselves the same question, hence Brandon put on the slide, how do we deal with small corpora? For the year-by-year analyses I did for the replication crisis, some years were non-existent. Most of them were between 70 and 90 papers, but for the earlier years I just couldn't get enough; I had zero papers, or maybe five or 10. So that's definitely an issue, and I think it won't work. I might be able to do these analyses for the past five years, but not going back further, for it to be really meaningful, because I think I should have 80. I'm not sure if there's a size that we all agree on that provides us with valid output. But I am a historian, and that's why my first instinct is always to look at a development over time, and it didn't seem to get us very far. So we'll probably have to drop some of the historical approach, at least as far as the digital, the distant reading, goes, and work more with the secondary literature and focus more on the qualitative data as we continue this project. And again, we'll have the survey, which will probably give us enough information to also transform this into a solid paper.

Yeah, to echo... Oh, sorry, no, no, please, please, go ahead. To echo what Tabea said, I was also working on a year-by-year analysis for my open science corpus. Since that was my biggest one, I felt like I could maybe get somewhere with that. But for some years, I would have like 20 sources. And then I was like, well, I can probably just go in and read all of these and then make my own judgments and then put that somewhere in here. But as far as digital analysis goes, I was kind of struggling to create something that would say something, you know what I mean? For corpora that are as small. So it's definitely an issue that we both encountered.
No, that makes good sense. I thought that might have been the answer to the question. And yes, let me grab this link from the chat and post it over here on Crowdcast. Also at this point, thanks to Michael Pettit, who has been so supportive. He knows so much about the replication crisis. Very cool.

Let me take one more question here in the feed, from Rose Travis, who writes: thanks for the talk. Did I get it right that social factors may play a role in the reproducibility crisis, but they wind up not being talked about very much? If that's right, could you say a little bit more about that? Brandon, do you wanna start?

Absolutely. Yeah, at least as far as our digital analysis goes, we weren't getting a lot of information regarding the social factors. One of the UNESCO aspects of open science, as far as the definition goes, was engaging all social actors and understanding even indigenous perspectives and things like that, openness to diversity of knowledge. And when Tabea was doing her text mining, for example, those terms did not come up. It was all data, statistics, methodology, things like that. That's kind of where the focus seems to be centralized. So that's definitely something that we were really interested in as far as that goes.

Can you pull up slide 43? So one of our concerns is not only that some people, particularly women, feel excluded. It also seems that there was a big controversy surrounding one of Michael Frank's blog posts. He's a professor at Stanford, and the post kind of singled out computational modelers, who often feel excluded. Their approaches are excluded in the sense that modelers typically do not preregister their experiments, and so if we now force preregistration on everyone and say only this constitutes a sound methodological approach, then an entire subfield is already excluded. So that's hugely problematic, and I would count that as a social factor. And then, as we can see in the closed networks of the open science discourse, it appears to us that the same people are literally going in circles. And if you go in circles, I'm not sure how much progress you can make. I think that maybe some input from people who are not part of the main conversation would be needed. And then, as you can see on the slide, here's an entirely different problem of social factors that we think of as historians, well, I'm a historian, Brandon is not, Brandon is the biopsychologist. But as a gender studies scholar and historian of science: some people have suggested, some tweets have suggested, some individuals have suggested to us that the replication crisis might consciously or subconsciously be blown out of proportion for psychology to be able to conceal all the real crises that it has. For example, the Me Too crisis. For example, the APA torture scandal, which was in the news for the past few years, when it came out that the APA had contributed to, or at least did not prevent, torture by the US government. Those are, from my point of view, severe crises. And if one blows up a methodological crisis of a lot of false positives like that, and I don't mean to suggest that it is not important, sound methods are very, very important. But maybe we should look first at how many scholars of color do get a real shot at a career in psychology and allied sciences. How many women? How many other minoritized individuals? What happens with psychological knowledge? Is it being used for good or for bad?
And I think the replication crisis should not be our first concern when we look at psychology and allied fields. Other things might be more pressing.

It's a great point, yeah. On that note, actually, I think that's a perfect set of closing words. So why don't we leave it there? Thanks so much. This was a fantastic talk. Again, a really, really exciting data set, and I'm really interested to see where you guys go with it from here. We'll be back in just a few minutes. My apologies if we are a little bit late coming back. The next talk is actually going to be coming in by video. I'll say more here in a few minutes, but I'll be back with you guys in just a few minutes.