All right. Hello, everyone. We will head straight into our next session, which is a series of lightning talks. I do hope you will all stick around. My name is Fallen Modi, and I will be moderating this session. We will have four speakers, and each will be speaking for no more than eight minutes. So with that, we're ready to start with Chris Aberson. He's a professor and chair of psychology at Humboldt State University. Over to you, Chris.

Great. Thank you. I'll be talking today about some of my experiences as an editor trying to move a journal toward open science practices. To give you a little background, I'm the editor of a journal called Analyses of Social Issues and Public Policy. It's a journal that began in 2001, and it's published by the Society for the Psychological Study of Social Issues. It served a need for the society, because the society's other outlet, the Journal of Social Issues, published only special issues. Basically, every issue was like an edited volume, so there was no space for standalone empirical articles. In 2017, I became the outlet's fifth editor. In my interview and all the documentation I submitted, I emphasized open science practices such as open access, the Transparency and Openness Promotion (TOP) guidelines, open science badges, and things of that nature. Pretty early on, it became clear that open access wasn't going to happen, because there are contracts with the publishers that we can't change. But everyone on the committee that hired me was quite enthusiastic about all the other things. I became the editor on the first day of 2017, and I adopted the transparency and openness standards that day. As per my contract, I had cleared everything with our society's publication committee. They were on board with everything. Everything was fine until about six months in. Before our yearly council meeting, I got an email from the society president directing me to remove our journal as a signatory to the TOP guidelines. This was a new issue, so it couldn't be discussed at our council meeting that year, and I had to wait until the next year to have a spot on the agenda. The lesson that I took away here is that I had failed to understand both the formal and informal power structures within a society. There was one influential person, somebody very high up in the society, who is a critic of open science, and I didn't understand how just that one person could actually derail my work. The next year, things came to a head at the council meeting, where we had agenda items regarding TOP signatory status for the journal and also whether I would be allowed to continue open science badges, which had just recently begun. It was a contentious meeting. There were a lot of issues around data transparency statements: people made the point that articles stating that data were available would be seen as more important, as better work, than those without available data, and there are very good reasons for not making data available, as I certainly agree. In the end, the non-signatory status on the TOP standards was upheld, and I was also directed to retract an editorial I wrote on transparency and openness as my introduction as the editor. Badges were upheld by a single vote.
The lesson I learned here was that a lot of people in my own field, which would broadly be social psychology, simply don't know much about open science, and they perceive it as zealotry, as all or nothing. There were lots of concerns about open data requirements, as I mentioned, but also lots of misunderstanding of the basics. People didn't know what open access was versus open science versus open data; all of these things were mixed up in people's heads, so I think there was an educational component that I really needed to deliver more on. In the third year, I worked with the publication committee to revise the data availability statements as directed by the council. This really just ended up in a situation where we changed the requirement that people disclose data availability to language around disclosure being encouraged. The publication committee wanted to let sleeping dogs lie on the retraction; they said, you know, nobody reads those editorials anyway. And then, very interestingly, a few months later, Wiley, the publisher, announced a new policy for all of their journals: all papers would be required to include an open data statement. So this issue that had been at the core of what I had been fighting for had now been completely resolved; it had become incredibly mainstream. The big lesson that I took from this is that it's really important to be patient; change at high levels can be slow. Organizations are often well behind where we, particularly people who attend a conference called metascience, want them to be, but they do catch up. I saw another example of this recently: the APA, the American Psychological Association, became a signatory to the TOP standards, and that's a real dinosaur of an organization that has finally come around. So, where have we ended up? Well, I compared pre-2020 articles to those from 2020 and later, because that's about the time when badges became something that was available for the whole year. The median sample size increased substantially, from 64 to 304; a lot of that is due to my own focus on statistical power, I'm sure. Since adding open science badges, about half of the articles that we publish have received at least one badge, and we're trending closer to 60 to 70% for more recent submissions. We've been able to introduce registered reports, and our submissions have increased dramatically. We went from an average of 73 a year to, and these numbers are actually already out of date as of today, an average of about 200 per year. And of course, 2021 is not even done yet, so there's been a big jump in how many submissions we're getting. Anecdotally, some authors have told me that they selected us over other relevant journals because we were the only one that promoted open science. What happened to my term? My term was about to end, but I was extended for two years at the urging of the person who had been most negative about open science, and who has really become a champion of my work. Again, change is slow; be patient, listen to those in opposition, talk to each other, and understand each other's views, like I did with the anti-open-science person. We found that we agreed on almost everything; there were just very, very minor differences.
So, really do try to keep that dialogue open. Okay, thank you all very much.

Thank you, Chris. That was fantastic. I will now move on to our next speaker, who is Shakya. Shakya is a PhD candidate in the Department of Psychiatry at Trinity College Dublin, giving a talk titled Cross-Cultural Scale Validation. Ready when you are.

Thanks a lot. Yeah, I'm just going to share my video now. So, okay, please let me know if you can't hear anything.

Imagine a bright PhD student starting off her first year. She's just finished a master's degree that involved the cross-cultural use of multiple scales, and now she's eager to spring into the exhilarating world of clinical scale validation. Little does she know she will end up questioning not only the very purpose and utility of the tool she's meant to produce, but also the foundations of the science to which she has devoted her entire career. I'm Shakya, a third-year PhD candidate at Trinity College Dublin, and the student I described was, of course, me. Over the past few months, I have faced some challenges while designing a cross-cultural validation of a trauma-related shame scale. While many of these have been logistical, such as keeping my PhD timeline in mind, finding collaborators in Asia and Europe, etc., there have also been some conceptual and methodological challenges to my perspective on conducting cross-cultural research and on recruiting from non-WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations. So, in the spirit of igniting a conversation on this topic, here are some of the issues and questions that I wish I had grappled with earlier on in the process of my own study. Sometimes research doesn't start out as cross-cultural. This was the case with my study. Once the seed of having more diverse samples was planted, I let it lie dormant while preparing my study for an Irish population. In doing so, I missed out on having the input of collaborators in India, for example, from the get-go. This input would have been advantageous in a few ways. Firstly, involving researchers from local target populations from the idea stage fosters a fairer relationship between PIs and collaborators, reducing the underlying power imbalance between PIs from largely Western, resource-rich institutions and researchers from non-WEIRD or not-so-well-funded ones. Decisions about level of involvement, potential compensation, and authorship can be made more respectfully in this way. Secondly, crucial choices in the study's design, such as ethical considerations, adapting scale items to the local cultural context, etc., should really be understood by PIs at a meaningful level. This requires time to learn about the cultural environment they will be entering, and is where early involvement of local collaborators is key. I mean, imagine suddenly going into a community you've barely been exposed to and then hoping that they'll trust that you understand their culture. "She doesn't even go here." "Do you even go to this school?" "No." Ironically, if you're not familiar with early-2000s American pop culture, you might not understand why that clip was funny. Okay, so it's not possible for a scale to be valid in every subgroup of every cultural group you sample, but that's no reason not to try to improve. Even keeping the constraints of individual researchers in mind, there's always more we can do to get a slightly more diverse sample in cross-cultural research. The reliance on convenience sampling in so-called non-WEIRD countries has resulted in many studies sampling subpopulations that have WEIRD traits anyway.
Think schools, universities, and so on. These are educated, often Westernized populations who are not part of minoritized groups. So can they really be considered a significantly different sample than, for example, the white Dutch population of the original study? I'm exaggerating, of course, but it's worth putting in the effort to recruit from a wider range of people in a region, and not just the easiest-to-access population, if at all possible. I'd also like to direct you to the talk on the validity of the term WEIRD given by Sakshi Ghai and colleagues at the SIPS 2021 conference. On the other hand, if there's little chance that a scale will be adopted within a community in the long term, at least in its current form, then maybe the most ethical course of action is recruiting from populations that have the highest likelihood of using a validated scale later on; in other words, the WEIRD-trait populations I mentioned earlier. I would love to have participants from rural Indian communities complete my validation questionnaire on shame and child sexual abuse. However, given the sensitivity of the topic, particularly in small, insular rural areas, and the lack of psychological services there, it's pretty unlikely that participants will end up benefiting from my study. Perhaps the most crucial point I want to bring up is: how do we discuss the utility and post-study uptake of cross-culturally validated material? When it comes to scale validation, we need to consider some sub-questions. What is the end goal of your validation? Is it strictly to statistically test or confirm the validity of a particular scale by replicating the original study? Or do you also aim to encourage the uptake of the scale in a new range of populations? Although we may go into studies unconsciously assuming or hoping that the former will result in the latter, encouraging post-study adoption of a scale probably requires adjustments in how the study is carried out, and almost inevitably in the properties of the scale itself. There's tons more that I couldn't include here, but if you want to continue the conversation, please consider following me at the T-searchers, and consider following the Junior Researcher Programme on Instagram and YouTube. Thank you so much for watching. Yeah, that's it for me. Thanks a lot.

Thanks, Shakya. I'll just ask you to stop sharing the screen. Thanks. I did. Okay. Is it still sharing? No, I think it's fine, actually. Up next, we have Yuching Kai. Yuching is a Master of Research student in Developmental Neuroscience and Psychopathology at UCL and Yale University. Yuching is presenting a talk titled Assessing Flexibility in the Measurement of Socioeconomic Status: A Meta-Research Study. Over to you, Yuching.

Thank you. Hi, everyone. My name is Yuching Kai, and today I'm going to present this research about the measurement flexibility of socioeconomic status, which is by me and three other collaborators. It is a meta-research study. So what is socioeconomic status? Socioeconomic status, or SES, is the social standing or class of an individual or group of individuals, so it can represent access to resources for different units of individuals. SES has been widely adopted in many different domains of study. For example, in psychology and cognitive science, SES has been found to be associated with many different outcome variables, including mental health, physical health, language development, and children's brain development.
You may think that socioeconomic status is a quite straightforward concept, but the measurement of it is quite complicated and flexible. To begin with, we can use different indicators or resources to measure SES. The most commonly used ones are education, income, and occupation, but you can also use less conventional ones like political resources and subjective SES. Even if we choose the same kind of indicator, the scoring can differ. For example, when measuring education, we can use levels of education or years of education, that is, a categorical variable or a continuous variable. Another thing to consider is whether to aggregate different indicators into a composite score. For example, the Hollingshead Index is a very popular aggregated SES score, which combines education, income, and occupation, and each of those indicators can also be used as a single indicator in other research. Lastly, we also need to consider the level of measurement for SES in a study. Individual SES and parental and family SES are three of the most commonly used, but on a more extensive level, we can also measure neighborhood SES. For example, here is the neighborhood SES of New Haven, where we are currently living. At a higher level still, we can measure things like the gross national income of a whole country or region. So the current study aims to evaluate the flexibility of SES measurement and its effect on results, specifically in cognitive neuroscience. We first systematically reviewed the different ways of measuring SES in this specific domain, and then we reproduced them using two public data sets, that is, the CFPS from China and the PSID from the US. Then we evaluated the impact of this measurement flexibility on possible outcomes in psychology and cognitive neuroscience. So we first searched for articles and selected the relevant ones, those that could potentially be reproduced. We used variables from the CFPS and PSID to reproduce the different types of SES. We then evaluated the influence of the flexibility via the variance that can be explained by the measurement itself, using the intraclass correlation coefficient (ICC) as an index, and we also calculated the associations between outcomes and SES, and between the different types of SES. The preliminary analysis has been preregistered on OSF, and if you're interested in this study, you can take a look at it. In this part of the analysis, we selected 53 papers, which used 38 data sets, and we found more than 40 different types of SES, which is even larger than the number of data sets. About 20 to 30% of the variance can be explained by the measurement itself, and the correlations between SES and the target variables, and also between the different types of SES, vary a good deal. Here is the correlation matrix of the different types of SES calculated from the CFPS and PSID, and you can see that the numbers vary a good deal. So what can we infer from the current results? As you may already know, measurement issues in psychology have been discussed a lot recently; for example, for depression and self-regulation, there are many different ways to measure those concepts. Similarly, for socioeconomic status, the current study has found this flexibility of measurement in the domain of cognitive neuroscience.
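To make the idea of measurement flexibility concrete, here is a minimal sketch of the kind of analysis described above: build several plausible SES operationalizations of the same respondents and inspect how well they agree. It is illustrative only; the synthetic data and column names (years_edu, income, occupation) are assumptions, not actual CFPS or PSID variables, and the composite is a simple unweighted z-score mean standing in for Hollingshead-style scoring.

```python
import numpy as np
import pandas as pd

# Synthetic respondents; the variables are hypothetical stand-ins,
# not actual CFPS/PSID fields.
rng = np.random.default_rng(seed=1)
n = 500
df = pd.DataFrame({
    "years_edu": rng.integers(6, 22, n),    # education as a continuous variable
    "income": rng.lognormal(10.0, 0.5, n),  # household income
    "occupation": rng.integers(1, 10, n),   # occupational prestige code
})
# Education scored a second way: as an ordered categorical level.
df["edu_level"] = pd.cut(df["years_edu"], bins=[0, 9, 12, 16, 22], labels=False)

def z(s: pd.Series) -> pd.Series:
    """Standardize a column so different indicators are comparable."""
    return (s - s.mean()) / s.std()

# Several plausible operationalizations of SES for the same people.
ses = pd.DataFrame({
    "edu_years": z(df["years_edu"]),
    "edu_level": z(df["edu_level"]),
    "log_income": z(np.log(df["income"])),
    "occupation": z(df["occupation"]),
})
# A composite in the spirit of an aggregated index, simplified here
# to an unweighted mean of the standardized indicators.
ses["composite"] = ses.mean(axis=1)

# The analogue of the correlation matrix on the slide: how strongly the
# different SES operationalizations agree with one another.
print(ses.corr().round(2))
```

From a long-format version of such a table, an ICC can then quantify how much of the variance in scores is attributable to the choice of measure rather than to the respondents, which is the role the ICC plays in the 20 to 30% figure reported above.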
So this could be explained by the complexity of SES itself, in that it can be measured by different indicators, but there are also problems in studies using SES as a variable: many studies choose indicators arbitrarily, without citation or explanation of when and how they chose them. This could affect the reliability and reproducibility of findings in the cognitive neuroscience domain. That's all. Thank you for your attention.

Thank you, Yuching. Up next, we will have a video that I will play for you on behalf of Hassan Khan. Hassan is a research assistant at the Ottawa Hospital Research Institute and has recorded his talk for us on whether open science has penetrated academic hiring practices. Just bear with me for a minute while I share the video with you.

Hello, everyone. Thank you to all who are attending today. My name is Hassan Khan, and today I will be talking about whether open science has penetrated academic hiring practices. Before I jump into my presentation, I'd just like to give some background about myself. I recently completed an undergraduate degree in psychology at Carleton University, and for the past year I've been volunteering at the Ottawa Hospital Research Institute at the Centre for Journalology. As many of you know, the lack of transparent and reproducible research continues to be a worrying trend in the scientific community. For example, a survey conducted in 2016 found that more than half of the 1,500 scientists surveyed reported being unsuccessful in reproducing their own research findings, which is quite concerning. Not surprisingly, these researchers believed that, more often than not, the reason for the lack of reproducibility was the pressure to publish, as well as selective reporting. These factors are grounded in institutional practices, which continue to emphasize traditional metrics for promotion and tenure, such as grant funding or the number of publications. Aside from that, the increased burden of bureaucracy often takes time away from doing and designing research. Now, what can we do at the institutional level? We can shift our focus from traditional methods of assessment to alternative methods, such as adopting open science practices to assess researchers. Although there is no agreed-upon definition of open science, it is essentially a set of practices that look to promote the transparency and credibility of scientific research. This can range from registration of study protocols to publishing in open access journals, using preprints, and making study data publicly available. To promote this culture shift, we have to get a sense of institutional standards in terms of adopting open science practices when it comes to hiring faculty. So the purpose of this study was to evaluate the current hiring practices of academic institutions around the world with regard to the mention of open science in research-based faculty and postdoctoral positions. We conducted a cross-sectional study of 192 institutions globally and gathered job postings from the previous 30 to 60 days, starting in February 2021. Our search strategy included obtaining job postings from each institution's career website, as well as any viable job boards. Job postings were assessed with a modified open science modular scheme. This is a self-certification scheme that was made publicly available to us on the Open Science Framework and is actually modeled on the Transparency and Openness Promotion guidelines.
Each reviewer assigned a level between zero and three to each job advertisement. A level of zero would indicate that the institution made no mention of open science practices in the job description, whereas a level of three would indicate that the institution mentioned open science practices in the job description and also committed to including a proven track record of open science as an essential characteristic. So what did we find? After examining 305 job advertisements for academic positions at 91 institutions, surprisingly, only two had any specific mention of open science. So it's clear that institutions need to do more to promote open science if we are to deal with the reproducibility crisis, and there are some ways they can address this. For one, they can make a commitment to open science in their job advertisements. They can provide examples of how open science is being promoted. They can share educational outputs. Last but not least, they can ask applicants to share how they have used open science in their professional capacity, and how they will continue to promote it going forward, if hired. I would just like to acknowledge the following collaborators for their ongoing support: Dr. David Moher, Burrida Franco, and Elab Malbole. Thank you all for attending today.
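As a concrete illustration of the rating scheme described in this talk, here is a minimal sketch of how such zero-to-three ratings might be tallied across a set of job advertisements. Everything in it is hypothetical: the ratings are invented, and since only levels zero and three are defined in the talk, the intermediate levels are flagged as unspecified in the comments.

```python
from collections import Counter

# Ratings under the modified open science modular scheme as described in
# the talk: 0 = no mention of open science in the job description;
# 3 = open science mentioned, with a proven track record required as an
# essential characteristic. Levels 1 and 2 are intermediate; their exact
# definitions are not given in the talk.
# These ratings are invented for illustration, one per job advertisement.
ratings = [0, 0, 0, 0, 1, 0, 0, 3, 0, 0]

tally = Counter(ratings)
for level in range(4):
    print(f"level {level}: {tally.get(level, 0)} advertisements")

# Share of advertisements with any mention of open science (level > 0),
# the headline quantity reported in the talk (2 of 305 ads).
share = sum(1 for r in ratings if r > 0) / len(ratings)
print(f"any mention of open science: {share:.0%}")
```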