All right. Hello, everyone. I think we're going to get started; people are still filtering in. Thanks so much for coming to our session on implementing meta-science. I'm Nick DeVito. I'm a postdoc at the University of Oxford and a meta-scientist; I work mainly in the policy arena on things like clinical trial transparency. But I'm going to be your moderator today, and you'll meet Delwyn, my co-moderator, in just a moment. But just to give you a little background on this session. When Delwyn and I and Maya, who's one of our panelists today, were first chatting about putting together a session for one of these virtual sessions, we were thinking about what we could add, what our common experiences were, and what we would like to see. And basically, we came to the conclusion, having been to a few of these meta-science conferences before and being generally active in the space, that meta-scientists are very good at identifying and diagnosing and measuring the extent of a problem, and then talking about solutions. But when it comes time to implement those solutions, for a variety of reasons, that doesn't always happen, or it doesn't always happen effectively or efficiently. So we wanted to collect a bunch of folks to have a talk about this and chat about their experiences: people who have real experience in implementing solutions to meta-scientific problems that we all know exist in one way or another. So we're going to cover responsible research assessment today. We're going to cover grants. We're going to cover the publication system. And we're going to cover clinical trial reporting, but also, more broadly, responsible reporting in the context of all of science. Those are our topic areas. We have some great panelists that Delwyn's going to introduce in a moment, and she'll walk you through some of the mechanics of how the session is going to go. It'll be a panel discussion for the most part, but with some interactive bits that Delwyn will get into in a moment. And thank you to everyone who filled out the pre-symposium survey; we'll be sharing some information on that. Over to you, Delwyn.

Thanks a lot, Nick, and hi, everyone, also from me. So as Nick said, I'm co-moderating this session. Very happy to be here today. My role today will be to monitor the chat, present some questions to the audience, and then really try to do some live synthesizing of the whole discussion. And I want to do a quick shout-out to Frédérica Crois, who's also here and who's going to be my teammate in really trying to do this together. So thanks a lot, Frédérica, for supporting us today. And also thank you: as Nick mentioned, we sent out a Google form to get some of your input before this webinar. That was really helpful for getting a sense of who's participating today and collecting some of your experiences, which we'll also be integrating into the discussion. We're really keen to keep getting your input throughout this webinar. So if you have any questions during the session, please post them in the chat; we'll be monitoring that closely and feeding it back into the discussion. And we'll also be doing a bit of live polling with Mentimeter, so I'm just going to share my screen here to show some instructions on that. Mentimeter is just a platform on which we can do some live polling, essentially.
And all it requires is for you to go to this website, www.menti.com, where you'll be prompted to enter a code. It's a one-time code that's valid for the entire session. So if you just use this code, the questions will pop up, you can enter your responses, and then we'll have the results presented live in the symposium. I think Frédérica is also going to be posting a link to this with the code. So hopefully that's clear; if you have any issues, just post questions in the chat.

Now, going back to the survey: we wanted to get an understanding of who's going to be here today, so we asked two questions. One was, what is your primary discipline? And these are the results we got. We see a really strong representation of the social sciences, and also the mathematical, physical and life sciences, as well as the medical sciences and other disciplines. We were also very interested to see who's here in terms of primary sector, and here we had a strong representation of people in academia, but also people from non-profits, from government, and some folks from industry and other types of organizations. So that's really helpful to know, and something to keep in mind for the discussion going forward. From the survey, it was also really clear that we had not only people from the research-generation side, but also people from the implementation side, which was really great to see. So thank you for that feedback, and thank you also for your free-text responses, where you shared some of the challenges you've encountered and some of the expectations you have for this symposium. We went through those and tried to incorporate them into some of our panel discussion questions.

So now that we have a good sense of who's here today, we're going to move on to our panelists. Thank you again to all the panelists who are here today; I'm really excited about this. We have Chris Chambers from Cardiff University. We have Kelly Kobe from the University of Ottawa Heart Institute. Then we have Maya Alholz-Hillard from the BIH Quest Centre for Responsible Research at the Charité. And finally, Sally Tinkall from the Science and Technology Policy Institute in DC. I'm going to let the panelists introduce themselves, just a couple of minutes each. After each panelist, I'm going to come back with a Mentimeter question to get some feedback from the audience on the topic that panelist will cover. So we'll be switching back and forth, but it would be nice to get a bit of feedback from you as well. Having said that, I'm going to stop here and pass it on to you, Chris, for your introduction, and after your introduction I'll follow up with the Mentimeter question. Thank you.

Yes, and thanks for having me. Welcome, everybody. So I'm Chris Chambers from Cardiff University. I'm a reformed cognitive neuroscientist, I suppose, is the best way to put it. I used to be primarily a specialist academic focusing on psychology and neuroscience, but as the years have gone on, I've become increasingly interested in meta-science and open science reform. And that's really my main role these days: the implementation of various reforms like registered reports, which many of you will have heard of, the TOP Guidelines, the UK Reproducibility Network and others. So I've been really closely involved in trying to drive forward these policies.
And not just advocacy, but also dispassionate reflection on whether they're working as intended. I think it's really incumbent on all of us doing this kind of work to distance ourselves from the sorts of results we'd like to see from meta-science interventions, in just the same way as many of us promote a results-agnostic approach to science itself, and to apply that kind of disciplined approach to looking at the effects of our interventions as well. So I'm quite keen on that. I've got more I can say later about some approaches that I think work quite well, and some things that maybe work not so well, when designing interventions in this space. But that's my very brief summary of me and what I'm all about.

Thanks a lot, Chris. So I'm going to follow directly with our first Mentimeter question, which is on the topic of registered reports. I hope you can see it there. The question is: have you ever done a registered report? Yes, no, or I don't know what a registered report is. Already got quite a few answers coming in. Give it a bit more time, but I'm seeing the no category as dominant for now.

Yeah, give it a couple of seconds. This is really cool. I like this.

Yeah, and we thought that these might be little props for discussion. So whatever the results are for each panelist, we can weave them into the discussion overall.

Exactly. Do you want me to explain what registered reports are, so we can push that third column down a bit?

Yeah, sure. Why don't you take two minutes right now and quickly explain registered reports?

I'm assuming we're all familiar with the standard way peer review works, where you do your research, you complete the study or set of studies, you analyze your results, interpret them, draw conclusions, and then you write your paper. Then that gets peer reviewed, and it's evaluated based upon all sorts of criteria: theory, methodology, but crucially also results and conclusions. And it's that evaluation done on results which causes a lot of the problems we have in science when it comes to irreproducibility. In particular, publication bias, where journals are predominantly populated with positive results that support hypotheses while negative results get suppressed, and various forms of reporting bias, where authors, faced with the pressure to produce positive findings, essentially fish for those results in their data and present them selectively in order to tell a story. Now, registered reports seek to solve this problem at its core by taking the regular review process and splitting it in half and saying: first, we're going to do peer review based upon your protocol, before you've done the research, and we're going to evaluate that based upon the theory, the question, the methodology, and the potential implications of the program of work. Then, if the journal or platform evaluates that positively, you get an in-principle acceptance, which guarantees publication regardless of outcome. So the idea is that we eliminate publication bias and reporting bias from the entire publication workflow, and ideally the research workflow.
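To make the selection mechanism Chris describes concrete, here is a minimal sketch in Python. It is illustrative only, not material from the panel: the mix of true and null hypotheses, the effect size, the sample size, and the assumed 10% publication rate for negative results are all made-up assumptions. It simulates how a literature that filters on results ends up with a far higher apparent positive rate than the underlying studies actually produced, which is the bias the two-stage registered reports model is meant to remove.

```python
# Toy simulation of publication bias (all parameters are illustrative assumptions).
import random
import statistics

random.seed(1)

def study_is_positive(true_effect, n=30):
    """Simulate a two-group study; return True if it yields a 'positive'
    result (|z| > 1.96, roughly p < .05) via a crude z-test."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs(diff / se) > 1.96

# Assume half of all tested hypotheses are truly null, half have a real effect.
results = [study_is_positive(random.choice([0.0, 0.5])) for _ in range(10_000)]

# Registered-reports-like world: every completed study gets published.
rate_all = sum(results) / len(results)

# Results-filtered world: positives always published, negatives only 10% of the time.
published = [r for r in results if r or random.random() < 0.10]
rate_published = sum(published) / len(published)

print(f"Positive rate, everything published:     {rate_all:.0%}")
print(f"Positive rate, results-filtered journal: {rate_published:.0%}")
```

With these toy numbers, the all-studies positive rate comes out somewhere around 25 to 30%, while the results-filtered "literature" shows roughly 75 to 80% positive findings: qualitatively the same kind of gap between registered reports and the standard literature that Chris cites later in the discussion.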
Right. Shall we move on to the next one?

All right. Yes, we lost audio on our end for a second. But great to see those responses, and thank you, Chris, for that brief explanation. All right, then I'll pass it on to Kelly, and we'll also follow up with another Mentimeter question. Thanks, Kelly.

Great. Thank you, Delwyn and Nick. Hi, everyone. It's great to be here today, and thank you so much for your time in attending the session. I'm Kelly Kobe. I'm a scientist at the University of Ottawa Heart Institute, and I wear a couple of different hats that I'll mention in this intro. One of the roles that's relevant to the discussion today is that I'm on the Steering Committee of DORA, the Declaration on Research Assessment. I've been engaged with DORA since late 2019; at the time it was the advisory board, but there have been some changes to governance, so now it's a Steering Committee membership. Further to that role, at the University of Ottawa Heart Institute I direct the Meta-Research and Open Science program. It's a relatively new role that I've started in, so I'm really excited to build that up in terms of my independent research in those areas. I'm keen on research related to open science broadly, but right now I'm focused on research around data management and sharing, as well as research on institutional reform and the monitoring of open science practices: for example, through the development of consensus around what we ought to track for institutions, and the creation of a dashboard to automate tracking of those principles. I'm also a member of EQUATOR Canada. When you think of EQUATOR, if you're familiar with it, many people think of reporting guidelines. So I'm interested in the reporting quality of research, but also in training and education, helping support researchers to report their work in a way that's clear and transparent and without spin, for example. I'll stop there, just to say that I'm really keen on our discussion today and I'll be bringing those different perspectives to my responses.

Fantastic. Thanks a lot, Kelly. It's great to have you here today. So, transitioning to our next Mentimeter question: is your institution signed up to DORA? All right, very balanced responses in the beginning. Okay, a lot of I don't know. Yeah, it's pretty balanced, actually. So quite a lot of I don't know. Kelly, do you have a brief reaction to this, or anything you want to comment on?

Yeah, I think it's interesting to see. A criticism lobbed at DORA by some is that you simply sign your name to a declaration and then you're done. Obviously that's not the goal of the organization or of the folks that are really committed to it, but in practice, that may be what's occurring at some institutions. So it may be that folks are not aware because there's buy-in to the idea, but there's an inertia where institutions are not actually implementing the principles of DORA. And if they're not doing that, their community may not even be aware that they've signed it, or that they've considered signing it.

Right, absolutely. There are quite a few yes responses, though, so that's encouraging. Very good. All right, then let's move on to Maya. The floor is yours.

Hi, everyone. I'd just like to echo what everyone else has said: thank you for joining us for this conversation today. I am a PhD student at the Quest Center for Responsible Research in Berlin, Germany. And I can't remember the exact word Chris used, but I'm also a reformed, or recovering, cognitive neuroscientist.
But I made this switch quite a bit earlier in my career, I would say, because I had already started realizing in my master's that there were some changes we needed to make to how research was being done in my field. So I actually stumbled my way out of academia and into policy work for two years, really working on the implementation side, actually at the organization of our last panelist. And now I am doing my PhD in meta-research. So this topic of bridging basic research questions and applied research questions into implementation, and into results-agnostic evaluation, is pretty important to me. What I've spent the last three years doing is focusing on clinical trial transparency, which is an incredibly important issue in biomedical research, because clinical trials form the basis of evidence-informed medical decision making. And yet a lot of trial results are never published, or are published incredibly late. Even though there are laws in place for certain trials, the enforcement has been lackadaisical, and we haven't seen clinical trials meeting the ethical, and sometimes legal, guidelines around this. So a lot of my work has been, one, to quantify the issue, and then, two, to take that quantification into actual change. And that's something I'm excited to share a bit more about: successes and failures and existing challenges going forward with making those changes in transparency.

Amazing. Thanks, Maya. So, going back to the Mentimeter question; one second, bear with me. All right, here we go. Sorry about that. So, because you do some work on monitoring practices and communicating that: does your institution track and communicate any meta-scientifically relevant metrics?

Oh, okay. For those who say yes, I'd be interested if you could say in the chat what those are.

Absolutely, great point. Okay, so we have quite a few people who are aware that their institution does track and communicate some of these practices. So I think overwhelmingly yes, but again, quite a few people say no or I don't know as well. Something to really keep in mind for the discussion, and we'll keep track of your responses in the chat too. Thanks a lot, that's great. Okay, with that, a couple more responses are still coming in, but I think we can move on to our final panelist. So, Sally, the floor is yours. Oh, Sally, I think you're muted.

Thank you for inviting me to be here today. I am not a tech expert, but I will bring the expertise that I have to this group. So I am at the other end of my career: I'm senior in my career here in the United States, and I have moved from having a research laboratory to working on the administrative, government end of research. At the National Institutes of Health, I worked in administering grants and the peer review process, as well as higher-level science, ending that part of my career in the Office of Science and Technology Policy at the White House. So I moved to progressively higher levels of how the government goes about the business of funding and administering research, and I'm now at the Science and Technology Policy Institute, 10 years this month, which is amazing to me. And I'm here because I'm not through helping government do it better: the whole process of funding and enabling and supporting research.
I'm here particularly because I have done a number of publicly available reports for the NIH on their high-risk, high-reward research program. We've evaluated the process, how they call for the research and fund it, as well as the impacts of that research, so I've done evaluation across that spectrum. And what I'm interested in bringing to this panel today is some work that has been published from the first year of a three-year project on the NIH anonymized review process: the actions a government agency is trying to take to improve the conduct of research. So that's what I bring today. I'm excited to be here and to learn all about the efforts of this group. Thank you for inviting me, and I look forward to the discussion.

Thank you so much, Sally. It's great to have you here today. So we have two more Mentimeter questions on this topic, and they're coming up right now. The first one is: have you ever participated in a non-standard granting process? Okay. Quite a few noes. More yeses than I would have thought, though.

Yes, absolutely. Many yeses, actually. I would be a yes for this question; I've done a lottery thing before. And this is good, because we have a follow-up question for this one as well. I don't see many more responses coming in, so I'll just move on to the follow-up. For those who responded yes: was it a successful experience? Yes, no, or not sure? Again, a lot of yeses. That's really interesting.

Great. Okay. Some people are still thinking about it. Okay, I think that was about the extent of our yeses from the previous slide. Absolutely, that's amazing. Very good. I think that's it for that. I also want to thank everyone for adding your feedback in the chat on the metrics or practices that your institution is tracking; we'll note that down for the discussion as well. So with that, thanks a lot to all the panelists for these introductions, and I'll pass it back to you, Nick.

Yeah. So now we're going to move into the standard panel discussion time. Delwyn might jump in with another Mentimeter question at some point, based on the discussions we're having. But basically I'm going to throw some questions out there, and whoever among the panelists is interested in offering thoughts on them, whether it's one of you, all of you, or anything in between, that's great. And then we'll just move through from there. You can raise your hand or whatever, and if no one raises their hand, I might pick on someone. So I guess my first question is: in the areas you're working in, how did you first come to understand that this was a problem worth pursuing? And then how did you get from the problem to the solution you arrived at and are working on now? What was the basis for getting there?

I'm happy to have a first go at talking about that, if you like. This may be true of many of us, I think, but for me, it was personal experience. I was trained as a lab scientist and was trained to play the game that so many of us have to play in order to have a career. And over the years, I think I got increasingly tired of that and a bit frustrated by its arbitrary, capricious nature.
And the extent to which aspects of the science I was doing that were completely out of my control, such as the results I would obtain (and which should be out of my control if I'm doing proper research), were the very aspects upon which I was being evaluated at every turn: whether I would get a paper accepted would depend on my results; whether or not I'd get a grant would depend on how many such papers I published; and so on. And I think it's a kind of rage, a quiet rage, that prompted me to start thinking more about these issues and stepping back from it. And that's where, for instance, my involvement in registered reports came from. Then, in terms of what actually happened to make those changes take place, I think a lot of that is luck, and it's timing: who else is doing this at the same time. Social media has been hugely important in connecting us in ways that were not possible 20, 30, 40 years ago. The issues we were talking about, lack of reproducibility, lack of open practices, lack of reliability and transparency, these are not new; they've been discussed since the 1950s and 60s. Why now? Why did things change now? Well, I think the ability to simply talk to each other more easily, and to work together rather than being little isolated silos of activity, has been hugely impactful. That, combined with a certain element of luck, means that things can just happen. And it's difficult to say exactly why, or what deep strategy you'd build on that, except: don't ignore your own sense of inner frustration. Act on it, and see what levers you can pull yourself to make change.

I'll jump in here next, because the work that I'm doing for NIH on the anonymized review process directly addresses some of Chris's frustrations, frustrations that we all felt, even when I was managing the review process; frustrations on my part at a flawed process. Because the idea of meritorious research is very nebulous. And you all have seen the publications that show there is bias in the process: based on gender, based on age, based on race and ethnicity. So it's not just about science in who gets the grants. That was part of my frustration, following on from Chris's frustration from another perspective, and it drives my trying to create a remedy for some of these biases. I'm constrained by the fact that I need to do it in a way that is intellectually rigorous and statistically sound in order to convince government that changes need to be made, and what changes should be made. So I just wanted to share that part of that whole flawed process.

I can jump in too. Just to extend on Chris's and Sally's great comments, my experience has been quite similar. The feeling I had in graduate studies was that I was willing to work hard, I was willing to work long hours, but not to produce research that I felt was not rigorous, not powered. Everyone kept telling me I was doing such a great job: I was getting grants, I was publishing papers, I was doing everything right. But I knew that what I was producing wasn't adding value to the scientific literature, or wasn't adding it in an optimal way. So that certainly, from a personal perspective, really resonates with what I think Chris and Sally have indicated as well.
From the perspective of DORA, I think it was pretty much the momentum that these folks have talked about: not just individual researchers, but also societies and funders and different stakeholders coming together to raise the flag and say there's something wrong with the way we're being assessed, that it's not supporting optimal scientific rigor and processes. And the fact is that our systems of scientific assessment were not developed scientifically in any way. They were developed from old boys' clubs and systems that were inherently biased and perhaps not thought out; they just evolved over time and don't meet the needs of our scientific community. So I think DORA really brought the community together with a diversity of different stakeholders and raised that issue. Of course, the first step is raising the problem. Many researchers feel it; now we've raised it as a community, a more dynamic group, and that awareness raising, I think, is really critical. The next step, of course, is addressing it: coming up with solutions and monitoring those solutions over time.

I want to say, though, that in talking about the problem in the research ecosystem, especially as it pertains to research assessment, we have to be cognizant of jurisdictional differences. There's a sense I get that people in Europe, for example, who may be further ahead in the open science sphere, may be really focused on coming up with solutions. In Canada, it sometimes feels different. I actually did my training and worked in Europe for almost a decade, and when I returned to Canada, it was almost like a return to the Stone Ages in terms of people's awareness of the problems in the academic reward system and their willingness to bring evidence to the fore around those. So I think that while we need to be solution-oriented, we shouldn't completely dismiss the need for education and awareness raising as a first step, to make people aware of the problem. Certainly, in institutions that I've consulted for or liaised with, many administrators, for instance, use things like the impact factor and the h-index, but they don't know what they are. They don't know how they're calculated. They don't know what their biases are. So I think having that education and raising the problem in certain communities still has value at this point.

I would love to jump on that point of thinking about the environment that you're working in. I think that's maybe twofold. One is the jurisdiction: what is able to be changed because there is a formal infrastructure for it. And the other is having an environment which supports you in taking the time to try to make those changes; the two aren't always lined up. So I am now based in Germany; I was before in the U.S. I have to say that in terms of clinical trial transparency practices, there are countries that are further along in making these changes. But in terms of actually being able to solve a problem, I am here at an institute that has decided to invest resources in doing this work, because all of us are taking time: we've all left the research fields we were working in before, and we are now spending our time looking at science as a system instead of the systems we were originally studying. So I think it's twofold. Yes, jurisdictions can be a challenge if they don't have a legal basis for what you're trying to change, or don't have the support within the legal framework.
But if you're at a place where you can actually make the change, that can help. And I think that also goes back to your question, Nick, about how you go from finding a problem to finding a solution. I didn't come to clinical trial transparency because it was the problem I was aware of. I left cognitive neuroscience because I saw similar underpowered studies: I was working with fMRI with 20 people, making huge conclusions that I didn't understand how we were getting to statistically, et cetera, et cetera. And I have just gone towards places where I can make a change. That means I have changed my topical focus. But I think there's so much reform we can implement in changing the scientific ecosystem that, for me, choosing to go to environments that are ready for change is more powerful than necessarily focusing on a single specific topic.

So I want to move us now to a couple of potential questions about stakeholders and stakeholder engagement, because that's such a big part of implementing any of these things: we're working in a complex system with lots of different stakeholders and plenty of blame to go around. People in the meta-science community, like I said earlier, are not shy about placing that blame. But eventually you need to get everyone to stop passing the buck and actually start acting. And I'm really interested in how, in all of your work, you're dealing with trying to build consensus, or build momentum, or change policy, whether that be for individuals or institutions, at national or international levels. Just to start us off: what are your first steps for identifying which stakeholders you feel you should be targeting to get your idea off the ground? And then I'll follow on with a couple of other questions about stakeholders after that, but we'll start there.

I would say look at the gates: who's controlling the gates? I always look at gatekeepers, and there are basically three in our work. Journals and publishers are one set. Funders are another, and institutions are another, so universities, though of course institutions can be broader than that. And I do think there are additional stakeholders in industry that can play a role as well, but the core here is really those three. So, certainly with registered reports, I began by looking to see who would be in the best position to support this new article type which eliminates bias, and at the time, that was academic publishers. We've since moved away from that, and actually one of the interesting aspects of this is that, over time, as an initiative gains a bit of steam, it can grow its own wings and doesn't need to rely on existing structures so much. Existing structures are often useful for getting an idea off the ground, but they can also hold you back, and there's a danger that if your initiative is successful, it can strengthen those existing structures in a way which is actually detrimental to the community. So there's also a point at which you try to promote the initiative beyond those structures, as we did with registered reports. But to answer your question: for me, it's certainly looking at the gatekeepers.

I'm going to jump in here real quick, because I don't know if it's my job to challenge another panelist's statement. But Chris, what is the basis for you saying that the registered report eliminates bias? Has that been analyzed?
Because I have found, in studying diversity and demographic change in the grant process, that there are a lot of assumptions about the process, about grants and awards, that I'm not able to track down and find to be valid. There are assumptions that we make. So I just want to check that when you say that process eliminates bias, we have tested ourselves; we have checked ourselves.

Yeah, no, thanks, Sally. Great question. So I'm talking particularly about publication bias and reporting bias; I'm not talking about other forms of bias, and there are so many. As you rightly point out, there's bias against certain demographics in the scientific process and all sorts of EDI issues linked to that. I'm specifically talking about publication bias, where journals selectively publish positive results, so results that support the hypothesis or report statistically significant effects, and reporting bias, where researchers feel pressured to selectively analyze their own data to produce such findings, even when they're likely to be false positives. So I'm not making any claims whatsoever about other types of bias. In terms of the evidence for the specific types of bias I'm talking about being addressed in registered reports: we have emerging meta-scientific evidence that the rate of positive results produced by registered reports is dramatically lower than in the standard literature. Depending on the field you're in, the rate of positive findings in the life and social sciences ranges from around 80% to nearly 100%, which is unrealistically high if that literature reflects the true state of reality. When we look at the distribution of such findings in registered reports, where those biases are in theory eliminated, the rate of positive results drops down to around 40 to 50%. So that's the first piece of emerging evidence that it works.

I don't have a direct response to that comment, but to what you were saying about the three gatekeeper groups: those, journals and publishers, funders, and institutions, have definitely been the stakeholder groups, or specific organizations within those groups, that we've targeted. But I would challenge this to say that even stakeholders who may say, oh, I don't have power, actually have more power than they think. And they also have strong incentives that may speak against reforming their behaviors. For example, as an early career researcher, it's a highly volatile world for me; it's scary. If I say, no, I'm going to work a long time to make sure my code is entirely reproducible, I'm going to put it in a Docker container and spend my time doing that, then I'm going to come out with one good publication instead of three. But I can still choose to do that. And I think one way to act is also to find the few leaders out there who are willing to take risks, who have whatever circumstances that allow them to take risks. That's one of the approaches we've been able to tap into as an institution based within a medical university: sometimes we just find one clinical trial leader, say a medical doctor and professor, who is more open to taking some risks. And again, they have less risk than an early career researcher, but still more than a funding agency, which isn't an individual person but a more anonymous body.
So I would say, for us, we've taken two specific stakeholder groups as our focus so far. We've targeted institutions: we created an institutional dashboard, and then we've used non-traditional research methods to disseminate it. We are having workshops, we are having stakeholder engagement events, we're bringing people onto phone calls, we're getting their thoughts on whether this is usable or whether this is a terrible idea, and we're having those conversations with institutional leadership. At the same time, we are running a study with individual trialists where we say: here's what your trial looks like, here's where you're doing things right, here's where you're not, and here's how you can still improve. And we're not expecting an overwhelming change, because we're asking people to do unrewarded work. But we think that if there's enough of this increased awareness, and this ties back to what you said before, Kelly, about increasing awareness, there might be one or two or three people who start changing their behavior. And slowly, if we're taking this multi-stakeholder approach, these efforts will come together. I will also highlight that we are now starting a project where we're going to target funders more specifically, because we think they have an incentive to make sure that the trials they're funding actually get published. So we're going to try that new stakeholder group now.

I think these are such great points, Maya; they're really resonating with me. From my own perspective, in terms of engaging stakeholders in my own meta-research, I actually don't find it hard to engage stakeholders. I find in my community there's some growing interest and awareness about issues, for example around research assessment or around implementing open science. There's a difference, though, between people wanting to engage and meaningful engagement. Sometimes, to take the step from being interested, aware, and wanting to be part of the discussion to actually moving forward, one big gap is resources, and that can be fiscal resources or personnel resources, depending on the stakeholder. Or, in the Canadian context, I find it's actually expertise: I find in our jurisdiction a lack of expertise in meta-research, meta-science, and open science policy. As a consequence, you can bring everyone together, but actually moving from having people sitting around a table to getting something done can be challenging if you don't have the right expertise and skill sets or experiences in the room. When it comes to DORA: DORA obviously has a very open call for engagement. They place a lot of emphasis, especially in their new strategic plan, on EDI and considerations of inclusivity. They do a ton of community engagement: they publish blogs to make stakeholders aware of what they're doing, and they organize interviews, presentations, and so on, on research assessment. They're developing resources, and that's a great way, I think, to engage stakeholders: give them tools to help and support them. And those tools are developed in partnership with organizations, which is really important. So it's not a case of just handing tools to a community and hoping they resonate; it's a user-centered design, where the tools are developed with the community.
DORA also does a lot of advising to academic institutions, and places a strong emphasis on convening stakeholders at conferences and in sessions like the one we're having today. I think those are really important in identifying new stakeholders, continuing that discussion, and creating that engagement in our community. Because at the end of the day, many of the practices we're talking about, whether it's implementing reforms to researcher assessment, implementing registered reports, improving trial reporting, or improving grant reporting, all of this is behavior change. So we need to have stakeholders engaged. And there are scientific processes, like implementation science, for looking at and creating behavior change. So rather than just thinking, oh, what should we do, I think we have to lean on the people with that expertise and ensure that we monitor our stakeholder engagement and the actual decisions around implementation as well. That's really key: to make sure that we're not just a bunch of folks with good intentions trying to move this meta-science initiative forward, but that we're actually monitoring and knowing that what we're doing is having the effect we want it to have as a community, and that that effect is sustained over time.

Nick, may I respond directly to that point?

Sure, go for it.

Because I think that ties into a challenge we've faced, which is that, on the one hand, I don't know if there's a specific meta-science skill set, but what we're doing is so broad: we're going from very basic web scraping, super technical work, all the way to the very human side of theories of behavior change and implementation science. And one person can't do everything; we just don't have the resources to get skilled in everything. I deeply believe that we can get better at something. But at the same time, I can't get so good at the qualitative skills that I can do proper user-centered design for the tools I'm developing, at the same time as I'm creating the back end of those tools, at the same time as I'm in meetings with high-level stakeholders to get them to adopt the tools. And that's been a challenge for us: one, to identify specifically those needs, then to find that expertise, and also to figure out how to fund that expertise. Because, and maybe Sally knows more about how grants work, there are limited ways that you can use funding, and a lot of the work that we're doing now goes beyond the traditional way of organizing a research project.

And I'm going to add on to Maya's comment; part of it is for clarification on my part. You talk about behavior change in stakeholders, but from my perch in the world, looking across the U.S. government, there's a significant obstacle in policy change. You can change the behaviors of your applicants, in your research community and the university, all you want, but the funders have got to change the process. There have got to be policy changes. And maybe you're defining those as behavior changes, Kelly, and I'm not. So I'm a little confused there, because I come from a very concrete perspective: the process has to be changed, and then the people follow.

I'll jump in. I think this is super fascinating. Sally, obviously we're in different jurisdictions and the processes may be different, so I guess there are different approaches. There's the idea of a grassroots initiative.
And then there's, of course, the top-down approach. And I think there's probably a place for both of these in terms of reform. So hearing you say that the funders need policies, that we need policies as a starting point, and then maybe some education and implementation of those policies: to me, that makes sense, and I agree. But it just doesn't jibe with my experience, at least in my system in Canada, in terms of successful implementation. For example, in Canada, our tri-agencies are the largest federal funders of research. They've got lots of policies. In some cases, I think they're good policies; in some cases, I think their policies are perhaps not specific enough. But to give an example that may interest Maya, around clinical trials reporting: like most jurisdictions in the world, we have a mandate to register and report our clinical trials in a registry. And until very recently, like this year, the wording of the policy specific to reporting results was that researchers had to report the results of the trial in the registry where it was registered "without undue delay." So you try to enforce "without undue delay." At institutions where I went in and audited who had registered and reported their finished trials, it's impossible. You email someone and say, you ought to do this, the policy is that you should do it without undue delay; well, some of the senior scientists I spoke to felt 18 years was not undue delay. So it was very challenging. We have similar policies around open access, for instance. Again, we're behind the curve in terms of immediate open access, but we have this 12-month embargo period within which we ought to have things done, and there's absolutely no monitoring of this. There's no monitoring of trials either. Our group published an audit of Canadian trials: they're not being reported in their registries about half the time. And something that may interest you, Sally, is that when we looked at Canadian trials that had an American site involved, so one site was in America, we were better at reporting those trials in registries, because you guys sort of lift us up; your culture of reporting is different from ours. So I think, yes, we need policy, but more so we need quality implementation of policy, and audit. That may exist in certain jurisdictions for certain practices; it's not the case in Canada. We do not monitor the vast majority of our policies. There was a survey this past year of grantees of the Canadian Institutes of Health Research: anyone who got funding to do a trial was sent a survey about registration and results reporting and so on. And the report is almost comical. It says they had a really hard time with the response rate. I mean, you're the funder. You can demand that people report this information to you, right? This is not an optional survey. If you want to enforce your policies, there are steps you can take to get this done to 100%, or very close. So I'll stop there; the rant must end. But there are definitely jurisdictional differences, and policy is important, but policy alone will not get us to successful implementation.

So, keeping this conversation going to an extent: I'm really interested, too, in how you deal with stakeholders. I think Sally challenging Chris on the evidence for registered reports reducing bias is indicative of what I'm getting at with this question.
If Sally were the decision maker and the stakeholder you needed to convince, you might really focus on the evidence. And I think this is particularly interesting because we're working within science, where everyone is supposed to care about the evidence, and the quality of the evidence supporting something. But also, just reflecting on my own work on trial reporting, the evidence isn't always enough, right? We have plenty of evidence for lots of things that work, but in one way or another no one implements them, even when they might be effective. So evidence alone is obviously not enough to make these changes. Thinking about my own work: when we're trying to convince people to care or do something about trial reporting, you can give all the evidence about how we can't effectively evaluate interventions, but you also might need to appeal to ethical considerations, saying we're reducing research waste, and that all these people participated in this trial and put themselves in harm's way, and it would be doing them a disservice not to report the research. Those are very different appeals than simply saying we can't do an accurate systematic review. So I'm wondering, in all of your experience of working with these stakeholders and trying to convince them: within science, has it mainly been appeals to evidence, or appeals to lots of other things as well? And what are those other things?

I think that's a great question, if I can just jump in there. You mentioned that evidence is not enough. And if I can say something slightly controversial, I also think it's not always necessary either, particularly when you're starting out. I remember when we first proposed the idea of registered reports, one of the objections some journals gave us as a reason for not adopting it was that there was no evidence that it worked. And you're never going to get the evidence that it works unless you adopt it and try it first, so you can end up in a kind of loop. Particularly when you're trying to get an idea off the ground, appealing to logic and philosophy and ethics and these kinds of things is certainly very valuable. Then, once things are going, the process of evaluating the evidence becomes crucial. But I think sometimes opponents of reform use the evidence barrier as a way of resisting change. They say: we're not doing anything until you show us the evidence that it's better than the status quo. And my response to that is always two things. Number one, if you don't try it, you'll never get the evidence. And number two, what is the evidence that the status quo works? There is usually none. If you look at the regular peer review process and you ask people, why do you do it this way, what's the evidence that this approach is best, you will get silence. It's just the way it is. So we have to be very careful about that sort of status quo bias, which can make any change an insurmountable prospect. We've got to make sure that we look at things on their merits. And sometimes I think that means saying that evidence, at the beginning, is not the most important criterion to be considering.
I find that very interesting, because I think it also resonates with the people, individuals or even organizations, who are willing to take risks even when there isn't evidence yet. And actually, it's making me think that we need a high-risk, high-reward program for meta-research implementation as well; maybe that's something NIH should consider. But a powerful tool that some of my colleagues at the Quest Center have tapped into is connecting through emotions. Especially in the space of clinical trial transparency, you have existing groups that are working far from the logical space: patient advocacy groups. And I think we actually see quite a few of these projects on pediatric studies because, man, children are dying and we're burying those results. That pulls at the heartstrings. So one of the powerful tools for implementation here, evidence aside, is tapping into existing patient engagement groups that want to engage in the research. And we should also be engaging them in our meta-research, because we're already asking trialists to engage them in their research. If we get them to engage in our research, they can take our data and use it for their advocacy work. This is also something I think about a lot: where's the line between a scientist and an advocate? Is there a line? How much can I argue beyond the evidence? The evidence is my comfortable zone, but in the end, it's blurry. So one tool we have used is collaborating with people who are more in that advocacy space, who work on emotions instead, and saying: here's our data, help us disseminate this message in a way that you can carry. That gives us some space to play both roles.

Can I jump in here? Oh, Kelly, did you want to go?

You go ahead, Sally. I'll jump in after.

When I listen to this, nobody on this panel is wrong. Everybody has points to make, and they're all valid. And these are parallel paths. So we think about which stakeholder groups to talk to, and we talk to them in parallel, because we're trying to bring about a systems change, and it has to come from multiple components. The one we haven't brought up, from my very jaded perspective after all these years in Washington, D.C. (I just want to own that), is how the process is incentivized. One of your parallel tracks, really the most potent one for changing behavior, is where you put your money. What if we did not give a block grant of $5 million for a clinical trial, and instead said: step one, you get this much money; once you've published your results, or you've met the steps Kelly was describing for disclosing your data, then you get another chunk of money. I'm making this up as we go here, and I don't know anybody that's going to do it. But I do know, from when I worked in program, that wherever I put the money, that's where the science went. So I would just like to put that out there as a parallel track to consider alongside the stakeholder engagements you're talking about, because, unfortunate as it is, that's a major player.

Just really quickly, before we go to Kelly, to respond directly to Sally: I actually had a conversation with a colleague in France the other day, and I know you were just speaking in hypothetical terms, but let's pretend we were implementing that policy.
So in France, the main funder of clinical trials, I believe, does have something like that, where they withhold 10% of your funding until you report, and apparently even that isn't enough. People at times would rather forgo that 10% of the funding than report their clinical trial correctly. I don't know, maybe there are ways for people who know the system to claw back that 10%. But I don't make that point to say you had a bad example; that's not what I'm saying at all. It merely shows the complications in trying to implement these things: you think, oh, if we withhold 10%, everyone's going to do it, everyone's going to want that money, and even that might not be enough.

So, first of all, I'm going to tell you 10% is way too low. It's got to be like 40%. You've got to make it hurt, which is sad but true; I'm a pragmatist. And that's why I talk about parallel paths. It's just one track to change that I think should be acknowledged alongside the other behavior changes we're encouraging, because behavior change is what we're doing. But you have to reach the level where they're going to pay attention and do it. We had the same problem for years, and Maya can probably speak to this, with data sharing in the US. The agencies made a recommendation that federally funded research data should be made public. Nobody did it. But now we're changing that behavior; we're changing the incentive on that, tying it to winning your grant. It's incremental, and all of this is going to be very slow. But I take your point, Nick. Thank you.

Great points. There are so many interesting things being discussed here. I want to jump on some of the things Sally has just mentioned, because they resonated with me, around restricting funding, or resources more broadly. When I'm working with stakeholders, and I know when DORA works with stakeholders as well, we're looking to implement things or make change, and one thing you can do is exactly that: provide resources. So, for instance, DORA has community grants that folks can apply to. When you have people engaged and you want to move them from engagement to action, providing them with funding to implement something, or pilot something, can be a first step. There's also, for instance, Project TARA, Tools to Advance Research Assessment, which offers a number of different resources: survey results from across the States, and a repository of examples of folks changing research assessment. Separately from that, DORA maintains case examples of institutions that have implemented researcher assessment reform. And I think these types of resources are really essential, because if you have a stakeholder engaged and you want to keep them engaged and move them towards action, sometimes, when you're working with, in my case, research administrators, it can seem like an overwhelming task. And no one wants to go first, because it's a lot of work to go first. So if you can give an institution, or an individual, resources in the form of funding or tools, I think that can be really helpful in progressing them forward. Even if the tool needs to be modified for their institution, it could be what gets them beyond thinking about doing this to actually having a concrete plan, and a little bit of resources, to move towards implementation.
I find that in the absence of tools or examples, when it comes to researcher assessment reform or implementing some of the open science policies that I've worked on at institutions, it's too big, it's too conceptual. The folks in the decision-making spot often don't have the expertise either. They may be warm to the idea, or appreciate that we need to do this, but just don't know where to begin. So these types of resources, I think, are key.

To move us on: there was a question in the chat from Uzgar Uzar, and I'm very sorry if I mispronounced your name. They asked, do researchers, PIs, and supervisors that have power over early career researchers play a role as gatekeepers for this change? And I think that gets to a point I wanted to address about overcoming the institutional inertia that can stand in the way of a lot of these changes, especially in science, which, despite being where we get a lot of our societal advancement from, is inherently conservative in a lot of ways as an institution, when you talk about the mechanics of the large-scale systematic change we're discussing. So, has that institutional inertia been a big barrier for you at any level: at the level of the individual researcher, at the level of institutions, at the level of national and supranational bodies? And what strategies have you employed to overcome that inertia?

That's a great question. If I can just jump in quickly, I won't be long. Yes, it's a huge issue. If you look at a department, and you look at the hierarchical structure of academic departments across most of the world, there can often be a small minority of senior people who hold disproportionate power over decision-making. And early career researchers who wish to adopt more open practices often find this a barrier in convincing their PIs to try something different or to overcome misconceptions and whatnot. We encountered this very directly early in the life of registered reports, because no sooner had we proposed the idea than about 100 senior researchers went on the attack and tried to kill it. Some of you who are familiar with the history of the initiative will remember all of the brouhaha that happened back in 2013 over this. And at the time, I thought, wow, this is quite something. I mean, why such an emotional response from all these senior people? In terms of strategies, I figured out something very important, which is that there are certain people whose minds you will never change, and it's futile to try. It's better to change the environment around them in such a way that if they don't change, then they lose something. It's the old carrot and stick. And if you focus on that approach, you find that there are plenty of people who are willing to try something different and are willing to change. That's not just the junior researchers; there are also a lot of senior researchers who are very motivated to try to improve science for the sake of the next generation and of their own work. So I think it's about finding the friendly people in the crowd, and also, and this is really simple, just sticking around. A lot of the opponents of reform don't have a lot of patience. They don't want to fight a war; they don't always have the resources. And if you just persist, if you're still there five years, ten years later making the same arguments, you will simply starve them to death, to put it bluntly. They run out of steam, and you end up winning the argument.
But I'm not sure there's any great wisdom in any of that, except: keep trying. And my message to early career researchers facing these barriers is, if you can't make the changes you want to make because you've got a PI who won't let you, it's not your fault. You may have to suffer that for a while. But when it comes to the next stage in your career, actively look for a supportive environment that will help you do the best science you can.

I really appreciate those points. And what you said about focusing on the people who are open to change ties into what I was saying we were trying to do, which is finding those two PIs who are willing to change. But I have to go back to Kelly's point about resources, because even those open people are open to it only to the extent that it doesn't mess with the incentive structure they're functioning in. And tying back into what Sally was saying: there are people who are just fully closed, whom I'm not going to waste my time on, but there are also people who are fine with it as long as it doesn't take any more time than what we're currently doing. So if you can figure out how to do a registered report without the peer review before the project starts taking any extra time, sure, they'll do it. And this is where we have this resources issue, and I don't have a solution to it — I have a massive barrier — because we, the QUEST Center, are a research institute. We are connected to the institutional resources that support clinical trials, but we have no power over them. We have conversations, we've made them, I don't know, co-experimenters in our studies, we've gotten their buy-in, but we don't know how to get them any more money. So the trialists who are open to making change say, yes please, but I need help. And our response is: sorry, here's a hyperlink we can give you, but we cannot give you someone at the clinical trials office who will provide that support. Again, this is where the systemic approach is important, because the UK — and I know Nick can talk more about this — has seen quite a bit of change because Parliament came down and said, okay, institutions, do it, and then they each hired someone to be responsible for clinical trial transparency. So yeah, that's the openness-with-resources issue.

I also want to push back on this sticking-around point that you brought up, Chris, because I think there is a challenge, again, with stability for us as people within academia. I work on third-party funding in Germany — I work on grants, and most of my institute works on grants. We have some people in permanent positions who are not professors, but I have no stability. So I am still playing the game to survive. And the honest truth is, I would love to keep working on this individual-trialist work that I've been doing for the past three years, and I can't, because my next grant is to work on funders. So all this relationship-building that I worked on — I officially have no time to work on it as of June 1. That's really hard. It makes me not the strongest player in this. And yeah, I'm trying to figure this out. I know we've also got some people in the audience who have left academia and continue to work on these issues, asking: okay, what if I work in another sector? Will that offer me the stability and staying power? But yeah, this is definitely another challenge.

Yeah, I want to jump in. These points are super relevant.
And for me, I feel like I'm just nodding the whole time in agreement. The point about persisting really resonated with me. I think I'm a few career stages ahead of you, but I still feel that. I have a permanent, so to speak, position now, but there's not a lot of funding to do what we do in my arena, and it's very challenging. But Chris, your comment about persisting resonated because I work very closely with David Moher, who is a mentor and collaborator, and he frequently says: we just have to keep going. Everything we do, we hear no, no, no, no, no — and we just keep going, and eventually someone says maybe, or yes, and we keep going and we make progress like this. And he has said this change may not happen in his career, but he's hopeful it'll happen in mine. So the persisting may even be across generations.

At the same time, even when we feel we're not getting the revolutionary change we want, we shouldn't, I think, be completely pessimistic, because really amazing things have happened — particularly in psychology as a discipline, but also in different federal jurisdictions in terms of funding and policy. We're moving in the right direction. A lesson for me about wanting revolutionary change and getting incremental change was a workshop I hosted, along with a colleague, Manoj Lalu, on preclinical study design and reporting. We were super excited about this. We poured our heart and soul into creating really interesting examples — things like statistical power, everything from animal husbandry to reporting results in a clear and transparent way — with examples taken from the literature of how it's done poorly and how to do it right. Our attendees were graduate students and research staff at our institution, and it went really well. We felt so energized and empowered afterwards. People had solutions to take away and implement, and we had them back a couple of months later for a part two and an update on how they'd done since the initial session. And what we found — which was exactly the comment in the chat — was that supervisors were a complete barrier. Although attendees left feeling empowered, knowing what to do and wanting to implement, their supervisors didn't want them spending time on this and didn't understand what we were talking about. I found that really, really disheartening at the time, but we also worked to make incremental change. So rather than adopting a whole suite of practices, they went back and did one thing that year that changed and increased the rigor of what they were doing. And, you know, that's maybe not ideal, but it's better than nothing. So I've had to temper my really optimistic desire for speed and revolution; I've had to rein that back sometimes in certain conversations and do what's practical.

I just want to point out that I think Chris is going to leave us for childcare right now, but we're just about wrapping up anyway. Yes, I am so sorry — thank you so much, and for the great discussion. Yeah, thanks a lot, Chris.

I want to kick it back over to Delwyn in a sec, but I just want to ask one last question, and maybe I'll go rapid-fire between our three remaining panelists — each of you just give a short answer as a wrap-up. Can you define what success looks like for your goals?
And then once you reach that success, how do you determine what comes next? That might be a lot — keep it short, you know, I want to leave time for Delwyn to wrap up. But just to give a very quick example: the AllTrials campaign — all trials registered, all results reported — is something my mentor and boss, Ben Goldacre, helped set up about 10 years ago. He had to fight opposition from all quarters of academia and industry, and now it's just been announced that the new clinical trials legislation in the UK is going to have real registration and reporting requirements built in, with broad support. So that's a win — an example of this sort of thing moving forward despite resistance — and, as Maya said, it's not always easy to keep that momentum going. So: what does success look like for you in the area you're working on, and how do you transition to the next thing? Kelly, let's start with you.

Sure. From, I guess, a DORA perspective: DORA has a new strategic plan — you can check it out; it's from 2023 and covers the next three years. One thing they're looking to do is actually support advocates of research assessment reform worldwide — so really moving from buy-in and awareness-raising in certain pockets towards actually creating support for advocates. That will include a DORA advocate toolkit and trying to grow DORA's online community, and they'll continue, for instance, DORA's engagement grants. Another key next step in terms of reform, as it's come up so many times, will be securing funding for DORA to persist — as Chris said — in the longer term. Funding is really essential, I think, for the professionalization of DORA, to have core staff who are actually paid to get things off the ground. And to be honest, from my own perspective as a meta-researcher, my goals would be similar: to get funding, to sustain what I'm doing, and to push forward tools and resources for the community to move us closer to this goal of openness and transparency.

Maya? Yeah, sure. I guess I'll push back on getting funding for our jobs, because sometimes I think my big-picture success goal is making my job disappear. I would love to be out of work, because I don't think this — specifically, trying to get clinical trials to report — should need to be a topic. So that's the big-picture success, and it is far, far away. But I actually want to go back to a point you brought up before: wanting to engage versus meaningful engagement. I would say that wanting to engage is itself a success. Seeing that people are open to the idea is already a step forward, and it's easy for me to forget that and say, oh my god, we didn't get changes in the numbers. But that is the first step. And then meaningful engagement means I'm actually seeing an improvement in a specific clinical trial transparency practice. And maybe I'll just shout out that there have been some meaningful changes. For example, reporting of trials registered in the EU clinical trials registry — thanks in no small part to work that you, Nick, and your supervisor have done — has jumped at the Charité from just over 50% in late 2020 to over 96% now. So, to remember that, hey, we are getting better at some things, and that's already a success.

And Sally?
So again, from a different perspective: for me, success is actually revising the grant review process — and the grant award process, because review and award are two distinct steps — because right now it is a very flawed process. So figuring out how to fund researchers in a way that addresses some of the very issues brought up here. Researchers need to be funded for longer than three years at a time; they might need to be funded for 10 years at a time, with interim steps — figuring out what supports the current style of research. Our grant program was developed following World War II, more than a few years ago. So the idea that we need a new process that meets the needs of today's researchers is clearly something we should be trying to achieve. For me, revising that process so that researchers get the funding, have the time to report their results, have the time to discover that their study design didn't work, revise it, and move forward without jeopardizing their next grant — I think that would take care of a lot of the reproducibility issues, if researchers had time to go back and fix things themselves. So fixing that process would be a huge success for me.

Okay, I'm going to hand it back over to Delwyn now to wrap up and incorporate some of the different strands that were running throughout this. Are you ready to come back to us, Delwyn?

All right. Thanks a lot, Nick. So yeah, there's been a lot of frantic note-taking in the background here. Frédérica and I have been jotting notes down in a Mural — some of you are familiar with that; it's kind of a whiteboard. This is not comprehensive, but I think we're going to be daring and give you a sneak peek. The idea is that we'll keep building on it based on the notes we took today, and then hopefully share it with you after the meeting so that you can also take away some of the lessons learned. So I'll just share my screen. Here we go — hopefully you can see this. Again, this is just an attempt to consolidate some of the ideas discussed today.

First, we talked about the first steps: how do you move from identifying a problem to proposing a solution? We talked a lot about frustration with a flawed process — a lot of biases, assessment that is not scientifically developed, and so forth — and there was really a call from the panelists today to act on that frustration. At the same time, there was also an acknowledgement of the importance of timing and luck, which is not to be minimized. There were also comments on the importance of recognizing that some people are in a supportive environment, with the financial resources and protected time to really work on solutions, whereas others may not have the space and bandwidth to do that. So maybe an option is also to choose environments that are ready and supportive for change. But an important step identified right at the very beginning is simply to raise awareness of the problem: that is a really important first step, and a meaningful one that has a lot of value.

Then the next theme of the discussion revolved around stakeholders, and three of the big stakeholders mentioned today were journals and publishers, funders, and research institutions.
We talked about the importance of identifying these gatekeepers — stakeholders such as these. But there was also a call to find a few leaders in the community who are willing to take risks. So maybe it's not so much about appealing to a broad audience; sometimes it can be more impactful to identify two or three people who may have the environment, or just a really strong motivation, to help you, and who are also willing to take risks. There were also concerns about how existing structures can hold you back, and so it's really important to promote initiatives beyond those existing structures.

Then we moved on to some of the ways in which you can engage stakeholders. We talked about initiatives like the institutional dashboard that communicates to people how an institution is performing on certain practices, as well as phone calls, interviews, and so forth. But an important point here was the importance of giving people tools — tools that are not created in isolation but together with the community, to make sure they fit its needs. So tools were really seen as an important way of engaging with stakeholders. Of course, a lot of challenges with engagement were identified. This came up a lot: the difference between meaningful engagement and wanting to engage, and that it's really important not only to put out a reformed policy but also to actively monitor how it is being implemented and to ensure it really meets the needs of the community.

I don't want to make this too long, but this section here was about what you appeal to when engaging with stakeholders — what has worked. Chris made the important point that if you appeal to evidence at the very start of the process, you can end up in an endless loop, and that's sometimes not very productive. There's also the status quo bias: what is the evidence that the status quo is working? And if you don't try it, you'll never know. We also discussed the importance of identifying allies who can work on a different level than you can, and maybe appeal at a different level — perhaps an emotional level — and that combination of appeals can be quite successful. Again, we talked about tools as a way of appealing to people, and about changing incentives, though that's frustratingly slow at times. Also pivotal was the role of funding institutions, but we discussed that that can be quite challenging, and sometimes the only solution is to make it hurt.

And then the last two points were about how to overcome inertia — in particular the well-known challenge of convincing, for example, your supervisor, and the common problem of power imbalances and why supervisors can actually be a barrier. An additional challenge is that, as an ECR, you don't have much stability, so you're kind of playing the game to survive: you might start a project but not be able to see it through. So how do you overcome some of these challenges? Well, as Chris said, some people never change, but you can maybe change the environment around them.
And we also discussed a lot the importance of being persistent — including across generations — of focusing on those people who are open and willing to change, and of finding allies in the crowd. But again, these people also need resources: there's only so much they can do with the resources at their disposal, and that, again, speaks to the importance of systemic change. And then finally, a more positive outlook — although you can see we were running out of steam here a little bit — this was the section about what success looks like. We talked about success looking like being able to secure and sustain funding for implementation initiatives, developing more tools for the community, seeing real uptake of practices — if that's what your initiative is about — and then, fundamentally, revisions to the grant review process and figuring out how to fund researchers in a way that addresses ongoing issues in the system. So that's it from our side. We'll share this after the meeting, once we've consolidated a bit more of your notes. Thank you.

Yeah, great. Thanks a lot, Delwyn. That was really fantastic, actually. And like we said, we're happy to share that — we'll use the same channels we used for the pre-survey, which shouldn't be a problem. We're just about at time here, and I want to thank all our panelists. I know there were people here from outside academia, and people asking questions from outside academia, and I hope that, despite us all being academics and speaking a lot from the academic perspective, some of the lessons and takeaways were useful to folks working in other areas. I also hope this discussion can keep going: if you have thoughts or would like to chat about this, please consider reaching out to me or Delwyn or the hosts, and maybe some of the panelists — I know Chris was chatting with somebody in the Q&A and said he would be in touch. So thanks so much for your participation, your time, and your interest. And once again, thank you to Sally and Kelly and Maya, and to Chris in absentia, for being here with us. If no one has anything else, I think we will conclude there.