Yeah, so Kevin [inaudible] took an extra five minutes there, so we'll try to jump right into it. I'm Stuart Buck, executive director of the Good Science Project, which is an organization focused on improving science funding and science policy. Prior to that I spent several years working in philanthropy at Arnold Ventures, formerly known as the Arnold Foundation, where I had the privilege of working with Brian ten and a half years ago to help plan and launch the Center for Open Science. That was one of the best experiences of my life. Thanks, man.

So the metascience conference so far, as you've noticed, has focused largely on the conduct and practice of science. We've talked about diversity and collaboration, and the incentives scientists are facing. We've talked a lot about publications and publication bias, p-hacking, pre-registration, citation practices, all those sorts of issues. But I think one huge issue is the role of government funders in incentivizing these practices. Between NIH and NSF, and we have folks here who have worked or currently work at those institutions, we're spending nearly 60 billion dollars in 2023, most of which flows to academic scientists, of course. And I think there are at least three ways that government funders can be tremendously important with regard to metascience. One is in encouraging good practices in the conduct of science and in publication, or sometimes being neutral, or maybe even thwarting good practices. A second way is that government funders can fund metascience-type studies, the types of studies that many people in this room do. And a third way government funders are important is in conducting metascience internally: conducting experiments within their own practices, their own ways of handing out funding, their own peer review, and so on, so as to learn and improve; a kind of internal improvement study on themselves. The folks on this panel have participated in one or more of those ways in which government funders can interact with metascience.

So here's how this is going to work. We don't have any formal presentations; it's going to be a casual conversation. I'm going to ask some questions of the panelists, and then a little later open it up for audience Q&A. So to start off with a softball question: what is your name?

Richard Nakamura.

Neil Thakur. You didn't tell me you were going to ask me that.

Alan Tomkins, NSF.

All right, now to step it up a little: maybe each of you can say a few words about your biography, your CV so to speak, and where you work or have worked with regard to government funders.

I was formerly the director of the Center for Scientific Review; I retired in 2018. Before that I was scientific director of the National Institute of Mental Health and deputy director of the National Institute of Mental Health. I was in government doing program funding for a couple of decades; overall, I spent 40 years within the federal government.

I was a director in the Health Services Research and Development program at the VA, doing program evaluations, performance measurement, and research on the VA health system. Then I moved to NIH in 2005 to work on open science issues.
I don't know if we called it that back then, but I worked on the public access policy, data sharing policies, preprints, a bunch of stuff. And then in 2018 I moved to the ALS Association, where I coordinate their research program, advocacy program, and care services program.

I'm a recovering academic. I left the University of Nebraska in 2014 to join NSF, and I was drawn into the public access and open science activities at the agency starting around 2015. So compared to my colleagues, I'm a newbie.

Great, great. So all of you, as I said, have participated in the promotion of metascience, whether or not it was called that at the time. So maybe, Alan, let's start with you and talk about NSF's science of science program, or science of science policy. The program has undergone various iterations, but it's involved in funding metascience.

We are. So when I got to NSF, the program that was called SciSIP at the time, now called Science of Science, had long been involved in funding [inaudible] and other researchers on a variety of issues. It's really expanded, and if we have time, I was going to go through seven current projects that we're funding, just to give you an idea. I feel like a car salesman: I want you to think that you can drive away an NSF-funded research project by the time we're finished today, because I think you can. We need to fund all sorts of kinds of science, and that's why I wanted to tell you about it.

But in addition to the science of science program, which is in the Social, Behavioral and Economic Sciences directorate, where I work, the computer science directorate has also developed, and is probably the leader at the agency on, funding open science. Right now Martin Halbert is the program officer there, and many of you have heard about that work. You mentioned the billions of dollars that we invest across government, and NSF funds a lot of that, but not as many dollars are invested in research itself; a lot is invested in open science infrastructure, repositories, things like that. We just made an almost 40 million dollar investment that went to the University of Michigan a couple of years ago, in which we were expanding their work.

So one thing I do want to leave you with is that public access and open science are expensive, and interestingly, we don't know what the return on the investment is. Brian and others are committed to it, and we share those values, but we also ask: where should the taxpayer dollars be invested, and why? What do we get from it? Should every dissertation be open? Curating that information so that others can replicate takes a lot of time. So I would like to know what works, under what circumstances, and why, for public access and open science. Even beyond "yeah, we all want to share": how, and under what circumstances, should we be sharing?

Neil, perhaps you can talk a little more in depth about your work on open science matters at the director's office of the NIH.

Yeah, sure. Thank you.
I think, to frame this, one of the undercurrents of my time at NIH was this theme of productivity, impact, and efficiency in scientific activity. Even over my career, which I guess is getting longer now, we see more and more authors; the number of authors on a paper is increasing, the amount of collaboration is increasing, and that takes time. One of the reasons I think collaboration is increasing is that as the literature grows, people have to focus their knowledge into smaller and smaller areas of science, and then to do something meaningful you have to work with more and more people of different backgrounds, and that all has costs.

When I started in 2005, the NIH director at the time, Elias Zerhouni, talked about this concept of a digital lab assistant: some kind of bot or agent that could crawl through the literature automatically every day and tell you the interesting things happening in your field, to help you understand what's going on and deal with the deluge of information being produced every year. In some ways we're drowning in the productivity of our scientific enterprise. And now, almost 20 years later, we have a significant percentage of the literature that's machine accessible, and we have a convergence of that data and the ability of computers to process it. So this is a very exciting time. But of course we're not there yet, and we're still left with these questions about productivity and value and the benefits of a scientific investment.

That moves on to what I'm doing now and how I'm struggling with these issues. I'm now a research funder in a rare disease space, ALS, a disease that has no cure and moves very rapidly. We're constantly trying to leverage a small amount of funding, plus, through our advocacy program, additional funding from the government and other sources, to make a big change in a disease that's poorly understood. And it's very difficult to do that, in part because I don't know how to organize a scientific portfolio. I have ideas; there's a craft of science funding, but there isn't a science to it, and I would love to have that science.

So I'll give you a very simple example. It's really hard to get diagnosed with a neuromuscular degenerative disease right now. There are all kinds of complicated tests; you need specialists with experience, and it doesn't happen well. There's something like a year's delay. So if we want to put out a problem-oriented call for research studies and say,
give us research applications on how to reduce the time to diagnosis by six months, we'll get a lot of papers on developing blood tests, biomarkers, imaging studies, things that may have a 15-to-20-year time horizon before they lead to a change in clinical care. What we won't get, and I keep asking for this, is studies on how to get into our messy health system and just get people referred more effectively. That's because the researchers in ALS don't do that kind of science; they don't pay attention to it. And if we do what science funders typically do, which is just pick the best from the applicants that come in, we'll end up funding the study with the 20-year time horizon, and that means four generations of people getting ALS and dying before that benefit comes through. If we put out the same RFA and say, give us an answer that's going to help people in five years, we would get very different applications. I very rarely see, across all of our spaces, RFAs that build some kind of time-to-impact into their research considerations. And I'm not sure why, when in any other kind of investment there's always a time horizon, an ROI horizon. So I'm struggling with how to think about that and talk about that.

Now, Richard, when you were at NIH at the Center for Scientific Review, among other things you led a fascinating experiment with peer review. I'd love to hear from you about that.

Sure. When I arrived and was appointed director of the Center for Scientific Review, we had an interesting problem. One question was whether this peer review worked to produce a prioritization of the best science. But even more important was: do we do this in a way that's unbiased, that's fair to the scientific community? Are there prejudices involved that are basically unfair and unrelated to the quality of the science? Donna Ginther and her colleagues had just done a study which showed, among other things, that there were significant racial and ethnic differences in the success rates of scientists. Among other things it pointed out that black scientists had a success rate that was 55 percent of the success rate of white scientists. There were other, smaller differences between Asian and white scientists and between Latino and white scientists, but this one was really striking. One of the advisory committees to the director suggested that we think about doing a study of anonymization: that is, if we redacted information that gave clues to the race of investigators, would that change the score difference between black and white scientists?

Because peer review is noisy, and because studies of this kind are expensive, we calculated that it would be impossible to compare all racial groups. But we decided on a study which would compare 400 applications from black scientists against a matched control group of 400 applications from white scientists. In addition, we decided to have a comparison group of randomly chosen white scientists, designed to match the difference in success rate between white and black scientists. With these three groups, we redid review of the full applications, and review of applications that were redacted to remove all identifiers of the individual investigator. The study, its overall design and its analysis, was pre-registered with the Center for Open Science, a coincidence that was very pleasant, and they were quite helpful in helping us think through how to pre-register it. The study took a long time, and it was very expensive.
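A minimal sketch of the primary pre-registered test such a design implies, a race-by-condition interaction on reviewer scores; the data and column names below are simulated and hypothetical, not the actual CSR dataset:

```python
# Sketch of the pre-registered primary analysis Nakamura describes:
# a race-by-condition interaction on reviewer scores. The column names
# and simulated data are hypothetical, not the actual CSR data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # applications per group, as in the study design

# NIH preliminary scores run 1-9; lower is better. Each application is
# scored under both review conditions (full vs. redacted).
df = pd.DataFrame({
    "race": np.repeat(["black", "white"], n * 2),
    "condition": np.tile(np.repeat(["full", "redacted"], n), 2),
})
base = {"black": 5.2, "white": 4.8}  # hypothetical group means
df["score"] = [base[r] + rng.normal(0, 1.5) for r in df["race"]]

# The primary test is the interaction term: does redaction change the
# black-white score gap? In the study's primary comparison it did not.
model = smf.ols("score ~ C(race) * C(condition)", data=df).fit()
print(model.summary().tables[1])
```

The interaction coefficient is the quantity of interest here: a significant term would mean that redacting identity information changed the gap in scores between the groups.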
It's not a study that's likely to be replicated. We had regular reviewers do a review, and we looked only at the fully independent scores these reviewers arrived at, in both the redacted and full-application conditions. What we found in the primary comparison, between the matched white scientists, whom I've called controls but who are really the comparison group, and the black scientists, was no significant interaction between race and the condition of the application. So in a sense this was a busted experiment. The noise levels were much higher in our study than in normal peer review, so our power analyses and calculations did not allow us to reach a significant conclusion.

However, we did find in the secondary analysis that when we compared the white scientists in the randomized group and the black scientists, the difference in scores between the groups was halved in the redacted applications, and that was a significant finding. And it was not only cut in half: it was the white scientists' scores that got worse, while the black scientists' scores stayed the same. This has led to the thought that one possibility is for NIH to offer anonymized applications for review. I don't know if that will happen yet, but there is definitely some discussion going on about that possibility. So I think this is the kind of study the government needs to think about and continue to do.

I'd like to add that more recent examinations of the difference in success rates between black and white scientists have shown that the difference, which was still there when I retired, almost a doubling of success rates among white scientists, has been virtually eliminated. Why that is, I'm not sure, but right now the success rates of the two groups are close to the same. I'll leave it there for now.

Excellent. That's a fascinating experiment, and it prompts an additional question I'd like to ask of anyone who wants to answer: why don't we see more experiments like that? In your case the experiment was particularly expensive, because you had to redo review and find some way of anonymizing applications, which I assume takes a lot of staff time. But one can imagine a number of experiments on various metascience ideas that would be much cheaper to implement. So what are the obstacles? Is it internal bureaucracy, external politics, legal and ethics issues? There could be any number of reasons. But why don't we see, say, one experiment like this every year for every billion dollars we hand out in funding? At nearly 60 billion dollars a year, that would mean a new experiment every week, to be provocative.

From your lips to the funders' ears. I think it would be great to do that. Stuart, you and I were here at the Academy about a month ago hearing about experimentation in federal funding, and one of the things we heard was that the Novo Nordisk Foundation is undertaking quite an extensive effort. They're a private foundation, so they can do this; they don't have some of the constraints that public funders do. But they're trying to understand the review process.
They're looking at things, I think, like what you were just talking about, but in addition they're looking at other ideas. What if you give reviewers the option to single a proposal out and say, I really like that one, I really want to fund it even though you three don't like it? So they're trying to understand different iterations. But as you say, Stuart, if we don't lean in and do this kind of research, we're going to go the next 40 years without really knowing what's going on. I should say that Donna Ginther was funded by NSF for that inquiry, and one of the embarrassments I have is that she couldn't do it at NSF: NIH opened up their data; NSF did not.

When I was at the VA, we would do evaluations of the health system, and we would sometimes find problems with the services delivered by the agency that funded us, and that would cause problems. Sometimes it was good enough to say, well, we found these problems because we looked for them, and now we can fix them, and that was seen as a good thing. But sometimes that didn't matter, and it just became a justification to perhaps go after our budget. So the same issues Brian was talking about earlier this morning, about finding a problem and how you respond to it, are true for a government organization. But on top of that, you have people who are interested in changing the budget of that organization for reasons that have nothing to do with the integrity of that particular field of science. So there is what seems like an additional uncontrolled element of risk that goes beyond an individual career. But I'm not a scientist in a university, so I can't speak to that; it may be the same risk of being transparent. We had a saying in our office, what was it: transparency sucks. Because we were the ones putting out all the data about what we funded, organized by disease category and budget, and it raised no shortage of complaints and concerns. Federal agencies are very conservative when it comes to the risk of revealing something that will affect their funding, and so that tends to produce a lot of caution.

I know that I had to work fairly hard to allow some of the data from some of the studies we did conduct to be released. I cooperated with outside scientists to enable them to look at certain aspects of our data, such as the scoring of reviewers and whether there were correlations among reviewers. That was relatively difficult to achieve initially, though the system ultimately went along with it, and I think we now know a lot more about the nature of scoring within the system, particularly the scoring that scientists don't actually see, which is the preliminary scores from each of the reviewers and how independent they are.

May I quickly go back to something you were saying, Neil, that prompted a thought. As you all know, the federal public access and open science guidelines changed as of last August. OSTP, in the Alondra Nelson memo, intended to ensure free, immediate, and equitable access to federally funded research, and we're really leaning in at NSF on the equitable part. But it brings up the issue you raised: we know we want to do it, but we don't know how to do it. So one of the things I'm proud we're doing is committing to engage the community. We have been engaging, as you might imagine, the more-resourced parts of the community; they come to us, they talk to us, et cetera. You mentioned the publishers earlier.
They've been knocking at our door. But research institutions and societies have been asking us what's going on; they go from agency to agency, they go to OSTP, and we say, we're talking to each other, you don't have to come to all of us. One of the things we're trying to do is reach out to those minoritized communities and institutions who may be unintentionally, adversely affected by what we're doing. Part of what we're asking is: what do you want from public access and open science? What are your fears? How can we be sure to help you? And one of the things I've tried to do as an NSF official working on this is to say: NSF is undoubtedly going to make mistakes in how we implement the Nelson memo, but we're here to listen to the community. Again, if it can be evidence-based, that's great, but we're willing to take opinions as well about what we should be doing. What we hope is that what we do in 2035 is not the same as what we do in 2025, that there will be a lot of information we acquire in the ways you all are talking about, so that we're not ossified in what we start with. We're going to have to implement; we're about to release our plan, which is going through OSTP review, so we're about to make it public. But part of our plan is that we're going to continue planning alongside implementing.

So I think that engagement matters, and, talking about what suffers from the open science and metascience issues: we don't even know how to effectively engage communities. We've been engaging communities, and the National Academies has been at the forefront of how to do that, yet there's not a really well-organized literature that says what works, under what circumstances, and why. Brian, you and I share a social psychology background, and when I grew up in the field, that's what we were trying to do with problems, often effectively: we would say, let's put this information together and systematically research it so that we better understand. And I think your line of research, Dr. Nosek, not your coach Nosek, has really helped us see both the opportunities and the pitfalls of doing that systematic science.

Right. So all of you have hit on various aspects of what government funders do that could possibly be improved. You've talked about access for communities; you've talked about the productivity of the overall portfolio. So I wonder if folks have an idea of the future for metascience. What are the most fruitful opportunities you see? If we look back ten years from now, what would we want to say was a success: that it's great that NIH launched an initiative to study X, or to run an experiment on Y, or that NSF launched some sort of metascience experiment? What would we want to look back on and say, that was a good way to spend the past ten years?

Well, I think it would be a mistake if metascience stuck to studying metascience. We fund science to solve really big problems that are facing us, or to answer fundamental questions about existence, really basic stuff, and we don't know how to do that well. We had a really great presentation yesterday about chemistry funded out of China and its relative prestige compared to other chemistry studies in the world. That's the kind of work we should be doing for American science funders as well.
We're making these big, enormous investments, and the enterprise is so productive, there is so much being done, that we can hold ourselves accountable simply by cherry-picking all the good stuff that happens from a diversified portfolio and talking about all of that exciting stuff, and the community and Congress will feel like they're getting strong value for their money. But we don't talk about all the failures. We don't talk about the inefficiencies. We don't even have ways to measure them, and at the scale of operation we're talking about, you just can't do that in an anecdotal, qualitative way. So what are the tools for designing a problem-focused portfolio, where you're actually saying, we're going to try to solve this problem by this time, and what's the best way to do that? In the ALS space, we decided to make ALS a livable disease by 2030, and we're going to use legislative solutions and clinical solutions and research solutions to reach that goal. But we're really feeling our way through what the best way is; we don't seem to have a lot of models to draw from, whereas an organization like NIH or some of the other funders works at our scale hundreds of times a month, compared to our budget. So the opportunity is there, and, thanks in part to leaders like Richard, the data are there: you can look up all of the funded grants by program announcement on RePORTER. So we can start to go through and figure out: this was the right way to frame a research funding announcement, this was not; this turned out to be effective, this wasn't; here we had the right scientists and so we got great applications; here we didn't have the workforce, so maybe it's not a project solution, maybe it's a capacity-building solution. I would love to get that feedback. I think that's a very grounded approach to an enterprise at this scale.

I'd also like to caution, however, that we have a tendency to fall back on metrics, particularly various forms of citation analysis, to establish whether this or that procedure is working better or having a greater influence, just because it's easy. It's relatively easy to calculate, and there's a high correlation with other kinds of measures. Yet the behaviors scientists have to go through to raise their citation rates on that metric are mostly counterproductive: they follow a school of thought, for instance, or get collaborators to cite only their own papers, et cetera. The least publishable unit: if you have many of them, you can increase your citation rate. All of these behaviors are dysfunctional in science, and we need to make sure we have a broader set of criteria for what successful science actually achieves, such as achieving clinical goals.
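As a toy illustration of that caution, with entirely made-up numbers: a single metric such as the h-index rewards exactly these behaviors, for example slicing one strong paper into least publishable units that cite each other:

```python
# Toy illustration of how a single citation metric invites gaming,
# in the spirit of Nakamura's caution. All numbers are made up.

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

# One strong paper vs. the same 30 citations spread across five thin
# "least publishable units" boosted by mutual citation: the metric
# rewards the sliced portfolio even though the total impact is equal.
one_big_paper = [30]
sliced_units = [6, 6, 6, 6, 6]

print(h_index(one_big_paper))  # 1
print(h_index(sliced_units))   # 5
```

Any one number invites this kind of optimization, which is one argument for the broader set of success criteria being described here.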
I've thought more about this in the NSF space. What we're going to do in the metascience of open science will take a twofold approach: one strategic, so the equity work is going to be strategic, and the other use-inspired, primarily through our science funding, so we'll hear from the community. I think we'll try to do both. But I think both the pleasure and the frustration of being a scientist is that we don't quite know how to do it right to achieve the outcomes we wish. Sometimes I think these moonshots are good for galvanizing people. Cancer is still with us even though NIH has a moonshot for cancer; on the other hand, we reached the moon. So I'm not quite sure how to do it, and I will be humble about this, as we said earlier.

I'd like to add that I think it's great that in America and in Europe the funding is divided up tremendously, so that there isn't one agency on which all sciences depend for funding. There are many different individuals you can talk to and try to convince to fund your idea, and some of them would be more likely to do it. Divided responsibility for making funding decisions is a great blessing for our science.

But Richard, you raise a great point, which is that the way we measure success in various metascience studies might produce a perverse outcome, in which we inadvertently end up incentivizing some of the kinds of behaviors that we all decry around here. So what do we do about that kind of paradox? Do we need more methodological work on how to measure the impact of science in a broad, diverse way, so that we're not incentivizing the gaming of one metric?

Yeah, gaming is a real problem. Scientists are particularly good at gaming: you tell them a metric, and they'll figure out how to game it, how to maximize it on their measures. I think having many different measures and different sorts of criteria is helpful. But I don't know what a final solution is, because it's so attractive and easy to use a number which seems to correlate well with scientific reputation, and I just know that it produces lots of bad behaviors.

Any other thoughts?

I don't know. I went to dinner with some folks on Monday, open science experts, and one of them was talking about how they had to leave their house in the middle of the night because a forest fire was coming, and they didn't get the warning until the middle of the night, and the cell phones were down. If we organized our science funding not around papers but around solving a problem, like how do you predict when a fire is going to come so you can get out of the house on time and be safe, that's a real outcome, a tangible outcome. I think it speaks to trust in our institutions. Brian, when you were talking about trust in individual scientists: trust in institutions is down, at least in the United States, for everything; it's not just science. In part it's because we're talking about metrics which are relevant only to ourselves, and not really to the problems we're funded to address, or the cultural goals or the economic goals we're funded to address. If somehow we could get outside of that and get to the heart of why we're here, I think we would have a lot more success.
We would know when we have success, and our stakeholders would as well.

In terms of doing more experiments, more internal experiments at government funding agencies, I wonder if there's a way to institutionalize that practice. Around nine years ago at the Arnold Foundation, we were involved in funding a couple of folks who worked with the Social and Behavioral Sciences Team at the Obama White House, the goal of which was to do behavioral science, behavioral economics experiments within government agencies, to, for example, improve the rate at which military service members signed up for a pension, contributing to their 401(k)-style plans. One problem with doing that kind of thing from the White House is that the White House changes from time to time. So what they did was establish most of that team at the General Services Administration, GSA, where it's now known as the Office of Evaluation Sciences. And it continues to exist because it's been institutionalized and insulated from the political winds. So I wonder if that sort of practice might make sense at NIH or NSF: does there need to be an institutional team thinking about new ideas, thinking about how to measure outcomes in a way that doesn't incentivize the gaming of metrics, or that in some way ameliorates it? Do we need to institutionalize that somehow? Any reactions?

I think at places like NIH and NSF it already exists to a large extent. That is, the senior leadership of those groups is largely insulated against presidential appointments, and they work to balance the interests of either political party. As a senior executive there, I learned to speak Republican and I learned to speak Democrat, and I would know which sorts of ideas to present in front of which audiences in order to show every group that they would benefit from investment in research.

At NSF, in the last decade or so, I forget exactly when, around 2014, we stood up an evaluation component that was primarily inward-looking. I've come to recognize it's really tough to do these evaluations. We have an internal evaluation team; everybody asks them what to do; they ultimately end up doing what the director wants to know, because it's a scarce resource. I don't know about the other agencies, the VA, NIH, et cetera, but our evaluation team is pretty small and they don't have a lot of time. So I've come to be more modest and humble about what we can achieve in government with those evaluation units. On the other hand, as you say, it's necessary. We do try to make some changes. In my directorate, we wondered about the funding of dissertations, so we're trying to work with some societies, different entities, and former program officers to outsource the funding of graduate students and their dissertations. It's a trial we're doing informally, and we work with the evaluation unit on what we should be measuring. But it was easier when I was an evaluator and someone was paying me, and I could say, this is what you want to know, let's sign a contract, and at the end of two years I'll give you the metrics you want. Being in charge,
it's actually quite tricky.

I certainly agree with that. I did some of that work when I was in government, and now that I'm outside and we have a tiny budget, spending money on evaluation of our research program is really hard; why don't you just fund the research? We actually did work with RTI on an evaluation of something, and I'm really glad we did it, but it's very unusual. I do want to hear about the funding you were talking about for metascience, because some of the data are there, and it can be more effective and flexible to do that work outside the government through extramural funding, because you get a much broader and less politically driven set of evaluation questions. I'm hopeful for that. And if anyone is interested in doing that work and working with a small foundation, I would be happy to talk with you. Maryrose, I think, is here as well, from the Health Research Alliance, and she may know of other partners if you're looking to work at a smaller scale than the federal government, though our data are usually less organized.

So we've got a little over 15 minutes left, and I want to be sure to allow some audience Q&A. If you have a question, jump up. There you go; stand up.

Hi, Jeff Alexander with RTI. Thanks for the shout-out, Neil, and good to see you all as well. Coming from the evaluation community, the R&D evaluation community, one thing that has struck me, and it's why I'm here, is that we don't typically use metascience techniques in evaluation, because there seems to be a disconnect between this community and our community of evaluators. But I do think we've done some very credible multivariate evaluations of research programs that don't rely just on citation analysis; we did one for the Moore Foundation a few years ago. So I think there are really good evaluation techniques available. It is expensive. But the other barrier I run into quite a bit is, of course, the usual concerns about the privacy of proposals and confidentiality. There was a lot of hope that the Evidence Act and some of the regulations after it would help free that up, and you see at places like the Department of Labor and the Department of Education a lot of spending on evaluation and a lot of freeing up of the data. It just seems the science agencies are unusually resistant to taking the same reform steps to enable that self-evaluation. So I'm curious to hear your perspectives: is there something else that has to happen in the science agencies? Is it part of the culture that they're resistant to self-evaluation? Or are there real reforms that can happen at a systemic level to help us as evaluators do our jobs more effectively?

Well, I'll say that I think there is reluctance to do self-evaluation; however, that seems to be freeing up, and there is more willingness to do it. I think we need outside scientists to lobby for specific projects, and to find individuals within the government who are willing to cooperate or collaborate on those projects. Some people found their way to me, and there are others in the system who might be open to a discussion. I'm willing to talk to individuals who want to know some names.

Let me go over there. My name is Garen Hillar.
I'm from Erdiff, and I had a question for you. We're talking about projecting forward to what things will look like in 2035, and about flipping that statement of "transparency sucks" to "opacity sucks" by 2035. We've explored how we're going to engage with communities when we are exposing and sharing the efforts that are going on. I'm curious where your efforts are leaning in terms of how you'll engage with communities over the next ten years, so that we reach the point where it's transparency that's required for your funding not to be under threat, rather than an opacity that's protecting funding decisions. Not about NSF in particular, just from the panel's perspective: how are we navigating toward meaningful engagement with communities, so that when we are transparent, it's not "transparency sucks," it's transparency that's leading us in a direction that strengthens the work?

I'll go ahead. I would say it's not about transparency for integrity; it's really about impact, and transparency is then a way of working through with the community when we achieve what we set out to, or when we don't. That's part of integrity. But if it's not based on impact, and it's based only on transparency and publications and these process measures, which are not relevant to people in their everyday lives, I don't think we're going to get very far. We have to take that next step and get to the things people actually care about, and not just the things that we care about. Transparency is part of getting to that point; it's a necessary step, but it's not the end goal. And I think we have to keep in mind the distinction between a process outcome and an impact outcome. We've got to get the process right if we want the impact, but we have to talk about the impact to broader communities.

I think we also need to ultimately show that openness serves the scientific community, that it serves the individual scientists who are open: that they get more collaborations, more ability to talk across groups, and that we solve some of the commercial and privacy issues being raised as reasons for closed doors rather than openness. The other really major concern is the inability to replicate or reproduce studies, and I think that serves as a strong pressure for people to be more open about what they did and how they did it. So we need to show people that your science will be better if you're open.

Over here. Hi there, Terry Four with the Center for Open Science. I'm curious to know, thinking of the different components of open science, and Dr. Tomkins, you mentioned open infrastructure, open source software, libraries and other technologies, or open access to research products, whatever the case may be: in your experiences, plural, how does strategy get set within a particular agency or department in terms of which components to focus on funding? For example, is it top-down, where the director of a particular agency sets the strategy and it permeates down, or is it more of a bottom-up effort on the part of program officers?
Is there any generalizability across agencies or departments, or is it so particular to a given agency that there's no generalizability?

At NSF it works both ways. There is top-down and there is bottom-up, and what we have is a distribution. At NSF we had fairly flat funding until recently, so consider the idea that we would take money and reinvest it: the director prioritized our new TIP directorate, and that is where new monies flowed; the rest of the agency is still pretty much flat from approximately 2005 levels. So we're reluctant to take away from the scientific communities, because the scientific communities feel that what they do is important, and they don't want to see a diminution of their access to funding, which they're already feeling. So we're kind of betwixt and between. I'm not quite sure how to extricate ourselves from that problem, but I'm certainly open to recommendations.

Over here. Hi, Robert Tebow from Stanford University. Neil brought up the point several times that sometimes the questions being answered aren't the important questions to people. I think part of that issue is that, for example in clinical trials, even if there have been 50 clinical trials run on a certain topic, the average paper only cites one or two of the trials that came before. So when we're running a new study, we're not always building on what's already known. I'm wondering, within any of the funding agencies, is there any discussion about the funding agency actually identifying the next study that would be the important one for advancing outcomes that matter to the agency, and then having researchers bid to run that study, and be funded to run that study, rather than researchers coming with their own ideas? Obviously I wouldn't want all research to be done that way, but I'm wondering if there's any discussion, or any models like that being run currently.

I think NIAID was a good example, where you saw a lot of leadership from an institute director in suggesting things for the research community to pursue. But there are very different policies and directions from each of the institute directors, so I think there are all sorts of models out there. It would be helpful to have somebody look at this more systematically: how institute directors influence patterns. Some have very long track records, like Dr. Fauci, and you can look at whether certain kinds of policies and practices work better than others. Again, behavior is extremely difficult to study and very difficult to control, so this is largely speculation.

Over here. I'd like to plus-one that last question. My name is Kathryn Kaiser; I'm from the University of Alabama at Birmingham School of Public Health. Having applied to several of the agencies represented here, and to private funders, I've seen some private funders also get on the open science practices train, specifying practices for their funded researchers, but they're not very prescriptive about how you do it, or where, or what level of auditing there may be. I wonder if there's any consideration of incentivizing the adoption of open science practices through, say, an optional extra page of disclosures that doesn't take up my 12 pages.
It wouldn't take up my biosketch either, because I often delete things that I think are very important to the way I do my science, because I look at what I think I'm going to be evaluated on. So I'd like to see if that's been discussed, or how it might be taken up. Again, there's gamification that could go on, but like I said: optional, extra, free space. What do you think?

Well, I think it's a great idea. One of the things we did for preprints was make sure they were allowed in the biosketch, as a way of socializing the preprint as an output. Putting extra space in an application is fine, but that means the reviewers have to value that information.

Well, what if it were, again, part of the directed guidance: that this is an important component of how this funder may wish to decide between applications?

In light of the Nelson memo, across the federal funders there are going to be, not the traditional data management plans that NSF has used, but data management and sharing plans, identified as such. Those are not in place yet; NIH has actually been doing this a little longer, but NSF and the other federal funding agencies will do it too. And the data sharing is not all your data; it's the data associated with publications. The way we police that, as it were, at NSF is that program officers may take a look at a publication and make sure we're asking to have the DOI for the data set uploaded as part of the annual report. But we typically depend on the community when a new application comes in: did Tomkins do what he said in that previous proposal? And then you get members of the community who say, well, he didn't share that with me, I asked him to. Those kinds of small communities end up working. I don't know, we didn't evaluate it, Jeff, but it comes up often enough that we think the community reviewers, both the panelists and the ad hocs, are paying attention to it.

Also, at the Center for Scientific Review, a lot of attention is paid to reviewer time, so anything that expands the obligation of the reviewer to spend more time on an application tends not to get promoted internally. And they've been looking to streamline recently.

Over here. Ours Villa Hubert, economics, Cornell. Something that relates back to [inaudible]'s earlier point about incentives, and some of the things that have come up here. You have difficulty evaluating some of the return on investment of funding, in part because both the breadth and the time horizon on which you can evaluate are limited by the funding. Let me be more precise: you tend to fund a certain time period in which action occurs, but the outcomes are measured far beyond that originally funded period. So there might be scope to say, let's have all these preprints or postprints or other activities that come along be reportable, or even required reporting, well after the actual core funding ends. Researchers aren't going to do it for free, so that might suggest some small funding component for reporting seven or eight years out after the grant has ended, because that's where some of these long-run effects become measurable. I'm thinking here of education interventions, training components, et cetera: they don't have an immediate outcome; their outcome might be that a graduate student is now publishing somewhere else years afterwards.
Have there been any thoughts about expanding the scope of what is reportable, and when to report it?

I can speak for NSF. Yes, we're thinking about it. In fact, we're doing an internal inquiry asking this in the context of broader societal impacts, with the hypothesis, just as you're saying, that some of the policy impacts, for example, that we might anticipate don't materialize until long after the funding has stopped. In terms of being able to communicate it, since we have our public access repository, it can be uploaded: if you have some kind of research product, a paper, et cetera, you can load it even decades after your proposal has completed. We went back 20, 25 years and surveyed PIs, asking: could you tell us what, if anything, has happened in policy or other societal impacts? So it's a qualitative study; I wouldn't say it's the most rigorous, but it's a preliminary start, and we were hypothesizing that much of what happens happens long after the award itself closes.

We have less than two minutes left, so maybe we can take two more questions if you speak fast.

All right. Ian Banks from the American Enterprise Institute. Brian talked about incentive structures, and how research institutions are responsive to the incentives of government funders, and government funders are responsive to the incentive structures they are subject to, namely Congress. Can you talk about how Congress, and how we can try to leverage Congress, could shape some of those incentive structures?

Well, I think at NIH, anyway, the governance is academics who are selected to be on advisory committees, so the interests of academic institutions are very important in federal funding, maybe more so than Congress. Because, again, you're talking about a very technical issue and a broad array of topics, and it's easy to cherry-pick and talk about exciting topics, and about all the funding that happens in all of the different districts across the country when you have a diversified, decentralized portfolio. So yes, Congress can be really helpful, but I don't know that Congress has been sensitive to the kinds of impacts expected of a funding agency, other than grants allocated to specific districts. It's tricky.

One more; be quick.

All right, thanks. Matthew Lucas, with the Social Sciences and Humanities Research Council of Canada. First of all, thank you for this presentation; really interesting. The issues you've raised and the challenges resonate north of the border just as they do down here. I have a number of questions; I'll follow up with you individually after the fact. I thought it was also interesting having the funders on the stage after Brian's talk and his challenge for a new way of doing science. So, thinking about change, a simple question: if there's one thing you could change about the way funding agencies currently do their business, what would it be? The big one.

For me, it's simple. I would ask every part of a science agency to have three or four time-sensitive impact goals for real-world problems they're trying to solve.

I'm an advocate for basic science, so I like long-term goals, big issues. I think there's room for both, and NIH fortunately funds both kinds of things. And I do believe it would help to have some of those goals be time-limited.

For NSF, we have advisory boards that we work with and report back to, so we have that structure.
I don't know that it's yielding exactly what you described. But I want to mention, for those of you who may not be following this, that the Academy has been hosting a Roundtable on Aligning Incentives for Open Science, and when they talk about those alignments and how to move forward, they're grappling with exactly this. I've been listening to those conversations for four or five years now, and I can tell you we don't know the answers. Greg Tananbaum and his colleagues are working with many academic institutions to try to move this forward, but all these issues you've brought up are really, really difficult. Which means we should be leaning into them. So please, for all of our agencies and the other federal agencies, be a squeaky wheel. And as Stuart, I think, or whoever mentioned it, said: talk to your congressional delegations and get their support.

All right, with that, that's a wrap. It's time for a break. Give a hand to our panelists. Thanks, everyone.