Everyone, thanks so much. Can you hear me? Okay, you're coming through clearly. Super. Good morning or good afternoon, wherever you are, and thanks to everyone for joining this session. As we mentioned, the focus of today's session is funder perspectives on metascience. I'm just going to give a few introductory comments, since I don't want to take too much time away from our speakers and the Q&A session, and then I'll introduce each panelist and we'll take it away. I'll ask everyone to put their questions in the Q&A section, and we'll get to those after everyone has had a chance to present. So, essentially, today's session focuses on metascience from a funder perspective. Metascience raises some complicated issues for funders, especially those who principally support traditional forms of basic research such as original data generation and hypothesis-driven discovery science. When and how to make decisions to support metascience can be a challenge, especially for life science funders who focus more on translational bench research. Really, a mindset change is required for a funder to embrace metascience as original scholarship that contributes to the advancement of knowledge and addresses societal concerns. During this session we're going to hear from both public and private funders. They'll discuss when and how they've supported metascience research, what they see as the opportunities, and what they see as the challenges as we move forward with more support for metascience. We have four excellent panelists today. We'll go in the following order: first we'll hear from Arthur, then Susan, then Nick, and then Dawid, if that's okay with everyone, and we'll take questions at the end of the session. So let me introduce each of our panelists.
First, we're going to hear from Arthur Lupia. Dr. Arthur Lupia is an assistant director of the National Science Foundation and serves as head of NSF's Directorate for Social, Behavioral and Economic Sciences. He is the Gerald R. Ford Distinguished University Professor at the University of Michigan and co-chair of the National Science and Technology Council's Subcommittee on Open Science. His related public work examines the processes, principles, and factors that guide decision making and learning. His efforts clarify how people make decisions and choose what to believe when they face adverse circumstances. He is a John Simon Guggenheim Fellow and an Andrew Carnegie Fellow, and a recipient of the National Academy of Sciences Award for Initiatives in Research. He earned his bachelor's degree in economics from the University of Rochester and his social science PhD at the California Institute of Technology. We'll then move on to Susan Fitzpatrick. She's the president of the James S. McDonnell Foundation, and as president she also serves as the foundation's chief executive officer. Susan received her PhD in biochemistry and neurology from Cornell University Medical College and pursued postdoctoral training in in vivo spectroscopy studies (Susan can pronounce that better than I can) of brain metabolism and function in the Department of Molecular Biochemistry and Biophysics at Yale University. Dr. Fitzpatrick is an adjunct associate professor of neuroscience and occupational therapy at Washington University School of Medicine and teaches neuroscience in both lectures and seminars. She lectures and writes on issues concerning applications of neuroscience to clinical problems. We'll then move on to Nicholas Gibson, who is the director of human sciences at the John Templeton Foundation.
He's responsible for developing grant programs on the scientific study of religion and nonreligion, the psychology of virtues and character strengths, and the interface between spirituality and health. He has a particular interest in projects taking a cognitive approach to these areas. Dr. Gibson studied psychology and physiology at the University of Oxford and received his PhD in psychology of religion from the University of Cambridge. He was subsequently a research fellow in science and religion at Queens' College, Cambridge, where he also taught social psychology. And then we'll move on to Dawid Potgieter (I hope I pronounced that correctly), who is the director of programs in discovery science at the Templeton World Charity Foundation. He's responsible for the foundation's new initiatives in the discovery phase. This includes the Grand Challenges for Human Flourishing, a $40 million effort to support interdisciplinary scientific research on humans' cognitive, affective, social, and spiritual well-being. He develops new initiatives and grant proposals in a wide range of areas, such as the natural sciences, philosophy, and public outreach activities. In 2018 he launched the Accelerating Research on Consciousness initiative, which involves a $30 million commitment from the foundation to investigate scientific theories of consciousness through adversarial collaboration and by promoting open science practices. He also serves as head of program management and continuous improvement and leads the foundation's efforts to promote best practices of open science, and he's responsible for overseeing changes to policies and procedures and developing new grant-making practices to better support discovery science. I just realized I should probably introduce myself: I'm Erin McKeonan, the community manager for the Open Research Funders Group, of which several of the funders here are members.
And so I will be moderating today's session, and I think I will hand the floor over to Arthur. Hello, thank you. Good morning, good afternoon, or good evening, whatever applies to you. I really appreciate the opportunity to spend some time with you, and I'm very grateful for everything that's going on in the metascience conference. I've been able to attend all of it, and it's really a very important thing. Erin asked us to talk about two things. One is NSF's funding orientation, and I'm going to dispense with that quickly: we fund a lot of work in this domain, we want to fund more, but we want to do so in a particular way. So what I want to shift to now are the challenges and opportunities of doing that. The stakes of this venture are incredibly high, and what I'd ask people to do for a moment is to feel the urgency of doing science in a way that really leads, whether in the short run or the long run, to tangible, substantial improvements in people's quality of life. As we think about that, I want to show you a couple of slides. The first thing I want us to think about is gratitude. We have these opportunities to do research, to discover; it's an incredible thing. But there are people in the context of urgency. There are people around the world, most of whom it is hard for us to see when we're at academic conferences or in universities, but they are there. They have tremendous challenges. They have existential crises of a type that, again, can be hard to imagine, and many of them are looking desperately for ways to get through the day. And so there's a gap between the aspirations or needs that they have and the actions they can take. There's an incredible urgency to this. So I want to give you two things to think about as we consider how to respond to that in a metascience context. The first is something that should make you feel good, which is to think about what science has done.
Think about what science has done in the context of connecting people, in the context of health, in the context of building interpersonal relationships and institutions that help people serve one another and discover. So that's good. Now, if you want to feel challenged, think about the amazing talent in this room, and in rooms like it, across many countries and across the world. Now think about the difference between what science does and what science could do. I'd argue that difference is vast. And when we think about the urgency of the situation in which many people find themselves, it's not just a theoretical construct; it's a topic of great concern. When you work in the public sector, when you work in government, you see that straight away. So we have urgency, and I'd ask you to feel the urgency from the perspective not of the scholar but of the people we could help. The question is: what do they need from us most? I would say that the thing they need from us most is fidelity. By fidelity, I mean that when we articulate a view of the world, when we talk about how one thing might or might not cause another, fidelity is the relationship between that claim and the reality in which they live. Many of the conversations we have here are about scientific practices that sometimes degrade that fidelity. But fidelity is the key thing that people need from us, because what we do can inform their understanding of the relationship between what outcomes they need and what choices they have. Fidelity of the scientific process matters generally, because the other thing they need from us is legitimacy. Many times we find things that people don't want to see; we say things that people don't want to hear, or that are uncomfortable to hear, and yet hearing these things can help people make decisions that improve quality of life.
A bridge that helps people hear, understand, and think about things that are uncomfortable is the notion of legitimacy. I may not agree with what you're saying when I first hear it, but I have some sense of how you came to that understanding, and that gives me a bridge to thinking about how I might incorporate what you're saying into my life, into my actions, and so forth. So fidelity and legitimacy are the key things. If you're asking about the orientation of the National Science Foundation toward metascience, it's creating new opportunities for fidelity and legitimacy on concerns that matter most to people. All right, now I want to say something that might be controversial. In this context, what are the outcomes of interest? I'll put three on the screen here. One is replication and reproducibility. What I'd like to do for a moment is give the counterargument as to why this might not be the thing that we care about. I like replication and reproducibility, but let's think about fidelity and legitimacy. Replication and reproducibility, which I'll call R&R because I only have a few minutes, is most useful for evaluating intersubjective properties of a design-finding relationship, that is, evaluating the relationship between a research design and a claim that you make. R&R can be useful for improving the fidelity or accuracy of that relationship, but it's neither necessary nor sufficient for doing so. R&R isn't necessary because a researcher can get it right the first time, with a method that you can all see; that's actually how it works. And R&R isn't sufficient, because we can replicate or reproduce flawed designs. Moreover, if we replicate a single time, there are questions about generality. My point here is not to say that replication and reproducibility aren't important. What I want to say is that they are important for a higher reason.
And as we think about which R&R studies to fund, it is through the filter of which produce fidelity and legitimacy that are of the greatest public value. A similar argument could be made about the quality of publications. Publications have lots of measurable attributes; which are the ones that we care about? And I'll say the same about careers and citations. These are critical to empowering scientists to act in a particular way. But if you approach a US government agency for research funding, we care about you, but at the end of the day we have a responsibility in the US government to every person who lives in the country. And so the way that we see the request is: how can we take the amazing things that only science can do, and do them in a way that creates fidelity and legitimacy on some of the most important topics of our day? So how important are these outcomes? They are important, but from the government perspective they are important if they affect lives: quality of life, length of life, the quality of lived experience. These are the things that motivate us; these are the things that we care about. And when metascience proposals come to us that have a capacity for doing this, we are interested. The more decoupled they are from that, the harder it is to make that case, not because we're not inherently interested in other things, but because anybody who's looked at NSF knows we can only fund about one out of every five proposals that we get. So we have to make very difficult choices, and we have a fiduciary responsibility to every person in this country. And so lives are what it's about. Consider a metascience study that not just has a bridge to quality of life, but empowers people to serve one another more effectively, to improve one another's lives more effectively.
That is even more interesting to us. We can do it in the confines of basic research, it can be very abstract, but a research agenda that has the ability to empower people to better serve others, that's just even higher, because ultimately, from our perspective, science is service. It's awesome, it's wonderful, the interactions are great, the discoveries are great, but at the end of the day what justifies the existence of the National Science Foundation is that science is a form of service, a way that we serve one another. So I'm grateful for the conversations that we're having here and for the people putting this together. Gratitude is the main feeling I have about this organization and this effort. Metascience is the answer to a lot of these questions. I just think we will have the maximum impact if we think about the main dependent variable being how we better serve others. And with that, I yield. Thank you so much. If you could please put your questions for Arthur in the Q&A, we'll get to those after everyone has had a chance to speak. We'll now move on to Susan. Susan, you have the floor. Good morning, afternoon, and evening to everyone. I want to really thank my colleague from the National Science Foundation, because I think he has queued up this session just perfectly. I'm going to try to build a little bit on what he said and give a smaller-scale perspective from a funder that is not the size of the National Science Foundation. So JSMF, the James S. McDonnell Foundation, is 70 years old; much like the National Science Foundation, we were founded in the same year, 1950. And even though NSF is a government funder, unlike the foundation, which is a private funder, we are both embedded in a history of philanthropic support for science that predates the government getting into this.
And we really tried to take an international approach, and the way we looked at our questions was by using what we call foundation-initiated funding schemes: rather than just sitting and waiting for stuff to come in over the transom, you would go out and identify potential areas of funding. The foundation in particular was often looking for projects that would question assumptions (everyone knows X, but actually, when you push on it, nobody knows X, or the evidence for X is a little sketchy) or that would question dogma and orthodoxy, so that we could go back and revisit questions and ask: are we actually certain that we know what we know? Our mission, as stated by Mr. McDonnell, our founder, is to improve the quality of life, and we have done that primarily through knowledge acquisition and its responsible application. This was an extremely important component of Mr. McDonnell's vision: that knowledge has great power both for good and for evil. And so responsible application means that we have to know what we're doing, we have to be certain, and we have to be willing to be humble about our own knowledge. I think that has been an incredibly important part of what we do.
And as part of that, the foundation has always supported not just fundamental empirical or discovery research, but has often had components of our programs that would support the research of philosophers, historians, and science and technology studies researchers. These were embedded in the programs, so there were not dedicated streams of support for this; they were considered integral to the overall goal of the funding initiative. We did not want to make big distinctions between someone doing empirical research, standing in the field, and someone doing what I guess the term would now be metascience research: people doing deep analyses, people taking retrospective and prospective looks at a field, people really trying to review and aggregate information, and people questioning the practices by which science works. That's something that we have made core, and hopefully will always continue to make core. But now I want to step away, like Skip did, and challenge the field a little bit. This is one of the beauties of having been at the foundation for 28 years: I'm speaking as somebody who's seen things come around and go around multiple times. My own experience with what happened in fields like science and technology studies, the field of brain, mind, and education, and an NSF initiative that I was quite involved in as an active provocateur, the science of science policy, is that these fields have a tendency to fall into the trap of becoming their own unique academic disciplines.
And so suddenly they develop a jargon, journals, their own societies and meetings, and they're preaching and talking to one another, and they lose touch with the very practitioner community that they need to be actively engaged with if they're going to have the kind of impact and influence that they want to have, for the goals that I think Skip did a beautiful job laying out. What we actually want is knowledge that is useful, that has fidelity with the world, and that can be put to service. If you're developing a very insular view of your field, where you are only writing and discussing and talking among yourselves, and now getting into little internecine warfares about whose five-step identification of the problem looks different from someone else's six-step identification of the problem, and actually you're using different words for the same thing and the same words for very different things, and back-and-forth argument papers get generated, then what's the service? Where's the point? So I do have to commend the work of the Center for Open Science, because I think they have really worked to avoid this and have remained very in touch, very connected, very concerned, really maintaining a two-way relationship between the practitioner community that you want to be interacting with and the metascience community, because in reality you are scholars contributing to the same goal. I think remaining connected is extremely important for changing this culture. By remaining embedded in the practitioner culture, you also have a better likelihood of bringing about this change, which is a value that I think many of us share, and I'm sure most of the individuals who are dedicating their careers to doing this kind of work really care about this.
I'm going to finish up by talking about something else, because, unfortunately, one of our panelists who was originally supposed to be here, Maryrose Franko from the Health Research Alliance, is not able to join us, so I'm stepping in for her. I want to put on my little HRA hat for a minute and talk about biomedical research, which is a huge, sprawling enterprise, highly resourced, made of incredible numbers of interacting parts. Even though it has a tradition of meta-analysis and comprehensive reviews, it still sees the idea of doing meta-research as somehow other than the all-important work of generating original data, even if that original data (and I'm going to steal from Skip here) is unlikely to have any fidelity to the natural world or any legitimacy in terms of addressing the needs of patient and health communities. So just as the metascience community is reaching out and working closely with practitioners, I think there is a real educational effort that needs to go on with the funding community, and particularly with the group of funders who are disease-specific and who represent disease advocacy communities, about why generating this kind of metascience knowledge is exactly in their lane, because what they care about is knowledge for use. We have to have more confidence in the kind of work that is coming out of preclinical and early clinical studies, so that we don't continually fail at late-stage clinical investigations and then in rolling this out to try to do the things that we want to do. So I would encourage this community to focus on those bigger goals that launched this metascience effort and engaged so many people, and to work hard to avoid devolving into a closed shop in which you're mainly writing and talking to one another.
I also threw into the chat a recent initiative that the Health Research Alliance put on, called Reimagining the Biomedical Research Enterprise for a Healthy Future. This was an international essay contest: we asked people to write in their ideas about how we could create a more inclusive, responsive biomedical community. Yesterday was the culminating event, at which we heard from the two winners of the essay contest and some of the honorable mentions. That's all available at the link I put into the chat, and I'd really encourage this community in particular to take a look, because there were some really wonderful ideas, including using preregistration for clinical trials, which I think can have a giant impact. So thank you, thank you very much. And thanks for mentioning that essay contest; just to second that, there are some great essays in there, so I'd recommend folks check it out. All right, we will move now to Nick. Thank you very much, Erin, and thank you, Susan and Skip, for those really good introductory remarks for this session. I am trying something that I haven't done live before, which is to queue up a PowerPoint deck as a virtual background, so we're going to see if this works or not. I'm recompiling my slides... here we are. Hopefully you can see me and see that slide. What I'd like to do in the time that I have is to ground these thoughts in the practical realities of obtaining funding for metascience, in this case from a private funder. So I start by agreeing with the perspective of Arthur and Susan on this endeavor of metascience, but I also want to think about why a given funder does or does not fund metascience, and why a given funder might do so but think of it in somewhat different terms.
What I want to do is help prepare you to pitch your metascience proposals, in this case to the John Templeton Foundation, in a way that makes them more likely to get supported by us. To do that, I want to briefly introduce you to the founder of the John Templeton Foundation, Sir John Templeton: a Wall Street financier, the Warren Buffett of his day, who died in 2008. He was a global value investor who went out looking for undervalued stocks around the world, bought when others were selling, and sold when others were buying. And he wanted to take this approach to the generation of knowledge, so he founded three foundations: the John Templeton Foundation, the Templeton World Charity Foundation, and (let's see, there it is) the Templeton Religion Trust. Two of them are located in the Bahamas, and we will hear from one of those in a moment; one is located just outside Philadelphia in Pennsylvania, USA. But all three have a very similar mission and a very similar charter. There's a broad array of goals; you can view our individual websites and ask whether all these goals are the same. In fact, different foundations might choose to prioritize different aspects of that mission at any one time, but we're all bound by the charters that we have, in which Sir John said: I want you to fund these sorts of questions, and to do so enduringly; these questions will lead to other similar but even deeper questions, and you should carry on doing that, even though it'll be hard to find answers to some of them. What he didn't want was for his foundations to be moved around quickly by whatever the current fad or urgent priority of the day was. Instead, he had a long-term outlook, informed by his investment strategy: how do you prevent future poverty, how do you solve the problems of tomorrow?
So, when we're thinking about whether to fund a project that is on metascience or metascience-adjacent in some way, we're trying to see how it fits into our broader mission. We have an open submission call, once a year, for letters of intent, or online funding inquiries as we call them. We received well over 2,000 (2,400 or so) back in August, so we're busy reviewing those right now. We will reject more than 90% of those, invite the remainder to submit a full proposal, and hope to fund about half to two thirds of those. So when do we decide what to invite, and why? I'm going to give you three quotes from three of Sir John's books. The reason I'm doing that is because within his charter he said: I can't begin to describe all the things that I want the foundations to fund, except that I've written about them in these books. There are seven books named in the charter; I'm going to give you quotes from three of them, and five of those seven books talk about the self-correcting nature of science. Here's one from his 1994 book, Is God the Only Reality? He describes science this way: even though science is a social product, the social factors are limited by the unique corrective character of scientific activity; the continuous filtering and sifting that go on in the course of experimental collaboration and scientific interaction and publication lead to a progressive elimination of distortion. That's how science should work. We can ask whether it does work that way, but he certainly had this view of science as self-correcting, as a good way of answering the sort of big questions he had. Here's another quote, from his 2000 book, Possibilities.
And you'll see, I didn't read the full title there; actually it's Possibilities for Over One Hundredfold More Spiritual Information: a particular way of thinking about the world and the universe that was often couched in the language of spirituality, yet throughout it an affirmation of science as the way to answer those questions. So here's the quote from Possibilities: as part of the historical legacy of the scientific method, most scientists have learned to avoid the stagnation that comes from accepting a fixed perspective. As a community of inquiry focused on the process of research, they have learned to become epistemologically open-minded, always seeking to discover new insights and new perspectives. When their concepts break down, they devise new hypotheses and test them. They challenge old assumptions, competing with each other in creative professional rivalry. I don't know if Sir John ever read a review or two; I'm not sure this necessarily reflects reality, but again, it is this idealistic picture of how science ought to work. It also takes a long view on scientific progress, one, as Susan was reflecting, that is informed by the history and philosophy of science over not just years but decades and centuries. He looked back. He wasn't a bench scientist himself; he was an investor, but he talked with and read a lot of historians and philosophers of science, and that gave him this long view. These foundations are not spend-down, sunset foundations; they are there in perpetuity, paying out 5% of our endowment each year. So even if progress is happening just one funeral at a time, progress is still happening. But what if there were a way to speed progress up, to accelerate progress so that we don't have to wait for those funerals? Maybe it can happen within the lifetime of scientists; maybe scientists can change their minds. And this represents another theme within his thinking: that of humility.
So this is from an earlier book, The Humble Approach, from 1981. Here he's actually quoting Vannevar Bush, the chief architect of what became the National Science Foundation. Sir John begins: being humbled before science is a good first step toward the humility we should have before God. And then he quotes Vannevar Bush: science renders us humble and paints a universe in which the mysteries become highlighted, in which constraints on imagination and speculation have been removed, and which becomes ever more awe-inspiring as we gaze on the essential and central core of faith. The silence will be the silence of humility, not the silence of disdain. So this gives you some insight, perhaps, into one of Sir John's central goals, which was to discover more about the nature of God; science was a critical pathway to doing that. So perhaps, as people think about mysteries, big questions, the unknowns in the universe, this could make scientists more humble, give them more awe, and maybe make them willing to change their minds. Let's bring this back to the sort of work that you are trying to do. How does our mission as the John Templeton Foundation relate to what you are trying to do within this community of metascience? We share this goal of trying to accelerate scientific progress, of thinking about the self-correcting nature of science; we're interested in the nature of science itself and of scientific practice; and we're interested in this idea of intellectual humility. I will unpack that a little bit, but I wanted to give you examples of some of the sorts of grants that we have made on each of these three themes. First of all is our support for the Center for Open Science. We began supporting Brian and his excellent team back in 2014.
And part of that has been to support the build-out and improvement of the Open Science Framework, following on from what was started by some other funders, but also some metascientific work, looking for example at how to evaluate the impact of preregistration. Our motivation here was to achieve these goals of rigor and transparency, of reproducibility, in the work that we're funding across the board. If we want our grantees to produce science that gets us closer to truth, then we need to make sure they have the infrastructure necessary to do that work, and we saw the Open Science Framework as a critical piece of that. Another example, a very different sort of example on the nature of science, is our support for the University of Chicago Knowledge Lab, and especially their Metaknowledge Research Network, led by James Evans. I'm going to put the grant ID numbers here so that you can explore a little more on our website, if you like. Another science-of-science kind of project is a grant to Barabási and Sinatra. This is thinking a little bit about genius within scientists, taking a metascience approach to which scientists have influence and produce enduring and transformational knowledge; Barabási and Sinatra take a scale-free networks approach to that sort of problem. I mention this as well just to demonstrate how some specific areas of Sir John's interests can show up. He had a particular bucket of funding on genius and genetics (I wonder how those two might go together; we generally break them apart in terms of how we think about those buckets), and in some of our support for work on genius, this metascientific approach shows up. The final set of examples is about intellectual humility. Here are our four either active or almost-active grants looking at the role of intellectual humility in the practice of science.
It so happens that all four of these are to psychologists, but there's nothing in our charter that says it has to be that way. The first grant here is to a team including Michael Inzlicht and Alexa Tullett, looking at when scientists update their beliefs in response to new data: what does it take for a scientist to change his or her mind, and does intellectual humility play a role in that? This is one where the idea came to us, but absent the intellectual humility piece, and so we talked with the applicants about whether they would be willing to add in measures of intellectual humility as part of their design, so that we could connect it better with our particular interests. The same is true for the second grant, to Bethany Teachman and Charlie Ebersole, looking at whether pre-registration, the act of pre-registration itself, is an intervention that might increase intellectual humility. If you have to put your cards on the table and say, here's what my theory predicts, and it might not turn out that way, is that itself humbling, in a useful way? Kim Rios is studying scientists who study religion, and studying scientists themselves: what do they think about other scientists who study religion, and does that relate to their evaluation of its rigor? And then finally, this isn't quite formally announced, but I know it's Twitter knowledge at least: support for the Psychological Science Accelerator, to expand what they can do, but within that also to look at how intellectual humility might relate to the sort of intuitions scientists have about the cross-cultural generalizability of particular psychological phenomena.
Well, I want to close by thinking about the sorts of things we would be willing to support in the future, and to summarize some of what we have done in terms of bigger questions we could be interested to support. Here are three; I have four more on a final slide. How and when do scientists change their minds? What does it take for a theory to die, rather than just be resistant to death? What is the role of intellectual humility in scientific progress? Here's a specific example of a question about how science yields truths: people think of simplicity as a key feature of a good theory, but is that actually how it works out? Do scientists really evaluate their theories on the basis of simplicity? Those would be things we could be interested in. A few more, specifically from the perspective of a funder: we try to be reflective in our grant making and to evaluate what we're doing and why we as funders are doing it. We have lots to learn from our grantee community and from other funders, including those on this call, but to do some of the things we're trying to do, there are questions we don't have good answers to. If we want to direct scholarly attention to some neglected topics, how can we effectively do that? Why are some topics perennially neglected even when they are central parts of the human condition? What does it take to get scientific attention on those topics? How can we as funders help working scientists use good measures and good theories, rather than just whatever is popular? We all know those measures that just won't die: well, we're using it, we know it's not the best measure, but everyone else has used it and we want our data to be complementary, even though no one will actually put it all into one data set later.
So how do we make the break to the better version of the measure, or to the completely new measure that is actually more rigorous? We heard from Susan about interdisciplinarity, and that's a key interest of ours as well: how can we help break down some of those silos between different disciplines? And finally, from an evaluation perspective, how can we increase the return on the investment we give? Those are some broad remarks. I'll just leave you with a couple of little quotes from Sir John about humility being a gateway to discovery, and about it being okay to make mistakes along the way, and maybe even really helpful. Thank you. I'll turn it back to you, Aaron. Thank you so much. I think it's especially helpful to see those kinds of concrete examples of things a funder might look to support in meta science, so thank you for that. All right. Again, just reminding folks: if you have questions, put them in the Q&A. And we will move on to David. David, you have the floor. Hi, I'm David. Thanks so much, Aaron, for moderating this discussion. I'm going to start off by sharing my screen; hopefully that is clear. Thanks for the introduction, Aaron. I won't say much more, except that the beginning of my life at a funding organization was actually to be hired as the very first program officer when Templeton World Charity Foundation started scaling up its grant-making activity. So my perspective here as a funder is one of learning about the whole process, from strategy to grant administration, and also one of making almost every imaginable mistake along the way. With that in mind, I hope to share some lessons that I think will be useful, maybe even a few points that could be counterintuitive.
So let's start here with some flow diagrams, very oversimplified, but please bear with me, because I think it's fun and there might be something to be said here. To the left here we can think of, at a high level, what funders do: they come up with strategy, often by surveying the field and commissioning reports of various kinds; they invite proposals to execute that strategy; and they review the proposals and award grants to a select few researchers. Here in the middle we can think of scientists: they find resources that let them conduct research, they conduct the research, and then they report their findings. And to the right we can think of publishers: they review the reports submitted to them, they curate the approved ones with topics and keywords that make them easy to find, and then they disseminate those reports as widely as possible. There are two things to say about this diagram. The first is that it's vastly oversimplified. But it contains enough detail for the second thing I want to say, which is, I think, a bit of an uncomfortable thing to have to deal with; bear with me while I try to explain it. When I started as a scientist, and for most of my career as a funder, I heard people talk about scientific discovery in terms of innovation, and indeed that's what we've heard so far, and that is what we do: the scientific endeavor is one of trying to serve a broader community by discovering and developing new innovations that can be applied in practical ways. And that's great. But when we talk about research leading to new innovations, or turning innovations into tools or interventions to help people, we should ask the question: where does this innovation come from?
And I know what I'm about to say is pretty unscientific, but there seems to be a general expectation that innovation comes from this middle box, here in the center of the page: it comes from scientific research, new methods, new tools, and the new discoveries coming from those. If this is even remotely true, then to me it seems like a bit of a missed opportunity. It might seem obvious, but it's a bit odd that one would only have innovation in one particular box. If we look at other industries, one can see innovation at multiple levels. Toyota in the 1980s made a name for itself with its kaizen philosophy of continuous improvement: innovation at every stage of the manufacturing cycle. Now, I'm aware that this is a science conference and I'm not showing you scientific data, but as an anecdote in support of this innovation point: in 1996, when business strategy was becoming a big thing, Michael Porter wrote a famous article called "What Is Strategy?" in the Harvard Business Review. Even there, an article about strategy led with a very substantial section on operational effectiveness, and this kind of kaizen approach, pioneered by Japanese companies innovating at every part of the production process, was mentioned as a necessary feature for an organization to be successful. As a few other examples: Netflix owes some of its success to the filtering algorithm that predicts what you might like to watch, and that innovation came from a crowdsourcing mechanism that offered a million dollars to the inventor of the best solution. Or look at other very successful companies in the world, whether measured by profit or by public benefit.
It's really hard to attribute their success to factors that are not tied to innovation at different levels: innovation in marketing, innovative technologies and products, innovative revenue models, and so on. So, coming back to this slide, one question I often ask myself is: should we be content to strive for innovation only in the middle box? I think it's hard not to ask for more. It's probably a bit of an awkward position to take in this conversation, but it's tempting to dream of what broader innovation could bring us in these other boxes. What if we could cut down grant application times by 50%? What if we could do the same for publication timelines? What if we were better at recognizing and funding the most promising risky projects? What if we were better at making sure results reached the people who needed them most? I think it's very satisfying to strive for those ideals, and I think meta science can help us do that to some extent; for example, recent findings have helped us understand that funders and institutions could change incentives to promote more rigor. That's a really cool thing to imagine meta science bringing to this field. Now, I should stop for a moment and acknowledge that meta science is not going to be the only source of innovation. For example, journals and funders have increased their online presence to reach more people with proposals and publications. As COVID-19 ramped up, we saw some funders launch rapid-response grants and some scientific reports undergo expedited review.
So, putting that to one side, I think it is easy to see, at least from my perspective as a funder, that findings in meta science can really help guide us to new innovations in the scientific discovery process, and that makes me excited. I think it can help us improve some of the other boxes on this slide, so that the overall outcome for human flourishing is better than it otherwise would have been. That's part of my perspective as a funder and why I'm interested in the field. In a nutshell: one can imagine using science to improve science, so that better science can improve everything else. I'm going to stop the screen share here and just talk about a couple of other points, because there are two perspectives that I think should be presented, almost as boundary conditions and maybe a word of caution for some; they align very well with what the other funders have said so far. The first perspective is on the difference between what we can imagine and what we can execute. I'm very lucky to be working for a foundation that is very quick to adopt new solutions. We've tried numerous novel grant mechanisms: we were able to quickly sign on to Plan S and DORA, we launched a new structured adversarial collaboration program in 2018, we supported a subgrant competition tied to Registered Reports, and we've used pre-registrations, open reviews, and a range of other things. But it's extremely difficult to do this as a funder, even as a private funder with relatively little bureaucracy; it's very, very hard. I'd love to tell you that it's just a matter of us getting our act together and that we'll do it soon, but I think the truth is much more complicated than that. When I look at a batch of proposals, the number of ideas that deserve funding always exceeds the resources available. And in such cases, our decisions are not about funding or not funding a project.
It's about taking resources from one opportunity in order to fund another. And behind every application is a real person with a strong team of real people, and they have friends and families and bills to pay. With that in mind, my desire to innovate has to be balanced with a responsibility to be fair and respectful and cautious, because even small mistakes can be devastating. For that reason, and it seems a bit counterintuitive, sometimes even if an innovation seems really great, we can't necessarily just go and try it; we have to look for opportunities to do so safely. New practices can take time to establish: they have to be reviewed by committees, tried on a small scale, and then maybe rolled out more broadly. We're also aware that a practice or policy that benefits researchers in one context may have either no effect or the opposite effect in another, and that could be problematic. So that's all to say that I greatly appreciate the value of meta science, but innovation in this field really does take time. Based on that, I'd like to offer a second caveat, which is that sometimes the pace of innovation is constrained by practical limitations, and not by the rate of meta science discoveries. If it existed, I would have loved to cite a meta science study on how much meta science would be needed to match the pace of innovation. That's probably just overactive imagination, but if it exists, please DM me. In the absence of knowing exactly how much research we need in order to improve the research process, or how quickly we can innovate, we have to guess, make educated guesses, iterate, and have conversations like these to think about it carefully. But ultimately, funders can't allocate more resources to meta science without first taking those resources from other areas. It is a zero-sum game in many cases.
And those decisions become very hard when the other areas relate to cancer or other devastating diseases. So I absolutely agree with everyone here that more funding is needed for meta science; that's why we funded this conference and a number of other projects. But the point here is that we shouldn't despair if progress seems slow, and that there would be little use in having meta science discoveries outpace their implementation. I think there are practical limitations to how fast the field can move, and by the way, I would love to be proven wrong; it's just my perspective. If that's true, then this is partly a matter of recognizing that meta science activities may be limited by the resources available to us and by the rate at which we can implement best practices. But the good news is that this is also an encouragement to the research community to translate new discoveries into concrete practices that benefit the community. So for some of the people on this call who do the research: the way you translate your findings, and the pace at which those findings can be adopted as new innovations that benefit the community, could really help bring funders and others along. To conclude, from my perspective as a funder, I hugely appreciate that we need to innovate in ways that facilitate the scientific discovery process. Some of these innovations can be informed by meta science research, a lot of them in my view. But even the most promising ideas can be difficult to implement, and the momentum in the field will likely depend on the extent to which it can be translated into practical applications. That said, I'm super excited about this field and proud to be working for a foundation that takes it very seriously.
And I'm really looking forward to doing more work in this area and discussing these topics further with you. Thanks again for giving me time to share my perspective; I'll now hand it back to Aaron. Thanks, David. That was great, and thanks again to all our panelists for their contributions. We're going to move on to the Q&A; we have about 20 minutes. There are a couple of questions in the Q&A already, and I encourage folks to add to them. I'm going to go in order here; I think Susan had already answered one question. So, Jeffrey Mogul says: Susan points out that meta science would be more successful if it integrates into the disciplines it's trying to change, rather than just becoming a new siloed discipline of its own. But this integration, he says, is difficult, because regular working scientists feel attacked or threatened by people attempting to change the rules or the incentives that have served at least some people well. So: is there a solution to this problem? Is this a situation where, instead of trying to get buy-in from below, the better strategy would be to advocate for new mandates from above? I'll go to you first, Susan, and then other folks can chime in as well. Yeah, thanks, Jeff, for that question. It's an interesting point: how do you change culture? Do you first attack structures, or do you try to change hearts and minds? I think the answer, with any dichotomy, is both, right? To some extent it's about the tone with which meta science research is presented, and I have found that it does not often use this sort of gotcha approach; it has been quite sensitive to the idea that we are working on a shared interest.
So a scientist shouldn't feel attacked for being told that the techniques or approaches they are using are not getting them what they want. After all, what they want is to gather and generate knowledge that is true to the natural world; it doesn't have to be immediately useful, but it could suggest itself to be useful in the future. If you can improve the quality of the information you're generating by being reflective about your own practices, why would you not want to do that? So yes, there can be top-down incentives to help support and create a culture where this kind of interaction continually goes on. And then there will be this bottom-up movement, and you're already seeing a lot of bottom-up; I think particularly young scientists coming into the field are really embracing these kinds of interactive approaches between meta science and empirical science. It's also going to be very field-specific or generation-specific, but I'm very hopeful that these changes can occur both by winning hearts and minds and through structural changes. Great, thanks. Does anyone else want to weigh in on that question? Yeah, Arthur. Yeah, thanks. I worry; so I think it's a great question. Mandates are strange for a couple of reasons. Even though we may share a fundamental set of values about what science can do, people study phenomena at different levels of understanding, with different levels of theory and things of that nature. So one-size-fits-all standards tend not to be well received and tend not to work very well. But when communities share a set of values about how they come to understand things, you can work from there.
We've tried a different approach at NSF, and it's really to focus on the outcome: practices in the service of an outcome. Things we've been doing at NSF: we require data management plans, and we require people to make publications and research assets available unless there's an ethical challenge in doing so. But one thing we haven't done, and I think we'll never do, is say you must replicate, or you must have a certain number of replications, because that doesn't make sense. There are two moves we've made recently. First, we put out a Dear Colleague Letter, that's how we communicate with everybody, about broader impacts; I'll put it in the chat. What we've asked people to do in broader impacts is be a lot more specific about the relationship between the research they're doing and how it's going to improve people's quality of life. And if you really want to deliver the mail on that, you have to talk about the things we're talking about here. The other thing we've done, which some people on the call may think was controversial, but I think is in the same direction, is in the Science of Science program. I'll just tell you, personally, when I arrived at NSF, I loved what the Science of Science program could be. But in what we had funded over 10 to 15 years, I saw a lot of really good work, and also the dynamic that Susan Fitzpatrick talked about: a lot of the work was very influential within the field but not really penetrating out. So we repositioned that program to focus on discovery, communication, and impact, and now we require people in that context to really build the bridge between the work they're doing and the impact it can have. So pointing people to the outcomes, to the people, to quality of life,
I think, is a way to respect community standards and the diversity of the types of things people do, while motivating folks to create greater fidelity. Thank you so much. Nick or David, anything to add on that one? So, the next question we have is from Brian Nosek. He notes that a feature of some areas of medicine is the social coordination of many researchers toward investigating a bigger problem than they could tackle individually. This can create challenges for obtaining funding support, because the collaboration is decentralized and projects may not fit standard funding mechanisms. Referencing Nick's announcement of Templeton supporting the Psychological Science Accelerator, which suggests that gaining support for projects like these is not impossible, he asks: what are the key concerns that grassroots projects like these need to address to become competitive for earning financial support? Maybe we could start with Nick on that question. Thanks, Aaron, and thanks for this good question, Brian. There are more answers to this than there are funders, because each funder hopefully has more than one way of doing this, but I'll give a partial answer from our perspective. In general, for us, a necessary but not sufficient condition is that the project in some way relates to a core theme of interest to us. And there are a lot of those: it could be innovation in general and understanding the history of innovation; it could be something around a topical area like religion or character; and there are other examples, like the genius example I gave earlier. So in some way link it to one of those, perhaps using the study of that area as a case study for the broader metascientific aims the applicants are pursuing, while also helping us make progress on that specific topic. I'll give one other example.
And that's the start of the Developing Belief Network. This is initially a $10 million initiative bringing together developmental psychologists across field sites around the world to do, effectively, study swaps, protocol swaps, and to develop a common suite of measures for basic things like theory of mind that could be used to do good cross-cultural work across these field sites. Now, why are we interested in that? Because we're interested in how children learn about invisible things that they say they believe in, like gods or ghosts or ancestor spirits, or what have you. But other people ought to be interested in that as well: how do children learn about other invisible things, like electrons or germs? Probably there are some similar learning mechanisms involved. So we've been able to launch the start of this platform to build the team science infrastructure. But we have constraints on our renewal and follow-up rules, and it really deserves to go on at least another five years beyond what we've started. So we absolutely hope other funders will come to the table, whether mainstream science funders, federal or private, who care about science education, for example. We're concerned to help design the project so there are on-ramps for other funders who maybe don't care about the religion piece, but can see that this project advances a broader goal as well. So: advance a bigger goal, but also make sure you're advancing one of our themes. That's my partial answer. Okay, thanks, Nick. Anyone else want to weigh in there? Yeah, Susan. Thanks, Brian, for that question. I'd like to turn it on its head a little bit. There's this idea that there are these large communities of researchers who want to do this kind of research and are having difficulty getting support.
We actually found the opposite in our experience: when we try to bring together a community of researchers, their own, often parochial, interests tend to carry the day. They're very interested in their own project, but they have a difficult time seeing how it fits into a larger program. So they may come to a meeting or be part of a community of practice, and they want to share what they're doing and have it improved by input from the group; but the idea that they are contributing to a collective, accumulating body of knowledge that is going to make everyone's work better is a little more difficult. We have found this to be the case even when the foundation is funding the work and therefore providing the incentives, because we're often working against other incentive and reward structures, particularly within academic science, which really rewards hair-splitting rather than collective knowledge formation. So I think it's a bigger problem that we probably need to work on at multiple levels, building multiple approaches and senses of trust and community; to echo David, it might take some innovative approaches that we actually don't have in place right now. But I think it's actually a bigger issue than what Brian brought up. Thanks, Susan. Yeah. On this question I just want to talk about a recent change at NSF, because I think a lot of people know that NSF has not traditionally funded very large proposals in this domain, or in the social sciences generally, and I assume a lot of people here are from the social sciences. But that has changed. The culture has changed inside NSF, where I think there's a real understanding now of what large-scale social science looks like and can be.
So, within a couple of weeks we're going to be announcing two grants. One is already sort of public: a $30 million grant, a single grant, that will improve the findability of a lot of the types of data that people here are looking at. In a couple of weeks we'll be announcing a second, $15 million grant that does a similar thing in the context of social media data, increasing access while securing privacy and things of that nature. The key to all of these things is what Susan said: you've got to come up with a big, general vision of the service you can provide to lots of scholars; you have to think of it like infrastructure, as opposed to a single research project. You've also got to be able to tie it to a very big societal concern. And I'll just say, if people are thinking about this NSF lane: beyond the change I've just talked about, and speaking to Alejandra's question about bringing together higher education organizations, government organizations, and the private sector, NSF has just announced a new directorate called TIP, Technology, Innovation and Partnerships. There will be a new funding lane there for that type of thing. So if people in the meta science context can find the intersection between infrastructure that supports a wide set of activity, meta science goals, and partners from these other contexts, there are great opportunities there. Okay, thanks so much, Arthur. That's actually a good segue, so maybe we could go to Alejandra's question: do other folks want to weigh in on this idea of bringing together higher education institutions, government organizations, and the private sector as a collaborative group that could support research in these areas? Anyone want to comment on that? Yeah, Susan.
I think this is actually something that you should comment on, because it speaks directly to the Open Research Funders Group and the roundtable at the National Academies of Sciences; I think this is your question. Yeah, so I'll just say briefly: Susan is referring to a roundtable on aligning incentives that the Open Research Funders Group is collaborating on with the National Academies. The idea is precisely this: we have so many stakeholders in this space, private funders, public funders, universities, private research institutions, and what we really need is to have all of these folks pointing in the same direction and aligning their messages. What can we do at an institutional level to improve incentives? What can we do at a funder level to coordinate that and also improve incentives? So we are looking at that and trying to bring stakeholders from all these different areas to the same table, discussing these issues and really finding out where we can get each actor to align these incentives and move together as a coordinated group. It's not easy, but it's definitely something we're looking at doing, and I hope other folks are also looking at these ways to come together, so that we're not working in separate silos but really moving forward together. I'll put a link to the roundtable in the chat if folks want more information. There's also a related question here about funders, from an anonymous attendee: how much should funders be coordinating their efforts to better serve society, instead of the competition that often occurs? Does anyone want to comment on that, particularly coordination among funders? Yeah, Susan.
So I think there is probably more coordination between funders than applicants might know. I mean, you've heard that there are many of us here involved in this roundtable. We are part of the Open Research Funders Group. The Health Research Alliance is an alliance of tens, if not over a hundred, disease-specific research funders who share knowledge and best practices. The James S. McDonnell Foundation is part of something called the Brain Tumor Funders' Collaborative, where private funders in brain tumor research, which is a relatively small field, are all coordinating our efforts. So I think there's actually more of this kind of sharing of information that now goes on.

But there's also something we have to think about, and I've been thinking about this a lot. This is often couched in the idea of efficiency, right? If we just pooled our resources, and people could submit one proposal to everybody, and then we could parcel these things out, wouldn't that be much easier? But that gets to who makes those decisions. Something that the James S. McDonnell Foundation may find interesting may not necessarily be of interest to NSF, or may not wind up being a priority for the Templeton Foundation, even though on some meta level we have a shared or common interest we're bringing to it. That, I think, is both the beauty and the strength of private philanthropy, as well as maybe a weakness, I don't know. But we each bring our own idiosyncratic point of view to this, which I think means you get a greater diversity of research being funded than you would through a central process. One of the things we always have to be worried about is a dominant orthodoxy, because it's not just: oh, when the orthodoxy turns out not to be true, we'll just do something else.
We built an entire infrastructure around that orthodoxy: tools, training, research approaches. Everything has been focused on that. So it's not like you can just pivot very quickly and say, well, here's an alternative hypothesis we can pursue. I think you have to keep alternative hypotheses alive in some way, and alternative approaches alive, and the best way to do that is through a very diverse and distributed funding mechanism. So it's another one of these balances between two dichotomies, where the best approach is finding that angle of repose, to some extent, of where it might work best. That's the private perspective; I'm sure my colleagues might have different ideas.

Yeah, go ahead. Just very briefly, I think this comes down to the difference between what we can imagine and what we can implement. Getting such a large group of people to work together is incredibly difficult and very expensive. So even if we can imagine doing it, there will probably also be wastefulness, and one then has to ask whether it's worthwhile. But more than that, it might just be impossible to get so many funders to collaborate together.

Right. Arthur, did you want to comment? From the government perspective, I wanted to agree with everything Susan said in terms of the different approaches. It really matters, and it is the institutionalization of an academic enterprise: it is so hard to change, so hard to challenge. To move forward, to advance, in so many cases we really need to break down walls. One of the things I'm so grateful for in the public-private mix is that, in some ways, NSF is pretty restricted.
We can only fund basic or use-inspired research; except where there's a specific allocation, we're statutorily restricted from funding applied research. The flip side of that is we can fund things on a five-, ten-, or twenty-year time horizon, and I think that's really hard for a lot of private philanthropies to do. So there's the mix: we have these constraints and opportunities, and private philanthropies have these amazing opportunities and networks and things like that. The union of our skill sets, the union of our opportunities, is so much greater than what any one of us has. To really push things forward, to meet the goals we've all been talking about, it's the real diversity of our approaches and outlooks that is the best hope for driving this thing forward.

Great, and I think that's probably a good place to stop; we're right at time. Susan, did you just want to make a quick comment? I just wanted to say very quickly that there was a question earlier about neglected areas of research. I think this is another place where private funders can often spotlight those areas, by taking these smaller initiatives. It's very difficult for someplace like NSF to spotlight an area of research because of the responsibilities it has to the broader enterprise, but we can do that.

Great, thank you. So yeah, we are right at time, so I want to thank everyone for an excellent panel and thank all the participants for their questions. A digital round of applause, and I will hand things over to our organizers. Thank you so much.