Okay. Well, hi everybody. This is a panel discussion on something that we're calling big team science. All science is collaborative, but big team science takes this to an extreme. The idea is that you have a very wide collaboration across many, many people who pool together both intellectual and perhaps material resources to pull off projects that are much larger than could otherwise be done. So collaboration is nothing new in science, but we've noticed an increasing trend over the past maybe 10 years in disciplines such as psychology, primatology, infancy research, and ecology. These big team science projects are facilitated by, you know, the internet and communications technologies. So the purpose of this panel discussion is to explore what these collaborations enable, what you can do with these very big collaborations that you could not otherwise, different ways to structure these collaborations, and some challenges for these collaborations. We also want to explore some disciplinary differences, or take an interdisciplinary perspective on these topics. So we're going to have maybe 40-45 minutes of discussion, with plenty of time for Q&A afterwards. I'll let the panelists introduce themselves. Nick, why don't you kick us off?

Sure, happy to. My name is Nicholas Coles. I am a research scientist at Stanford University, and I'm also the director of the Psychological Science Accelerator. For those who haven't heard of our network, the Psychological Science Accelerator is a globally distributed network of laboratories, and we pool intellectual and material resources in order to accelerate the accumulation of reliable knowledge in psychology.
And our network currently contains a little over 1,200 researchers from 82 countries, and thus far I believe we've been responsible for the implementation of some of the largest experiments ever conducted in social and cognitive psychology.

Thanks, Nick. Tim, how about you?

Thanks, Patrick. My name is Tim Parker, and I am a professor of ecology at Whitman College in Washington, in the United States. I am currently a co-leader of a many-analyst project in ecology and evolutionary biology, where we've recruited hundreds of different people to analyze one of two data sets, to try to generate some understanding of the degree to which analytical choices drive variability in results in ecology and evolutionary biology. We know there's quite a lot of heterogeneity in results in our field, and we're exploring, you know, one potential source of that. Hundreds of analysts, and actually another couple of hundred internal peer reviewers as well. I'm also a participant in another big team project, DragNet, which Lauren's going to talk about.

Great, thanks Tim. Why don't we take it over to Lauren then, so that she can tell us about DragNet?

Great. So my name is Lauren. I'm an assistant professor at the University of Missouri, and I've been a part of big team science as long as I've been in science, it seems like, starting out with the Nutrient Network, which has now transitioned to DragNet. Basically, we're trying to understand how different global change factors, like disturbance and nutrient additions, influence grasslands. So we have one experiment that's repeated in grasslands all over: one experimental setup, data is collected the same way, and then everybody runs the experiment. It's sort of like a collective, right, and then we get the data and process it all together.

Thanks, Lauren. How about you, Drew?

Yeah, hi, so I'm Drew Altschul.
I'm a British Academy postdoctoral fellow at the University of Edinburgh Department of Psychology, and I'm one of the members of the coordinating team for ManyPrimates. ManyPrimates is a collaboration of comparative psychologists and primatologists from around the world. Our goal is to pool our resources, in particular the different species and groups of primates we work with, in order to conduct high-powered studies, particularly replications, that will hopefully answer the big questions about the evolution of primate cognition. We're just finishing up our first big project, and we've got two more, two and a half say, that are well underway at the moment.

Perfect. And Kylie, how about you?

Thanks, Patrick. My name is Kylie Hamlin and I'm a professor of developmental psychology at the University of British Columbia. I'm also a governing board member for ManyBabies. ManyBabies kind of stole its name from all the other "Many"s happening in the world. In 2015, it was just a bunch of infant cognition people, mostly, getting together and thinking that there are lots of questions that the small samples we're able to get in our own labs just can't answer. Some of those are straight-up replications, you know, does an effect exist, and others are questions that really just need many, many, many, many infants, more than you could ever get in a single lab. So we're currently about 200 laboratories in the world, and we have several finished projects, as well as about five others and spin-offs of those projects in the pipeline.

Thanks, Kylie. So the first topic that we want to explore is what big team science is for, what it enables. My first question for the panelists is: what's the origin story of your collaboration or organization, and why take this approach? What does it allow you to do that you couldn't otherwise do? Nick, why don't you kick us off?
So the Psychological Science Accelerator was founded in August 2017 by a psychologist at Ashland University named Chris Chartier. It basically started with a blog post where Chris talked a little bit about his vision for bigger and better psychology. And for people like me who had thought a lot about the value of big team science, this was a long-overdue call for reform. A few weeks later, we already had 72 labs that had joined this network that we now call the Psychological Science Accelerator. I believe it was maybe a month later that we announced our first call for studies, and then we just immediately started building up the network. So that was a little over four years ago now. I became director at the beginning of this year, after helping build this into a network that is now a little over 1,200 researchers, and I believe we're maybe one of the biggest big team science organizations that I'm aware of in the social sciences right now. We now have 11 studies on our roster, two of which have been published in Nature Human Behaviour, and many more coming soon. And Patrick, you kind of asked a two-part question about why do big team science, but to make sure I don't take up too much time, I'll push that question to the side for a moment.

Sure. How about you, Drew?

ManyPrimates started in, I think, late 2016 or early 2017, when a bunch of us were kicking around some similar ideas. It really started with us thinking about the psychological replication crisis, and particularly issues of sample size, because in primatology and primate cognition we have even smaller samples than social psychology or developmental psychology used to have. There was a real challenge with this. So we wanted to understand what were the problems that were potentially being caused by this, and ultimately what we could do about the fact that, well, if you go to just an individual lab or an individual zoo.
It's just not that many individual primates that you can work with. So how can we as a community get around this issue? The second thing that came along was a realization that the focus in primate cognition is on just a few species, chimpanzees and rhesus monkeys for example, and there's a vast amount of energy being devoted to those animals, but we're not really getting a particularly wide or comprehensive picture of cognition across the entire clade. So in order to actually understand the evolution of primate cognition, we felt like we needed to broaden that out and bring more people in. These two goals working together are what made us want to build a bigger organization that could reach out and bring in more people who wouldn't normally be involved as much.

Perfect. How about you, Lauren?

So our origin story started with my PhD advisor and several of his friends. They were all in a synthesis working group, and they were tasked with trying to use meta-analysis to understand some of the generalities in plant community ecology. They were just finding that, you know, the data and the methods were so different across the studies that they were really frustrated, and they're like, there's got to be a better way if we're really going to be able to make these conclusions. And so they're like, well, what if we set up an experiment to actually test this in the same way everywhere? So they put out a call on ECOLOG (this is for the Nutrient Network, by the way), and the first year they had 15 people join on, and then it's just grown from there to over 100 sites doing the same experiment. Once the data is pooled, you can answer all these other really interesting questions, which has resulted in a lot of work.
And so DragNet, which is this new network that is sort of the Nutrient Network 2.0, and that I'm a part of and helping lead, is sort of: what are the next 10 years, what's the next big experiment that we want to do, can we get new people involved, can we learn our lessons from the previous version and move forward? So that's where we are.

How about you, Tim?

Thanks. Yeah, I was sitting here trying to think about where I wanted to start the origin story of our many-analyst project, but since Lauren brought in meta-analysis, I'll go to meta-analysis, which is really my origin story for this interest in this topic. I did some meta-analyses, and there's really nothing like meta-analysis to give you insight into variability in outcomes, and also variability in quality and reliability, and variability in the choices that analysts make, and everything else. That really got me interested in the sort of broader meta-science questions about why results vary in ecology and evolutionary biology, and a related question which motivated me as well: what can we do to get more reliable results? Anyway, I actually don't remember exactly when I started to talk with my collaborators about doing this many-analyst project, but I think we started thinking about it four or five years ago. Eventually we managed to find some data sets that we thought were suitable to share and have a lot of people analyze. And we decided to do it as a registered report, so we wrote up a proposal.
You know, an introduction and a methods section, got it reviewed, got it accepted as a Stage 1 registered report, and then we just started recruiting people. We just used Twitter to recruit, and, like I said, recruited several hundred people. I don't even know how many people are involved, but on the order of 400 people or so. Those folks have now done their analyses, and we're just in the process of trying to get things rolling. And I guess, just to maybe answer that second part of your question a little bit: to answer this research question of why results vary so much among studies in ecology, to explore this one potential mechanism, which is that the choices analysts make are different and they contribute heterogeneity to results, the only way you can really study that empirically, to actually have an idea of how different analysts answer the same question differently, is to give them all the same question and see what they do. It just seems like the natural way to explore that empirically.

Great. And how about you, Kylie?

So, you know, our origin story is, in 2015, just a bunch of people at a conference chatting about the possibilities of what we could do if we did something like pool resources across many labs. But I would say that there are three things that we try to do as ManyBabies, which we can only do in this kind of setting. One is just to have bigger samples than you could get in any individual lab. Babies, and I presume primates, are the kinds of subjects that you can't get a lot of trials out of.
What you can do in an individual lab is sort of doubly limited, right: you have fewer subjects, and you get fewer trials out of them. So things like looking at psychophysical curves, and all kinds of stuff you might want to do, you just really can't do unless you have tons and tons of subjects pooled together. The second is, of course, basic replication: does this or that famous effect in the literature replicate or doesn't it? And one thing that we do a lot as ManyBabies is we try actually not to necessarily do direct replications of published things; rather, we try to do, in some ways, adversarial collaborations, where the group comes up with the best way to test a hypothesis. So, you know, maybe there are 10 papers in the literature about something in infant cognition, five of which suggest they can do it and five of which suggest they can't, or something like that. And the idea is that before we start the study, people from both sides of the theoretical divide agree that, you know, if we do this study and the results come out like that, then yes, babies can do it, and if the results come out in this other way, then we all agree, no, they can't. As opposed to this sort of: I published a paper saying yes, you published a paper saying either no, or saying I interpret your data differently than you interpret it. We sort of get the interpretation stage done with first. So yeah, those are sort of our three goals.

Perfect. Nick, I see that you want to say something more.

So this is my first time hearing some of the others' origin stories, and it's really fascinating. A thought popped into my head that there seems to be a common thread uniting all of us today, and I think what we're seeing is that the biggest issues we face require a lot of resources, a lot of perspectives, and many minds. Big team science has helped physicists make great leaps and helped us map the human genome.
And we're seeing that that sort of big team science model can be usefully applied to other disciplines as well. It seems like one of the common threads in our stories is that at some point we came to this realization that our topic of interest is really complex, and that we've been trying to study something that's complex, but have done so with financial and logistical constraints that sort of forced us to do this with very small operations. But it seems like all of us are learning, perhaps from the meta-science movement, that our studies can yield different results depending on experimental designs, different perspectives, different measurements, different cultures and areas that we sample, and data analysis strategies. I think this is all pointing to the idea that the reality we're trying to understand is extremely complicated, far too complicated to be understood through small operations. And I think what's uniting us is this realization that big team science networks, like the ones that we're creating, give us those pooled intellectual and material resources necessary to try to understand that complex reality.

Yeah, and I wanted to ask a follow-up question for you all that relates to those themes. So, hypothetically, for many of these projects you could have many labs run related projects and then synthesize them later through meta-analysis or some other quantitative synthesis. However, you know, all of these projects and organizations work on unified protocols and actually have a collaborative element. So I wonder if you all could speak to what you think your project gains from that specific collaborative element. Maybe, Drew, can you kick us off?

Sure.
Yeah, I mean, in the animal cognition literature there's quite a bit of debate, just like there is in most literatures, about: is something a direct replication, is it a conceptual replication, did they do it right, did they do it wrong. In our case it was really important to figure things out beforehand and come up with a fairly unified protocol. I'll take our first study as an example, which is a short-term memory task where you have some cups on a board, and you hide a treat and you move the cups around a little bit. It's actually very simple: you just have to get the monkeys to wait, like for 30 seconds or 60 seconds, to see if they remember which cup the treat was under. A very simple task, but it turns out you have to have different board sizes depending on the size of the animal, to make sure that the visual field is being represented the same way with all these different animals. And these are the kinds of things we never would have caught if we didn't sit down in advance, preregister, and try to figure this all out; we would have been hitting so many issues down the line as we tried to implement this at every individual site. So instead we tried, as best we could, to have every site show us their implementation in advance. And then the committee, a bunch of us, would look it over and say, okay, yeah, that looks fine, or if they had issues we'd always respond as quickly as we could, saying, hey, don't worry about this, this is okay, we'll make note of this. Even when we did this, issues would still come up down the line a little bit, but the fact that we did as much as we could in advance made a huge difference. It would have been basically impossible if we didn't do all this planning and centralize it in advance.

Thanks. How about you, Lauren?
Yeah, so I think one thing that's really great about this big team science, that we found, is that it's just really built on a group of amazing people. When you're writing a paper, you have 30, 40, 50 of the best minds in the field to review your paper and make sure it's really good, and that can be overwhelming of course, but in the end I think it really pushes the science forward in ways that we wouldn't have guessed. And especially, we have a protocol for add-on experiments, so if someone has an idea, right, that wasn't the initial idea, they can propose that, other people can all collect that same data, and they can work on a new project together. So the questions have also really gone in directions that the original people who set it up never anticipated, and right, that's why I'm involved: because we get to do cool stuff. So I really love the fact that it's just been such a great group of people to work with, and that's been super important as we're transitioning. Now we're getting to almost two generations of scientists, so, right, I was my advisor's student, but now I'm bringing students into the network, and so we're really, you know, going pretty fast. What we found is that what's really important about it is the community. People want to use our data; there's plenty of published data out there now from the network, and people want to use it. But it's hard to convince them to fit into the framework sometimes, because, you know, they don't participate in the meetings, they don't contribute the data, right? So there's a clear difference that we're seeing between just the data that's available versus actually contributing and being a part of this intellectual community. That's what I love the most about it.

Yeah, thanks for that. How about you, Nick?
You raise this really interesting question of, like, why don't we all just do science by ourselves and then combine it later? What I really liked about some of these origin stories is that the groups were built off of knowledge of how frustrating that process can actually be. If you try to synthesize the literature through something like meta-analysis or systematic review, you often learn that people have just done things too differently. It's so different, in so many different ways, that it's not even clear if it makes sense to combine it all into a single analysis or a single conclusion. Lauren, what I really liked about your origin story was that part of what you all wanted to do was drive towards standardization a little bit, so that things were actually a little more comparable. I would say another frustrating aspect of trying to come in after the fact and synthesize everyone's work is that you have to find it all first, which is very time-consuming, and people aren't very good at describing what they did in their studies. Sometimes you'll run across an author whose study you used in your meta-analysis, and they tell you all these details about their study that you had no idea about, like, oh yeah, we measured X, Y, and Z, and you say, I would have loved to have X, Y, and Z in my meta-analysis, but I never knew you had it. There are just so many limitations to the tools that we have to synthesize the literature after the fact. It just seems so much more powerful to coordinate in the first place, as opposed to trying to clean up the mess afterwards.

Yeah, thanks for that. Okay, I wanted to pick up on another issue that Kylie hinted at, which is that when you're managing these big projects, these big collaborations, you, or the collaboration, are in control of a ton of resources.
So what you choose to deploy those resources on is extremely consequential. How do your respective collaborations make that decision? Is there a structured process for that? Maybe we could start with Kylie, since you already hinted at what ManyBabies does.

So the thing that we did first was we had the community nominate potential topics or effects in the literature that people would like to see subjected to a large-scale replication, and then we voted on the top ones. The project we chose to do first was actually a phenomenon that pretty much everyone thinks exists, and that was an intentional decision, because we wanted a proof of concept: can we actually coordinate across, at that time, 60 labs doing behavioral, you know, lab-based research? After that, we went into much more controversial phenomena to see if we could replicate those. For future things that perhaps were not voted on in the initial round, we have a formal submission process where people can submit a proposal to say, you know, I would like to make this a ManyBabies project. They need to describe things like: why is this something the field would think is worthy of a ton of resources? Why can't it be done in an individual lab? And, you know, who's going to lead the project, some practicalities of how this might work, etc. Then our governing board reviews those submissions. We've had a few cases where we've gone back to people to ask them to do more to justify the need for the project, and other cases where it's been clear to everyone and those have been approved right away.

Yeah, thanks for that. How about you, Drew?

Yeah, so we're basically on our first round of ideas.
We've done a similar thing to what ManyBabies has done, as Kylie said. In 2018, at the International Primatological Society Congress, we all sat down in a special meeting, kind of brainstormed ideas really quickly, and wrote them all down. Then later we did a short-form vote: we had our long list and we narrowed that down to a smaller group of ideas, and then we had people volunteer to write little outlines, kind of an outline plus a case backing up, supporting, and presenting each of the short-listed ideas, and then we voted again. The first two of those are now going ahead as projects two and three. For the next one, I'm not sure if we're going to put out another call for ideas for the next round, or if we're going to go back to our pool.

Great, thanks for that. How about you, Nick?

Our structure is somewhat similar to ManyBabies', in the sense that when it appears that we have the resources to do more work, we usually open a call and allow just about anyone to submit. Part of our process then is that we have an internal review team that vets proposals for things like feasibility, scientific merit, and potential interest to the network, and then at some point we get feedback from the rest of the network to see just how interested they are, because it seems like all of our organizations rely pretty heavily on volunteers, and so if your network's not excited about a project, it's dead on arrival. That's been the traditional way that we've done things, although we've definitely had interesting conversations with funders about doing themed calls, where we ask people to submit proposals on a very specific topic. So that's something interesting that I think will be the future of the Psychological Science Accelerator, but it hasn't been the past.

Oh, I'm sorry. So I think for us.
Right, I was interpreting this slightly differently. For our intellectual resources, anyone can propose a project, propose a paper; that's open to anyone, so it is very driven by the individual researcher. Where I think actual financial resources are best spent is on a data manager postdoc, to help us coordinate all the data: get the data from all the sites and make sure it's, you know, in a form that's usable and the same across all sites, so we can all use it. That's our number one use of funds, and it's of course tricky to find money for a person like that, but it's critical. The second way that we use resources is to host in-person meetings, where we can spend time brainstorming, getting to know each other, getting to know the ideas, and this is especially helpful for inclusion, right, getting to know people from other countries that maybe you didn't know before. So yeah, that's how we spend our resources.

And to follow up briefly, Lauren, when you're choosing what scientific topics to investigate, do you have a process for that as well?

So ours sounds a bit different from everybody else's. The experiment was structured on some basic questions that were sort of built off this meta-analysis working group, and from that point really anyone can propose a question with the data. We do have to say, you know, why it's important, and write an abstract, and say what data we're going to use, and submit that to the group, but it's really open: questions are totally available to anyone who wants them, and we have in fact way more questions than we ever have time to actually write papers to answer. So yeah, that's how we do it.

Yeah, thanks. And how about you, Tim?

I just wanted to, I want to ask Lauren a quick question before I answer.

Sure, go for it.

So yeah, so Lauren:
What was the decision-making process behind growing DragNet, in its form, out of the Nutrient Network? Obviously a whole lot of decisions were made about the structure of that experiment. How were those choices made?

So we were super democratic. We had a meeting where we all sat down, everyone: grad students, postdocs, faculty, to write down what we thought might be the interesting next questions that we could ask. And right, there are certain parameters within which we can do this: it has to be really cheap and it has to, you know, be able to be replicated. But that was a democratic process, so we wrote down tons of questions that we thought would be interesting for the next set of infrastructure. There were really several themes that came up, and in the end this one just seemed most possible at the most places, in the most types of grasslands, across the world.

Yeah. Tim, go ahead.

I'm not actually sure now what I want to say, because I want to respond to all sorts of things that I've heard, but I do want to give another little bit of perspective on DragNet. I'm just a contributor to DragNet; I haven't taken part in any of those processes. But just sort of as a contributor, you know, I came at this, I'm actually a behavioral ecologist, not a plant ecologist, although I've now dabbled pretty seriously in plant ecology. I came to DragNet mostly because I was really excited about this idea of distributed experiments with standardized methods, generating generality. You know, I teach ecology, I spend a lot of time.
You know, wondering what the heck I should teach my students, what we really know about ecology, and how we generate really good information. And like other people have said, you know, with meta-analysis there's a tremendous amount of uncertainty; often the methods are so heterogeneous that it's hard to know whether you're comparing apples to apples at all. I'm just super attracted to the method of DragNet, the distributed experiment, where in a single biome, grasslands, all around the world, people are doing the same things, and you're getting these data sets that are allowing people to answer these questions about generality, which I think are really challenging for ecologists to answer in general; I think ecologists have really struggled with these questions. I'm just excited to be contributing to something that, even if I never lead a paper, or even if I'm never even an author on any of these papers, I feel is really generating scientific generality. Often, with the many individual papers that I've written that have just come out of my own work and my students' work, I don't know how general a lot of that stuff is. Anyway, I have other things I want to say, but I could just keep talking, so I'm going to be quiet.

So, it's good that we're already stimulating a lot of thought. Before I move on to the next question, just a note to the audience: if you have questions that you want us to handle during the Q&A section, please do ask them in the Q&A box. We'll let them build up, and towards the latter part of the session we'll start choosing questions to ask the panel. Next question.
We've heard from the various panelists that some of these collaborations have perhaps hundreds of people involved. That's a big task, to coordinate all those people and to figure out who does what. So what methods do you use to determine what people's roles are, and to avoid things like diffusion of responsibility? We can maybe start with Nick.

It's interesting, Patrick, because there was diffusion of responsibility right there, and you employed one of our strategies, which is to just clearly identify what people's roles are. So one of our strategies is that we've been developing, over time, more and more explicit collaboration agreements that sort of state what it is that we want everyone to do and what the expectations are, as far as timeline is concerned, and the actual contribution. This often does involve a little bit of compartmentalization of tasks, which I saw was an audience question. So, you know, usually what we'll do is identify a specific person who will, for instance, be in charge of ensuring that the data management protocols meet our expectations. And in the Psychological Science Accelerator we have a rule that you cannot have a non-centralized data collection approach, because I think that many big team science organizations have hit a point where they try to merge 200 data sets and realize everyone named their variables differently and formatted their data sets differently. So we have data methods people who think a lot about data storage and advise the teams on how to do that, and help them format their data collection approach. We also have people who serve solely as ethics coordinators, who help get IRB approval at various sites. We also have people who serve as dedicated project managers, to track the progress of the study, make sure that it's on track, and coordinate with various individuals.
So I think for us the key has been finding people who will fill those roles and making sure that each role is very clearly defined and the expectations are clear. Thanks for that. How about you, Kylie? Sure. One thing for us is that, although we want everything to be distributed across the group without a lot of hierarchy, in fact a fairly hierarchical structure ends up emerging for each project. We have a governing board who oversee everything, and indeed, at the end of the day, if anyone below them in the hierarchy doesn't do their job, we have to do it; that's how it ends up turning out. But each project needs to identify a lead person or team before it's permitted to start, who is responsible for that individual project, and that lead team is supposed to create sub-teams for things like data management, stimulus creation, writing, et cetera. So we hope that before each thing happens, everyone is pretty clear on what their role is, as Nick mentioned. That said, it can be very difficult when you see that one person, or a few people, are really doing everything, because some people said they could help and then something happened, and of course that's pretty common. We just hope that other people can step in, but it's risky; it's hard to ensure that the work is distributed and doesn't fall on some poor postdoc for whom it ends up being practically the only thing they do. So I think that's a serious challenge for these kinds of collaborations, and one where you can do a lot on the front end to identify roles, and hopefully that will work; and yet still it sometimes doesn't work that well. How about you, Lauren?
You know, again, the number one important thing is having a data manager, because that really is someone's only job, especially now that things have grown so much and we have two networks, the Nutrient Network and DragNet. So having that data manager is key, and then, as many people have mentioned, there are multiple committees that oversee things, like an authorship committee, et cetera. And for the first good 10 to 15 years things went pretty well, although a lot of responsibility does end up falling on the heads of the people who get the most grants, which are mostly for coordination. But we've mostly followed the kindergarten rules for most of our time: play nice. Everything you learned in kindergarten, those are our rules, and they tend to work pretty well. I will say that now, again as we're growing, we have started to realize that we might need a little more structure. Ecologists really don't like that; we really don't want to have that hierarchy, we really don't want to do that kind of thing. But I think it is a bit necessary to keep things functioning pretty well, and it can still be a kind hierarchy; it doesn't have to be a mean one. So we're finding that, as we grow and make new projects, it is definitely necessary. Thanks for that. How about you, Drew? Yeah, so in ManyPrimates we follow two principles in terms of how we distribute the work and get things done. We want to be flat, though we have some necessary hierarchical structures; much like with ManyBabies, we have a coordinating team for each project, and the coordinating team necessarily does have to ask people to do things sometimes. But it's been pretty flat so far, and the labor is all volunteered: people just sign up for different tasks.
The other thing we have to be aware of, and accommodate, is whatever people need to do at their sites to get ethics approval, or to have a longer or shorter timeline, or whatever they need to do at their zoo or lab or sanctuary. We have to be accepting of that, which means we have to have a certain flexibility and a certain openness to how people want to do that work, and we can't ask too much of them, because we are asking for their time, and for the zoo's or facility's time. And Tim, what about you? Yeah, well, the many-analysts project is in some ways the odd one out among the projects we're discussing here, in that we're a relatively small team of people who decided to recruit a lot of other scientists to do analyses. To some extent those other scientists are our study subjects; in another sense they are collaborators, and all of those people are going to be co-authors on this paper. I suspect it will probably hold the record for number of authors on an ecology and evolutionary biology paper. But really, there's a small team of us running the project. So we have issues similar to those people have already brought up, like how the workload gets distributed; it's none of our primary research, it's a side project for everyone involved. And another issue that came up some time ago was resources: we don't have any dedicated resources for this project; we're running it entirely on a shoestring.
We actually have a situation where, right now, some of the really key data management is being done by a graduate student who is really excited about the project, and it's great that that graduate student is getting this experience. But when we designed this project we didn't think, oh, we're going to end up burdening this one person with all these data management tasks; that's just how it's evolved. I think maybe the take-home lesson is that these projects are really, really time-consuming and difficult, and, like every project, always more difficult than you think they're going to be. This one is certainly more difficult than even our most pessimistic estimate of how difficult it would be. Not that I don't find it rewarding, but resources are limiting when you're trying to do a project like this on a shoestring. I don't want to say the lesson is that you shouldn't try to do it, but it's going to be hard, and it's going to put stress on people, and I don't know what to say about what the lesson is necessarily, but it's a challenge. I wanted to follow up on the play-nice rule, and this maybe also relates to the issue that sometimes these projects are very stressful. Do you have any procedures for what happens if someone doesn't want to play nice? Or, another direction you could take this: how do you, in general, handle disagreements that occur between team members? Maybe we could start with Drew this time.
Well, as I was just saying, we haven't really had many issues of disagreements between people. Different people sometimes have competing priorities, which can lead to some issues, but those usually don't have to do with people's personal opinions on the research or anything like that; it's usually about people's funding or ethical bodies. The biggest issue we've had to deal with is people from sanctuaries working with people who have worked in labs, and now we have a general understanding that people from labs that do invasive research can take part, in theory, as long as the animals in our studies do not have invasive research done on them. That's about the most juggling we have to do, and again, that's not really a disagreement among the people; it's a disagreement that sits over all of our heads. Thanks. How about you, Lauren? Yeah, so early on there was a lot of concern about cheaters: would we have a lot of cheaters, and how was that going to work? Even to this day there are shockingly few, and we really don't have that many problems. As you said, Drew, there are a lot of issues with competing needs and interests, and I think that's where we run into the most trouble: the needs of postdocs are different from those of students, of faculty, of research scientists, et cetera. So that's where we start to run into some trouble, but it mostly gets worked out through long conversations and a lot of communication with each other. I think also where we're running into some trouble, again as we're growing, is that there are different interests in and expectations of the network.
And so I think there's a culture of contributing to the data, being a part of it, versus just wanting to use the data and join in that way, and those differences can lead to some frustration, because people don't want to get opinions from 50 authors, right? They're like, why can't you just like what I'm doing? I understand that it's a lot to deal with, but your paper is always better, your work is always better, if you work with the really thoughtful folks in the group, so we generally do pretty well. Thanks for that. What about you, Kylie? One thing we do is that before labs start data collection there are a number of hoops we ask them to jump through, including reading a bunch of documentation and agreeing to the code of conduct and principles we have specified on our website: commitments to openness, promises to treat everyone with respect, and that kind of thing. Because we front-load this sort of "no, you really need to behave yourself, this is a friendly collaborative," we haven't had a lot of problems. Where we have had problems is, I think, with some people who have traditionally had more status in the field not noticing what's been going on for a while, not participating in meetings and such while the decision making was happening, and then at some point wanting to jump back in and say, no, I don't agree with that, even though they didn't participate in the decision making. And how do we be respectful to everyone while also pointing out that we've all been doing this for a while? So that is probably the only place where we've had real conflict; not even real conflict, just some people being uncomfortable, but generally speaking I think it's gone pretty well. Thanks. What about you, Nick?
I think most of the conflict we've experienced in the Psychological Science Accelerator is pretty similar, but one thing that has made it hit a bit differently for us is that some of our collaborations have had over 500 collaborators. At first we really tried to operate on a consensus model: I like the preschool metaphor, let's all get in a circle, talk about our feelings, and figure it out. But as we grew in size and in number of co-authors, you reach a point where you realize it's going to take a year for these people to resolve their disagreements totally, and in the meantime we've done the field somewhat of a disservice, because we've sat on a lot of really interesting and potentially impactful findings while Patrick and I argue about whether to standardize effect sizes or not. That is a real argument that Patrick and I have had, and it's still unresolved after many years. One interesting model that we've tried, in a study that was informally conducted by the network, or sort of affiliated with it, is a consensus-based model where the person writing up the discussion and conclusion section listens to these conversations and all the feedback and tries to write a paper that represents the majority view, but collaborators are allowed to upload dissenting opinions that are linked to as supplemental materials. As big team science becomes even bigger team science, this might be a model that we consider, with the goal of making science move forward as quickly as possible. And you should standardize effect sizes, Nick. I wanted to follow up on an issue that Kylie brought up about status; I thought that was a really good point.
So I've noticed that many of these big team science initiatives are idealistic in the sense that they're trying to disrupt existing power structures, or do things a little differently from the norm. My question to the panel is: have you noticed these same power dynamics popping up in your respective collaborations, and do you have any procedures in place to prevent them? Maybe we could start with Kylie, since you brought up the issue and probably have something to say about it. So my sense is that the people who started our group, and I assume this is true in many of these groups, are ones who, as much as we have benefited from power dynamics, are also a bit skeptical of them, thinking that this larger, less hierarchical system, where things are more distributed, is better and will make better science. So I think we start with that bent in the way we see things. However, that doesn't mean we're not trying to interact with people who are much more senior in the traditional status hierarchies of psychology, in our case. If we find that we can't even convince half the field, or half the field is ignoring us, we haven't done a good job. So we really do need to try to work with both sides, if you can say there are different sides. One of the things we've done is repeatedly invite the people who might not initially feel positive about what we're doing to join us, so that they can see that we're being reasonable, in the hope that they will participate from the beginning and sign on to the eventual product. That has not always worked; as I mentioned, there was a case where someone wanted to come in three quarters of the way through, and in those situations we've discussed what we are going to do.
You know, a senior person who is getting involved now. But I think it can't possibly work if we just ignored the opinions of the high-status individuals, given that we're all existing in a world in which those individuals have their status, potentially for a reason. We want to do better, and we've distributed things now, so we just try to do both, I guess. Thanks for that. Drew, how about you? Well, I think ManyPrimates has found a few conditions that really help us avoid too much controversy in the research that we do. Firstly, the general experiments we're doing are not very controversial; they tend to be on the descriptive side, because what we're looking at is, in many cases, a bunch of species that have never been tested on something as basic as simple short-term working memory, or the ability to inhibit taking some food: common tasks you'll see done in 40 or 50 papers with capuchins, but that most species haven't been tested on. So there's not a lot of reason for people to get too invested in how any particular species does. The other thing about our structure is that we work mostly on Slack, and mostly with volunteer groups. So if someone wants to be involved, they have to get involved: they have to get on the Slack, get on the Google Docs, and actually participate if they want to cause some trouble, and we haven't had anyone really cause trouble. Maybe that's the thing about senior people: they don't really have the time or the inclination to be looking at Slack four or five times a day in order to keep up with the discussion, whereas people who are younger and don't have as much invested are much more willing to put in the effort, and perhaps aren't particularly attached to any particular theory.
Yeah, thanks for that, Drew. So I'm going to move on to the next question. Some of you have already talked a bit about challenges in your respective collaborations or organizations, but I'll ask an open-ended question about this: what would you say are the biggest challenges for your respective collaborations? I'd like to hear from each of you on this, so maybe we could start with Tim. Yeah, that's a really good question. I think I've touched on some of these challenges already, and at different stages the challenges have differed. Right now the biggest challenge is one that I've already alluded to, and that is data management: we have data from hundreds of research teams being submitted, and even though we tried to restrict how people submitted data and tried to give people really specific instructions, we got, for all sorts of reasons, a lot of which are totally valid, results in a lot of different forms. Just the challenge of trying to pull all these data together from many disparate sources, as somebody already mentioned, can be really difficult, and we're spending a lot of time on it right now. Other challenges I've already mentioned are a lack of resources, the fact that this is nobody's front-burner project for the most part, and the fact that we're relying on a bunch of collaborators who are all doing their best to contribute, but often I'm really busy and don't have time when the other collaborators want something from me, and then when I've got time, they don't have time, et cetera.
I'll leave it at that for now; there are various other challenges, but I'm curious to hear from other people. Sounds good. How about you, Nick? Our challenges have been quite similar, and we've touched on a few of them so far: funding, incentivizing the labor, ensuring honest and accurate reporting, personnel disagreements. One that we haven't talked about as much is infrastructure, and this has been a really big challenge for the Psychological Science Accelerator. The tools that researchers have at their disposal are not really designed with these massive collaborations in mind. So we often find ourselves trying to break a tool to make it work for our purposes, or developing brand new tools created for a specific use rather than retrofitted. At every step of the way we learn that we're navigating an infrastructure that hasn't been built for big team science. One example that I'm sure you've all experienced at some point is uploading authorship information to a manuscript portal. We have 500 authors on some of our studies, and you get to the manuscript portal and think, okay, I can just upload a CSV, maybe? No. Manually uploading 500 people's information takes several days, and it's honestly the worst part of the project, in my opinion. Then of course you submit the paper, and someone says, oh, but I changed institutions, so you go back into Manuscript Central to change it, and Manuscript Central crashes, and you have to start all over again. That's just one example of the infrastructure issues. Another one we've seen a lot in our network is simply having the infrastructure for managing a database of members: knowing what they've done and when, what they're interested in doing, and what their areas of expertise are.
You know, businesses and corporations have dedicated departments for that, whereas we have maybe a postdoc who is kind of into developing websites, working with a researcher to try to do it. So I think infrastructure is one of the big issues we hadn't touched on yet. Yeah, and I guess what I would add, because I do agree with all of the things mentioned, is a misunderstanding of what this work really is. People in the network see the value of it and really understand how we can make something amazing by working in this way; we do better science together than we do if we compete, which is, I guess, probably also a kindergarten rule. But I find it can be challenging to explain this work to people. It's hard enough to justify a paper with 50 to 100 authors; I don't even know how you deal with 500. It's very common to hear, well, what could you possibly have done if you're one of 50 authors, one of 100 authors? And the answer is: a lot. So I think the problem is the misunderstanding of how much work it really takes, and the credit you get for it; there's no way to write down on your CV, "I spent hundreds of hours talking to people so that research projects don't overlap and everyone can have their own piece and feel confident in their work." That's really hard to describe, and so that's another piece, I think. How about you, Drew? Yeah, so the issues of infrastructure and long-term sustainability certainly loom large for us. But rather than talk about that again, I'll highlight something that's quite a big thing for us, which is the fact that most of our animals come from places that we call range nations, while most of our researchers are in places like the United States and Europe and Australia, which don't have primates.
But the primates are in places like South America and Asia and Africa. So it's a big issue and concern for us that we want to be expanding and working with the people and the primates in those areas. That's something we're just getting started on now, but it's a big project for us. And it's also a big challenge, because it probably requires more funding on our part: we need to be able to provide incentives for people across the world to participate if we want to keep this going. Great. And how about you, Kylie? What Drew just mentioned is something I was going to touch on. Being able to have enough extra resources in your lab to dedicate some of them to big team science, as opposed to projects where your student is first author, means it might be only the richest, most well-resourced labs in the world that are able to do it. And we have tried and tried and tried, but I've had a really hard time getting funding to help other, less well-resourced labs participate. This touches on a lot of things, but one thing you might think would be a benefit of big team science is better representation around the world, and in some ways it is. But that better representation is still going to be heavily skewed toward, you know, WEIRD populations. And that is something we talk about a lot: we talk about how our governing board is not diverse, and our labs are not particularly diverse. Even when we are able to recruit labs in other places that are more diverse, how do we give those people more power in the relationships, more decision-making power, and things like that? So this is a challenge that we are far from overcoming well.
It's just another thing that a lack of funding can make hard to achieve, even though you might think at first glance that all of this is really increasing representation. And yeah, our group was recently able to get funding, with some of the other groups here, by saying that we're going to study big team science and produce some of the infrastructure that might help big team science do better; so we've actually gotten funding to learn about big team science rather than to do it, after a huge number of unsuccessful grant applications. So yeah, it's tough. All right, last question before we move to the Q&A. We've been talking about challenges a bit: what shifts or changes would you like to see from funders or universities or other stakeholders to help make big team science more sustainable? You touched on this already, Kylie, so I'll hand the floor to you before going to other people. So, I don't know how it's been for others, but funders often seem really interested, because they're interested in replicable results, they're interested in finding out true things. But they're not so interested in funding the replications; they're more interested in funding the really exciting stuff that tends to happen in individual labs. So we always end up getting quite far and then ultimately getting rejected. One of the things we've tried to do is talk to universities themselves: maybe universities could have a sort of big team science fund, with pots of money going toward these kinds of projects. We've had relatively little uptake, but I wonder if that's a potential wave of the future, because we find that the federal agencies are less open to it than you might hope. What about you, Lauren?
Yeah, Kylie, I think that's a really interesting idea, using the universities as a source; we've been able to do that, and I shouldn't say me personally, but the group has. Actually, very infrequently have we had the science itself funded; it's mostly the coordination, which is good, because that's what really makes it all function in our case. But the big funders have only taken that so far, because there's this idea that, well, if you can do this on a shoestring budget, why do you need money from us? Look what you've done with no money so far. We all understand why that's frustrating, but we have switched to an approach where we try to approach universities or organizations that have funding for institutes. For example, the University of Minnesota has gotten money from the Institute on the Environment, because they like to host the Nutrient Network; it's a big deal for them to be able to say, we've got 60 papers, we're getting something like 10 papers a year, and it's looking really great. So perhaps some sort of institutional funding like that could work well for you, because it's worked pretty well for us. How about you, Nick? This is really interesting. We've also considered the institutional funding issue quite a bit, but one of our concerns is that by seeking funding through institutions we essentially centralize the power of the network. Ideally we want there to be a director in a different region every five years. So our concern is that we limit ourselves a bit, and particularly in the US it's very hard to hire someone who doesn't live in the US, so in order for us to be diverse and financially sustainable we would have to keep recruiting people into the US to fill important roles. That's one of the concerns we've had with that approach.
Although I will say that many of us have certainly considered it nevertheless. I wanted to make a quick note about federal funders' preference for the exciting science happening in a single lab, as opposed to this big team science, which I view as equally exciting. When we're asking for money, we're often asking for money to build infrastructure, because we think that's the essential thing we need in order to continue doing the science that we do, and those infrastructure grants are extremely challenging to get. I've been a little surprised at how challenging it is, and surprised that funders aren't seeing the value in it, because the infrastructure we've developed in the Psychological Science Accelerator, for instance, has had an enormous impact on the field so far. People have taken our data sets and published papers, often reanalyses of our data sets, because they're so well documented and so well centralized. People have taken our policy documents and adapted them to other emerging big team science organizations as a way of modeling their group. People have taken our project management sheets and our protocols and used them to conduct other large-scale replications outside our organization. So when I think about the Psychological Science Accelerator, I think we do a lot of really amazing science, but I actually think the infrastructure and policies we develop are our most impactful export. That's why it's extremely frustrating that that's the exact thing we have so much difficulty funding. Drew, what about you? I want to make a brief case for local funding involvement, because, as I mentioned, our tasks are very simple and need to be flexible.
So oftentimes the people we get who want to spend some time working with the animals and get involved are undergraduates, research assistants, and master's students. They become involved through their curricula, and that connects us back to the institutions and the actual learning curricula at those institutions. At that level there's certainly potential for us, particularly where there are zoos, to link the zoos more closely with the institutional curricula at the universities, and to create a more seamless bridge, because there certainly aren't that many bridges at the moment between zoos and sanctuaries and the universities and the students who want to work with the animals at these sites. Great. And Tim, do you want to wrap us up? Sure, yeah, thanks. People have made a lot of good points, and I want to comment briefly on the idea Drew was just ending with, this local institutional funding. For instance, as a contributor to DragNet, I'm funding that work through my local institutional funding, and it's great, and it works, and I don't want to criticize it, because I think often it's really the only option. But I do want to echo a point made earlier about trying to promote contributions from a more diverse array of places. Kylie mentioned WEIRD, which, I mean, I'm not a social scientist, but I know that's a social scientists' acronym first.
I don't remember exactly what it stands for, but anyway, it describes a biased subset of humanity that a lot of research comes out of. The same thing happens in ecology as well: there's a lot of research coming out of places like Europe and North America and Australia. And I actually don't know what the map of DragNet participation looks like, but it was definitely skewed towards North America and Europe, etc. And if we're relying on people to just have that institutional funding, and we're hoping to get contributors from places that fundamentally have fewer resources, it's going to be harder for those people to have institutional funding sufficient to do the work. So, although I don't want to be critical of using local institutional funding, because in the current environment it's kind of essential, I think it also limits the breadth of participants and systems that we can include. Alright, I'm going to switch over to the Q&A. I can see a variety of questions, and if more come up, please do ask them in the Q&A box; we'll just keep going until we're at the end of our time. I'll ask the first one to the group: do any of your department heads or research institutes have a mechanism to consider big team science style work in hiring, promotion, or tenure? So, let's start maybe with Lauren. Sorry, can you rephrase that, or just say that again? Do any of the relevant people who control career advancement have a way to consider big team science style work in hiring, promotion, tenure, any of those big milestones? 
I'm definitely probably not the person to ask, because I hope they do, but I think what it takes, for me at least, is providing a lot of information for people about why it's so important. So there's nothing specific about that for me, but whether or not people have heard of the network, I do work really hard to explain to them why it's so important. Great. This is a tough one, I think, because it's an open question. But Nick, how about you? I'm not really aware, because I haven't asked for a promotion; I haven't even dared to ask for a promotion. But I will say I've heard from other members of the network that they've had some frustration with this exact issue, where they're applying for funding or applying for promotion and they've been dinged for being too much of a supporting scientist and not enough of a lead scientist. And those hit really, really hard, because we're all doing this because we saw that science was being done in a way that was inadequate, and we all poured our heart, soul, and energy into trying to build a bigger and better science. And it seems that maybe the people who make hiring and promotion and funding decisions haven't quite caught up to our thinking on this yet. Next question: do you all have any mechanisms in place to avoid groupthink, or too much similarity of decisions? Let's go to Tim, maybe. I'm going to pass on this one; I'm not sure it's super relevant to us, in the sense that we're not one of these democratic-process sorts of organizations with lots of people. We're very centralized. Maybe I'm missing it, but I don't feel like this question particularly applies to me. 
Yeah, so the question is: are there any mechanisms to avoid too much similarity of decisions, or groupthink? Do you have any structured ways to promote different types of opinions? I understand the question, I'm just not sure it's applicable to my situation. I'll chime in; for the audience members, Patrick repeated it because I missed the question. Sorry, Tim. I will say that I suspect we could be doing a better job at that. When I saw that question I wanted to immediately jump on it, because I said, oh, maybe we should be doing better there. I would say that in some projects, not done with our network but by members of our network, we have intentionally explored adversarial models of collaboration, where we intentionally seek out people who we know disagree on the topic. Within the Psychological Science Accelerator we have also recently tried what we call the red team approach, where we have a study proposal and we send it to people who we think are very critical and will hate it. They hate it, and we try to make it so that they don't hate it as much. And we've had, I think, some success with that so far. We've learned that that process hasn't ever given us a perfect study yet, but I think it's led us to have studies that are much better than we would have had if we didn't engage in that sort of intentional seeking out of critiques. Great. Next question: let's assume that your organization gets very substantial funding. So, you know, the National Institutes of Health in the United States gives you an enormous grant. What would be your top priority for how you use that money in your organization? Kylie? First I want to say I don't think that is a risk that any of us are likely to encounter anytime soon, or ever, but I do think that we routinely apply for grants for two reasons. 
One is to get money to pay people to do the things that otherwise our graduate students on up are doing for free and don't really have time to be doing, like data management and infrastructure building and all of the sort of work that, if we could have fixed these problems in the beginning, would have saved a lot of work overall. So people power would be a huge one. And the other thing would be actually increasing diversity in ways that are sustainable, and that might actually promote labs being able to participate long term, as opposed to a one-off kind of participation, things like that. Great. How about Drew next? Sure, yeah. If we had enough money to do this, for us, we're going to need to take the step of developing our infrastructure sustainably pretty soon, I think. And for us, I think, that means getting someone who can coordinate and communicate in the range countries, maybe in multiple range countries. Ideally someone who's actually based there, someone who is fluent in Spanish in South America, for instance, who can then coordinate, communicate, and integrate with the people who are working there. Those are the kinds of things we need to really make sure our operation is sustainable and also grow it in a diverse way. Sorry, I hadn't lowered my hand from before. Okay. Great. So, next question. Oh, I see Lauren wants to say something; go ahead. I just wanted to make one quick comment that I'm really happy to hear that, because, you know, Kylie, those were my two thoughts as well for how I'd spend my money. The whole point of these networks is to be as inclusive as we can, but we're definitely failing at that in many ways, and so I'm glad to see that we're really trying to figure out how to make these networks inclusive, and that that's where we want to go. 
I just wanted to point out that I think that's really great. Nick, go ahead. I think those are all extremely important uses of funding, and I think it just underscores how badly we need funding, because there are so many things that we need to use this funding for. I don't know what I would use it for first, but I know that high on my priority list would be infrastructure, because I think that this big team science model is the best model we have so far for producing reliable knowledge, at least in psychology; that's been my experience. And my hope is that we can help other people do this. I don't want to control every single big team science project that happens in social and cognitive psychology, because it would take up more hours than I think we have. But to be able to hand someone a program and say, hey, if you need to have 500 people submit a consent form, and know that they submitted it, and also upload their authorship information, and also do this, this, and this: if I could give people that infrastructure and say, now go and do science, that would be incredibly valuable, especially if I can make that software openly available. Everything that the Psychological Science Accelerator does is based on principles of openness. And I think that would be useful not just for scientists but for other fields where they're trying to coordinate really large and complex things. But this doesn't by any means diminish the importance of the other things that we would spend funding on; it's just one that I've thought a lot about, and I've looked into the future and said, oh gosh, I hope we can get funding for this one day. Tim, did you have something that you want to add? Well, I put my hand up because I was going to ask Nick for an example of infrastructure, and then he gave one, so I'm happy. Thanks. Well, so I'll ask the next question then. 
So when we were talking about approaches to big team science, a lot of you talked about having a common protocol, but there's a question here that asks about sort of a different approach to big team science, which is adding in systematic sources of differences across protocols, or systematic heterogeneity. So do any of you have thoughts about that alternative approach to big team science, and do you have any examples that you could talk about? Nick, why don't you go ahead. Sorry, Patrick, I missed that question; I was responding to a question from Lauren. Can you repeat it? Yeah, the question is about systematic differences between protocols. That's the question that I really hoped we'd have time to answer. Yeah, I think that's a really great use of big team science resources. One of the things we've learned in the Psychological Science Accelerator is that there are all of these changes you can make to a study that drastically change your conclusions: the areas that you sample, your measures, and the models that you run. And when we first were tackling these issues, we thought that we were tackling them in isolation. And then we realized that they're actually multiplicative. We would say, oh, these three data analysis strategies yield the same answer when we run our analysis on US participants, but they don't yield the same answer when we run our analysis on participants in Africa. And so what we're realizing is that all of these things that can change your study are multiplicative: the methods, the measures, the participants, they all interact to change the result that you actually obtain. 
And so this is something that we're just starting to explore in studies with the Psychological Science Accelerator, and one of the priorities I'm hoping to convince the network to adopt is to systematically spend our resources not just doing super clean direct replications everywhere, but intentionally introducing these sources of heterogeneity, because in order to run a study like that you need a lot of participants, a lot of data. That's what we're really good at: we're really good at getting a lot of data to answer those questions. We're coming up on the last bit of our time, so I think for the last few minutes I'll give anyone who wants it a chance to give some closing thoughts on big team science: the future of big team science, challenges to it, anywhere you want to take it. So, closing thoughts. And if we could go to Lauren. Sure. I guess my closing thoughts are that I've learned an awful lot from talking to folks outside of ecology about what big team science looks like, and I really appreciate that. I think these kinds of efforts can really help us think about problems, or solutions, that we didn't even know we could come up with just within our own field, so I have really appreciated that. Yeah, go ahead. Well, while I have the opportunity, and given that it seemed like we're all really talking about the same kinds of challenges, I do want to plug this potentially long-term funding line that we've recently gotten for studying big team science and solving problems for big team science, out of the Social Sciences and Humanities Research Council in Canada. It's called a Partnership Grant, and the idea is that we will add partners as we go. Some of the people on this panel are already partners, but there's another application coming up in a couple of years, for potentially a lot of money, where we could have a lot more partners. 
So if anyone in the audience or on the panel would potentially like to participate in such a thing, that's with me. Again, I'm Kylie Hamlin at UBC, and we can potentially talk more. Yeah, go ahead. I just had to unmute myself. I just want to say that it's great for us to get together, and I found this to be really valuable, and it's just nice to hear all these various things that big team science, these collaborative efforts, have done, and the various ways they're productive. I was already convinced of the value of this beforehand, but hearing stuff today has further convinced me. It does seem to me, though, that the biggest challenge is going to be convincing institutions, institutions like funders, etc., of the value of this really deeply collaborative work. I think the structure of science right now is very focused on the PI, the person who's doing it all. And in some disciplines, like in physics with things like particle accelerators (which obviously I think you guys are playing off of with the Psychological Science Accelerator), there's recognition that a lot of the really important questions can only be answered with really deeply and broadly collaborative efforts, and I hope that recognition spreads beyond physics. Wonderful. Thanks, Tim. That's all our time, so thanks so much to all the panelists, and thanks to everybody for your close attention. I thought this was a great session. So thanks, everyone, and have a great conference. Thanks, Patrick. Thanks so much.