Okay, so we're on the live stream now, Marc, should I start? Yeah, absolutely. Good. Hi everyone, my name is Marc Lajeunesse, and welcome to my very informal workshop on how to wrangle large teams for research synthesis. Before starting, maybe you need a little bit of introduction on who I am. I'm an ecologist at the University of South Florida, and I've experimented a lot with including many groups of people, at various stages in their careers, in research synthesis projects. For example, right now I'm teaching introductory biology, I have 250 students, and I have all these students helping me out doing a scoping review on a topic. What you'll find is that a lot of the details I'm going to talk about today have to do with my many, many failures in getting consistent outcomes. I can't say I quite have a solution that streamlines the whole process of including a lot of people, but I've certainly learned a lot of things along the way, and I'm hoping to share these with you, in case you're willing to take on a large group. And what do I mean by a large group? Well, I'm getting ahead of myself here. By a large group, I mean more than 50 to 200 or so individuals. It gets really difficult. At the heart of all this, we want to create a high quality synthesis, and when you include many, many people, again with various backgrounds, and whether or not they understand research practices, there are a lot of challenges with consistency. So a lot of what I'm going to talk about today is about trying to get the whole team on the same page so that you can get consistent outcomes, and maybe you'll walk away able to make a decision on whether or not you want to include a very large team. Without further ado, let's just jump into it and begin with what I mean by team. When I was a graduate student, a large research synthesis team was essentially collecting my friends to be participants in a research synthesis project. Some teams are absolutely streamlined.
Every member is a highly qualified expert. They know their role. They know how to complete tasks with little or no training. Ideally, this is the best team you could have: everyone is on the same page, everyone is motivated, everyone understands the tools. But because I teach large classes, I always get excited about including undergraduates in research synthesis projects. So, for me, I typically have a giant cohort of minions that I really want to have participate in research, and the work involved in getting them to participate is really on my end, in that I've got to make sure they know exactly what to do so that they can succeed. No matter which team you have, right, whether it's a bunch of colleagues, you're part of a work group that got some funding to parachute people together to work on a project, you're part of an existing team that just cranks out systematic reviews or meta-analyses, or you have the opportunity to include a collection of people that aren't typically part of a synthesis project, it's the same. And the beauty of it is that a lot of aspects of research synthesis can be broken down. Research synthesis is complicated, but it is a highly structured process, and that's really to our advantage. That's the golden thread that connects the entire process: every stage is very well understood. So I'm going to cover a lot of these stages, and I'm going to discuss which ones are most amenable to including many people, in terms of dividing jobs and distributing effort to complete those tasks. Right, so this applies to any type of multi-member group task assignment: you've got a big job, you break it down into little parts, and then you distribute those little parts to many people. And research synthesis, like I said earlier, although very complicated and time consuming, works the same way.
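To make the "big job, little parts" idea concrete: the speaker works mostly in R, but here is a minimal Python sketch of chunking a reference list into batches and handing them out. All names (`make_batches`, `assign_round_robin`, the participant names) are invented for illustration, not from any real tool.

```python
# Sketch: split one big screening job into small batches for many participants.

def make_batches(items, batch_size):
    """Break a long list of references into fixed-size chunks."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

def assign_round_robin(batches, participants):
    """Hand out batches to participants in turn; returns {name: [batches]}."""
    assignments = {p: [] for p in participants}
    for i, batch in enumerate(batches):
        assignments[participants[i % len(participants)]].append(batch)
    return assignments

refs = [f"study_{n}" for n in range(10)]   # stand-in for 20,000 references
batches = make_batches(refs, 3)            # four batches: 3 + 3 + 3 + 1
jobs = assign_round_robin(batches, ["ana", "ben", "cai"])
```

In a real project you would also want overlap between participants' batches, so the same reference gets more than one set of eyes; that comes up again with dual screening later.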
The aspirational goal is that maybe you want to include many participants in the process so that you could have a greater scope in your synthesis project, you could cover more studies, or maybe you're interested in farming out some of the meticulous things like extracting data from figures. Again, unlike other types of research projects, research synthesis is very open and has many opportunities to include participants. However, not every synthesis project should have a giant team. If you feel like you're including a lot of people because you want to increase the efficiency or the pace of a systematic review or meta-analysis, you may actually find yourself slowing down the entire process by including all these people, because you've got to train them, you've got to catch them up, you've got to validate their decisions, maintain consistency, and of course you've got to communicate effectively and wrangle all these folks to try to get the project done. So let's have a little bit of fun here. You might have seen these things before, right, these ranking systems on YouTube where you discuss things and place them on a scale. Here I'm going to put all these various research synthesis tasks on a scale of how amenable they are to including many participants. Later I'm going to talk in more detail about the actual composition of a team, but right now I'm going to focus on the tasks involved in research synthesis where you could include non-experts, where you could include people not essentially involved in the development of the project protocol or the project aims or any conceptual details, but who have little pieces of work they could tinker with, make decisions on, and send back your way, in hopes that you could aggregate all that information and move along to the next task. Okay, so one of the first stages is the scoping stage.
Now, depending on how far along you are in your project, sometimes you have an idea and you just need to figure out what's been done, whether or not there are many studies out there, whether or not they're complicated. I think this is an opportunity, yes, to include many people. I would put scoping at, you know, kind of a B, with A level being the best, the most superb of the research synthesis tasks open to broad participation. Scoping is okay. I'm doing it right now with 250 students, and what I'm having them do is just kind of wander around willy-nilly, in an unstructured way, to try to find studies that are relevant to our topic. Now, my aims are very different from probably what you're interested in. My aims are to teach them science, teach them how to navigate references and how to understand how science is reported and published, and the ancillary benefit is they're getting a feel for a real research project. And so they have a task where I tell them: we are doing topic X, go out there, find a primary research study, provide me the citation, a PDF, and a quote, a little bit of information about that study related to our topic. Overall that's fairly successful, but again, I'm not using their decisions, or the information they collect, to do a proper synthesis project. I'm just scoping out what's out there. Searching. So, a really key stage. The idea is to have a bunch of keywords to mine a bunch of bibliographic database sources. This I don't think is something that you could distribute as work, unless you have many, many databases to run it on. But the overhead of that is actually fairly difficult, because for each database the keyword search string will have to be modified, and that's probably not something you want to delegate to a group. That's something that you should be doing on your own if you're, presumably, the manager, the lead on the project.
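As an aside, here is roughly what that per-database modification looks like, sketched in Python. The `TS=()` topic tag follows Web of Science convention and `[tiab]` is PubMed's title/abstract tag; treat the wrappers as illustrative and check each database's own help pages before running a real search.

```python
# Same keywords, different database syntax. Illustrative only: verify the
# exact field tags against each database's documentation before searching.

keywords = ["pollination", "fragmentation"]

def web_of_science(terms):
    # Web of Science topic search: TS=(term1 AND term2)
    return "TS=(" + " AND ".join(terms) + ")"

def pubmed(terms):
    # PubMed title/abstract search: term1[tiab] AND term2[tiab]
    return " AND ".join(t + "[tiab]" for t in terms)

wos_query = web_of_science(keywords)   # 'TS=(pollination AND fragmentation)'
pubmed_query = pubmed(keywords)        # 'pollination[tiab] AND fragmentation[tiab]'
```

The point is that every database has its own field tags, truncation symbols, and Boolean quirks, which is exactly why this step is hard to delegate.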
Searching, I would say, is an important stage that needs to be consistent and clear, and so I don't feel like this is a part of the research synthesis tasks that would be useful to have many people work on. Screening, on the other hand: right, you've downloaded all the bibliographic information from multiple databases, you combined them, you deduplicated, you did all that business, and now you have 20,000 references you need to screen. And you want to do it in a way in which you have high confidence that the things you include and exclude are correct, are exactly what you want. This is really one of the important stages for including many people in a project. However, there's a trade-off. Yes, there is the possibility of processing thousands and thousands of references. You could have a dual, double-screening design where multiple people are putting eyes on the same abstract and title, making decisions on whether to include or exclude. But the more participants you include in that, the more challenges emerge, like: how do you resolve conflicts? How do you deal with individuals that are providing inconsistent responses? My personal experience, and I've talked before about my experience with having undergraduates screen many thousands of studies, is that they are incredibly inconsistent. It is not a population of participants that will yield high quality outcomes. I include them because, again, my goal is slightly different from a proper research project: I want to train them, I'm teaching them aspects of science, aspects of biology. But I'll get to the challenges of including many participants in the screening part later. So I would say screening, yes, fantastic, but there are many caveats associated with creating high quality screening outcomes. Coding. So you have to do quality assessments across studies, you have to code potential moderator effects across studies, coding details to evaluate the quality of each study.
This is also something that's amenable to including many participants. It's another one of these decision tasks, and depending on how you train people, unless they have a rigorous understanding of the concept that you want coded, you're going to get inconsistent outcomes. Sure, you could build in, again, dual screening, double coding of each study, to measure consistency. But then, because there are many participants, you fall into the same challenge as with screening efforts: how do you reach a consensus across those decisions, how do you reach a consensus across disagreements? If the things you're extracting are numerical, and you get different values for those numbers, it gets very complicated. In the end, even though you've delegated these tasks to many groups and maybe you've processed a lot of stuff, you end up spending a lot of time, or maybe you assign a new team, to validate and double check that information. It's another iteration in the process, where a lot of the work that's done may get dropped because it's not of high quality. And so a lot of what I'm going to talk about later in the workshop is about trying to avoid these multiple iterations of reassessing and redoing these repeated tasks over and over again because some participants are not quite achieving the desired outcomes. Another important aspect, if you're doing a meta-analysis rather than a systematic review, is that you're again extracting numerical information, extracting outcomes. I feel there's mixed opportunity for extracting information from studies. I know that undergraduates should not be involved in the extraction of study outcomes, numerical outcomes. They just don't quite have that sophisticated understanding of the scientific process yet: their understanding of experimental design, of statistical tests, of interpreting and understanding what's relevant from a figure.
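To make the double-coding consensus problem above concrete, here is one naive resolution rule sketched in Python (the speaker's own tooling is in R, and real projects need something more careful): take the majority label when several people coded the same study, and flag ties for the lead to adjudicate.

```python
from collections import Counter

# Naive consensus rule for categorical codes assigned by several participants.

def resolve(codes):
    """codes: the labels several participants assigned to one study.
    Returns (consensus_label, needs_expert_review)."""
    ranked = Counter(codes).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None, True          # tie: no majority, send to the lead
    return ranked[0][0], False     # clear majority wins

label, review = resolve(["field", "field", "lab"])   # ('field', False)
tied = resolve(["field", "lab"])                     # (None, True)
```

Majority voting only papers over the problem for categorical codes; for numerical extractions, where two people can both be "close but different", you still need an expert to decide which value is right.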
And so the team that you want to be involved in that heavy extraction process, which is by far the most tedious and difficult, because every study is unique, a different puzzle to solve in terms of extracting the outcome: that is not something that should be delegated to someone who is a novice scientist. And I'm including undergraduates as novice scientists here, because they just haven't read enough, they haven't taken many courses, and in my case they don't understand the biology yet, and so having them extract, say, a control-treatment contrast out of a study is a lot to ask of them. However, those tasks can be further cut up into more generic, repeated tasks. If you make a decision on what's important to extract from a study, right, and you know that there's valuable information locked in a diagram, in a figure, then collecting all the figures that you need to extract data from is something that you could delegate to someone who is inexperienced, because all you really need to do is train them on how to do the extractions, and you don't need to have them understand or make decisions on what to extract. And in fact my advice for that kind of stuff is: I just have them extract everything from the plot. Right, don't let them make a decision on what's important or relevant, have them extract the whole thing, and then you come in and pull out the numerical values you want from the plot. Okay, we're getting there, slowly, surely. Hypothetically, all the outcomes have been extracted, and now we're at the synthesis stage. This is not a task that can be delegated or distributed amongst multiple members. The analysis and the interpretation of the results, that's kind of left to the core team, the people involved in devising the protocol or devising the aims of the study. That's not anything you could farm out.
Right, so the synthesis part really rates as a low-level task in terms of delegating to many, many people. I'm not saying you can't delegate it to a statistical expert; that's fine. But would you distribute your analysis to a dozen statisticians, to a dozen people, to analyze and synthesize? I feel like that's a mess that you shouldn't need to worry about. Finally, the last part, which is the writing and the reporting of the project. I include many people in that process, mostly because it's a good training opportunity for people to experience how to effectively report a synthesis outcome, how to discuss potential biases and issues with the data set. Keeping people involved in that process, while you're generating the text and the discussion and the results, I feel is important. However, I don't think that's something you could distribute to hundreds and hundreds of participants, let's say; you're not going to get anything readable emerging from that. So reporting kind of falls on the low end too. What I'm hoping you get from this process, right, ranking these different tasks, is that there are many opportunities to include many people. There are many tasks that could be distributed. But the remainder of this workshop is not so much about whether or not these tasks can be distributed to many people, but more about how you manage that process. How do you make sure that they're doing high quality jobs, right, they're getting little jobs, are they doing them effectively? And then what do you do with all that stuff once it comes in? I can tell you right now, I've experimented so much with this, and I still haven't quite resolved the problem. And so I'm hoping what you get is me describing the challenges that I faced and these ad hoc solutions to getting many people involved, so that you could avoid some of my mistakes, so that you could maybe better strategize and use better tools.
I tend to limit myself to R, and right now there's a lot of cool stuff out there that could facilitate collaboration on these tasks. This workshop is not a tutorial on how to use those tools, but I'll provide a listing of what's available. All right, let me just take my shoes off here and get more comfortable. Okay, so let's move on. The remainder of the workshop will essentially be me describing my experiences with including many, many participants. I'm going to focus on some management approaches that might be useful. You may already be familiar with a lot of this stuff, but I think sometimes it's useful to hear how someone else has approached this problem. And by managing, I mean you are coordinating the participation of many people, making sure that they're on track, making sure that they are submitting their work, and then, most importantly, you're the one that's assessing and evaluating the consistency and quality of their efforts. Then I'm going to talk about team composition. Again, your management approach will probably differ considerably depending on who is part of the synthesis project, right? You might have, like, the soccer team, where everyone is a high-level expert in research synthesis, everyone understands conceptually what research synthesis is, everyone understands the subject you want to synthesize, everyone understands the stats. And so there's a lot of shorthand discussion that occurs, and you could be extremely nimble and get what you want fairly rapidly and still have it be high quality. On the other tail end, you have hundreds of undergraduates with no expertise whatsoever. Some may be eager to participate in research. Others may just treat it like an assignment or an exam and not give it the attention it needs; it's just a task in the week they need to get done so they get a grade. Right, so the motivation of the participants can be very different.
I feel like this is what would happen if you were to, say, farm out these menial or repetitive tasks to the broader community, because not everyone is fully motivated or interested. And, you know, gamifying all these things, or, say, hypothetically, inviting hundreds of people to extract data from figures: I feel like you're opening yourself up to a world of pain if you do that, because you really need to think ahead and organize a system that allows for repeated validation of decisions. And I'll talk a little bit about how I've had to change gears and switch my whole management approach to farming out these repetitive tasks to non-experts. Finally, I'm going to talk a little bit about tools available to distribute effort, distribute tasks. I focus mostly on R, because that's what I love. But between you and me, R is not the most optimal way to do this stuff, and you may just end up using a canned collaboration software where people could work on their phones, people could work on their desktops, in a nice clean UI. With HTML or JavaScript or something like that, R provides ways to make nice UIs also. But the challenge is how information is returned to you. So you may create a nice Shiny app that is meant for coding, or to help people input their study extractions. Issues will come up. Issues will come up where, say, you didn't fully populate a drop-down, and now you have to go back and revise the Shiny app to include more drop-downs, to include more cells for input, because the ugly nature of research synthesis is that there's huge variation in study design and how it's reported. And so it's very difficult to have a one-size-fits-all extraction or coding form. That means you really have to do the work beforehand to assess what is needed for inputting, and how to make things kind of readable for people to do their job.
If you just have a giant form that accounts for every single edge case of reporting practices, that is totally overwhelming. I mean, imagine a single form that includes all the cells needed to input every variation of treatment-control contrast effect sizes, every variation of how correlations are reported, every variation of how, potentially, odds ratios are reported, all on one form. That's a lot to navigate, a lot of opportunity for people to make mistakes, or to just get totally disenchanted with the process. But this, I feel, is where our opportunities are: for people to develop tools that are kind of generalized, that allow for quick augmentation for special cases in terms of data extractions or coding. I'll get into that; this is really the exciting stuff. If you're, like, a Cochrane author, you've got a ton of awesome resources and software for achieving these things, but if you're a regular vanilla ecologist like me... Maybe I'm kind of a weird case in that I love developing this stuff, so I don't mind getting my hands dirty. You may not want to take it this far. It may just be quicker to do it the old-fashioned way, where you have a giant Excel sheet with many columns, and that's where people input their data. But let's keep moving here, because there are many things I want to discuss in detail. So let's sit back a second and talk about why we want to include many people in our synthesis projects. I think the idea is that it's very enticing to want to take a big bite out of something, to have a really broad scope and process thousands of studies, to have a broad conceptual mapping of related but diverse research topics, and just farm a lot of that stuff out, in hopes of being able to synthesize it, to provide a broad overview of what's been done.
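One way to picture a "generalized form with quick augmentation for special cases" is a small base set of fields plus per-effect-size add-ons, so no participant ever faces every cell at once. A Python sketch with made-up field names, not from any real extraction tool:

```python
# Sketch: build a small, case-specific extraction form instead of one
# giant form covering every reporting style. Field names are invented.

BASE_FIELDS = ["study_id", "year", "extractor"]

EXTRA_FIELDS = {
    "mean_contrast": ["mean_trt", "mean_ctrl", "sd_trt", "sd_ctrl", "n_trt", "n_ctrl"],
    "correlation":   ["r", "n"],
    "odds_ratio":    ["events_trt", "n_trt", "events_ctrl", "n_ctrl"],
}

def build_form(effect_type):
    """Only show the fields a given effect-size type actually needs."""
    return BASE_FIELDS + EXTRA_FIELDS[effect_type]

form = build_form("correlation")   # ['study_id', 'year', 'extractor', 'r', 'n']
```

Adding a new special case then means adding one entry to `EXTRA_FIELDS` rather than redesigning the whole form, which is the "quick augmentation" idea.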
And if you've done a research synthesis project, you can see it: oh yeah, okay, you know what, when I did that research project, I really spent the bulk of my time on the extraction task. In the process pipeline, extracting data was the most tedious and time consuming. If I had had help doing those things, I could have done it quicker, I could have done it more efficiently, I could have covered more if I had broadened my research topic. I think that's the aspirational goal, but in practice, it is difficult. When you pull in many people, many things come up that you don't anticipate. It could really stall many tasks along the research synthesis process, in a way that is not efficient. Right, you're spending a lot of time making sense of what they've done. You're spending a lot of time catching people up. You're spending a lot of time organizing resources, making sure everyone has the tools, or whatever they need, to succeed. And so even though you're like, oh, I'm going to save so much time by farming this stuff out, now you're spending a lot of time on these additional managing tasks, which, at least for me, are not super exciting to do. So here are some pros and cons of having a large team. I'm sure there are many more and I will overlook a bunch, but I think I've distilled it down to these few examples. The tasks involved in research synthesis are stable; you know, there's a real structured process. Unlike an experiment, where you have to account for contingencies, the research synthesis process is highly predictable in what needs to be done. And so, taking the long view of including many participants in that process, you can see the advantages, right? There are advantages to having many people screen references, because they could process many things. Right, because you have many eyes, you could expand the scope of the research topic.
You could have broad coverage, which you probably couldn't address beforehand because you had limited time to achieve things. Including many people means that maybe, you know, rather than doing this narrow topic, you could scale up hierarchically to a broader concept and include that little topic, but also include many more related or indirectly related topics. Task efficiency, right, and this is where I kind of have mixed opinions. I'm never really interested in doing things quickly, but I understand there's a whole branch of research synthesis that wants to push things through fast, especially if they're related to important emerging issues where you need a really quick assessment of what's happening. Having many people on a team can be a way to process and complete the tasks quickly, but I feel like the best team to do that is a collection of experts. Right, you're not going to zoom through a rapid review by including non-experts in that process; everyone involved needs to be a Ferrari, not someone on a bicycle. When you take on a large project, a con, at least for me, for real, is that my role changes. I mean, I see myself as a muddy-boots ecologist. I like to get my hands dirty. I want to be part of the project deeply, right, more than just a manager, more than just a supervisor. I want to be the guy catching dragonflies. I want to catch dragonflies too. And it's the same with my research synthesis projects: I want to be part of the screening, I want to be part of the extracting phase. But when you have many people, I have to let go of that control. I cannot be doing these tasks, because I have to focus my attention on resolving issues that emerge, as opposed to task completion. And I'll talk a little bit about how that becomes a different managing style, even though deep down inside I want to be the person doing a lot of this stuff. And at heart, too, I feel like I could do all these tasks very quickly.
Right, sure, I have 250 students scoping a research topic. There's incredible opportunity for me to get a good feel for what's out there, but I could also do this on my own, and if I do it on my own, I can do it quickly. I know exactly what I want. I know exactly what's needed. You know, with 20-plus years of experience doing this stuff, I could zip through it quickly. But in the end, in terms of my role, that's a lost opportunity to train a lot of people, a lost opportunity to get young people excited and involved in science. And so, again, I've got to change my role. I can't be the one that does all the work. I've got to delegate. Then there's the risk of dilution effects when you distribute work across a large team: the flow of tasks not being addressed in a timely or correct way, because there were hiccups in how people were trained, hiccups in the tools, hiccups in how information is accessible for them to make good decisions. And as tasks get passed on to a group, there's no way you could individually respond to things, especially if there are hundreds of people. And so there might be a dilution of information that you need to account for when you're aggregating everything that's been done. And this takes up time. It sucks up a lot of time. The biggest thing that I've failed at in including many people is setting up a program where things are consistent. With undergraduates, again, you know, their motivation may vary quite a bit. Their interests will vary quite a bit. Now, I know there are students in my class that are not interested in participating in the scoping review. I'm breaking up here, I think. Okay, is everything still okay, can you hear me? Yeah, we can hear you, Marc. Okay, I just got a warning saying that there was an issue. You're okay. Consistency. Last year at this conference, I talked for 10 minutes about my challenges with consistency in screening efforts when including many people.
So I'll talk a little bit about that, but in the end, it's difficult, it's really tough. Okay, so your role, right, is to manage, not to be the one who does the dirty work. For me, that's an internal tension that I struggle with all the time. I can be honest with you right now: I'm not a great, effective leader, because I have problems delegating things and communicating to people. But because I've done it many times, I have some tips and some guidelines on how to avoid certain management approaches. Right, so the idea is you're the manager: you're managing complexity. You have the broad view of the entire project. You have the broad view of what needs to be done, what tasks need to be developed so that they can be distributed, how much time you're going to need to allocate to make sense of what's been done. And you have to be there for the participants. You're going to be the one that troubleshoots. They're going to ask you a question; you're the one that's going to be there to make a decision. Even though you're not directly involved in the screening or the extracting, indirectly you will kind of be involved, because they're going to come to you with difficult examples. In ecology, right, experimental design and what gets reported is highly heterogeneous. You could have a nice clean study where they just do a control versus treatment contrast, and then you have another study with multi-level, multi-factor analyses of a process, and you're left trying to figure out what the outcome is, what the relevant thing is. They're going to come to you and be like, Marc, we understand that this is a relevant study, but the key outcomes are difficult to pull out. And so then you come in; you're the one who has to make the decisions to try to figure it out. But your goal, though, is to try to make the tasks that they're involved in as simple as possible.
Their job is to strictly screen titles and abstracts. Their job is to strictly code single things out of studies. Their job is to strictly extract data from bivariate plots. Okay, do not farm out critical decision-making. If you have a collection of non-experts, don't let them be the ones deciding what outcomes to pull out of a study. Don't let them decide what studies to include or exclude. They're just not quite there yet to be able to make those key decisions. You have to have an expert group, or you have to have significant training prior to that whole process, so that you can buy some insurance that whatever decisions they make are useful. Yeah, in my experience: there was one year where I went through six iterations of abstract and title screening with my students, right? We screened all the studies in one go, and about half of the students were highly inconsistent. So it's like, okay, not a big deal, let's just do it all over again. And again, a cohort of participants is totally inconsistent. And it's easy to figure out who's consistent and who's inconsistent, because you're using statistics, like kappa statistics, to verify the agreement. And usually there's always a bunch of students that really have no clue what they're doing. And then you spend time trying to catch them up. Others get disgruntled because there's a collection of students that aren't producing consistent outcomes. It gets ugly, it gets real ugly, but I've got some tips for that. All right, so here are some management tips, as I would refer to them, on how to help yourself out when you're taking on a large team. You need to have your ideas in place, right? If you are just starting off, you've got an idea, you haven't even scoped: you're not ready to have a lot of people participate.
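For anyone who wants to try the kappa check mentioned above: here is Cohen's kappa for two screeners' include/exclude calls, written out from the textbook definition in a short Python sketch. In practice you would likely use an existing statistics package rather than rolling your own; the `expert`/`student` data below is invented for illustration.

```python
def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two lists of 'include'/'exclude' calls."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement by chance, from each rater's marginal inclusion rate
    p1 = rater1.count("include") / n
    p2 = rater2.count("include") / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

expert  = ["include", "include", "exclude", "exclude", "include", "exclude"]
student = ["include", "exclude", "exclude", "exclude", "include", "include"]

kappa = cohens_kappa(expert, student)   # about 0.33: flag this student for retraining
```

Kappa of 1 means perfect agreement and values near 0 mean agreement no better than chance, which is what makes it easy to spot the screeners who "really have no clue".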
Put in the work. The ideal scenario is, at minimum, you've thought through a PICO statement, or, best case scenario, the study is pre-registered, right? You've thoroughly thought out every detail of the study, exactly what you're going to do. I feel like if you're at that stage, then it's easy for you to assess how many participants you need. If you're just starting off a project, you need to put in the footwork to evaluate whether or not it's even necessary to include many people, and I would say, at such an early stage, don't spend time thinking about who to include, because you have a ton of work ahead of you to figure out whether the project is tractable in the first place. Once you have a good idea, a good vision of a synthesis project, you need to start thinking about team composition. I haven't really talked much about team composition yet, but really I'm going to split people up into collaborators, the core members of the project, right, these are the experts; and then participants, the people that are going to be working on those repeated tasks. We already covered the three main repeatable tasks that you can include many people on: the screening, the coding, and the extracting. Collaborators are important, right, but you're not going to get a super large team even if you include many experts. I've been part of many work groups where, you know, someone gets funding and a collection of experts gets shipped to an area, and you work together intensely for a week, you go abroad, and then you meet again periodically, six months later or whatever. But it's all experts, right? You have a statistician, you may have a librarian, you have domain experts, and it's easy to see how these people fit in the project. It's the participants where it gets difficult. Here are things you need to think about, since you're farming out work: how are they included in the final report?
Is their contribution enough work for them to be granted authorship? If a few key participants were really valuable, should you give them extra credit? These are questions I can't answer definitively, but I tend to be absolutely inclusive. When I teach a class, I say: if a publication emerges from our research synthesis, everyone who participated, irrespective of the magnitude of participation, becomes an author. That saves me a lot of work at the end, deciding who's in and who's out, because I don't want to be the guy making those awkward decisions. I'm asking them to be part of a project, so I need to provide incentives for them to do high-quality work. I don't have money to pay them for these tasks, so authorship is the carrot on a stick that keeps them ambitious: the endpoint might be an actual publication.

Spend time thinking about the resources available to you. Again, if you're a Cochrane author, you're well set. But there's a ton of canned software for systematic reviews out there, some of it expensive. It may have really enticing features: it may be highly collaborative, it may have the flexibility you need to delegate tasks. You need to figure that out ahead of time. You can't just invite a bunch of people and then work out which tool to use; sort all of that out before you decide whether to include many participants.

Money is important, and so is time. You don't want to waste people's time. I remember some of the reviews I got from my students said I wasted a lot of their time with those multiple bouts of screening. I just wanted the screening outcomes to be consistent.
I would experiment weekly with different approaches to train people, to catch people up, to identify the problematic students. But the participants get disgruntled when they're repeating the same task over and over: in screening, they may be looking at different studies, making include/exclude decisions differently every time, going through the entire process again and again without seeing the project mature and reach the later, important synthesis tasks. Maybe that is the learning outcome: science is hard, and it's difficult to anticipate what's going to happen. But I'd hate for that to happen just because you didn't think through in detail how people should be involved and how their time should be spent. That just delays the progress of the project.

Use collaboration tools. There's a lot of free software out there, or you can spend some money, to create a central hub for everybody. Yes, there's a lot of systematic review software, but those tools aren't hubs where people can ask questions or find tutorial resources; they're not places where you build a community. If you can establish a place where someone who needs to catch up on a task can just log in and see multiple discussions, multiple examples, and tutorials to warm them up and help them finish, you save yourself from answering emails and questions one at a time. There's a central resource where they can find everything they need to succeed. The way I've done it is with Teams, which is what we use institutionally here at USF. If my team is local, every participant at USF, I use Teams because it has discussion boards, meetings are recorded, and everything can be placed in a central hub. It's very nice.
But if you're limited to something free, Google Hangouts is a great supplement; everything you need is there. The idea is to minimize the management time you spend answering questions. Often enough, answering questions can be delegated to the participants themselves: they'll answer each other's questions in chats or discussion forums. That has been a saving grace for me with my many undergraduate participants. They teach each other how to troubleshoot, and I can review and verify their answers, but for the most part they tend to be correct and prompt and provide a lot of useful information, even for people who don't have questions. You may have someone who doesn't know what's happening and doesn't know what questions to ask; if they visit a discussion board, they can see a collection of questions that have already been asked and how they were answered.

Tutorial and catch-up resources, which I'll talk about shortly, are really important, especially with many participants: everyone needs to be on the same page, everyone needs to know exactly what to do and how to do it well, and there are strategies you can use to facilitate that. Other important things: you want opportunities for people to live-edit documents and be involved. Research synthesis is great here because there's a very specific timeline: you cannot go to the extraction phase until you've collected the PDFs, found the main text of all the studies. There's a real synchronicity to the activities that need to be done, and that adds a nice structure.

Devise a communication strategy. Here again we're going to have a little fun describing the most effective way to talk to a large team.
My advice? Let me just go into it; we're going to do another one of these rankings. What's the most effective way to communicate with a team, to make sure everyone has what they need to succeed, and to answer questions? Email is the least effective way to communicate with a large group. People won't read their emails; a message distributed to a group may end up in junk mailboxes; it's text, and it takes time to read and interpret. If you've got 50 or 100 participants, avoid email as your main channel. Rely instead on the central hub, discussion boards, and documents or videos that help people navigate the process; that's really the quickest way to get people unstuck.

So my advice: video is the top way to communicate. You can record meetings, so missing one is not a big deal. Need to show someone how to install R? Just create a video. Don't write a document on how to install things; do a video tutorial. It takes less time than writing up a document, and it's easy: just do a screen capture (in Windows there's the Xbox screen-capture tool, or you can record in PowerPoint). The communication is visual with audio at the same time, and viewers see the exact point-and-click navigation; they can rewind and rewatch. It's far more straightforward than sitting down and writing a text guide. For me, when I'm delegating tasks to my students, it's all video. Here's what you need to do to install R: I walk through it. Here's what you need to do to start screening studies: there's a two-minute video to help them out.

Meetings are important too, but with many participants they get difficult to organize so that everyone can attend. Again, pre-recording is key, or if you are going to meet
centrally, then recording that meeting is important so everyone has an opportunity to understand what's going on. Documents are a mid-range way to communicate. Having guidelines or examples to read is great, but what if you recorded a video instead and articulated how you made decisions? Record yourself screening a bunch of titles and abstracts, describing your decision process. That's far more illuminating than a bunch of text examples of why you included or excluded things, and it's quicker to record than to develop giant documents. Finally, chat is important; actually, I should rank it a bit higher. A lot of questions will come up, and having preset open-chat times while people are meant to be doing their tasks is valuable: you keep the chat window open during a task, everyone sees the real-time discussion of what's happening and how decisions are being made, and everyone benefits from that.

Okay, the research synthesis pipeline is essentially your timeline and your meeting rhythm. You know exactly when to schedule rigid meet-ups, because they fall at the tail end of completing tasks. Here, hypothetically, is the entire process of a research synthesis project; large groups are only involved at a few key moments in that process. Those are the opportunities where you meet, every two weeks or every month, say. If you have many participants, you have to build in buffer time to get things done, because you have to train them, give them time to install the tools they need, and give them time to learn the tools with practice examples. This sets up a rhythm, and the idea is that meeting at a preset time, bimonthly for example, gives people time to anticipate when the work needs to be done.
People can plan their lives around having to screen a ton of material; not everyone can screen things right away. They'll think: okay, this isn't due for two weeks, I know I have some free time next week, I'll get it done.

Team composition. All right, I've got to speed up a bit because I'm talking a lot. The idea of a team, again, is collaborators plus participants. The ideal team is all experts: there's no overhead for the manager, because you can delegate work knowing that whatever emerges will be high quality. Everyone knows what to do: the statistician understands their role, the librarian understands theirs, the domain expert understands what's needed. It's the repeated-task participants who are the challenging group. My philosophy is that anyone can participate in a research synthesis project. But do you want to include a bunch of novices? Even though there's an opportunity for increased scope and efficiency in completing tasks, you're opening yourself up to a lot of training and catching up, making sure every novice participant has what they need to succeed. In the end you spend a lot of time managing and training people, as opposed to working with a collection of experts, where you can skip that whole phase. Sure, you're not screening with as many eyes, but you know the experts are making high-quality decisions, and you avoid that whole morass.

Other things to think about: local teams are probably the best, because you just bump into people and can chat with them, and it's incredible how much work you can do sitting at a round table getting things done. You can ask questions directly, boom boom boom, things happen fast. Remotely, things are slower.
But remotely, the opportunity to include more people is very high, because they can be anywhere in the world, and if you have a central hub for communication, training material, and documents, there are many opportunities to include participants. Again, though, as a manager you'll spend all your time making sure everyone is on the same page, to avoid inconsistency in task decisions.

Just to reiterate: video tutorials beat text guidelines; my impression is that people tend to be visual learners more than readers. Active practice beats text examples: a bunch of text examples in your guidelines is not the most effective way to teach someone what high-quality decisions look like. Instead, have them work through a bunch of examples exactly as they would in the real task, but where you already know the answers; that gives them an opportunity for introspection and evaluation of why they made a wrong decision. Group activities beat individual activities, though this gets challenging with scale: if you have hundreds of people, working as one group to finish a task is clearly not effective, so this really depends on how many people you want on your team. Try to synchronize smaller groups to get things done; in my class, for example, I allocate some time during the lecture period for people to work together in person to finish their tasks. Discussion boards are absolutely key: chat rooms, anywhere there's a record of questions and answers, are crucial, because not everyone is active at the same time of day. You won't always have synchronous communication, so chat rooms and discussion boards allow for asynchronous learning of what to do.
So here's an example of a video tutorial I would post: okay, your job is to extract data from figures; you're going to install the software, bring up this image, and try to replicate what I'm doing; let's see if you can recover the numerical data. It's an opportunity to discuss the biases associated with extraction and to develop consistent rules for extracting what you need. Having a bunch of examples you recorded yourself, and having participants replicate them, is the quickest way for someone to learn something.

Here's my typical workflow when I have many participants, and it's ugly, and it doesn't always work, but here it is. I record a bunch of videos on how to access the tools: how to download R, how to install packages. Then there's a time period in which everyone needs to achieve that before we move to the next stage. Once everyone has installed what they need, we do additional conceptual training, because people making screening decisions need to know what the inclusion and exclusion criteria are, and they need practice examples so it gels in their heads. Then they actually do the task: you assign them collections of abstracts to screen, with a little warm-up phase where they repeat decisions on abstracts whose answers are already known. They don't just jump in cold; there are a few iterations that are thrown away beforehand. That's because (and this is not well-understood stuff) the decisions you make really vary with what stage you're at: when you start screening you may be hot, and at the tail end you may be cold. Your goal is to make things small and tractable for people, and this is one way to make sure they're producing high-quality decisions.
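That warm-up phase, repeating decisions on abstracts with known answers before the real screening starts, can be automated as a simple gate. This is a sketch of the idea, not the speaker's actual setup; the study IDs, answers, and the 80% threshold are all hypothetical:

```python
# Warm-up gate: before a participant screens "real" abstracts, compare their
# decisions on a small set of pre-screened abstracts against known answers.
# GOLD maps hypothetical study IDs to the expert's decision.

GOLD = {"s01": "include", "s02": "exclude", "s03": "exclude",
        "s04": "include", "s05": "exclude"}

def warmup_score(decisions):
    """Fraction of warm-up abstracts a participant decided correctly."""
    hits = sum(decisions.get(sid) == answer for sid, answer in GOLD.items())
    return hits / len(GOLD)

def ready_to_screen(decisions, threshold=0.8):
    """True if the participant cleared the (arbitrary) warm-up threshold."""
    return warmup_score(decisions) >= threshold

student = {"s01": "include", "s02": "exclude", "s03": "include",
           "s04": "include", "s05": "exclude"}
print(warmup_score(student))    # 0.8 -- got 4 of 5 right
print(ready_to_screen(student))
```

Participants below the threshold get re-trained and re-run the warm-up; as the talk notes, the warm-up decisions themselves are thrown away.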
On top of all this, when you're assigning tasks you have a dual-screening design: multiple eyes looking at the same thing, making decisions. That makes things difficult and complicated on your end, because you have to verify consistency. You compute kappa statistics and make sense of them; you evaluate whether some people are just clueless, whether their decisions are random or outright wrong. The big cohort of people making sound decisions just continuously gets fed new tasks, but those who fall behind, you need to catch up. You still want to include them; they're still participants. Although, between you and me, I have completely dropped the work of some people, knowing it was impossible to get consistent outcomes from them. They can get involved later on, so it's not a big deal, but you have to retrain them, re-expose them to different examples and exercises, again to make sure everyone is on the same page. At last year's conference I spent ten minutes talking about how difficult it is to get consistent screening outcomes when delegating tasks to undergraduates. It's hard, and honestly, my advice is that it's not an efficient way to get what you want. To me it's a key training opportunity: they get their hands dirty in science, and it improves science literacy. But in terms of generating high-quality research projects, it's a lot of work. Really a lot of work, maybe more work than it's worth.

All right, let me end with tools to facilitate the whole process. A central hub is important for communication, but what tools do you use to actually delegate the tasks and for participants to complete them? I'm concerned mostly with R here, and there's some material out there, but not much that covers the entire process; I feel those are open opportunities for someone to develop.
If you're going with canned software, for example a web platform for collaborating on screening, coding, and extracting data, there's a ton out there. Some of it's free, some of it's paid; the platforms vary considerably in what they provide, and some offer automation tools. This is stuff you need to figure out before you start the project. Don't jump from one tool to the next; you'll just aggravate people. If you're going to adopt one of these canned tools, experiment with it and assess it before you delegate tasks. It's a ballooning field in terms of accessibility, and that's just the web platforms; there are other options beyond this very incomplete list of collaborative dissemination approaches.

What I'm mostly interested in, because I like to tinker, is whatever provides the most flexibility, because you can build ad hoc solutions to problems (maybe that's not always an advantage). Since I'm the manager, I can tinker with the tools to get what I need, whereas with canned software I can't manipulate the HTML or the JavaScript or whatever is going on under the hood; I don't have that operational flexibility when it comes to delegating tasks.

My conventional dissemination strategy is very simple, and I've done all of it at some point. Right now I use metagear. What it does is create a set of files containing, say, the screening tasks, but it could also be a collection of studies that need to be coded, or a collection of figures that need data extraction. It splits up the database across multiple people, and if you want a dual design, it does that for you too. Then participants are left to plug the sample dataset into, if you want, metagear's abstract screener.
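In R, metagear handles this splitting for you. As a language-neutral sketch of what a "dual design" split means, here is one way to give every reference exactly two screeners; the screener names and the round-robin pairing scheme are mine for illustration, not metagear's actual algorithm:

```python
# Split a reference list so each reference is assigned to exactly two
# different screeners, enabling agreement checks (e.g., kappa) later.
from itertools import combinations, cycle

def dual_distribute(references, screeners):
    """Map each screener to the list of references they must screen."""
    assignments = {s: [] for s in screeners}
    pair_cycle = cycle(combinations(screeners, 2))  # rotate over all pairs
    for ref in references:
        a, b = next(pair_cycle)   # the two screeners for this reference
        assignments[a].append(ref)
        assignments[b].append(ref)
    return assignments

refs = [f"study_{i:03d}" for i in range(1, 7)]
out = dual_distribute(refs, ["ana", "ben", "cam"])
print({k: len(v) for k, v in out.items()})  # {'ana': 4, 'ben': 4, 'cam': 4}
```

With three screeners and six references, each person gets four assignments and every reference is seen twice; the workload stays balanced as long as the reference count is a multiple of the number of pairs.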
Once they complete the task, they submit it back to you, and you assemble and aggregate everything. That's basically what I use most of the time. The same goes for data extraction: in R there are many opportunities to delegate data extraction from figures, and juicr is just one example of what's out there.

One gap in R, however, is the coding and extraction phase, in terms of pulling numerical outcomes out of studies. This work is amenable to a simple form that people fill out: they fill in cells of information like the means and standard deviations reported in studies. But I feel there's no clear tool, in R at least, that is generalizable and accounts for the edge cases of difficult-to-extract studies. It's an open problem. I've approached it different ways: one is JavaScript and HTML, so that when someone submits the form, the responses get pushed into a Google spreadsheet. Another approach I've used is a form-fillable PDF: they submit the PDF to me, and I pull out whatever they entered into the form fields. All of this is really nice for coding information, but when it comes to extracting numerical data there's really no clear framework. I think most people just populate an Excel sheet. But if you understand the process, you could create a cool tool to pull this off efficiently.

So I'm going to end with a bunch of aspirational tools that I feel could be built, in R or otherwise, to facilitate the extraction and coding of studies, which is underdeveloped in R. For coding and extraction right now, you could aspirationally have a Shiny app.
The Shiny app should be generalized: someone provides inputs for how many cells, or how much detailed information, they want to code, with drop-downs and open numerical cells. That gets complicated to pull together, especially because every research project and every study has its own problem to crack, but I think it could be generalized. It would also be a really cool way to validate whatever gets inputted, because you could have conditionals on cells that communicate: hey, this is meant to be a count, and you've entered a fraction. Python could be pulled into this too, but then you're using Python with R as a back end; what you really want is an interface, so you'd end up using some GUI package to create a front end for Python (Tkinter or whatever). That becomes a much harder task to pull together, but totally doable.

There are also Google spreadsheets; I've used them before, and I've put an example up here. They let you set up something with a form structure: a lot of people can access the website, plug in their responses to studies, and it automatically populates a spreadsheet. That's also good. But here's the hiccup with form-fillable PDFs for recording and extracting: you have no control on the participant's end over what software they use to render the PDF and how they save it. Right now most browsers have their own PDF viewer; people aren't using a dedicated tool like Adobe. That means that once they fill out a form and save it, the internal structure of the inputs varies a lot, and that causes real headaches; sometimes the saved PDF is just an image, and there's no easy way to extract what's been filled in. It gets ugly. You could use a canned package, nothing wrong with that, but here's the trade-off.
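The "this is meant to be a count and you've entered a fraction" idea is just per-cell validation run on submission. Here's a minimal sketch of that kind of check; the field names and rules are hypothetical, and a real extraction form would need many more rules and friendlier messages:

```python
# Sketch of per-cell validation a generalized extraction form could run
# when a participant submits a row of coded data.

RULES = {
    "sample_size": lambda v: float(v).is_integer() and float(v) > 0,  # a count
    "mean":        lambda v: True,            # any number is acceptable
    "std_dev":     lambda v: float(v) >= 0,   # cannot be negative
}

def validate_row(row):
    """Return a list of human-readable problems with one submitted row."""
    problems = []
    for field, rule in RULES.items():
        value = row.get(field)
        if value is None:
            problems.append(f"{field}: missing")
            continue
        try:
            ok = rule(value)
        except (TypeError, ValueError):
            ok = False   # non-numeric junk fails the rule too
        if not ok:
            problems.append(f"{field}: invalid value {value!r}")
    return problems

# A fractional sample size gets flagged immediately:
print(validate_row({"sample_size": 24.5, "mean": 3.1, "std_dev": 0.4}))
```

Catching these errors at input time, rather than when you aggregate everyone's spreadsheets, is exactly the consistency insurance the talk keeps coming back to.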
A canned package lets you collaborate and distribute effort, but you lose some flexibility; you're really dependent on what it offers. If you have some technical understanding, I'm telling you right now there are opportunities to develop something cool. HTML and JavaScript are what all those web platforms are built on, and they can be rendered in R too; you don't necessarily need Shiny. You can render HTML, JavaScript, and CSS directly in R, and I feel that's an opportunity as well. And finally, because I deal with a lot of students, and we have our own dashboards and hubs for teaching classes, I've repurposed those educational systems to do systematic reviews. There are many opportunities out there. But in my case, every solution I've used so far has been ad hoc to a specific project; nothing other than metagear has been generalizable across multiple subjects. The person who solves that, who creates a generalized solution for extraction and coding: you're set. This is it. You'll help a ton of people and facilitate the participation of many, many more, and to me that's a wonderful thing, because it's a training opportunity.

All right, I think I talked way longer than I should have, so I'm going to leave it at that. I don't know if you'll walk away from this an expert on how to manage a team, but I hope I've provided some thoughtful discussion of how difficult it is and what to anticipate if you want to do it. And just flat out, to put this in your back pocket: if you're interested in farming out, micro-tasking, or crowdsourcing these repeated tasks, you are opening yourself up to a world of hurt if you do not spend the time to train people. Even though your goal is to increase the efficiency of a research synthesis project,
you can slow yourself down quite a bit, because you have to develop all these resources to make sure everyone has what they need to succeed. All right, I'm out of breath.