Hello, everyone, and welcome to the Center for Open Science webinar focused on an exciting initiative, the Special Education Research Accelerator. I'm Sondra Stegenga, faculty at the University of Utah in the Department of Special Education, and I'll be one of your two co-hosts today. We're very excited to hear more about this innovative initiative from its directors as well as a key site collaborator on the project. This session is meant to be interactive: the panel will present content, but we encourage participation in the chat and the Q&A, and there will be time for discussion and questions at the end. We hope this time together fosters inspiration, collaboration, and discussion as we continue working to advance research practices and open science. Please introduce yourself in the chat and let us know where you're located, and, if you don't mind sharing, tell us your field of study or focus, whether that's special education, biology, or something else. With that, I'd like to hand it off to my co-host, Dr. Sharlene Kiuhara, to introduce our presenters.

Yes, hi. I would like to introduce our esteemed colleagues. Bryan Cook is a professor of special education at the University of Virginia. He is the past president of the Council for Exceptional Children's Division for Research, a previous co-editor of the journal Behavioral Disorders, and co-director of the Special Education Research Accelerator. When he has time, he drives around the country in his van with his wife and two dogs. Bill Therrien is a professor of special education at the University of Virginia. He is co-editor of Exceptional Children and co-director of the Special Education Research Accelerator. I'm Sharlene Kiuhara, associate professor of special education at the University of Utah, where I conduct academic intervention research targeting students with high-incidence disabilities.
And I have participated as a research partner in two SERA projects, and I'll be co-hosting with Sondra.

Thank you, Sharlene and Sondra. We've got a fair number of slides, and we may skip some of them depending on time. As we go, if you have questions or thoughts to share, please do so; we'll try to respond and have some discussion along the way. We want to give you a little background on the genesis of the Special Education Research Accelerator. We'll talk about crowdsourced research and then go into specifics about the Special Education Research Accelerator, which we call SERA. We'll take a visit to our website, which may be our grandest accomplishment yet, and we'll overview some current projects: one we're just finishing up and two that are in their early stages. Then we'll finish by talking about the Alathea Society, a broader venture under which we're hoping to house SERA. Bill, do you want to take the next little bit?

Sure. We'll find out exactly who's taking what as we go. So, thinking about crowdsourced research compared to what's considered traditional research: most of the time, traditional research is what we'd call small science. Lots of people are answering lots of different questions, either on their own or in very small research teams, so the field is broad but shallow. It explores many different questions, researchers are rewarded for novel findings, and there tends to be little collaboration across research groups. That's true across a wide variety of fields, but particularly within education and our subdiscipline of special education. So what are some of the problems with this traditional approach, particularly in education?
One is that it results in underpowered studies with small sample sizes. This is problematic for a variety of reasons, probably the biggest being that meaningful effects may go undetected. You can also get spurious results: large effect sizes that, if the studies were replicated with larger samples, would not hold up. This leads research consumers to believe interventions are effective or ineffective when the opposite may be true. That's a concern across all applied science, but in special education we're working with individuals who are already behind in academic and social skills, so it's particularly problematic; we have a shortened window of time to close those gaps. There's also a lack of transparency and openness in reporting study procedures and results. You see echoes of this in many different fields, and we've certainly seen them in special education. Publication bias is documented across many fields, including our own. So is selective outcome reporting: non-significant findings are roughly 30% more likely than significant findings to be excluded from published reports. And then, of course, there are the practices of p-hacking and HARKing (hypothesizing after results are known), where researchers, consciously or unconsciously, go looking for novel, significant findings.

Sure, we'll cut each other off; that's ground rule number one. Bill and I have been doing this for a while now, and as we grow longer in the tooth, it gives us an opportunity to reflect on our own research as well as the research that we read, edit, and interact with in the field. And we do a lot of very good research in the field.
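To make the underpowered-studies point above concrete, here is a minimal sketch (our own illustration, not from the webinar) of the standard two-sample power calculation under a normal approximation. The effect size and alpha values are assumed for illustration only.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison
    detecting standardized effect size d (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_a + z_b) / d) ** 2)

# A moderate intervention effect (d = 0.4, an assumed value) needs
# roughly 100 students per condition for 80% power:
print(n_per_group(0.4))   # ~99 per group
```

With only a dozen or so students per condition, the kind of sample many small-team special education studies can recruit, only very large true effects are reliably detectable, which is exactly how meaningful effects go undetected.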
But the more we thought about this: our entrée, when Bill and I first started working together, was replication. In an indirect way, I guess it's a good thing, though I was in Hawaii at the time and now I'm in Virginia, so it made me move out of paradise in some ways. We started working together around replication and realized how seldom replication was done. And when we dug into it, we realized how haphazard it was when it was done; very often it was replication in name only. As we started to wade into the literature emerging out of psychology a decade or so ago, we began to ask how this plays out in education and what its implications are. I was just thinking about that as Bill was giving some of these specifics that we're seeing and are concerned about. It's both a level of concern and a belief that we can do better, which has important implications for the teachers and students we're trying to inform and improve services for with our research.

Absolutely, Bryan. As he said, replications are really scarce. This is a special issue that Bryan and I were involved in, published in 2016, that looked at replication within our field, following other fields' reviews of their literatures. We found a very small number of studies that were actually replication studies in special education. We're right now finishing a review to update that, and the number might have ticked up just a hair, but barely: still well below 1% of published articles in the field are replication studies. And this is, I think, often a result of the traditional research approach, where researchers are reinforced for doing novel studies, rather than saying, "Hey, here's a potentially effective intervention,"
"why don't we work as a field to modify it and see who it's effective for and under what conditions?" Instead we say, "Let's all go out and create our own reading intervention and work with the group of children we're able to find." So we're not seeing replication, although, honestly, if you look at that 2016 special issue, there are several papers where we make the argument that while we don't call them replications, there are replicative elements in a lot of our work. We are certainly a field that builds on what came before, but not necessarily within a traditional replication framework.

There's also really limited diversity and generalizability in the funded work in our field. The two main funding agencies for our work are the National Center for Special Education Research (NCSER), which sits under IES in the US Department of Education, and the Office of Special Education Programs. This first bullet point really surprised and shocked me when you look at the work funded by NCSER over roughly a five-year period. IES has different goals, and Goal 3 projects are large-scale randomized controlled trials. That's what we looked at: NCSER funded 38 of these, and the grants range anywhere from three to five million dollars each, so they're quite an investment for the US. Of those 38 grants, only 22 institutions received awards, with 61% of the awards going to just eight institutions. So we're seeing a real narrowing of who receives the resources to conduct this work.

There's also a real lack of diversity in the study samples. Sinclair conducted a review of study populations, published in Exceptional Children in 2018, and found that the diversity of the sample, in every sense of that word, often wasn't even reported.
And when it was reported, the samples tended not to be very diverse in every sense of that word. IES-funded work broadly, so NCSER and NCER Goal 3 and 4 studies, is disproportionately conducted in large schools and school districts, in urban and suburban areas, and in the coastal regions. So even the really high-quality work that we're funding as a government tends to be limited in generalizability: the samples share similar demographics, the studies happen in the same parts of the country, and the funds go to the same researchers.

And this seems like a real shame when you think about the capacity of the field of special education, and education in general, to conduct research. Think about the field not as individuals each doing our own novel work, but as a workforce that can be harnessed to answer important questions about serving individuals with disabilities in our schools. It's pretty large: well over 1,500 special education faculty at R1 and R2 institutions, the institutions where faculty are expected to engage in a high level of research, and where the vast majority of institutions require and fund research. We have federal funds, sometimes state funds, and obviously foundations as well, but public and private universities themselves spend significant amounts to fund research at their institutions. So we have this large faculty base we could harness just at the top research institutions. And when you think even broader, a significant number of people are receiving doctorates in education: one NSF dataset shows over 2,500 doctorates in education awarded in 2018 alone.
So we have a huge workforce that could be answering these questions if we think about how we might conduct research differently from the traditional approach, which in some ways is just everyone exploring whatever they want. Some things get attention, and the often-discussed competitive nature of academia breeds some real workhorses. But there's real inefficiency and a lack of coordination in how we conduct research when every team, or perhaps every person, goes out and does their own thing. There's a lot of redundancy, and there are a lot of gaps in how we conduct research; we'll talk a little about that later. Do you want to take the next couple of slides, Bryan?

Yeah, sure. We'll move quickly here; we can always go back and fill in more, or you can email us, because within an hour we've got too much content to cover. So, quickly: could crowdsourcing be an alternative that addresses some of these issues? We really like the Uhlmann et al. paper and some of its framing: the idea of combining resources across researchers to conduct studies that just couldn't be accomplished by individual researchers or teams. We like that broad definition. Sometimes people ask what separates having a good group of colleagues from different places doing research from crowdsourcing. I don't know exactly where the line is between crowdsourcing and simply getting a group of people together, but at its fullest, crowdsourcing brings all sorts of people together through different mechanisms. The key is having multiple individuals and multiple research teams working together, with the design of the study incorporating that, enabling research that couldn't be done by individual teams and researchers.
Yeah, and this probably doesn't fit the formal definition of crowdsourcing, but one thing I'd offer that sets crowdsourcing apart from just getting together with a group of eight researchers is breaking out of your existing social networks, like the one we have at the University of Virginia, where everyone who went to the University of Kansas is now working on the same kind of research portfolio. Ideally, crowdsourcing is broader than that, with opportunities to break outside of the academic family trees that we typically seem to be stuck in as researchers.

That's a good point, Bill. So, instead of a vertical distribution of research, where one individual or one group does all the different steps, there's a horizontal distribution of ownership, resources, and expertise, where lots of different teams or individuals work at any given level of the research process. The emphasis shifts from trying to get as much research done as possible to getting fewer, larger, ideally well-planned and coordinated studies done. One of the things that attracted us to this is that we think it really facilitates systematic replication: both conducting replications of previous studies and doing concurrent replications across multiple teams. We've also become very interested in the possibility of crowdsourcing conceptual replications as a way to examine effect heterogeneity, and in thinking of effect heterogeneity not as a bug we have to worry about ("oh no, the effects have been shown to vary") but as a feature: in education, we're going to have effect heterogeneity across many different variables.
So we need to start thinking outside the box: rather than pursuing the effect size, recognize that there is a distribution of heterogeneous effects, and really try to examine and explain some of that heterogeneity. One of the things that also attracted us was the possibility of democratizing research by bringing large and diverse groups of both researchers and participants into any given project.

Yeah, and when people think about crowdsourcing, they often think about crowdsourcing data collection. But in a true crowdsourcing model, and where I think democratization comes in, it runs through the whole enterprise: deciding what the important questions are, where the study should be conducted, the whole process, not just data collection. That's the true democratization of crowdsourcing, if it's done well.

This is one of those quotes that we really like. We've all been in the situation of saying, "We'd like to do that, but we don't have the resources for it." One way to think about crowdsourcing is: what do we really want to do, and what's the best way to address it? Now let's figure out how to crowdsource the resources to make it happen. And there's that idea again of broadening the focus from trying to find the result to looking at a distribution of results across different factors.

Yeah, go ahead, Bill. Go ahead. Oh, I think we'll just skim over this. We just want to be clear: this stuff wasn't our idea. We crowdsourced this idea, to some extent. When I came to the University of Virginia in 2018, we started talking with the folks at the Center for Open Science about different things. We were at a reception talking with David Mellor and Brian Nosek, who started telling us about the Psychological Science Accelerator and the Many Labs studies being done, which I didn't know anything about.
And the more they told us, the more Bill and I said afterwards: we have to do this. This is exactly what our field needs. I think it's a great fit for education broadly, but for special education in particular, where we often study low-incidence populations and it's very difficult to get even an adequately powered study in any one site or region. And fortunately, some people at NCSER agreed with that sentiment. So this is modeled largely after the Psychological Science Accelerator, though we're doing it on a smaller scale. There are some differences in education, which is very applied: intervention work is often intensive, and issues of fidelity, for example, really come into play in ways that may not apply in some of those other very large-scale studies. Part of what we're trying to do is take this model and see how it applies in education, and special education specifically.

Absolutely. And if you're not familiar with the Many Labs studies, I highly recommend that you pull them and read them. It's incredible work, with large numbers of researchers across the globe; the Many Labs programs, for instance, have had up to 15,000 total participants. It's work that an individual researcher, or even a research team, would never be able to accomplish over a lifetime career without these kinds of accelerators and approaches. So, as Bryan said, definitely props to the Psychological Science Accelerator; it was certainly a jaw-dropping moment when we heard about the work they were doing and the idea of bringing it into education, and special education specifically. As Bryan said, we were fortunate enough to get an unsolicited grant to fund the Special Education Research Accelerator. Since then, we've gotten another IES grant, which I realize I neglected to put the number on here, and also a National Science Foundation grant as well.
We were trying to keep that second one secret for a while. Not much of a secret now. Yeah. So SERA is an online platform; we're going to go there, take a look at it briefly, and show you around a little. And, I guess you could say, it's a conceptual replication of the Psychological Science Accelerator. I think that's the right way to put it. Bryan wrote that line, and I agree with it. Good, I snuck that one in on you. It was very good, Bryan. A little replication humor goes a long way with me, so thank you for that.

All right, I'm going to stop sharing my screen for a second so that I can pull up the accelerator site and show it to you all, and see if I can get this to work. Did it pop up? Yep. All right. So this is the front page of the accelerator. We have a newsroom and a little bit of information, but the main thing I want to focus on is the studies section. For folks that are involved in studies with us, that's the main avenue where they'll find studies. Is our website not working, Bryan? You've never had trouble with it as far as I know; probably just your internet connection. Well, I'm still on Zoom, so. Yeah. It'll just save us some time if it doesn't work. All right, can you all see that? Yep. Okay, good. So this is the SERA studies area, and we'll talk about these. We have these two IES grants; I want to focus on the SERA pilot studies, and Bryan is going to talk more specifically about this study. What you see on this front page is what we provide to our research partners. If you think about the research process, our study dashboard follows that process, from IRB protocol and pre-registration to data collection protocol and so on.
So we try to make it as straightforward as possible, embedding YouTube videos explaining what the study is, and then walking through the whole process, from experimental design and randomization to data collection protocol to partner site tracking. If you think about how you might task-analyze conducting a study, that's how we tried to put it together. On the back end, and this is probably the area we need to work on the most, is the data collection activity. We're aggregating data from a variety of different partners across the country; we need the data checked to make sure they're correct, and we need them pulled into various software. That's the area where we're using a lot of external vendors, which we hope will eventually be more fully integrated into the website itself.

One thing that I really like, related to this idea of crowdsourcing and, even more, replication, is that while we're conducting the study, until we analyze the results, all of this material is only available to the individuals actually implementing the intervention in the study. But after they're done, and this one is getting close, we'll go ahead and publish all of it to the field. So folks can engage with it, see exactly what we did, and replicate the work. Anything specific in here you'd want me to show, Bryan? No, I don't think so; we can take requests from the audience. You could show the map with the research partner description; it shows the link in case anyone's interested in inquiring. Yeah, so here, under research partners, there's an opportunity for folks to contact us.
Originally, because SERA was an IES-funded project, we were keeping it within the States, but, and we'll talk about this a little later, we're hoping to go international, just like the Psychological Science Accelerator, so we can start conducting research across the globe. This map shows you where we are within the States; there are only a couple of states we're missing. So we've got a significant number of research partners available to conduct crowdsourced research with us and the rest of SERA.

Nice, Bill, thanks. Let me see if I can get this to show back up again. Are you seeing the PowerPoint? Yep. All right, that's the worst part over; the biggest nightmare for me with Zoom is switching between screens, so I feel successful at this point. Yeah, it's all downhill from here. Thank you.

All right, so we want to talk a little more in depth about the studies. Bryan, do you want to go ahead and talk about that first one, the pilot study? That was the part of the site I was showing. Right. This was the very first project that we undertook, and we replicated a study published in the 90s. We purposefully tried to focus on something we thought was relevant and meaningful, but also very straightforward. It was an RCT, an intervention study, but the intervention was very constrained and could be delivered individually; it wasn't something that would take weeks or months or had to be done in the context of a whole class. The big thing that came out of this, and maybe long term it was good for us, is that we got this funded, I forget exactly when, probably 2019. We got everything together, and just as people were about to start recruiting schools, the pandemic hit. It really threw us sideways, and through a couple of no-cost extensions we ended up going online and doing everything online.
The intervention targeted retention of science facts for upper-elementary kids with high-incidence disabilities, using what Scruggs and colleagues called elaborative interrogation. In other words, you present the student with an animal fact and ask, "Why do you think that is?" and work through a series of prompts about why that fact would be true. We looked at whether that results in higher retention, both immediately and over a longer period.

We're done with that study now. We can't decide whether we're proud; we are very proud, but on the other hand we're also a little disappointed. We envisioned this being much larger, but through the pandemic it was a real struggle to get into schools in 2021 when we were doing this. We tried to get a little creative, and ultimately this was just a pilot, and it really was just a pilot. We had a lot of initial enthusiasm and were very excited: wow, this is going to be a big splash for our first study. But research partners had to drop out, because even schools they had connections with wouldn't let them in to do the research, and when they could get in, it was often with just a few students. So this was a true pilot, and results are pending. I'm very interested to see them. Our methodologist, Vivian Wong, is outstanding. She is preparing a multiverse analysis to report results under different assumptions and using different approaches. She is writing the analysis code independently of the data cleaning, with different people doing each, so the two never meet or influence each other in any way. So we haven't run preliminary analyses yet; it's all going to be done at once. We also have a colleague who is doing natural language processing, using machine learning to look at fidelity, which we hope to develop to the point of running in almost real time, as a way to assess fidelity at a distance, which is a real challenge for us to think about.
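The multiverse idea described above can be sketched in a few lines: run the same treatment-control comparison under every combination of defensible analytic choices and report the whole distribution of estimates rather than one number. This is our own toy illustration with fabricated data and two assumed analytic choices (outlier trimming and gain scores vs. raw posttest), not the study's actual analysis.

```python
from itertools import product
from statistics import mean

# Toy rows: (treatment_flag, pretest, posttest) -- fabricated for illustration
data = [(1, 10, 18), (1, 12, 20), (1, 9, 30), (0, 11, 13),
        (0, 10, 12), (0, 12, 11), (1, 11, 19), (0, 8, 10)]

def estimate(rows, drop_outliers, gain_scores):
    """Treatment-control difference under one analytic specification."""
    if drop_outliers:                       # choice 1: trim extreme posttests
        rows = [r for r in rows if r[2] < 25]
    outcome = (lambda r: r[2] - r[1]) if gain_scores else (lambda r: r[2])
    treat = [outcome(r) for r in rows if r[0] == 1]
    ctrl = [outcome(r) for r in rows if r[0] == 0]
    return mean(treat) - mean(ctrl)

# The "multiverse": every combination of the analytic choices
multiverse = {
    (drop, gain): estimate(data, drop, gain)
    for drop, gain in product([False, True], repeat=2)
}
for spec, effect in sorted(multiverse.items()):
    print(spec, round(effect, 2))
```

In a real multiverse analysis the specifications multiply quickly (missing-data handling, covariate sets, estimators), and the honest summary is the spread of estimates across all of them.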
And ultimately, as Bill said, we're going to be sharing not only the materials from the study but the data and the results as well.

The natural language processing piece is, I think, one of the most unique things that came out of this. We had an intervention protocol and a control protocol, so we had exemplar scripts. As our research partners implemented the intervention with students, the audio recordings were automatically transcribed, and machine learning was used to measure the correlation between what we thought they should have done and what they actually did. It's fascinating to think of treatment fidelity not just as a checklist but as a correlation with an exemplar script. Eventually, we hope SERA will be in hundreds of schools with hundreds of research partners, and as the data come in, we can transcribe the sessions in real time and see how close they are to the protocol. Even more interesting, for somebody who does a lot of research: you go in to assess treatment fidelity, you check the boxes, and you get 100% fidelity, but when you look at different teachers or different implementers, you know there's a difference; there are parts that you aren't catching.
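The transcript-to-exemplar comparison can be sketched with a simple bag-of-words cosine similarity. This is our own minimal illustration of the general idea only; the scripts below are invented, and the actual SERA pipeline uses automated transcription and machine learning rather than this toy measure. A session transcript that tracks the control exemplar more closely than the treatment exemplar is the kind of mismatch described as being caught automatically.

```python
import math
import re
from collections import Counter

def similarity(a, b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    va, vb = (Counter(re.findall(r"[a-z']+", t.lower())) for t in (a, b))
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Invented exemplar scripts and one session transcript:
treatment = "why do you think the camel stores fat in its hump"
control = "the camel stores fat in its hump please repeat the fact"
session = "okay why do you think the camel stores fat in its hump exactly"

sim_t = similarity(session, treatment)
sim_c = similarity(session, control)
# Flag the session for review if it tracks the control script more closely:
flagged = sim_c > sim_t
print(round(sim_t, 2), round(sim_c, 2), flagged)
```

Here the session resembles the treatment script more than the control script, so it is not flagged; a session delivered from the wrong protocol would show the reverse pattern and surface for review.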
You can also run that process in reverse, in an exploratory fashion: see who had the greatest effects, then look for correlations or similarities between their scripts, and backward-engineer those softer aspects of the intervention. So we think there's a lot of potential in natural language processing for crowdsourced research, and for research in general. And we did catch someone: before the person told us, the system flagged that one implementer who was supposed to deliver the treatment intervention had delivered the control intervention instead. Their sessions didn't match the protocol we were expecting; the transcript matched the control rather than the treatment, and that was caught automatically. So you can see how, if we had drift, or someone implementing the wrong intervention, we could catch that pretty quickly, even with a large group of researchers, which we think is exciting.

When we realized that Sharlene was going to be one of the hosts, we thought we'd take advantage of that. She was involved in the pilot study and is involved in one of the other studies we're doing now, so we asked her, and she was kind enough, on very short notice, to put together a few comments on her experience as a research partner on these studies, making sure to include some positives. That's something Bill and I can only conjecture about; we don't know what it's like on the other side of crowdsourced studies.

Right, absolutely, and it's such a privilege to be here, not only co-hosting with Sondra but presenting with both of you, Bryan and Bill.
As a research partner, as Bryan mentioned, I can attest to the challenges we faced in the initial pilot study for SERA. From a research partner perspective, getting into the schools and recruiting participants was very difficult: the pandemic happened, and at that time, in my area, research was pretty much shut down into the following year. So that was an obstacle we faced, but there are several positive takeaways I want to emphasize, and we'll get to some challenges on the next slide.

As an intervention developer and researcher myself, I think SERA provides a critical platform for furthering our evidence base, as Bryan and Bill described earlier, especially in special education, where we're dealing with smaller populations of students with specific disabilities or abilities. As research partners, we're at an advantage in the sense that we have built relationships with our local school districts and LEAs, so presenting them with an opportunity to contribute to a large-scale national project is appealing to them on many levels, primarily because of the benefits of crowdsourcing that were talked about earlier. The impact on schools and teachers is much smaller in terms of time spent in schools and effort in recruiting. For example, if I were to carry out a larger research project locally, it would require much more of my time in schools, working with teachers and districts, recruiting participants, obtaining consent, and all of that. Crowdsourcing provides a way to really streamline that process, which is much appreciated when you're conducting intervention research in schools. I also think this type of approach to research cultivates a shared view of collaboration.
And SERA presents an opportunity to work with researchers nationwide, and that democratization piece, I think, is so critical; we need to be incorporating more of that humanity in the work that we do. And finally, I think the most instrumental takeaway for me as a research partner is the support, resources, and infrastructure that Bill and Brian and their colleagues have built: the website, and the many ways to connect with them seamlessly if questions or problems arise. It has just been superb, and I think that is essential to being a research partner and to carrying out this broader vision of conducting replication research. So, Bill, can we go to the next slide? Yes. So, challenges. There have only been a few, and I think they're mainly institutional, involving institutional review board, or IRB, requirements at our respective institutions. I know these vary across states in terms of what we need to do to ensure that, for me being in Utah, I'm able to go into the schools and be part of this research project even though the main umbrella IRB is coming out of, for example, the University of Virginia. So there have been some interesting things, and I think it's largely on my shoulders to thoroughly understand what those policies are and how Utah, for example, is able to work with other institutions as far as that goes. Another part is really making sure you understand the scope and the aims of the project, because if you're collecting fidelity data or implementing an intervention, that understanding needs to be clear. And then, again, I think this is critical: relying on the continued support and resources from SERA.
Because issues, challenges, obstacles, and problems may arise as you're conducting the study out in the field, and they've been just phenomenal in working with us to make sure we're implementing the study as envisioned. So, all in all, it's been a wonderful experience, and I'm looking forward to continuing this work, especially on the next project, a funded study that Bill is PI on. So thank you. Yeah, it's been awesome. Thanks, Char. All right, I'll go quickly then. SERA 1, as we call it, was the pilot study; SERA 2 is another unsolicited grant from NCSER. The focus of SERA 2 wasn't our original plan; we had imagined just more scaling up, but the big focus became exploring generalization boundaries and starting to dig into heterogeneity of effects. And we wanted to bring crowdsourcing into not just the data collection but also the conceptualization and design of the research studies. We know in education that intervention effects vary across moderators, but we seldom design entire series of studies to systematically get at the generalization boundaries across those moderating variables. Because of that, our knowledge accumulation is typically incomplete and inefficient. So we again picked an intervention that we thought was straightforward, and that we think is relevant and important, and we're designing, and then piloting, a series of conceptual replication studies to explore the generalization boundaries of repeated reading, specifically for the outcome of reading fluency for students with learning disabilities. And I've got some graphics here to try to get our heads around what we're doing in SERA 2.
So, as Vivian Wong and all of us like to talk about, we have a lot of really good studies in our field. But maybe those can be thought of as bricks of research evidence: we have some really strong bricks, but sometimes we don't integrate them into a wall of knowledge that can be actionable for educators, that can answer questions about what works, for whom, in what settings, and for what outcomes, not just whether something worked in this one study. To do that, we need to think about creating blueprints so that these bricks of evidence can be assembled into walls of knowledge. I should just leave it there, because that's a nice image; how to actually do it is a little messier. So this is just an example from our proposal. There are lots of different potential moderator variables; they could be dichotomous or have multiple levels. Imagine there are multiple sites or regions; there are different types of classrooms in which we could implement repeated reading; there could be different types of personnel, teachers, teaching assistants, parents. This is just hypothetical, a heuristic to display what we're talking about. But let's say there are two main types of personnel, we're interested in kids with and without reading disabilities, and maybe we think there's an effect of time, so we're looking at different cohorts. We could then theoretically imagine a grid of effect sizes across all the different combinations of the effect moderators we've prioritized or think are important. And this is the part where I'll beg off and say you need to email Vivian if you want to talk details, but she has what I consider methodological voodoo: many different ways to be strategic about selecting different studies. This is a grid of 64 cells.
And we're not going to be able to bite off all 64 in any given project. This example, which I think is probably awfully optimistic, assumes we could bite off 16. But if we're going to do 16, let's be systematic and make sure different combinations of variables are represented, so we're not left with variables we know nothing about. This is one configuration of that. Then we would go ahead and do those studies; blue is positive effects, the size of the square reflects the magnitude, and this is all hypothetical, just for illustrative purposes, but we would have actual empirically derived effects here. And from those, because they were strategically selected, we can then estimate the effect sizes in all of the remaining cells in the grid. We think the process of identifying these variables and laying out this grid with experts in a particular area would be beneficial for the field even if we don't do all of the studies, because we could then estimate effect sizes, and doctoral students or colleagues interested in the area, rather than trying to justify a study on their own, could be guided by it: we don't have anything in this particular cell, so I should do a study on that. Over time we get better and more precise empirically derived estimates of all these cells, which better enables us to estimate the effects in other areas. And hopefully you start to see patterns, where you get a surface of effects and start to understand effect heterogeneity: that effect sizes are dampened or heightened depending on the presence of these different moderator variables.
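The grid logic described here can be sketched in code. What follows is a purely hypothetical illustration: the moderator names, levels, effect sizes, selection rule, and estimation model are all invented for this sketch and are not SERA 2's actual design, which would use far more principled methods. The idea is simply to enumerate the 64 moderator combinations, strategically study a balanced subset of 16, and use a main-effects model to estimate the remaining cells.

```python
from itertools import product
from statistics import mean

# Hypothetical moderators (4 * 2 * 2 * 2 * 2 = 64 cells, matching the
# 64-cell grid from the talk). Names and levels are invented.
MODERATORS = {
    "region":     ["NE", "SE", "MW", "W"],
    "classroom":  ["gen_ed", "resource"],
    "personnel":  ["teacher", "assistant"],
    "disability": ["RD", "no_RD"],
    "cohort":     ["grade4", "grade5"],
}
names = list(MODERATORS)
grid = [dict(zip(names, combo)) for combo in product(*MODERATORS.values())]

# "Strategically" select 16 of the 64 cells: keep cells whose level
# indices sum to a multiple of 4, which spreads every level of every
# moderator across the subset.
def indices(cell):
    return [MODERATORS[n].index(cell[n]) for n in names]

studied = [c for c in grid if sum(indices(c)) % 4 == 0]

def fake_effect(cell):
    """Stand-in for an empirically derived effect size (purely additive)."""
    d = 0.40
    d += 0.15 if cell["disability"] == "RD" else 0.0
    d += 0.10 if cell["personnel"] == "teacher" else 0.0
    d -= 0.05 if cell["cohort"] == "grade5" else 0.0
    return d

observed = {tuple(c.values()): fake_effect(c) for c in studied}

# Main-effects model: estimate every cell (studied or not) as the grand
# mean of observed effects plus a deviation for each moderator level.
grand = mean(observed.values())

def level_dev(name, level):
    vals = [d for key, d in observed.items()
            if dict(zip(names, key))[name] == level]
    return mean(vals) - grand if vals else 0.0

def estimate(cell):
    return grand + sum(level_dev(n, cell[n]) for n in names)

estimates = {tuple(c.values()): round(estimate(c), 3) for c in grid}
```

The payoff of the strategic selection shows up in the last step: because each moderator level appears in several studied cells, every unstudied cell can be assigned a plausible estimate, which is exactly the "fill in the rest of the grid" move described above.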
So that's our big direction for SERA 2, but again, we're just piloting this. Right now we're at the very first step: we have a core consensus panel of six diverse experts in repeated reading who are coming to Charlottesville next month, and we're going to sit down and take a first stab at identifying the moderator variables and developing an effect grid and an integrated research design to explore it. Then we'll pilot some of those studies, and we plan to apply for a grant in the IES replication competition to do it on a larger scale. I think that's it for SERA 2. Bill? Yeah. The other one I'll talk briefly about is our National Science Foundation grant, which Char is also involved in. This is an observational grant. The two grants Brian talked about are really development grants for the Special Education Research Accelerator; this is the first grant where we're utilizing it solely as a platform. Most of what we know about education in the United States, particularly for students with disabilities, is kind of anecdotal, or it's based on observational work that's not nationally representative. So we thought SERA would be a good vehicle for conducting observational studies throughout the country. This was through ECR, which is kind of NSF's foundational research competition. We're conducting a large-scale survey and observational study looking at science instruction for students with learning disabilities and autism in fourth and fifth grade, and at how these variables affect science achievement and engagement for these students.
When we decided to put this grant together, this is where I feel SERA really has a lot of power: we wanted a lot of research partners, at least one in each US census division. We put this out to SERA and had robust interest in a very short period of time: over 45 folks expressed interest in less than a day, which shows there's a desire in the field to engage in this type of work. We selected 10; we're in all the states you see here, and actually a couple more since I wrote this slide, including Char there in the state of Utah. Then we used the Generalizer, if you're familiar with that site, to select schools, so we have a generalizable sample of schools where we'll be collecting this data. The ultimate goal is an actually representative sample of observations collected throughout the country. We're also very open access with this. If you're familiar with Databrary: we're audio recording all of these sessions, and we're going to upload them to Databrary along with all the other information we collect. So not only do we have our own hypotheses and research questions we want to answer, but we're going to have thousands upon thousands of hours of audio recordings and other data available for folks to use in answering other questions. And that's another thing, when you think about open science and crowdsourcing: it's not the paper, it's the data. The data you generate is the most valuable thing, and you need to make that available as well. I'm going to skip ahead because I want to make sure we get to some questions.
So we have SERA, and we were thinking about it with folks we know, but our funder also kind of said: hey, it's great that you have this Special Education Research Accelerator, but is it really crowdsourced research if it's all done at the University of Virginia and you're making all the decisions? And of course not; if it's really going to be democratized, the research needs to be based in the field. So we've begun to initiate the development of a new 501(c)(3) organization, the Aletheia Society; aletheia is a Greek word meaning uncovering, or truth. So we formed this new nonprofit, and again, everything we're putting together we've stolen from other people: if you know the Society for the Improvement of Psychological Science, an amazing organization, we're kind of modeling Aletheia on it. SERA will fall under the auspices of this new nonprofit, which will also be a research organization for special educators and those interested in working with individuals with disabilities and those at risk. We're going to hold conferences and a series of action events. And, relatively new, we're going to have a new open-access journal that UVA has agreed to house for us, called Research in Special Education, or RISE. Do you want to add anything about that, Brian? Only that we have a little bit of funding and support from the university library, so there are no APCs, no article processing charges. Everything's entirely open access, and we're really excited about it; we're hoping we get some people who recognize the benefits of open access. We're not going to have a journal impact factor to start with, but I think we're going to have a lot of other benefits, and we're going to need some people to submit.
And we'll do some interesting things to hopefully entice people to put some really good research in the journal; we're hoping to launch it sometime next year. So that's in the works too; look for more on that. The last couple of slides I think we can mostly skip, but when we reflect on it, we think: wow, this was just an idea five years ago, and we're so pleased with the collaborations we've developed and that we've actually been able to get some funding to make all of this a reality. So we're very pleased with where we're at in many ways. But we also recognize that we've got a long way to go and that we've hit a few bumps in the road. We're continuing to explore different options for IRB, and those are not always pleasant trips into the weeds. It's been a challenge to get into schools. We're so pleased with the enthusiasm in the field, having over 360 research partners, but that relates to the last point: we need to create enough time for ourselves to organize a more distributed organizational framework. There's only so much time in the day, and we need other people with talents above and beyond ours; that's the notion of crowdsourcing, and we need to bring it to bear to get all of this to the next level. I think that's where we're at: we have a really nice start, but we need other people who are motivated and have backgrounds and interest in this. With that in mind, there's a link here, and you can find it on the website; email us if you have trouble finding it. We want to continue to grow, and we're getting more interest internationally. As Bill said, we have three federally funded projects, so we've kept it just US-based right now, but we're going to open that up.
There might be certain projects based in one region or another, but there are so many benefits to broadening our connections. We'll continue to pursue funded projects, and we also want to think about unfunded projects we can do on the cheap; with the power of crowdsourcing we think we can do some really interesting things, and we have a few ideas along those lines. And then the other thing, which I've really already covered, is developing a governance structure that goes beyond our immediate team, because we are limited by the time constraints that a small number of people have, which again is exactly the point of crowdsourcing: to utilize the crowd so that we can expand and do better work. As for Aletheia, we're putting together an initial website; it's already a 501(c)(3), and we'll be officially launching it this summer. We're looking for founding membership partners; eventually it will be a membership organization, so we'd love anyone's involvement. As Brian said, we're expanding internationally at this point, certainly with the organization and hopefully with SERA as well. We hope that if you're interested in this, you'll reach out to us, engage in this work, and start getting involved in the new nonprofit. So with that, maybe we can open it up; I know we don't have too much time, but maybe we can take some questions. Or cheers or jeers. Yes, I see some great questions popping into the Q&A. The first one is: how do you envision funding working in the long term? It's a great question. We were fortunate enough to get some funds to have an actual attorney put together the 501(c)(3), so Aletheia should be able to pursue grant funding on its own as an organization with its own infrastructure, and we hope there'll be some of that funding.
We also want this to be a membership organization, so we see a little bit of funding coming from folks joining the organization, which we can harness to engage in this work. And then conferences as well. So we see those as the primary means of bringing funds into the organization to conduct this work: kind of multi-pronged. Brian has a dream, and I think it's a wonderful idea, of paying people to review journal articles. Could you imagine that? Even if it was just a little bit of money, or maybe a little bit of a discount on your membership. You know, it's time for us to kind of take back the research enterprise. I mean, it is ours, and yet we seem to outsource it and provide companies with significant amounts of funds, and the same goes for the field of special education. Why can't we as an organization decide what grants we want to pursue? And if you think about the indirects, that money would come back to the nonprofit and could be deployed to do other work that isn't grant funded. Excellent. Maybe one other quick one, and then we'll wrap up: how is authorship managed, given you have such huge groups of collaborators? Have you worked on that? Yeah, we have a ping pong tournament at the end and it's ordered that way. No; there's the CRediT taxonomy, so there's a kind of open process for determining the order of authorship. We're really far behind on this as a field in education. If you look at physics, where they do very large group work, it's highly valued to be involved in these kinds of larger projects. So yes, there are different tiers based on the CRediT taxonomy, and then often within tiers it's alphabetized.
So it depends on whether you're conceptualizing, how much you're writing, and so on. It's a critical question that needs to be addressed before we get engaged in this particular work, so everyone feels like their work is valued. I also think, as time goes on, if we think about folks going through promotion and tenure, you could earn a certain status within an organization, within a crowdsourcing effort, by pointing to what you were involved in and what kind of integral role you played. To me that seems like a value-add for the field, because we're all worried about improving outcomes for students with disabilities, not necessarily shining a little light over our own heads for the different things we're engaged in. So we need to find a way to change the norms within our universities, and again, we're the same people; we are the people reviewing everyone's dossiers when they come through, so I think over time we can do that. And the CRediT taxonomy basically involves superscripts: there are predetermined contributions you can make to an article, whether that's collecting data, analyzing the data, writing up the study, or conceptualizing the study, and you get different superscripts, you know, one, two, and six after your name, and you're placed in a tier of authorship depending on a predetermined prioritization of those contributions. Daria, you've got some great questions there; we don't have time, but maybe I can copy those and try to respond. Just real quickly: we don't charge anyone to participate in a SERA study, and in our funded studies we're able to provide some financial support and an honorarium to research partners who participate. But we'd also like to broaden involvement to projects people don't get paid for, where the time commitment wouldn't be as significant and people could do a small piece of something. We're thinking, for example, of doing a true experiment with random assignment of work to be posted as a preprint or not, and then looking over time at the effects that has on presence on social media, citations, and things like that. That would be a fairly small lift, but we could get tons of different people involved, and I think it could be quite a powerful study. It doesn't cost anything to participate, and I think we want to avoid ever charging anyone to participate in a SERA study. Well, I'd like to give a huge thanks to our wonderful panelists today: Brian, Bill, and Char. Also thank you to all of you for joining us and being so active in the chat. It's been a really fun time today; it was wonderful to hear where the idea for the SERA project stemmed from, and some really innovative ideas. I have like 20 questions that I need to follow up with you all about. It's been an exciting time, and we thank you very much for your time today. Have a great week, everyone.