The whole point of an unconference is inclusivity: giving everyone a space to articulate their interests and engage with something of common interest. The common interest of this meeting is wanting to make research better, and that can happen in a variety of ways. It can happen personally, in the way we develop our own skills and abilities to do our work as rigorously as we can. It can happen at a small-group level, providing training, resources, and opportunities for our teams, collaborations, or labs to do the work the best we can. And it can happen at a cultural level, where we try to transform the reward structures and systems, the ways in which researchers are emboldened and able to do their work, given how the system works and encourages certain kinds of behaviors. Whatever your rationale, or mix of rationales, this is the place: the opportunity of an unconference is that it is highly flexible, adapting to the interests you have. You should not hesitate to jump in, to speak up, to look for others with shared interests, and to explore how we might work individually and collectively to advance the common goal of improving the quality and rigor of education research. I want to spend a couple of minutes talking about the context, and in particular the challenge we face, one that has been a broad and growing discussion in education research and across the sciences more generally: how we can do better, how we can improve the quality of our research, the credibility of the inferences we draw, and the translation of research findings into practice, whether in the classroom, in society, or anywhere else.
One core challenge is the recognition that we are not as transparent, open, reproducible, rigorous, and credible as we could be. There are a variety of open behaviors that may improve our ability to identify error, which happens all the time as a very natural part of the research process; that provide opportunities to improve the rigor and credibility of findings by being systematic and careful, identifying all the opportunities to make inferences and qualifying those inferences based on the quality of our evidence; and that provide opportunities to self-correct and build on one another by being more open, more collaborative, more sharing, more inclusive. These are behaviors that have been discussed as ways we might improve the credibility of research, and I want to highlight some results from a tool we've been using called the Open Scholarship Survey. We've been administering it in a variety of different research communities, drawing samples that are as close to representative of those communities as we can manage, and assessing people's behaviors, attitudes, and perceived norms around a variety of open behaviors. We have a summary plot from 12 different research disciplines on whether the sample reported sharing data in their most recent paper. The positive side is that there is a lot of blue: people across a variety of disciplines saying, yes, I did share data in my most recent paper. That's great, particularly because if we had asked this same question 10 years ago, there would have been very little blue; data sharing occurred, but not at the scale it is occurring now. The challenging part to point out, particularly in this audience, is where education sits in the amount of data sharing that is occurring, which is not as much as everywhere else. If you ask people about attitudes, are you in favor of data sharing, you see a lot of blue: in favor or very much in favor.
You see some yellow, people who are not so sure about it, and very little red. And again, education is in a place where it can move; there is room to catch up to where some of the other fields are. It's not just about data sharing. We can look at a variety of other behaviors that have been advanced as potentially credibility- and rigor-enhancing, preregistration being one. Again, a lot of blue, a lot of positivity in attitudes toward preregistration, and education has room to grow. If we ask, did you preregister your most recent paper, again there is growing adoption; many more people in various communities have started to preregister. But behavior lags behind attitudes: there is more favorability toward preregistration than there is action on it, and education has room to catch up. The same holds for attitudes about other behaviors, like replication. Should we be doing more replications in our field? Lots of blue saying yes, in favor. Less blue actually doing it, at least in the most recent paper, and when we ask in general, have you done replication research, the behavior is not as common as people's favorability toward it. Then there is reporting all results, even null results, in our papers: wide endorsement in terms of attitudes, but not as much of the actual behavior of reporting null results. So whatever the domain we look at, we observe this consistently: relatively good favorability in general, and behavior that has not caught up to that favorability. Part of that is a natural part of the culture change process.
It is often the case that attitudes toward valued things change earlier than the actions to do those things, particularly when the behaviors are complicated, or when the reward structures aren't yet in place to reinforce behaviors catching up to attitudes or emerging norms. There is also an opportunity in leveraging something we see in every one of these data collections across different communities: evidence of pluralistic ignorance. This is a classic social-psychological phenomenon in which we believe the culture is one way, but in terms of people's actual attitudes, the culture is a different way. Take data sharing as an example. The reality in our samples is that there is strong favorability toward data sharing among education researchers: 69% report being in favor or very much in favor, and only 8% report being opposed. But when we ask people what they think the average degree of favorability is among education researchers, they think that only 36% would be in favor and 31% would be opposed. This gap is very common: the actual attitudes in the community are further out on the pro-openness side than we perceive the community to be, and you can see that across a variety of behaviors. To illustrate, aggregating all 12 of the communities from the prior slides, the actual favorability toward these different behaviors, replication, null results, data sharing, all have at least 50%, some with very strong majorities, saying they are in favor. If you look at those same behaviors in terms of perceived norms, what we think the degree of favorability is, our perceptions are that there is much less favorability than there actually is. Flashing back and forth, you can see the amount of change, the growing blue, in actual attitudes.
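As a small illustration of the pluralistic-ignorance gap described above, and using only the figures quoted in the talk for data sharing among education researchers (the variable names here are our own, not part of the Open Scholarship Survey's analysis), the mismatch between actual and perceived attitudes can be expressed as a simple difference in percentage points:

```python
# Figures quoted in the talk for data sharing among education researchers.
# "actual" = respondents' own attitudes; "perceived" = what respondents
# believe the community's attitudes are. Names are illustrative only.
actual = {"in_favor": 69, "opposed": 8}
perceived = {"in_favor": 36, "opposed": 31}

# Pluralistic ignorance appears as the gap between actual attitudes
# and perceived norms.
favor_gap = actual["in_favor"] - perceived["in_favor"]
oppose_gap = perceived["opposed"] - actual["opposed"]

print(f"Favorability underestimated by {favor_gap} percentage points")
print(f"Opposition overestimated by {oppose_gap} percentage points")
# → Favorability underestimated by 33 percentage points
# → Opposition overestimated by 23 percentage points
```

The gaps (33 and 23 points) are what "making the actual norms visible" is designed to close.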
So our perception is that there is not as much favorability, and more opposition, than there actually is. That is an opportunity, because there are very clear, effective interventions for addressing the gap between the community's actual attitudes and the perceived norms, and one of them is simply making the actual attitudes visible. Slides like these, saying you think more people are against it but actually most people are favorable to it, can change the behavior of people who are equivocal: oh, I see this is an emerging norm in our field; all right, I'm on board, because I can see the value. It also makes it easier for people to adopt the behavior: oh, I thought there was a lot more opposition to this in the community; maybe I'll go ahead and give it a try. So we can encourage behavior adoption by promoting and making visible the actual norms rather than the perceived norms. Now, thinking about how to make the most of the unconference itself: it obviously depends on your goals. If you're here to learn more about how to do these practices, you'll have one orientation to the meetings; if you're here out of curiosity, learning what the discussions are, you'll have a slightly different orientation. A lot of the activities, as David introduced at the beginning, involve design: thinking about how we can improve the systems, the approach, the training, the materials, the ease with which people can start to enact these behaviors, the desirability of those behaviors, and changing stakeholders' policies and incentives to make those behaviors normal. All of that requires thinking about design. How can we take ideas and translate them into action, and particularly actions that can scale? We're not looking to change a behavior one time and have it end.
A lot of culture-change action is about changing the system elements, the factors that make it difficult to sustain behavior change toward greater openness. There are other things to think about in how we design an intervention, whether it's a training, a new education tool, a new resource, or otherwise, and some of these principles are very useful for testing and then iterating to improve adoption toward scalability. A first principle to consider: focus on the what and the why before thinking about the how. What do we want to accomplish? What is the ideal outcome? It is very easy to immediately get bogged down in "oh, that's going to be hard because of X, Y, and Z." If you get bogged down in all of the barriers to adoption, you may forget what the actual goal is: what are we trying to accomplish, and why? But if you can establish that as an anchor, then you can look very incrementally and specifically at those barriers. Okay, these are the barriers to the ideal; which of those can we tackle, can we make actual progress on, from this grassroots type of approach? A second principle is clarifying up front what success looks like: what would it mean to actually make progress on this thing we're trying to change? The translation from "we want things to be more open, more rigorous, more credible," a very abstract ideal, into "well, what does that mean, and what would we see?" is hard, so make it concrete. For example: what I mean by making things more credible is that when someone else tries to reanalyze the data from that paper, they get the same result. Reproducibility is my definition in this context; now, how would I measure that?
Those concrete objectives can provide a basis in design for figuring out how we should design toward that outcome, and how we can evaluate whether our design and approach are making progress. A third principle to consider is using agile, iterative development practices rather than trying to solve the whole problem before you actually enact a solution. It is really easy, in the context of culture change, to get overwhelmed: these are big challenges, and we need to design an entire solution to really make the full change. And of course there is a lot that needs to get built to change a research culture. But trying to solve the entire problem in one bite means never actually getting started. Culture change is hard and iterative; there are lots of wrong turns we will take in any solution we try to enact. So prototyping, iterating, and developing in a way that always provides something we can do, even from the very first version, making it incremental, is much more effective. The alternative is trying to build the entire ship and then finding out we forgot something, or didn't plan it right from the outset. This often means doing the easy parts first. A very useful concept applied in a variety of domains is the 80/20 rule. It says: okay, there's this problem, we've defined the ideal, we know what the problem is, we know what the barrier is. We can solve 80% of that problem pretty easily, but there's a 20% that is the hard part we keep getting stuck on, and it's where a lot of the work and resources end up going. The 80/20 rule says that 20% of the effort can solve 80% of the problem, and the remaining 80% of the effort, that long tail, is what it takes to get to the full solution.
For any iterative development practice, start with the 80, and parking-lot the hard parts we would need to solve for a full solution. If we try to figure those parts out now, we'll get stuck and we won't get the 80 done. But if we can figure out which parts are feasible, which we can make progress on from where we are now, then we know which things to parking-lot and plan for later development, once we've established some success with our interventions. Part of the agile process, in order to avoid over-investment, or devising solutions that require big investment before we can see any returns, is to start with what already exists. We may say, oh gosh, I could implement the 80% solution in Google Docs, but that's not going to get me to 100%, so I shouldn't do it in Google Docs. No: start with Google Docs; prototype there. That's easy, it's an existing solution, and we can make progress really quickly. Then identify where the existing tools don't serve the long term, and problem-solve. A lot of effective agile work is prototyping so that you can start doing something right away, road-testing the parts you can do, and then problem-solving for the parts that need additional solutions that are harder to resource; once you show proof of concept, you have a mechanism for getting those resources. One more thing to mention: by coming to a meeting like this, you are probably in the idealist camp in some way, someone actively motivated to promote open practices, to do open practices, to be a change agent in the research culture. And it is not a problem that the rest of the research community is not idealist in this regard. Most people go into research to focus on the problems that motivate them, whatever their research topic is.
They aren't motivated in particular by openness or improving the research culture; they're happy to be open or to have an improving research culture, but that's not the driver. So we have to appreciate that, for whatever problem we are working on, you are probably the one who cares the most about that problem. Design for the person who cares less about it but would adopt it under the right circumstances. We can get caught up in thinking: if everyone is an idealist about this like I am, motivated because they want this behavior, then they'll just do it. But then the designs don't appreciate the constraints people are under that may limit the likelihood they can adopt it, the time they have to adopt it, the energy to adopt it, all of the factors that influence people's behaviors, because they've got other priorities too. If we don't appreciate those priorities in design, we won't be able to get these types of solutions into the mainstream. We talk a lot at COS, and many others use it as well, about a model of the necessary ingredients for effective culture change, culture change that will stick and be sustainable. The structure is this: we have to consider how to make the tools available, the infrastructure that makes the behavior we're trying to promote possible; how to integrate into people's daily workflows rather than adding work onto busy people's lives, making the behavior easier and a benefit in their daily work; how to enact and engage the communities to shift these norms, giving them the training, tools, and resources so they can do the behaviors; and how to work with stakeholders, funders, publishers, institutions, so that it's rewarding and even required to do some of these behaviors. All of these are necessary to fully enact culture change.
Each of our solutions may engage only parts of it, but each part plays a critical role in the overall structure. If there is a tool out there but no good training for that tool, creating an excellent training resource is fantastic for fostering the community: it brings people closer to that infrastructure in ways the infrastructure itself is not delivering. The role of the unconference here is really in that middle layer, connecting the stakeholders changing the incentives and policies with the tools and resources available for the community to change. So we are here together and ready to work on this. The whole point of the unconference is to ask how we can do better, and whether we can start to build some resources, some skills, some esprit de corps among us, to start to enact these changes and improve the research culture around us. Thank you everybody for coming to this meeting; I'm excited to see what comes of it, and we look forward to having you in Charlottesville in the future, for the first time or again. Back to you, David.

Thank you, Brian. We do have time for a few questions, and I'll cut it off pretty precisely in about four or five minutes to get to some further discussion. There are good conversations in the chat windows. Would you be willing to mention how the sampling strategy and the degree of representativeness were handled in this case, and how results will be published or posted on our website? And then a couple more.

Yeah, great question. The sampling strategy for most of the communities I mentioned involved identification of prototypical journals from the field, and then outreach to the available corresponding authors of articles from those journals in a particular span, like two years. So it is.
It is less representative of early-career researchers, grad students for example, who are less likely to have published work and so are less represented in the sample, and more representative of those publishing at regular intervals in journals within that field. That provides a useful benchmark, with one interesting qualification: in other data, open practices tend to be more popular among more junior researchers than among more senior researchers, so if anything, these data may underestimate favorability. A reason the data may overestimate favorability is that the survey was, again, an invitation to share attitudes about research practices and open research, and people who are really opposed, who think, my gosh, I can't deal with this kind of thing, are less likely to respond to that request. So we can't fully say to what degree there are biases in either direction, but we did our best to make it as representative as possible of the publishing community from each research area.

In terms of making the data available, yes, that's very much the goal. David and his team have been working on this as an ongoing data collection, part of an NSF grant we have to track attitudes and change in these communities over time, and they are partnering with the research team in our office to turn this into published reports and open data so that anybody can use it and extend it. We don't have an ETA on that, but it's coming.