Thank you for joining us today for our presentation, Diversify Open Research: A Toolkit for Measuring Lifelong Learning. We're glad you're here to help us build our toolkit for research. The outline for today includes why we would measure lifelong learning competencies, how we developed the toolkit and are continuing to develop it, and how to access the toolkit when it's ready.

Joining us is Marla Lobly, our project coordinator, from East Central University in Oklahoma. Kathy S. Miller is the coordinator for Open OK State and an OER librarian at Oklahoma State University. And I am Jamie Holmes, a reference and instruction librarian at Tulsa Community College, where I serve on our library's OER support team.

One of the questions we needed to address as we came to this project was: why measure lifelong learning? Our foray into this was influenced in great part by John Hilton's literature review, published in 2019-2020, which discussed the OER research projects undertaken in the several years before that. One thing that seemed evident from that review was that OER success metrics were focused largely on letter grades, drop rates, and retention, all of which are relevant and important. But as librarians and instructional designers, we understand that a letter grade does not always capture what we might all define as success or learning. So we wanted to zoom out and see what else could be considered a strength of teaching with and modifying open educational resources. As we worked on our own literature review, we identified a lot of alignment between attributes of lifelong learning and the strengths that the modification of OER and open practices enable.

As you can see on the slide, OER research has also largely been limited to classroom populations, and to researchers with the time and the resources to design studies. We know that a lot of contingent, non-continuing, and adjunct faculty are working with OER, but the nature of their positions means they don't necessarily have the time, energy, or resources to conduct the kind of rigorous research projects that carry weight in the scholarly community. So we wanted to come together and create a bespoke toolkit (I hesitate to use the word plug-and-play) that could be used to implement a research project studying the impact of the use of OER on lifelong learning competencies.

Think, for instance, of the "kitchen sink librarian" from Amanda Larson's article, who has so many things to do. We are hoping to create a toolkit they can simply work through and come out with a study that is rigorous and aligned with best practices for, in this case, quantitative research design and implementation, so they can step into that space, enact their own research project, and articulate their own findings in a replicable, reliable, and valid way. We also pondered something else as we worked; if you refer to some of our earlier presentations, this is a triptych of presentations, and this is the middle one.
We realized, as we discussed the difference between, for instance, the PhD trajectory and the MLS trajectory, both of which are terminal degrees in a variety of fields, that the MLS is an information science degree: librarians come out with a very nuanced understanding of information science, of how to locate information and how to independently determine its credibility. PhD students, meanwhile, who take fifteen or more credit hours of research methodology courses, come out with a thorough understanding, ideally, of qualitative, quantitative, mixed-methods, comparative, and a variety of other rigorous research designs. When we combine those understandings to create this toolkit, hopefully what we produce draws on the nuanced strengths of a variety of fields to design a research study that can be implemented at whatever level, but that will produce findings meaningful across a variety of disciplines.

Thank you, Kathy and Jamie. I'd like to talk about how we developed the toolkit. We spent a year doing a literature review, trying to figure out what the measurable competencies of lifelong learning are, and we quickly discovered that lifelong learning is a very deep field, going back to the 1970s; that's why it took a year to get through the literature. We found a couple of major players in the literature. If you click this link, it goes to our project website, which shows a couple of different summaries of our literature review. The one I'd like to show you today is our one-page selection of the key research that informed the project.

In this first citation you can see that one of the biggest players in the lifelong learning literature is UNESCO, along with other supranational organizations like the World Bank and the European Union. They were looking into lifelong learning as a way to enhance society, whether from a humanistic view of human rights and human development, or from an economic view of developing job skills through lifelong learning. These supranational organizations did a lot of work figuring out what lifelong learning is and what it looks like in society. The main report that has really stood the test of time is the Delors report, a UNESCO commission that identified four pillars of lifelong learning. Those pillars will come up later in our toolkit.

The second major player in the research was, of course, the academic literature. We found six scales that measured lifelong learning. Each had a slightly different twist, so none of them quite fit our purposes, but we were able to see what competencies those scales were measuring. We ended up with a list of 71 competencies, gathered from the supranational organizations, the academic research, and a few qualitative studies; I'll discuss in a moment what we did next with that list, which is quite a lot. Also in our literature review, as Kathy stated earlier, we looked at the state of OER efficacy research so far; here is the citation for the Hilton article she mentioned. And we looked at generativism as the theory for the project.
That's a theory by Carneiro, which says that knowledge is created within a community that is constantly co-creating and re-creating knowledge. So we had a very open-source, open-education concept of what learning is, and we wanted to make sure we kept that lens in mind whenever we were looking at our competencies.

We had our 71 competencies. We could tell as a research team that there were some very obvious duplicates, where different researchers simply used different terms, but we still needed to refine the list and make sure we weren't measuring the same thing twice. So the four of us researchers (we had a grad student at the time) did a card-sorting exercise where we first grouped the competencies together and then went through and reviewed them: okay, I think these two are duplicates. It just so happened that our groupings fell in line with the four pillars of the Delors report. That was a really interesting finding, that the Delors report is still relevant and that the way we were approaching the competencies fell in line with it. Once we eliminated some competencies, we were down to 31.

Then it was time to start drafting items. We really wanted this project to be as open as possible and to include as many different perspectives as possible. So we created a spreadsheet, published it on Twitter, sent it out to our colleagues, and asked our advisory board to send it out to their colleagues, gathering as many different people as we could, academic and non-academic, to draft items. Here you can see our proposed domains, which came from the Delors report; then each individual competency; the definition we selected for that competency from the literature; an example draft item; and space for up to five items per competency. We did have to fill in some blanks to get to five for each competency, but we got some really good feedback from people. One caveat of this approach is that we don't know who submitted what, but that's okay, because this was just a very first step in our research project.

After we had our items, we needed a way to eliminate the ones that weren't going to suit our purposes. We did another open process, but this time more targeted: we recruited experts from a variety of fields related to OER and lifelong learning, plus some experts whose expertise was in item development and writing, and they completed what's called Lawshe's content validity ratio. I'll show you a little more about what that means. We found an amazing spreadsheet, available under an open license from Dr. Harold Peach, that explains what the content validity ratio is: a method for validating items on a scale by gathering the opinions of various experts on whether the items measure the constructs you want them to measure. The spreadsheet includes citations for articles that describe that process in more depth. The thing that was most exciting to me about the spreadsheet was that it calculates the CVR, the content validity ratio, for you; we just had to copy and paste our experts' feedback into the spreadsheet, and we had our CVR. We also gave the experts the opportunity to make qualitative comments on the items, and that was very helpful.
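To make that calculation concrete, here is a minimal Python sketch of Lawshe's formula. The function name, the panel of ten experts, and the example items are made up for illustration; they are not the project's actual items or ratings, and Dr. Peach's spreadsheet does this same arithmetic for you.

```python
# Lawshe's content validity ratio (CVR) for a single item:
# CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists
# who rate the item "essential" and N is the total number of panelists.
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel of 10 experts rating three draft items.
ratings = {
    "I seek out new learning opportunities.": 9,  # 9 of 10 said "essential"
    "I enjoy working in groups.": 5,
    "I prefer familiar routines.": 2,
}

for item, n_essential in ratings.items():
    cvr = content_validity_ratio(n_essential, 10)
    keep = "keep" if cvr > 0 else "eliminate"  # items at zero or below get cut
    print(f"{cvr:+.2f}  {keep}  {item}")
```

The ratio runs from -1 (no panelist called the item essential) to +1 (every panelist did), with 0 meaning an even split, which is why cutting items at zero or below keeps only those a majority of experts endorsed.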
After this process, you can see we had so many items that we could eliminate the ones scoring zero or below and revise the ones we wanted to keep based on the experts' comments. We then wanted feedback from students for face validity, to make sure that how they read each item was what we actually intended it to say. Our students also did a Lawshe's process, and I'll show you the results: you can see nearly everything came out at 0.5. I'm not sure if that's because we didn't have as many students as we had experts, or if that just wasn't the best way to get feedback from students. So we did a focus group to get some more distinction between the items and to let students freely comment on items that stood out to them. We got some really good quotes about how they saw specific items, and for this item in particular we ended up changing the wording based on their comments. That was a very interesting and fun process.

At that point we were back to about 70 items, including demographic questions. We used those items in a pilot test, with the understanding that we would do some factor analysis on them and reduce the items again; ideally the final toolkit will be between 20 and 30 items, which seems to be the survey length that gets the most responses.

We used a pre/post-test model in our pilot testing, because that's how this toolkit is designed to work once it's ready. The advantages are, first, that you don't have to run a really serious longitudinal study, which can be expensive with limited resources, and second, that you don't have to collect identifying information about each student in order to compare their individual results. If you collect individually identifiable information, it can be difficult to get IRB approval, and we wanted this process to be as smooth as possible, so our survey collects no personally identifiable information. It just takes the average: did the majority of students grow between the pre- and post-test? That's the idea behind it. You can see all the results of our pilot testing at that link. I've included our raw data, an easy way to convert the data from text responses to numerical responses in case you want to import it into SPSS or another data analysis package, and some instructions.

We pretty quickly noticed some issues with the data. We had some negatively worded questions, which we included at the encouragement of one of our experts in the Lawshe's analysis. Say, for example, one question said "I seek new learning opportunities" and another said "I do not seek new learning opportunities"; the same student would answer "true of me" to both. Clearly something is going on there: either they didn't read the negative question or they didn't understand it. We also had some straight-line answers, where a student answered the same thing to every question, and we didn't have quite enough responses to be able to eliminate those responses from the data. We still did our factor analysis, just as practice, to see if there was anything else we needed to change, and we'll be repeating this pilot test this fall: the pre-test will have already been done in August and September, and then the post-test in November and December.
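As a rough sketch of the cleanup and analysis steps just described, here is what that pipeline might look like in Python. The file name, column names, response labels, negative-item IDs, and the choice of four factors are all assumptions for illustration, not the project's actual data or workflow, and factor_analyzer is a third-party package rather than something the toolkit prescribes.

```python
import pandas as pd
from scipy import stats
from factor_analyzer import FactorAnalyzer  # third-party: pip install factor_analyzer

# Hypothetical export: one anonymous row per response, text Likert answers,
# and a "wave" column marking pre- vs. post-test.
df = pd.read_csv("pilot_responses.csv")

# Convert text responses to numbers for SPSS or other analysis software.
likert = {
    "Not at all true of me": 1,
    "Slightly true of me": 2,
    "Somewhat true of me": 3,
    "Mostly true of me": 4,
    "Very true of me": 5,
}
items = [c for c in df.columns if c.startswith("Q")]
df[items] = df[items].replace(likert)

# If negatively worded items are kept, reverse-score them on the 5-point scale.
negative_items = ["Q07", "Q13"]  # hypothetical item IDs
df[negative_items] = 6 - df[negative_items]

# Drop straight-liners: respondents who gave the same answer to every item.
clean = df[df[items].nunique(axis=1) > 1].copy()

# With no identifying information, individuals can't be paired across waves,
# so compare group averages with an independent-samples t-test.
pre = clean.loc[clean["wave"] == "pre", items].mean(axis=1)
post = clean.loc[clean["wave"] == "post", items].mean(axis=1)
t, p = stats.ttest_ind(post, pre, equal_var=False)
print(f"pre mean {pre.mean():.2f}, post mean {post.mean():.2f}, t = {t:.2f}, p = {p:.3f}")

# A first pass at exploratory factor analysis, looking for structure such as
# the four Delors-report pillars among the items.
fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(clean[items])
print(fa.loadings_)  # items that load weakly on every factor are candidates to cut
```

In a workflow like this, the factor loadings are what would drive the cut from roughly 70 items down to the 20-to-30-item target mentioned above.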
We are increasing our participants so that we can get more responses and hopefully have enough data to do a true factor analysis and eliminate the items we do not need. If you want to see our current survey, feel free to use it if you're interested: we have a template here, and you can click that link, duplicate it, and make it your own as you see fit. I also have a spreadsheet of the questions that shows the difference between our first pilot test and the second pilot test we're using this semester. We took out the negative questions, or rather changed them back to positive wording, to try to reduce some confusion. We did some reading about the cognitive load it takes to process a negative question when it's mixed in with positively worded questions, and it's debated in the literature whether to have negative questions in your survey at all, so that may be something someone wants to explore a little more.

Then, back to what Kathy was talking about, the differences between librarians and PhDs in what classes they're exposed to and what their experiences are: we wanted to bridge that gap a little for librarians who might need to adapt our toolkit to their context. Maybe you have to make some decisions different from the ones we made, and we would like to give you enough information to do that without having to take in-depth research methods courses. So we are developing an OER, and this is still very much in development, but it will be what you can access to decide how to apply this toolkit in your context. We would love to hear from you about what topics you would like covered: whether you are an experienced researcher or not, what would you need to know to apply this toolkit in your context, or at your institution? We have some ideas in here, but feel free to add your own ideas to the bullet points; we would love to see what you have to say.

If you want to get in touch with us, our project website and Twitter are linked here, as well as all of our email addresses. This project is made possible by a grant from the Institute of Museum and Library Services in the United States, and we want to acknowledge and thank them for letting us put this project together. We would love to hear from you if you have any questions, and thank you for watching our presentation.