My name is Trish. I work on the research and evaluation team and co-lead the Institute for the Scholarship of Teaching and Learning. As Will said, as part of that role I end up supporting a lot of teaching and learning projects, helping develop evaluation over the lifespan of a project, and I can provide support at as many or as few of those stages as you'd like. That's me; I'll pass it over to Natasha. Hi, I'm Natasha. I'm also on the research and evaluation team with Trish. My focus is more on the SoTL side of things, the scholarship of teaching and learning. But like Trish, we both consult on the TLEFs and the OERs, and we're happy to help you with any stage of evaluation for your ongoing projects. Perfect. Before we begin, we would like to acknowledge that UBC, which is hosting this session, is located on the traditional, ancestral, and unceded territory of the Musqueam people. And as we're working somewhat remotely today, I'd like to acknowledge that here in the Lower Mainland of BC, we're often on the unceded territories of many different Coast Salish peoples. When I acknowledge being on Musqueam territory, it's with the recognition that, as a member of the UBC community, I'm very lucky to be able to work and learn on a territory that's not my own. Additionally, as somebody who focuses on open educational resources, I also like to acknowledge that OER is often grounded in copyright and Western notions of ownership, which can sometimes be in tension with Indigenous and traditional ways of knowing. We're not going to go into that topic in this session, but I do like to acknowledge that those tensions exist. We did have a session earlier this year with Kayla Lar-Son, the Indigenous Services Librarian at Xwi7xwa Library, that explored that topic specifically, and I'm going to post a link to the recording of that session in the chat.
So it's definitely worth checking out if you're interested in that. Finally, before we begin: this is a very small group, so feel free to be chatty and we can run it more as a consult. It is being recorded, though, just so you know. And if you're not currently an OER Fund grant holder, we do have a grant cycle happening right now, and the deadline has been extended to January 19, 2023. If you're interested in applying for financial and other support for developing or using OER in your courses, please come talk to me. There are lots of opportunities, and there's still quite a bit of time to apply. With that, I'm going to hand it over to Natasha. Great, thank you, Will. Let's get started with our workshop. As Will mentioned, we are a small group, so please feel free to raise your hand, unmute yourself and ask questions, or throw questions in the chat. We're happy to pause and chat on the sections that are most relevant and useful for you. Overall, our goals for the session today are that you understand how evaluation can increase the success of your OER project, develop some clear objectives for evaluation and ways you might reach those objectives, and look at the potential challenges that might get in the way. As we go through this workshop, please keep in mind that this is just the start of the conversation. The idea is to get you started with your evaluation plan, but it will take more than an hour or an hour and a half to work it through. And a reminder that Trish, Will, and I are all here to consult with you and work with you on developing this. Just a quick outline of the workshop for today: we'll chat briefly about the big picture, about the evaluation cycle.
We have a couple of short activities to help you leave with some practical, tangible outcomes. Then we'll talk a little bit about different approaches to evaluation, considerations, tips and tricks, and we'll finish with questions and answers at the end. We have a couple of worksheets that we will go through during our session today, and you can find those at the website I've just put in the chat. As we get started, it would be helpful for us to get a sense of who is in the room, so if you could fill out this quick poll, we're wondering where people are in the OER process. Great. It looks like we're mostly at the stage of interest: considering applying, and trying to learn more about what the OER Fund is in general. We'll keep that in mind; do feel free to jump in at any point. So let's chat a little bit about why we evaluate. The goal of evaluation really is to make sure that your project outcomes are being met: making sure that the goals you have and the plan you've developed are working as you intended. Evaluation can help you figure out whether the goals you have stated are actually being accomplished. It can help you improve the design and implementation of your project: you have these broad goals, and you have intended impacts for those goals, and evaluation is about checking whether those intended impacts are happening, whether things are going the way you hope and expect. Evaluation can also help you make informed decisions moving forward: looking at what's working and what's not working, thinking about evaluation as feedback, and then making informed decisions about next steps.
In practical terms, project evaluation is a required piece of the OER Fund. If you develop something with OER Fund support, we expect that you will evaluate whether it is actually accomplishing its purpose, and that you will provide that information in the reporting process. The other critical piece of evaluation is that it is an iterative process. It is not a one-time thing, where you have a goal, you check on it once, and then you're done. It's similar to the research cycle: it's a circle, an ongoing thing. We'll skip the second poll on which stage you're at, since there aren't many of us in the room and I don't think anyone here is a current OER Fund grant holder. In terms of the evaluation cycle, we will be going through primarily the first two parts: looking at what we're hoping to evaluate, and what impact we hope the thing we're developing will have. Then we'll talk a little bit about identifying sources and methods. The goal for the rest of the session is to get us through parts one and two, to get us ready for part three, or for considering how we might do that in an application. So, practice and intended impact: what do we mean by those terms? In the context of OER, the practice really is the resource you're developing. The content of these slides is built off the TLEF, which is a similar idea but covers a bigger range of activities, whereas with OER the practice is more tangible and specific. So in the context of OER, the question really is: what would you like to evaluate? What is it that you're building or creating?
For example, that might be a textbook, a website, or some additional tool you are creating. The impact is what you want that product to do: what you want the outcome to be as a result. Examples might be things like motivation, increased awareness, increased understanding, or student learning. We really encourage you to think about what you're doing and what impact you want it to have, and then to evaluate those impacts and whether they're actually happening: whether student motivation is really changing, or whether student learning or performance is really changing, as a result of this thing you have built. In terms of defining practice or product, this little chart takes us through the steps we hope you will be mindful of as you're considering evaluation of the OER. For this slide, you may already know what your hoped-for OER is, or perhaps you're still exploring what it might be. We won't brainstorm that in detail today, but it is in the worksheet, and we would encourage you to take some time to brainstorm the practice and product you might want to create. It's helpful to spell out exactly what that resource is, who you hope will access it, and the context it will be used in; all of these are important considerations, including the classroom context in particular and the level of the students. Once you know what your practice or product is, think about how to define your intended impacts. In this example, our sample practice or product is developing open self-study quizzes for all sections of a particular class, and the intended impact might be to increase student learning and knowledge as a result.
So, think through all of those potential intended impacts, and what it is that you really want this product to be doing. Here we've got our list, and I think we can skip the activity. Typically, when we have folks who are current OER Fund grant holders, we ask them to give us an indication, and we often see a good assortment of the different impacts people hope their new resource will have. I just want to jump in here. The purpose of this activity is to help people think about making the link between the resource you're creating and what you want it to do. This is just an example of some of the possibilities; these are impacts we have seen time and time again in TLEF and OER projects funded to date. You can also think of this as a motivational slide as you're developing your project and the goals you hope the resource will have. A lot of times people say, "I'm going to develop the resource; that's the goal," but you want the resource to do something: for students, or for your own teaching practice, to be helpful in some way. So even for those of you in the room who aren't at a stage yet to map this out, the worksheets Natasha linked to earlier are a good place to start, once you have a resource in mind, for identifying the things you hope that resource will achieve. Great, yes. As Trish mentioned, these are the more common impacts we see, and we often see folks check off a fair number of them. It's not that you need to pick one impact; the question is what you hope will happen, and oftentimes that is a number of different things.
So, for example, cost savings for students, as well as student engagement, student well-being, and other pieces. But as Trish mentioned, use this list as a starting point to reflect on what you hope this resource will accomplish. Thinking through those pieces should help you develop the evaluation questions you want: what are you looking to answer? What are the guiding questions? The questions should involve your intended outcomes, and the answers to those questions can become the claim of your project's success. In this example: how do these quizzes impact student knowledge of the core concepts of the course? Being able to pull out that information will help you determine whether the resource is actually doing what you hope, accomplishing what you hope, and serving the students in the way you hope to serve them. Again, we'll skip through the activity portion here, but a reminder that you can use the worksheets we've provided at that link to develop and track these pieces: what do you want to build? How do you hope it will impact students? And what is the actual evaluation question you want to establish from that? So I'm going to move on to the next stage. After you've defined your evaluation questions, what the resource is, and the impact you hope it will have, you want to think about how you would answer that evaluation question. How will you collect evidence to know how the resource is impacting student learning, or student motivation, et cetera? I'm going to walk you through some of the more common sources and methods we see in a lot of projects. First, I want to talk about different measures for capturing this data.
Again, working with that earlier example: how did the self-study quizzes impact students' knowledge of core concepts? How would you know if that's happening? Some measures to consider would be things like performance on knowledge tests, which we might access via quizzes, or something like student confidence: how confident are they, a judgment of learning, for example, of those core concepts? That could be evaluated via surveys or focus groups. Again, in the worksheets at the link Natasha shared, there are examples of different measures that might make sense depending on your impact area. When we say something like "we want to improve knowledge," that's a really big concept, a really big impact to hope for. So break it down: how will you know if knowledge has changed? How will you measure it? Once you've decided that, think about the evaluation method to use. We're going to walk through a few of the common evaluation methods we see in TLEF and OER projects. This is not an exhaustive list; there are many other things you could do, like looking at different types of course analytics (engagement with the material and how students are interacting with it), feedback around that, or observations in the classroom or of use of the tool. So there are other methods; these are just the ones we see most commonly, and I'll spend a few minutes giving you a sense of the pros and cons of each. Oftentimes what we see is someone saying, "Oh, I'm going to do a survey to find out if this resource is working or not." We really, really encourage you to go through those earlier parts of the workshop first.
So: carefully lay out the measures, based on the impact you hope to have, before you pick a method. Just because you're familiar with something like a survey doesn't mean it's necessarily the best way to get the data you need. Be mindful before you pick a method; be thoughtful about what you're hoping to measure. First, I'll walk through interviews and focus groups. Interviews happen at a smaller scale, usually one-on-one; focus groups tend to be larger groups of people. But the dos and don'ts, the tips and tricks, tend to be similar, so we've lumped them together here. One reason it might make sense to do an interview or focus group is that you get a more detailed understanding of experiences. You're able to really dive into what it was like to use the resource: how people felt while they were using it, whether it made sense, whether it was helpful, whether it felt engaging. You're able to ask more pointed questions and have more of a conversation. That being said, you are only able to get data from a subset of participants. If you're using this resource in a class of 200 students, you cannot possibly interview all of them or run a focus group with all of them. So be mindful that it is a narrower scope, but you tend to get richer data as a result. Focus groups are really beneficial if you're trying to elicit more of a discussion around particular topics. For example, if I were in a focus group with Natasha and Will, maybe Natasha would bring up, "You know what, when I was using the resource, I actually found it really confusing to navigate this part, and I felt like I couldn't get to the self-study quizzes because I didn't know where to find them."
And Will might echo that and say, "Yeah, actually, that was really confusing for me too. I hadn't thought about it, but you're right that the colour scheme wasn't great, or the dropdown menu was unclear." Or Will might disagree and say, "Actually, because there was a link on the home page, I found it really easy to use." So it's not necessarily about getting consensus; it's about how one person's response might trigger someone else to remember something about their experience, or to agree or disagree with that feedback. It's really helpful in that way. Interviews, on the other hand, are better when the topic is a little more personal or sensitive, depending on the content, and for drawing out an individual voice. Another case is if you're worried about things like power dynamics: not wanting to run a focus group that involves both faculty members and TAs, but instead doing separate interviews with the faculty member and the TA for a course to get those perspectives. You can also spend a little more time drawing out an individual's perspective in an interview. Some general tips for interviews: make sure you have a protocol. Don't just walk into the session with a vague sense of "I just want to know whether they liked it." Map out questions in advance, be targeted about your goals, and be mindful of people's time. It's one thing to just have a conversation, which can be a form of evaluation, but if you're trying to answer a specific evaluation question, think carefully about how to get those answers. Also, think about how to elicit discussion. If you're running a focus group with six or seven students, you might be familiar with the lecturing experience of crickets in a room.
You ask, "What did you like about this resource?" and no one says anything. Again, if you have a protocol, you can map out follow-up questions and ways to get responses from people to get the conversation going, so that you're not stuck sitting in a silent room for an hour. Also, be mindful of getting student permission and consent to record the session. And I also recommend, especially for focus groups, having a second person there to help take notes. You're not going to remember all the discussions that happened, and how you perceive a situation in the moment might not map onto what you think about later: "One student brought up this perspective, but I don't remember exactly what they said, or the context they were talking about." Having that recording can be really useful to remind you what the conversations were about. Next, surveys. Surveys can provide a wider, larger sample of the experience with the practice. In that earlier example, if you're surveying 200 students, you're likely to get a larger range of responses: a wider sampling of the population. But surveys don't go into as much depth as an interview or focus group, so you have to be mindful of the types of questions you're asking. Surveys can also be really useful for comparing groups of participants. If you're interested in two different types of experiences, or you want to learn how students' experience of one module of the resource compares with another, you can compare. If you have a set of questions around engagement, motivation, or understanding of the content, you can compare what student engagement was like when they were interacting with one piece of content versus another, and see how it differs.
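As a concrete sketch of that kind of group comparison, here is a minimal Python example of summarizing Likert-style survey responses for two modules. Everything in it is invented for illustration: the module names, the response data, and the "percent agreement" summary are assumptions, not part of any real project.

```python
from collections import Counter

# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
# to a statement like "This module's content was engaging", collected per module.
responses = {
    "Module A": [5, 4, 4, 3, 5, 2, 4, 5, 3, 4],
    "Module B": [2, 3, 2, 4, 1, 3, 2, 2, 3, 4],
}

def summarize(scores):
    """Return percent agreement (ratings of 4 or 5) and the full distribution."""
    counts = Counter(scores)
    agree = counts[4] + counts[5]
    pct_agree = 100 * agree / len(scores)
    return pct_agree, dict(sorted(counts.items()))

for module, scores in responses.items():
    pct, dist = summarize(scores)
    print(f"{module}: {pct:.0f}% agree, distribution {dist}")
```

With identical questions asked about each module, a side-by-side summary like this makes it easy to spot where one part of the resource is landing differently than another, which is exactly the comparison described above.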
Again, for something like developing a resource, it's really helpful to get feedback on the entire resource, not just one piece of it. Surveys can also easily be integrated into activities or assignments, so you can be thoughtful about the way you gather and elicit feedback from your students. In the self-study quiz example, if the quizzes are built into Canvas, you could easily add a question or two at the end of the quiz: "What was your experience with this quiz? Please rate on a Likert scale how easy it was to navigate the Canvas site," or "Did you find the quizzes useful for your learning of concept X? Strongly agree to strongly disagree." That ties the feedback into the content itself: you're getting it in the moment, while students are interacting with the tool, and it doesn't require a whole separate survey. And a survey doesn't have to be dozens of questions; it shouldn't be dozens of questions. Keep it short and concise. Integrating it into the activity students are already doing is really useful, both for motivating students to respond and because you can make the case: "I've integrated something new into my course, and I want to check how it's working for you." Some tips around surveys: as I mentioned, keep it short, and focus on the questions that will help you answer your evaluation goals. This is something Natasha and I spend a fair amount of time on in our roles: reviewing survey questions. It's fine to draft as many questions as you want at the beginning, but before you share the survey, go through each question and ask: what data will I get from this question?
Will it tell me whether students are motivated to stay enrolled in this course? Will it tell me whether students are learning this key concept? Go back to the evaluation questions you developed earlier, based on the impacts you hoped to have, and make sure those answers will be there. Also, pilot your survey: results can be really dependent on the wording of the questions. We see a lot of poorly worded questions, confusing questions, or questions that use jargon students might not be familiar with; you, as the person doing the evaluation, may have a clear sense of what you mean, but the reader might not. So have someone read through your survey, even a TA in your course or a student involved in the project, and make sure what you're asking comes across the way you intend. Finally, use a FIPPA-compliant tool. At UBC we have access to UBC Qualtrics, which stores all the data locally and keeps student information secure. At this point I'll make a quick plug for a survey design workshop that Natasha and I are hosting in less than two weeks. If and when you get to the point of developing an OER and you'd like some support on survey design, or if you're designing a survey for another purpose, I encourage you to join that workshop for more tips and tricks. The other piece I want to point out: we've talked about surveys, focus groups, and interviews, and we know people have different levels of comfort and familiarity with each of them, but we really do encourage people to take a more holistic approach to data collection and evaluation. Something we often do and recommend is using both: some quantitative and some qualitative feedback.
For example, maybe you do a survey and, based on the results, you realize students are really struggling with modules three and four. Then you do a focus group to dive a little deeper and find out what they're struggling with: why that content isn't resonating, or why they're not learning the way you intended. Or vice versa: maybe you do a focus group and get a general sense of how things are going, and then implement a survey that goes to the broader class to check whether that feedback matches the general student experience. Using a combination of both can be really rich and informative. Here's a small visualization of how you could present this, using the example of a new peer feedback system. Suppose a survey question asked students to respond to the statement, "I'd like to see [the name of the system or your resource] used for giving feedback on other assignments in this course as well," on a Likert scale from agree to disagree. Now we have a figure showing that a lot of students agree with the statement. If you then followed up with interviews or focus groups, you can also include some short anonymous quotes to provide richer context. So, students said they would like to keep using this peer feedback system you developed; 60% of students say this open educational resource would be useful and they'd like to keep using it on other assignments. But what is it that they like about it? Why is it working for them? By doing interviews or focus groups, you're able to gather that richer data. So, here are some examples of things people might have liked about our imaginary resource.
Some things they might like: it's anonymous, so they're able to give more objective feedback. What we see on the left is the positive feedback from the 60% who agreed, and on the right is the room for improvement. Some people might say anonymity leads to lower-quality feedback, or that the format made the text hard to read. So you're able to gather feedback from both perspectives. And I really want to highlight the importance of that orange, disagreeing, negative feedback. That's one really important point of evaluation. The point of evaluation is to get feedback, and hopefully that feedback is good, but we think of evaluation as iterative because a resource or project is almost never perfect. Evaluation helps you figure out what students, or the team you're working with, are struggling with, so you can improve and make changes. In this example, the format of the text made it hard to read. If you gather that data early in the development of your resource, you can simply change the format, no problem, or give students options to select the format they want, and then rerun the survey: "Did you like the format of the text? Was it readable?" Whereas if you wait until you've built the entire resource and sent it out to your entire faculty, and only then get feedback that the format wasn't good, you've spent the whole funding period without honing in on what's working and what isn't, and without being able to make changes as you go.
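A figure like the one described above pairs a quantitative agree/disagree split with themed quotes. Here is a minimal sketch of how you might tabulate both from raw data; all of the responses, percentages, and theme labels below are invented for illustration, not taken from a real project.

```python
# Hypothetical survey data for the peer-feedback example: Likert responses plus
# short free-text comments manually tagged as "positive" or "improvement".
likert = ["agree"] * 12 + ["neutral"] * 3 + ["disagree"] * 5
comments = [
    ("positive", "Anonymity let me give more honest feedback."),
    ("improvement", "The text format was hard to read."),
    ("improvement", "Anonymous reviews felt lower quality."),
]

# Percentage breakdown for the stacked/diverging bar.
total = len(likert)
split = {level: round(100 * likert.count(level) / total)
         for level in ("agree", "neutral", "disagree")}
print(split)  # {'agree': 60, 'neutral': 15, 'disagree': 25}

# Group the qualitative quotes under each theme so positives and
# room-for-improvement items can be reported side by side.
by_theme = {}
for theme, quote in comments:
    by_theme.setdefault(theme, []).append(quote)
for theme, quotes in by_theme.items():
    print(theme, "->", quotes)
```

The point of the pairing is the same as in the slide: the percentages tell you how many students want to keep the resource, and the themed quotes tell you why, including what to fix in the next iteration.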
So again, really driving home the idea that evaluation is meant to be an iterative process, an opportunity for you to make improvements and changes that boost the positive feedback you get. I just want to take a break here, since we're quite a small group, before I move on to the next section, and see if anyone has questions. We're ahead of schedule, so we have time for a little discussion. Any questions so far? No questions. Perfect. Okay, I'll move on; we'll have time for questions at the end too. Next, once you're at the stage of collecting and analyzing data, there are some important considerations: as I mentioned, piloting your questions, and BREB approval is another important one. We'll touch on those briefly. So, you've decided: this is the resource I'm building; I hope it impacts students in these ways; here's how I'm going to ask those questions; here are the methods I'm going to use. Now you're building out your evaluation plan. One thing you should do is establish a timeline with milestones. How will you know that things are happening the way you want? Who is doing what? Do you have a research assistant who will help design your survey questions or run the focus groups or interviews? What do you need? Do you need to book a room for the interviews? Do you need some type of incentive for participants? Do you need to learn how to use Qualtrics? What skills or tangible resources do you need as you move forward in your evaluation? And how will you know if a milestone is met? People think surveys are short and snappy and easy: "I'll just keep doing a survey every semester."
But at what point do you say: okay, this tool is good enough; this objective is being met; the impact I wanted is there? What kind of data do you need to give you that assurance? The next piece is other considerations: some common fallacies we see as people develop their project plans. Again, this comes back to piloting. Just because you understand something doesn't mean your participants will. For any protocol you have (survey questions, focus group questions, interview questions, or any other communication with participants), make sure what you're asking makes sense to them: reduce the jargon, avoid things like double-barrelled questions, and make sure the questions are being interpreted the way you intended. The last thing you want is to collect a whole bunch of data and then realize, "Oh, I was talking about this module or this interaction in a specific way, but students didn't understand that, so they were talking about something else entirely." Also: just because something is interesting doesn't make it an evaluation question. Again, with the activities in the worksheet, specifically activities one to three, narrow down what your evaluation questions are and stay focused on those when gathering your data. Sometimes people say, "I'll just add this extra question because it might be interesting," or "It might be interesting to know this demographic piece of information." There are problems with asking those extra questions. One is that it can be problematic for your data set.
If you're asking students, for example, about their gender, but you have no intention of using gender as a factor and no reason to suspect that gender influences the way they interact with your resource, don't ask that question. It can lead people to think that you're interested in it, or it can influence the way they respond to the questions, because they assume that's the focus of the project when it's really not about that; it's about understanding whether this resource is working. So be really mindful about the questions you're asking, and again, for each question you ask, go back and ask yourself: how will this help me answer my evaluation question? Having that evaluation question really clearly mapped out will be helpful in that process. In determining whether a question is worth asking, consider its relevance, specificity, and inclusion. Relevance: literally for every question you have, do you need this item? What information will you get by asking this? Specificity: how will you analyze this question? Make sure that each question you ask has a purpose toward the larger goal, and that it's not just extra data you're collecting for the sake of it. And inclusion: make sure that you're able to capture different voices, and that the wording of the question welcomes feedback from everyone in the group. Some additional considerations are how to integrate evaluation into the course, and things like flow, time, and cost to administer. If you're doing something like a survey, you'll need someone to help you design those questions, someone to create the survey, presumably in Qualtrics, and someone to anonymize and collate that data for you. If you're doing something like a focus group or interview, those are much more time-consuming.
And you need to think about how that process will happen. We always very strongly discourage, if you're teaching a course, also being the one who does the interview or focus group. That's a huge ethical do-not-do, for many reasons: you're unlikely to get the data that you want, students will feel pressured to respond, and there are huge power dynamics at play. Keeping student confidentiality in mind, it's also problematic. So think about who else needs to be involved in order to make this happen. Think about the time required. If you're asking students to do a survey, again, keep it as short as possible so that you're not asking for an hour, or even half an hour, of their time to get your feedback. Likewise, with something like a focus group or interview, think about how you can provide an incentive or a reward for participating, since you're taking time away from them for the purpose of gathering feedback. Another piece around time is when to administer the focus group, survey, or interview: avoid things like midterm season or final exams. Depending on your evaluation question this might differ, but oftentimes we try to say, incorporate it as closely as possible to when students are using the resource. If possible, get their feedback right after they've gone through that self-study quiz module, as opposed to four months later when they might not remember that they used it at all. Then there's ethics. This is a whole other conversation on its own. Natasha shared links to a number of different resources, and in the resource hub she shared earlier, there's a whole guide around determining whether you need BREB approval or not. We have a whole workshop on this question. I should backtrack: BREB is our institutional ethics board that covers behavioural research, the Behavioural Research Ethics Board at UBC. And many of these projects likely won't require BREB approval.
Most of the evaluation around OER projects is more of a quality assurance step, evaluating whether the resource is working. But it is a helpful process to walk through some of the checklists, just to make sure that you're on the right track. Natasha posted a number of links with many, many resources. We put together a ten-minute video that summarizes the whole hour-long workshop we normally host, so it's an easy one to watch. The third link here is a checklist provided by the Research Ethics Office that you can go through to get a little peace of mind on whether you're going down the research track or focusing solely on evaluation. The other thing I just want to highlight is that being ethical and requiring ethics approval are two different things. If your project isn't deemed research, then you don't need to go through the BREB approval process, but that doesn't exempt you from being ethical in the work that you do. Things like asking for student consent before you collect any data, anonymizing the data or collecting anonymous data when possible, and, for things like a focus group or interview, having someone who's not involved in the course facilitate those sessions, are all really important in considering how that data is being collected and used. Also, in terms of the consent form, be really clear and upfront about how the data is going to be used. That's about being ethical, but part of it is also keeping students informed about what the purpose of the project is and how the data will be used. And I've found time and time again in evaluation projects that it helps just to be upfront with students and say: I'm trying out these new modules or this new resource for the first time, and it's really valuable for me to get your honest feedback on whether you like it or whether it's working for you.
Just being upfront with them about that before you send a survey or start the focus group makes a huge difference. Be honest about the purpose of what you're doing. There's no need to conceal anything; it's not some mysterious research process, it's about gathering feedback. And likewise, just be honest: this is how I'm planning to use your data, and this is how the data will be stored. All of those considerations help students feel more confident and, I think, more willing and open to share their feedback with you. So moving back to that evaluation cycle, we're not going to spend too much time on these last two pieces here, but again, think about evaluation as an iterative process. Once you've collected and analyzed your data, take time to share your findings. That doesn't necessarily mean you have to write up a big fancy publication. It can, and maybe it will, but it also means having conversations with your colleagues, sharing something at a departmental meeting, or attending one of the many CTLT events that we host and sharing your results that way. That helps other people learn about what you're doing, and it can encourage new resources or adaptation of the resource you developed. So perhaps you make these self-study quizzes and you hear that students really love them in the context of labs, but less so in the context of lectures. That could encourage a colleague to adapt them for their own course. It also involves thinking about how you can modify your pedagogy. Getting feedback from other people matters; in some of the most valuable sessions I'm part of, you'll hear someone say: oh, I thought this was going to work, and it really didn't, students really didn't like it. Has anyone had a similar experience, or has anyone tried doing this in their classroom? And then you have that conversation.
That's part of the dissemination process: sharing what you did and what you learned. It can help you take the next step to change what you're doing and make a small tweak, you know, change the font size or add a new link to the homepage to make the resource more accessible, elements like that, so that you're modifying your pedagogy and then starting the cycle over. What new questions will you ask? What new feedback will be required to help you understand whether the resource you've developed is working for people? Now, I understand that no one here is holding an OER Fund funded project right now, but I just want to go over a few examples of what evidence could look like. When you're thinking about collecting your data and asking, how will I know if something's happening, here are a few examples of how you'd want to talk about this information. Sometimes, when people are describing their evaluation plan and what they're hoping to do, they'll say something like: we conducted surveys to determine student satisfaction. What you really want to do is elaborate on what you learned as a result. So when sharing your results with your colleagues, in a formalized report, or in whatever format it might be, be as specific as possible. In one of the examples here, you used a Qualtrics survey to understand the new rubric that you developed; that was your resource. You found that a lot of students said it helped them prepare for the final exam; that was an impact you intended, since you had hoped the rubric would help them feel more prepared, and that's what you learned. Something we see often in the context of the OER Fund is: the new open text, which is the practice or resource in this context, saves students a certain amount of money based on how much they were spending before.
So those are just some examples of ways to think about what you hope will happen as a result of your project, and you can use them to think about how you'll develop your evaluation questions. This is another example, from an OER project that was funded and conducted in 2017 by Christina Hendricks here at UBC. It's an example from a paper that was published on her resource, breaking down all of the research questions and the methods. So again, just another example of how you might start thinking about the evaluation framework. Finally, some final resources. I think we've shared all of these throughout the workshop as well, but here's a link to them. We'll make sure these slides get shared with everyone after the session, so you'll have access to all of the links that were embedded in the slides and you won't have to save them all now. And for anyone here who's still thinking about what an OER is, whether they should apply for the fund, and how to apply for it: if you're thinking about the evaluation component, please feel free to reach out to myself or my colleague Natasha. For all things OER, all elements of it, reach out to Will. We also have faculty liaisons, so if you're in one of these faculties, there's someone dedicated to you; Marie is actually one of our participants here today, so there's another friendly face you can reach out to for a little more feedback and guidance. And now we've come to the end of the session, so I'm going to stop sharing my screen. We've got quite a bit of time remaining, but we'll open it up and see if anyone has any questions for us. I understand, yeah, that no one's necessarily ready or at the stage to be developing a survey or a tool right now, but if you have questions about the process or something about your project that we can answer, please feel free to go ahead.