access to myself and my colleague Marisa, who's joining me today. But if you want to reference back any of these materials, they will be available there. So to start off, I'll introduce myself. I know and recognize many of you, so thank you for joining us. Good to see you all. My name is Trish. I work with the Centre for Teaching, Learning and Technology on the Research and Evaluation Team as a research and evaluation consultant. Part of my role is helping folks who have TLEF funding, and other sources of funding, evaluate their projects. Today we'll focus on TLEF funding. And I'm joined today by my colleague Marisa. Marisa, I'll let you jump on and introduce yourself.

Hi, everyone. It's nice to see you all. My name is Marisa, and I'm a grad student at UBC. I work for the Students as Partners Fund as the evaluation specialist. So it's a similar role to Trish's, except I'm working with Students as Partners, supporting project evaluation and the evaluation of the fund as a whole.

So before we get started on the session itself, I just want to take a minute to honor the First Peoples of the land that I'm joining this session from today, which is the land of the Musqueam, Squamish, and Tsleil-Waututh peoples, in this place known as Vancouver. I really appreciate living here. I've lived in Vancouver for almost 10 years now, and I feel like I'm always provided with many opportunities to learn about myself from the land. I really appreciate this opportunity and want to commit to continuing to learn from and respect the land that has been cared for over millennia by these peoples. I also want to acknowledge that many of you might be joining us from places different from where I am, and so I invite you to take a moment to appreciate and give respect to the lands that you're situated in.

So just to give you an overview of what we hope to accomplish today. Hopefully by the time you leave this meeting, you'll have a better understanding of why evaluation is necessary for your project's success, and you'll have some clear evaluation objectives, so things like intended outcomes and measures of success. I'll talk about what those are in just a minute, as well as some ideas on methods that you can use to reach those goals. I will also talk a little bit about some of the challenges that can come up while you're doing your evaluation, some ideas on how to troubleshoot them, and where to get support. Again, I really want to reiterate that this is only an hour-and-a-half session for projects that might span many years. So this is really just a conversation to get you started thinking about your evaluation plan. I know many of you might have just received your funding, or notice of your funding, in the past few months. So it's really just meant to ease you into the process and be a resource that's available for you, alongside myself and Marisa, to consult with along the way.

And just an overview of what will be covered today. We'll look at the big picture of what evaluation is and why you should, and need to, do it. Then we'll go through a couple of activities to help you start brainstorming what it is that you're doing and what you hope the outcome, the result of that, will be, and then narrow in on what your evaluation question or questions are.
And then we'll take some time to look at different evaluation measures and approaches, and then move into those final considerations, challenges, and other things you need to consider along the way. Hopefully we'll have some time for Q&A at the end. Marisa and I will be monitoring the chat, so if something comes up that needs clarification as I'm talking, please feel free to type a comment in the chat and we'll try to address it, either in that moment if possible, or we'll hold it if it's something that will come up later or that needs a bigger discussion.

So the first thing I want to do is just get a sense of who's joining us today. I'm going to share a poll with you and ask you to indicate whether you're a current TLEF holder, a current SAP (Students as Partners) fund holder, perhaps a lucky holder of funding from both of these sources, or whether you're just here to learn a little bit more. Maybe you're thinking about applying in the future, or maybe you've done one of these projects in the past and want to strengthen your understanding of the process. So, who's joining us in this meeting today? The majority of you are TLEF holders, but also many Students as Partners fund holders, which is fantastic, a few lucky folks who hold both, and a few who are just interested. And it's really great to see a few of you from the Students as Partners fund. This is our first time offering this combined evaluation workshop, since it is a new fund. The language here is built around the TLEF fund, because that's existed for decades and this is the workshop that I host for it, but we've tried to cater it so that it fits both populations, and we'll talk a little bit about some unique elements. So please do feel free to let us know if you're thinking, "I don't really understand how this applies to me," or you're a little bit confused. We'll try our best to answer your questions for this new funding opportunity.

Why should you evaluate? Hopefully you're here because you do think evaluation is valuable. But oftentimes when I first start conversations with people around evaluation, they get hung up on this idea of evaluation as an assessment of their ability, or an assessment of their teaching ability, or of their relationship with students, or things like that. And those could be things that you are evaluating: how you're doing and what's working. But evaluation in this context is really about making sure that what you set out to do through your funding is actually being met, and that the plan you set in place is working as intended. As part of your proposal for both TLEF and SAP funds, you had to write an evaluation section. I know that often changes as time moves on and your projects evolve. But you have those goals in mind, and you have these outcomes that you're hoping to achieve, whatever they may be, and we'll talk about some examples in a minute. Evaluation is really meant to be in place to say: yes, the thing that you want to do is revitalize your course or provide some new resources for your students; so how can you make sure that that's actually happening, and that people are getting the benefit that they want out of it? So with this, evaluation is meant to help you determine if those stated objectives or goals are being met, as well as improve project design and implementation along the way.
So it's not meant to be that you put everything together and then do one snapshot at the end. It is meant to be an iterative process that helps you make changes as the project evolves, and alongside that comes making informed decisions about changes to make, things to keep, and things not to keep. Also, for both TLEF and SAP project holders, project evaluation is a required component of the reporting process, so you can't get away from it.

So, as I mentioned, our team thinks about evaluation as an iterative process. This is the evaluation cycle that we'll be walking you through today, and it's how we think about evaluation as a whole. It begins with defining your questions, and then moves on to identifying sources and methods: how will you know if those questions are being answered? In order to do that, you'll collect and analyze data, and then hopefully share your findings. That can mean many different things. It doesn't have to result in something like a formal paper or, necessarily, a workshop at a big conference. It can also include things like talking to your colleagues or presenting at a workshop, like at Celebrate Learning Week or another CTLT event. So sharing what you've learned, both to help your own understanding of what's working and not working, and to share those successes with others and hopefully encourage others to take on that path. From there, modifying the pedagogy: taking what you've learned and the feedback that you've gotten, revamping those questions, and thinking again, okay, now I know that this activity helps with student self-regulated learning, but now I want to get a better understanding of in what ways it's doing that, or whether it's impacting X, Y, and Z. So it's an ongoing process, and it's really about thinking about how you're accomplishing that. As I mentioned, I know many of you are at early stages; I imagine many of you are from our most recent round of TLEF and SAP funds, so you're likely at that earlier stage. So we're going to spend a lot of time today on what we'll call the first and second blocks, defining questions and identifying sources and methods, to help you build a strong foundation for your evaluation plan.

Okay. So the first elements I want to talk about in defining your questions are what we call practice and intended outcomes. Practice is what you'd like to evaluate, what you're hoping to do. Intended outcomes are the impact that you hope will result from that practice, that thing that you've done. I'm going to walk you through some examples of each of these. For example, when thinking about your practice, this is the thing that you wrote your proposal about. It could be something like creating an open-access resource, adding a community engagement component to your program or courses, re-envisioning program goals, adding TA training, making content more accessible, or creating or redesigning course assignments. These are just some examples, ones that we commonly see in the proposals that we get, but of course it can be many other things. When talking about outcomes, so the impact of the thing that you're hoping to evaluate, these are some examples that we commonly see from people.
So, looking at changes in performance, a change in motivation or attitude, increased awareness of diversity issues as a result of whatever it is that you're implementing, or increased understanding of course content. Again, these are just some examples of things that you might have talked about in your proposal, or that you might be thinking are elements you want to evaluate. I'm going to walk through an example to help folks really think about what I mean and how to talk about this. So, thinking about your practice, using an example of resource development: an example could be developing self-study quizzes for all students in a multi-section course. Resource development is the thing that you're doing, and here I'm specifying: it's resource development, but what exactly am I developing? When thinking about the practice, it's also important to think about the context it will take place in. I use Psych 217 as an example because my graduate work was in the psychology department, so it's an area I'm well familiar with. It's a multi-section course that hundreds of students take each semester. So depending on your context, think about whether this is something that you're trying to generalize to one course, to one course that has multiple sections, to one course that has multiple lab sections, to a bunch of different courses, or whether it's something that might not be course-based at all. You'll see that in my language I typically talk about the impact on students and courses, but I recognize that some of you might be doing something that's meant to be facilitated for instructors or graduate students or another context. So I'm mindful that I use the word students, but this can be generalized to the context that you're in.

So the next thing I'd like to do is ask you to take a minute to think about the practice that you hope to implement via your project. This is a listing that was generated from TLEF projects, so it's mostly all-encompassing, but it might not capture everything that you're hoping to do. What I invite you to do is annotate these slides. In Zoom, there's an option where you see all the icon features of Zoom, so chat, the participant list, polls, sharing, all of those things; you should have a little icon that says annotate, and you can just click on one of the items. And yeah, you can see someone here has clicked already. So I'll just ask you to annotate on the slide any of the practices that you're hoping to implement as a result of your project. And if there's something that you're doing or hoping to do that doesn't fit here, please feel free to type it in the chat. This isn't meant to be all that you can do, or limiting; it's just a way for us to categorize and get a sense of what's going on in the various projects. So we're seeing a lot of folks in the resource development and curriculum and design areas. I'm going to move on; we'll loop back to this idea in just a minute. Actually, sorry, I'll stop here for a second, because I want to point to a specific resource that can be helpful and that this information is built off of. I'll share this with everyone in the chat. This is a resource that was developed, again, for TLEF evaluation, but Marisa and I had a chat, and there's a lot of overlap, and I do think it'll be helpful for those of you who are Students as Partners fund holders.
It's meant to be a guide to help walk you through this process. I'm going to walk you through it today very quickly, but it's something that I recommend you reference back to, and you can use it to build out your evaluation questions. The first activity on the worksheet is basically this: indicating the practice that you're hoping to implement, or that you're currently implementing. So I invite you to save a copy of that worksheet, download it, and flag it as very important material to reference as you build your evaluation plan. You'll see that list there, and I invite you to keep thinking about this type of language as you move forward.

The next element I want to invite you to think about is the idea of the intended outcome. In the example I was providing, looking at my practice of developing self-study quizzes for all sections of Psych 217, what is my goal in doing that? What is my intended outcome? For this example project that I'm working on, my intended outcome is to increase student learning and knowledge as a result of these resources. So student learning and knowledge is the outcome, and I'm hoping to increase it. What we're going to do next is the same type of activity, where I just ask you to think about the different types of intended outcomes you might have for your project and indicate those here, again using that annotation feature. And again, I'll point you back to that worksheet link that I just shared; this is built in as activity two in that worksheet. So again, a place for you to mark this down and keep track of it for yourselves. I'll just give people a few minutes to indicate their areas of interest. And again, if you'd like to open that worksheet and mark down what you're marking here as well, to keep track of it, you're more than welcome to.

Thanks for your question. Someone in the chat asked whether there could be two or more of them. And absolutely. We'll talk a little bit about narrowing your scope of evaluation and not asking too many questions, which can be difficult when we think about that iterative process. Often we ask people to think about just a few elements to target at first, but definitely, as a result of, in my example, developing these self-study quizzes, I might be interested in learning about an increase in student learning and knowledge, but the format of the quizzes might also be something that relates to student engagement, or as a result of developing these quizzes, the instructional team's teaching practices might change. So certainly each practice, each of the things that you're doing, might have multiple outcomes you're interested in assessing. That's absolutely fine. You likely also have multiple practices that will have different intended outcomes. So right now I'm just saying: track everything that you're interested in, and then, hopefully, through this single example that I'm providing, I can show you how to build out individual evaluation questions for each of these components. Great, thanks for your annotations, everyone. It looks like a fair mix of a lot of different elements here, which is great. I think this is always the exciting thing about these types of projects: people hoping to reach students and individuals in so many different ways. All right.
So moving on, and again using this example that I shared, where my hope is to develop self-study quizzes for all sections of Psych 217 with the goal of increasing student learning and knowledge, how can I build that into an evaluation question? In this context, I might say my evaluation question is: how do the self-study quizzes increase student knowledge of core concepts? And I really want to emphasize the importance of developing an evaluation question. It's one piece to think about your practice and your intended outcomes, and that's very important, but I really encourage you to then take those elements and build them into a question itself. The reason is that it really helps you target what it is that you're going to ask your students, or how you're going to gather that information; we'll talk about those methods in a second. But say, for example, I was going to do a survey to get this information. If I hadn't built out this evaluation question, what I might start doing is just creating a whole bunch of survey questions that ask about the student experience in a number of different ways, but may not actually target that learning of these core concepts. Or I might not be able to reflect back and say, oh, these were the four core concepts I was hoping they would learn, but actually I didn't ask about them in my survey. That's problematic. So having a really clear evaluation question is helpful: when you start developing the survey, focus group, or interview questions, or whatever method or measures you're planning to use to capture the evaluation question, you can reflect back and ask yourself carefully, do these questions, these survey questions in my example, actually help me answer my evaluation question? That's a common component of my job: reviewing people's surveys or focus group questions and making sure that they're actually going to be able to learn whether their practice had its intended outcome. Having a clear set of evaluation questions to start really helps you make sure that you're targeting what you're interested in, and not overburdening the people that you're questioning with a very long survey, which is a common, common concern.

So what I invite you to do now is spend about two minutes thinking about your evaluation question. And again, recognizing you might have multiple evaluation questions, think about maybe just one or two. I invite you to either write those down on a piece of paper or in the worksheet on page three, where there's a spot for you to develop them. I'll let people think about this for about two minutes, and then we'll spend about ten minutes where I invite folks to share those questions in the chat, and Marisa and I will provide some feedback, or provide some clarity, or ask for some clarity around those questions. Again, I want to be really clear that we're not going to come back to you and say, well, in our workshop in May you said that you were going to ask this question, and now in November your question has totally changed. That's not the intention of getting our feedback here. It's really just to help you start practicing how an evaluation question might look and to get some feedback on what makes a good evaluation question. So I'll stop talking for a minute and let you think about this, and then once questions start coming in, we'll start chatting. I'm just going to read this question out.
So: how did the interactive materials help students have a systematic, timely, and ongoing record of the class materials, exercises, and automatic feedback? Yeah, I think this is a great question as a starting point. Again, thinking down the line, once you start creating questions to ask about your evaluation question, you'll want to break down each of these elements; there's a lot compacted there. So whether it's systematic, whether it's timely, whether it's an ongoing record, you'd want to ask about each of those components. Just make sure that you're targeting each of those when you go to ask for that type of feedback. But that's a great start, thank you.

Colin is asking whether you should include the intended outcome. Yes, so in this example here I've created a sort of framework for what an outcome might be, thinking about student learning and knowledge. It's best if you can build that into your question. You don't have to use this exact language; think about what student learning and knowledge look like to you, what would indicate that, and build it into your question. It doesn't have to be these exact ones, and it can also be something that's not covered in this list here.

I saw someone had their hand up, but now I've lost you. If you have your hand up, please go ahead and ask your question. Yeah, Sol, I see you, I see you now, you've got your hand up. If you'd like to unmute yourself, please go ahead. Sorry, that was totally by accident, I apologize. That's okay.

Okay, so I see we have a few more questions in the chat. How is pharmacist facilitators' knowledge of core concepts in pedagogy, assessment, engagement, feedback, and EDI impacted by completing the five-module certificate program? Great, yeah, another really rich question. Again, once it comes down to that time, just make sure that you're breaking down each of those elements, because maybe the five-module certificate program is really helping with their understanding of assessment, but maybe it's not having as big of an impact on engagement. I hope it does, but just as an example, make sure you're targeting these different elements so that you can know which piece is working and which ones maybe could use some work. And again, that builds into this idea of the iterative nature of evaluation. I've never seen a case where I help with a project evaluation and everything is perfect on the first round and everyone loves everything and motivation is through the roof and grades have gone up 10%. There's always a little bit of wiggle and a little bit of learning along the way. And even in cases where knowledge of assessment and engagement is increasing, maybe there are some elements of the module that could use some improvement, or that people found confusing. So also think about evaluation of the tool itself. So yeah, just an example based on that.

Looking at the next question, where the outcome is to improve large-scale assessment feedback: looking at how useful the feedback was for the midterm, whether the feedback helped with understanding why they received the grade, and whether the feedback indicated actions that they might take to improve their results. Awesome.
So this is a really clear example where the big-picture goal is how helpful the assessment feedback was, but it's broken down into these individual elements. So that's really great, thanks for sharing that.

Next: reviewing existing modules of self-study questions and indicating challenges and difficulties, the level of engagement, and why those were happening. Yeah, so this is a great question, but it's more the sort of question that you would ask from the evaluation question. Thinking about the evaluation question as a whole, it would be looking at whether students are engaged in contributing to the question banks, and then these are the questions that you might want to ask as a follow-up. These are awesome, thank you.

Yeah, so you're asking about the survey questions you want to ask. We'll talk a little bit more about specific question design in a moment, and that's fine; it's fine to have a question where you know you want to ask people this item at the end of the project. That's a different piece than your evaluation question. Those are, let's say, the survey questions or focus group or interview questions you're going to ask. The evaluation question is the all-encompassing piece. So break it down: what is the overall goal? As a result of asking these questions, what is it that you hope you're going to learn? I'm just going back to your question, Sam. "How do the interactive materials..." That's great, that's your practice. A broad statement would be something like looking at how those impact student learning and knowledge, and student engagement and attitudes. I would say that encompasses all of the different elements that you have. And then the breakdown of that, or how you would know if that's happening, is by asking about whether it's timely, whether they're keeping an ongoing record, about the automatic feedback. So hopefully that helps clarify it for you: think about the overarching piece, and then what elements you'll need to ask about in order to know whether that's happened.

I see we have a lot of questions here that have been shared, and these are looking great. I'm not going to read them all out, because I'm mindful of time. But what I want to highlight is that a lot of you are using "how" questions, which is awesome. Something that I encourage when you're creating that question is thinking about how this is happening, not just whether it's happening. If you say, are students learning more? that's a yes/no question, and you don't really know why it's happening. So think about building your evaluation question around a bigger picture that will then let you build survey, interview, or focus group questions out of it that will tease apart how it's happening. If you see at the end of the semester that student grades have gone up, why has that happened? What elements of your practice have led that to happen?

Yeah, so I see a little bit of confusion around this, so just to clarify: an evaluation question is that bigger framework that you're working around. So, if I add self-study quizzes to my course, are students more motivated to learn? That's the bigger evaluation question.
If I was doing a survey, for example, I might not use that exact question with my students. I might not just say, did the self-study quizzes help you learn better? Well, I might ask that question; it might be something that I start a focus group with, something broad like that. Some of them might say yes, and I would say, okay, well, what about the self-study quizzes was helpful? How often were you using them? Was it that you went back to them before the midterm to review, or was it that they encouraged you to discuss the concepts with your colleagues or your peers? So the evaluation question is the bigger framework of what it is that you're hoping to know, and from that, you'll develop the individual questions that you would actually ask your students, other instructors, or whoever your target group is. Hopefully that's clarifying things a little bit more. I'm going to move on; I'm just mindful of the time.

So, given your evaluation question, the next step is to figure out how you would know whether that's happening. In my example, I'm wondering whether self-study quizzes are increasing student knowledge of core concepts; how will I know if that's happening? So I think about the different ways that I might ask about that, or the different components that I might ask about. Again, there's an element on the worksheet that I shared earlier, and I'll share that link again just in case anyone has lost it or joined the meeting after I shared it. This is built into that worksheet on page four. The idea here is looking at how you will measure that intended outcome, how you will know if it's happening. So again, building off this example that I've created, how do the self-study quizzes increase student knowledge of core concepts? Ways that I might look at student knowledge are performance on knowledge tests, or perhaps increases in their confidence in those core concepts. That's what I mean by measures here: they're meant to help you find answers, so specific, measurable pieces of those outcomes. And then the evaluation methods are how you're going to capture that information. So in this example, to look at performance I might use quizzes; to look at student confidence, I might consider doing a focus group or using a survey. I hope this helps a little bit with breaking down the bigger evaluation question into the specific elements I want to measure to capture it. When I say knowledge, that's an incredibly huge concept. So what am I thinking about when I say knowledge, and how will I measure knowledge in that way? Again, I encourage you, when you're at that stage, to use the worksheet to look at some examples. These are just some examples that we've put together of different elements. Looking at the worksheet now, when we talk about student knowledge, examples of the how, the measures that you might use, might be things like overall grades, projects, assignments or quizzes, some kind of standardized testing, knowledge retention over time, or reflection papers. So these are some of the measures, the ways that you might capture these elements.

So yeah, great question in the chat about what you are comparing this to, thinking about a baseline. This depends, and this is a tricky piece.
A lot of times, when we're looking at TLEF projects, there isn't really a baseline for people to compare against. You might be adding something completely new to the course that has never been there before. So it's not always about saying, is motivation higher this year compared to last year, but more about the subjective experience of the students or the individuals participating. In my example here, performance might be something that I want to capture and track moving forward, to see what their performance looks like. It's possible that if I'm using the same testing element, say a quiz about these core concepts that I've used in my teaching for years and years, I could consider looking at the aggregate, the overall score from previous years, and comparing it to how my students are performing now. That's one thing I could do. But it's also just about looking at what is working for the students in this way. So in my example, if I'm interested in measuring student confidence, I might ask something like: as a result of using these self-study quizzes, how confident are you that you'll perform well on the final exam? That might be a Likert scale question, from very confident they'll do really well to not confident at all. Or I might ask: as a result of doing these quizzes, do you feel confident that in future courses you take on this topic you'll have a better understanding of these concepts? So it doesn't always have to be about comparing against something else; it can also just be about the experience of using the resource. So I hope that answers your question. It's hard, because we often don't have that baseline information; we haven't asked these questions in prior contexts. It's great if you have that data and you can use it, of course, that's wonderful, if you can compare against a baseline without the tool or product that you've developed. But the goal of the evaluation isn't just to say students were performing statistically significantly better compared to 2021. I very, very rarely see that kind of analysis or those kinds of results in TLEF closure reports. It's more about understanding what the experience was, whether it was beneficial, whether it was helping, and what kind of improvement it had on the overall experience. I hope that answered your question.

Okay, so moving on, I'm going to pass it over to Marisa to talk a little bit more about evaluation methods.

Yeah, so this next part of the workshop will focus on the types of methods you might want to use to evaluate your project outcomes. You'll notice on this slide there's the blue that's circling around only a few individuals, versus the orange bar that's covering all of the individuals. This represents the different types of methods you can use to evaluate your outcomes. In this case, the blue is looking more qualitatively at experience using a smaller sample; for example, using that focus group method, which is only going to capture a few individuals' experiences. The orange bar is using a larger sample, which tends to mean looking at broader, more generalized ideas.
So in this case, for example, using a survey. What we tend to suggest is considering a mixed-methods approach, combining these two methods, because it will provide a richer set of information. This could be doing a survey and then following up with a focus group to dive into the themes that you noticed from the survey, or collecting more case study examples. You could also do the opposite, and start with a focus group to look at common themes or issues, and then from there create a survey to take those experiences to a larger group. Next slide, Trish. Thank you.

So these tend to be the three common evaluation methods that we often use, that I've often used in my own projects, and that I see others using in theirs. You have the interviews and focus groups, which, referring back to the other slide, are more of that blue colour, whereas surveys reach more people but aren't as in-depth as focus groups or interviews; surveys are like that orange bar that covers more people. What's really important is to make sure you think about what you want to measure before picking your tool. Over the next few slides, I'll provide a more detailed description of these three methods.

So, interviews and focus groups tend to provide richer data. They give a more detailed understanding of individuals' experiences, but only for a small subset of participants. One thing to keep in mind is that focus groups and interviews are more time consuming compared to surveys, so sometimes it is harder to recruit people to participate in them, and they also require more time for you and your team to schedule and facilitate, so that's something to keep in mind. One of the differences between interviews and focus groups is that focus groups may reveal more than interviews, because they elicit more discussion among the participants. Sometimes through these discussions, unique topics come up that you weren't even considering when you were developing the focus group questions, but they can offer some really important insights into your project outcomes. So that's where focus groups have a bit more benefit than interviews alone. But it's also important to consider the questions you want to ask. Interviews can be useful when the information you want to gather could be a little more sensitive or more difficult to discuss in a focus group, and you might find that you get more genuine answers from interviews in this way. That's not to say you couldn't ask these questions in a focus group; it's just something you might want to consider when formulating your questions and deciding what method to use.

So, a couple of tips on preparing and running focus groups and interviews. Make sure you have a protocol. Typically, I like to have around six to eight questions, plus a few follow-up prompting questions that might be helpful if you're not getting as much discussion from the question you've posed. You also want to think about the order of the questions. You tend to want to start more broad, to get your participants warmed up and thinking about the topic you're going to be discussing, and then begin to narrow the focus onto what's important, those important aspects that you want to spend the bulk of your time in the group discussing. Another tip would be to learn how to elicit discussion.
So for example, what I like to do when I run focus groups is have a list of what I call my go-to follow-up questions. They're things like: who else had the same thoughts about this? Or, has anyone else had a different experience than so-and-so? Just having those go-to follow-up questions will help with eliciting discussion and ease the flow of the group. Lastly, record the sessions. Of course, you need participant consent to record, but it's so important, because you're not going to remember; I guarantee you won't remember everything everybody says, especially in a focus group where you have multiple participants. And when you're facilitating, you mostly want to focus on the conversation, on what's being said and how best to elicit more discussion, so you want to be spending less time typing notes and trying to get a verbatim transcript going. So I definitely recommend getting that permission to record. Those are just a few tips. And then you can see... oh, never mind, we'll get to resources later, sorry. Next slide, Trish. Thanks.

So, different from interviews and focus groups, surveys can provide a larger sample of experience with the practice that you're trying to assess and evaluate. Surveys are also a good way to compare groups, for example by using demographic questions in the survey. If you're curious to know how your practice impacts international students versus domestic students, and what those differences are, you can incorporate those demographic questions within the survey, which will allow you to compare the groups. The other benefit of surveys is that they're easy to integrate into activities or assignments. For example, in some of the projects I've worked on, the professor provides 15 minutes of their own class time for students to complete the survey. That can also increase participation, which is another useful aspect of surveys.

And then I just want to briefly go over a few tips on survey design. First, you want to keep surveys brief, 10 to 15 minutes, and you want to focus on questions that will answer your evaluation goals; it's the exercise we were doing earlier, and some of you had already come up with a few really good questions that you'd have on your survey. When you're writing your survey questions, you want to make sure that each question is only targeting one element. For example, take a question like: how did the assignment feedback increase student motivation and understanding? Student motivation and understanding are two very different aspects, and you want to make sure your question isn't asking about both of those things, or you're not going to know which one the students are answering. If you do want to ask more in-depth questions, then you might consider using that mixed-methods approach and saving those questions for a focus group. Surveys are a great way to capture quantitative data, and they often use Likert scales, which are a great way to do that. Another good practice with surveys is to pilot them. Unlike with focus groups, you don't have a facilitator reading out the questions and gauging people's understanding of what they're being asked, so you really rely on those who are participating to understand the wording of the items. The results of your survey tend to be highly dependent on making your questions clear and simple.
And then the last tip for using surveys is to use a FIPPA-compliant tool. FIPPA stands for the Freedom of Information and Protection of Privacy Act. At UBC, for example, we often use Qualtrics for surveys, because that is a FIPPA-compliant UBC survey tool.

Awesome, thanks, Marisa. And I just popped in the chat... last month, or whichever month, I don't know what month we're in... in March, which was two months ago apparently, we had a workshop on survey design that I helped facilitate. So I just popped the link to those slides in the chat. We'll probably offer that workshop again in a few months, but that resource is there if you want a little bit more information about survey design. We also have some resources at the end, a bunch of links that I'll share, including some resources on interviews and focus groups.

Okay, so moving forward, the last thing that I want to talk about for the evaluation plan is really encouraging folks to create an action plan. Again, on that worksheet I shared earlier, the last page has a section that you can fill out to develop and track these milestones. The idea is that you can build a timeline of what needs to happen for your evaluation to happen, and figure out who is responsible for each of the elements and what types of things you need. This could be resources: people, human resources, perhaps someone to help facilitate a focus group or do the interviews; things like gift cards or incentives that you're offering individuals; or the survey itself, designing the thing that you're going to use. So it helps you figure out what those elements are and build them into a timeline that you can work back from. Say, for example, you wanted to do a survey at the beginning of the fall semester, a survey in September: what needs to be in place in order for that survey to happen? You need someone to build the survey in Qualtrics. You need someone to develop those questions. You probably want to have a couple of meetings to figure out whether the questions are good, and maybe get feedback from someone like myself or Marisa on them. So you work backwards from, say, September 12th to figure out what needs to be in place for those pieces to happen. And then also, how will you know if those milestones have been met? If you have an evaluation goal of understanding X, Y, and Z, where does that come in on the timeline? When will you have asked about that evaluation question? So again, I really encourage you to use the worksheet to help make this plan actually happen. And I know with the Students as Partners Fund there's a lot more student involvement, so Marisa, I don't know if there's anything you want to add about that piece.

Yeah, I just wanted to say that the difference with the Students as Partners Fund is that there are going to be students who are actually taking a lead role in evaluation. So all of what we've talked about here is also to help you, as students, feel empowered and responsible for the evaluation. And of course, you'll work in your partnerships with your faculty partner on this. So keep in mind: what is your role as a student in the evaluation? And especially with Students as Partners, you have a unique role, because you are students and you will be working with students.
And so there is that level of trust, and that's something that's unique to Students as Partners. So I did just want to share that and bring your attention to it.

So now what I really want to get to is that final piece: extra considerations, some challenges, things that come up along the way that aren't wrapped around building that evaluation question, but that are really important to consider nonetheless. One piece is that when you're creating your survey questions, your focus group questions, your interview questions, whatever it might be, just because you understand the way something is worded doesn't mean that your participants will understand it. So I really, really encourage you to pilot your materials. What this can look like is creating a survey, for example, and having a few students, or even some colleagues, just read through the questions, not with the intention of answering them, but just looking at them and doing a sort of talk-aloud: when you read this question, what do you think I'm asking? Can you rephrase this question in another way? This is to make sure that what you're asking is being understood by other people. Oftentimes we use technical language like outcomes, or products, or things like that, which isn't the way other people think about it. Or you might have objectives that are built into the course, but they use language that isn't as familiar to the people who are actually receiving that objective or participating in the course. So just make sure that you're clarifying all of your items, reducing jargon, reducing any confusing items, and making sure the questions are being understood as intended, so that you can get the answers that you want.

The other piece I want to talk about is the idea that something being interesting doesn't necessarily make it an evaluation question. As we were working through the slides at the beginning, and I was asking you to talk about all of the different elements that you're interested in, different practices or different outcomes, you could probably go through that whole list and say, oh yeah, I do want to know about that, and I want to know about this too, and I want to know about all these things. And that's great, and I'm sure you could collect interesting information about all of that. But something being interesting doesn't make it an evaluation question. So again, it's important to really take time at the start to think about what your overarching goal for this project is, and to ask questions that will specifically help you answer that question. In the example of a survey, once you have your survey developed, you can go through and ask: okay, how does this question help me understand my evaluation goals? Does it? Maybe not. Maybe it is still something that's important and that you want to ask about, but really reflect on whether the questions you're asking are going to help you answer that overarching goal. So think about things like relevance: do I actually need this item? What will it tell me? And consider the specificity of the question. This is one that I often deal with, where someone is interested in a concept like motivation.
And so, if you're asking multiple questions about motivation, what will you do if some of your questions show a positive outcome but some of them don't? How do you deal with that difference? How do you come to terms with motivation being high in one case but not in another? So when you want to ask about a broader concept like motivation, be mindful of the way you ask it, and consider limiting the number of questions you ask around one single concept. Also think about inclusion, so asking for information in an inclusive way: making sure that participants feel like they can answer the question, and that if they feel it's not relevant to them, or they're not comfortable, or it doesn't apply to them, they have the option either to not answer the question or to provide feedback on why it didn't relate to them.

The next thing I want to talk about is how to integrate evaluation into the course. Marisa touched on this briefly earlier, but think about the flow, time, and cost of administering evaluation. Be mindful of when you can integrate these focus groups or surveys into the course so that it makes sense. If you're building in a specific activity, it's probably a good idea to ask about the experience shortly after students have done that activity, so that participants can recall the information you're asking about. If you wait until next semester to do some kind of follow-up focus group, they might not remember what happened on January 12, 2022; I know I certainly wouldn't. So try to tie the evaluation in as closely as possible and, if possible, as Marisa said, incorporate it into class time. I've seen projects where students are asked to do a reflection paper based on an activity that they did, and that's built into the course itself. Also be mindful of the cost of administering different elements, for instance if you need to hire some students to help you run the focus groups or help with the analysis, and of cost in terms of other people's time: if you're asking students to participate, be mindful not to create a 30-minute survey that they're being asked to do outside of class time. All of those types of considerations.

The final piece I want to talk about here is ethics, and whether you need BREB approval. BREB is UBC's Behavioural Research Ethics Board, which covers behavioural research, so any research that you're doing with humans; they're the institutional ethics board that you would apply to. This is a consideration that you need to make when you're planning your project: whether you're thinking about it as research or as evaluation. If you are thinking about your project with more of a research lens, then you do need to go through the BREB approval process. We have a whole other workshop on this, and we have a series of resources that can help you figure out where your project falls. I'm just going to post these into the chat. The first one is a link to the Institute for the Scholarship of Teaching and Learning website, where Marisa and I are both also situated. We have an application guide there that helps you think this process through.
We've also put together a short 10-minute video that walks you through understanding whether this is a path you need to pursue, and finally, a checklist put together by the BREB office that can help you think about this consideration. I also saw someone shared some links in the chat. These are for if you go down the research path and decide to submit an application: there's one resource, the Tri-Council Policy Statement course, which is a training program you're required to complete if you're going through the BREB application process. But it's actually a really helpful course to go through even if you're not, because it covers a lot of things like being mindful about ethical practice. Even if you're not going down that research route and doing a BREB application, you still need to be ethical in the work that you're doing, and that includes things like using a FIPPA-compliant tool, ensuring participant privacy, and being mindful of the questions you're asking individuals and the ways you're asking them. So it's a really great resource that I highly recommend. This individual also shared a resource that's helpful as training for folks who are working with Indigenous communities, so I've also shared that in the chat with you in case it's relevant for you. Thank you for sharing that link.

So the last piece I want to touch on, and then we'll have some time for Q&A, is some ideas on communicating your findings. For both the TLEF and the Students as Partners Fund, you will be required to do a closure report, and if you're doing a TLEF project, you also have to do an annual check-in report. In that progress report or your closure report, you will need to report out on what you've learned and how you've evaluated what you've done to date. Something that I always encourage people to do is to think about, if you were writing a paper and you had even just your abstract, what the key message is that you're trying to get across: what you were hoping to do, what actually happened, and how you know that it happened, so what evaluation strategy you used, how the data was collected and analyzed, all of those types of questions. This is like thinking about how you would talk about it in a one-minute elevator pitch. When you do your closure report or your progress report, you'll have space to elaborate; it won't just be one or two sentences in that section, I hope. But I just wanted to give some examples here of what that means, and what we mean when we talk about communicating your findings.

The first is an example that I would rather not see when I read your closure reports: "We conducted surveys to determine student satisfaction." Okay, I know that you conducted a survey, but I don't know what you mean by student satisfaction. What were students meant to be satisfied with? I also don't know whether they were satisfied or not. So what was the outcome? What was learned from those surveys? The other examples here are based on closure reports, so they're anonymized, but they give you a sense of the way you might want to talk about the evaluation outcome. In the first example: in a Qualtrics survey, so that's the tool you're using, and the survey is your method, on a new essay rubric, so that's the practice that you've implemented.
The majority of students reported that the new essay rubric helped them prepare for the final exam. This might fall under an outcome like learning outcomes or an increase in learning, or it could fall under something like student motivation and student experience. You can see in this example that I'm able to understand, as an outsider, not just as the evaluation consultant but as someone else, what it was that you did, what you learned about what you did, and how you learned that thing. And again, that's just really helpful when thinking about disseminating your results broadly, the type of information you want to share with folks. Another example, just to help hit this point home: the new open textbook, so that's the practice, the thing that you did in whatever course it was, saved students X number of dollars, so that's the outcome, based on the average cost of prior textbooks used. This is a case where you have prior information that you're able to compare against, just as an example of how that might look. So again, I'm really emphasizing that you want to be able to explain what the practice was, so what it was that you did; how the impacts were measured, so what strategies and methods you used; and what the outcome of that evaluation was. Another brainstorming activity you can do is write this kind of statement based on what you hope will happen from your project, and then work backwards: how will I know that? How will I find out that students saved this money? How will I learn that students felt more prepared? What types of questions do I need to ask? What type of tool do I need to use to get that information?

With that, I just have two final slides to share. One is some additional resources. As I said, Marisa and I both sit within the Institute for the Scholarship of Teaching and Learning (ISoTL), within CTLT at UBC, and there are many, many resources that we've put together there under our resource hub: information about surveys, interviews, and focus groups, and information about asking demographic questions about things like gender and ethnicity. I already shared some resources on the BREB application process, and there are also sample consent forms; again, regardless of whether you go through the BREB application process, you still need to ask for consent to use the data that you're collecting, and there are examples of consent forms there that you can reference. I also encourage you to go to the TLEF website, which has some resources on evaluation, including those worksheets that I shared earlier, as well as a couple of testimonials. Reflecting back on that slide I just shared, basically we got permission to use what people put in their closure reports, so you can really see the fleshed-out versions, not just the one or two sentences, of what people did and what they learned from their projects. On the TLEF page, you can also see examples of completed closure reports; for projects that have submitted their closure report, you can read the full report there. And the last thing I want to share is just extra human resources: email addresses for both myself and Marisa. I'll also point out that for a number of the faculties, there are specific faculty liaisons who typically help with TLEF proposals or TLEF implementation at large.
And I just also recommend that if you have broader questions about your project, they are great resources to reach out to, with years and years of expertise, and always a very helpful bunch. So with that, I will stop talking. We have about 15 minutes left, so I invite anyone who has questions or clarifications to please unmute yourself. If you have very specific questions about your own project, like "I want to do this evaluation, and these are my questions, can you read my survey?", please shoot an email to myself or Marisa: myself if it's a TLEF project, Marisa if it's an SAP project. We're happy to do one-on-one consultations with you about your specific project. But if you have bigger questions, or questions that you think might be interesting or relevant to the whole group, please unmute yourself or feel free to type your questions in the chat.