Welcome to Developing Multiple Choice Questions: Writing and Creating Formative and Inclusive Questions in H5P. I'm Simon Lullio, from the Psychology Department. We also have Dr. Lillian May, who's going to be joining us to talk about EDI considerations in developing multiple choice questions. Thank you so much, Lillie, for coming to share your time and expertise with us today. And Katie, who you've heard from extensively, is also part of the presentation. Just a quick roadmap of what we will be doing today. We'll do a brief overview of summative versus formative assessment, and what I mean by brief is very, very brief. Then a little bit of motivation: why are we focusing on multiple choice questions? That said, I hope that a lot of what we talk about today will be transferable to short answer questions or other forms of assessment that we may choose. We'll talk a little bit about the role that multiple choice questions can play in formative assessment, something that both Cynthia Brame and Katie have touched on and that we're likely to touch on again today. Then most of my part of today will be about ways we can improve how we write multiple choice questions, before handing over to Lillie, who's going to spend some time looking at multiple choice questions through an EDI lens. And then we're going to spend maybe the last five or ten minutes in H5P, talking about accessibility for multiple choice questions in H5P. A question you might have here: this is an H5P symposium, and we've had these workshops in H5P, so why the delay until the end of the session to talk about H5P and accessibility with multiple choice questions? Why are we spending most of this workshop in more of a seminar or lecture style? We already know that there's a fair amount of accessibility built into H5P's widgets.
Many of the most commonly used widgets do have high accessibility, and the trick with those is knowing which options to tick and which to tweak. Several forums and publications on accessibility (one was linked earlier in this chat) say up front that questions of accessibility actually start at content generation, and that a lot of issues lie in how the content is made. No matter how accessible your platform is, if you put garbage in, you get garbage out. So we're going to spend a lot of time looking at how we can craft questions so that they are accessible, so that we put good stuff in and get good stuff out. And afterwards we'll look into making good H5P questions. As I said, I'll do a brief introduction to summative versus formative assessment. Summative assessment assesses student learning on an instructional unit. This can be partway through a term, like a midterm might be, it might be a written assignment, or it could be a final exam. It often tests how much a student has mastered, or ostensibly learned, the content. These are formal opportunities, and they are often high stakes because they count towards the student's grade, so there's a fair amount of anxiety that comes along with summative assessment. Formative assessment, on the other hand, provides ongoing feedback. It can provide ongoing feedback to the instructor: for instance, I put up that Mentimeter poll, and that gives me some idea of how much experience you've had with multiple choice questions, so I can tailor my interactions to what the class knows or doesn't know. But it also provides ongoing feedback to the student about their own understanding and their learning.
It's informal and low stakes because it doesn't form part of their final grade. There may be some instances where a formative assessment forms part of the final grade, where you might grade something for completion rather than right or wrong, but generally formative assessment is informal and low stakes. So we're going to focus on formative assessment. As I said earlier, it can be used as feedback for the instructor about what the students understand and what they don't, and where you might need to rethink your instructional style or how you present, or re-present, material. But it also provides learning opportunities for students, which has been termed the testing effect or the doer effect. When students get a practice question, they can see whether they got it right or wrong, and over a century of research shows that embedding these types of questions in a formative assessment leads to significant learning gains for students. So much so that we can see a 10% difference in grades between students who have practice tests or formative assessment opportunities and those who do not. So this is a vital tool that I think most of us are familiar with, and one that H5P allows us to utilize quite easily and quite nicely. Why am I focusing on multiple choice questions? Well, they've been part of instructional units since the 1910s, and they are honestly one of the most common, if not the most common, assessment techniques. Especially in psychology, where you can have 300 to 400 students in a classroom, multiple choice questions provide an easy way to assess many students' learning, where you can turn around grades rather quickly and where marking and grading don't require a significant amount of labor.
And research on the testing effect, across two or three meta-analyses, shows that when the formative assessment lines up with the summative assessment, you get stronger learning effects. If your summative assessment has short answer questions, then your formative assessment should also have short answer questions. If your summative assessment has multiple choice questions, then your formative assessment should also have multiple choice questions. You do still find effects if you have a multiple choice formative assessment and a short answer summative assessment, but when the techniques align, you get the stronger doer effect or testing effect. Now, given the abundance of multiple choice questions, one threat to student performance is not student misunderstanding or failure to learn, but how we construct our multiple choice questions. Poorly constructed multiple choice questions can lead 10 to 15% of students to fail a test who should actually have passed. And once again, it's not for lack of understanding, but because of some characteristic of the multiple choice question itself. So it's important for us to focus on good practices for formative as well as summative assessments, so that we can squeeze the most out of this and really help our students succeed as much as possible. Now, when we're looking at what makes a good or a bad multiple choice question, we can look at several different metrics, which we'll be popping in and out of today, so I thought I'd front-end them and let you know what these metrics are. We can talk about validity. Validity asks: does my question measure learning? Does it measure learning within the specific content area, right? So if I'm teaching the stages of sleep, am I really measuring how much a student has learned about that?
So, are my questions measuring what they're supposed to measure? We can talk about reliability. If you have a pedometer on your wrist and you walk to work and it reads 600 steps, and then you walk home and it reads 800 steps, that's not a very reliable pedometer; it's not returning consistent results. We can ask the same about multiple choice questions: are they stable, do they return consistent results, and are they consistent with the other multiple choice questions within a given testing unit? We can also look at difficulty, which refers to how many students got the question correct and how many got it incorrect. And then one of my favorite metrics, one that I find quite interesting, is the discrimination index. This is an item's ability to tell the difference between students who performed well on the other items in the test and those who did not. If we have good discriminability, we'll find that students who performed well on the other items of the test also performed well on this item. So it's able to discriminate between students who did well on the midterm or the testing opportunity and those who did not. Now, some benchmarks so that we're all on the same page. We can see how many students get a question correct; that's the difficulty. If fewer than 30% of students get it right, this is what Ebel and Miller would consider a tough, hard question. A medium difficulty question is one where somewhere between 30 and 80% of students get the question correct. An easy question is one where more than 80% of students get it correct. And what Ebel and Miller say is that we should be aiming for the majority of our questions to hit that medium difficulty, somewhere between 30 and 80% of students getting the question correct. We also spoke about the discrimination index: how well does a question distinguish between high and low performers on the rest of the assessment? This is reflected in a ratio.
A question that is not discriminating has a discrimination index below 0.20. One with fair discrimination has an index between 0.20 and 0.29, good is between 0.30 and 0.39, and excellent is above 0.40. So, say you get a discrimination index of 0.25: this tells us that the proportion of high performing students who answered the question correctly is 25 percentage points higher than the proportion of low performing students who answered it correctly. That's how we interpret the discrimination index. Of course, we can also get a negative discrimination index, which shows that students who performed poorly on the rest of the test performed well on that particular item, and that also gives us some information. Now, once again, why am I focusing on multiple choice questions? Not only are they among the most used, but when we look at the literature, DiBattista analyzed close to 1,200 multiple choice questions spread across 16 different classroom tests across several disciplines, and found that the average discrimination across these 1,000-plus questions was 0.25, which would be fair. What is quite distressing is that 33% of these multiple choice questions were actually considered poor: they weren't able to discriminate between high performers and low performers. Other research (Aria Guia) also found an average discrimination of 0.25, and Gaja, who looked at medical exams, found an average discrimination of 0.14. So if we're trying to train doctors, engineers, psychologists, the students we're trying to train, it seems from the literature that we don't write questions as well as we should.
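Since these two metrics come up throughout the talk, here is a minimal sketch of how they could be computed. The data, function name, and the 27% group fraction are illustrative choices of mine (27% is a conventional cutoff for the upper/lower-group form of the discrimination index), not something taken from the studies cited:

```python
# Illustrative item analysis: difficulty and the upper/lower-group
# discrimination index. All data and names here are made up for the example.

def item_stats(total_scores, item_correct, group_frac=0.27):
    """total_scores: each student's overall test score.
    item_correct: 1 if that student answered this item correctly, else 0."""
    n = len(total_scores)
    # Rank students by overall test score, best first.
    order = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    k = max(1, int(n * group_frac))  # size of the upper and lower groups
    upper = sum(item_correct[i] for i in order[:k])
    lower = sum(item_correct[i] for i in order[-k:])
    difficulty = sum(item_correct) / n    # proportion who got the item right
    discrimination = (upper - lower) / k  # 0.25 means the upper group did
    return difficulty, discrimination     # 25 points better than the lower

# Ten students: the top three scorers got this item right, nobody else did.
scores = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
correct = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
diff, disc = item_stats(scores, correct)
print(diff, disc)  # 0.3 1.0
```

A negative discrimination, as described above, falls out of the same formula: if the low scorers got the item right and the high scorers didn't, `upper - lower` goes negative.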
So there's a lot of literature on writing good multiple choice questions, and it can be quite daunting. In my reading of the literature, I've pulled out three or so aspects that I will focus on now, and there's a fourth, which I've asked Lily to come talk about. The first one I'm going to talk about is wording. The second is response options: how many response options is it best to have? Then, once you've got your three to five response options, how do we create good, functioning distractors? Then we'll hand over to Lily, who'll talk about multiple choice questions from an EDI perspective, before coming back into H5P. So that is the roadmap; I'll bring this slide up a couple of times as we move through the talk today. Let's take a look at wording. The first hint for improving multiple choice questions is to revise them. Not once, not twice, but three times. Why are we revising so much? Well, the literature shows that between 30 and 65% of well-established multiple choice questions have some sort of spelling error, grammatical error, or double negative. So in some cases, more than half of our multiple choice questions have some sort of easily corrected error. Nodo Keio, once again in a medical setting, found that up to 85% of the multiple choice questions used there had some sort of wording error. And these flaws affect student performance because they make the question ambiguous: students are not sure how to interpret it, and that has a real outcome on their performance. So revise, revise, revise. The second thing I'd like to talk about is simplifying the language. This draws on a paper that Daniel Riccardi published in 2020. It's a fascinating study, and we see from various linguistic perspectives that spoken English and written English are two different beasts.
Even within spoken English, the way you might talk to a best friend is very different from the way you might talk to a student, which is very different from the way you might talk to a parent. We use language in different ways. And when we look at written English, especially in academia, we find higher lexical complexity with longer nominal groups. Now, that sentence alone is quite high in lexical complexity and nominal groups, right? Lexical complexity refers to the proportion of content words, like nouns, verbs, adjectives, and adverbs, in a given sentence. It often comes with advanced vocabulary, conceptual depth, and complexity in how the sentences are arranged. That's lexical complexity. Nominal groups consist of a series of words clustered around a noun, often including modifiers like adjectives or participles, prepositional phrases, and other nouns, all of which elaborate on and add information to the main noun. So for instance, if a multiple choice question asks, "What is the likelihood of finding X in densely populated metropolitan areas?", then "densely populated metropolitan" is a stack of three modifiers describing "areas", and that adds a fair amount of cognitive load for students. Here is an example taken from the Riccardi paper from 2020, and what I quite like about it is that it's a complex multiple choice question that is nevertheless relatively short. It states: "The class of psychological disorders characterized by people being deprived of contact with portions of their consciousness, resulting in the disruption of the sense of identity, is..." and then there were five options.
What we see here is a fair amount of linguistic complexity. First, it says there is a class of psychological disorders, which implies there is more than one class, and of these several classes we are referring to one. This one class is characterized by people who lose contact with portions of their consciousness. As a result of this, they might have a disruption of their sense of identity. And then embedded on top of all of this is a question: which class is this one? So within a relatively short question, we've got a lot to unpack. If you were to unpack this multiple choice question, it would look something like: "There are different classes of psychological disorders. In one class of psychological disorders, people lose contact with portions of their consciousness. This loss can disrupt their sense of identity. What is the name of this class of psychological disorders?" You can see that we've not changed anything about the content; we've just unpacked it into smaller bits that make it a little easier to understand and to see how things build on each other. And what was quite astounding is that, simply by unpacking these really dense nominal groups and reducing the linguistic complexity, students who received these unpacked multiple choice questions in a midterm were 8% more likely to get them correct than students who got the linguistically complex versions. What is quite telling to me about this study is that it was the linguistic density of the sentence that led to poor performance, not a lack of student understanding. The students understand the content, but if a question is asked in a way that is quite dense, it can lead to a decrement in performance. Now, one outcome of this is that we've taken a relatively short question and added a lot more words. So you might ask: Simon, what about the length of a question impacting performance?
Well, we can look at another study, by Al-Jahani, published relatively recently, which asked: is question difficulty a function of question length? Remember the difficulty benchmarks: if 30% of students or fewer got the question correct, it was a hard question; medium is between 30 and 80%; and easy is 80% or more, and we're going for that medium-difficulty sweet spot. They then divided up their questions by length, looking at the question stems: stems with fewer than 70 words were considered short questions, medium-length questions were 70 to 100 words, and long questions had more than 100 words. On this slide, the bars show what proportion of questions of each length fell into the easy, moderate, and hard categories: the blue bars are the short questions, the orange bars the medium questions, and the gray bars the long questions. When we look at it, of the roughly 257 questions Al-Jahani examined, about 55% of short questions fell into the moderate category, just under 20% into the hard category, and just over 25% into the easy category. We see a broadly similar pattern for the medium-length questions, though not quite the same: about 15% more fell into the easy category, about 15% fewer into the moderate category, and about the same number into the hard category. When they looked at the long questions, they actually found that most of them, almost 80%, fell into that sweet spot.
There were significantly fewer in the hard category and significantly fewer in the easy category. Now, I'm not saying we should always do this. I think it's always instructive to include easy questions so that everyone can get a sense of confidence within the midterm, and I think it's good to challenge students. But, at least for me, it went against my expectations: these long questions often fall into this sweet spot, so to speak. Catherine Lyon, over here at UBC, followed up on the students in the Daniel Riccardi paper, students who had English as an additional language, with several focus groups. And what I quite like about this quote from the Lyon paper is: "multiple choice questions have more details when asking the question; it's really helpful." So students actually find that these longer questions have more detail, that they're able to parse them more easily, and they find that helpful. So it seems that simplifying the language, even though it may expand the number of words you're using, is not necessarily a bad thing if done correctly; it can actually help students by giving them more details in a digestible way. The next hint is to avoid negatively-phrased questions, or I suppose I should rather say, to favor positively-phrased questions. Negatively-phrased questions increase cognitive load; it takes a little more to wrap your head around what is actually being said. They also increase the likelihood of a double negative. That double negative can either be entirely in the question stem, or you might get one negative in the question stem and another negative in one of the response options. And this double negative is partly what increases the cognitive load.
Now, what's quite interesting with this Chiavaroli paper is that they found that when students got these negatively-framed questions wrong, it seemed to be because they overlooked the negative wording. They give this example, lifted directly from the paper: "Tinted lenses for outdoor use are least likely to benefit a person with which of the following ocular conditions: A, Holmes-Adie pupil; B, retinitis pigmentosa; C, keratoconus; D, hemianopia." I don't know why I decided to read that out in a recorded lecture. Now, D, hemianopia, is the condition where people are least likely to benefit from tinted lenses, whereas the people most likely to benefit are those with retinitis pigmentosa, a degeneration of the pigments in the retina that makes the eye more sensitive to light. When they analyzed the responses, 6% of students chose A, the Holmes-Adie pupil; 17% chose B, retinitis pigmentosa; 8% chose option C; and 69% chose the correct answer, D, hemianopia. But then we've got the discrimination index, and what's quite interesting is that the students who performed well on the rest of the midterm, or the rest of this testing occasion, were more likely to choose the incorrect answer than the students who got the question right. In fact, the students who got this question correct were more likely to get other questions incorrect on the rest of the midterm. So this item was actually discriminating against the students who were typically high performers on the rest of the paper. Chiavaroli ran several analyses, and this is the conclusion they drew.
"In the absence of alternative content-based explanations, the most plausible explanation is that in the process of working through each option, several high-achieving candidates overlooked or forgot the negative orientation of the stem. In so doing, they appear to have been drawn into selecting the most appropriate response," that is, the answer to the positive version of the question. So when I said earlier that students tend to overlook this negative wording, it seems to particularly disadvantage the students who know their work: they might, for whatever reason, look past the negative and choose the most appropriate answer. Now, Chiavaroli, and this comes not only from Chiavaroli but from several other papers on avoiding negatively worded questions, says that there's no real consistent degradation from negatively worded questions. This is not something we can point to as cleanly as revising the wording or simplifying the language. When we look at how negatively worded questions function, sometimes they decrease performance and sometimes they don't affect performance at all. But in this paper they cited a quote from the University of Kansas which I think summarizes quite nicely the educational philosophy behind whether we should use negatively phrased questions or not: educational content tends not to be learned as a collection of non-facts or false statements; it is likely stored as a collection of positively worded truths. When we teach in our classes, we're not primarily teaching students to avoid pitfalls; we're teaching them what the content actually is. So I think, just from a pedagogical, philosophical point of view, it makes sense, whenever possible (and it's not always possible), to avoid negatively phrased questions, or rather to favor positively phrased questions.
Do we have any questions before I move on? The slides are available over here if anyone would like them. So please feel free to ask any questions if you have any. Let's look at response options. On this little Mentimeter quiz, you can see that most people tend to use four response options, followed by those who use five, and a few of you use two response options and a few use one. So let's look at what the science says about response options: what is the optimal number? There's a battle between people saying three and people saying five response options. Personally, I use five response options myself, but when we look at the difference between questions with three response options and questions with five, there's actually no real appreciable difference in reliability, validity, or discriminability. Items with three response options and items with five are equally discriminating, equally valid, and equally reliable. However, this overlooks some important caveats. Three-option multiple choice questions are superior in the following ways. They are more efficient: Schneid et al. found that students completed about 16% more multiple choice questions per hour, which I think translates to saving about five or six seconds per question, so students are able to move through them more efficiently. Owen and Froman found similar results: students were able to complete 17% more multiple choice questions per hour. A number of studies show no difference in performance between three-option questions and four- or five-option questions; however, the Schneid article did find that students perform better on multiple choice questions with three response options compared to four or five.
Now, for someone who has to generate a lot of multiple choice questions in my work, this next one is quite appealing. If you're only trying to think of three response options rather than five, you're able to generate questions more quickly, and you're able to cover more content from your class. And because you have fewer distractors to think up, you're able to create better distractors. So when we take all of these together, even though I said on the previous slide that there's no appreciable difference between three, four, or five response options, when we look at efficiency, performance, coverage, and being able to write better distractors, better questions, all of these lead to an improvement in validity, reliability, and discriminability, especially that last one, being able to write better questions. So this is where we move on from the optimal number of response options: if you have fewer response options to write, you're able to think more about creating good distractors. I ran a midterm about two weeks ago in my introduction to psychology class, and this is one of the questions that I posed. Wolraich et al. conducted a meta-analysis of 23 studies on the relationship between sugar and hyperactivity in children. Contrary to popular belief, this meta-analysis found that sugar consumption and hyperactive behavior were not correlated. What third variable might lead parents to continue to believe that there is a link between a child's sugar intake and hyperactive behavior? The first option was: the results of the meta-analysis show that the correlation between sugar consumption and hyperactive behavior is an illusory correlation. Third variables can give rise to illusory correlations, but this is not what the question is asking; the question is asking specifically about a third variable.
The second option: we cannot determine whether sugar produces an energy spike, fueling hyperactive behavior, or whether people who are acting hyperactive engage in more pleasure-seeking behavior. This refers to the direction of a correlation, and when we talk about correlations we can't infer causation, because either variable could affect the other in reciprocal ways. So that one is incorrect. The third option: these results cannot be interpreted with confidence because this type of study represents a quasi-experimental design. This is not really correct; it was trying to see whether students could disentangle research designs, but once again, we're asking about third variables. This is the correct answer: sugar is often consumed at special events like birthday parties, and it is the event that leads to both higher sugar consumption and hyperactive behavior. A kid eats cake, all of their friends are there, and they're going bonkers, but people tend to think it's because they've eaten sugar. So that's the third variable. And then, honestly, I just ran out of good distractors, so I wrote, "hmm, cake." I put it in there because I couldn't think of any good distractors and thought, oh well, I'll come back to it and write a better one. One of the students in my midterm pointed this out to me, and I said, yes, I just put it in there for a bit of a laugh. So when we look at the item statistics, how did this question perform? Option A was chosen by 11% of students and option B by 13%; these are functional distractors. Only 2% chose option C, and 0% chose my cake remark; those are non-functional distractors. So I could simply have deleted option C and option E, and there would have been absolutely no change in discriminability. We see that 10 to 15% of distractors are written in a way that is ambiguous or poor.
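A check like the one I did on this question's distractors can be sketched in a few lines. This is an illustrative sketch (the function name is mine), using the common rule of thumb that a distractor chosen by 5% or fewer of students is non-functional; the percentages approximate the sugar/hyperactivity question above, with D as the keyed answer:

```python
# Flag non-functional distractors from answer-choice frequencies.
# The counts below approximate the sugar/hyperactivity question
# (D is the keyed answer); the 5% cutoff is a common rule of thumb.

def nonfunctional_distractors(choice_counts, correct, threshold=0.05):
    """choice_counts: mapping of option label -> number (or %) of students."""
    total = sum(choice_counts.values())
    return [opt for opt, n in choice_counts.items()
            if opt != correct and n / total <= threshold]

counts = {"A": 11, "B": 13, "C": 2, "D": 74, "E": 0}
print(nonfunctional_distractors(counts, correct="D"))  # ['C', 'E']
```

Run against this item, it would flag exactly the two options I could have deleted, C and the cake remark, without changing discriminability.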
And we see that 10 to 15% of students then fail who should not have failed, given these poor distractors. So a bad distractor does not per se affect discriminability; rather, it's creating good distractors that increases discriminability. And deleting distractors at random decreases reliability, because you might delete a functional distractor and keep a non-functional one. This is what I was getting at in the previous point: if you have fewer response options to create, you can actually create better distractors, which then increases discriminability. What makes a poor distractor? Well, if you remember from the slide, we had 11% choosing A, 13% choosing B, 2% choosing C, and 0% choosing E. The literature says that a bad distractor is one that is chosen by 5% or fewer of students. So when we're creating distractors, we need to look at how they function, take out the ones that are non-functional, and then either write better ones or just leave them out altogether. Some other considerations, for which I have some examples that maybe I'll show after Lily has finished talking. Question stems should have a question in them; you should almost be able to answer the question without seeing the multiple choice options below it. Randomize answers: as a test-taking technique, many students might guess C if they don't know the answer, so randomizing undercuts that guessing strategy, unless you have options with a natural order. For example, you could ask how many electrons are in the 2p subshell of an oxygen atom, with the options two, four, six, or eight. When the numbers go sequentially like that, it's better to offer them in order than to randomize them. Keep questions as short as possible. Now, this kind of goes against what I was saying about question length.
And this is where some questions, when you are unpacking the words, do increase in length, but all of that is needed to answer the question. If you've got a whole bunch of stuff that is just extraneous, that is not useful, delete everything that you can; trim out as much as possible of what is not needed and only keep what is actually needed. And then also, once again, avoid grammatical cues. I had an example over here, for instance: which of the following are true in an action potential? Which of the following are true at plus 45 millivolts? Option A had a single clause, the sodium gates open; option B had a single clause, the potassium gates close; option C, sodium gates close; option D, potassium gates open. And then option E had the sodium gates close and the potassium gates open. And so grammatically, "which of the following are true" points towards option E being the correct answer, just from a grammatical point of view. So wherever possible, try to avoid grammatical cues. Now, there's more that I could go into, but I think what I should do now is hand over to Lillie to talk more about multiple choice questions and how to write good multiple choice questions from an EDI perspective. Lillie? Okay, awesome. Hopefully that is all showing up properly for everyone. Excellent. All right. So I'm gonna talk about multiple choice questions, particularly with an EDI lens. And whenever I come to talking about something from an EDI perspective, I often like to begin by talking about my positionality in this space. So my background is I am a developmental psychologist. I teach as a lecturer here in our department of psychology, where I teach lots of classes about babies and kids and developmental psychology. And I also work as the lead of our psychology department EDI consultation working group.
And so I became interested in this question of EDI in assessment and EDI in multiple choice questions because I give a lot of multiple choice questions in my teaching work. I help to advise other teachers around multiple choice questions in my teaching work. And then in my EDI consultation work, we were seeing the different sorts of multiple choice questions that folks were giving and providing and realizing that in assessment and in multiple choice questions in particular where we think about them as being more of an objective assessment, a lot of attention wasn't necessarily being paid to considerations of equity and inclusion. So that's what sort of brought me to my interests in this. And then also in terms of my positionality, I come from the US where that's where my educational background is. I also happen to have parents that both have advanced degrees. And I mentioned that as part of my positionality here because growing up in the US, multiple choice questions and multiple choice exams really were the air that we breathe in educational practices. I can't remember a time where I wasn't filling out scantrons and doing multiple choice questions. But when I started teaching and teaching students who came to UBC from a range of different backgrounds and educational perspectives, some of them had never had multiple choice questions before. I remember when people asked me, how do I fill out a scantron? And that was a really new idea to me, the realization that multiple choice questions are not equally familiar to folks from different sorts of backgrounds. And then I also wanted to shout out a graduate student that I work with in the EDI consultation, Alana Wallace, who has been really instrumental in helping put together some of this work on EDI considerations in multiple choice questions. 
So I thought I'd start us off with sort of thought exercise where if we can imagine that we are giving a multiple choice assessment to students, maybe we're using multiple choice questions in a video in a formative assessment. And I've got student A, who gets 95% of my questions right. I've got student B, who gets 70% of my questions right. And I've got student C, who gets 45% of my questions right. And my thought question for us to start with is to think about what are some of the things that might explain these different scores? Why is student A getting 95% of our questions right and student C is getting 45% of our questions right? And so I think we've got the chat open. So if you all wanna throw out any ideas you have about why we might be seeing these different scores, I'm gonna force you to participate. Awesome. So we're seeing a bunch of great answers coming. Yeah, understanding the question, how we interpret the question, experience, yeah, experience with the topic, experience with the device that you're on, cultural background language, absolutely. Yeah, background knowledge of the topic. Tamara talks about dyslexia as well, test anxiety, awesome. Yeah, and I think to many of us in education or working in the educational realm, it's often easy to think about some of these factors that might explain why students do well on an assessment or not. You know, coming to my mind, many of these ideas that you all brought about, test anxiety, understanding of the question, some of that might be language understanding of the question, some of it is cultural knowledge of the question or cultural knowledge of the test taking. Also things like that I, as an educator, probably do not have a lot of control of, like sleep the night before, some of these are gonna be out of the realm of my control, but still factor into this. 
And so for me, when I am thinking about inclusion and equity in assessments, our goal is gonna be that what makes up students' performance, or what makes up their grades on an assessment, is these top factors: do they actually understand the course material? Do they understand the things that I'm trying to test? How much time and effort have they put into learning and studying this? And when we're thinking about inclusive assessments, as well as valid assessments, it's really around eliminating some of these other factors, or trying to reduce their impact. Additionally, this becomes an EDI consideration because we know that there are systematic differences in these other factors across individuals from different groups. So for example, we know things like test anxiety tend to be higher in women students and students from underrepresented and marginalized groups. We also know that students with learning disabilities may have differences in things like self-doubt, as well as things like working memory load. The language factor is certainly gonna be relevant to students who are coming in to take exams with English as an additional language. And so to try and ensure that questions are equally valid across different groups, it becomes particularly relevant to eliminate or reduce some of these other factors. So that's the framework that I begin with in thinking about multiple choice questions and EDI.
When you start looking at previous research or writing on multiple choice questions and EDI, really the first recommendation that you get is to not use multiple choice questions, or to move away from a strong reliance on them. A lot of the work and the writing on inclusive assessment talks about using different types of assessments, so that if students tend to struggle on one type of assessment, we can balance that out with other types, or allow for student choice, and about moving away from multiple choice questions that tend to focus on one right answer or one best answer and don't fully allow for broader, different ways of thinking or different ways of representing knowledge. However, while we see all of this writing about moving away from multiple choice questions, as somebody who teaches very large classes, I am quite aware that multiple choice questions are sometimes necessary for structural reasons, and sometimes for pedagogical reasons as well. So I don't think the answer to inclusive education or inclusive assessment is to entirely eliminate multiple choice questions. I also want to point out that while we have a lot of research building on inclusive education practices, there's much less written about inclusive assessment practices specifically. And you can see this when teachers and educators are surveyed about how they want to make their educational practices more inclusive. They talk about content, they talk about instructional strategies, but we are often really hesitant to move away from assessment and the norms that are built within our fields. And even further, there's very little empirical research, particularly on multiple choice questions. So I want to caveat that I'm gonna try and draw from literature on assessment more broadly, as well as guidance that has been written and suggested by educators.
All right, so the other main theme that I want to highlight is that Simon's just given us a nice overview on writing good quality multiple choice questions. And this goes a long way towards also creating inclusive multiple choice questions: the more reliable and valid questions are, the more likely they are also to be inclusive questions. And we see that in some of the research on this topic. So Simon mentioned mistakes and typos within questions, and the idea that it is incredibly common for us to leave typos in our exams, with 30 to 85% of questions thought to have mistakes or typos in them. And again, this becomes not just a validity issue but an equity issue, because research suggests that typos and mistakes may have unequal impact across students, in particular for students with learning disabilities. Research shows that students with learning disabilities tend to experience more self-doubt. And we see many examples of students with learning disabilities who, when they encounter a question that has a typo in it, are not sure whether this is their mistake, something that they are getting wrong, often coming from a history of having been shamed or having experienced stigma for making mistakes. And so they may be less likely to ask the instructor questions, less likely to recognize that it might be a typo and interpret it as such, and more likely to internalize it as a mistake that they are making. Simon also talked about the complexity of language and this idea of unpacking questions, unpacking the language of questions. Again, this is not just helpful for thinking about the validity of questions, but also for their equity, where some of the research that Simon discussed from Katherine Lyon and Daniel Burkarty suggests that this unpacking of complex language may be particularly beneficial for students who are English as an additional language learners, that simplifying could have particular benefits there.
And then you similarly see research suggesting that using simpler sentence structure is also particularly beneficial for students with learning disabilities, building on this idea that good quality questions are going to be inclusive questions as well. Continuing on that theme of thinking about wording and formatting: the thing that we're trying to isolate is content knowledge or content understanding, so that the questions and students' responses on those questions are tapping into the content that we're trying to test, and we're trying to move away from testing specifically linguistic or cultural knowledge, unless of course you're teaching a language class. I just taught about IQ in one of my developmental psychology classes, so this classic example came to mind. This is an infamous historical example from the SAT, the standardized test that is commonly taken in the States, where they're giving this sort of analogy question. They ask, you know, a runner is to a marathon as... And what they found with this question is that the majority of white students got it correct, but about 30% fewer racialized students got it correct. The correct answer is an oarsman is to a regatta, but a regatta is a particular piece of cultural knowledge. For anybody who's unfamiliar, a regatta is one of those rowing races that they do in fancy places like Oxford and Cambridge. And so you can see this question is not just testing vocabulary, as it's supposed to, but it's also testing cultural knowledge. And I think this is a trap that many of us can fall into as educators. I fell into this one last year: I was giving this exam and I had this question on here where it talks about two teenagers who are part of a nerd group in school.
And I had several students who ended up asking me to define what a nerd was. This was a cultural artifact, a language and cultural artifact, that I was using in my test, which likely was skewing the validity of this question towards the students from my language and cultural background. So research on improving questions to focus more on content knowledge and less on linguistic or cultural knowledge suggests tips like adding synonyms: when you have a term in there, you can define it within the question or add a synonym to provide more background for students, and you can replace any idioms that are included. Another thing that can come up when trying to focus on content knowledge, along with wording and formatting, has to do with the visual formatting of questions. Research suggests that we should try to use larger print when possible for answer choices, and to think about using uppercase rather than lowercase letters for answer labels, because in typed multiple choice questions lowercase letters are more likely to be confused, particularly by learners with less visual acuity. Another idea I want to get to when we're talking about wording and formatting has to do with cognitive load. Cognitive load theory refers to the idea that your working memory, what you're able to keep in your mind and focus on, has a limited capacity. There's only so much that we can keep track of. I think about this as a mom of three kids: if they're all screaming at me, there's no way that I can keep track of all three things at the same time. There are limits on our cognitive load. And again, this is an EDI concern as well, given that we know there are differences across groups, differences across students, in working memory and cognitive load.
For example, we know that students with learning disabilities and neurodivergent students, particularly students with ADHD, tend to have cognitive load or working memory challenges. Similarly, students from English as an additional language backgrounds sometimes struggle with working memory or cognitive load on assessments, as do anxious students, and potentially students who are new to the test taking environment, in that when a test is new to you, there are so many things that you have to keep track of. How do I fill out this answer? How am I supposed to do this? Somebody mentioned the technology: if you're on a new device that you're not used to, your attention is now split, not just on the content but on figuring out how to click things in the right spot. So thinking about reducing cognitive load in multiple choice questions, or in assessments broadly, we have many suggestions. One is where you place blanks within a multiple choice question, so as not to overload how many things students have to keep track of. The research suggests placing blanks at the end of a question stem. This is a bad example of a question, in that if we place the blank towards the start of the question stem, the student who is reading this question has to split their attention: they have to keep in mind where the blank is and what the question is while they go forward and read the rest of the sentence, and that split attention can increase the cognitive load. Another suggestion is having answer options that are grammatically parallel, so that again you have consistency; any amount of consistency can be helpful for reducing cognitive load.
When possible, have the answer options in an order. I think Simon spoke to this as well: if it's numerical options, or ages, or a year, having that systematic order can help as well, for that consistency. And then just things like consistent formatting: sometimes it can be helpful to have the same number of response options, so that students aren't wondering, or trying to figure out, or having to keep track of, okay, are there four options here and then six options here? The consistency of that formatting can be helpful for the cognitive load. I've been speaking a bit about complexity, and we also have some research speaking directly to the complexity of the format, or the complexity of the types of multiple choice questions. So there are questions that are called complex multiple choice questions, sometimes called type K questions. My understanding is this type of question is common particularly in medicine and in STEM fields. This might look like: you've got a stem to the question, then you have primary responses, and then what the student actually has to answer are these secondary choices, which can involve combinations of one or more of the primary responses. And complex multiple choice questions were initially designed with the idea that they would tap into higher level processing, higher level reasoning, and allow you to get deeper into that complex thought and understanding we want our students to reach. But these questions have since been criticized quite a bit, because they are complex, as the title says, and there are also thought to be some tricks to solving them: if you are somebody who's been trained in multiple choice questions, who has a lot of experience with them, you can use some of these tricks to eliminate some of the answers. Like if you know that answer two is not correct, well, now that eliminates a whole bunch of the choices, and you can use those strategies, irrelevant to content knowledge, to help you on this type of question.
And we have some interesting recent research that suggests that these complex multiple choice questions are hard for everyone, they're difficult for everyone, but they are disproportionately hard for students from certain backgrounds, in particular women students, racialized students, and low income students. I've got a graph here from this recent study, a Young et al. study, where they looked at this. You can see here they're looking at sex or gender, and here they're looking at race. Again, the complex multiple choice questions here on the right are hard for everyone, but the difference by sex or gender is more pronounced with complex multiple choice questions, and for race there is little difference across students from different racial groups on the non-complex questions, but there is more of a difference with complex multiple choice questions. Yeah, and I see a great question of why we might see the difference with women here. One of the hypotheses is that it might have to do with test anxiety, which is something I'm gonna talk about in just a minute, where we know that women tend to have more test anxiety, and complex multiple choice questions, because they look so complex and intense, might be something that triggers quite a bit of test anxiety. One of the other hypotheses has to do, again, with the idea of being trained on strategies for how to answer certain types of questions, and it might be that people from more privileged backgrounds have gotten more training in how to do multiple choice questions and thus can use some of those strategies more. Yeah, I think it's a great question to ask what the reason is for this. Some research suggests that if you like the complexity of these questions, there are ways to reduce some of the problems here by unpacking the questions and scaffolding them into multiple different questions. So here what we could do is we can still have the same stem.
A hockey player uses her stick to pass the puck to her teammate, and then we can have multiple different true or false questions, so that you're asking about each of the different components here in a true or false question. And sometimes people will use each of these; these could be 0.5 points, as opposed to a full point within one of the bigger questions. But scaffolding and unpacking the questions seems to reduce some of the disproportionate challenges across different groups. All right, I said I was gonna talk about anxiety, test anxiety, as well, because this is another concern. Test anxiety is incredibly common amongst students: research suggests somewhere between a third and a half of our students experience test anxiety, and meta-analyses looking across many studies find that test anxiety is negatively associated with performance, and that test anxiety is more common in certain populations. Test anxiety is more common in racialized students and underrepresented minorities, and it's more common amongst women students. Some of the reasons why this is the case seem to have to do with more self-monitoring and worrying about performance, as was suggested here. Some of it probably has to do with histories and experiences of stigma. But what we see is that studies often suggest that test anxiety can help to explain some of the differences that you tend to see in test performance, and particularly multiple choice test performance, by gender and by race: we see that women tend to perform more poorly than men on multiple choice exams, and some proportion of that seems to be explained by the higher test anxiety in women students. So what we have then is thinking about, okay, how do we reduce test anxiety? One big tip for this is thinking about lowering the stakes of the assessment.
So that's why I love this talk and thinking about formative assessment, and using formative assessment as a way to reduce some of that test anxiety. Another tip is to use humor within assessment. And so Simon, I'm gonna be spoiling one of your questions, because I thought you were gonna cover this earlier, but this is one of the questions that Simon, I think, will show later, where he's talking about Schmeimen in the question. This sort of thing of putting a bit of humor within test questions, the research suggests, is correlated with a reduction in test anxiety. Then there are things like transparency with students and predictable patterns: if they know how many questions they're getting and where the questions are gonna be, not having to worry about what is coming up can reduce test anxiety. And there are things like test taking training and formative assessments, and strategies like relaxation training as well. Another component I wanted to touch on, which is likely related to test anxiety, is the idea of stereotype threat. Research on stereotype threat comes from Dr. Claude Steele, who described stereotype threat as the psychological burden that we can experience when we are concerned about our performance confirming a negative stereotype about our group. So if you are, for example, a female student taking a math or science test, and you know that within your society there tends to be a negative stereotype about girls and science, you may have worries or concerns about that stereotype, or about conforming to that stereotype.
And the idea about stereotype threat is that it works in that we have an awareness of these negative beliefs about our group or groups that we belong to, and if that awareness of those beliefs comes up in a situation, we experience physiological stress, we may experience negative thoughts or emotions, that worry, that psychological burden, and we then have to spend more cognitive load thinking about or managing those emotions, as well as on monitoring our performance. The, oh crap, what if I'm doing poorly? What if I am going to be a stereotype of my group? And again, that adds to the cognitive load, and all of that is thought to lead to impaired performance. Now, why I wanted to talk about stereotype threat here with multiple choice questions is because sometimes I see recommendations for multiple choice questions that are all great recommendations in theory, but can sometimes lead to fairly problematic questions. So for example, this is a tip here from Macmillan, the textbook company, talking about diversifying the names in exam questions. Or I often hear: use your students' names on the exams, use your students' names in the multiple choice questions, so that students can see themselves represented in the content, can see themselves represented in the course. And again, theoretically, I think this is fantastic, but what happens if I then have a question here? I mentioned I'm just teaching about IQ in my classes, and I have a question about Sarah and Steven taking a math achievement test, and Sarah doing really poorly on it. And what if I am Sarah in that class? Does that lead to a stereotype threat situation? And this is a pretty mild example, but especially in classes like I teach, about psychology, where we are talking about biases, where we are talking about differences between groups.
We have the very real possibility of having questions that might bring to the forefront, might prime, these ideas of negative stereotypes. Another case where I see this is that folks sometimes talk about wanting to use case studies for exam questions. Again, I think this is fantastic: it brings some of this generalized learning into the real world. But we really want to be careful that any of these example names, case studies, et cetera, don't prime negative stereotypes. I was told a story recently about an Indigenous student here at UBC who had brought up a concern because they were taking a class where Indigenous populations were never talked about in the course content and never came up throughout the class, until they got into an exam and the exam used a case study that talked about rates of addiction within Indigenous populations. And you can imagine, for that student, now you have this negative stereotype about your group confronting you on an exam. And if we're thinking about things like anxiety and the cognitive load of having to manage some of those negative emotions, these can create very real situations of stereotype threat that come up in exams. So I don't want to discourage anybody from using case studies or student names or diverse examples in multiple choice questions, but again, we want to be very cognizant of the effects that this can have, particularly if the case studies or examples elicit some of these negative stereotypes. Now, the last thing, and I know I'm out of time here, that I wanted to mention was this idea of universal design for learning.
And so universal design for learning is a concept that many of you are probably familiar with. It talks about learning more broadly, with the idea of products or environments that can be usable by all people without the need for adaptation or specialized design. Often in education we think about accommodations or adjustments: I have a student with a learning disability, and so I'm going to accommodate by individually giving that person extra time; or I have a student with visual deficits, and so I'm going to accommodate by using a different format of assessment for them. But what if we didn't have to do individual accommodations? What if we were instead to anticipate that we would have students with diverse needs, students coming from diverse perspectives and learning spaces, and build that in from the outset? And so that's something that I want to leave this theme of EDI in multiple choice questions with: to think, when we're designing our multiple choice questions, from the outset, in advance, about the diverse needs of students, so that you don't have to go in and make one-off accommodations, but can set up the assessment to hopefully be usable by all students, as much as possible, without adaptation. All right, so I'll end on that thought question for you and then turn it back over to Simon and Kaylee, if Simon has a computer now that's working. So what I thought I'd do now, we've got about 10, 15 minutes, we've got a good overview. I was wanting to maybe show what I was thinking in terms of those questions I put up from Mentimeter, but I thought I'd just quickly go into H5P, since this is an H5P symposium, and look at a few things that we should keep in mind when we are creating questions in H5P. Not only in terms of multiple choice, but also more broadly, how can we make these things more accessible?
Well, interactive videos, as we've been doing. So what I thought we'd do, this is taken from another video that we did. This is not a branching video, but here's another H5P widget which is quite useful. After the students read about various different sleep disorders, they again get a video where they've hired Preston, this incompetent 16 year old intern, and he sort of asks a whole bunch of questions. So I thought we would maybe use this as a good example to show how we can make this a little bit more accessible, especially when setting up. So those of you who'd like to follow along, please do follow along. You can see in the chat box I'm going to put some stuff, since we can't copy and paste from the Zoom link anymore; I'm going to copy and paste some of these things over here. So what we're going to do is we're going to go to our UBC open hub. We're gonna go to content and we'll add some new content. Once again, we're going to create an interactive video. Now, one of the first things you're gonna want to think about from an accessibility point of view is to have a good title. Over here it says it's used for searching, reports and copyright information, but this title is also read by screen readers, and so a descriptive title can help students who have a screen reader know exactly what they're going to be doing in this. So over here what I've done is: interactive video accompanying material on various sleep disorders. That's a nice descriptive one, and this then also helps me when I'm trying to figure out, where is this video in my H5P catalog? I've got something nice and descriptive for myself. Now we're going to add a video here. We're going to use something from Kaltura. So if you click on Kaltura and you go over to this document, you can copy this code over here. I know that we can't copy and paste from the chats, but I'll put it there anyhow.
So once we copy this code, you enter it under media ID, and we take the media format and change it to raw, and then once we've done that you can click generate and, Ka-Blam-O, something has been generated over there. So we then insert. Now, one of the things that we're going to want to do: Kaltura is quite handy in that it gives you the option to go and create captions, closed captions. That's obviously something that we want to do. Kaylee kind of alluded to it a little bit earlier on: how do we insert closed captions, especially from something like Kaltura? There are workarounds for YouTube as well. But here we would go to text tracks. It says unsupported for YouTube videos, but there are various workarounds, and it all revolves around trying to get a VTT file, which I'm happy to explain as well. If you go into Kaltura you can request captions, and it gives you a caption file. You download that caption file. Unfortunately you then need to translate that from an SRT file to a VTT file, but there are various free places where you can do that. Over here the type of text track is going to be captions. And here I'm going to add this WebVTT file, and it should be in my downloads: sleepmachine.vtt. I'm going to open it up and there we go. Now, you won't see it when you click onto the interactions now, because only once you click on update and create does it stitch those two together. I would show it now, but it takes about 30 seconds for things to create. So just for now, that's one way in which you can go and edit it. Now, one thing that you'll also find, for those of you who are interested: at the beginning of the video there are some pan sounds, some sizzling, but the AI didn't caption that. And so you are able to go into this over here. You can edit it with something just like Notepad, and this is what that will look like. Now what we can do is, let me see if I can find my timings.
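Since the talk mentions converting the downloaded SRT caption file to WebVTT, here is a minimal sketch of that conversion in Python. This is my own illustration, not a tool mentioned in the talk, and the filename in the comment is just an example; the only changes actually needed are adding the WEBVTT header and switching the timestamp decimal separator from a comma (SRT) to a period (VTT):

```python
import re

def srt_to_vtt(srt_text: str) -> str:
    """Convert SRT caption text to WebVTT.

    SRT timestamps look like 00:00:42,000 --> 00:00:44,000;
    VTT uses a period instead of the comma, and the file must
    start with a WEBVTT header. SRT cue numbers are legal in
    VTT as cue identifiers, so they can stay in place.
    """
    vtt_body = re.sub(
        r"(\d{2}:\d{2}:\d{2}),(\d{3})",  # hh:mm:ss,mmm -> hh:mm:ss.mmm
        r"\1.\2",
        srt_text,
    )
    return "WEBVTT\n\n" + vtt_body

# Example usage: convert a downloaded caption file.
# with open("sleepmachine.srt") as f:
#     vtt = srt_to_vtt(f.read())
# with open("sleepmachine.vtt", "w") as f:
#     f.write(vtt)
```

The free online converters the speaker alludes to do essentially this same substitution.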
On this video that we've created, you can see when we press play we have the preamble. Where, I don't think I've shared my sound, which is another rookie mistake, so let me quickly go and share sound. So we're here around 41 seconds; you should hear there's something frying, but there's no caption coming up for that frying sound. So if we go into this file, we can copy this, open that up, and paste it over there, and then we change this: we want this caption to start at about 42 seconds and end at about 44 seconds, and then you can say "sounds of gas stove lighting," and that's how you can edit it. So that will now pop up as a caption. Let me close this, I've saved it, let me go and re-upload this file, and that will now enter into there. So what we've done is added a closed caption file that will now provide closed captions, and we have created a searchable and screen-readable, what do you call it, title. Now we're going to go over here and create a multiple choice question. So what happens over here is that Preston says: "Wow. Hey doc, I was taking the history of patient number one and they fell asleep. Am I really boring?"
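For reference, the cue added above would look something like this when you open the file in Notepad (the header line is required, and these timings match the roughly 42-to-44-second window described; the exact millisecond values are illustrative):

```vtt
WEBVTT

00:00:42.000 --> 00:00:44.000
[sounds of gas stove lighting]
```

Saving the file and re-uploading it under Text tracks, as shown in the demo, is all that's needed for the new cue to appear.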
So now we want a question to pop up over here, and so what we do is go to the multiple choice and click on that. Now, once again, I prefer the poster, because the poster comes up and we don't need to click on anything over there. More options mean more steps that people who don't have the use of a mouse need to navigate using just the keyboard. Another thing we're going to want to do is make sure that Pause video is selected here, because if you go straight into a question and the video doesn't pause, not only does it carry on playing in the background, but pausing the video allows someone who's using a screen reader to activate the technology and then go and read the question. What's interesting about these multiple choice questions is that they are kind of separate little boxes that pop up over the top of the page, and so that will be the thing that the person using a screen reader will point towards. Over here we started at 1:03.50, so let's make this end at one minute and four seconds, because we don't want it up for that long, but it's going to pause. This can be "First question" or "Patient number one." Now we've got the question over here. If we go back to this: which sleep disorder is characterized by suddenly falling asleep? That's going to be our first multiple choice question. Now, what's interesting over here is the way that H5P codes this. They code it kind of as a heading, but it's not coded as a heading in a way that a keyboard shortcut will be able to grab onto. So if you're pressing Tab and you just leave it as is, then the keyboard is going to have difficulty flipping onto this widget. What you want to do is create this as a Heading 2, and that makes it more accessible for people using a keyboard. Then over here, let's see, what were my answer options? We've got night terrors, REM sleep behavior disorder, narcolepsy, somnambulism, and sleep apnea. So when we go
over here, we can then fill in the text. We can type in the correct answer, which is going to be narcolepsy, and we mark that as correct. Then we do somnambulism, which is sleepwalking. We're going to add another option over here, and we'll put in REM sleep behavior disorder, we'll add in night terrors, and then, I think, sleep apnea. Kaylee will talk a little bit more about tips and feedback tomorrow, and we'll be talking about this tomorrow as well. You can decide whether you want to show the Retry button or the Show solution button, and randomize answers, so you can randomize the options for each individual question. Now, remember from what Lillie had said: if you've got something where there are dates that appear in order, or months that appear in order, you don't want to randomize them. So for some instances having that randomization is okay, but for other instances you don't want it. What's nice about H5P is that you can do this on a question-by-question basis. Someone asked earlier about not wanting students to skip past it, and this is what Cynthia Brame was saying: a lot of the research on how to help students learn material says to get them to do as much brain work as possible. So over here, if you select "Require full score for the task before proceeding," then they cannot move on before they have attempted the question. In terms of accessibility, if you do this heading over here, allowing keyboards to grab onto it, then the rest of this multiple choice question is quite accessible and quite manageable. Over here, what I like to do is put this in the top right and have it open so that the student doesn't need to scroll at all, and then what they're able to do is just quickly answer it and check their material. And so, so far we've covered setting each interaction to pause the video, because this allows people with screen readers to engage it, and to begin each interaction
with a level two heading, because that allows more accessibility for keyboards. And then you can also add instructions at the beginning. I forgot to show you: over here under Interactive Video, you can either add an H5P element, like Kaylee did with those goals that you can put over here, this text over here, or at the beginning you can have a short description: "A video where you are a sleep expert helping an incompetent intern solve various sleep dilemmas. Listen to the symptoms..." And I see you've only got a few characters for that, so you have to be rather pithy in what you say there, but that is another way in which you can increase accessibility and let students know exactly what they're going to be doing. Now, in some instances, there we go, in some instances I might use a picture, and I want students to click on a picture. If you do include a picture over here, you need to put in alternative text so that screen readers are able to read it; also, if the picture doesn't load, then you get the alternative text. And I would also include a hover text. In some instances you might want to use a picture of, say, a table so that students can point and click at it. I would, wherever possible, avoid using pictures of a table, because that makes it less accessible. Only use pictures if you're able to provide good descriptions of them. And this is something else: last term I had a student with visual impairments who couldn't read the tweets or the cartoons that I had embedded. And so what I started doing this year is that any time I include a visual stimulus, I write a little accessibility description: this is what is showing in the picture, with the words that are being shown. So we can do the same type of thing over here if we need to use an image. But if you're going to do a table, rather just use the table option that is available over here. Of course, there are other things. Once again, I started out my talk saying that a lot of this
stuff is accessible within H5P already, but we want to use questions that have high contrast, so that for students who have visual deficiencies or color blindness it's easier to read, and so that the visual information in the images carries more meaning. Provide accurate closed captions: the closed captions that I used in the one I demonstrated were generated by AI, and it said it had 81% accuracy, so it's good to go and read through your closed captions, and then, just by simply using the Notepad that should be available on your computer, you can go and improve them. Or even in Kaltura you're able to make updates. And I think that that is what I have available for now, and that has taken me one minute over time. I'm happy to talk through some stuff that we didn't get to because of, what do you call it, the technology glitching out on us. But what I would also just quickly like to do is say thank you so much for coming. This has been one of the most interactive online experiences I've had, and this includes teaching through COVID.