At this time, it's my pleasure to introduce Dr. Victor Benassi, Faculty Director of the Center for Excellence and Innovation in Teaching and Learning at the University of New Hampshire. Dr. Benassi graduated from the City University of New York with a Ph.D. in Psychology. He has been a faculty member at the University of New Hampshire since 1982. He is a professor of psychology in the Department of Psychology and a professor of college teaching in the Graduate School. During the early 2000s, Dr. Benassi served in the Office of Academic Affairs as Vice Provost for Undergraduate Studies. He is currently Faculty Director of the Center for Excellence and Innovation in Teaching and Learning. Please join me in welcoming Dr. Benassi to the Naval War College. Good morning, everyone. It's so great to be here. I'm looking forward to our time together, and I've already had a full day of being shown around the college by Tom Gibbons. Again, thank you so much for being here. This is the third time I've been involved with the Naval War College about speaking at this event. Three years ago, Tom asked me if I would be able to come, and I had already been booked, so I recommended someone who came. Then last year he invited me, and I had already been booked, and someone else came. And I told him last year that if he asked me early enough, I'd come, so he did, in March. So thanks, Tom. These slides are already available on the University of New Hampshire Center for Excellence and Innovation in Teaching and Learning website, so if there's anything on here that you want to access or link up to, you'll be able to do it there. We're also videotaping this presentation, and that will be available. So here are the people we thank. The Davis Educational Foundation, out of Maine, provides support for the science of learning work that we do at UNH, plus the Provost's Office. And here are a few of my collaborators. 
So in thinking about presenting to this group today, I had a particular challenge. And that is that nearly all of the work that we've done over the last decade in science of learning has focused, not entirely, but mostly, on undergraduate education. And the kinds of learning outcomes that one is interested in, particularly in lower-level courses in college and university, are different from the kinds of learning outcomes that you folks are interested in for the courses that you teach at the graduate level. So that got me to think about how we can try to match up the kind of learning outcome that we're trying to produce with what we, mainly psychologists, know about how human beings learn and how people learn in academic settings, which we think ought to inform the kind of instruction that you do. So linking the kind of knowledge with the learning processes involved informs the instructional methods. And of course it's an iterative process. So let's pull this together. The kind of learning informs the kind of instruction and learning processes, which ought to inform the kind of specific interventions that we use in our courses. So at the most basic level, we want people to learn facts. Of course, the people that come to your courses already know a lot of facts about a lot of things, including a lot of the topics that you're teaching them about. But my understanding is that the students who come here come from many different backgrounds, from different countries, from different areas of emphasis, at different levels in their careers, and of course in some areas they know more and in some areas they know less. So although we're not primarily interested in fact learning at the graduate level, of course it's important. And what we know is that spacing of practice and learning opportunities, and trying to engage students in retrieval of information that they've studied, are important learning processes. 
And any kind of activity that prompts retrieval, for example quizzing, can promote fact learning. The person that I recommended to come here last year was Nate Kornell from Williams College. I chatted with him briefly before coming here today, and he told me that one of the topics he focused on was the so-called testing effect and the role of retrieval practice. He's a world leader in that area. So anything that we can do to get students to engage in retrieval will promote learning. We think of quizzing and testing as a way to assess student learning, and it is, but it's also a powerful tool in promoting learning. Moving up the hierarchy a little bit, many of the outcomes that we're interested in in graduate education are teaching students to make discriminations, to be able to classify information, to be able to put things into categories. I'm going to give an example of this later. What we know, in terms of a memory and learning process, is that what we call the interleaving of study opportunities is a very potent and powerful approach to promoting learning of things that go into different categories, things that get classified here versus there. I'm a beginning student of the Battle of Gettysburg, and one of the things that I had to learn early on was how the Army of the North and the Army of the South were in fact put together. My friend used to laugh at me all the time when I'd get regiments and brigades mixed up and so on. So I had to learn to make those classifications. The interleaving of practice is a powerful tool in doing that, and I'll give an example of where you can use content review and quizzing to promote interleaving. What we're interested in here primarily is higher-order thinking, sometimes generally called critical thinking. So what we're interested in is our students learning general principles, making generalizations from specifics. We're interested in creating a mental model about whatever it is we're studying. 
So across your courses, students are learning many mental models about how things are organized, how people are organized, how events are organized, how they interrelate with one another, creating a model of the operation of whatever it is we're studying. We're also, of course, critically interested in problem solving. So I'll talk about two memory processes, one involving what's called self-explanation, and the other a type of collaboration. I've studied a little bit about your seminar structures across your courses, and I believe and hope that the collaboration approach that I tell you about can apply to any of the seminars that you offer. So we'll talk about using self-explanation prompts and an approach called team-based learning. So, the facts. I'm not gonna give you an example of a study about the facts. I mean, your speaker talked a lot about that last year. There are literally hundreds of studies that have been done, and dozens and dozens of studies done in real academic classrooms, that show the benefit of spacing of practice and retrieval practice. So any event that actually generates retrieval, like quizzing, can produce positive effects. I'll just cite two chapters from a book that two colleagues and I edited a few years ago. Shana Carpenter has a chapter on spacing and interleaving of study, and Mary Pyc and colleagues have one on what's called test-enhanced learning, the testing effect. Both of those appear in the book that I'll show you later. The American Psychological Association's Division Two, the Society for the Teaching of Psychology, publishes books about a whole variety of issues related to teaching and learning, and this book appears there; any of you can access it for free. So let's move up the hierarchy and talk about discrimination and classification. So, in looking at your theater security decision making course, what I see is, well, all your syllabi are great, by the way. 
They're models. I wanna show them to UNH faculty as great examples of how to structure the connection between what we want students to know, what we do, and how we assess it. So, theater security decision making. I think a lot of political science people were involved in that program, and so you talk about the levels of analysis approach, different kinds of analyses. In psychology, we do this as well. We talk about the biological basis of behavior, the social basis of behavior, the organizational or community basis. And those different levels of analysis require certain kinds of classifications and discriminations to be made. So I hope that what I talk to you about, you'll see as applicable. So: interleaving of practice, and retrieval, and content review. Unfortunately, I've not done any work with faculty here at the War College, so I can't give you an example of the studies I'm going to talk about drawn directly from the kind of approach and content that you teach here. But I can tell you that there is a growing body of literature across numerous academic fields and disciplines that shows that these principles, if appropriately applied, work. By work, I mean they improve the learning outcome that we're trying to achieve. So the example I'm going to give you is from a course in statistical reasoning. This is a course that undergraduates at UNH, and in fact across the world, take to learn about statistical reasoning and the appropriate statistical tests to use to analyze data from certain kinds of research designs. So, what we did. We've done this study in a statistics course maybe five times. I'm going to show you results from one study, but we've done it about five times. In a statistics course, this is the typical way that the course proceeds: a statistical test is covered. So what does that mean? It means that students read some material, typically from a textbook, and then they go to class, or if it's an online course, they go to the online class. 
And the instructor provides further instruction. And usually in statistics, like many quantitatively oriented courses, the instructor provides some worked solutions of how to do that test or when that test is appropriate. What we do comes after that work occurs, and we've done it with face-to-face courses and totally online courses. We just finished a totally online course that used this approach. After it's done, we have students go to our learning management system. Currently, we're using Canvas, like you; we just finished using Blackboard after several decades. They go to this online module within Canvas, and they read about a particular statistical test, the one that's being covered at that point. They read about it. The critical point about a statistical test is that you need to know what assumptions are required in order to apply that test to a particular research design. They learn that. They go through the module and review it. And then they take a quiz on it. So if it's the first statistical test that's being covered, they take a quiz that covers that statistical test, and that's it. What we do is experiments, studies, in our courses. Whenever we do a science of learning intervention, if we just did it, we'd get an outcome. But of course the question is, compared to what? If we did this and we get an outcome, does it mean anything compared to if we hadn't done it at all, or if we'd done something different? So we do these studies with control conditions built into them. In the non-interleaving condition, students go through a module on a statistical test. They go online. They learn the material. They study the material. They take a quiz. Then they go to the next test in the course. They do the same thing. In the interleaving condition, what happens is once you go through that test, and now you go to the second statistical test, when you complete the instructional material, you take a quiz. 
But the quiz now asks questions about the current statistical test that you're studying, and it also asks you questions about the test that you already covered. And you can see that if you go through, in our case, eight statistical tests, what you do in the non-interleaving condition is you cover a test, you move on. You cover a test, you move on, et cetera. In the interleaving condition, you cover a test, you move on, and now you're assessed on the new one, but you're also assessed on the one that you already covered. That's why we call it revisiting the past. And we repeat this, in our case, through eight statistical tests. These are the eight statistical tests, if you care. Any statisticians in here? All right. Look familiar? All right. So our experimental manipulation is within a course. So we had about, well, we had 38 students. Danny Rasko was the instructor of this course. We had the interleaving and non-interleaving conditions. So I'll show you a figure here. On the vertical axis is how students performed on a comprehensive test that was given at the end of the course. And that comprehensive test had 24 research designs, 24 of them. There are eight statistical tests. Guess what? There were three questions about each test, all scrambled up. And what students had to do is read the description of the research design and then choose, among the eight statistical tests, which one was appropriate for that research design. So that axis shows what percent they got right. If anybody's interested, I've got confidence intervals around those means. That's the non-interleaving condition. What we always do, of course, is save the best for last. And so this is how they performed in the interleaving condition. So what we had here was about a nine-point difference in performance on the final exam. Now again, nine points, ten points, ten percentage points. Is that a lot? 
Well, if you're a student, I know maybe less so in graduate education, but in undergraduate education, would you rather have a 70 or a 79? But that's in terms of grades and so on. In terms of learning, if you think about using this approach and accumulating it over many instructional opportunities in many courses, I mean, we haven't done this, I'm not even sure how we would do it, but if you look at the net effect of this over, say, an undergraduate or graduate education, it could be substantial. Not so much in terms of the grade students get, but in what, in fact, they've learned. So how might you incorporate this principle into the courses that you teach? I think it should be straightforward, and it could be done in a relatively easy fashion. What I'm reading in your syllabi is that students do this work outside of class. It usually involves reading papers and articles. There's usually some kind of what I might call learning assurance or readiness assurance, where they post a discussion post and maybe are required to comment on other students' posts. And then they come to the course, to the seminar, and the discussions and activities that occur in class are informed by what they've already learned outside of class. It would be very easy, once the course gets moving along, to spend a short period of time at the beginning of the class asking questions about material that was covered before. It's a very straightforward, simple recommendation. I have other recommendations, but I don't have time. And you could think about how you might be able to revisit the past in what you do. If you want to do an assessment of this, what you would need is the exam at the end of the course. In all of the courses where I looked at the syllabi, there was an exam at the end of the course, or if not all of them, I saw it frequently. They had required papers, but also an exam at the end of the course. 
On that exam, you could do an assessment by, in fact, asking questions about the concepts that were covered earlier, and if you wanted to do a study, an experiment, you could do this across different offerings of the course to see if you bump up performance. So, moving up the hierarchy: principles, mental models, problem solving. In the strategy and policy course, one of the learning outcomes that I saw was being skilled in applying a naval perspective through the use of analytical frameworks. I mean, that sounds like critical thinking to me. It involves mental models. It definitely involves problem solving and learning of general principles. So what I want to do is talk about two approaches that I believe would be directly applicable to any of the seminars that you offer here, whether they're the required courses or the electives. One involves self-explanation and the use of self-explanation prompts, and the other involves a type of collaborative learning called team-based learning. So: making sense and meaning of new information. Again, the studies are at the undergraduate level. Tom told me that many people involved with the War College have backgrounds in STEM fields. We've been doing almost all of our work for the last two years in STEM education, particularly in the biological sciences. So this is just one course in which we've done studies using self-explanation. So, self-explanation. Self-explanation is a type of constructive learning strategy. And by that I mean that what the students are doing is they're reading material, or they're watching a video or a talk, a presentation, a lecture, or they're listening to other students talking in a seminar. The constructive part of it is that they're not just reading it, not just taking it in through, say, the visual and auditory senses, but as they're reading or listening to the material, they're actually engaging in a constructive process. And you'll see in a second, I think, what I mean by that constructive process. 
Self-explanation involves self-monitoring. Psychologists call this metacognition. I'm sure you've heard that phrase probably more times than you want. And what this self-monitoring involves is, as you're reading new material, you're reviewing it in relation to what you already know, to your prior knowledge. That's critical for developing critical thinking skills: integrating what you're learning now with what you already know. But also, when you're reading or listening to material, you're generating questions for yourself. Some of you may be thinking right now, "I didn't really get what he was talking about with this being a constructive process." So you're engaging in that kind of thing. So what happens is, as you engage with new material, and we do things to get you to engage in self-explanation, you're putting this together in an organized fashion. The mechanism involved here, from the psychological point of view, is that self-explanation is an excellent tool for helping you, the student, identify gaps in your learning. In other words, when you're reading this new text, there are going to be concepts and points that are not crystal clear to you, that do not fit with what you already believe. In other words, you have a mental model. You always have a mental model. It might be very shallow and new, but you start off with a mental model of something based on prior experience, and self-explanation helps you identify gaps in your learning, and in the process it helps you modify your flawed existing mental models. All right, so how does this work in practice? So in this biological science course, what we do is we have students read, and what they're reading in this course is chapters from their textbook. And each textbook chapter is broken up. Just get a mental image of textbooks, right? There's a chapter on mitosis, but within it there are sections, right? Different sections. 
What we do, before with Blackboard, you could easily do this in Blackboard, and now with Canvas, is that when students read a particular section of a chapter, they then switch over to the Canvas platform, where we have some questions that we ask them, and we have them type their answers. Sometimes self-explanation is just talking out loud. That is, people may be reading and just talking out loud to themselves, or if it's a laboratory study, somebody's tape recording it. But what we did was have them type answers to these questions, which I'll show you, as they're reading along. So they read, they answer these questions, they go to the next section of the chapter, they read. In your case, it could be that they read the introduction to an article and they get some of these questions. They go to the next section, on an analysis of the problem, and they answer some questions: what does this mean in terms of the bigger significance, or the application of these principles to whatever your topic is? So what are the kinds of prompts? I mean, there are many different prompts that could be used. These are three prompts that we've used; we've done others as well. So they read the section in the biology textbook, and we ask them: What information is new? What have you learned in this section? How do the new ideas work with what you already know? So now we're trying to get them to connect with prior knowledge. And then we always ask them two "I wonder" questions. In other words, now that you've read this, what are you wondering about? Are you wondering, like, how this mitosis stuff is gonna connect with what's gonna come next? Or think of your own courses and what some of those "I wonders" would be. These are fun to read, because students write all kinds of interesting things, like, "I wonder when this is going to end." They don't all do that. 
But one of the reasons, by the way, that we have them type these up is that currently, literally this summer, for this course, we are going in and doing content analysis of the quality of their responses. And what we find is that, in fact, higher quality responses are associated with better learning outcomes later on. Surprise, surprise. All right, so what we do, again, is an experiment: we randomly assign about half of these students to a self-explanation group and the other half to a summary group. The self-explanation group is just what I showed you. They read the sections and they answer these questions. In the summary group, they read the same exact sections of material. Once they get done, they go to the Canvas site, and the prompt they're given is: please summarize what was covered in this section. Now, we could have had some students not do anything, but we don't wanna just be able to say that students do better when they write stuff down and it doesn't really matter what they write. We wanna be able to say that it's the self-explanation process. So what we need is a control condition that's something more than not doing anything. We actually picked summarization because it's not a bad strategy. It's not useless. It's actually pretty good. So the point I'm making is that if we get a benefit above summarization, which research has shown is actually pretty good, then we're showing that we can really improve performance. See, if I were sitting out there in the audience, I'd be thinking, don't we already know what he's gonna show us? He wouldn't have just gone through all that if it isn't gonna work, right? He's not gonna set us up and not have any result. So again, on the vertical axis: in this study, students had three chapters of the textbook and then they had an exam. 
And on that exam, they were asked conceptual questions related to the material that was covered in those chapters, and this is how they performed in the summarization group: about 60%. By the way, we have about a five-year record of how students do on the first exam in this course. This is one of the things we've been trying to improve, and it's about 60%. It's a difficult course, and a lot of students have come out of high school, they've had high school biology, and they get thrown into this course. These are science majors, STEM majors, and it's a challenge for them. They get about 60% right. So again, what are we gonna see? What we see here is about a 5% bump. Now, statistically, if we do a t-test for independent groups on this, it's a significant difference. It's 5%, not as big as what I showed you before, which was almost 10, but I gave you a little heads-up about why we might expect this result not to be bigger. What Micki Chi, the person who developed this approach, found in many of the studies she did under very controlled circumstances is that she could get a 20 to 30% bump in performance from self-explanation. I mean, huge. Anybody would take that. Here, our students are doing this activity on their own. There's a lot of noise. In many ways, they're at home, wherever home is, they're doing this, and who knows what else they're doing. So there's that piece. And what we see in the quality of the self-explanations is that if the explanations are not very good, the learning outcome is not as good, but if the explanation is stronger, the learning outcome is stronger. What Micki Chi has also shown is that if you instruct students on how to do self-explanation, you get a bigger bump. The study that we're going to do in the fall, starting in a couple weeks, is exactly that study. 
We're gonna compare this condition here with one in which students are pre-trained to use self-explanation. In a lot of the studies we do, we wanna see whether the interventions are as effective for students with different skill levels, and we've looked at skill level in two ways. One is we look at, say, how they performed earlier in the course, or we look at their SAT scores upon entering the university. In this case, what we did with the students in the class was a median split, just to show you visually. Students who scored above the median on the SAT verbal were in the high SAT group; students who were below the median for the class were in the low group. This is relative low and high, just split at the median, the midpoint. What we wanted to see is whether the interventions were as effective at those two different levels. Let me give you a little general statement here. We've now done this kind of analysis while also looking at how they did on prior exams in the course. What we find almost always is that the lower skill students benefit more from the intervention than the higher skill students. One way to talk about that is that the highest performing, highest skilled students have figured out ways to do their study. I mean, how do we know that? Because they're high performing students. So what we find is that the interventions for those students tend not to be as potent, but for the low skill students they are. So here are the people in our high SAT group, and voila, what we see is that there's virtually no difference in performance between the two groups. For the people in the low SAT group, what we see, the blue there on the left, is that they perform significantly higher than the ones in the summary group. And in fact, if you look across the students in the lowest group, they're performing as high as the high SAT group. We see this pattern of results time and time again. So how might you incorporate this kind of principle into your class? 
I mean, it would be very, very straightforward. Again, I know from talking with Tom and with you that you have students do their reading outside of class, and they then do discussion posts. So one way to do self-explanation that could improve the quality of those discussion posts, which we would believe would improve the quality of discussion and activity in class, is that as students are reading these articles, there would be appropriate self-explanation prompts given after different sections of the material. It would be very easy, if one were inclined, to do a study where you could actually assess the impact of these interventions on the discussion posts themselves, on the quality of the discussion in class, and on that comprehensive exam at the end. If anybody's interested in this kind of stuff, I'm the guy who likes to work with people, for fun, doing this kind of thing. All right, the last approach is team-based learning. This is an approach that actually developed not independent of, but somewhat parallel with, the growing body of research in science of learning. In fact, Michaelsen, who developed this approach back in the 1970s at the University of Oklahoma, was well ahead of the science of learning people getting involved in education. That is, the science of learning people have been around since the 1800s, with Hermann Ebbinghaus learning nonsense syllables. I mean, we know a lot about how human beings learn, but the cognitive scientists really got into how to apply the science of learning in education when the federal grant money at NSF for basic research dried up, and a lot of the cognitive psychologists moved into cognitive aging as the National Institutes of Health put money into that area. Then that money dried up, and now, through the Institute of Education Sciences, there is money for doing education research. 
So Michaelsen's work is completely compatible with a science of learning framework, and in fact, although we won't do it here so much, we could contextualize all of the components of this approach within that framework. I picked it because this approach is particularly relevant and appropriate for graduate education. In fact, although a number of studies on this approach have been published at the undergraduate level, most of them have actually been in graduate education. So what we do is develop what I'll call permanent teams, and a permanent team is a team that lasts as long as a project that you're gonna do in a course lasts. A project could be what we're gonna do next week in the class. The project can be that we're gonna cover this topic in the course for the next three weeks, or it could be a course-long project that you start at the beginning and that culminates at the end with a paper or presentation or both, or some other product. So when I say permanent, I mean permanent with respect to whatever the projects are that you're going to work on. So Michaelsen talks about readiness assurance, and what readiness assurance involves is some kind of pre-training. Pre-training could be that, out of class, students watch a lecture capture, where the teacher talks and presents information. It could be some kind of professionally made video. I'm gonna give you access to some videos on science of learning at the end of the talk here today. It could be reading material. You all have reading material for your seminars. People have actually identified that this next component is important as a way to boost performance: after the material is studied or read, there is some kind of assessment. It could be a quiz, an actual quiz. It could be a discussion post. It could be both. It could be coming into class and the teacher asking questions randomly of different students. 
So what you're trying to do is increase the probability that students do what you want them to do, which is read the material in a deep way. But as I said, giving these quizzes is more than assessment. It actually boosts learning. I mean, that's the big point about the so-called testing effect: retrieving material improves learning. We could do these assessments individually. Students do them individually and they can get feedback. What Michaelsen does is, at the beginning of a class session, he gives them an individual quiz. And after they're done, their team gets together and they go through the quiz collectively and argue and discuss about what the right answers are, and then they get feedback at the end of that. He, and others, have done studies showing that both of these interventions boost the quality of student learning. So, application activities. What could you do? What kind of activities? What kind of projects? Case studies, of course, are everywhere here at the War College. Problem solving activities. Of course, you have a big focus on gaming and gaming scenarios. I mean, I don't know a lot about that, and my understanding is I probably won't get into that building over there. But I know enough about the kinds of conceptual requirements, the involvement of teamwork, the involvement of competing ideas, and how you structure things so that somebody who may well be a dominant individual but may not have the best ideas doesn't overtake the process, and so on. So, lessons learned activities. Looking at your syllabi, I see references to Napoleon. I see references to the Civil War, and I can imagine the kinds of readings and discussions that are going on there, which are lessons learned. And as we were talking about, I go to Gettysburg a lot, and I see groups coming down from Carlisle, from the Army War College, that are doing a lessons learned tour with their teacher. 
So each team member has a role to play. And again, these slides are available; I don't expect you to digest all this. I just want you to see that this isn't a matter of just putting groups of people together and that's the end of it. What I'm trying to get at is that each member of the team has a defined role. The approach shown here is based on what's called problem-based learning, very similar to team-based learning, and there's a reference there if you want to learn more about it. So there is a scribe, a tutor, a chair, and the members of the team. Michaelsen says, and I'm not sure how much research is behind this, that teams of around six to eight are ideal. If your seminars are about 12 to 16 or so people, you could think of two teams within a course. So what kinds of problems do we deal with in team-based learning? The best kinds of problems to work on are the ones that are open-ended, with no clear, defined solution. In other words, you can't go look up what the right answer is. Critically, the problems build on prior knowledge. So if you do a project that, say, lasts three class periods, everything you do across those periods builds on what happened before and builds on your prior knowledge. Then there is collaborative work to find solutions. In addition to cognition, I have a lot of background in social psychology, and this is a social psychologist's dream: studying small groups in this kind of situation. Of course, we're people, and we have personalities and approaches, and some of us are less dominant, some more dominant. So developing a structure where collaboration works is a critical part. If you're interested in this, you'll need to go to Michaelsen or someone else to learn exactly how to structure things so that you have an effective group. As I mentioned before, these kinds of problems create cognitive conflict. And the cognitive conflict could be in terms of conflicting with your prior beliefs.
We all enter all kinds of areas having very firm beliefs about the way things are in that field or discipline, and in fact, the more we know, the more firmly we hold those beliefs. But your beliefs and your neighbor's beliefs may not be the same, even on the same academic topic. So this can create cognitive conflict. Team-based learning has an approach which in fact can allow you to take advantage of that conflict, which actually results in deeper learning. There's not a single right answer. When you do gaming, you could do it three years in a row and come up with, hopefully not diametrically opposed, but somewhat different approaches, different solutions. And of course, the critical component of this approach is to try to promote reflective thought, or what we generally refer to as critical thinking. So the team-based process involves preparation, and there are clear learning objectives. The learning objectives in your syllabi are outstanding. You usually have some in-class pre-training where the instructor sets up the problem that's going to be studied over whatever the period of time is. Then there's out-of-class exploration, which could involve many scenarios: as a team member, you're assigned to read this article, you're assigned to read that article, and your work has to be sent on Thursday night to this person right here, who's going to integrate it so that when we come back to class next Tuesday, this is what's going to happen. That's where the chair of the committee comes in. All right, the application process. In class, we have group work that occurs on the project. The teacher has usually, no, not usually, always, developed a well-defined plan for what's going to happen. So this is what's called a learner-centered approach, but it's an instructor-generated approach. This is not an instructor sitting back saying, meet as a group and solve it.
And critically, and this fits in closely with all the science of learning work, assessment is absolutely a necessary component. The assessment approach could involve these items as well as others. There's usually, at the end of the project, a group debriefing, where the members of the team, almost always led by the chair of the committee, do a debriefing not only about what solutions were generated but also about how the team process worked for the group. Now, if you're going to have a series of these projects across the term, and you keep the same groups across them, that group debriefing can critically inform improvements in the group process moving forward. Usually, in team-based learning, there's a team project, which could be a team-based paper or a team-based presentation. There's also peer evaluation and self-evaluation. The peer evaluation that comes from the other members of your group is shared with each member of the team to help them improve their process as group members. Again, this is critically important moving forward. Now, you have a final exam in your courses, and usually the students write two or three major papers in your courses. Those could be the ultimate learning outcomes. That is, by engaging in this process, even though it's a team outcome, you can still have individual accountability in terms of the major paper that's written and/or the final exam. So in-class group work and out-of-class exploration go hand in hand. Most of these projects last over several weeks of a course, if not the whole semester. I'm going to give you one quick example here from UNH. Again, it's not in an area taught here, but it is a graduate-level course with a few undergraduates in it, about 60 students, a pretty large graduate course. Beth Stewart was the faculty member. I'm not going to go into the details of the study, for the purpose of time, but basically what we did was have four projects in this course.
And what we, the researchers, did was randomly select two of them to be team focused and two of them solo. So what happened is, the teacher would introduce a project in class with a clear set of instructions, and students on their own would go do their readiness assurance. They would come to class, they would take a quiz, we would score the quiz, and they would get feedback. If it was a solo project, then what students got was the worksheet that had all the tasks to be performed, and each student individually worked through the worksheet. When the worksheet was done, it was submitted. In the team-based approach, let's see, there were about 60 students in this class, so there were about 10 groups of about six each. For the two projects that were team projects, they would come to class, they would take their quiz, then they would get into their predefined groups, they would get their worksheet, and collectively that team of six would work through it, produce the product, and one report would be handed in. So that's the approach. What we were interested in is how students performed, on a major exam, on conceptual questions that were related to the material covered in those four modules, all right? And here's what we found. And by the way, these are the same students; two of the projects were solo, two were team. On the solo projects, it's about 78 or 79% correct on this midterm exam, and there we go on the team projects: there was a benefit of about seven points, and that's pretty good. Again, these are graduate students, so the level of performance tends to be somewhat higher, but there's no ceiling effect here, so this is something that I think holds up. We've done a study like this in maybe four or five classes so far, and this is what we get overall.
If any of you are experimental design or research design people, any one study like this is potentially problematic, because there are alternative explanations for this or that result. What we do is manipulate different things in different studies so that collectively, if they all show the same kind of result, we have greater confidence. In other words, this is not a laboratory study, but it does show a benefit, and we're not the only ones who have shown this; other people have shown it as well, all right? So how might you incorporate this? I think maybe I've even said enough about this already: you could do this in a fairly straightforward manner. What you need to do is simply think about how you would structure a group activity in a systematic way that has these in-class and out-of-class components to it. It wouldn't be that difficult to do. Again, if anybody is interested in this, I'd be happy to communicate with you over the phone or by email or whatever to give you some ideas. We're doing projects all the time now with faculty at UNH using this approach, so it would be pretty easy to connect with you if you have an interest. And I think in the kind of seminar structure you have, it would be relatively straightforward, and you could also see good learning outcomes. So here's a bibliography. You're not going to write that down now, but you can get these slides. Michaelsen's book with Knight and Fink up there, from 2004, is still a classic, but there has been a lot of more recent work done, including in graduate education. Just to conclude: do you already do this kind of thing in your course? Are you sitting there thinking, well, that's nice science of learning, but I've been doing that for years? If that's the case, that's really good. I'm impressed. When I started doing this kind of work about a decade ago, I thought, that makes sense; I kind of do that.
But I also know that a lot of what I did, hopefully less now than before, was not very well informed by science of learning. And to be sure, when I was a student, the kinds of activities that I engaged in would make me a poster child for how not to do it, right? Just one example here. You read something, you don't understand it. What do you do? You reread it. What do you do when you still don't understand it? You reread it. What the research unequivocally shows is that rereading material has limited benefits. Mark McDaniel published a study a few years ago with about 15 experiments across a whole bunch of designs. They found that in one of them rereading had a benefit, in one case rereading had a negative effect, and in all the others it made no difference. But none of you reread, I know that; I'm just engaging in my confessional here. You can incorporate these evidence-based principles informed by the science of learning, I hope you see, in a relatively straightforward manner. For some things, you'd want the benefit of working with somebody from a teaching and learning center, or somebody who knows about this kind of work, especially if you want to do an experimental design to really tease apart what the effects are due to. But by and large, you can do these sorts of things, and I'm going to give you some references, and you can expect strong learning outcomes. So I've only scratched the surface here. There is so much exciting work being done in science of learning, especially over the last five or ten years, that if you had a serious interest in it, like I do, you could spend a lot of time learning about it and still only scratch the surface. But not all of you, maybe not any of you, are interested in doing that. You do, though, have an interest in your students' learning, and there are some things you could access, some sources that would be of benefit.
So I mentioned before the book that we did. We make no money on this; it's free at the APA website, through the Society for the Teaching of Psychology. What we did, with my co-editors Catherine Overson and Christopher Hakala, is we asked about 30 of the world's leading science of learning experts: would you like to write a chapter, not for the other 15 people who read your stuff, but for faculty who really don't know anything about this work? I mean, after all, wouldn't that be relevant? And guess what? Every single one of them said yes. So this book has the people who are doing the really groundbreaking work, and they all wrote a section at the end about how the everyday teacher can use these principles in their course. The book is about three years old, but it's still quite up to date. Make It Stick, some of you might have it; this is a very popular book. Henry Roediger and Mark McDaniel are the science of learning people. It turns out that Peter Brown is the brother-in-law of Henry Roediger, and he's a novelist. Over Thanksgiving dinner, they were talking about what they did, and Brown said, I can write a book that people will actually buy, and that's it right there. And then my favorite person in science of learning is Richard Mayer from UC Santa Barbara. His is a very nice, thin book that's accessible to students but also to faculty; it's not a technical book. Some user-friendly websites: there's one called Retrieval Practice by Pooja Agarwal, who was a doctoral student of Henry Roediger's. The Learning Scientists, Megan Smith and Yana Weinstein, are, I think, two faculty from UMass Lowell, and here's their website; this is an excellent general site. And what about some videos? Deep Learning, that's Micki Chi. I mentioned Micki Chi before; she's the self-explanation person. She has a video where she talks to us, not to the other technical people, and there you can access that.
Henry Roediger has a video on how people learn that is very accessible, and I did a presentation, two years ago now, that focused on teaching, learning, and technology. I know you have your distance education component; I could have talked about a dozen or more studies that we've done using technology to promote learning based on science of learning. So if you're interested in that, you can go to that YouTube video and watch it as well. So thanks very much.