Alright, so my name is John Paul Posada. I'm the Educational Technologist for the Faculty of Engineering at UNSW, that's the University of New South Wales, and that's a mouthful. For the past few years at UNSW, I've been working a lot on managing very large courses, and by very large, I mean courses of about 1,500 students. These are mostly project-based courses with a lot of teamwork in them: students join projects, they form teams, and then they produce an outcome from those teams. That's what we've been doing for the past five to eight years. And with those needs, I needed some help from a developer, a very talented developer. That's him.

Yeah, I'm Morgan Harris, I'm the plugin developer for the Faculty of Engineering at UNSW. My mic should be on. Yes, it is. Everyone can hear me? Good. So, we've made some cool new teamwork plugins, and we're going to show you one of them. It's called Team Evaluation.

So, what is Team Evaluation, and why should you bother with it? Well, think about what happens when you mark a group assignment. You generally go with this solution. Can everyone read that? Yeah? Fine. You give everyone the same mark. That's not really optimal, and it's kind of not fair. This is something a lot of students complain about with group work: they don't end up with a grade that fairly reflects the amount of work they did. Of course, you have the other option, which is to assess everyone individually, but that's not really feasible, especially in the kind of courses we're talking about, with 1,000 or 1,500 students.

So there is another way, and that is to ask them. The best way to determine the contribution of each team member is to ask their teammates. And you should get them to assess themselves as well, to self-reflect, because that's a very important skill to hone in higher education.

To do this, we basically take the approach of team-based learning, and we follow its four main principles, which we think Team Evaluation addresses.

First, we manage the group, and we manage it well through early evaluation. Using Team Evaluation, we can give team members early feedback so they know how they're doing within their team, what their teammates think of their progress, and so on, instead of waiting until the end of the project, when feedback really isn't going to be helpful because they're already done with their teamwork.

Second, we give them timely feedback, as I mentioned: it comes throughout the whole process of their work together.

Third, it helps with accountability. Every individual within the team feels accountable to the team, because they know they're going through this process. It's not just about what they produce, but about how they produce it. That's important, and Team Evaluation helps with that.

And fourth, it helps promote team development as part of the assignment design. That really separates the outcome from the process, and we want to measure both. The outcome is important, because it's the product they're going to produce, but we also want the process of getting to that outcome to be evaluated, and to be something they think about.
Because you don't want one person in the team to feel responsible for the whole team's work and end up doing all of it, basically. That's not what teamwork is about.

All right, so what actually happens in Team Evaluation? Let's look at the regular way first, the way we do things now: we assign a mark, and everyone gets that mark. But these people aren't faceless grey automatons, they're individuals, and they've all brought different things to the group. So you want to know who brought what. How do you find out? Well, if you ask them the right questions, you'll find they have some pretty well-formed opinions about who did what and how the group worked together. In aggregate, that actually forms a pretty decent data set, and if you throw self-reflection in as well, you're well on your way to understanding that group dynamic.

So how do we turn those opinions into fairer grades? That's where Team Evaluation comes in. Team Evaluation turns those opinions into numbers, and then we can use those numbers to adjust grades.

I'm going to take you quickly through a little worked example, a very simple one. There's just a single one-to-five question, where one's the worst and five's the best: how much work did each team member put in? For this one and only question, Bill has given himself a four out of five; he's a big fan of himself. He's given Mary three out of five, and he's given Lee a five; he thinks Lee's pretty good. Mary is less of a fan of Bill, she's given him a two. She's a big fan of herself, though; she's given herself a five out of five. And so on.

Looking at all these scores, we can see a couple of important things. First of all, Roberto here has not completed the questionnaire, and that'll be important later. But also, these bars are all different lengths, and that's actually kind of a problem, because we're not doing a quantitative assessment here; we're asking the students to compare teammates against each other. So really, all fives should mean the same as all ones: whether you think your teammates are all equally terrible or all equally fantastic, it doesn't matter, because either way they all put in an equal amount of work.

So we have to fix this, we have to adjust these scores, and the way we do that is pretty simple. We just stretch each person's bar out so their ratings add up to 100. Except here they actually all add up to 125, because Roberto didn't mark anything: we need the sum of all these scores across the team to come out at 100 times the number of people in the group, for a reason that will become obvious in a second.

After that, it's very simple. We just take all those bars and stack them on top of each other, and you end up with each person's team evaluation score, a reflection of how much work they put in. You can see Roberto is obviously super lazy; he didn't even complete the questionnaire. But Lee has done the bulk of the work.

So how do we adjust the marks? Well, here's the mark we gave the group: the group assignment received a 65. Normally, you'd just give 65 to everyone, and you can see on the face of it that's not fair. So we just multiply each person's score by that group mark, and we get the adjusted grades you saw earlier.
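Here is a minimal sketch of that calculation in Python (the plugin itself is PHP; the function name, the four-person team, and Lee's row of ratings are illustrative fill-ins around the numbers quoted above):

```python
# A WebPA-style adjustment: stretch each rater's scores so only
# relative judgements matter, stack them, then scale the group mark.

def team_eval_multipliers(ratings, members):
    """ratings maps each responding rater to the scores they gave
    every member. Returns a multiplier per member, averaging 1.0."""
    totals = {m: 0.0 for m in members}
    for given in ratings.values():
        row_total = sum(given.values())
        for ratee, score in given.items():
            # Normalise this rater's bar to sum to 1, so all 5s and
            # all 1s mean the same thing: equal contribution.
            totals[ratee] += score / row_total
    # Scale up for non-responders (like Roberto), so the multipliers
    # still average 1.0 across the whole team.
    scale = len(members) / len(ratings)
    return {m: totals[m] * scale for m in members}

members = ["Bill", "Mary", "Lee", "Roberto"]
ratings = {  # Roberto never filled in the questionnaire
    "Bill": {"Bill": 4, "Mary": 3, "Lee": 5, "Roberto": 1},
    "Mary": {"Bill": 2, "Mary": 5, "Lee": 4, "Roberto": 1},
    "Lee":  {"Bill": 3, "Mary": 4, "Lee": 5, "Roberto": 1},
}
group_mark = 65
for member, mult in sorted(team_eval_multipliers(ratings, members).items()):
    print(f"{member}: {group_mark * mult:.1f}")
```

Run as-is, this gives the shape of result described above: Lee lands well above the group's 65 and Roberto well below it.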
So now John Paul is going to tell you a bit about the existing tools and why we decided to build our own.

Yeah, so basically, the state of play. When I first started looking into this, there were a few tools out there that I took into consideration. The first one was WebPA. WebPA stands for Web Peer Assessment. It's out of Loughborough University, it's an open source tool, and it was my number one choice, because it was open source and because it had a lively developer community and user community.

The other tools worth mentioning are CATME and SPARKPlus. CATME is a whole suite of tools that takes you through the whole team process, from building a team to evaluating a team's performance. It's a project that's been going on for a long time, a National Science Foundation grant-funded project, and it's been free for a long time, up until next year: next year they're going to start charging a small fee, per student, not to students. The fact that it's not open source and not easily integrated into the systems we use was the reason I didn't choose that tool. The other tool is SPARKPlus, developed at UTS. It does a very similar thing: it basically tries to even out the scores based on how students feel their team members performed within the team. The reason I didn't choose SPARKPlus was the same: it's not open source and not easily integrated into our systems.

So we used WebPA for a while. It was a good tool, it was open source, and I was able to host it and manage it myself, until it became a bit too much administrative work. I had to put users in, take user marks out, notify users of their marks, and get Morgan to help me develop special reporting tools for it. Finally we decided to make it a little more integrated into the systems we use, so we put it into Moodle with a plugin that pretty much functions like an LTI: it sends our users over to WebPA and then sends their marks and responses back into Moodle to give them a final mark. That was good for a while. And then I decided, no, let's make it better. So Morgan made it better.

So what's different about a Moodle plugin? Why would you bother doing a proper Moodle local plugin? Well, first of all, it's about integration. A local plugin is obviously going to integrate a lot better with the tools that are already in Moodle. The most important thing is that you find it where you expect it, which is in the activity, the group assignment tool. You can see right here: you've got your assignment at the top, you've done your submission, and right below it is Team Evaluation. It's right where you expect to find it.

The other thing about it, of course, is that it's extensible with subplugins, because everyone loves plugins. The M in Moodle stands for modular; we're all about the plugins. So it is almost entirely plugins. We've got the best plugins, the amazing plugins. First there's the questionnaire. The questionnaire is the first thing you'll see in TeamEval: you'll create it, and your students will fill it out. All the question types are plugins, but they're not question bank plugins, because question bank plugins have the idea of right and wrong answers, whereas these are a lot more ambiguous, so we've made our own plugin architecture for that. Activities are obviously plugins, but an activity that wants to adopt TeamEval implements a little thing called an evaluation context, and that's how it talks to Team Evaluation. Then obviously the questionnaire results have to feed into the activity plugin to adjust the grade, so we've got an evaluator. Even the evaluator is a plugin; there's only one at the moment, but there'll be more coming. And then obviously we've got to get some data out of here. We've got to do evidence-based learning, so we've got a bunch of reports, and all the reports are also plugins.
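To make that architecture a little more concrete, here is a rough, hypothetical sketch of how the pieces relate, in Python for brevity rather than the plugin's actual PHP. Every class and method name here is illustrative only, not TeamEval's real API:

```python
# A hypothetical shape for TeamEval's subplugin architecture, as
# described in the talk. All names are illustrative, not the real API.
from abc import ABC, abstractmethod

class QuestionType(ABC):
    """A questionnaire question. Unlike question bank plugins there is
    no right or wrong answer; some types (comments) have no value."""
    @abstractmethod
    def value(self, response):
        """Numeric value of one response, or None for comment types."""

class EvaluationContext(ABC):
    """The shim an activity module provides so TeamEval can find its
    groups and push adjusted grades back into the activity."""
    @abstractmethod
    def group_members(self, group):
        """Members whose contributions are being evaluated."""
    @abstractmethod
    def adjust_grade(self, member, multiplier):
        """Apply a TeamEval multiplier to the member's group grade."""

class Evaluator(ABC):
    """Turns questionnaire responses into per-member multipliers.
    Loughborough (WebPA-style) is currently the only one."""
    @abstractmethod
    def multipliers(self, responses):
        """Map each member to a multiplier averaging 1.0."""

class Report(ABC):
    """Gets data back out: scores, responses, feedback and so on."""
    @abstractmethod
    def render(self, evaluation):
        """Render this report for teachers."""
```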
So, the plugins we've got at the moment. There are two question types: the Likert scale type, which can be zero to ten, one to five, whatever range you want, and the comments type, which doesn't actually provide a value, but just lets your students enter feedback that their peers can see. In coming versions we'll have a split-100 type, where you have to divide up a pie amongst your teammates, and a contribution matrix, which is a big matrix of checkboxes where you can say who contributed what to the group assignment.

For activities, there's actually only one built-in Moodle activity that supports group submission, and that's the assignment plugin. There's also a version of Workshop that we developed in-house that supports group submission; that's available on the NetSpot GitHub, and the version that supports TeamEval is available on my GitHub. You can find a link to that on the Moot page for this talk. So those are the activities supported at the moment. If you develop your own activity plugin and you want it to adopt TeamEval, please come talk to me, and please read the implementer's guide on the GitHub.

There is, as I said, only one evaluator plugin. We call it Loughborough, because it's based on WebPA, and WebPA was developed at Loughborough University. It weighs self-assessment equally with peer assessment, so if you think that might be a problem, probably the best thing to do is turn off self-assessment. As you saw earlier, it's basically just a simple weighted mean of all the grades; the worked example we went through is exactly how the Loughborough evaluator works.
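In symbols, and this is our reading of the WebPA-style maths from the worked example rather than a formula quoted from the plugin source, the Loughborough evaluator computes, for each member \(j\):

$$ \text{grade}_j \;=\; G \times \frac{n}{|R|} \sum_{i \in R} \frac{s_{ij}}{\sum_{k} s_{ik}} $$

where \(G\) is the group mark, \(n\) is the team size, \(R\) is the set of members who submitted the questionnaire, and \(s_{ij}\) is the rating member \(i\) gave member \(j\). If everyone rates everyone equally, every multiplier comes out as 1 and everyone just gets \(G\).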
In future versions, we'll have some more evaluators based on the tools John Paul mentioned earlier. We're going to have Camperdown, based on SPARKPlus, and Raleigh, based on CATME. You may be noticing a theme: each one is named after the place the tool it's based on comes from. And we're going to have Kensington, because UNSW is in Kensington, where we'll try to devise our own take on how best to evaluate those scores.

Then we've got a bunch of report plugins. Oh good, I did screenshot these. There's the scores report, which is just the scores everyone got. There's responses, which shows the actual detailed individual responses everyone gave to those questions. And there's feedback. The cool thing about the feedback report is these little switches down the side, which let you reject feedback that you think the person it was given about might not want to read. It's a good way to protect against trolls and bullies and people who just write the F word six times in a row. And yeah, I did write that one. In future versions we'll have a self-assessment to peer-assessment ratio report, so you can see people who are bigging themselves up, or maybe a bit down on themselves, and an outlier assessments report, which will try to highlight people who we think might be trying to game the system.

And we've got a bit of built-in API. Do I have time to cover this? Not really, so I'll dive through it. Are there any developers in the room? There's a handful, okay.

So, there's that accept-and-reject mechanism you saw before: if you're developing a report plugin that handles feedback, you should opt into that. We've got report downloads, a really easy way to make downloadable versions of reports. And we've got readable response formats. This is actually really important, and we use it in the comments question type: if you just showed the entire comment, people could dump huge amounts of text into your reports, and you want to avoid the scroll of death, or the sideways scroll of death, which is even worse. So we just take the first 50 characters and give you a little button to expand it. If you're developing a question type plugin, you should take advantage of that. And there's some cool API coming in the future as well.

That's my Twitter handle, if you're a developer and you want to get in contact with me. I've got a slightly more dev-focused version of this slideshow that I didn't have time to present yesterday, so maybe I'll do a lightning talk or something.

So how do you get it? Well, it's in public alpha right now. It's currently not feature complete and the API isn't totally stable, so for that reason it's not on the Moodle plugins directory yet. Part of the reason is that we really need modules to adopt it, so again, if you're a developer, please come talk to us. It will probably ship with a script and instructions on how to modify the assignment plugin so that it adopts Team Evaluation. And that's where you can download it. If you can't remember that URL, I don't blame you; you can find the link on the Moodle Moot page for this talk.

All right, thanks everyone. Any questions? Nope. Great. Cheers everyone. Thank you.