Thanks, Mary. And hi everybody. This is a really tall podium for somebody who's only 5'2". Okay, Julia. Hi everyone. Thanks for coming to the final session. It's always nerve-wracking being the last people, but we hope that you enjoy this presentation. I'm Julia Avenall. I'm the Learning Designer at the Australian Film TV Radio School. Yeah, and I'm Jay Newton. I'm the Head of Curriculum at AFTRS. Is that better? Oh, that's better. Yes, we're from the Australian Film TV and Radio School. And we're talking today about creating advanced efficiencies with Moodle rubrics. I just wanted to start the presentation with a quote that I heard on the Moodle podcast the other day from Abby Fry, who said, "Moodle is a journey of utilising the features better and better." And that really resonated with me, because I think it's not just about making iterative changes to the learning content. It's also about making iterative changes to your workflow and finding efficiencies in that workflow. So this presentation is about some of the efficiencies that we have made in our workplace using the Moodle online rubric tool. And I think basically we all want to spend less time marking and more time teaching. So this may be something that could help you as well. So who are we? AFTRS was established in 1973 under an act of parliament, so we're turning 50 next year, and our campus is based in Sydney, Australia. Our purpose is to support the next generation of Australian storytellers through teaching all facets of film production and broadcast, and that includes writing, directing, producing, cinematography, editing, design and sound. We have a range of award courses, both undergraduate and postgraduate, and we also have a suite of short and industry certificate courses as well. Effectively, we're practice-based education: you kind of need to be there making things in order to teach the content that we do. 
But we use Moodle to support our learning. So what is a rubric? A rubric is typically an evaluation tool. It's to measure attainment against a consistent set of criteria. Rubrics are really useful in helping to position students against a set of learning standards. And it helps to determine how well they've met certain learning outcomes. It's a really transparent tool. So why do we use Moodle rubrics? AFTERS uses rubrics as our primary assessment tool. We also use some observation checklists and things as well. But we do tend to rely quite heavily on rubrics as our assessment instruments. A big part of the reason for doing this is it helps us to mark really efficiently and consistently across large cohorts and with multiple markers. So in the case of our Bachelor of Arts Screen Production course, we have 90 students each year. So that's a lot of students to mark. And we have multiple markers working with their cohorts across the year group. So it helps to keep things consistent for us. So then using Moodle rubrics with grades being calculated automatically in the back end, it means fewer errors and miscalculations by our markers. Plus the grades are immediately visible to students and we don't need to upload separate marking sheets so there's an efficiency there as well. And rubrics are a useful tool for students' self-evaluation. So we encourage our students to review their rubrics prior to submitting their work so that they can verify whether they've met all of the required criteria before they put the work in. So I'd just like to talk about the nuts and bolts of how you start building a Moodle rubric for those of you who haven't ventured into the advanced grading features in the assignment. So you start with the assignment activity and within the assignment activity you set the grading method to be rubric. 
From there you go to the settings. Step number two is to go to either Advanced grading or Define rubric, depending on whether you're building one from scratch, whether you're adapting one that was used on that assignment before (because they will roll over when you copy your courses), or whether you're creating one based on a template that somebody else has pre-built for you. Step number three: once you get inside the rubric page, you build out essentially a grid that you can add rows and columns to, populated with your marking criteria and the point value that is going to be assigned at each grade band level. There are also a bunch of rubric settings at the bottom which effectively change how the students are going to see the rubric, and I'd say get in there and play with all of those and see which ones are suitable for your students and your particular course. And the finished product looks something a little bit like this. You have your marking criteria on the left, and going from left to right, in our organisation we go from HD to a fail, and each cell associated with that has a positive numerical value behind it, and the rubric is going to calculate the grade based on the median of that grade band. One of the things that I would recommend doing, if you are rolling these out across a bunch of different courses, is to get everybody to make all their rubrics out of 100. We have had instances where some people were doing them out of 50 and some out of 100, and it gets really messy. If you go for 100 across the board, it really simplifies things at the final stages of marking. What you do find is that rubrics will calculate based on the median, so students in a particular grade band will always come out at the same mark. That means if you want to bump a student up or down a few marks, you can do that manually when the grades are released. What makes a good rubric? 
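The median-based calculation just described can be sketched like this. It is a minimal illustration, not Moodle's actual code; the grade band boundaries, criterion names and weightings are all assumed for the example.

```python
# Sketch of the rubric grade calculation described above: each cell carries
# the median mark of its grade band, and the total (out of 100) is the sum
# of the selected cells scaled by each criterion's weighting.
# Band boundaries below are illustrative AFTRS-style bands, assumed here.

BANDS = {
    "HD": (85, 100),   # High Distinction
    "D": (75, 84),     # Distinction
    "C": (65, 74),     # Credit
    "P": (50, 64),     # Pass
    "F": (0, 49),      # Fail
}

def band_median(band: str) -> float:
    lo, hi = BANDS[band]
    return (lo + hi) / 2

def rubric_total(selections: dict[str, str], weights: dict[str, float]) -> float:
    """Weighted total out of 100: each criterion contributes the median of
    its selected grade band, scaled by the criterion's weighting."""
    return sum(band_median(band) * weights[crit] / 100
               for crit, band in selections.items())

# Two hypothetical criteria weighted 60/40. Any two students marked in the
# same bands always produce the same total, as the speakers note.
total = rubric_total({"vision": "HD", "structure": "C"},
                     {"vision": 60, "structure": 40})
```

Because every student in a band lands on the same median value, any bumping up or down a few marks happens as a manual override after calculation, as described above.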
Not all rubrics are created equally, and a poorly worded or overly complicated rubric can make marking really challenging. Ideally the marker shouldn't need to exercise their own subjective discretion when awarding a grade; the indicators should support the marker in selecting the appropriate grade for each student. If you're doing a little bit of a review of your own rubrics, really do look at those indicators and see whether they would be easy to award, or whether they're a little bit opaque and difficult to interpret. The best rubrics are those where the indicators are concise and unambiguous, and each criterion should ask for a very specific, tangible input. Sometimes where we've gone wrong in the past is that within one indicator or one criterion we've tried to ask for too many things, and then it makes it really hard: the marker sort of has to go, oh, they've achieved that but they haven't achieved this, so where do I put them? Trying to really simplify your criteria and have those indicators really clear just helps with that marking. Students themselves should also be able to use the rubric as a form of feedback, so when they get their grade and they've seen their rubric marked up, they can sort of see why; there's a bit of inherent feedback built into that process for them. If we're trying to engage students with rubrics, student input can really assist in the design of a good rubric: ensuring students understand the assessment requirements, and even having them co-determine the rubric, can be a good way to engage them. 
It's also quite useful to provide examples of rubrics in practice. If you have some exemplar student work that you're showing, you could always have the marked rubric sitting alongside it, so they can see how the marking would have happened for that piece of work. And providing fun video explainers has been one way the AFTRS staff have helped our students to engage with the assessment tools, and we're going to have a look at a quick example of that now. Hello. Oh, hi Amber. Oh, hi. Where am I? Welcome to the rubric explainer. I've brought you into the rubric. We're inside it to explain it to our students. So what we're going to do today is talk through the last two assessments, the mood reel and the director's statement. We're going to go through the criteria with the students, and then we'll talk through what an HD looks like so that they know exactly what we're looking for. I'm going to talk through it, and feel free to jump in and ask me questions and chime in yourself, because you've got notes from how the students did last year as well. So I thought we would start with the director's statement, only because in my brain, when I'm doing this work, I start with the director's statement. It doesn't mean you have to do it this way; you could start with the mood reel first, depending on how your brain works. Okay, so director's statement. You can see at the top there it's worth 40% of your mark, and we have four different criteria. So you've got: industry forms and practice delivers a well-structured, clear and detailed vision. To me, that's like an overall thing. I'm impressionistically reading the whole thing and going, this is a good director's statement. This is an HD: well-structured, clear, detailed, sophisticated, at an industry standard. That's what I expect from someone handing this in to Screen Australia or Screen NSW or another state agency. 
And I just want to say that little video was actually completely lo-fi. It was filmed during lockdown, just with a digital background on Zoom using the immersive mode. So, you know, we have a lot of technology at film school that we could use to make videos, but we didn't; we just did Zoom, because we didn't have access to it. That was literally made at home. So I just wanted to talk now a little bit about some of the iterative changes that we've made. I'm sorry the text has gone a bit small there, but I'll just talk you through it. Many of you might relate to this problem: Moodle can't easily deduct late penalties when you're doing marking in the grade book. So we apply our late penalties manually at an assessment panel. But when we were using Moodle rubrics to grade, what we were finding was that our teaching staff weren't able to apply those late penalties, because they were marking on the rubric. So in our workflow process, what would happen is we would make those adjustments in our assessment panel. Basically it was a matter of manually adjusting the grade, then typing into the second cell that a late penalty had been applied and noting what the original mark was, so they knew what they would have got if they hadn't handed in late. What actually resulted was a lot of fiddly work: with 70 to 100 students typing this in, quite often we made errors, there were inconsistencies, and obviously it was time consuming as well. So what we came up with in the next iteration was to use the online rubric to actually deduct the late penalties at the point of marking. So instead of having a positive value assigned to each cell, we applied a negative numerical value. The criterion was the late penalty, and the first column was zero points if they were on time. 
We just said no late penalty applied, zero points deducted; then, moving left to right, if they were one day late they lost five marks, if they were two days late they lost 10 marks, and so on. Obviously everybody has different penalties that they need to apply, but that's how it works out for us: it's five marks a day. That was a bit of a game changer, actually, because teachers, or even tutors, were now able to apply these late penalties, instead of the lead teachers trying to do all this fiddly work at the assessment panel when you're meant to be concentrating on bigger issues at hand. Some of the other efficiencies that we have developed over time: we started to publish our rubrics as templates. To do that, all you need to do is create your rubric in the first instance and then publish it as a template. It's then available for everybody in your organisation to copy, or copy and adapt. I would say, if you're going to start using templates, agree on a naming convention. Once you start getting a lot of templates in there it can be quite tricky to find them, especially when you have a lot of assessments that have quite similar names, or when people don't name them at all. In our organisation, our administrative staff build the rubrics on behalf of the teaching staff. To help them apply the correct numerical value within the rubric, we started out with a spreadsheet that was a pre-calculated matrix of the grade bands, the weightings that were going to be applied, and the numerical value at each grade band. Then we realised we could actually build a rubric template with those pre-calculated grade band numerical values built into the rubric. 
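The negative-value late-penalty row described above can be sketched as follows. This is an assumed illustration, not Moodle code; the five-marks-per-day rate is the speakers' own, while the function names and the six-column cut-off are invented for the example.

```python
# Sketch of the late-penalty criterion: an extra rubric row whose cells
# hold 0, -5, -10, ... so the deduction happens at the point of marking
# rather than at the assessment panel. Six columns is an assumption.

def late_penalty_cells(per_day: int = 5, columns: int = 6) -> list[int]:
    """Point values for the late-penalty row: on time, 1 day late, 2 days late, ..."""
    return [-day * per_day for day in range(columns)]

def final_grade(rubric_total: float, days_late: int, per_day: int = 5) -> float:
    """Rubric total plus the (negative) value of the selected late-penalty cell."""
    return rubric_total + late_penalty_cells(per_day, days_late + 1)[days_late]

cells = late_penalty_cells()   # the row a marker sees: [0, -5, -10, -15, -20, -25]
grade = final_grade(78.0, 2)   # two days late: 78 - 10 = 68.0
```

Because the deduction is just another cell the marker clicks, the grade book total comes out correct automatically, which is what let tutors apply the penalty themselves.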
We built a full rubric with every numerical value for every percentage of each weight band, and it's really just a matter of the administrative staff coming in, using that base template, deleting the rows that they're not going to use and duplicating the ones that they are. Again, that has been a game changer, because it's taken literally days off the work that they have to do at the beginning of the semester, now that they have this template published. So, just some positive feedback. The feedback from students about the video explainer we just watched was extremely positive. By using humour and a light but informative approach, students were made familiar with the rubric and the assessment requirements. It was also interesting that a few students noted it really helped them with reading and interpreting the rubric, and with seeing how our teaching staff approached marking. So together with the video explainer, it really did help them to understand how they might position their work for a high grade, which is what we want. And then just some feedback from the staff. They found that marking is obviously easier and quicker, and they don't have to do all of this manual calculation. They can just click the buttons in the automated rubric to get the mark that they need, and it's also easier to moderate across multiple markers. As I was saying before with the example of our bachelor course, it's such a big cohort that we've got multiple people looking at student work. So they found that that was great, and they were able to really feel like their marking was rigorous, as it should be, and that they could trust the tool, they could trust the rubric, to give the mark that they felt the student deserved. And I just wanted to add as well, there's a cost saving in that too. 
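The pre-calculated matrix behind that base template could be generated with a short script along these lines. It is a sketch under assumed AFTRS-style grade band boundaries and invented weighting rows, not the school's actual spreadsheet or template.

```python
# Each cell is the grade band median scaled to the criterion's weighting,
# matching the pre-calculated matrix described above. Band boundaries and
# the list of weighting rows are assumptions for illustration.

BANDS = {"HD": (85, 100), "D": (75, 84), "C": (65, 74), "P": (50, 64), "F": (0, 49)}
WEIGHTINGS = [10, 15, 20, 25, 30, 40]  # one template row per criterion weighting

def cell_value(band: str, weighting: int) -> float:
    """Point value for one cell: band median scaled to the weighting, out of 100."""
    lo, hi = BANDS[band]
    return round((lo + hi) / 2 * weighting / 100, 1)

# weighting -> {band -> point value}; admin staff keep the rows they need
# and delete the rest, as described above.
matrix = {w: {b: cell_value(b, w) for b in BANDS} for w in WEIGHTINGS}
```

Generating every weighting row once and publishing it as a template is what replaces re-deriving the numbers per rubric, which is where the days of saved work come from.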
It's really enabled us to get people marking within the allocated time, rather than spending too much time and coming away feeling exhausted and overwhelmed because they can't get the marking done in time, or us having to find more money to pay more people to mark. That's right. And in addition to using our automated rubrics, we also ask the tutors to provide some written feedback as well. Having a more efficient rubric marking system meant that they could then spend more time on that written feedback. So that's it. Thank you. This is us. Does anybody have any questions? Yeah, thank you. That was really interesting. We do have plenty of time for questions. I was interested in, well, it's not a question, just a comment: your late penalty negative marking row. I thought that was a clever idea. Okay, so if you have a question, put your hands up and wait for the microphone to come to you. Hi, thank you for this very interesting and clear presentation. I have two questions, actually. One is, can you have multiple graders for the same assignment, same student? Because we do do that sometimes at my school. And I just forgot the other question, but it'll come back. Yes, we can, if the grade hasn't been released yet. Yes, we can do that, and sometimes we do, especially if there might be some sort of contentious issue, or we have a new grader who isn't sure, especially on the point that Jay mentioned before, where we've made the mistake of cramming too much into one assessment criterion and they don't know whereabouts on the rubric to mark. We might get two markers in, or if the student wants to question it. Generally we would have one person grading, but we can have two. No, it'd still be the one overall result, but it's editable before you release the grade. 
They would have to agree, which would happen through a moderation process that we have. So again, for courses where we've got multiple markers looking at a lot of student work, there's always a moderation session that would happen, so that they'll do that little bit of benchmarking, and they might adjust the grade bands depending on looking at a cross-section of student results. Did you remember your other question? Hi, I always have an issue working with faculty coming up with what's in the criteria. Sometimes they put too much stuff in the criteria, and then they're like, oh, they can't get enough. I had one person that had a 38-criterion rubric that they used to grade. How do you talk to faculty about what to include and what not to include? Yeah, 38 criteria sounds like using a rubric is the wrong assessment tool. It sounds like that's more of an observation checklist or some other tool, but yeah, that sounds like you're just trying to cram too much into quite a limited tool. I mean, obviously a rubric really should only have probably no more than four or five criteria, with the 100% weighting distributed across those criteria so that it calculates to 100% at the end. So my advice would be to really break it down: obviously look at the subject learning outcomes first, then think about how you might get the student to demonstrate those learning outcomes, and then do consider maybe having some complementary assessment tools alongside your rubric. It is always a challenge to try and make sure that each criterion is really tangible and specific, rather than being loaded up with lots of things. It's really tricky. 
Hi, so I've got a sort of follow-up question to that. Is it realistic? From what you said it seems quite difficult. Do people get their rubrics right first time, or does it take a few goes of using the rubric and then tweaking it to actually get to a rubric that really works well? Yeah, exactly, so there's always that sort of continuous improvement. You give it a good go, you kind of say, well, this is the rubric that we've agreed on for this task, but then through course review, at the end of each round of delivery, we always take the time to have a look at it and go, is that working? I guess the biggest indicators are: is it easy to mark against, or was that really challenging? And, like I said earlier, does it lack clarity, so you go, oh, it's a little bit of this and a little bit of that, let's try and split the difference and put it somewhere around a credit sort of grade, because we don't know where else to put it? So we always do a review process each year, and we have found that we've tweaked and refined those criteria over time, and then after a while it's like, yeah, now that rubric is really working. I'll tell you what it does help with, though, in terms of a learning curve for people. We have a lot of industry experts who come in and teach our students, and they are not teachers. Quite often they're quite nervous about marking students, because they have never done that before, and they may or may not even be coming back the next semester; they're in and out. And when they get into the online rubric and they realise that they don't have to do any maths, they don't have to sit there and mull over where a student has landed, they really just have to select the cells and trust in the rubric, it really does free them up to think more about the specific feedback that they're going to give, and they can just write a couple of paragraphs. And the 
learning curve is really minimal, and that is great, because they're here for a short time and a good time, obviously. Oh, you remembered your second question. Do you usually mark up or mark down? Meaning, with the criteria that you put in place, do you usually give points, or deduct points for not meeting the criteria? For the automated rubrics, the inbuilt rubrics that we're speaking about, for each grade band we take the median, so it sits right in the middle within each grade band, which means that potentially multiple students can end up being marked the same: if they're marked, say, an HD for that criterion, a D for that one, a credit for that one, it could end up calculating to the same overall grade. And then we will often do a sort of secondary moderation process before we release the grades, and have a look and decide whether we want to tweak some of those grades up or down. And so, as Julia talked to before, there's some discretion for us to override that automatically calculated total if we want to bump them up. If they're sitting just shy of being bumped up to the next grade band, we might decide, actually, that piece of work was really good, it's deserving of a distinction rather than a credit, so we'll just notch them up manually by a couple of marks, just to get them over the line. So that is a discretionary thing. We tend to find, and again speaking to the success of the rubric, a good rubric does mean that we're usually happy with where the student ends up sitting, and we're able to go, yeah, that kind of worked. But there is always that element of manual override if we need it. The feedback that the students get, is it, you didn't do this, or you did do that? Yeah, that's right, so that will come through in the written feedback. So obviously, like I say, in each of the sort of indicators in the rubric 
there's a little bit of inherent feedback in there, but we also ask them to write a little bit of written feedback as well. It's usually just a couple of sentences: something good, something bad, something for later. So we tend to give them that sort of written guidance. And also, because AFTRS is a small school and we've got fewer students than I'm sure a lot of you, we do a lot of mentoring, so we always invite students to seek feedback and have a one-to-one with their tutor to better understand their grades. So there's additional feedback there too. Great, thank you. One last question? Yeah, yep, going, going, go on. Okay, thank you, just in time, very last one. It's like an auction. You talked a bit about peer-to-peer assessment. The rubric you developed, can it be used in peer-to-peer assessment, or does it have to be something completely different? You can, but we haven't done that yet. We did talk about it, actually, because we did have one assessment with a peer-to-peer component. It was basically a screening where the students had to screen their work and then get an audience response to it. We actually chose to use a Mentimeter for that, and then we manually put those grades into the rubric, but the students were given a copy of the rubric to make their judgment. It was basically like the movies' one-star kind of situation. But we haven't done any other peer-to-peer marking using the online rubric. Thank you.