Hello, good afternoon, everyone. I'm Julie Vos, head of digital education at City University of London, and I'm the chair for this session. I'm delighted to introduce our speaker, Laura Hollinshead from the University of Derby, who will be talking about "When support and quality assurance collide: a learning technology journey to maintain staff wellbeing in a world full of data". Over to you, Laura.

Thank you. As Julie said, I'm Laura Hollinshead, and I'm a learning technologist at the University of Derby within the Department of Digital Learning. This presentation is a reflection on the journey my team and I took when we were asked to review programmes at the university on their delivery of a set of digital learning baselines, which were introduced for the 2020-21 academic year.

Quality assurance isn't an area we'd ventured into as a team before, but we were becoming more aware of the issues around universities being asked to evidence where policies had been implemented: how do they know those policies have actually been implemented and met? So this was the challenge brought to our team, and we had to develop a process for doing it.

We also knew we had really good relationships with our academic staff, and we didn't want a process like this, which might be seen as monitoring people and their performance, to get in the way of that relationship, because it's a really important part of our job and the support we provide. We were also aware of the considerable pressure academic staff were under in relation to their workload, and of the workload we had as a team. In addition, we had to consider that there's a lot of diverse teaching and learning practice going on. Although we had to produce a consistent data set that the university could use to judge whether the standards had been met, we wanted to make sure we weren't dismissing different ways of delivering those baselines.

These were the baselines we had to work with at the university. They were developed off the back of research from the sector, sector reports, and feedback from our students and staff through university surveys. Prior to the review taking place, academic staff had taken an online course during the summer to understand the baselines and reflect on their own practice in relation to them. However, as I've mentioned, some of these baselines can be interpreted in different ways and can be met through a variety of pedagogical approaches. We wanted to ensure this was represented within the review, so that staff didn't feel we were pushing a one-size-fits-all approach where they had to do things in a specific way for the baselines to be met. We wanted the review to re-emphasise that we weren't expecting them to work in a standard way.

So what was the challenge? I've talked a little about staff wellbeing already, but there was also the fact that some of these things couldn't be automated. One of the baselines, for example, expected people to present learning content in structured and manageable segments.
That's not something we can automate: there's no data from the VLE that tells us whether content is organised in a manageable, structured way. That takes human judgement. We also knew the evidence might not all sit within the VLE, as we had some in-class teaching and people using other platforms such as Microsoft Teams, so not everything was going to be sitting in one place.

We wanted three principles to be core to the way the process worked: it had to be manageable, fair and supportive. Manageable in relation to our own workload, so that we could actually handle the work of assessing programmes against the baselines, and manageable for academic staff, so that the time they needed to put in was kept to a minimum. Fair, so that whatever assessment came out was a fair representation of the teaching and learning practice taking place, taking into account the different approaches people might use to meet the baselines. And, most importantly, supportive: our role is about supporting academic staff to use technology in their teaching, and we didn't want them to think we were just going in to check whether they were meeting the baselines without actually supporting them to meet them.

Now let's look at the design. The design of the review started with how many programmes it was possible for us to review. We have four colleges within the university, and we chose to review 20 programmes per college within the academic year, so 80 in total; we wanted a reasonable idea of how many we could assess in that period. These were both undergraduate and postgraduate programmes. For the undergraduate programmes we assessed three modules each, and for the postgraduate programmes two modules each, looking at the different levels: a year one, year two and year three module for undergraduate, and for postgraduate, two of the modules delivered within the programme. Modules were partly selected based on student numbers: some modules are optional, so we tried to target the mandatory modules with the most students, the ones students take as they progress through.

In terms of staffing, we had 4.8 full-time-equivalent learning technologists working on the review. We also used some personas as part of it: we tried to get into the mindset of students at the different levels and the types of experience they may have had before encountering digital learning on their programmes, and to keep that in mind when we were actually looking within the modules.
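As a rough sense of the scale involved, here is a minimal sketch of the review-load arithmetic under the numbers above. The talk gives only the totals, so the undergraduate/postgraduate split is a made-up assumption for illustration.

```python
# Rough review-load estimate for the baseline review described above.
# Known from the talk: 4 colleges x 20 programmes, 3 modules per UG
# programme, 2 per PG programme, 4.8 FTE learning technologists.
# The 60/40 UG/PG split below is an assumption, not a stated figure.

COLLEGES = 4
PROGRAMMES_PER_COLLEGE = 20
FTE_LEARNING_TECHNOLOGISTS = 4.8

total_programmes = COLLEGES * PROGRAMMES_PER_COLLEGE  # 80
ug_programmes = int(total_programmes * 0.6)           # assumed split
pg_programmes = total_programmes - ug_programmes

modules_to_review = ug_programmes * 3 + pg_programmes * 2
print(f"Modules to review: {modules_to_review}")
print(f"Modules per FTE: {modules_to_review / FTE_LEARNING_TECHNOLOGISTS:.1f}")
```

Even under this assumption the arithmetic lands at roughly 40 or more module reviews per full-time staff member, which helps explain why the workload turned out heavier than planned.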
We thought about the types of evidence we were likely to see against each of the baselines, without making that too prescriptive: we wanted to remain open to practices we hadn't considered that might still contribute towards the baselines, but it helped to have an idea of what the evidence could look like before going into the review. And, to reiterate, we knew not all the practice would necessarily sit within the VLE, so we made sure the review also included conversations with module leaders and programme leaders.

This is the process we developed. We started by talking to the programme leaders, informing them that their programme had been selected and what the process involved. They were then asked to get the module leaders of the selected modules to do a self-rating against the baselines, and also to do some pre-emptive work identifying practices we felt would not be obviously visible within the VLE. For instance, one of the baselines was around socialisation, which might not be something we can clearly see in a VLE environment, so we gave module leaders the opportunity to tell us more about the types of practice they'd been using to meet it. We also had student programme reps complete a rating of how well they considered the different baselines were being met.

At the same time, the learning technologists carried out their review. In a minute I'll show you the rubric we used to give feedback to academic staff and to rate how well the baselines were being met. The review, alongside the information from the module leaders and programme reps, was compiled into a report, which was sent to the programme leader ahead of a further meeting to discuss the outcome, signpost further support resources and identify any actions needed to meet the baselines. After that point we also followed up to see how they were getting on.

We found that, through the conversations with the programme leaders, and sometimes the wider team, the ratings in the review might subsequently be updated to include practice they were able to articulate but which we couldn't see within the virtual learning environment. We also had a lot of support resources already prepared for the different baselines, plus follow-up sessions and drop-in sessions, so people could easily come in and get additional support on the baseline areas they were unsure about. And we felt that talking to the programme leaders meant a fairer representation of their practice went forward in the review.

This is the type of rubric we completed. This is an example from one of the programmes we reviewed, and you can see some of the baselines included along the side.
We rated them on a three-point scale. Zero meant something wasn't present or was unavailable, so the baseline wasn't being met. One meant it was partially met: there was evidence they were doing something towards it, but perhaps one element wasn't quite working appropriately. And two was green: they were doing it effectively, and we could see those things present within the module area.

The other thing to say is that we set out from the outset that this was never going to be a judgement about how effective their learning and teaching was, because we were not in a position to make that judgement. Instead we focused on whether opportunities were being provided for those things to take place. For socialisation, for instance: are there opportunities for students to socialise? That's what we were really looking for, not whether students were actually engaged in the socialisation activity, because we are not subject specialists in how a subject should be delivered, nor are we necessarily experts in pedagogy. We were mostly focused on the digital experience for students.

You can see here we detailed some actions. And where people were already meeting the baselines really well, we provided additional information to help them consider how they could enhance that practice further; although a rating might be green, that doesn't mean it can't be improved upon. In terms of what we found in the review, accessibility came up quite a lot as one of the lower-scored areas, as did socialisation, and as a result we put forward additional resources and training sessions people could attend.

The rubric data is what was then reported up to people in senior leadership positions, but they could also see the detail on request if they wanted further information about how the different baselines were being met within a programme.

We also gathered feedback from the people involved in the process, because we wanted to make sure it was delivering what we set out to do. This was on a five-point satisfaction scale covering the support they received in the different areas. As you can see, most staff were satisfied that the review helped them identify actions to implement the baselines, that they could identify what support was available to help them meet the baselines, and that it helped them identify practice that could move beyond the baseline expectations and enhance their practice. We also got feedback on the training and guidance materials we provided alongside the review itself: again, most people were very satisfied or somewhat satisfied with them, and 82% were satisfied overall with the review process.
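To make the rubric mechanics concrete, here is a minimal sketch of how three-point ratings like these could be recorded per module and rolled up into the programme-level data reported to senior leaders. The baseline names, module names and the rollup rule are hypothetical illustrations, not the team's actual tooling.

```python
# Hypothetical sketch of the three-point rubric described above:
# 0 = not present, 1 = partially met, 2 = met effectively (green).
# Baselines, modules and the "flag for support" threshold are illustrative.
from statistics import mean

RATING_LABELS = {0: "not present", 1: "partially met", 2: "met"}

# One dict per reviewed module: baseline -> rating.
module_ratings = {
    "Module A (year 1)": {"structure": 2, "accessibility": 1, "socialisation": 0},
    "Module B (year 2)": {"structure": 2, "accessibility": 1, "socialisation": 1},
    "Module C (year 3)": {"structure": 1, "accessibility": 0, "socialisation": 1},
}

# Programme-level summary: average rating per baseline across modules,
# highlighting lower-scored areas (e.g. accessibility, socialisation)
# as candidates for extra resources and training.
baselines = sorted({b for ratings in module_ratings.values() for b in ratings})
for baseline in baselines:
    avg = mean(ratings[baseline] for ratings in module_ratings.values())
    flag = "  <- target extra support" if avg < 1 else ""
    print(f"{baseline}: average {avg:.1f}{flag}")
```

A rollup like this keeps the module-level detail available on request while giving leadership a consistent programme-level view, which matches the reporting split described in the talk.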
The other thing we were keen to do within the review was identify best practice: where really good practice was seen within the university. Through the reviews and the information fed back, we did identify some of that best practice, fed it back out and encouraged the sharing of it within the institution. 37% of staff felt the review helped them identify practice they were planning to share with others in the university, whether within their own team or more widely. We also wanted to understand how well the review helped them identify practice they could see the benefit of carrying forward, so that even when back on campus they could see the benefit of continuing a particular digital practice into the next academic year. We were quite pleased to see that about 44% of people felt that had happened within the process.

Now a little about what didn't work so well. The workload for us turned out to be a lot bigger than we originally thought, even though we felt the original plan was reasonably scalable. It was also difficult for academic staff, because their workload at that point was higher than expected, and this led to a lot of difficult conversations, which we had anticipated to some extent. Often those conversations were less about the process and the baselines themselves and more about workload and wellbeing: people's ability to meet the baselines given their workload, and how that related to their own and their team's wellbeing.

We did streamline the process for semester two to make the review more manageable. That mainly focused on the reporting of data back to strategic leaders within the institution rather than on the process itself. We are planning to carry out this activity again this academic year with a revised, updated set of digital learning baselines. Having gone through the process last academic year, we're hopefully in a position to anticipate some of the requests from senior leaders, which added workload beyond what we'd originally planned, so it was a good process to have gone through. One thing we want to make sure we do again is focus on the support we provide; knowing some of the problematic areas, we'll be better able to plan that support and ensure it's available.

Okay, I think that's my 20 minutes.

Thank you, Laura. Very well kept to time there. I don't think we've got any questions in the chat yet, but please do post your questions. I just wondered: one of your aims was about providing this as a supportive approach, and you did the feedback from the academics, but did they feel it was a supportive approach?

Yes, most of them did. They didn't feel it was being done to monitor them.
And that was our concern: we felt they were going to think, oh, okay, you're monitoring us on whether we're meeting these baselines. With the way we designed the process, we'd built in the support from the beginning. We put links to further support in the review document we sent back, and we emphasised support within the meetings we had with people: look, we're available to help you. The conversations we had were generally done in a really supportive way, so they weren't focused on "you're not doing that"; they were more focused on "some of this could be really helpful for you to think about, and this is the support available to you".

As for what they felt about the data aspect, and how that was taken up through strategic leadership, I have to admit I'm not completely sure, because strategic leaders in the institution tend to take the data in a slightly different way than we were taking it, in terms of it being supportive. But we don't feel it has had a negative impact on our relationships with academic staff; if anything, it has exposed us to more academic staff than we were in contact with before.

That's good. And you may find people actually asking to be reviewed this year rather than you having to select them.

I'd like to think so. They should also be in a better position, because although the baselines have been revised, there are real similarities with the previous baselines, so a lot of existing practice should carry over.

Great. We have got a question from Kat: did you have a discussion ahead of time with the module leaders about how much time it would take on their end?

No, we didn't. We set out with the programme leader that we were looking at a one-hour meeting with them, and for the module leaders it was just a case of completing a form delivered through Microsoft Forms. But the time depends on how much detail they decide to put in: some would just do the multiple-choice questions and leave everything else blank, and some would put in more detail, so it's hard to say how much time it took. They did also have pressure put on them to complete the forms, being chased if they hadn't, which I suppose is possibly the negative aspect: not necessarily more time, but more pressure to complete it.

I can't see any other questions in the chat, but quite a lot of good feedback. So thank you very much, Laura, for the session; really enjoyed it. And thank you very much, everyone, for coming along. Oh, hang on, we've just got one more: do you follow up with the ones who have completed the form, and use interviews, podcasts or video to promote the initiative to others?

We don't necessarily promote the initiative, just because it's a mandatory type of activity in the university. But in terms of the best practice we capture, we did follow up on those things to try to get blog posts written and shared further. The difficulty there, again, was the amount of time academic staff had available to do that.
That was, again, one of the challenges. We did follow up with people who had completed the forms and felt they weren't doing things very well, and often what we found is that they just lacked confidence. They often rated themselves lower than we rated them in the reviews; their scores would be lower than our scores. Some people are so pressed for time that they think, oh, I haven't had time to do this properly, I've not done it very well, when actually they've done it well enough. It's just not exemplary, not how they would want it to be, but that wasn't really what we were looking for.

Great. Thank you, Laura. So thank you, everyone, and thanks again for the session. Enjoy the rest of your afternoon. Goodbye, everyone.