So, in just a couple of minutes we're going to have Louise take us through her talk on defining evaluation. Louise has around 20 years' experience in this area and has just taken on a really interesting role at the University of Leeds. For anybody not familiar with it, Leeds is based in the UK and has a long and very prominent history of using learning technology; it has been influential and innovative in this area. So it's quite an interesting role, but I'm sure Louise will tell you a little more about it herself.

I'm here to represent ELESIG, which is a community of researchers and practitioners. We're trying to build up practice, knowledge and a community around this area of evaluation, particularly of the student experience of using technology. If you enjoy this talk, we hope you might join our mailing list, or keep an eye on the ELESIG events page, where we'll be doing more of this kind of thing. We're also launching our ELESIG Scholars scheme, which you'll find on the list; I'll share it in the chat in a minute, and there you can find out how we're supporting a community of early starters in this area.

Louise, do you want to take over and put your slides up?

Sure. Do I just need to share my screen? I can't see where they are.

No, here you go, I've got you. Can I just check before we start, Louise, because sometimes these things work in different ways: can you move the slides forward? Yes, you've got control. I shall relax now and hand over to you. Thank you ever so much for giving up your time this afternoon, and take it away.

No problem. Thank you very much for inviting me to speak. It's really lovely to be here and share my ideas, and I'll be interested to see if there are any questions and to hear what others are doing in this area as well.

As Jim said, I'm reasonably new to the University of Leeds. I joined back in January in this really exciting role of Manager for Innovation and Evaluation, which basically lets me specialise, within the wider DES team, on how we're evaluating and then using that to develop an innovation programme. I didn't want this to turn into death by PowerPoint, so I'll just take you through a few key areas of what we're doing and how we're doing it, so there's some time for discussion. I'll give a little overview of the Digital Education Service at Leeds, talk about how we define evaluation and why we feel we need to evaluate, and why that definition is actually moving beyond the more straightforward one we tend to think of. I also want to touch on the opportunities that come up as a result of a more intentional and strategic evaluation plan, and then take a quick look at our plan at Leeds: what my team are doing, the underpinning principles we're using in our programme of work, and our team structure. Then I'll leave a little time for questions at the end.

So, in terms of DES at Leeds: the Digital Education Service at Leeds is actually pretty large compared to some of the other universities I've spoken to.
It's grown very quickly since COVID; not long before COVID there was a team of five, and we're now over a hundred and still growing. We support faculties throughout the university as well as a growing portfolio of fully online masters and professional programmes, and the external partnership programmes and projects that we have.

Our teams break down like this. There's a fully online learning team of learning designers, and student support for the online masters programmes. We have a blended learning team that serves our different faculties, and a specialist professional learning team that looks after some of the CPD programmes we develop around our degree courses. There's a systems team that supports online as well as campus learners, and a production and creative team, which is really growing at the moment as we develop skills in immersive and VR resources and courses. We have an engagement and comms team, and my evaluation and innovation team, which straddles work across all of the teams, supporting them in evaluating. We've also recently launched a new facility called Helix, with a team around it: it's a space where we can hold evaluation events and also trial and pilot things such as VR. There are studios and all sorts there to encourage innovation across the university, and we encourage academics and students alike to come and use it. And then we've got an operations team that looks after all of the other teams, so they've got their work cut out.

So what is evaluation for us at Leeds? We see it as an understanding of our user base and how they interact with our programmes and services. We see it as an opportunity for continuous learning and improvement, as well as innovation and increasing user success, and I think that's probably the one most people resonate with: evaluating programmes, evaluating courses, evaluating specific tools that you're using. We also use evaluation as a mechanism for identifying what went right, so that we can look for opportunities to use those approaches and tools again; we identify, define and celebrate stories where we've seen real impact or real success. We also see evaluation as understanding activity in the wider field and sharing best practice: having an awareness of what best practice looks like for us, of what else is going on in the sector and of what other people are doing, so that we can share and improve that way. And evaluation is also, I think if I remember correctly, understanding that the space we work in is continually changing. The HE sector and the landscape are continually changing, so we need to keep evaluating the space in which we're creating our resources and courses, and keep our understanding of our user profiles up to date. That has to be a continual exercise, not something we just do at the end of a certain timeframe, like the end of a year or the end of a course.

So why do we need to evaluate? First, external factors: the HE landscape and our user base are shifting, and the HE market space is shifting. On those two points, more than ever before people are using phrases like customer experience and consumer experience, because students are paying a lot of money to do their courses and student expectations are high.
So we've got a bit of a shift in terms of how we're treating our students, and we're seeing it much more as a learner experience from start to end, rather than silos of modules or silos of tools. It's about how we package the whole learner experience and how we resource that, and that needs to be looked at and evaluated with an understanding of those expectations, the space we're in and how competitive it's become. Knowledge is power: understanding what other people are doing, and what organisations moving into the HE space that aren't necessarily institutions are doing, so that we understand how we compare and how we can continually improve. The HE space is also continually looking to fully online programmes and overlapping with the professional learning space, which is a complex and already firmly established space, so we need to be able to evaluate through that lens and understand those spaces too, and factor all of that into how we create our services.

Then there are internal factors, which are probably something that will resonate with everybody here. There's the need to cultivate a detailed knowledge and understanding of our user base and our product or course portfolio to enable organic innovation. There's a lot of talk about the need to integrate certain technologies, especially digital technologies and AI, but we want to make sure we adopt them as a result of proper evaluation, with a proper understanding of where they can be used best, and more organically. There's also identifying and addressing misconceptions. I quite often see evaluations that don't really come in until the end of a programme, so evaluation hasn't taken place earlier; quite often a misconception becomes established that could have been identified, worked with and resolved earlier on if we'd made our evaluation process more circular and embedded it at the start. There's using knowledge and insight to review existing courses and resources and to inform the design of future ones, so the insights we gather can feed into general course and tool decision making. And there's influencing key strategic decision making where multiple stakeholders are involved. In universities, as I'm sure you know, you get involved in projects where multiple stakeholders have ideas about what they think their students need, or what they feel a faculty needs, while another faculty might need something different. I often find that using evaluation at all stages of a project really helps separate what we think we need from what we actually need, so it's a really useful tool in that regard. And I can't remember what that bottom bullet was that I can't see, so apologies for that.

So the hope here at Leeds is that evaluation activities will offer insights and guidance to the relevant functions, as needed, at all points of the content creation cycle. It's not something we just do at the end; it's something we embed at the start, and it becomes part of our BAU as we collaborate with the different functions, whether that's the course designers, the teams launching on the platforms, or supporting the deliverers, whether those are academics or the digital education support staff we have within DES.
To do this, evaluating our courses, our resources and our tools against user needs is central, and it needs to happen early on, so that we understand our user base and their changing needs, can meet them where they are, and can factor those needs into our solutions, as well as identifying opportunities to innovate at an early stage. It's about separating the needs from the wants, especially where budgets are tight, and about making sure we use digital technology not just for its own sake but where it can make a real, meaningful difference, so that it's never just an afterthought at the end.

So that's the why. How does our evaluation and innovation team plan to do this? Collaboration, really; partnership is at the centre of how we're working. We feed into evaluation across all the different functions: we help the instructional design team evaluate their courses from that perspective; we help the systems team assess how systems are being used, to inform decisions about which systems to procure or renew; and we help our other teams, around blended learning in the faculties, to evaluate how support is being used and how the tools and course evaluations are working, to support the academics as well as the learners.

So we work in partnership, but we've established a few key principles and frameworks to keep us grounded as a team and to ensure we take a consistent approach. Our activities include developing a thorough understanding of all our categories of users, whether that's campus, fully online, professional course users or systems users, using quantitative and qualitative methods. We already have, and are continually developing, data insight dashboards and benchmark stats to inform a range of strategic decisions at all stages of the product experience lifespan. People can come to them retrospectively, or when they're starting a new project and want data and insights on something specific to help inform design; and we also use them at the end, so that alongside benchmark and historical data we can get the actuals on products and see how they fare against other projects. We then update the dashboards continually, so they keep growing with our products and as our users change. We also offer design-based market insights and recommendations to help inform solution design. This is probably slightly outside the parameters of evaluation in the usual sense, but when we look at solution design we obviously look at our users, and we also look at what other people working in this area have developed: what other courses exist, what design values they have, and whether there's anything we can learn and bring in. My team does this kind of work early on, before our learning design team finalise their design, so that we can give them a summary and an overview; they can take it or leave it ultimately, but it makes sure they're informed. And we use our baseline data and knowledge to develop and implement an innovation plan, which at the moment is very much a work in progress.
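Coming back to the actual-versus-benchmark comparison mentioned above, here is a minimal sketch of the kind of check such a dashboard might run. The metric names, the figures and the 5% threshold are illustrative assumptions, not the actual Leeds dashboard schema.

```python
import pandas as pd

# Hypothetical end-of-run metrics for one course (illustrative figures only).
actual = pd.Series({
    "completion_rate": 0.72,
    "avg_engagement_hours": 18.5,
    "satisfaction_score": 4.0,
})

# Hypothetical baseline/benchmark figures built up from previous comparable runs.
benchmark = pd.Series({
    "completion_rate": 0.68,
    "avg_engagement_hours": 21.0,
    "satisfaction_score": 4.3,
})

comparison = pd.DataFrame({"actual": actual, "benchmark": benchmark})
comparison["difference"] = comparison["actual"] - comparison["benchmark"]
# Flag metrics falling more than 5% below benchmark as candidates for a closer qualitative look.
comparison["needs_review"] = comparison["actual"] < comparison["benchmark"] * 0.95

print(comparison)
```

On these made-up numbers, engagement hours and the satisfaction score fall below the threshold and get flagged for a closer look, while the completion rate does not.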
So, alongside evaluating specific programmes, like the online masters that we evaluate each year, we're starting to embed continual evaluation programme events, where we give our learners the opportunity to attend feedback sessions. That way we're continually doing qualitative and quantitative analysis that feeds into our data dashboards, so we can start identifying patterns that feed into meaningful innovation: where do we put our money, and for which specific digital technologies would it be most meaningful and most impactful to try X, Y or Z? We also support and facilitate ownership of these activities within the different work functions. My team is quite small at the moment, so we obviously can't do all of this for everyone, but we're trying to empower and facilitate leads within other functions, training them up to some degree and supporting them to do some of this activity themselves, and then we support at the coding end; I've got a team of coders who can work with the data if people want to start gathering more on a routine basis themselves.

So, in a nutshell, our mission statement is that we hope to encourage, facilitate, champion and support the development of a new way of working which embeds evaluation at the very start of the workflow, thereby prompting activities that cultivate a thorough understanding and appraisal of our users, inform our design, and create a clear benchmark against which a deeper and more insightful evaluation process can be carried out, and from which opportunities to innovate will hopefully surface organically. Where I talk about a clear benchmark against which a deeper and more insightful evaluation process can be carried out, I think this is really important. When I first joined, we had access to stats, lots of stats on completion rates, but without benchmark stats, baseline stats and a real understanding of our users, what you can deduce from that, and how meaningful it is, is really limited. Putting that groundwork in, and continually working on that baseline in the background, actually enables you to draw more meaning and more insight from the data you get when you come to do an end-of-programme evaluation.

As for the values that underpin our approach: qualitative and quantitative, and that's really important. Holistic, so evaluating from a pedagogical perspective is central to everything we do, but we need to take into consideration all the elements that might be affecting, driving, motivating or demotivating our learners and their ability to progress. Neutral, so making sure we come to all projects without any assumptions. That's sometimes easier said than done, but we have to check ourselves to make sure we always stay open. And I also find that, even though we're the Digital Education Service and our focus is on digital, trying to leave that word out is really important so that we're not leading people, especially in focus groups where we're trying to get feedback.
Dropping the word digital helps make sure we're not leading people down a focus on the digital side of things, and will hopefully give us a more organic picture of what the ideal scenario looks like; then we can work out what the best digital solutions are, rather than trying to shoehorn digital in where it might actually only need a light touch. Our work is also learner experience, learner gap and user analysis led, and that's embedded in all our qualitative activities, which are designed to give us data that allows our data team to carry out diagnostic, descriptive, prescriptive and predictive analytics. We've developed a data dimensions framework, with the different categories of data we want about our users across the board, so that we're creating the same types of data for all of our programmes. This is for our university: we don't use it for external partnerships, because that's a slightly different market and a slightly different relationship, but for our campus and online degree learners we use the same data dimensions, because that then allows us to do analysis across programmes as well. Cross-functional working, as I've said: it's always in partnership with the different functions, and we seek to empower and encourage everybody to take part in this type of activity and make it more of a culture and a mindset within the service and within the university. And of course an experimental mindset as well: being open, and making sure we can be experimental with our innovations once we have that foundation of knowledge from our data.

Very quickly, in terms of how this translates into a programme of work for us: I've done a review of my team's processes to make sure we're better organised to facilitate this type of work. We're limited in capacity, so we have to identify priority areas each year at the moment. We're juggling business-as-usual external partner activity, for which we're defining our external evaluation offer, as well as supporting our internal teams and the university faculties. We're prioritising supporting our fully online masters programmes and evaluating those at the end of each year, and we've been reorganising our data dashboards to enable us to scale up; it's the only way to do it. Baseline and benchmark activities are taking place, as are workshops, consultations and advising. We're also supporting scholarship and innovation activity, to make sure that where we've got pockets of evaluation going on with different teams, which we do, with various faculties across the university doing data work and skills in other functions of the team able to build dashboards, we harness that and bring it all into a central dashboard. Otherwise we find we have all this useful information, with dashboards sitting in different places, but unless we have them all together they get missed, and we can't do the comparisons we need to across the board. So it's really important that we close the loop on all the activity that's going on, so that it can be harnessed.

My team is organised into two main teams. I've got an analytics and quantitative team, who are my data wizards; they do a lot of the data analysis and the coding. And then I've got a qualitative team, who do the qualitative evaluations and also lead on the innovation activities. But as I've said, we're quite small at the moment.
So we do plan to grow, but for the moment we're working with a small team, and we're currently recruiting and training up innovation and analytics champions within different functions and different faculties, where people have a real interest and passion for it. We then help support them and give them the skills and background they need to start doing this activity themselves.

Jim said he thought it would be useful for me to give a few tips for meaningful evaluation, so, drawing on what I've discussed, these are my key tips. Don't just do it at the end; that can be very limiting. Think carefully about the data categories you want to capture and why: people tend to come to us with questions they want answering, and you need to step back from that and ask what data you need to answer the question holistically, and what question it might then lead on to, so that all the data points you're capturing will really answer all the questions you've got. This one's really important: make sure you have baseline and benchmark data to work with. You won't have that to start with, and that's okay, but keep it in mind, start to establish what it might be, and gather it slowly. Never assume anything; always be neutral. There's a repeat here, apologies: establish key data categories to use across the board. Think qualitative and quantitative; when I first joined we had a bit of an imbalance there and most of our analysis was quantitative, so we're really ramping up our qualitative activities. Where possible, embed evaluation as part of your standard processes, so that it's part of your content creation, culture and mindset. And work in partnership with wider teams and academics; silos don't work.

So that's my quick overview. Thank you for listening, and I'd be really interested to hear if anyone has questions, comments or feedback on how they do things at their universities. Thank you.
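As a footnote to the tip about establishing key data categories to use across the board, here is a minimal sketch of what a shared set of data dimensions could look like if expressed in code, so that every programme captures the same fields and cross-programme comparison stays possible. The field names are illustrative assumptions, not the actual Leeds data dimensions framework.

```python
from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class LearnerRecord:
    """One record captured identically for every programme (illustrative fields only)."""
    programme: str                      # e.g. "MSc Example (online)", "BA Example (campus)"
    delivery_mode: str                  # "campus", "online" or "blended"
    cohort_year: int
    completion_rate: Optional[float] = None     # proportion of assessed units completed
    engagement_hours: Optional[float] = None    # time spent in learning activities
    satisfaction_score: Optional[float] = None  # e.g. 1-5 survey score
    qualitative_themes: List[str] = field(default_factory=list)  # coded focus-group themes

# Because the dimensions are shared, records gathered by different teams can be
# pooled into one dashboard and compared across programmes.
records = [
    LearnerRecord("MSc Example (online)", "online", 2024, completion_rate=0.72,
                  qualitative_themes=["workload", "peer interaction"]),
    LearnerRecord("BA Example (campus)", "campus", 2024, completion_rate=0.81),
]
print([asdict(r) for r in records])
```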