EDINA's work with learning technologies helps to develop skilled, data-literate students who can change our world for the better. Teachers and students can develop and share coding skills with our Noteable Jupyter notebook service. Our Digimap service delivers high-quality mapping data for all stages of education.

So, that sounds very inspiring indeed. If you are just coming in, please be welcome, find a seat and make yourselves at home. First up, we are going to hear from Sarah and Neil, two of our colleagues, and the title of the talk is Scaling Assessment with Adaptive Comparative Judgement — or, as I think it's entitled in the programme, From a Thousand Learners to a Thousand Markers. So with that, please welcome Sarah and Neil.

Hi everyone, thanks for coming. We're here today to talk about a project that we've been doing at Glasgow University. Unfortunately, Jeremy Singer, who's our lead academic, can't be with us today, so we're going to do our best to bring out our inner Jeremy. We're going to talk about adaptive comparative judgement. This is a method of ranking artefacts — student essays, maybe — by making comparative judgements about what is better and what is worse, what you prefer and what you don't, rather than making absolute judgements of "this is an A, this is an E". So it's intuitively plausible as a marker; it's an intuitively plausible account of how we actually make judgements. And it removes the pretence that we're experts with objective standards when we're marking. Controversial. What this system of grading does is produce a fully ranked set of scripts from best to worst, and it allows for a separate decision about where you put the grade boundaries. So you don't need to think, while you're marking, "this is an A, this is a first, this is a 2:1, this is a 60, this is a 61". You just think: better, worse; better, worse.
When you get to the end of the process, you can then decide where to put your grade boundaries, and whether you're marking to a curve or to fixed standards. So it accommodates all of these different ways of marking, and it frees you up just to make judgements — academic judgements, aesthetic judgements — about what's better and what's worse. And from my point of view as a marker, it's a lot easier to mark this way.

It doesn't want to give me the next slide. It's not letting me go to the next slide. Help. See, as soon as he walks onto the stage, it does it. You've gone for the one after now, Neil. Next one, please. That one — okay.

What it also does is let you judge using a single implicit criterion — again, what's better or what's worse — rather than trying to use complex, explicit sets of ILOs. It's much easier. And I always used to think it could only be used for questions that have subjectively different answers, but actually you can use it for questions that have a single correct answer as well. All of this is scene setting; it will become a lot clearer once Neil starts talking, I hope.

Okay, so, distinctive benefits before Neil starts. It scales: you can use it for tiny sets of submissions, like 20, or up to 10,000, with potential for use in MOOCs, which I think is fantastic — and indeed Jeremy's been using it in his MOOC. Its naturalness is compelling: it is intuitive and plausible. It can be used with one marker or with sets of markers, so you can get your inter- and intra-rater reliability. It can be used for peer review, which is how we've been using it: you crowdsource, you get the students to do all of the judgements, and then as an academic you come in and award the marks, so it saves a lot of time. And it can be used to mark things that are very, very different from each other, because you're just judging better or worse rather than against a set of ILOs.
You can also put in exemplars. So if you want to see where your grade boundaries are, you put in an exemplar for the A, B, C, D and E; anything ranked at or above the A exemplar gets the A, and so forth. And I think I'm handing over to you now. Oh — okay, I'll keep talking for now.

So, we've used the software at Glasgow. We're using it in the Haskell MOOC. We've used it to judge our conference submissions, because when conference submissions come in they're very, very different — but we were able to rank them by best fit to the conference and then decide which ones to put in each of the streams. There was a major experiment done on adaptive comparative judgement, which Pollitt reports in 2012. What Pollitt found was that the expert markers were highly sceptical initially of using this process, but by the end of it they judged that it was a better way of marking and a faster way of marking. Faster is always good — as academics, if we can do our marking faster, that's always great — but we also want to know that we're keeping our academic standards. And the study found that it kept the academic standards; indeed, I think it outperforms conventional marking.

This is our implementation. Pollitt used his own implementation, or one that was at Cambridge; we've done our own very lightweight implementation. It's a simple LTI application, so it doesn't have to deal with user management — that's all done by the LMS. We use it with Moodle, or have used it with Moodle a bit in experiments. For the study we're reporting on here, we used it with FutureLearn, which allows you to launch external LTI tools. Our tool lets the submissions be text, a PDF, a YouTube URL, a picture, things like that. In this instance, it was source code: the students just paste in text, and a standard source-code formatter pretty-prints it with syntax colouring to make it easier to read.
The software also allows staff to put in a set of things for students to review, making it a review-only exercise — that's our "learning to judge" from the student's viewpoint. Like other tools — Moodle Workshop, which many of you will be familiar with, or Aropä, a similar tool we use at Glasgow University and which I think a few other universities use — it's got a phase for submission and then a phase for review. We are thinking a bit about blurring that boundary in future.

So, the process — the algorithm. It's rounds of sorting. In each round, the people doing the marking or grading just look at two things at a time and decide which is better. Each artefact, each piece of work, will be judged at least once, and probably not much more than once, depending on random factors about who turns up and when people are looking at things. These rounds are grouped into three slightly different phases, which I'll talk more about in a minute; they have slightly different scoring algorithms to improve the quality of the sorting. And this algorithm is a bit different from Pollitt's. Pollitt's algorithm is problematic because there's a typo in his paper and you can't quite work out what the algorithm is, so I developed my own, based more on his description than his maths, and used a simulation to refine it. I'll be showing you some outputs from that simulation — this is partly to help you think about it, but also to help you understand the next few slides, which show output from my simulation.

So you start off with your artefacts, your pieces of student work — here, six examples. They're in a random order, numbered so you can see what they are; this would just be the order in which the students had submitted, probably. To make things easier to view in my simulation, I colour-coded them: darker means it should sort more to the left, lighter more to the right.
In the simulation, these colours are effectively assigned as the perfect score, and then a random number is added or subtracted so that it has the slight error of reality. In each round, each pair of things is compared; the one judged better is picked and given a point. Then there's a first sort: all the ones that got one point are on the right, all the ones that got zero points are on the left. On to the next round, and again the one judged better gets an extra point. Now some have two points and go further right, some have one point and sit in the middle, some have zero points. This sort of sorting, over a long, long period, would get it right — but on its own it's not quite enough. So there's a scoring algorithm layered on top: the first two or three rounds are done in that simple way, but later rounds start weighting the comparisons depending on how far apart the items are in the sort. In the second phase, it's weighted to allow things to move quite fast, because if, randomly, a few really good or really bad pieces of work had been paired together at the very beginning, in the first random order, one of them could have ended up very out of place. And then later it moves to a much more refined algorithm for sorting things that are close together.

So here you're seeing the first four rounds of a simulation, following one particular item — number 43, highlighted in orange — as it sorts. The red highlights are the ones it has been compared against, and its score at each level relates to where those were in the previous sort level. And with two more rounds, this is now using the more refined algorithm. As you can see, the gradation goes very smoothly from dark to light across the set, showing that this is actually quite an effective sorting mechanism. After about 18 rounds of sorting, it's near enough perfect. And this scales: it would work however many artefacts were in there, because it's comparing each item with a sample and using their positions.
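As a rough illustration of the round-based sorting described above, here is a minimal Python sketch. This is not the actual tool's algorithm — it omits the phased distance weighting entirely — and all the specifics (item count, round count, the noise model where a judge picks the wrong winner with probability `noise`, each item's index standing in for its true quality) are assumptions made for the demo.

```python
import random

def simulate_acj(n_items=20, n_rounds=18, noise=0.2, seed=42):
    """Toy round-based comparative judgement: items are paired up in
    their current order, the judged-better item of each pair earns a
    point, and items are re-sorted by cumulative score each round."""
    rng = random.Random(seed)
    scores = {i: 0 for i in range(n_items)}   # item id doubles as true quality
    order = list(range(n_items))
    rng.shuffle(order)                        # random submission order
    for _ in range(n_rounds):
        for a, b in zip(order[::2], order[1::2]):
            wrong = rng.random() < noise      # a noisy human judgement
            better = min(a, b) if wrong else max(a, b)
            scores[better] += 1
        # re-sort by score so the next round compares near-neighbours
        order.sort(key=lambda i: scores[i])
    return order, scores

order, scores = simulate_acj()
print(order)   # final ranking, judged-worst first, under noisy judging
```

Even this naive version tolerates noisy judgements, because points accumulate over many rounds rather than any single comparison being decisive — which is the property the speakers contrast with a standard binary sort. The real tool refines this further by weighting later comparisons according to how far apart the items sit in the current sort.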
Here it is again — this is just the middle third of a larger set, so all you can see is the colour, but it's quite smooth towards the bottom, having been quite random towards the top. That's with 600. I've experimented with up to 1,000 on the little server I was experimenting on; at that point it's beginning to run a bit slowly.

You can try this out: if you have your mobile device handy, you can log into our demo site. This is my first ever trial — I just keep it running to show off. I put up some pictures for people to sort, and it did show up some interesting things. They're pictures of wildlife: flowers, insects, birds. They're not very good pictures, just ones I found on my camera. But it was noticeable how some people sort this type of artefact, which has different categories in it. Some people would rank the robin very highly, even though it's not a great picture, because it's a nice bird; other people would put the spider right down, even though it's quite a good picture. So I did notice there are at least two aspects to the way people judge, and that might mean you need a bit more guidance. But that's these pictures — with academic work, it's probably going to be a bit better.

So, our case study: functional programming in Haskell. This is an interesting course in that it runs as the first half of an honours module, but also as a MOOC. In this particular run, about 1,000 people were using it from around the world on FutureLearn, alongside a class of about 80 honours students at the University of Glasgow. As a programming language, Haskell was developed at Glasgow, so that's a good thing for us. But from our students' viewpoint it's a slight paradigm shift in their way of programming, so it's quite a new language to them — previously they've programmed in very conventional languages like Python and Java.
Then they do Haskell, which is a functional programming language with quite a different style. In honours, they were given a problem specification to implement — something that could be done in around a page of code — and some guidelines on how to judge: criteria like readability and actually solving the problem. Then, in the marking phase, on this grading page, they looked at their peers' solutions and compared them — so in this instance we're using it as a peer tool. Finally, at the end, they see their own ranking, but only as the quartile it fell into. We don't want to say to people "you are the worst", but we do say "you are in the bottom group" — and I'm sure they will all have known this from having seen the others. They were also given a sample solution. So this is the question — write a spellbook generator, the sort of thing Haskell is a good language for — and they were given these instructions about how to write good-quality code. And here's a sample solution; you can see it's quite a concise language. Now some student comments — I'll hand back to Sarah.

So we did some evaluation — of course we did; we like to evaluate. We sent out an online survey to all of the students, including the ones from the MOOC and our honours students, and we asked them a range of questions: what they thought about the ACJ software — because they're computing students, we wanted that sort of evaluation from them — and also what they thought about the process. These are students who were probably fairly new to doing peer review anyway, and it was peer review using ACJ. What we got was a lot of students telling us that they liked the way they got to see lots and lots of different solutions. Typically in peer review you'd see two or three, maybe; with this, they could see a lot.
And the ones who were really invested could just do more — we weren't limiting them. It can be quite addictive: when we first used it for the conference, Kathy Boval and I just got really, really addicted to doing the reviewing, so we did masses, because we loved seeing the photos and the abstracts and all of that. It was great. Anyway, here's another one: they said they thought doing this helped them to think differently, because they were having to think about how to evaluate their peers' code. This is exactly what we wanted. We wanted to show students early on what their position was in the class without having a leaderboard, and they could see for themselves how well or how poorly they were doing. Instant feedback, early feedback — really important. And I love this one: "I'd like to thank the course educators." Well, I think that's thanks to Jeremy, not to us, but we'll take it. And this one I thought was really interesting: as time went on, they could see that they got quicker at doing it. So yes, the process speeds up.

One more thing: some interesting statistics. We can get some interesting reports out of it, if we wanted to write them. We could set the software up to say who is the most deviant marker. Sometimes you have a marker who's a bit problematic: everybody else is judging something to be good, and they're judging it to be really bad. You might want to look at that marker and take them out — or you might want to ask, what is this marker seeing that everybody else is missing? You can get that sort of information out of it. You can also see which submission was the most divisive — the Marmite submission, the one that some markers thought was brilliant and some markers didn't. Again: who is missing what?
Because, you know, I know myself that sometimes I've marked something and I think it's maybe round about a C, and then somebody else comes in and says, actually, do you know what, there's something really novel, really interesting here — I would give that an A. These are the conversations that we have as markers, and these are conversations that we can see in the software. We can also see how converged the judgements are: does everybody agree that one is the best and that one is the worst, or is there a bit of controversy? So it's not the case that we're just given a ranking and have to accept it — we can interrogate the data.

So, where next? Well, the software is still in development; it's still a pilot tool. We've been piloting it successfully for how many years now — three, four? Four years? In small things. It's living software. Neil is the developer, and Neil's a fantastic developer to work with because he understands what academics want, and he will work with academics to get a piece of software that works for them. If there's a restriction in the software, very often Neil will work to get around it. Obviously there are restrictions you just can't remove — you can't do magic, he can't give you a unicorn — but he has developed this software in collaboration with academics, working with Jeremy, who is a fantastic academic, very engaged in his teaching and very engaging when he teaches. That's really, really useful, because you get software that is suited to academics but is technologically robust as well. And at this stage, if you think it would be useful in your teaching, I think Neil is putting out a call for collaboration. The software, of course, is open source — it's on GitHub, and anybody can go and pick it up. But what we'd like is for people who pick it up to work with us, because we're doing further research on this. We're still using it.
We're trying to extend our pilots, and what we would really like is to work across the academic community — to work with you — on a proper, robust study. So: scholarship, research, or people who just think "yes, I want this in my teaching, I don't want to do any scholarship or research, please just let me use it" — all of these are fine. And I think that's it.

Fantastic. There is a roving mic, so if you'd like to raise your hand and ask a question — and we've had a few questions online as well. One of them was around providing feedback to students. The question was: if a submission is ranked at or near the bottom of the rankings, how does the process provide feedback so that the students know what needs improving, or why they got a low score? We'll start with that, but if you have a question in the room, we'll come to you next.

So, there is no traditional feedback given — no "you should have done this", "this bit was poor". I've been a student recently enough to remember that that sort of feedback wasn't very helpful. What there is, is what David Nicol would consider to be internalising of feedback: the students are seeing a range of work, and there's very good learning potential there. It probably needs to be studied more, but I think that is potentially a much more useful form of feedback. That's an interesting solution.

If you do have a question, can you just let us know who you are and where you're from? Thank you. Okay — Steve Rowett from UCL. It's a bit like a binary sort algorithm, I guess. Yes. And by the end of that, a student is comparing two things that are probably very similar — they're near the bottom end, they're near the top end, they're in the middle. Does that make it very difficult, because it's quite hard comparing two things that are similar?

It probably does make the comparison harder at the end. Ideally we'd get students going through from the beginning to the end, but you can't unless you've somehow got them tied into it.
And since we're doing this fairly openly — they've logged in and they do it in their own time — we can't do that. So that's something worth looking at and thinking about how to improve from that viewpoint. But yes, when doing the conference judging, which I took part in, things do get slightly harder at the end; on the other hand, because you've been through the whole process, you get quick, you get good at it. If you're just jumping in at the end, yes, I can see there's an issue. But at that point you could really just say, well, we've got the sort, there's no more judging to be done. And if students needed it — if it was a peer exercise — we've talked about starting it again, for students to go through the process. So we could have multiple sets, and then we could rank all of those against each other. Wow — what an evaluation. One more thing: if a student is seeing two items very close together, the next pair they see will also be close together, but they won't be close to the first two — the algorithm gives them quite a different pair.

Thank you. I think we've got time for two more questions, so we'll take one online, and then if there are any more in the room, just hold your hand up and we'll come to you for the last question. There were a couple of questions around algorithms, and there was also a question around ease of use and whether it's openly licensed. So maybe let's focus on ease of use and open licensing.

It's incredibly easy to use. I have been using Moodle Workshop for many, many years; I understand its affordances, and it's not the easiest. This is a lot easier to set up because there aren't as many settings in it. From the point of view of a member of staff, setting up is very easy. From the point of view of a student, it's really, really easy: they get two things on screen and they just push either left or right. That's it. In terms of licensing, it's openly licensed.
There are a few bits of other open source in there, under various different licences, but they're all quite liberal open-source licences. Great, thank you. So, time for our last question — is there anyone in the room, or have you all posted them online? Otherwise, I think we'll finish with the questions around the algorithm. We have a couple: one is, how sensitive is the algorithm? One is, is the algorithm just rules? And the last one: is it ethical to use an algorithm without checking the results — or are you checking the results? Could you expand a little on that? So if we round up on those, is that all right?

The algorithm is just rules, yes — that's what the algorithm is. It says: given this, we will award this number, and use that in a sort. As for checking — well, the simulation shows it works. The simulation shows it very convincingly, even with a lot of noise in it, which is interesting: that's where this algorithm is different from a standard computing binary sort, because it deals with noise, it deals with some of the judgements being different from others.

Fantastic, thank you very much. And we also want to give a big shout-out if you have been watching this online — I think there are some colleagues of Sarah and Neil who might be watching on the live stream. If you have been joining us online, a very warm welcome to you as well. If you could just put your hands together for our presenters. Thank you. Fantastic.

Next up is Matt Kornock, who is joining us from the National STEM Learning Centre, where Matt is the online CPD coordinator. Our second session in this half hour is Enabling Professional Development by Letting Go of Pedagogical Paradigms. Please put your hands together for Matt — a warm welcome.

Thank you very much. My role at the National STEM Learning Centre is online CPD coordinator, which means I'm responsible for the learning design of the online programme, and I'm also the programme manager.
So I spend all my day thinking about how we can improve learning design for our online participants. Our participants are teachers — teachers in schools and colleges around the UK, and of course our international audience, because all of our online courses are available on the FutureLearn platform. Our programme looks like this: we have over 20 courses. Most of them are free — there are a couple of paid ones for small cohorts — and they've been developed mainly over the last two years. We had five courses that we started with about five years ago; then I joined two and a half years ago, and we've been rapidly scaling up the number of online courses we have, so that we can create professional development pathways. That, I think, is an incredibly important part of any online programme — to have a pathway, so that the professional development isn't just a one-hit wonder. There's a journey people can go through, and we can support them through it.

So, the big numbers: last year we had 50 course instances, 57,000 enrolments, and 109,000 hours of professional development delivered. That is actually quite an achievement when it's myself and my colleague Karen who are solely responsible for the online programme. We bring in a lot of people to run the courses and to write the courses, but it's just the two of us who keep the lights on.

This session is a reflective session — a reflection on some of the ideas that have influenced how I view online learning design for MOOCs. Actually, I'm going to stop referring to MOOCs; I'm just going to say open online courses, because I think the scale of a MOOC varies considerably. We could be looking at 100 people, we could be looking at 5,000 people, and that's the sort of range we get in our courses as well.
These ideas that have influenced me characterise the type of learning that's being designed for, and typify how pedagogical approaches for face-to-face and online might not be able to be labelled in the same way in the open online course arena. I've been grappling with this dilemma: the learning design is not necessarily the learning experience. It's an obvious point, and yet, standing here today, it has taken me this long to come to that conclusion. But this issue has started to pose very existential questions for me as a learning technologist about the role of learning design, its theoretical basis, and learning-outcomes-based activity. As I explored the literature, particularly around open online courses, I became increasingly sceptical about how particular pedagogies were being evidenced in open online education.

So let's have a look at some of the theories that have influenced me, in particular around teacher professional development. Ana Lorela's recent paper, in 2016, on using MOOCs for teacher professional development, talks about co-learning, and she cites a paper from Avalos in 2011: co-learning is about networking and interchanges among schools, strengthened through peer coaching, supportive collaboration and joint projects. Teachers naturally talk to each other — so why aren't those conversations about education? Now, within the STEM Learning ecosphere, we have a network of partners across the country who do this on a face-to-face basis. One of the big things for me at the moment is to make sure that online and face-to-face are blended together, really drawing on the value of that; that's where the networking and interchanges among schools are taking place. But is that networking, are those interchanges, actually taking place on online courses as well? We'll have a look at what might be happening there.
Lorela relates the idea of co-learning to communities of practice through MOOCs that have curated digital resources and orchestrated collaboration. That idea of orchestrating collaboration — people working together, as in an orchestra, to come up with a great output — is, I think, incredibly important and valuable. Monika Louws and her colleagues, in 2017, looked at a study in the Netherlands — not specifically about online courses — into teachers' professional development choices, how they are self-directed in their learning, and the range of aims those teachers are trying to achieve. The same applies to HE lecturers as well: when I'm talking about teachers, you can also think about the colleagues you're working with if you're in a university. Whether they're newly qualified, mid-career or very, very experienced, the kinds of things they're looking for are activities that help them to reflect on their practice, to keep up to date, to experiment; technical things like managing the classroom and getting to grips with the subject matter they're teaching; and student care. These are all traits of professional learning.

In the teaching sector in the UK, there is DfE (Department for Education) guidance on what the professional learning standards should be, and key to that is sustained and embedded practice — which is another reason why we can't just have one-hit-wonder workshops. There has to be a journey, a longer period of development. And online is perfectly situated for that, because you get courses where you can try something out, put it into your practice that week, get a bit of feedback on it from your mentor online, do a bit of reflection, try something out the next week, and carry that on across a whole term. You sustain that journey by linking these courses together.
So these are the traits of professional learning — but how are professionals actually supported in adopting them? Our approach at STEM Learning is to have structured points of interaction in all of our online courses. The sharing of ideas is key to the learning activities. All of our content is evidence-based, whether through academic research or through practice itself — the idea of action research. There's ongoing reflection — we have reflection grids each week for our learners to complete — and action planning to sustain the impact: what are they going to take away from our course, and what are they going to do next week, next month and next year?

And this is what we do on FutureLearn. We have that weekly structure, and we break it down into sequences of steps — pages of the course that each meet mini learning outcomes, if you will. We have really high-quality video: we go out to real schools, film their real classrooms, interview teachers, talk about what they're doing. We use quizzes not as a mechanism for assessment but as a mechanism for delivering content, to challenge our learners' ways of thinking. We have expert Q&As — asynchronous Q&As where learners post their questions and we get a response. We have sharing through Padlet for anything that can't be summarised in a short 500-character text box on FutureLearn — things like lesson plans, or videos; we have some excellent videos of teachers and technicians demonstrating what they're doing with their classes and their practical work in science, and our own high-quality demonstrations from the National STEM Learning Centre. And fundamental to all of this, of course, is the social learning pedagogy: those discussions, those constructed activities, those orchestrated collaborations. But what we're trying to address here is a range of learning needs. This is just from one course — these are the different learning needs those teachers have.
They want to know more about practical work in science; they want to meet the assessment criteria, and to help their students meet the assessment criteria; they want to know how to evaluate their own practice; they want to relate their content to the real world, so that students see the relevance of what they're being taught. And some will be non-specialists — an increasing proportion of the teaching workforce are teaching subjects they don't have a degree in; it's not the same in academia — and that is a key area for us to support. It all wraps up with confidence, and confidence is the tricky one. We always aim for our courses to build confidence in our participants, but how do we measure that? How do we make that a learning outcome we can actually measure?

So I think there are some contradictions in open online course design, along the lines of trying to meet everyone's needs while also having a very clear structure to a sequence of activity. All of our courses have a weekly structure; they're linear in that respect. But we have designed them specifically so that — if you know how, and I think that's the question I'm going to come to — you can choose the different activities that meet your own development needs. But how do we do that as a design process? That's not a design process I know well. We've got ideas of personalisation of our own, where you can have these crazy maps and find your way through. But there's still a big challenge, one raised by Kate Lindsay earlier in this session, about supporting our learners to be better learners, essentially. We've also got individual timelines: if you join before the course starts, you're more likely to complete, because if you join later you might not feel part of the cohort that's already had the discussions. So how do we account for the openness of that timeline, and how do we still build in that socialisation?
How do we maintain openness of access whilst also making sure that all of our learners have the self-efficacy to be able to learn with us? So what do we normally see in all the literature? Retention graphs. And these are four retention graphs from a selection of courses that we run. And we might say, okay, we know there's a dip in week one. Oh, that course is slightly different. The bottom course there, the one I've just highlighted, is a five-week course which has a research story behind it, essentially how we learn science. And so we take people through a model, and so they have that hook very much early on in week one. The one at the top, Managing Behaviour for Learning, is about practical approaches for managing behaviour in the classroom. You could cherry-pick from it if you wish. So there's naturally going to be a drop-off. A three-week subject specialist course: a few bumps here, a bit of a bump at the end when you've got a Q&A. What does that actually tell me about what my learners are doing, the decisions they're making? Not a lot. I don't think we can take much from retention curves. Now, I was really grateful for the input from Seb Schmoller and David Jennings, who said, well, why don't you look at retention on a week-by-week basis? And actually this is more interesting, because it tells me here that once we get people to week two, they're in, we're okay. And so the people who have committed to week two are going to be the ones that complete more of the course. But again, I have to ask myself, do I really want them to do that? Do I want them to complete the whole course? Do I need them to complete the whole course? And that focus on retention is still driving a lot of the discussion. So again, we can look at the blobs on the graph. So open online courses need success measures that focus on outcomes. And I believe we need to let go of retention, because the outcomes are more important, I think. 
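The week-by-week view described above can be sketched in a few lines. This is a hypothetical illustration only: the learner IDs, the activity data, and the `weekly_retention` helper are all invented for the example, not FutureLearn's actual export format.

```python
# Sketch: week-on-week retention instead of a single start-to-finish curve.
# For each week n, ask: of the learners active in week n-1, what fraction
# came back in week n? (Data and function name are illustrative.)

def weekly_retention(visits):
    """visits: dict mapping learner id -> set of week numbers they were active in.
    Returns {week: fraction of previous week's actives who returned}."""
    weeks = sorted({w for ws in visits.values() for w in ws})
    retention = {}
    for prev, cur in zip(weeks, weeks[1:]):
        prev_active = [l for l, ws in visits.items() if prev in ws]
        returned = [l for l in prev_active if cur in visits[l]]
        retention[cur] = len(returned) / len(prev_active) if prev_active else 0.0
    return retention

# Four invented learners on a three-week course.
visits = {
    "a": {1, 2, 3},   # completes every week
    "b": {1},         # drops out after week one
    "c": {1, 2},      # drops out after week two
    "d": {2, 3},      # joins late but then stays
}
print(weekly_retention(visits))  # two thirds of each week's actives return
```

A curve like this surfaces the "once they reach week two, they're in" pattern directly, which a cumulative retention graph tends to hide.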
What a teacher is going to do with the course is more important. What one of our learners is going to do with the course is more important. And this is summed up beautifully by DeBoer and colleagues in their 2014 paper: MOOCs, open online courses, have loads of data, but it's the diversity of user intentions, their backgrounds, the unconstrained asynchronicity of their activities. What a wonderful phrase. I mean, that's just telling me that there is no way that I as a learning designer can control when and how and why a learner is engaging with my course. But there's still quite a lot of faith in the data, and a need to seek out patterns to explain the learning that's going on. And I'm questioning that quite a lot at the moment. I'd also suggest that some of this has come from changes in the platform that we use. So FutureLearn has an upgrade model: if you want a certificate, you pay, but if you want that certificate, you also have to mark the steps on a course as complete. So you have to go through and tick a little box for our courses to say, yes, I've done that. To the extent that some course designers will put an instruction at the bottom of every step that says, don't forget to mark this complete, because it ticks a metric somewhere down the line. For me, I tell my learners right at the start of the course: use the completion measure because it will help you keep track of where you are. I don't care if you complete or not. I want you to make the most of the course. But the platform and the process behind that is slightly different. Success, then, does not necessarily mean engagement, in my view. And I want to move beyond the view that you must complete the course according to my design. We need to allow learners to choose the steps that are appropriate for them. We shouldn't be chasing correct responses to tests and quizzes. 
We should be thinking about, well, how do those test and quiz responses help us understand our learners better? So non-engagement on an open online course does not mean a lack of design success at all. We use the normal model of activity design. We have an activity in the middle that we're designing to lead towards an outcome. We think about the relationships between the learner, the content, the educator and the cohort, all mixed up in that form of activity. We use the ABC design model. But for me, I would like to suggest that it's not necessarily the activity that determines the outcome, but the context in which that activity sits that determines the outcome for that learner. Which means you could have a whole host of unintended learning outcomes which are more important to that learner than they are, perhaps, to the learning designer, and that makes the design challenge incredibly difficult. But we know that learning cannot solely exist online, and particularly in professional development, it sits within a practice context. So our learners should be supported to learn and develop, and we get the feedback in our course surveys that they have been able to take the course flexibly around work, they've enjoyed sharing, they've been able to try out new ideas, they've been able to rethink their practice. And I love that quote, because it came from a teacher who said: I've been teaching for 20 years, and I've had my practice challenged and changed through this open online course. So we can look at some of the data, though, to think about, well, can we infer something from the way that these types of learning are taking place? And I've looked here at what I'm going to call course wobble, which is the deviation of a learner from the intended trajectory of a course. So normally you go step one, two, three, four, five, six, seven, eight. 
If you're a wobbler, you might go one, four, 11, 12, two, three, and so on, jumping all over the course. And I find that interesting, because it really shows me our learners making decisions about missing steps. Our learners making decisions about: okay, this isn't relevant for me, I don't need it; or perhaps they're saying: oh, this might be interesting, this might be an unintended learning outcome. And we see a weekly spike, so learners jump to the first part of each week, and that's probably triggered by the weekly emails that the FutureLearn platform sends out. In a three-week course, those spikes aren't as pronounced, but we do see some patterns happening at the start and at the end of this particular course, again due to the interventions we put in to do with synchronicity. And I don't mean synchronicity in terms of people being online at the same time; I mean things that are fixed at a particular point in the calendar, like our Q&A with an educator. And that forces people to skip ahead. So there are some interesting ideas here about rhythms and learning routines, but most participants will take the course in a linear pattern. So if we're trying to encourage our learners to identify their own needs and use the course to meet their own learning needs, why are they taking the course in a linear pattern? Very few participants will start anywhere other than at the beginning: I had a look at four courses last night, and only 10 to 13% of people will start the course anywhere other than at the very beginning. That's a very, very small percentage when we're thinking about people making their own decisions about what needs they're trying to address. Social learning: the platform is all about social learning, but that doesn't necessarily mean it has to be restricted to social learning activities. 
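One minimal way to operationalize the "course wobble" idea just described is to count how often a learner's next visited step is not the immediate successor of the previous one. The metric and the step sequences below are invented for illustration, not the speaker's actual analysis.

```python
# Sketch of a "course wobble" score: the fraction of transitions in a
# learner's visit sequence that deviate from the linear step order.
# 0.0 means perfectly linear; higher values mean more jumping around.

def wobble(steps):
    """steps: list of step numbers in the order the learner visited them."""
    if len(steps) < 2:
        return 0.0
    jumps = sum(1 for a, b in zip(steps, steps[1:]) if b != a + 1)
    return jumps / (len(steps) - 1)

linear = [1, 2, 3, 4, 5, 6, 7, 8]        # the intended trajectory
wobbler = [1, 4, 11, 12, 2, 3]           # the example from the talk
print(wobble(linear))   # 0.0
print(wobble(wobbler))  # 0.6
```

A score like this would let you separate the roughly 10 to 13% of learners who chart their own path from the linear majority, without reading anything into completion alone.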
We know that maybe 20 to 30% of learners are commenting, and commenting perhaps has a relationship to completion, as does responding to others, as does receiving a response from others. But of course, as Swinnerton, Hotchkiss and Morris said in their paper on this exact issue, completion and commenting are intrinsically connected. You cannot use measures like this to measure the success of a course. You cannot use measures like this to measure the success of social learning, because the more you complete, the more opportunity you have for commenting. So they're dependent variables to begin with. Most comments are not replies, but does that matter? Anecdotally, the quality of the comments on our courses has improved over the last 18 months, and our mentors have said this, and we made changes to the courses to facilitate it. The comments are becoming more thoughtful; they're longer, which the mentors delight in because it gives them much richer data to play with, to extract themes and ideas. And we're getting fewer of the "oh, that's a nice step" or "great video" comments, and more thoughtful comments, which is great. They're reflections on that teacher's practice, and they're using the platform not necessarily to talk to someone else, but as a way of capturing their development. So in terms of planning to engage with professional development as well, the data shows us that, as I've said before, if you join the course in advance of the start date, you're more likely to complete. And I'm also looking now at whether there might be something to do with the time difference between steps. So if you engage with all of the course in one day, how does that change your completion, or other metrics that we might think of as performance on the course, compared to somebody who takes it over a much longer period? And the outputs we have from our model, our approach and our challenging of sticking rigidly to a single pedagogy are shown through the outcome surveys. 
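The confound noted above, that more completion mechanically means more opportunity to comment, suggests normalizing comment counts by exposure before comparing learners. The helper and the numbers below are purely illustrative assumptions, not data from these courses.

```python
# Sketch: comments per step visited, rather than raw comment totals.
# This avoids counting a course completer as "more social" than a
# dipper simply because they saw more steps. (Hypothetical helper.)

def comments_per_step(comments, steps_visited):
    """Normalize comment volume by the number of steps a learner visited."""
    return comments / steps_visited if steps_visited else 0.0

# Two invented learners: one completes most of the course, one dips in.
print(comments_per_step(30, 60))  # 0.5 - completer
print(comments_per_step(5, 10))   # 0.5 - dipper, equally social per step
```

On a per-step basis the two learners look identical, which is exactly the kind of comparison that raw completion-correlated totals obscure.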
And I have to admit now, these are obviously people who have completed the course, or have completed enough of the course to get to the survey point. But we see here that 98% of our participants say there's been a positive impact on themselves, 80% on their students, and 63% on their colleagues back in school. When you consider that most of our participants are probably taking these courses individually, there's a lot of work we can do there on a blended model of embedding these courses as a structure for schools to use. So impact on their practice, but also improved understanding of the subject matter of the course, and changes to their practice. The course is relevant to them and it's a good use of their time. How have I met all those differing learning needs through a course that we designed six months prior? These are questions I don't yet have the answer to. But we know that they like the structure, they like the discussion, they feel more confident. That's a feeling. How can we design learning for a feeling? Seeing the practical aspects, and feeling part of a community. So I'd like to conclude by saying that I think pedagogy should be open too. When I first started this role, I said I had to rethink everything I knew about online learning: social constructivism, communities of practice. They just don't work the same way in a course that is only three to five weeks long and essentially has an anonymity cloak for our learners. The data shows, by the absence of engagement as much as by evidenced learning paths, that pedagogy is open. We shouldn't read too much into the data. We should think about the outcomes and the outputs from that learning experience. So to conclude, I'm very much steered by my own professional viewpoint, what I believe about education. I believe it's very much situated within a context. I believe that social learning is important because it helps you to relate ideas to your context. I believe that discussion is important. 
But they're my own prejudices about what I believe is right. I always say that our online courses are not online textbooks. Absolutely not. That is not the type of online learning I want to deliver. But how can I convey my prejudices about pedagogy to our learners? And I have to acknowledge that my way isn't the only way. Learners might be dipping in just for five minutes and still be able to take something out of that. So I think where we're coming to, then, and that's interesting, I've been talking to a title slide, is a situation where we really do need to think, as learning designers, about letting go perhaps of our own prejudices, acknowledging that we have a professional viewpoint, conveying that to our learners, helping them to understand it, helping them to challenge not just their practice in the form of professional development, but also what they think about online learning. And through that, hopefully, they'll be able to sustain their professional development as well. Thank you very much. Thanks, Matt. That was a really interesting talk. We have a roaming mic here as well, so please do either post your questions online or raise your hand if you wish to ask a question in person. Matt, one of the things I'm really curious about, and I'm just going to use my chair's prerogative to ask the first question. Go on, chair's prerogative. What is the next step in this development? You've talked a lot about what's been achieved. Where do you see this in two to three years' time? Where would you like it to go? Within our own programme, we certainly need to do more in terms of course maps, which actually relates to the question there as well. And I know that the course maps approach has been explored by Gráinne Conole as well, in terms of helping learners think about the types of... 
The guidance they need about how to learn, but also the types of activities that are available to them, what type of activity they want to be engaging in, and how they're going to be supported through reflection. And we already put extracts from Diana Laurillard's Conversational Framework in our courses as well, to help them think about the types of comments they should be posting. So that was one of the changes we made over the last 18 months: to think, well, can we encourage our learners to be more thoughtful in their engagement? And they have embraced that. In terms of where online learning and professional development needs to go, it is definitely a blended model, because we want to be able to equip our online learners with the capability to understand how to use the online courses to support their departmental needs. So if you're a teacher in a school, what are the needs of your context? How can you cherry-pick what you need? So there's almost a level of metacognition that needs to wrap around all this as well. And I think the same applies in a university context; I have worked in universities before quite extensively, so I know the sort of challenges that you face. And I think that the most valuable part of a learning technology role in a university is having those conversations with the academic staff. And you need to make time for that, because then you understand what they're trying to achieve through their learning. So you need to wrap your professional development around that particular context, or enable them to make those links. Thank you very much. Anybody raising their hand? No, but we have some questions online. So: how do you encourage learners to explore different pathways through the courses? Or do you encourage that? So that's what we're going to be looking at with the course maps. But as I was putting them together with some colleagues, I was debating: well, should we do course maps based on activity? 
Should we say, okay, this is a reflection step, here's a discussion step? Or should we say, is it a video, is it video classroom footage? What's more likely to be useful for those learners? So it's still something I need to unpick. And I think there's an interesting point there about encouragement of your learners as well, related to that. Because you want to encourage your learners to work their way through a pathway and see the links between courses, but also buy into that idea of sustained practice as well. Yeah, so I presume the question is about how we encourage learners. So it comes down to looking at the outcomes; it's sharing what people have achieved previously. But one of the key parts of our online courses is the mentor facilitation. All of our courses have it. And the mentors do a video diary in some of our courses, where they pick up on some of the key points that that cohort has raised from the content. And that's been incredibly valuable for the learners, to see the links between the content and their context. But we also encourage our learners not necessarily through reward mechanisms, because the reward is actually the changes in practice in the classroom. So that's how you encourage your learners, and by learners we mean teachers and we mean academics in this context: by saying, well, look what's changing in your practice, what's changing in your classroom. So having something that's a small, easy win to begin with, before you go into more depth as the course progresses; that's how we try to do it. Thank you very much. I think we have time for one more question if there is one, or maybe everybody is sufficiently hungry that it's nearly time for lunch. So an opportunity now to ask any final questions. Oh, there's one more. So, in your opinion or experience, what can't you teach through online learning? I think that's a nice way to finish before lunch. 
I think you can teach anything through online learning, but it is the activity that doesn't always work online. We had this wonderful debate when we first started on this journey at STEM Learning, with my colleagues and the subject expert team: what works face to face? What doesn't work face to face? What works online? What doesn't work online? And so you come up with a whole plethora of different types of activity that will work in these different situations. But I think you can probably teach anything, apart from, and I won't be very specific here, some relationship-building approaches, human-to-human development skills, where I think you can deliver something online, but you have to link it back to the face-to-face environment. You have to have that face-to-face practice opportunity. You can't build relationships with your students just by reading or completing activities in an online course. You have to have that link between the two spaces. Thank you very much. And talking of links between people and spaces, we hope that you will all join us for lunch downstairs now. We're just five minutes ahead of time, but please put your hands together once more for Sarah and Neil and Matt. And also thank you to those who posted questions online.