Okay, thank you. So, my name's Christine Gardner, from the Open University, and I've been working on a data analytics project with Alan Jones, David Chapman and Helen Jeffries. Alan and David are both part of the TM355 module team, as am I; that's a communications technologies module with the Open University. Helen Jeffries is also part of our School of Computing and Communications; she's not directly attached to the module, but she does a lot of work within our school.

So, just to tell you a little bit about the module itself: it's a distance learning module. Most of the materials are print based, but as it's a Level 3 module, which equates to roughly third-year study at a conventional university, there are some quite difficult concepts within it. So we've developed some online resources to supplement the printed text, and the focus of this research project is on those online resources that supplement the written text of the module. The module itself is divided into three blocks: the first is about signal processing, the second is about coding and error correction of the transmitted signals, and the third builds on those kinds of topics.

So if we just have a quick look at the outline of this research, there are two main aspects. We're using data analytics in the first part of the project, and specifically what we want to know is how students on this module engage with the technology enhanced learning and teaching tools that we've devised to help them with those tricky topics as they reach that point in the module. We want to know whether they're using the tools at predicted points, whether they go back and use the tools again, and whether student engagement is changing across different presentations of the module, because we don't know whether the previous presentation was typical, for example. To supplement that, we also wanted to interview some of the students, just to get more of an insight into how they're using the tools. Do the technology enhanced learning tools actually help them to understand the topics better? Or are they deterred from using these online tools because they're too complicated or too time consuming, et cetera? We really want to know a bit more about why they are, or are not, using them.

So what we actually used was an analytics tool, Analytics for Action, which was developed at the Open University. What we've been doing is looking at the analytics data and investigating specific issues, and where we are in the project at the moment is identifying actions and prioritising those actions, before thinking about selecting methodologies and evaluating outcomes. So this is a work-in-progress report.

What we're trying to do is see how specific students are using the online tools, but the Analytics for Action tool identifies how students are using particular tools at a top level; it doesn't give us the detail of which students are using which online resources. Just to give you an example, this first example at the top shows the discrete cosine transform online tool and how students are using it. If you look at the top there, the blue bar is an assessment point on the module.
With this particular online tool, you can see that the students realised that this topic is relevant to their assessment at the end of a block, and they've gone to the relevant tool and used it; there's a very obvious peak there. If you look at the screenshot at the bottom, with Hamming codes, there's a different pattern of use. Again, the blue bars indicate assessment points: the first one is the end of the first block, the second blue bar is the end of the second block, and the third blue bar is the end of the third block. With this particular online tool, you can see that the students have used the Hamming codes online tool throughout the block. There are some peaks and troughs, but it's been used throughout the relevant block, and then there's a peak at the end where the students have gone back and used it for revision.

What we did was have a look at how the students performed in the exam at the end of the module, to see if it made any difference whether the students actually used the online resources at revision time, and also to question those who didn't use them: why aren't they being used more extensively? A lot of time and effort has gone into devising and developing the tools, so we'd like to know a bit more about why they are or are not being used.

The particular prompt for this research was an exam question set in the 2016-2017 presentation. It was based on error control codes, which is a Block 2 topic. The exam scores for this particular question were relatively low, and we knew that there was a specific online tool which would have been very helpful for the students had they used it in their revision. Using Analytics for Action, we could have a look at how that tool was used and whether it actually made a difference to those students.

The particular question was not a popular choice for the students. Looking at the top-level data, we could see that this particular tool was used in the relevant block of the module, and some of the students went back and used it at revision time, but only about 20% of the student cohort actually used the relevant error control codes online resource.

What we wanted to do was check the analytics data. We could see that there was a subset of students we would like to investigate further: those who actually answered that particular exam question. We worked with the TEL team to interrogate the analytics data more deeply. In particular, we wanted to know if those students who answered the question had actually used the associated online tool. To supplement that, we also wanted to ask the students whether they used the tool, and to check whether the analytics data was actually giving us the type of responses we were expecting.

When we looked at the data, we could see which students used that particular online tool. What we found when we matched that to their exam scores was that the overall average score for that particular question was 45%. For those who didn't use the technology enhanced learning and teaching tool at all, the average score was 30%. If they used that particular tool at least once, their exam score averaged 53%; if they used it specifically at revision time, the average score was 52%; and if they used the online tool multiple times, they averaged 58%.
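To make that kind of breakdown concrete, here is a minimal sketch of grouping exam scores by tool usage. The figures, column names and groupings are invented for illustration only; this is not the project's data, and it is not the Analytics for Action pipeline.

```python
import pandas as pd

# Hypothetical per-student records: score (%) on the exam question, and how
# many times the student opened the error control codes online tool.
# All values are invented for illustration.
students = pd.DataFrame({
    "score":     [28, 35, 55, 50, 60, 56, 48, 30, 62, 54],
    "tool_uses": [ 0,  0,  1,  1,  3,  2,  1,  0,  4,  2],
})

# Bucket students by usage, loosely mirroring the groups described in the
# talk (revision-time use would additionally need access timestamps).
def usage_group(n):
    if n == 0:
        return "never used"
    if n == 1:
        return "used once"
    return "used multiple times"

students["group"] = students["tool_uses"].apply(usage_group)

# Average exam score per usage group.
print(students.groupby("group")["score"].mean().round(1))
```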
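For context on what the Hamming codes tool itself exercises, the sketch below shows textbook Hamming(7,4) encoding and single-error correction, assuming the standard generator and parity-check matrices. It is a generic illustration of the technique, not the OU's interactive tool.

```python
import numpy as np

# Standard Hamming(7,4) generator (G) and parity-check (H) matrices, mod 2.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

msg = np.array([1, 0, 1, 1])   # four data bits
codeword = msg @ G % 2         # seven-bit codeword

received = codeword.copy()
received[2] ^= 1               # simulate a single channel error

# A non-zero syndrome matches the column of H at the error position.
syndrome = H @ received % 2
err_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
received[err_pos] ^= 1         # correct the flipped bit
assert np.array_equal(received, codeword)
print("corrected codeword:", received)
```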
Although this is quite a small sample of students, it was encouraging to see that the data suggested that those students who used the online tools were performing slightly better. As it was a very small sample, we wanted to see if this particular group was any different from the cohort as a whole. We used a different predictive online tool, one that is used by the student support team, to see whether our sample of students who answered that question was any different from the students who didn't answer it. Looking at the overall predicted pass rates, our sample was very slightly weaker than the cohort that didn't answer that question: a 0.85 probability of passing versus 0.87. So there's no particular reason why those students should have performed badly on that question.

What we wanted to do to follow this up was to find out a little bit more about why students were or were not using the online resources. We interviewed a small sample of students, and we had some quotes from them saying that seeing the coding in practice, and having interaction, actually helped with their understanding of the topic. The online resources are also very good for self-testing, and that was noted by several students in the sample. One of the comments from the students was 'why wouldn't you use these resources?', which is exactly what we were trying to find out. Obviously there were also some negative comments, one along the lines that a resource was 'just a video clip' and didn't really add anything, but overall the comments were relatively positive.

What we want to do now is to progress this. What we think will help is to give some indication of the time needed for the activities, and then add some descriptions of what kind of activities they are, because some of them are interactive in different ways: give some indication of what they're like, promote them in the student forums, possibly have some sort of 'talking heads' on how students are using the interactive tools, and also mention them in the introductory and revision sessions that we're producing for the module. We're in the phase now of thinking about how that will best work. We're going to produce some descriptions of the activities, review the use of the technology enhanced learning and teaching tools for the next presentation, add advice to a revision podcast, interview the next cohort of students, and then see if that has actually made any difference. We'll also consider additional guidance when we come to the mid-life review of the module itself. So I'm going to say: any questions? Thank you very much.

Any questions for Christine? I can't see any coming through online yet, but obviously you're very welcome to send them that way. A question here from the floor. Can we just wait for the microphone, please?

I'm Richard Treat, also from the OU, so I'm in the TEL team. I was just interested: so it was just video? It didn't actually get the students to practise anything or do anything?

It was interactive, so the students were using various interactive tools to input different data, for example, and actually playing with the interactive tools. So it wasn't watching videos; it was interactive. And Helen's here, who was also working on the project.

OK, so it wasn't just video; there was interaction.

One of the questions at the top: the data you showed there, the sample size isn't massive, but it does show some compelling evidence.
Had you thought about carrying out some statistical analysis, just to see whether or not there was a statistically significant difference between the samples?

Yes; we haven't done that yet, because the sample size was so small. That was why it seemed to make more sense to compare that cohort to the other students who answered the questions. But it's certainly something that we can look into; this is a work in progress. I think what might be useful is to break down that cohort of 48 students into those who did reasonably well compared to those who didn't, then review their online tool use further to see whether there's any difference there, and then possibly work on that data to see if there is some significance in it.

Maybe one more question. Maybe I'll ask it, then. You were showing aggregated data there; have you had the chance to actually look at individual students' usage patterns, to see any interesting categorisation of use?

Yes, that was where we managed to get the student details from the aggregated data, to go back and see how those particular students answered the exam questions. But that is something, as I say, that we can work further on, to break it down and see who did well and who didn't do so well.

Thank you very much indeed.
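As a footnote to the statistical-significance question above: with groups this small, one low-assumption option would be a non-parametric test such as Mann-Whitney U. The sketch below uses invented scores, not the project's data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical exam scores (%) for non-users and users of the online tool;
# values are invented for illustration only.
non_users = [28, 35, 30, 25, 33, 29]
users     = [55, 50, 60, 48, 62, 54, 52]

# Mann-Whitney U makes no normality assumption, which suits small samples.
stat, p = mannwhitneyu(non_users, users, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```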