So I'm really grateful. Many people have traveled from all across the globe to come here, and I'm very touched and privileged that you've come all the way to this talk. I also know that many of you couldn't make it and are viewing this online, so I'm waving to you online — thank you for coming. Before I talk about learning analytics, I'm going to talk a little bit about the two other amazing things that happened to me in 2017. But before I do that, I want to say a big thanks to some really amazing people. These amazing people at the Open University are continuously pushing the boundaries of learning analytics, and unfortunately I couldn't fit everyone in. If you're not on this picture, I do apologize, but it is an amazing group of researchers who are really pushing the boundaries. At the same time, I'm really privileged to have worked for the last 18 years with some world-renowned researchers across the globe, who have really pushed the boundaries of my own thinking, and I'm grateful to all of you who are watching online. As some of you might have noticed, the person all the way on the left is my wife. She always has to hear me talk for hours and hours about all the articles, and she's the one really pushing me. So, I've asked you to get your phones out, and some people have already started to vote, which is good — and surprising. The poll is already full, so 40 people have started to vote without actually knowing the question, which is amazing. It's a little bit like Brexit: perhaps people didn't really know what they voted for. So, if you want to re-vote, you can go to the site that I gave you, poll.ev. If not, then let's assume that the majority were indeed right and you recognized 10 or more faces on the previous slides. Throughout this presentation, I will ask lots of questions.
So, don't give the answer before you know what the question is. That's the narrative. Right, amazing. So, 53% of you have already... good. My first amazing thing was, of course, that I became a professor in 2017, but the second amazing thing that happened was this. My wife thought, OK, we don't really have time for a dog, but let's just go on this website, borrowmydoggy.com, and see if there are any dogs out there who wouldn't mind spending some time with two academics. And we came up with this amazing dog called Tabitha. This dog is really amazing because she is a puppy in training to become an assistance dog. And these assistance dogs are absolutely amazing. Not only do they provide compassion and support to people, they actually have some amazing skills. For example, they can help you do the laundry. Who doesn't want a dog who can help you do the laundry? The other thing these dogs can do is withdraw money from the cash machine. Who doesn't want a dog that can withdraw money from the cash machine? Or they can help you untie your shoelaces, or help people to dress and undress. So that's really amazing. So we came into contact with Tabitha, who unfortunately is ill today — we would have brought her in, but she's not here. At the same time, we had another dog visiting us, Robbie, an amazing dog. We had him for five or six days. And most recently, we also had Valencia, which is super exciting. And with these dogs, you can do experiments, like Pavlov, right? You can collect lots of exciting data. So let me give you a little bit of exciting data about these dogs, and then you can see what you think these dogs can do based on the data I'm providing. So let's start with Valencia. Valencia is black, has four paws, is a female, and is 12 months old, right?
Robbie is also black — wow — also has four paws, amazing data, this is a really high-quality data set; he is male, and is nine months old. And then finally we have our star Tabitha, who is also black, has four paws, is a female, and is 13 months old, so she's very experienced. So get your remotes out again and think about which dog can fetch, recall, roll over, sit, and wait on command. Is it one, Valencia; two, Robbie; three, Tabitha; or do you think all three dogs can do this, or perhaps none of them? You can now start to vote, because I've made this available, and you can watch the data start to emerge. And then suddenly the poll will be full, because I only have a license for 40. So you have to be quick; if you're too slow, you can't vote. So 63% of you — of those who have voted, like with Brexit — think that all three can do this. Well, let's have a look. Indeed, all three dogs can do this. Isn't that amazing? Right, the next thing — and I've already given you a hint on Twitter: I've uploaded a movie of one of the dogs playing with a purple squeaky toy. So is a purple squeaky toy the favorite toy of Valencia, Robbie, Tabitha, all three, or none of the three? Let's vote on that again. So get your big data set out and think about it creatively. Who do you think — oh, Tabitha is currently leading. Oh, the poll is already full. Who thinks that Robbie — anyone who has not voted — who thinks that Robbie is the one? Yes, oh yeah, I see it in the back, fantastic. Well, I unfortunately have to disappoint you: only 20% have got that right, because it is indeed Valencia who loves a purple squeaky toy. The other two dogs love a blue ball, and they can play for hours. OK, the next question. Some dogs, maybe all dogs, sleep during the night, and when they sleep they have a soft toy. So which of these three dogs do you think sleeps with either a panda, a moose, or a monkey? Sorry, it's the other way around.
Does not sleep — trick question, a negatively phrased question. Which dog does not sleep with a soft toy? So, voting again. Oh, it's getting close, it's getting close. Oh, it's a draw, it's 30-30. So some of you think that none of the dogs sleep with a soft toy, and some of you rightly predicted that it was Robbie who doesn't sleep with a soft toy, because Robbie is a male dog. I'm kidding. Who needs a soft toy, right? And then last but not least, the special skills. All these dogs in a way have special skills, but these are the skills that we observed when they were visiting us, so they might have other special skills when they're not with us. One of the dogs, when I blow my whistle, comes to me immediately, like in a flash, but more importantly, she can also hold a wait — basically sit still for one minute when I say wait. So which of these three amazing dogs, or maybe all of them, can do this? Tabitha is currently in the lead, oh my God. I don't know who is voting so quickly, or whether people are voting online — it's impossible to get into this voting, it's good. But indeed, it is Tabitha, who unfortunately is ill today, who is able to do this. So it's really fascinating to see that just from some basic data we can already start to predict who is doing what. The next exciting thing in 2017 — I don't know why this slide keeps moving around; it might be the wireless, with everyone on their phones — was that I joined the World Transplant Games. Hayes already gave the answer away, but let's pretend we don't know what happened in the race. So this is me clearly suffering, trying to hold onto the wheel of my mate Frank van Impel — I'm waving to you, Frank — and Eric Paul. And we were going and going and going, but what you don't see from this picture is that there are three other people in it.
First of all — if my remote works; I should probably stand here — there is my mom, who's sitting here in front. She completely altruistically donated one of her kidneys to me, which is amazing. So I became world champion with a 70-year-old kidney. Who says an old part doesn't work? It's great, and I'm super grateful. At the same time, the brother of Eric completely altruistically donated a kidney to Eric and gave him a new lease of life. And then finally, an anonymous motorcyclist saved many lives, including Frank's, which of course is amazing. And our coach, Nico — I'm also waving to you; I know he's currently ice skating or speed skating — our coach said, do a so-called negative split. Is there any cyclist here who knows what a negative split is? What is a negative split? Yes: you start relatively slowly and conserve your energy, because in the second part of the race you're going to give it full beans. So our coach recommended: go slowly first and then go faster afterwards. That was the idea our teacher gave us. So in the amazing world of 2018, we're continuously collecting loads and loads of data. One of the things you can collect is where you're cycling. So we were cycling on that circuit. Another thing you can see is the altitude we were cycling at: are we going uphill, are we going downhill? And I have a laser. Then you can see the speeds we were doing — on average, 36 kilometers an hour, which is quite fast given that it's quite a hilly stage. Then you can see the power I was putting on the pedals, you can see my heart rate, and, for the keen cyclists, my cadence. The cadence is basically the frequency at which you're turning the pedals. And some keen observers might see that there was no cadence here. So you might think, oh, Bart was just hanging in and not actually doing anything. But sometimes when you collect data, things go wrong.
But I can promise you, I was definitely cycling tremendously. What you also might see from the data is that there were consistent dips. So what might that be? Bart was riding at the front; there were strong headwinds when we were going around; there was a mechanical problem; we were riding uphill; or we were doing a 180-degree turn — turning around on a roundabout. So what do you think? Perhaps that could be it as well, yes. Oh, thank you to those who have faith in me, and thank you also for not mentioning any mechanical problems. Indeed, that answer was right. And you could have seen this in the Strava output, because on this amazing circuit there were two corners where we had to turn 180 degrees, so of course we had to brake in order to make the turn. That was very well spotted, and later on I will come back to this. The next question is: which of the four laps was actually the fastest, based on the data presented here? My wife kindly said you can't see that from the data, so I've given you an extra option: it's hard to tell from the data. So I'm going to give you one more chance — five more seconds. Which lap was the fastest? In other words, did we listen to our teacher, yes or no? Great. You all think lap four, because that would be a negative split — because we listened to the teacher. One thing is, you can manipulate data. I just made that bar slightly wider; I've manipulated it. Because actually the first lap was the fastest. If you look at our output, we rode the first lap in six minutes and three seconds, with an average speed of 40 kilometers an hour and 297 watts. And in the second and the third lap, you can already see the numbers decreasing. So in a way, we didn't listen to our teacher at all. Nonetheless, we became world champions, and I'm very proud of that fact. We beat 23 teams, including the English, I'm sorry to say, and the Spanish were 40 seconds behind us.
So the next question — and you don't have to vote on this — who so far has answered every question correctly? Just raise your hand. OK, I didn't predict that. My next joke was going to be: you either must be family, because then you would know the answers; or you must be a real cycling enthusiast; or you must know a lot about dogs. But no one had all the answers correct, and that highlights something. What have we learned so far? Some data and some data points are really useful, while others are not. For example, the fact that a dog was black did not in any way predict whether that dog was able to do certain activities; whether it was a male or a female — who cares, right? Similarly, my cadence data doesn't tell you how fast I was actually going. But at the same time, some data is really useful, like, for example, speed. And yet, if you looked at my speed curve, you wouldn't have been able to explain the dips in the data without knowing what was going on. You might guess, oh, it might be hills. But if you don't know where people are going, it is actually quite difficult to understand where our learners are going. So the crucial question — and now I'm getting to the core of my lecture — is: can we actually use all that learning data to help improve our provision at the Open University and give students what they want? That's a big question. So what do students want? Big question. One way to find out is to ask them at the end of every module: were you happy with the provision we provided to you? — asked of these amazing 170,000 students. And what we did, with the work of Nylee and Vicky Marsha — I don't know where they're sitting — and also Denise, is collect lots and lots of data from all these students. And then we started to look at which factors actually drive what students indicate they want.
So we looked at characteristics of the module at the Open University, and we looked at the student — for example, their age or their motivation — all kinds of big data. We looked at this big data and started to see what actually predicts their satisfaction. We did this across 110,000 students, and we compared over 400 modules. And then we were able to say: OK, this is an excellent module, this is a good module, and with this module the students indicate they're not very happy. So what could be the reason for that? Well, bring out your remotes again. It could be: one, excellent modules have amazing teachers who give good advice; two, the module links well to students' professional practice; three, it links well to the qualification they're doing — if you're studying nursing and you get a course on quantum mechanics, that probably doesn't link very well; four, it's the quality of the teaching materials; or finally, it could be the quality of the tutors. So bring out your remotes again. What do you think is the key indicator, according to students, that distinguishes excellent modules from not-so-good modules? I should have bought more licenses; you're so quick. Maybe the OU should buy some of these licenses; that would be quite nice. So at number one is the quality of the teaching materials, quickly followed by the quality of the tutor. So you're thinking it has something to do with how we teach, and that good advice from teachers would be useful, but perhaps that's not the primary reason. So what we did with the data is look at two cohorts of students, starting in 2013-14, and then we redid the analysis in 2014-15 with those students. So let's have a look at the top 15. At the number one position, according to our analysis, modules that were rated excellent had really good teaching materials, according to the students.
And at number two was whether students were happy and satisfied with the assessment, the assessment facilities they were provided with. Quite interestingly, there are also some subtle differences. For students starting in 2013-14, the qualification aimed at was only the sixth most important factor for new students, while for continuing students — students who had already been successful with at least one module — it was the third most important factor. And for new students and continuing students in 2014-15, it became number two. So slowly, the students who are coming in are slightly changing their perspectives on what they actually want. And I think this links quite nicely to the change in the tuition fee system, but also to the students coming in with our redesign. Students want their modules to link to the qualification. So if you're doing a degree in nursing — I would love to do quantum mechanics, but maybe that's not the most logical option to choose. If we then use another way of looking at this data — if we, for example, look at the satisfaction scores of these thousands of modules against how well the students were doing — you would expect that students who are really happy would also do really well in terms of passing modules. So you would expect a nice upward trajectory in the data. For those who understand scatterplots: it's not a straight line. Actually, there are modules where lots of people pass — nearly everyone passes — but the satisfaction scores are not within the benchmarks we would like to aspire to. At the same time, in the other corner, there are modules where all the students are really happy, at least those who filled in the surveys, but very few students pass.
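To make the ranking exercise above concrete — scoring candidate factors by how strongly they track module satisfaction, then ranking them — here is a minimal sketch. The feature names and numbers are invented for illustration; the actual OU analysis used far richer data and proper multivariate modelling, not a simple correlation ranking.

```python
# Illustrative sketch only: feature names and numbers below are invented,
# not the OU dataset. The idea: score each candidate factor by how strongly
# it co-varies with module-level satisfaction, then rank the factors.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical per-module averages on a 1-5 survey scale.
satisfaction = [4.6, 3.8, 4.9, 4.2, 3.5, 4.4]
features = {
    "teaching_materials": [4.5, 3.6, 4.8, 4.3, 3.4, 4.2],
    "tutor_quality":      [4.2, 3.9, 4.5, 4.1, 3.9, 4.0],
    "links_to_work":      [3.2, 4.1, 3.0, 3.9, 3.6, 3.3],
}

# Rank factors by the absolute strength of their association with satisfaction.
ranked = sorted(features,
                key=lambda f: abs(pearson_r(features[f], satisfaction)),
                reverse=True)
print(ranked)  # teaching materials comes out on top with these toy numbers
```

With these made-up figures the ordering happens to mirror the talk's result — teaching materials first, tutor quality second — but that is by construction, not evidence.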
In fact, in follow-up research we did with Lisette Toetenel and Quan Nguyen — and Lisette came all the way from Switzerland, which is amazing — we looked at whether there is any relation between student satisfaction and student retention. And we found none. None. So, should we listen to students' feedback? Yes, of course we have to listen to students' feedback. But at the same time, if there is no relation with how students are performing, we might have to be more critical about all the data we're collecting. So in follow-up research, we then looked at how teachers are designing courses at the OU, and how this links to the perceptions of students and their performance. What we found, surprisingly, was that students like constructivist learning designs. And you might wonder: what are constructivist learning designs? They are designs where we put lots of stuff in front of the students and take the student by the hand. This is slightly over-exaggerated — I see Lisette saying, well, that's not exactly what it is. But in a way, it's providing lots of content to students, which students can study individually. And I'm happy to debate this later on. And the socio-constructivist designs actually led to a negative student satisfaction score. So students didn't like it when they had to work together with other students or when they had to talk to, say, a teacher — which, given that I'm a socio-constructivist, makes me go, ooh, this is not so good. But the number one predictor of whether students pass modules and continue through those modules is how teachers design courses — in particular, when they have communication in their design: building in discussion forums, wikis, working together, having OU Live sessions. So in a way, this is really exciting big data, because we can start to unpack the recipes that help our students to progress.
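The satisfaction-versus-retention check can be sketched the same way. Assuming made-up per-module numbers (not the OU data), a Pearson correlation near zero is what "we found none" looks like numerically:

```python
# Sketch of the satisfaction-vs-retention check with made-up module figures
# (the real analysis covered thousands of OU students and modules).
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

satisfaction = [3.5, 3.9, 4.0, 4.1, 4.4, 4.6, 4.7, 4.8]          # mean survey score
retention    = [0.60, 0.70, 0.55, 0.65, 0.58, 0.72, 0.54, 0.66]  # completion rate

r = pearson_r(satisfaction, retention)
print(f"r = {r:.2f}")  # close to zero: happier modules are not retaining more
```

A near-zero r on real data is exactly the situation described here: satisfaction surveys and completion measure different things, so neither can stand in for the other.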
Another piece of data then showed that, indeed, if you design a kind of traditional online course where you study at your own pace, students engage less over time, while if you design socio-constructivist learning designs, you have to work hard, because you continuously have to engage — which, in the end, leads to better retention. So this is a really interesting picture. So, should we give students what they want? I'm just going to pause and drink. I can tell you what I want. This is it. This is the new Wilier aero bike. I got one last year; this one is so much better. It only costs 10,000 quid. And I'm pretty sure that if I have this bike, I will win the European Transplant Games. But maybe you shouldn't always give students what they want. Maybe you should give students a slightly different way of thinking about their next goal — by being clearer: OK, you're already halfway up that mountain, but where is the next curve leading? And maybe you should think about pairing them up with another student to give them enthusiastic reminders. Or maybe you have a coach at the back who gives them advice: take a lighter gear; oh, it's only two more miles; keep on pushing. Another thing which, in a way, I don't think we do very well at the Open University is signposting where our students could be going. Should we go left, because it's an easier road — there are fewer mountains to climb, and you get to the qualification quicker? Or do you want to take the really tricky route, with lots of challenges but really exciting pathways? I think the commercial world already does this in a much more intriguing way. If we go back to Strava, what you could, for example, think about is giving students like me data saying: hey, there is one bloke who did the exact same track faster than you. And that would be really motivating for me, pushing me to work harder.
At the same time, many students would find this competitive nature very off-putting. For some students, it might just be: OK, you're currently here; we do know that the next two weeks are going to be tough, because you have to climb that mountain; but rest assured that two weeks afterwards it will become slightly easier — and rest assured that, at the same time, there will be another big mountain to climb. If you know psychologically that you have to climb that mountain in four weeks' time, and we signal this more clearly to students, we can actually help them, hopefully, to overcome some of the psychological barriers. And we're not very good at doing this with actual data. And lastly, it could be that you just want to provide some students with a general sense of direction. Some students just want to see the options: maybe I go left, maybe I go right, maybe I go off-piste. But in a way, we should think about how we can provide this smart data back to students, because we're tracking, as Hayes will say, lots of data, but we're not necessarily giving this kind of data back to our students. So I'm pushing a little bit, to hopefully get a good discussion afterwards. In follow-up research with Quan Nguyen, we then looked at not only aggregate pictures; we started to look more fine-grained: what are students doing on a week-by-week basis? What you see here, on a week-by-week basis, is what teachers design and what students are doing. The blue is assessment activity — so you see a peak. And the orange-yellowy one is assimilative activity — read this chapter this week, or do this interactive activity this week. And the red line is the average engagement of students in that particular week. Is there a link between how teachers design courses and what students are actually doing on a week-by-week basis, based on this visualization? There's no voting; just say yes or no. Is there a link?
Every time there is a peak in assessment, engagement, on average, goes up. But what is really cool is that we can use this big data to unpack what's really happening. So I, as a teacher, would be really fascinated to see: hey, what's happening in week 20? In week 20, the teacher indicated a very high workload and a big assessment, but in fact, students weren't really engaged in that week. So not only does this provide a mirror to teachers, it also allows them to start to reflect. In another course, which is brilliant in a way, the teacher has a very useful programming task in week five, and decided: let's give the students week four off, to prepare, relax, and not be too afraid of the work ahead. And what do you see? The highest peak — well, the third highest peak — is in the week when students are supposed to be free. So by giving this data to teachers — and, I think, in the near future to students — we can start to map out these journeys so that it becomes clear what students do. And in a way, the next figure is quite provocative, because our research, modeling all these data across the 40 modules where we have this data, shows that 69% of what our students are doing on a week-by-week basis is determined by what we do as teachers. That gives us as teachers an enormous responsibility. So think about it again: if 69% of what our students do week by week is determined by how we design courses, maybe we should take the responsibility of designing courses in the best way we can for our students — and our students are increasingly diverse. Then we can, of course, dig a little bit deeper. I hope you're still with me, because you're quite quiet. With our friends from the Knowledge Media Institute, we can start to look at whether students actually engage with the course structure that we expect them to engage with.
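As a rough illustration of how a figure like that 69% can be produced: fit observed weekly engagement against the workload the teacher designed for each week, and report R-squared — the share of week-by-week variance the design explains. The weekly numbers below are invented, not the 40-module OU dataset, and the real modelling was considerably more sophisticated than this one-variable fit.

```python
# Toy single-predictor regression with invented weekly numbers, to show
# where a "variance explained" percentage comes from.
from statistics import mean

designed_hours = [6, 8, 5, 12, 7, 4, 10, 9, 3, 11]                    # planned per week
engagement     = [5.5, 8.6, 3.8, 10.8, 5.0, 5.9, 7.4, 9.8, 3.9, 8.0]  # observed

# Ordinary least squares, closed form for one predictor.
mx, my = mean(designed_hours), mean(engagement)
slope = sum((x - mx) * (y - my) for x, y in zip(designed_hours, engagement)) \
        / sum((x - mx) ** 2 for x in designed_hours)
intercept = my - slope * mx

# R^2 = 1 - residual variance / total variance.
ss_res = sum((y - (intercept + slope * x)) ** 2
             for x, y in zip(designed_hours, engagement))
ss_tot = sum((y - my) ** 2 for y in engagement)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")  # roughly 0.7 with these made-up numbers
```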
So we looked at three different groups: excellent students — students who pass with 75 or above — passing students, and failing students. What you see here is the so-called excellent group, and some of you might belong to this group. This is the course structure: the 45-degree line is when they're supposed to study. And what you see from the excellent students — this is a heat map — is that many of them study well in advance. At the same time, we call the other pattern catching up, and in a way, catching up is not quite the right word. Catching up can also mean you had this amazing chapter in week three, and then eight or nine weeks down the line you're revisiting chapter three because it's a really amazing chapter. So what you see here in the heat map is when students are catching up. But if we compare this with the pass group — and I do apologize if you get a migraine when I quickly switch between the two — does anyone in the room see any difference between the pass students and the excellent students? Anyone? There's no difference? Not much? OK. Our modeling at least shows, as you would expect, that excellent students study more in advance in comparison to pass students. But also, the pass students are more likely to already be getting into the catch-up phase — which, in a way, is not necessarily bad; they're still passing. But I think what this quite nicely highlights is that, yes, we as teachers have a particular course structure in mind, but many of our students are going off-piste. They're not necessarily following the course structure in the way that we designed it. And in particular, if you look at the so-called failed group — if I quickly switch between those — you can see that in the failed group, relatively fewer people study in advance. But in particular, especially here, lots of students start to catch up but, in a way, never catch up.
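The bookkeeping behind such a heat map can be sketched with a toy example, using invented study events rather than OU VLE logs: each record pairs the week a piece of material was scheduled with the week a student actually studied it. Counts on the diagonal are "on schedule", on one side "studying in advance", on the other "catching up".

```python
# Toy heat-map bookkeeping with invented (scheduled_week, actual_week) events.
from collections import Counter

events = [(1, 1), (1, 1), (2, 1), (2, 2), (3, 2), (3, 3),
          (3, 6), (4, 4), (4, 7), (5, 5), (2, 8), (5, 4)]

counts = Counter(events)

# Print the matrix row by row: scheduled week down, actual week across.
n_weeks = 8
for scheduled in range(1, n_weeks + 1):
    row = [counts[(scheduled, actual)] for actual in range(1, n_weeks + 1)]
    print(scheduled, row)

ahead = sum(c for (s, a), c in counts.items() if a < s)   # studied in advance
behind = sum(c for (s, a), c in counts.items() if a > s)  # catching up
print(f"in advance: {ahead}, catching up: {behind}")      # prints 3 and 3
```

Comparing how the mass sits relative to the diagonal for the excellent, pass, and fail groups is exactly the comparison described above.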
So this is, again, really useful data to look at: if we want to give these amazing students a chance, maybe we should give them a breather, a little bit of time to catch up. Or maybe we should provide a pass. Or maybe we should provide amazing Strava-style dashboards: keep pushing, and you can get to that line. So what are our take-home messages? I think what I've hopefully shown you today is that not all the data we collect is meaningful. So what I'm encouraging you to do is focus on what matters, which is basically the actual behavior of our students and our teachers. The second thing I've hopefully shown you today is that big data without context and without a theoretical framework is meaningless. I can show you hundreds of pretty graphs, but if you don't know what the peaks and troughs in the data are, if you don't know that there are turns in the data, it will make limited to no sense. The third one — and I'm going to be slightly provocative, given that there are two pro-vice-chancellors listening — is that student feedback is important, but what I've demonstrated is that student feedback is not really linked to what students are actually doing and how they're performing. So maybe we should be slightly more critical about student feedback, and I'm hoping for some provocative questions on this. And last but not least, what I've shown you today is that many of our students are following the learning design, but at the same time, lots of students are not. And what we're currently not really doing — and I hope you appreciate the cycling metaphors — is this: lots of our students are diligently sticking to the road, following exactly the same pathways.
But there are lots of students who want to go off-piste, who take different routes and different pathways, who might not like to talk to other people in a classroom or in an online session — and we're not necessarily providing great pathways for them. I think learning analytics has some of the answers for providing the kinds of pathways that we already provide for cyclists at this moment in time. And I'm looking forward to your comments. Some of you might already have seen the two amazing people at the back. If you want to know more about Canine Partners, Penny will be demonstrating Granger later in the lobby, so you can go and say hello to them. But you can do more than just go and talk to them: you can actually volunteer. We're looking for volunteers within Canine Partners, and we're also looking for foster parents. So if you have a day in the week where you think, oh, I don't want to work, I want to take care of doggies — great, go and talk to Penny. And if you think, I want to make this a full-time job — great, go and talk to Penny as well. And don't forget to donate your money. And lastly, I'm hoping that you have thought about organ donation. If you haven't already thought about this, please do, because everyone deserves a second chance. And whatever you choose, yes or no, talk to your family, because your family, in the end, is the one who makes the decision about whether or not you can actually donate. There are some flyers in the lobby — not to push you. So thank you very much. I'm looking forward to your questions.