Thank you. Thank you all for coming. Yes, my name is Emily. This is going to be one of those sessions where what was advertised in the abstract has slightly changed. Our original intention was to do a workshop, almost like a live coding session, where we went through R and everything like that. We then realised after submitting the abstract that it wasn't actually a sign-up workshop, so getting people to come with the technology was not entirely possible. So what we're going to do instead, rather than that kind of workshop where we go through line by line how to do the code, is go through the tutorials that we have created and basically explain what is possible in terms of analysing lecture capture data using R. And then we also thought it might be really useful, for the last 15 minutes or so, to have a discussion about the use of learning analytics data. So my primary research area is lecture capture; that's what I do my research on. And it's a really bizarre area to do research on, because I thought that we had kind of finished having some of the conversations about whether you should record your lectures and so on. I thought COVID had kind of taken care of that. And no, we're back. We're back where we were in 2015, because attendance and engagement have been problematic over the last year, and we're now seeing those calls again to get rid of recordings. And I actually think that using analytics data to do robust research is really important. So this is a very long-winded introduction.
What's going to happen is James is going to give you an overview of our project. I should really say we're funded by Echo360 for this project, as part of the Echo360 champions grant — is that right? I think that's right. So James is just going to give a description of that. We'll then walk through the tutorials that we've developed to show what we think is possible in terms of how you can use the analytics data Echo provides. And then we'll have a discussion, if that all sounds OK. So thank you for coming with us on this workshop. — Cool. So I'm just going to introduce what the project is about and what we're trying to do. Me and Emily are both psychology lecturers, so we've been the butt of many jokes over the last couple of days, with all the learning technologists complaining about how lecturers don't listen and all the technological problems they cause. So hopefully this will be a nice little palate cleanser — it's a slightly different presentation from some of the other things. So we applied for this grant, and Emily's research is focused on lecture capture. For the people who do this stuff and are really into it, it almost comes second nature to be able to look at the data, see what you can use it for, and try to look at markers of student engagement with learning materials. But what we're interested in is: do people actually know what's there? Are they aware of what information is there? Do they know the tools available to them? Do they know what they can use? And once we have that, trying to think: if we know what you do know, what don't you know? So what we're trying to do is get to a point of being able to develop resources to target the barriers that we identified. And this is where our tutorials come in: in the psychology department at Glasgow, we put a big emphasis on data skills.
So being able to analyse data, being able to wrangle data and work with it. We wanted to use the same sort of things we teach our students to demonstrate how you can work with this kind of data. If you're not familiar — hopefully everyone is, but just to get everyone on the same page — this is the sort of data we're talking about. There are different bits of software for this, different options, but at Glasgow we use Echo360, to either capture lectures live or record lectures and upload them. Once you've done that you can get an overview like this. This is one of my courses on research methods, and you can essentially see how much students use it. The bars are how many views there were across months; it tells you how many unique views there are, how many total views there are, that sort of thing. But then you can also download the data. So this is Echo's little dashboard, and that's what they provide if you click on your library, if you have it at your institution. This is what you can see. And I guess one of the other things to highlight — also in our focus groups, which I'll talk about in a second — is that this might differ depending on your role. Because we're lecturers, we see this lecturer-facing version: for your lectures, for the things you've uploaded, these are the statistics behind them. But when we started interviewing learning technologists, they might have oversight over not just one course — they might have oversight over a whole college or a whole university — and that can differ slightly. But then if you do download it, this is the sort of thing you get out of it. You can get it as an Excel file. The thing I have deleted from here is student names — obviously, we don't want to fall foul of GDPR, so I've deleted that information. But that is the only thing missing from here.
So you get a student-by-student overview of how many times they've viewed a given video, how long on average, and how long in total they've watched these videos. This is the kind of data we're talking about and what we're trying to work with. Before our little project, we did three focus groups — 11 people spread across them — and it was a mix of lecturers but predominantly learning technologists, which was interesting. I'm a little bit newer to this kind of research, this kind of area, so it was very interesting to see the perspectives of learning technologists, people who are more behind the scenes facilitating this stuff, rather than the sort of lecturers we're normally used to discussing things with. One of the main things that came out of it was that, particularly the lecturers, but even some of the learning technologists, almost never used the data: they had no real idea of where it would come from, what they could do with it, or the possibilities around it. The only thing they really did was look at the dashboard, see X number of students viewed it, and then move on. Some had a much more in-depth understanding — because Emily shared the call through networks of people who are interested in lecture capture data. And one of the main things that came out was that it's this very broad-level view, which is fine if you're looking at general patterns, but if you are interested in answering more specific research questions, you don't really get individual student view data. As some people pointed out, there is actually more granular data behind the scenes — you can access the API and look at it — but you need quite a high level of technological understanding to be able to access that and make use of it. So from this, rather than producing some very fancy "here's how you can do this very specific thing" — because we saw these barriers, that people were just not that familiar with it —
We just wanted to demonstrate in these tutorials how you can work with this data: how you can read it in, how you can wrangle it to put it into different formats, how you can combine it with other data sources. So for example, for a VLE we work with Moodle — being able to get data from Moodle, combine it with what you have from Echo360, and try to answer different research questions. So after the focus groups, together with our colleagues in maths and stats, we developed these tutorials: a general overview of what the data is; exploring video- and course-level data, almost trying to recreate those dashboard visualisations you get — just your bar charts of how many views there are across videos, across time; and combining it with other sources of data, which we feel is probably the most valuable and novel approach. And then one of the things that I did — because, as I was saying, I deleted the student names off it — typically it's very difficult to work with student-level data. Emily has horror stories of trying to apply for ethics approval to work with this data, because it's so sensitive at times. Depending on how detailed you get, there are lots of privacy concerns behind it. One of the things we will also include — it's not something we're going to focus on today — is different ways you can work with this kind of data using something called synthetic data, to essentially preserve the relationships while it's not the real students' data anymore. It's like with big genetic databases: it's one of the ways that you can still share the data and work with it, but with fewer privacy concerns. We were going to get people to vote on the approach, but when we recognised that most people wouldn't have a laptop — they didn't have detailed instructions on what to come with — we changed the approach a little.
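As a rough illustration of the synthetic-data idea mentioned above — not the exact method used in the tutorials — you can simulate a dataset that preserves the correlation structure of the real one without containing any real student's values. All the numbers here are made-up stand-ins for quantities you would estimate from the real data:

```r
# A minimal sketch of synthetic data generation: draw simulated
# "students" from a multivariate normal with the same means and
# covariance as the (hypothetical) real data.
library(MASS)

set.seed(42)

mu    <- c(5, 70)                      # illustrative means: views, score
sigma <- matrix(c(9,  6,               # covariance: var(views) = 9,
                  6, 100), nrow = 2)   # var(score) = 100, cov = 6

synthetic <- as.data.frame(mvrnorm(n = 200, mu = mu, Sigma = sigma))
colnames(synthetic) <- c("views", "score")

# The synthetic data has roughly the population correlation
# 6 / (3 * 10) = 0.2, but no row belongs to a real student
cor(synthetic$views, synthetic$score)
```

More sophisticated approaches (e.g. the synthpop package) model each variable conditionally, but the principle is the same: keep the relationships, discard the identities.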
So we're more just going to give you an overview of what we've prepared and the possibilities, and then see what people think and get a little discussion going towards the end. I'll pass back over to Emily. — Just before we go any further, can you be my roving mic? — Yeah, of course. — Has anyone — it doesn't have to be Echo360 — actually used the learning analytics data that comes from Echo, or any other kind of lecture capture, to do any kind of analysis? Can you run over there? How have you used it? I'd just be really interested to know. — I struggle to do the things that I want to with the data that I can access easily through the interface. I'm in a faculty ed tech lab as a data analyst. One of the things I was trying to do, for a year convener, was to find out whether the lectures they had pre-recorded during the pandemic and asked students to watch were, in total for a module, the same kind of length as if they were giving 50-minute lectures, and how the students' viewing behaviour differed — sorry, whether their behaviour when they were just viewing the pre-recorded material was very different from having a face-to-face lecture and re-watching bits of it. — Okay, so were you able to do all that? — It never got finished, partly because of fitting it in among lots of other things, and because one of the bits of information that I wanted, which was just the length of the recordings, wasn't very easy to get. That sounds really stupid, but it just wasn't. — Yeah, okay. So the data you can get from the Echo dashboard does actually have — yeah, if you go back a couple — oh, there it is. If you look at column D, that's one of the things you see down there. Is there anyone else who has used it for any kind of — you know, I'm just going to hover over that. — I've used it at my previous institutions; both of them were Panopto.
In this particular instance, I was actually doing a study on the VLE, looking at the correlation between exam results and the amount of time students spend in the VLE. And I noticed that across the cohort it's a positive correlation, but if you go beyond double the average amount of time that people spend in there, it actually becomes an inverse correlation. So I was interested to see if the same thing happened in the same module with the Panopto videos, and it didn't — the correlation stayed positive beyond that point. That's an early part of the project, so I'm still investigating how and why. — That sounds really interesting. If that gets to completion, send me it, please. Yeah, so I think one of the things that came out of the focus groups was that there isn't really that awareness that you can use this data. And for lecture capture particularly — it's this really emotive technology: when it comes to academics, it's the thing that we will talk about banning and so on, without actually realising that we have access to all of this data. So this is what we wanted to do: give people easier ways to show that you can actually try to evidence some of the things you have these emotional reactions about. And we also wanted to do it in R because it's open access and open source and everything like that. So we have written these tutorials. James, do the slides have the link on them? The slides don't — I'll go back and get it open in a second. I think the link is actually in the abstract. Yes — so the link to the tutorials is in the abstract of the talk, on the ALT conference programme. We've got three tutorials in total. The first one is just about getting started with Echo360 data: this tutorial shows what you can do with that spreadsheet that you download from Echo.
So from the dashboard that Echo gives you, with the bars and stuff, you can download the data as an Excel or CSV file. The first tutorial that we've written is really just about essentially recreating the dashboard: how can you, working on your own computer, take the CSV file and recreate some of those analyses? So, for example — I'm just trying to find it — this is the total views. This is how many times students watched the videos, and actually what you can see is most students only ever watched the videos once. That's what that graph is showing. You could get that from the Echo360 dashboard; this isn't actually giving you anything more than you would get. But the point is that you are doing it with the data that you are able to download. So that first tutorial is really just showing you how to do, on your own, what you can already do. The second and third tutorials then try to go a little bit further. And there is a problem with the data: it does lack granularity. I would like to do analyses on how often students pause the recording — that to me is one of the biggest questions. I don't have the time, the budget or the expertise to do this, but for me pausing is the big issue, because I hear my first years telling me that it's taking them three hours to watch a one-hour video. I know that that data is available in the API, if you go behind the scenes, but it's not available in what's provided with the dashboard. Anyway, there is still a lot of stuff that you can do with the freely available data that you couldn't do in the dashboard — you just need to be able to do a little bit more wrangling with it. So let me just — one of the things you can do is work with time and date data. Now, if any of you have any experience of working in R, you will know that working with time and date data in R is awful. It can be really complicated.
Thankfully, there are packages from the tidyverse that have made this a little bit easier. For example, this is a graph of the duration in minutes of each video. These are James's videos, so this is week one to nine — or video one to nine, I can't remember which week. So that's the duration in minutes. I don't know what happened to you in week five — maybe you just gave up. — We have a reading week, so it's probably around reading week when there's that little dip. — But because you've got the full power of R, you can choose how you visualise that data. So this is the same data, just as a line graph instead of a bar chart. But you can also then, for example, wrangle the data — what this analysis is doing is pulling out the month each video was last viewed in. Again, it's a little bit blunt, but this is the last time the video was viewed: January, February, and so on. And this was a second-semester course, I assume? — I think this is our ODL, so it spans the whole year; the videos would have been done in January. — So you can see that they're not really watching them much beyond the relevant point in the course. — So the question was: can you tell if they've just opened the video or watched all of it? This is just views — a very blunt number of views. There is then data about their average viewing time, which I'll come to. — And the other question was which bits of the video have been watched. I think you can get that from the API if you make the call, but it's not available just through the dashboard. One of the quite nice things about being able to import the data and use it in R is that you can then use a function called ggplotly, from the plotly package, which gives you interactive graphs. So this is the same plot, but if you hover over the bars, it will come up with a count — so 1,339 views were in January, and so on.
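As a flavour of the wrangling described above — a minimal sketch with made-up column names and values, not Echo360's exact export headers — durations like "00:13:55" can be parsed with lubridate and plotted with ggplot2:

```r
# A minimal sketch: parsing "HH:MM:SS" durations with lubridate and
# plotting them. Column names and values are illustrative only.
library(dplyr)
library(lubridate)
library(ggplot2)

videos <- tibble(
  video    = paste("Video", 1:3),
  duration = c("00:13:55", "00:21:10", "00:07:02")
)

videos <- videos %>%
  mutate(
    # hms() parses "HH:MM:SS" into a period; as.numeric() gives seconds
    duration_mins = as.numeric(hms(duration)) / 60
  )

ggplot(videos, aes(x = video, y = duration_mins)) +
  geom_col() +
  labs(x = NULL, y = "Duration (minutes)")
```

The same parsed values feed the line-graph variant mentioned in the talk — just swap `geom_col()` for `geom_line(group = 1)`.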
And you can customise these graphs to be interactive, which, if you're trying to produce something you could give to academics to let them interact with their own data, I think would be a really, really nice way of doing it. We've got some fancier ones in a second. That kind of stuff is already in the dashboard, but like I said, you can go beyond it. So what this analysis is doing is looking at the duration of the video — let's say the video is 13 minutes 55 seconds — and then looking at how long each student watched that video for on average. Again, it's lacking granularity: for example, if the student had watched the video twice, you would just get one average view time; you don't actually know how long they watched it for on each occasion. So it's a bit blunt. But what we then did was create a variable called time difference: the difference between the average view time and the full length of the video. So basically you can figure out what percentage of the video people watch on average, and because we can transform that, you can see it visually. What this tells you is that for most people there's zero difference between the total length of the video and how much they watched — most people are watching it in full. But we also have people where, the further along we get on this graph, the less of the video they're watching — the bigger the difference between the total length of the video and how much they're watching. We could also make this interactive: 710 people had zero time difference, whereas 71 people had a ten-minute difference. That's why it's all in minuses. — So the question was: if they've watched it more than once, do they then become a new data point? So, to answer that a little bit:
The data you get out of it is at student level — it's aggregated to individual students. So if a student watches a video once, it's perfect, because the average view time is the time they watched it. But if they watch it more than once, there is another variable in there, the total view time, and that can go above the duration because it's how much they've watched in total; the average is somewhere in the middle of that. You can see if it's one view, like the top row, the total view time is the same as the average view time, because that's all there is to go on. But as soon as it gets to two or more, it starts to get more complicated. So for example — and can I just say, these names are simulated, these names are made up; James did something with R that I don't really understand, but it was great — this person has watched the video four times. The total viewing time is eight minutes, so the average viewing time is two minutes. You don't know how long they watched it for on each individual occasion; it is just one row of data. If you go into the API and get full access to the data, you would have that granularity, but in terms of what's available straight away, it is a bit of a blunt tool, so you do have to be careful about some of the conclusions. So what I did with this plot here is look at percentages. Obviously, if you think about the difference between the average viewing time and the length of the video, it really matters how long the video is: missing one minute of a one-hour video is less bad than missing one minute of a two-minute video. So instead, I was able to wrangle the data into percentages — what percentage of the video, on average (with that caveat), people watched. And you can see here that about 75% on average is what people make it through for a video — except week five.
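The time-difference and percent-watched wrangling described here might look something like the following sketch, using hypothetical column names (the real export's headers differ) and made-up values:

```r
# A sketch of the "time difference" and percent-watched calculation.
# Column names and values are hypothetical, for illustration only.
library(dplyr)
library(lubridate)

student_views <- tibble(
  student       = c("Student A", "Student B", "Student C"),
  duration      = c("00:13:55", "00:13:55", "00:13:55"),
  avg_view_time = c("00:13:55", "00:10:00", "00:02:00")
)

student_views <- student_views %>%
  mutate(
    dur_secs  = as.numeric(hms(duration)),
    view_secs = as.numeric(hms(avg_view_time)),
    # 0 means watched in full on average; more negative = more missed
    time_diff_mins = (view_secs - dur_secs) / 60,
    # what proportion of the video was watched, on average
    pct_watched = 100 * view_secs / dur_secs
  )
```

Expressing it as a percentage, rather than raw minutes, is what makes short and long videos comparable, as discussed above.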
Week five was a shorter video, so you get a higher percentage. But this, I think, is what's really important about actually downloading the data and doing the analysis yourself: you can use it to tell the story you need to tell, and what that story is really depends on what argument you're facing. I don't mean "story" as in fabricating anything — but there are always different reasons why you need to be analysing this data. Some of this is available through the Echo360 dashboard as well: the total number of views per video. That's fairly blunt, but we can also wrangle that into the total number of unique student views. So whilst you can't separate out how long each view was, you can say how many unique views there were, which can be quite helpful. That's just doing that. So, let me just — sorry, was there a question? Just clarifying, okay. So this is maybe a good example of something that's a little bit more advanced in terms of plotting: a more advanced interactive plot where we're able to overlay quite a bit of information. This is the unique views per video: along the bottom we've got videos one to nine, and then with the interactive plot we can overlay the rest. So for video one, the total views were 291, the unique views were 195, the duration was 13 minutes and 11 seconds, and the average view time was nine minutes. So you can get a lot of information into this plot — again, depending on the story you want to tell. And it doesn't actually take that much code. I'm not going to go through this code, because that's not the point of the session, but that's the R code that makes that plot. It's not that difficult in terms of the amount of code it takes, and it's very, very customisable.
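An interactive plot of that sort can be produced by building an ordinary ggplot and passing it through plotly's ggplotly(). This is a minimal sketch with invented numbers, using a custom `text` aesthetic to carry the extra hover information (ggplot2 will warn that `text` is an unknown aesthetic, but ggplotly picks it up):

```r
# A minimal sketch of an interactive hover plot via plotly::ggplotly().
# All numbers are invented for illustration.
library(ggplot2)
library(plotly)

views <- data.frame(
  video        = paste("Video", 1:3),
  unique_views = c(195, 170, 150),
  total_views  = c(291, 240, 210)
)

p <- ggplot(views,
            aes(x = video, y = unique_views,
                # extra field to surface in the hover tooltip
                text = paste0("Total views: ", total_views))) +
  geom_col() +
  labs(x = NULL, y = "Unique student views")

# ggplotly() converts the static ggplot into an interactive widget;
# tooltip = "text" shows our custom hover label
ggplotly(p, tooltip = "text")
```

The resulting widget can be embedded in an R Markdown report or shared as a standalone HTML file, which is one way to hand interactive views to colleagues.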
This is another one, where we have both the repeated views and the unique views plotted as well. Now, I love a graph — I love a good graph; I'll genuinely do data visualisation for fun; I'm really fun to live with. But I think where the analytics data really comes into its own is in being able to combine it with other sources of data. That's really where the research is going to come from, because it's not actually that useful just to look at how many views you have — you want to compare it to, say, exam grades or attendance; those would obviously be the big ones. By downloading the CSV file and doing it in R, you can then combine it, so what we've done in this tutorial is combine it with scores on a multiple-choice quiz. The Echo data we have is from James's research methods course, which has statistics and everything, and we also have the data from the multiple-choice quizzes from Moodle — two completely different sources of data. The way in which R does this is through joins — relational joins. Basically, all you need is for those two datasets to have one piece of data in common. In this example, the Echo dashboard and Moodle both have the student's email address. It's probably going to be an email address or a student ID, anything like that, but you just need that piece of data in common, and then you can join anything together. So we can take the lecture data and the multiple-choice data and merge them. There's a bunch of different joins you can do: you can keep just the rows where you have data in both, or — it might be that a student hasn't watched any recordings but has done the quiz — you can choose whether to keep all of that, or only people with complete data. Sorry, I'm just going to skip through this here.
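In dplyr terms, the join described here looks roughly like this — a sketch with hypothetical email addresses and scores; `inner_join` keeps only students present in both sources, while `full_join` keeps everyone:

```r
# A sketch of joining Echo360 view data to Moodle quiz scores on a
# shared key. Emails and numbers are made up for illustration.
library(dplyr)

echo <- tibble(
  email       = c("a@uni.ac.uk", "b@uni.ac.uk", "c@uni.ac.uk"),
  total_views = c(12, 3, 7)
)

moodle <- tibble(
  email     = c("a@uni.ac.uk", "b@uni.ac.uk", "c@uni.ac.uk", "d@uni.ac.uk"),
  mcq_score = c(85, 60, 72, 55)
)

# inner_join: only students appearing in BOTH datasets survive
complete_cases <- inner_join(echo, moodle, by = "email")

# full_join: keep everyone, with NA where one source has no record
everyone <- full_join(echo, moodle, by = "email")

# ...and then, for example, a simple correlation between views and score
cor.test(complete_cases$total_views, complete_cases$mcq_score)
```

Student d has quiz data but no viewing data, so they appear in `everyone` with an `NA` for views — the "hasn't watched any recordings but has done the quiz" case from the talk.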
So, doing this — and again I should just highlight that this is simulated data; the relationships in here are based on the relationships that were in the real data. This is what we found from our course, but it's like a photocopy of a photocopy, if you will, in terms of the data. For example, doing this we were able to plot the relationship between how many times students had viewed the video — that's on the x-axis there — and their score on the multiple-choice quiz. This is obviously an incredibly simple analysis, but in terms of the basic principles, you can see the wider applicability: if you can combine this learning analytics data with any other form of information that you have, it becomes very powerful in terms of trying to run predictive models and trying to see what the relationships are. And we also have — thank you — the interactive graphs again, so that you can have a look at, for example, that one there: they got 100% on the MCQ — it's a stats MCQ, so that does happen — with 16 views, and so on. And then we're able to start running some analyses; we've just done a correlation here, which was not significant, but it allows you to do that. I was really, really surprised, when we did the focus groups, at how few people were aware of it. And then I suppose I think about my own behaviour as a lecturer. If I separate out Emily the researcher from Emily the lecturer: as a researcher I'm really interested in this; as a lecturer, when was the last time I went and looked at my analytics? I think the only time I ever would do it is when there's a problem — when someone comes to me and says, oh, attendance was bad today. Are they watching the video? That's normally the thing: are they watching the video?
And I think one of the things we wanted to do with this was to highlight that you do actually have more information than you might think. Sometimes academics will have a knee-jerk reaction and say, we need data, we need evidence — and actually they have it available to them. It's there. So we were trying to write tutorials so that people could actually use the data they have to help inform their learning and teaching practice, because it does tend to be such an emotive topic. I will say that, in terms of what's lacking from the data, that lack of granularity does cause issues, and I would be really interested to hear if anyone has thoughts about what you would like to do. I think pausing is a really, really big issue, and for the Echo data, not knowing when each view was — you know the last view, but you don't know when each view was. For example, I think one of the things that would be really useful from a research perspective is to be able to look at distributed practice. I want to know not when the last time they viewed it was — I want to know the first time they viewed it. Because actually, to me, that's the bigger thing: if I know that the last time a student viewed the video was two days before the exam, I'd think, right, that could be any kind of student. If I know that the first time they viewed it was two days before the exam, that tells me something very different. So that lack of granularity does cause some issues in terms of what we can do with it, and I know different platforms have different amounts of information and data readily available. So I suppose what would be useful, from what we've presented here, is to really open it up to a discussion: if you had the time, the skills and the data, what would you like to do with it? And in terms of your roles —
I mean, I'm an academic surrounded by academics — how do you see this being able to help in your roles as well? What would you like to do with it, or how do you think we could actually get people to be more aware of what they have? Does that make sense? The roving mic — my lovely assistant James. — From my point of view, I'm a learning designer, so it wouldn't be so much about lecture capture, because I design fully online courses, but knowing the engagement data on particular types of interactive activity, as well as videos, to see what actually works, what the learners actually do. Are they engaging with these beautiful interactives that we've created, or are they not? And that will then inform how we design in future. So that's the kind of data that I would like to use, and then visualise in R, so that we can see the trends and make design decisions in relation to that. — Yeah, and I suppose, what are your thoughts on — I love working in R, and I think the power of R is that even with extremely blunt data, you can do quite a lot with it, because you can rip it apart and put it back together how you want. Is that actually a barrier, though? Or is it a barrier that is just inevitable, because you have to be able to do it like this? — I mean, I quite like R personally, and I've used R for visualisation before. And I think the ability to join data — so if this person engaged more with the learning activities that we created, did that then have an impact on their eventual outcomes, did they score more highly in their exams, blah blah blah? Because it could be that not many people engage, but the people that do get higher marks, in which case we need to leave the activity in because it's actually useful. So I think R allows us to make those kinds of connections that can then inform our processes going forward. So I love R. I'm not going to —
I won't say anything bad about it. Thank you. — I don't know R yet; I'm about to start a course which it's central to, which is what caught my eye. I suppose my observation on this is that you two are in numerate disciplines, so the idea that good analysis of lecture capture data would help your colleagues understand better what's working and what isn't, isn't mad. But if you're talking about academics in other fields, all one could really hope for is for lecture capture suppliers to provide much better dashboards that non-numerate people can draw inferences from. And the other thing that struck me, listening to this, is that working out what inference can be drawn from data presented like this is an incredibly important facet of it, and it isn't always obvious. Sometimes people who are data-aware get very excited about the fact that they can present the data visually, but the really difficult thing is working out the inferences that can be drawn from it. — Yeah, I think that's a really good point. And then it comes back to learning analytics more broadly. I know at Glasgow the amount of data that we are collecting on a daily basis is just absolutely unreal, but I don't know where it goes. I don't know what's done with it at a practical level. We've got Planning, Insights and Analytics, who do all our data for us, and is it the case that — — The gold in the mine is very nice, and with the right tools, the gold can be extracted from the gold mine. — It's a huge frustration of mine, in terms of lecture capture research, and I haven't been able to overcome it myself: I know we have the data to conclusively show people what the impact of attendance and recording usage is.
The data is there, but getting access to it, and doing it on a scale that's useful, when you don't just have, you know, 150-student opt-in samples, is incredibly difficult. So, yeah, trying to mine that for everybody's benefit. I see the point that we're very quantitatively minded in the department and that makes it easier. Hi, thanks. That's really interesting. I am also not familiar with R. I think, moving away from learning analytics, this is more to do with content management and information that can help an academic team to enhance their learning provision or teaching provision. This tool could be used to help teams to review their content each summer. So one of the things that we do is we provide reports to, you know, year leads and block leads who want to review their content when they're thinking about refreshing it for the following year, so that they can see which content has been most actively watched and which lectures haven't had that much attention, particularly when you're looking at pre-recorded content, as opposed to a lecture capture recording. That's different. But yeah, I think it could be very useful to provide a dashboard for course teams looking at archiving old stuff and replacing and refreshing it. So, I mean, with R it's not something I have done myself, but you can make dashboards in R using something called Shiny, which is slightly beyond me. But it is actually possible to take the data and analyses and then put them into a format that other people could use as a dashboard. It would take learning, but certainly, to be honest, ChatGPT has made coding a lot easier.
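To make the Shiny point concrete, here is a minimal sketch of the kind of dashboard described: a course lead picks a lecture and sees its weekly view counts. The data frame and all of its column names are invented for illustration, not taken from any real Echo360 export.

```r
# Minimal Shiny dashboard sketch (hypothetical data and column names)
library(shiny)

# Made-up viewing data: views per lecture per week
views <- data.frame(
  lecture    = rep(c("Lecture 1", "Lecture 2"), each = 3),
  week       = rep(1:3, times = 2),
  view_count = c(40, 12, 5, 35, 20, 8)
)

ui <- fluidPage(
  selectInput("lecture", "Choose a lecture:", unique(views$lecture)),
  plotOutput("trend")
)

server <- function(input, output) {
  output$trend <- renderPlot({
    # Filter to the selected lecture and plot views by week
    d <- views[views$lecture == input$lecture, ]
    barplot(d$view_count, names.arg = d$week,
            xlab = "Week", ylab = "Views",
            main = input$lecture)
  })
}

# shinyApp(ui, server)  # uncomment to launch the app locally
```

A dashboard like this runs locally by default; sharing it with a team while keeping the data private would mean hosting it behind institutional authentication, which is where the access questions below come in.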
Yeah. My question is really about privacy, then. Okay, you get over the hurdles and you learn the tools to do the analysis and create the dashboard, and you want to then share it with team A or team B, but you don't want that data to be publicly available. So how do you present those dashboards behind password-protected screens? That then becomes the issue. I mean, I suppose Power BI would be another option. I don't use it myself, but I believe you can use R within Power BI, and I think you could restrict access through that. Just to plug our materials: the address for this is psyteachr.github.io. If you don't know R and you'd like to learn R, Applied Data Skills is probably the best starting point. That's my course. It's a 10-week micro-credential, so it's aimed at non-academics; actually, most people who take the course are in the NHS. It goes from zero to fully reproducible reports and visualisations and stuff, and it also has all of the walkthrough videos and everything. They're mostly my face, I apologise. But the entire course is open access, so if you want to learn R, then, I'm told, it's OK. Yes, that's another one of mine. So the full course is Applied Data Skills. 'Data visualisation using R for researchers who don't use R' is a paper, so that's quite a short thing. Applied Data Skills is a full 10-week, 10-credit course that goes from nothing. If you're domiciled in Scotland, it's free, but that probably doesn't help that many people here. Everything's available online, so if you were domiciled in Scotland and you wanted to actually get the credits for it, it would be free. I think it's £700 to sign up for it if you're not Scottish. But everything's there. We just make everything open access, and it's all got a CC BY licence as well.
So actually, if you want to steal it, that's also fine. I've got a question; maybe not so much a question as a rambling thought. I'm thinking back as well to what Anne-Marie Scott was saying yesterday in terms of small steps. I think you didn't get here in time for it, coming from Scotland, so I highly recommend watching it; it's available online. She's in a leadership role, and part of her guidance is looking at those small steps. What you've done here is you've got that small step, and I think you're seeing value from this in terms of interpreting and analysing the data. For your institution, it's presenting this as: we've got this small step, we think it's beneficial, and showing that, in terms of going forward, there's value in improving the educational experience and looking at where you go next as part of that. Because I appreciate something like data governance is a really complex area, but for institutions there's a real opportunity to do stuff with data in a controlled way that isn't also creating narrow gatekeepers. My wife works in higher education, and the frustration I hear from her every day is: they've got a data lake, but they don't have the access or the skills to utilise it. I think data skills are so important. This is moving away from lecture capture a bit, but it's basically the same skills that are in this tutorial. So, one of the things I care a lot about is widening participation, and actually one of the things I've been able to do, because of the data I have available and because of my data skills, is to merge a couple of different forms of data that we have to show that some of our policies are disadvantaging our MD20 and SIMD20 students.
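A merge like the one just described can be sketched in a few lines of dplyr. Everything here is hypothetical: the data frames, column names, and values are made up to illustrate the shape of the analysis, not the actual Glasgow data.

```r
# Sketch: joining a widening-participation marker onto a policy
# outcome, then comparing groups (all names and data are invented)
library(dplyr)

# One row per student, with a (made-up) SIMD20 flag
students <- tibble(
  id     = c("a", "b", "c", "d", "e", "f"),
  simd20 = c(TRUE, TRUE, TRUE, FALSE, FALSE, FALSE)
)

# Whether each student was affected by some policy, e.g. a penalty
outcomes <- tibble(
  id        = c("a", "b", "c", "d", "e", "f"),
  penalised = c(TRUE, TRUE, FALSE, FALSE, TRUE, FALSE)
)

# Join on student id, then compare the proportion affected per group
students %>%
  left_join(outcomes, by = "id") %>%
  group_by(simd20) %>%
  summarise(prop_penalised = mean(penalised))
```

The same join-then-summarise pattern works for the extensions example that follows: join submission records to extension records, then compute the proportion who were genuinely late versus covered by an extension.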
And that's only possible because we've been able to combine these things and wrangle them in R. Another thing we've been doing, and I know everyone's been doing it over the last year, is about extensions and late submissions and trying to get that number down. One of the things I've been able to do is to actually look at the proportion of late submissions and extensions for the last four years, then combine that with the extension data so I can say who actually had an extension versus who just submitted late. That's then helping to inform our practice and, you know, various interventions, and looking at stuff from Moodle. And it's all the same underlying skills. I think if you can convince people to upskill, open the door into how they can use the lake of data, all the lecture capture stuff, Moodle, Canvas, everything, and give them relatively basic data skills, it opens up so much. I actually remember, a couple of years ago at another ALT conference, the University of Manchester were looking at an R analysis of NSS data. They basically went to someone within the institution who did research on pterosaurs, or the flight patterns of pterosaurs, but who knew R. And I think that's one of the huge things within our education institutions: we have people, like you or other departments, that know these tools inside out, but as an institution we never tap into that. Obviously they've got priorities of their own in terms of research and their own education delivery, but there's so much untapped capability; finding ways of unlocking it, I think, is the challenge. At Glasgow, we have very, very distinct tracks. We have research and teaching, the kind of traditional lecturer, and then we also have the learning, teaching and scholarship track, which is what me and James are on. It's very distinct at Glasgow; it's one of the most.
But one of the things I'm trying to do is encourage those people who are more quantitatively minded to think about it in this way, because a lot of people struggle to find a scholarship area of research that's away from their field. You know, if you are a biomedical scientist, suddenly flipping over to educational research feels very foreign, whereas if you actually go, well, you can apply all of your skills, all of that statistical modelling that you have, to these questions. Again, it's that untapped pool of data, that untapped pool of skills and knowledge, that we're not very good at using. That was just a time warning. Okay, thank you very much. Right, complete data noob here, so I'm going to bite the bullet and look completely... yeah. You mentioned the API when it comes to data granularity. So all that extra data is available in the API; what's stopping you using that? Is it because it's behind a paywall, or you don't own it? What's going on there? Well, you can extract it, but it's very hard to do. You'd have to spend days with someone sitting with you, showing you how to use the data. But it's accessible. Yeah. No follow-up question, thank you very much. It's like in our focus groups: one of the few academics we had, she's from a very similar research family, and she had a similar thing where she had that expertise but she struggled to access it. Because, do you know SQL? It's in SQL, so it's in that; it's a database language. So me and Emily are very strong quantitatively, but that's just a different category for the average academic. Even when you know data skills, having to work with proper databases is just one of those things; it's an additional level of complex data management skill before you can then have the data to work with and do the other stuff. Yeah. I think there... and then. Hi. So I've got a question that relates to the access to the API, or getting more granular data.
You have Echo360 reports that administrators within an institution can access that have more granular data, because that's what I discovered was the case. I'm from Imperial College London and we use Panopto, and I discovered that our IT department can get more useful reports to wrangle for analysis than we can extract through the user interface. So it's still not ideal, because I would like to get data through the API, but we need a data engineer to work on that, because of the level of technical skill required, and then we'll sort out interpreting what it means. But we have got a halfway house, where we've got some reports that are available to IT, which they're now running on a daily basis, and we're pulling them into a data lake and then creating data tables that we can build reports from. So is that sort of system also possible with Echo360? It might be worth asking whoever has that admin role in Glasgow. Yeah. And that might give you access to some more data. In terms of, like, as a researcher, that's something I would definitely do; as a course lecturer, I think you just want it to be easy, it just needs to be there. Yeah. So I've got a follow-on to that. I was sitting here wondering about whose role this is, whether it's better to have data analyst roles, which is kind of what mine's supposed to be, but I've really struggled getting access to the data that I want. Whether it's better to have roles like that, so that if it's not the vendor that's providing information in a dashboard, if there are bespoke things that departments want to look at and combine, as you showed, data from a VLE and data from a lecture capture system and marks data and all sorts of things.
Is it better to have skilled data analysts within those departments who can work with the academic staff? Or, I liked the fact that you talked about having that scholarship pathway for academics; is that why you've developed these online tutorials, so that it helps to support people to develop their own academic practice? That's part of it. In terms of whether it's better to have a data analyst role, and this is not an expert opinion, this is just my opinion, I kind of think both, in that you can't expect everyone to be an expert at everything; you need people whose job it is just to do it right. So I think there's definitely a need for more data analyst roles, because we have more need for that than we have capacity. I also think there's an argument to be made for giving people agency over their own data, because you can convince academics to care about certain things if they can actually see the data behind them. So I think both. If I had to pick one, I'd probably go with the data analyst role, because you'd rather it was done right, but I think there's an argument for both. I am conscious of the time, and I think there was... Did you have a question? It was to do with accessibility. Yes. Can I just say one thing, and it's pretty much just to agree with you, and not just to appease you. The previous institution I worked at gave data through Power BI, so everyone had access to that. Why I think having a dedicated data analyst would be helpful is, like what you touched on earlier,
people like me and Emily are the exception rather than the norm. Even within our department, you have a mix of skills in terms of being able to apply data skills, or even just the interest in it. So it's good having a dedicated role for it. But then, on the flip side, you kind of need skin in the game, where you need to know what questions you want answered to be able to have useful data. Because when I worked at my previous institution, we had a similar kind of data analyst collection set-up, and they used to just dump it all in Power BI and it was, right, go find your data. It was so complicated that even for me, someone who teaches statistics as a day job, it was just a nightmare trying to find the things I was looking for. So you almost need a bit of curation, which goes back to what Emily said about the story to tell: not manipulating it in a way that gets you the answer you want, but just a focus. This is the question I'm trying to answer; this is how I can hopefully answer it, that sort of thing. I am very, very conscious of the time, and I think we need to stop there. But thank you for coming. That wasn't quite the really structured session that we had in mind, but I hope it was still helpful. And if you want any of the resources or anything like that, then please do get in touch; we're quite happy to give everything away.