Okay, thanks. My name is Martin Moran, and today I'll be talking about building an accessibility recommendation engine for blind and partially sighted students. Our motivation is the need to figure out what kinds of materials are usable and accessible for students who are blind or partially sighted. At first glance one could argue that this is a relatively straightforward thing: you just check whether the materials are accessible to blind students with screen readers, braille readers, and so on. But it turns out that there are great differences between the learning styles of students who have vision-related disabilities. You have screen reader and braille reader users, meaning partially sighted and blind students. But then you also have different skill levels among those students: some students grew up with computers, and some are not familiar with computers at all. And then you also have different learning methodologies, different learning techniques, within the same accessibility group. For instance, some blind students rely more heavily on the braille reader, while others rely more heavily on the screen reader. So our goal is relatively simple: we want to create a machine learning model to model the preferences of the students, and then use that model to recommend materials that will actually be usable for those students. In this presentation I'll briefly talk about the machine learning algorithm behind the recommendation service for blind students, and secondly I will present the results of the first pilot study and the recommendation web service itself. So the starting point, before we can even talk about recommending materials to blind and partially sighted students, is how we are going to describe those materials.
As a starting point, we used the ISO accessibility standards and their three relevant topics: perceivability, understandability, and operability. But of course, our machine learning model is flexible enough to make alterations to those descriptions and, if need be, to add new attributes and new descriptions to the materials. Now I will briefly talk about the machine learning algorithm behind this. Because we are working in an educational setting, it's really important for us to have a transparent algorithm. If a teacher asks us why a certain student is getting certain recommendations, we want to be able to answer that question. Or, for instance, if a machine learning researcher asks us precisely how the algorithm behaves, we want to be able to answer that question too. So, in a nutshell, we are learning preferences for materials, and those preferences are modeled as preferences for the materials' attributes. The main idea is really simple: we model every interaction with a material as either successful, meaning the material is accessible to the student, or unsuccessful, meaning the material is not accessible to the student. (Sorry, I will address the things from the chat afterwards.) If some of you have a background in statistics or machine learning, you might say that this sounds really similar to logistic regression, and in a sense it really is. But the difference is that we are not modeling preferences as scalar values, meaning plain numbers. We are modeling preferences as Gaussian variables, that is, random variables drawn from a Gaussian distribution; in other words, we are modeling preferences as distributions that we are then trying to fit. And why do we even bother to do that? Because blind and partially sighted students use computers very slowly.
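The talk doesn't spell out the update equations, but the idea of treating a preference as a Gaussian variable and updating it from a binary accessible/not-accessible outcome can be sketched roughly as follows, using a TrueSkill-style probit update. The function names and the noise parameter `beta` are illustrative assumptions, not the project's actual code.

```python
import math

def pdf(x):
    """Standard normal probability density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def cdf(x):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def update_preference(mu, sigma, success, beta=1.0):
    """Update a Gaussian belief N(mu, sigma^2) over one preference weight
    after a single binary interaction (success = material was accessible).
    This is the moment-matched probit update used in TrueSkill-like models."""
    c = math.sqrt(sigma ** 2 + beta ** 2)
    t = (mu / c) if success else (-mu / c)
    v = pdf(t) / cdf(t)          # additive correction to the mean
    w = v * (v + t)              # multiplicative shrinkage of the variance
    sign = 1.0 if success else -1.0
    new_mu = mu + sign * (sigma ** 2 / c) * v
    new_sigma = math.sqrt(sigma ** 2 * (1.0 - (sigma ** 2 / c ** 2) * w))
    return new_mu, new_sigma

# A successful interaction pulls the preference up and reduces uncertainty:
mu, sigma = update_preference(0.0, 1.0, success=True)
```

Because every observation shrinks the variance, the model's uncertainty report to the teacher falls naturally out of the same quantities it learns from.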
In order to build a reliable, sensible model, we have to be able to learn really fast. So everything we do is designed to achieve really good results on very small datasets. And in fact, we are achieving state-of-the-art results, depending on the dataset by a big margin, on very small datasets. So what are the outputs of the algorithm? First, at the level of the individual student, there is the engagement prediction: whether the material will be usable for the student or not. This prediction is then used to rank the materials; materials are ranked higher if we predict that they will be more usable for the student. And then we also give additional information to the student's teacher, which is a presentation, at the level of the individual student, of the learned preferences together with their uncertainties. This is important, as you can see on the bottom right: we can plot the preferences as Gaussian variables, and even if you don't have a background in statistics, you can intuitively understand that a flatter plot means we are less certain about our estimate, whereas a plot with a sharp peak means we are really certain about our estimate of the parameter. In other words, we try to be as fair as possible when communicating the parameters of the model to the teachers: if we are not certain about the learned parameters of our model, we communicate that to the teachers clearly. The second thing is that we can also plot and observe the different classes that might occur inside the classroom, which, given enough data, should correspond to the different learning preferences, the different learning styles, that appear within the classroom. And of course, the algorithm developed for this was not made in a vacuum.
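The outputs described above (a usability probability per material, and a ranking derived from it) could be sketched as follows: per-attribute Gaussian preference beliefs are combined with a material's attribute vector through a probit link, which makes the predicted probability naturally shrink toward 0.5 when the model is uncertain. All names, the 0/1 attribute encoding, and the noise parameter `beta` are assumptions for illustration.

```python
import math

def cdf(x):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def predict_usability(pref_mu, pref_var, attributes, beta=1.0):
    """Probability that a material is usable for a student, given Gaussian
    preference beliefs per attribute and the material's 0/1 attribute vector."""
    mean = sum(m * a for m, a in zip(pref_mu, attributes))
    var = sum(v * a * a for v, a in zip(pref_var, attributes)) + beta ** 2
    return cdf(mean / math.sqrt(var))   # high variance flattens this toward 0.5

def rank_materials(pref_mu, pref_var, materials):
    """Rank materials (name -> attribute vector), most usable first."""
    scored = [(name, predict_usability(pref_mu, pref_var, attrs))
              for name, attrs in materials.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical student who strongly prefers understandability (index 1):
mu = [0.1, 2.0, -0.5]    # perceivability, understandability, operability
var = [1.0, 0.2, 1.5]
mats = {"video": [1, 0, 1], "tactile": [0, 1, 0], "audio": [1, 1, 0]}
ranking = rank_materials(mu, var, mats)
```

Sorting by this probability gives exactly the behavior described: materials predicted to be more usable float to the top of the student's list.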
It relates to TrueLearn, which is deployed in our X5GON project, where it is used to model and predict interest and skill levels in relation to educational materials. And of course, the whole project of recommending materials to blind students is part of the X5GON project, a European project that creates machine learning tools to help students access open educational materials. The mathematics behind the model is inspired by Microsoft's TrueSkill algorithm; that is where the basic idea of updating Gaussian variables takes its inspiration. We unfortunately don't have enough data yet to really assess the quality of the algorithm on the specific task of recommendation for blind students, but we do have the unique opportunity to assess it on similar problems. I mentioned before that our main task is to achieve really good performance on very small datasets; in other words, we want to be able to learn really fast. So one thing we tried with the same algorithm is to explain why supercomputers break down. As you might imagine, supercomputer breakdowns are extremely rare events, so if you want to explain supercomputer downtime, you have very small datasets. And there we achieve state-of-the-art results that are significantly better than any other available approach; this was a classification prediction task. The second test was on a Slovenian educational repository called IUJBIKI, where we were testing the ranking capability of the algorithm. There we actually achieved slightly below state-of-the-art results compared to collaborative filtering and variations on that theme. But unlike collaborative filtering, we offer explainability of the algorithm.
The second part, which is also important for us: we are not exactly Amazon, and we want to create a sustainable solution that we can keep running in the long term, even when the funds from European projects run out. We are computationally really efficient; we offer better scalability than state-of-the-art approaches, which means that it is computationally significantly cheaper to run this recommendation service even for large numbers of students. Okay, now I'll briefly talk about the recommendation engine as a web service and the results of the pilot study we conducted with Slovenian blind and partially sighted high school students. The web service was co-developed by me and Grego Junich from JSI's Artificial Intelligence Laboratory, and it consists of two parts: the recommendation engine for blind students and a management portal for teachers. What I would like to highlight here is that we envisioned this recommendation service very much as a tool that helps teachers. We are not really focusing on students learning entirely by themselves. We want to create a tool for online learning, of course, but online learning in the context of students working with their teachers. So we want to provide recommendations of relevant materials to the students, but we also want to give the teachers the opportunity not only to select which materials are relevant for their classroom, but also to get additional information about their students, how they're doing, and whether the materials they're suggesting are suitable for them. So this is the front end as the students see it. It is by design really simple, so that it is accessible to blind and partially sighted students. The students log into a certain classroom, and the classroom ranks the materials that are predicted to be relevant for them at the top. That is the main functionality. And of course, the materials inside the classroom are pre-selected by the teachers.
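A minimal sketch of what such a two-sided service contract could look like (not the actual X5GON implementation; the class, method names, and `beta` parameter are assumptions): callers ask for a per-student ranking, and report interaction outcomes back, which updates that student's Gaussian preference model so later rankings get more tailored.

```python
import math

def _pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def _cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

class RecommenderService:
    """Sketch of the recommendation engine behind the web service: one
    personalized Gaussian preference model per student, ranked predictions
    out, binary accessibility feedback in."""

    def __init__(self, n_attrs, beta=1.0):
        self.n_attrs = n_attrs
        self.beta = beta
        self.students = {}   # student_id -> (means, variances) per attribute

    def _belief(self, student_id):
        return self.students.setdefault(
            student_id, ([0.0] * self.n_attrs, [1.0] * self.n_attrs))

    def recommend(self, student_id, materials):
        """materials: name -> 0/1 attribute vector; returns names, best first."""
        mus, vars_ = self._belief(student_id)
        def score(attrs):
            mean = sum(m * a for m, a in zip(mus, attrs))
            var = sum(v * a for v, a in zip(vars_, attrs)) + self.beta ** 2
            return _cdf(mean / math.sqrt(var))
        return sorted(materials, key=lambda n: score(materials[n]), reverse=True)

    def feedback(self, student_id, attrs, success):
        """TrueSkill-style update of each attribute the material carries."""
        mus, vars_ = self._belief(student_id)
        for i, a in enumerate(attrs):
            if not a:
                continue
            c = math.sqrt(vars_[i] + self.beta ** 2)
            t = (mus[i] / c) if success else (-mus[i] / c)
            v = _pdf(t) / _cdf(t)
            w = v * (v + t)
            mus[i] += (1.0 if success else -1.0) * (vars_[i] / c) * v
            vars_[i] *= (1.0 - (vars_[i] / c ** 2) * w)

# One successful interaction with an attribute-0 material shifts the ranking:
svc = RecommenderService(n_attrs=2)
svc.feedback("s1", [1, 0], success=True)
```

Keeping the per-student state this small is also what makes the service cheap to run at scale, in line with the sustainability point above.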
Then the teachers have the management platform, where they can add materials and create classrooms, where classrooms are meant as mini-courses: for instance, an introductory course on climate change, and so on. So they create classrooms, and through them they can manage both the progress of their students and the learned parameters of the models with regard to their students. Okay, I will show the actual platform for the teachers while I talk about the results from the pilot study. We conducted a pilot study with blind and partially sighted high school students, and we first wanted to evaluate whether our main assumptions about why we built this service even make sense. First of all, we were interested in whether there are in fact differences between students in the way they access materials and in which materials are in fact accessible to them. This was just the first pilot study, with four blind students and two partially sighted students. First of all, I was talking about the clustering presentation that our platform allows, and here we can see that the platform detects no clusters. That was partially by design, because for the pilot study we really wanted to select students who are as different as possible, meaning that we could screen for usability across as wide a spectrum as possible. So by design, we included students who use different learning methodologies, have different skill levels, and belong to different age groups. So let's go to the results. I could also do a live demo of the platform, but I'm running out of time, so I'll just show it in PowerPoint. Can I do screen share? Yeah. Okay, it works. So hopefully you can see my screen. Here we have our first partially sighted student.
And we can see that for her, only one material was unusable. And we can see that for her, all three topics, meaning perceivability, understandability, and operability, are somewhat equally important. Then on the summary page, we can see that we have far more certainty in our estimates for understandability and operability than in our estimate of her perceivability preference. If we go to another student, a partially sighted student, we can see that for him, understandability is far more important than perceivability and operability. So we can see that there are real differences in what is important for different students. Okay, I can stop sharing now. Can I get my slides back again? That's not me. Yeah, that's me. Thanks. Let me go to the correct slide. So I'll go to the conclusions. To sum up... oh, we have one more student, and this is a blind student. Interestingly, for her, the most important attributes are the ones describing the perceivability of the educational materials. And interestingly, we have the least certainty about the estimate of her perceivability preferences, as we can see here. Okay. So the first encouraging result from the pilot study was that a one-size-fits-all approach is not, in fact, suitable for all students; it makes sense to create a system like this. The other thing is that the pilot study was done in person: we were actually talking with the students as well as recording their interactions with the system. The students were generally really excited about the idea of having a utility to filter materials and recommend the materials that they will actually be able to access. They were also excited about having this service integrated with other services.
And, of course, about having more materials added later on. What we're also excited about is the possibility of integration: our recommender service is built so that it can be exposed to and integrated with other services. Because of our project, we are mainly focusing on recommending educational resources, but that doesn't mean we can't expose our recommender service to other platforms as well, for instance an educational platform, which would get from us recommendations for a specific student about which materials on that platform are most accessible to them. In return, we would get information about that student, which we could use to update our model. That means we would be able to keep updating and get better and more tailored predictions. Okay, just a second; I'm finishing anyway, so I'll have time for questions. And of course, the other area of future work we really want to explore is, as I mentioned before, the way we describe the materials that we add to the system. We want to explore the possibility of automatic material annotation, and we want to find the most sensible way to describe materials. Okay. Yeah, sure, can I get the question in? Okay. Do you have any questions? Oh, so I think there was one question there. Martin, for you, which was: how are you handling consent to gather data from minors and people with disabilities? And is your algorithm able to prevent bias and discrimination? Right, that's a really good question. Our pilot study, and every study we do, was first approved by the ministry. And then we actually require consent both from the student, if they are a minor, and from their parents.
And then we, of course, have a contact point through which we can delete all their data from the system. And that's it. Also, how are we addressing bias? What we're really proud of with this algorithm are two things. First, every model we create is personalized, which means the possibility of being influenced by some sort of biased data is minimal. And second, we are able to expose, either to the students or to the teacher, all the parameters and all the interactions, all the steps the algorithm took in making the model that is tailored to that student. These two factors, I believe, are the reasons why the possibility of creating a biased model is minimal. Compare that to the state-of-the-art approach for recommendation services, the stuff that Amazon uses, which is collaborative filtering or some variant of it. There, because it's a black-box model, when you go to Amazon and get the "customers similar to you also bought" recommendations, you have no way of knowing how that model was trained or why you get that sort of recommendation. If that is only about what you will buy on Amazon, that is fine, but it's not fine to have some sort of biased recommendations when it comes to educational materials. But again, we are... sorry. Sorry, Martin, I'm afraid that's all we've got time for. Okay. But I'm sure people can pick up questions with you in the chat during some of the other sessions if they need to. But I think you did answer the question we had. Thank you. Okay, sure.