So, first of all, my name is Haritz Medina. I'm a Ph.D. student in computer science at the University of the Basque Country in Spain, and my research group is called Onekin. Today I'm going to present a tool that my supervisor Oscar Diaz and I have been working on, which uses web annotations for assignment marking.

First of all, what do we mean by assignment marking? Assignment marking consists of defining a marking rubric; the students then complete and upload their assignments; the teacher marks all the assignments using the defined rubric; and finally the grades and feedback comments are published so students can improve their learning. Currently, all of these activities can be done in a learning management system except the marking itself. Teachers need to download or print the assignments, highlight their strengths and mistakes, and then transcribe those marks back into the learning management system. So our approach is simply to move this activity to the web.

What are the inputs, the inflow dependencies, of this activity? The defined rubric and the uploaded assignments that need to be assessed. And what are the outflow dependencies? The students receive their grades, and the teacher usually provides feedback comments, online or in class, to help the students understand where their marks come from.

So the inflow dependencies are the student assignments and the evaluation rubrics. An evaluation rubric is composed of the criteria to be evaluated, that is, the competencies being assessed. Each criterion has a set of possible levels of completeness that the student may have reached, and each level has a description of when the student has reached it. The output, basically, is the selected level for each criterion plus an associated comment. But how are assignments marked along the rubric?
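The rubric structure just described (criteria, each with levels, each level with a description, and an output of one selected level plus a comment per criterion) can be sketched as a small data model. This is an illustrative sketch only; the class and field names are my own, not Mark&Go's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Level:
    name: str         # e.g. "Excellent", "Pass", "Fail"
    description: str  # describes when a student has reached this level

@dataclass
class Criterion:
    name: str            # the competency being evaluated
    levels: list[Level]  # possible levels of completeness

@dataclass
class Rubric:
    criteria: list[Criterion]

@dataclass
class CriterionResult:
    criterion: str  # criterion name
    level: str      # the level the teacher selected
    comment: str    # the associated feedback comment

# A one-criterion rubric and a marking result for it
rubric = Rubric(criteria=[
    Criterion("Clarity of argument", [
        Level("Excellent", "Thesis is explicit and every section supports it"),
        Level("Pass", "Thesis is present but some sections drift"),
        Level("Fail", "No identifiable thesis"),
    ]),
])
result = CriterionResult("Clarity of argument", "Pass",
                         "Section 3 drifts away from your thesis.")
```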
It's mainly composed of three sub-activities. First, the teacher highlights the evidence that corresponds to each criterion, for example the mistakes found or the strengths. Second, they provide comments. And finally, they take into account all the evidence and all the comments and decide a mark.

How can we move those activities to the web? We use web annotations to assess the students' work. To this end, we have developed Mark&Go, a dedicated Hypothesis client for rubric-based marking. We provide a color-coded highlighter to mark the evidence for each criterion: it's as easy as selecting the portion of text you want to highlight and clicking on the criterion it pertains to. The teacher can then add comments to the highlight, which help students see where the mistakes are and what the teacher is referring to.

But that's not all. Comments should also be personalized for each student; otherwise they won't be valuable to them. For that, we provide a direct link to previous assignments, and also the possibility of attaching a reference to a previous assignment, so the student knows what the teacher is talking about. Students often make the same mistakes, so it is also possible to reuse previously created comments.

Finally, the teacher must provide the final mark. For that, they can use the sidebar to navigate through all the evidence in the assignment, and then decide the corresponding mark. I should say that in our implementation the highlighter is configured from Moodle: you define the rubric there, and the extension consumes it and automatically creates and configures everything needed to start marking. And the outflow, as we said before, is simply writing the marking activity back to Moodle: it automatically fills in all the marks, along with some of the comments the teacher provided.
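The highlight-plus-comment step above maps naturally onto the W3C Web Annotation model, which Hypothesis implements. Here is a rough sketch of what one piece of marking evidence might look like; the `criterion:` tag convention, the URL, and the example text are hypothetical, not Mark&Go's actual wire format.

```python
import json

# One piece of marking evidence, shaped after the W3C Web Annotation model:
# the target anchors the highlight to a text span, a tag links it to a
# rubric criterion, and the text carries the teacher's comment.
annotation = {
    "uri": "https://example.org/assignments/student42/essay.pdf",  # hypothetical
    "target": [{
        "selector": [{
            "type": "TextQuoteSelector",           # anchors by quoting the span
            "exact": "the results was significant",
        }]
    }],
    "tags": ["criterion:Grammar"],                 # hypothetical tag convention
    "text": "Subject-verb agreement: 'the results were significant'.",
}
print(json.dumps(annotation, indent=2))
```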
We also create a report in Moodle with links to each of the annotations, to each piece of evidence the teacher has marked. The main advantage of this idea is that the student can click on a link and be redirected to the exact point where the teacher assessed that criterion. And since we are using web annotations, they can also reply to the teacher's assessments online, asking for clarification, raising doubts, and so on.

What benefits did we find in moving this activity to the web? The first is that the feedback is available as soon as the teacher finishes assessing. Providing feedback on time is especially important in large classrooms and in continuous assessment, and it's not an easy task. Also, automatic transcription reduces the number of mistakes teachers make when they transfer and publish students' marks. The feedback can also be accessed online, so students reach their assessed assignment as soon as the assessment is done. They don't need to go to tutorials or even discuss specific doubts in class; they can do everything online, and they can take their time to find extra information before asking the teacher questions. Most importantly, they can ask in the context of the assignment itself. And since the feedback consists of web annotations, students can trace their grades: they can look at the evidence highlighted by the teacher and see how they got the corresponding mark.

The last benefit is related to assessment review. Assignments are referenceable, so teachers can reference previous activities to give more personalized feedback. Comments are also reusable, reducing the time required to provide feedback.
Annotations are also reusable data sources, so we can consume them to analyze the overall performance of the classroom or one student's progress, and it's even possible to learn from previous assessments to automate, or at least semi-automate, assessment in future courses.

But it's not all advantages. We had to face some challenges when developing Mark&Go. Some are specific to our domain, but others generalize to other domains. The first one we found in our Moodle implementation: there is no way to open a student's files online. Assignment files are online resources, as they are hosted in Moodle, but due to security restrictions Moodle automatically downloads the files to the computer. So what we have done is map the downloaded file, identified by a uniform resource name (URN), for example a document hash or a unique ID, to the new URL. I should mention that Hypothesis currently uses this same approach to annotate locally saved PDF documents and relate the local PDF to the web PDF: it's the same document in different instances. We have extended this idea to also support plain-text-like files.

Another challenge is more specific to the domain of assignment marking: the annotations must be accessible only to the student and the teacher. Otherwise, students would be able to see their colleagues' marks. Our current approach is to automatically create a private Hypothesis group for each student-teacher pair. This implies that the teacher is enrolled in a lot of groups, one per student. In the future we would like to have annotation-level access permissions, so that everyone could be enrolled in the same group without sharing the marking-related annotations.

The last challenge is more of a legal issue than an implementation one.
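Coming back to the file-mapping challenge for a moment: one way to relate the Moodle-hosted copy and the locally downloaded copy of an assignment is to fingerprint the file contents, so both resolve to the same URN. This is a minimal sketch under that assumption; Hypothesis's actual PDF fingerprinting differs in detail, and the URN scheme and URLs here are made up.

```python
import hashlib

def document_urn(file_bytes: bytes) -> str:
    """Derive a stable URN from the file contents, so the same assignment
    gets the same identifier whether it is opened from Moodle or from a
    locally downloaded copy."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return f"urn:x-doc:sha256:{digest}"  # hypothetical URN scheme

# Annotations are stored against the URN; a map from URN to known URLs
# lets the client re-anchor them on whichever copy is currently open.
contents = b"Student essay contents..."
urn = document_urn(contents)
known_urls = {urn: ["https://moodle.example.org/file/essay.pdf",   # hypothetical
                    "file:///home/teacher/Downloads/essay.pdf"]}

# The downloaded copy yields the same URN, so its annotations are found
assert document_urn(b"Student essay contents...") == urn
```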
In 2018, the General Data Protection Regulation became effective in Europe, which regulates how data is collected and stored. In our case, the data is collected and stored in Hypothesis, so either we treat it anonymously or we need the consent of the students and the university for it to be hosted on a third-party server like Hypothesis. So what our current implementation does is basically anonymize all the data: students' names, IDs, emails, et cetera never leave Moodle, while the marks and annotations need to be stored in Hypothesis. For that, we compute a one-way hash based on the student, course, and assignment IDs to create, populate, and consume the Hypothesis groups.

Apart from that, another important aspect is that Mark&Go is a Chrome extension for Moodle, but there are plenty of LMSs, for example Canvas, Blackboard Learn, or Google Classroom, and we could use Mark&Go as a mediator between the LMS and the annotations. We think one possibility, as mentioned before, is to use the Learning Tools Interoperability (LTI) framework, which is a framework for interoperating between LMSs and external applications. As far as I know, Hypothesis already uses it for shared annotations and single sign-on, but nothing else.

To conclude, Mark&Go is fully available in the Chrome Web Store. We are using it to assess assignments as a pilot evaluation in some classes, and if anyone is interested, feel free to ask for a demo during this conference, tomorrow, maybe on Friday too. We are also looking for external validators for this tool. So that's all. Thank you for your patience, and I'm happy to take questions.

Wow, that is some fantastic new functionality that I haven't seen before. Thank you, Haritz. That was great. Do we have questions? Come on. Everybody's completely sapped. Anybody see anything there that they hadn't seen before in an annotation tool? I did.
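(Stepping back to the anonymization scheme from the talk: the one-way hash over student, course, and assignment identifiers might look roughly like this. The function name, hash choice, and truncation are illustrative assumptions, not Mark&Go's actual code.)

```python
import hashlib

def group_name(student_id: str, course_id: str, assignment_id: str) -> str:
    """One-way hash over student, course, and assignment identifiers.
    The resulting name lets the extension find or create the right
    private Hypothesis group deterministically, without ever sending
    the raw Moodle identifiers to the third-party server."""
    material = f"{student_id}:{course_id}:{assignment_id}".encode()
    return hashlib.sha256(material).hexdigest()[:16]  # short, opaque label

name = group_name("student42", "courseA", "assignment1")
assert name == group_name("student42", "courseA", "assignment1")  # deterministic
assert name != group_name("student43", "courseA", "assignment1")  # per-student
```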
Automatically created private annotation groups between teachers and individual students, for instance. Granted, it's kind of a hack, right? I wonder, do you have a question, Tim?

The question was about the functionality you were looking for. I'm trying to remember the phrase you used: annotation-level permissions, or something like that. Annotation-level permissions. Can you explain? I'm not sure I quite understood, because that could mean a couple of different things; could you elaborate on that?

The idea is that currently each teacher and student pair is in a different group. To reduce the number of groups, it could be a good idea to restrict access per student instead: depending on which student an annotation belongs to, decide whether another user can access it or not. The point is to avoid one student seeing another student's marks, comments, and so on. The marks, comments, and feedback have to be private to the student and the teacher. That's more or less the idea: to reduce the number of groups. Currently, in Hypothesis permissions you can set the read access of an annotation, either everybody in the group or just yourself, but it would be better to be able to define a list of the users in the group who can read those annotations.

An idea people have suggested is that if you create a group, say a private group, any member of that group can essentially DM any other member of the group. You could have as many one-to-one conversations as you want, all inside the group, and as a member of that group you can look through and see any of your conversations. The teacher could have one-on-one conversations with anybody, or with any two students.
Just to clarify, and you have it correct: in the Hypothesis world, unless you've marked an annotation as private to yourself, all annotation access is determined through the group that the annotation is in. The problem you're trying to solve would be better addressed by a more granular permissions model that would allow you to not rely solely on the group the annotation is in. Does that sound correct? Yeah, yeah, that's it.

This is more of a comment than a question, although it's partly a question for the Hypothesis folks. What was really interesting to me about what you presented was the mapping to the rubric, because I was recently doing a peer review and found myself naturally trying to use Hypothesis to keep track of where I wanted to tie my notes on what I was reviewing back to the rubric I needed to review against. I know Hypothesis has been used for peer review, and I'm curious whether you had looked into how it was used for peer review, whether those rubric-mapping features are there, or whether the Hypothesis folks are planning to add rubric mapping to their peer review tools?

I think most of the peer review done in Hypothesis has been open peer review, in contexts where all the annotations are publicly available, or at least available inside a private group to everybody participating. So it hasn't been extended to that one-on-one kind of privacy level. Anybody else have anything to say about that? It was more the rubric mapping, rather than the one-on-one privacy, that's really interesting.
I think it would be interesting to go through common rubrics, map the permission models that are implicit behind them, and then think about how we would take the permission model we have right now and evolve it to support those, because we tend to think of things generally, as she said: what is the model, and how can we expand the model? One piece of context is that Hypothesis is often used for post-publication peer review rather than pre-publication peer review. I think that offers a lot of opportunities, because you don't have to worry about the fact that the work isn't publicly available yet. So that's another way to think about it. Maybe pre-peer-review comments can also become post-peer-review extensions.

Any other final thoughts or questions? Well, we have actually reached the end of the day, so another big hand for Haritz.