The education group. (I am not this tall.) Wow, we're really excited about this; we had a really good conversation. I don't think our other group is back yet, but that's OK. We talked about what is and isn't working with annotation in education. We all know how powerful annotation is in the classroom. One of the biggest concerns is figuring out FERPA and privacy regulations: how do we know that the annotation service we're using is compliant with the educational institution's privacy and FERPA rules? So there are some questions there for Hypothesis. By the way, I failed to introduce myself: my name is Arthi, and I'm a product manager at Hypothesis. There are obviously questions Hypothesis has to answer about how to make annotation in the classroom FERPA compliant. We also talked about a lot of use cases we want to see in the classroom annotation experience. Most of them were about engaging students more deeply in the reading, so it's not a linear experience where they annotate just to get the homework assignment done, but one where they really engage, in a community-centric way, with the text and with the other students in the classroom. And the last thing we went over was what we want to see for next year. One of them, which I think most of us, maybe all of us in this room, can agree with, is students annotating Trump administration legislation, getting national press for the project, and maybe even getting him to leave his post.
More documentation on building your own instance of Hypothesis or of any other annotation tool; extending the Hypothesis search capabilities; and building out a dashboard so you can see a heat map of where annotations are being made, how connections are being made across the students, who the high-volume annotators are, things like that. So it was a really great discussion. If you want to continue it, please contact Jeremy Dean, who's our program director for education at Hypothesis. Thank you so much. Actually, there weren't too many people in our group who weren't already intimately familiar with the product or working on it, so there were only a few extra use cases that were really problematic or, to use Maryann's phrase, kind of "stupid" in their workflow right now. First of all, and this has come up very recently with the EPUB.js work, is multiple iframe support. As Sebastian said, people send me links where there are iframes with the PDF inside, and that's really problematic, so it's a really interesting use case. And when it comes to EPUB, an EPUB is basically a website with multiple iframes in it as pages. So one little benefit here: once we get the EPUB.js work done, that is going to include multiple iframe support. Everyone will have multiple iframe support out of the box, which is an awesome side benefit. (And what was that timeline again?) Another use case that came up was the ability to reply to anchored annotations with annotations themselves, so that you can contain a conversation, or collect a list of annotations in multiple areas of a document, within one thread instead of having no structure at all. The next one, probably popular with this crowd, is that maybe we should support pulling up annotations by DOI.
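As a rough sketch of what "pulling up annotations by DOI" might look like: the Hypothesis search API takes a `uri` parameter, and one plausible approach is to query under a `doi:` URI so annotations on any copy of an article collate together. The exact URI scheme the service indexes is an assumption here, not a confirmed behavior.

```python
from urllib.parse import urlencode

# Hypothesis's public annotation search endpoint.
HYPOTHESIS_SEARCH = "https://hypothes.is/api/search"

def doi_search_url(doi, limit=50):
    """Build a search URL for annotations anchored to a DOI.

    Assumes (hypothetically) that the service indexes documents under a
    `doi:` URI, so annotations on any copy of the article come back in
    one result set regardless of which URL they were made on.
    """
    query = urlencode({"uri": f"doi:{doi}", "limit": limit})
    return f"{HYPOTHESIS_SEARCH}?{query}"

# A client would GET this URL and page through the JSON results.
print(doi_search_url("10.1000/xyz123"))
```

The point of the sketch is only that a DOI-keyed lookup insulates annotation retrieval from session data and other URL variation.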
So that one came up, and we looked into it a lot. URLs, as we all know, are very susceptible to things like session data or little bits of URL differentiation that break annotations, so DOIs are really important for large use cases. The next thing we talked about was the ability to do multi-selection: there are people curating papers who have to repeat the same step multiple times across a page, which is error-prone and could be better served by selecting multiple things and talking about them in one place. An interesting point that came up was that replying to annotations with annotations could alleviate this too, in the sense that you can annotate all these pieces and collect them into one thread in a structured manner. The next thing was? Do you want to pick the best of the next things? Well, there's only one more. All right. So again, for a lot of you these aren't surprises, nor are they things we're going to jump on immediately; it's not like the product is broken without them. The last thing is that annotation-to-document relationship management could be improved: moving annotations, copying annotations, potentially pointing one annotation at multiple references, and migrating annotations. Say we have one URL with all of our annotations on it, and then we move the document, things like that. Anyway, it was a very useful discussion for me, hearing a lot of really good use cases from people I had not yet met. So that was awesome. Thank you. Hi, I'm Benjamin Young with John Wiley and Sons. We did the Apache Annotator group, which forked, like any good open source project, and we had two groups. These two guys are going to cover what happened in them. Hello, everybody. We spent time learning about the data model for some of the new stuff coming out of the Apache project, and I brought a couple of pet projects I've been dreaming about.
One is the simple ability to export your bookmarks to a URL, because that retains the hierarchical structure that's available there. It seems like a simple hack, and that was confirmed by the group, so we talked about what it might look like to get together and actually do it, and I will share that with everyone; we're going to try to build something that does it. If you're interested, Benjamin would love to hear from you. The second is this notion of silos, and the way the annotation that happens within silos is very difficult to get out. I've been playing around with the idea of sort of hijacking the Like button in Facebook: through a browser extension, that vote would also go to what would ultimately be a curated newspaper, or let's just call it a newspaper. That also seemed to meet with some interest. So we talked about those two as very simple hacks that we would like to, and may, take a whack at. Again, I think Benjamin would be the best point of contact since I'm so new, but that was the part of the discussion I was leading and interested in. Hi, I'm Josh again. We talked about... everyone knows about those standards, right? Those new standards we all celebrated? They came out in March, I think. We don't want those to die, right? Join the Web Annotation Community Group. If you Google "web annotations GitHub," you'll see our repositories. Please participate: comment on our issues, open new issues on GitHub, and there's a mailing list there. Make our mailing list active again. Don't let web annotations die. Okay, so we had a wide-ranging discussion over at the nominally-Wikipedia table, but I did manage to boil it down to three things I could report back: three gaps in the standard annotation model that seemed to merit discussion. One is versioning: the idea that even in the W3C spec, the target is a URL, but the contents of that URL aren't constant.
And neither is a DOI: it references an article, but that article could get retracted or corrected at some future point, so even a DOI isn't actually a reference to the latest version of something. So we really wanted the ability to extend the base reference for an annotation with something more. From library science there's something called FRBR, or WEMI: the idea that there are Works, Expressions, Manifestations, and Items. I hope I'm reporting that right. The idea is that there's more data beyond that simple URL that could be used to disambiguate some of these things. The second report-back was that, for example for typos, there could be a standard annotation format for simple edits to a source, so we can record corrections as annotations and people can apply them easily. If a journal article appeared on paper and the paper journal didn't want to publish my one-character fix to the article, we could get it published as an annotation on it. Or on Wikipedia, where I am happy to update the actual article, maybe it's just easier for you to type the fix and submit it to us, and an editor for Wikipedia will one-click apply it later. In general, all of us could cooperate on some basic format for these edits, so we could share them. And that leads to the next thing, which is the idea of bi-directional links, or search, without requiring centralization. Once a base document is corrected, you want to chase down all the annotations based on it and apply or ignore the correction. For example, if there's a data set for a paper that was missing some points, when that data set is corrected, all the things that refer to it get chased down. Maybe the conclusion that was based on the peak is still valid even after these data points are added, so you can say: okay, yes, I've checked against the correction, and that conclusion still holds, but this other conclusion I drew does not.
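The "standard format for simple edits" idea maps fairly directly onto the Web Annotation Data Model, which already defines an `editing` motivation and a `TextQuoteSelector` for pinning a span of text. A minimal sketch of a typo fix recorded as an annotation (the target URL and helper name here are hypothetical):

```python
import json

def typo_fix_annotation(target_url, exact, prefix, suffix, replacement):
    """A minimal Web Annotation proposing an edit to a source document.

    The spec's `editing` motivation signals a requested change; the
    TextQuoteSelector pins the typo with surrounding context; the body
    carries the suggested replacement. A consumer (say, a wiki editor's
    queue) could apply it with one click.
    """
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "motivation": "editing",
        "target": {
            "source": target_url,
            "selector": {
                "type": "TextQuoteSelector",
                "exact": exact,     # the text to be replaced
                "prefix": prefix,   # disambiguating context before it
                "suffix": suffix,   # and after it
            },
        },
        "body": {
            "type": "TextualBody",
            "purpose": "editing",
            "value": replacement,   # the proposed correction
        },
    }

anno = typo_fix_annotation(
    "https://example.org/article",  # hypothetical target document
    exact="teh", prefix="fix ", suffix=" typo", replacement="the",
)
print(json.dumps(anno, indent=2))
```

Because this is plain Web Annotation JSON, the corrections could be shared and aggregated through any annotation store, which is exactly the cooperation the group was after.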
An AP wire article could appear, in many slightly different forms, in a dozen newspapers. If each of those published an annotation indicating it was a version of this AP article, then when that AP article was later retracted or corrected, it should be possible for a search engine or some tool to chase down all those annotations and either notify, or apply the corrections to, all the different places it appears. So that was it: versioning, some standard for typos or simple edits, and bi-directional links slash search without centralization. Thanks. All right. Hi, I'm TS Waterman. We were talking about search for annotations, and search over annotations, and what that means. It was a very small group, with Graham Knott and Lauren Bianchini in the back corner there. A lot of it was brainstorming and scenario building, because this is something that doesn't exist yet. But some of the dimensions include, just like Scott was talking about: if you have an article that's distributed over the web, how do you aggregate those copies so you can search their annotations as a single unit? If you had a DOI, could you collate all of the annotations for it in order to search over them and sort through them? And even without a unique ID, is there some way to collate all those things and put them together? We had some ideas like the number of annotations on a thing as a measure of its quality, or at least its popularity or presence in the culture doing the annotating. Other quality signals are who's commenting. For instance, if you're trying to follow a reputed scholar, or a movie reviewer, or whoever you've got some interest in, the things they're annotating all of a sudden become objects of interest for you. So that may be a filter you want to search on in order to find interesting things under a topic: things annotated by some member of a select group, which indicates their quality.
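Both ideas, collating annotations made on scattered copies of a work and filtering by who annotated, can be sketched as a small aggregation step. The record shapes and the equivalence map (copy URL to canonical identifier, such as a DOI) are assumptions for illustration, not an existing API:

```python
from collections import defaultdict

def collate(annotations, equivalences):
    """Group annotations on scattered copies of a work under one key.

    `equivalences` maps a copy's URL to a canonical identifier (say, a
    DOI) when one is known; URLs with no known equivalence stand for
    themselves.
    """
    groups = defaultdict(list)
    for anno in annotations:
        key = equivalences.get(anno["uri"], anno["uri"])
        groups[key].append(anno)
    return dict(groups)

def popularity(groups):
    """Annotation count per work: a crude popularity/presence signal."""
    return {key: len(annos) for key, annos in groups.items()}

def by_annotators(groups, follow):
    """Keep only works annotated by someone in a followed set,
    e.g. scholars or reviewers whose attention you trust."""
    return {key: annos for key, annos in groups.items()
            if any(a["user"] in follow for a in annos)}
```

Searching "as a single unit" then just means searching within one group instead of per-URL.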
And we quickly got off onto what an annotation actually is for the purpose of search. Things like book reviews, movie reviews, summaries, and abstracts of scholarly works came up, but also reviews of products. These are all annotations on something that isn't necessarily a web thing with a URI, but is a thing with a unique identity and a location. That gives you the opportunity to aggregate all of those and search over them, including things out in the real world that don't have URIs but are reachable through the Internet of Things: a particular Uber driver, a particular hotel, Amazon products, restaurants, Yelp, all of these, and then eventually ending up at people. You have sites like Rate My Professor: can you aggregate that stuff and search over who's leaving comments, or search on the targets? And that extends into other ridiculous things, like doctors, physicians, service people, dating sites, all of which can be aggregated and searched as annotations onto real objects. Thanks. Quick. All right. We were focused on adoption strategies: looking at the researcher workflow around scholarly publication, and at annotation as something embedded in those publications, as a narrow focus. It was a very wide-ranging discussion, so I'll just mention a few things that came out of it. One is some differences of opinion. There was some discussion about having annotations already on the paper at publication, by authors or the community, as a way to socialize the fact that annotation is there and make it safe for other people to come in and annotate post-publication. But other people differed, including one who said: if there's a single annotation on my paper at publication, then I failed, because that would be a comment that should have been incorporated into the body.
So clearly we have some work to do in identifying the value proposition for annotation pre-publication versus post-publication. One insightful comment was that we don't need to get people to annotate; we need to facilitate what people already want to do. I think that's something we should pay attention to. We need to focus on ways to make annotation relevant to the author in terms of metrics, things that authors care about: getting published, drawing awareness, increasing their h-index and their citability, getting through tenure committees, and so forth. So really focus on the needs of authors, along the lines of facilitating what they need and want to do. We need to figure out how annotation is going to drive more views of scholarly work, because awareness is really the key currency authors care about. A great idea was to look at what the Software Carpentry community has done, and at ways of replicating that around annotation: "annotation carpentry," or the analog of that. Integrate with citation managers, and with a program called Org Mode, O-R-G mode, which I had never heard of before; it's like a LaTeX editor but also a way people gather notes and all sorts of things. I need to learn a lot more about it. And then build annotation into all sorts of places, particularly open access ones; Caltech said they're in and they're going to make it happen. So that's it. Thank you very much. Hi, so we talked about annotation in the context of notebooks. What we mostly focused on was people learning to use various kinds of code, and the ability for them to immediately give feedback to the developers of whatever library that is.
And so one of the use cases that came up: one of the things that makes a notebook really nice is being able to have many different kinds of views on the underlying documentation, docstrings, and whatnot. You'd want the ability to annotate that, but have the target of the annotation be not the place where you are annotating, but a completely separate website, whether that's on GitHub, on Read the Docs, or wherever it happens to be, wherever it matches based on the semantics of whatever the view is. It's probably going to require a bunch of finagling, but it's an interesting use case, and we went pretty deep into that topic.
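A hedged sketch of that idea: feedback typed next to a docstring view in a notebook, but emitted as a Web Annotation whose target is the library's canonical documentation page rather than the notebook. How a symbol resolves to its docs URL is exactly the "finagling" the group mentioned; the function name and the URL below are hypothetical.

```python
def docstring_feedback(symbol, note, docs_url):
    """Package a notebook-side comment as an annotation on external docs.

    `symbol` is the fully qualified name whose docstring was being
    viewed; `docs_url` is the canonical documentation page it maps to
    (resolving that mapping is assumed, not an existing API).
    """
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "type": "Annotation",
        "motivation": "commenting",
        "body": {
            "type": "TextualBody",
            # Keep the symbol with the note so maintainers have context.
            "value": f"[{symbol}] {note}",
        },
        # The target is the docs site, not the notebook where the
        # note was written; that is the whole point of the use case.
        "target": docs_url,
    }

anno = docstring_feedback(
    "somepkg.load",                                   # hypothetical symbol
    "The example here is out of date.",
    "https://somepkg.readthedocs.io/api.html#load",   # hypothetical URL
)
```

Developers could then watch annotations on their docs pages and receive feedback no matter which notebook or viewer it originated in.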