So, kicking off first, we're going to have David Millman from NYU. He's the assistant dean for Digital Library Technology Services at New York University, and he's working on a project together with Evident Point. He will be speaking, and then Juan Corona will be demoing. Our second speaker is Kathi Fletcher. She's the technical director for OpenStax. She'll be followed by Nathan Lisgo, who's the senior Drupal developer for eLife. And finally, batting cleanup will be Drew Winget. He's a software engineer and project manager for Mirador. He'll be telling us about annotation of images.

Hi, I'm David Millman. The piece of the ecosystem that we want to show you today is annotating books. I'm from NYU Digital Library Technology Services, and we're working in the context of a project called Enhanced Networked Monographs, funded by the Mellon Foundation. What we're doing is trying to take what scholars usually do in discourse around monographs and update it for the internet. So our values are pretty traditional; we're infrastructure providers in the library. We want to provide a stable platform that is citable, that does preservation for scholars now and in the future, and that enables the kind of discourse people have traditionally been able to have with print. We're just trying to bring the old-fashioned ways people work with monographs up to how they could work with monographs today. There are a couple of interesting coincidences that have to do with the way our press works. The NYU Press reports to the library, so they're a customer of ours for infrastructure. They are not a journal publisher, they publish monographs, and they're not a science publisher. So we have a couple of interesting things related to the kinds of materials available from our press: they're humanities monographs, and that's what we're starting with. We're doing a couple of things in addition to annotation.
Because they're humanities monographs, we want people to be able to navigate around them using the kind of language that is appropriate for their field. What we've done is a little semantic indexing work that takes back-of-the-book indexes created by people and combines them into a meta-index. You heard people in the panel yesterday talking about the imprecision even in scientific literature, and in humanities literature the meaning of words is even more difficult to pin down. So this indexing of indexes is one way for people to navigate across the corpus. And then we want to encourage discourse between scholars and in teaching through annotation. So we've been working with Evident Point to do annotation in monographs, and I'm going to turn it over to Juan, who will show you how far along we are. Thank you.

Thank you, David. I'm Juan Corona, and we want to talk about annotating EPUB publications. I work for Evident Point Software; we develop EPUB readers and so on. We want to give readers and publishers of published content a voice on the web, and it all works with EPUB technology. I have this slide here just in case you don't know what an EPUB is, but if you do, it sets the stage for me. EPUB is a format for eBooks, and it's very popular. Books can have layouts: reflowable layout is kind of the killer feature, but layouts can also be more precise, like in PDFs. Unlike PDFs, though, EPUBs work with web technology. An EPUB is effectively a packaged website: an ordered collection of HTML with assets and so on. It's a standard made by the IDPF, the International Digital Publishing Forum, which recently merged with the W3C. They're coming up with evolutionary new standards, and we're really excited about that. So, web annotations for EPUB, why now? Well, I mentioned that we're excited about the W3C merger, and the cool thing is that alongside the publishing group, the W3C also has the Web Annotation Working Group.
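To make the "packaged website" description concrete, here is a minimal sketch of what's inside an EPUB, with a hypothetical helper that isn't part of any project mentioned here. The fixed entry point (META-INF/container.xml pointing at the OPF package file) comes from the EPUB Open Container Format spec; the file names in the example are typical but arbitrary.

```javascript
// An EPUB is a ZIP with a fixed entry point:
//   mimetype                      -> "application/epub+zip"
//   META-INF/container.xml        -> points at the OPF package file
//   OEBPS/content.opf (typically) -> manifest + spine (reading order)
//
// Hypothetical helper: find the OPF path inside container.xml.
function findOpfPath(containerXml) {
  const match = containerXml.match(/<rootfile[^>]*full-path="([^"]+)"/);
  return match ? match[1] : null;
}

const containerXml =
  '<?xml version="1.0"?>' +
  '<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">' +
  '<rootfiles><rootfile full-path="OEBPS/content.opf" ' +
  'media-type="application/oebps-package+xml"/></rootfiles></container>';

console.log(findOpfPath(containerXml)); // "OEBPS/content.opf"
```

Everything after that entry point is ordinary HTML and assets, which is why web annotation tooling is a natural fit.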
So there are two groups, and they're collaborating at the spec level. Why not have the people who build the implementations work together too? That's what we've set out to do. So who is involved? Let me tell you a little bit about us. Like I mentioned earlier, we've been making EPUB and eBook-related software for a long time now, and we're pretty excited to bring our expertise to this community. We've been looking into the advancements that have been made in the open platform at the global community scale, and we want to join that effort too. With the help of NYU and Hypothesis, with this project, we're hoping to take a first step down that path. So yeah, I mentioned NYU and David: we want to make a system where you could annotate eBooks. And like David mentioned, this is going to be part of a larger system, so in order not to steal his thunder, I'm not going to talk too much about that. I'm going to show you what we built with current technology. As for what's available now, there is this project called Readium that lets you provide a full reading experience on the web. Evident Point actually built the first few iterations of the Readium web EPUB reading system, so we've been involved with that project since its inception, and NYU uses Readium to provide these web reading experiences to their users. So it made sense that we got together and worked on something like this. And we chose Hypothesis because it's the most prominent platform; it's well established and has good technology. So what did we need to do? We hit a problem right out of the gate. As some of you have probably seen or noticed, if there's content inside an iframe, say a PDF inside an iframe, it doesn't really work right now with Hypothesis. We saw that same issue with EPUBs being inside an iframe, which is how it works with the current technology.
So we had to make that work as if it were a web app that is not serving static content, but where content is loading in and out. And we needed to do other things too, like figure out how to identify resources, how to identify the publications and their locations, and how to do the precise selector-level targeting of the content that an annotation needs, for example the range of text that's selected, and so on. I don't want to get too technical, but just to give you a brief idea, these are some of the implementation challenges we've been going through. We need to make decisions on the UI and UX; the Hypothesis and W3C standards don't really show us how that could be done with EPUBs, so we need to make EPUBs fit in. As a simple example, when we list something in the sidebar for Hypothesis, are we showing the user all of the annotations in the book? Are we narrowing it down, grouping at the chapter level? What kind of UI and UX changes need to happen there? How do EPUBs fit into the annotation data model? What's the best thing to use as the URI: do we identify the package or the content document, and so on? So let me step out here and show you a quick demo of Hypothesis working with Readium, and here it is. As you can see, we're in a chapter called Accessibility and Usability. Sorry, let me switch the display. Okay, is that better? So we're in a chapter right now, and the screen got bigger so you get to see more; this is how reflowable EPUBs work. We're in one chapter, and we see in the sidebar here that there are a couple of annotations. The sidebar is not really showing the UI perfectly yet, like I mentioned, but you can go ahead and annotate some content. Right now the sidebar is showing all of the annotations in the whole book. But we have something working.
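Going back to the selector-level targeting question for a moment: here is one possible shape for an annotation on an EPUB chapter under the W3C Web Annotation Data Model. The book and chapter URLs are invented; the selector types and the EPUB CFI conformsTo URI come from the Web Annotation and EPUB CFI specs, and the specific CFI value is just an illustration, not output from any real system.

```javascript
// A hypothetical Web Annotation targeting a text range in an EPUB chapter.
const annotation = {
  "@context": "http://www.w3.org/ns/anno.jsonld",
  type: "Annotation",
  bodyValue: "A note on this passage.",
  target: {
    // Identify the publication resource, then refine down to a range.
    source: "https://example.org/books/my-epub/chapter-3.xhtml",
    selector: [
      { // robust to reflow: anchor by quoted text plus context
        type: "TextQuoteSelector",
        exact: "accessibility and usability",
        prefix: "chapter called ",
        suffix: ", and this",
      },
      { // precise: an EPUB Canonical Fragment Identifier
        type: "FragmentSelector",
        conformsTo: "http://www.idpf.org/epub/linking/cfi/epub-cfi.html",
        value: "epubcfi(/6/8!/4/2/1:0)",
      },
    ],
  },
};

console.log(annotation.target.selector.length); // 2
```

Pairing a quote selector with a structural selector is a common pattern, since reflowable layouts make purely positional anchors fragile.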
You can scroll through the annotations. You can't navigate to them yet, but we're working on that. But if you do find one that's on the page, like right now, the focus does change; that's why you saw one of them turn from blue to yellow. And if we flip the page, we can see that everything works like we expect it to, the numbers in the sidebar change, and so on. So this is our work in progress. We're pretty excited about the technical changes that are going to come into Hypothesis, and we're hoping to contribute this all as open source. To move along, I will go back to my presentation here, if I can figure out how to do that. So like I mentioned, this is all open source; we're going to contribute everything. And now I want to show you the other side of the coin: what our idea of adding a layer on top of published content looks like, with our ten years of experience as Evident Point. We have our own platform, and we've been building annotation systems, like I said, but they've been proprietary, just for our customers. We have a way of adding new layers on top of publications. The idea with our annotations is to enrich and add new content on top of already-published content to engage students and teachers; that's why our product is called Active Textbook. That was our first vision, but it's expanded to support other types of use cases: authors, viewers, internal documentation for enterprises, and so on. So we have different types of annotations. We have comments, notes, and highlights, of course, but we also have rectangular regions. We can create an annotation where you can link some media or some widget, and it'll show up in a pop-up.
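As a sketch of how a rectangular-region annotation like the ones just described could be expressed in interoperable terms: the internal field names here (note, rect) are invented for illustration, but the output follows the W3C Web Annotation Data Model, using a FragmentSelector with the Media Fragments xywh syntax for the rectangle.

```javascript
// Hypothetical exporter from an internal annotation record to the
// W3C Web Annotation Data Model. Field names on the input are made up.
function toWebAnnotation(internal, docUrl) {
  return {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    type: "Annotation",
    bodyValue: internal.note,
    target: {
      source: docUrl,
      selector: {
        type: "FragmentSelector",
        conformsTo: "http://www.w3.org/TR/media-frags/",
        value: `xywh=${internal.rect.x},${internal.rect.y},${internal.rect.w},${internal.rect.h}`,
      },
    },
  };
}

const exported = toWebAnnotation(
  { note: "Watch this video", rect: { x: 40, y: 120, w: 200, h: 80 } },
  "https://example.org/books/demo.pdf"
);
console.log(exported.target.selector.value); // "xywh=40,120,200,80"
```

An exporter along these lines is what would let a proprietary annotation store interoperate with tools like Hypothesis.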
Users can collaborate on their own at the book level locally, or they can share their book, make a new version of it, send it off to others, or collaborate in a classroom setting. It supports PDFs, and it uses Readium for EPUB support, so we contribute a lot of what we do back to the Readium project. We can deploy it in the cloud; it's in the browser, for the web, and we have mobile apps that synchronize for offline support. I'm just going to give a quick demo of what I've been talking about; I have to switch again, sorry about that. So here we have a PDF book, and it has pre-added annotations. This one is a demo: you can click on it and it takes you to a link. We can detect links inside the PDFs to do that, or the user can add their own links. You can create inline comments and start a discussion among the readers of the book. And like I mentioned, a rectangular region pops up with a video and other things like that. So now let's talk about our vision for the future. I've kind of already set the stage with all of what we're doing with NYU and Hypothesis, but we want to maybe become part of the coalition, I don't know, with our product as well. The ecosystem changes we could make for Active Textbook are these: we could add in Hypothesis support, not just for book-level annotations but with Hypothesis as a global provider of annotations. We want to align our technology so it becomes interoperable with other tools, and one of the ways to do that is to support the W3C's Web Annotation Data Model, so our APIs could export that and so on. We also want to import and export annotations, for example with Hypothesis. So thanks, thanks for your time. This is the first time I've attended an I Annotate conference; I'm pretty happy to be here, and thank you, everybody.

I'm Kathi Fletcher.
I'm from OpenStax, and I'm going to talk a little bit about what we're doing with annotation right now and our ideas for the future; hopefully we can talk more about that this afternoon. Just briefly, if you don't know what OpenStax is, I'm going to give you two slides, only two marketing slides, on who we are. OpenStax is basically a publisher of open textbooks. These textbooks are published under a Creative Commons Attribution (CC BY) license, the most flexible reuse license. We have 27 textbooks covering the first two years of college that hit a bunch of really basic topics: chemistry, physics, sociology, psychology, economics. We've got a business series coming out in the next two years. So we are a publisher of books. We're supported by a group of foundations that have poured about $20 million into creating this library of textbooks, and so far 1.7 million students have used these books. We try really hard to make sure that's an undercount: it counts faculty members who have come to us, registered, and told us how many students they have. Anybody who just downloads a book from the web and doesn't register, we're not counting. Those students have saved about $160 million. Again, we're trying to be conservative, so we're not assuming these textbooks replace the highest-priced ones; some students pay $300 for a textbook, but we're estimating about $100 per textbook for that savings. That gives our sponsors, the people who put the money in, a six-times return on their investment just from the people we can actually count. About 35% of degree-granting institutions use these textbooks. Our physics book, which is one of our older titles, and by older I mean five years old, is in the top three of textbook adoptions.
And for maintaining that library, we have partnerships with a lot of different organizations that do a variety of things. We have print partners that work with bookstores to print the books, partners that build homework systems based on these books, and a variety of others. Those partners contribute to sustaining this whole ecosystem, so the books can be maintained and new versions and corrections can come out. So that's who we are. What are we currently doing with annotation? We have one small project in our research department to incorporate annotation into a research and homework system called Tutor. This is National Science Foundation funded research looking at how students highlight and how those highlights correlate with how well they do in the course. Are they highlighting good stuff? Are they highlighting the same things that other students in the class are highlighting, and what can we tell from their actual highlighting patterns that will let us gain insight into their understanding of the material? Then, as a second phase of this research, we're looking at whether we can use natural language processing on the highlights that a group of students is making to generate review questions. And can we use cognitive science principles and research to give them those review questions in a spaced way, so that they are retaining knowledge and understanding? So that's our small project, and we are using Hypothesis to integrate with this tool, so that when students are doing their readings, we are capturing that. It's a three-year grant, and over it we'll be doing some of those other things to generate questions from the highlights. Now I'm going to give you a brief tour, hopefully a quick tour, of the landscape that having this Creative Commons Attribution licensed content has created.
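The "spaced" idea in the Tutor research can be illustrated with a toy scheduler. To be clear, this is not the OpenStax team's actual algorithm, just a minimal sketch of expanding review intervals, the core mechanism behind spaced practice.

```javascript
// Illustration only: schedule reviews at expanding intervals
// (1, 2, 4, 8, ... days), the simplest form of spaced practice.
function nextReviewOffsets(reviews, firstIntervalDays = 1) {
  const offsets = [];
  let interval = firstIntervalDays;
  let day = 0;
  for (let i = 0; i < reviews; i++) {
    day += interval;       // next review lands this many days out
    offsets.push(day);
    interval *= 2;         // double the gap each time
  }
  return offsets;
}

console.log(nextReviewOffsets(4)); // [1, 3, 7, 15]
```

Real systems adjust the intervals based on whether the student answers correctly; the fixed doubling here just shows the shape of the schedule.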
And then I'll get to what we want to think about in that landscape with respect to annotation. I'm going to use sociology as a case study and give you a visual of where sociology lives, thinking about the fact that we are creating a distributed network of this content. That actually creates some existential questions for us, and also lots and lots of opportunities. Okay, so this is Sociology, the original, at our site. This is the digital copy of the book, hosted at cnx.org, which is a part of OpenStax; this is what it looks like. We have an adapted version of this that we embedded a quizzing tool inside of. The textbook is adapted, and each section of the book is shared between those two versions. Some smart technology figures out, hey, what section of the book is this, and what questions do we have that go with that section, and pops those in so students can do review and practice. So that's one other version that's still within the OpenStax ecosystem. This is BCcampus, which has a version of the sociology textbook that they adapted for the Canadian market, for Canadian students, so that it wouldn't have a whole bunch of US-specific examples used to illustrate culture and so on. This is what the book looks like there. It's hosted in Pressbooks; if any of you know Pressbooks, it's another way of displaying the books. It's using the same HTML that was produced, but it looks a little bit different, and they actually have to do a lot of work. The math titles are harder to get in than the non-math titles; math is always hard, sorry to quote Barbie. And in this particular example, the caption was reworked and the text is significantly reworked. So thinking about annotating, we would love for annotations to show up even on adapted copies of the books. But you have to realize that some pieces of the content are going to start to change and evolve as you move throughout this ecosystem.
They also have some material that they haven't embedded into the textbook but that is really interesting: case studies that faculty have produced to go with the textbook. So these are BCcampus case studies, and these things actually live all over the place; they live in basically every faculty member's learning management system. Faculty are developing these case studies and they go with the textbook. Lumen Learning is another organization, a for-profit organization, that produces courses, and they reuse OpenStax content. This is the sociology book in Lumen. They also host using Pressbooks as the underlying technology; here's what it looks like inside of Lumen. It's pretty much the same, I don't know whether you remember the earlier example, but they format it a little bit differently and have actually reworked it. And then within their course, they've added quizzes, additional case studies and things like that that teachers use, laboratories, et cetera. Finally, in this distributed ecosystem, we also produce PDFs, and a lot of usage is coming through those PDFs: faculty are downloading a PDF and uploading it into their learning management system, and every student is downloading that PDF. We're causing that spread. I went to the main site for Canvas, which is a very popular learning management system, I think just behind Blackboard, and they have a set of public open courses. I searched for sociology and found a sociology course which is using the OpenStax book. Then I clicked on a link, which I don't know whether you can see, because I remember when I was at the back of the room I couldn't see stuff, but it says OpenStax College Intro to Sociology, Chapter 1. Click on that link, and it actually goes to Saylor, and Saylor is hosting their own copy of this.
So they've hosted that; again, you probably can't see the link, but the link inside that green box is Saylor's copy of that chapter of the book. So that's the landscape of where all these books are, just for one subject, sociology. So, what are we thinking about doing with annotation? This is the part where I'm going to go back to something you can't see, because it doesn't exist yet. These are the questions we have. Can we use annotation to foster meaningful conversation and sharing around these textbooks? Because people are sharing things around these textbooks, but not necessarily in a very connected way. And then we have some existential questions that all of this distributed use of OpenStax content creates. One of those existential questions relates to the number I showed you at the beginning, the 1.7 million students using our content. We only know that if somebody comes to openstax.org, if a faculty member comes and says, yeah, I'm adopting this textbook. We have some incentives, like answer keys, to encourage them to come and tell us, but that's limited. As these textbooks spread in ways that are super useful, we're not going to know that, and then we're not telling our foundation funders what the impact of their social investment is. That's a big deal. We also have a sustainability model built on partners who know about and can advertise what they're doing on top of this content, and we do things on top of that content ourselves. So the question is, can we use annotations as a way to measure the impact of this distributed network of content? And, this may be sneaky, I don't know, can we use it as a way to actually encourage people to come back to OpenStax and see what OpenStax has? So we have two interns this summer who are going to do a series of explorations.
What we want is for them to build some prototypes. We have a bunch of interns, but these are the two who get to do these prototypes, so we can show what this might look like, to see if it's even interesting, if it's compelling, that kind of thing. So there are some things we want to try. One: we have a very traditional errata process for reporting errors and corrections. The PDFs of the books get updated every year or two, and the digital content is updated every couple of months, because it takes a little while for corrections to go back to the subject matter experts and be verified. So we have a very traditional process, but could we use annotations tagged with "errata" throughout all those different copies? Could those actually come back to us? And could corrections get pushed out through annotations, so that somebody knows, hey, there's a new version of this and it's got a lot of improvements? Anyway, that's one experiment. Another experiment: could we use annotation as a way for the community to advertise additional resources that go with the books? The books are all modular; each section of a book has its own URL, its own address. So somebody can say, hey, there's this really great video. Or take the simulations: our physics book uses simulations from the University of Colorado Boulder, the PhET simulations, and they're fantastic. Two of those have been made accessible, and they would like to make more of their simulations accessible, and this is not easy to do, it takes a lot of work: if somebody comes to a sim and they're blind, how do you make that exploration, that laboratory experience, actually work for them? Well, what if that improvement didn't get updated in your copy of the book? Could we use annotation as a way to say, hey, there's this great new resource that's going to make this experience better for these students?
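The errata experiment could lean on Hypothesis's public search API (GET https://hypothes.is/api/search), which can filter annotations by tag and by document URI. The book URLs below are placeholders, and the "errata" tag is the convention proposed above, not an established one.

```javascript
// Build one Hypothesis search query per known copy of a book section,
// looking for annotations tagged "errata". The endpoint and the
// tag/uri/limit parameters are from the Hypothesis API docs; the
// section URLs are placeholders.
function erratumSearchUrls(bookUrls, tag = "errata") {
  return bookUrls.map((uri) => {
    const params = new URLSearchParams({ tag, uri, limit: "200" });
    return `https://hypothes.is/api/search?${params}`;
  });
}

const urls = erratumSearchUrls([
  "https://cnx.org/contents/sociology-section-1-1",
  "https://pressbooks.example.org/sociology/chapter/1-1/",
]);
console.log(urls.length); // 2
```

Because each section of a book has its own URL, a script like this could sweep all the known copies and surface community-reported errata in one place.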
Can we use annotation to attach those case studies that are localized in all kinds of different ways? Those are some other ideas. And then, can we even use annotation as a way to connect the textbook experience to physical locations? If you are near a museum in San Francisco or a museum in Houston, could you have an annotation that shows there's something that goes with the course you're taking right now? Hey, walk into the science museum and check out that exhibit on evolution, because it goes right with the chapter we're in. Those are just ideas. And then a third exploration I want them to work on is assessments. The books come, just like a traditional textbook, with a bunch of assessments, but could we use annotation as a way to attach flashcards, quizzes, things the community has generated, just in time, when a student needs them while reading a particular section? So if you are interested in that and you want to work with these interns, let me know. If you want to advise them, let me know. Anyway, that's it for me.

Hi, I'm Nathan Lisgo. I'm a developer at eLife. We're an open access journal that's online only, funded by HHMI, the Wellcome Trust, and the Max Planck Society. I've been there for three years. Our mission is ambitious, and it goes beyond our own shores: we don't just want to solve problems that meet our needs and the needs of our users, we want to help the ecosystem. So I'm eager to talk with anyone whose goals and aspirations align with ours, so we can work together. Another shout-out to the Annotating All Knowledge coalition; it has been a real driver for our prioritization in incorporating annotation into our platform. We want to ensure that we're investing time in things that are sustainable and interoperable. Our executive director attended I Annotate last year and came away very energized, and the discussions he had there have fed into the work we've been doing with Hypothesis.
This was one of his slides from I Annotate, and I included it because it gives an indicator of the progress we've made since that time. Some of the possibilities that came out of the discussions Mark was involved in with other scholarly publishers are opportunities to have journal clubs and ask-the-author sessions. I think that's one our editor-in-chief, Randy Schekman, is really passionate about: the ability, if the author is willing, to engage with the readers of an article, and we envisage perhaps opportunities where there can be live dialogue on the article. Among other possibilities, it was really interesting to see the presentation by a journal press on surfacing more of the meta-knowledge, the dialogue that happens in the peer review process. If there are opportunities there, can we publish that? Can that help to advance the scientific method, and advance the knowledge around an article? As for the barriers we had to integrating the annotation tool into our website: we needed the ability to authenticate against a third-party account, because as a publisher we've got other ideas and ambitions. Annotation is one of them, but we also want to build author profiles and help motivate and draw attention to good behaviors around science. Annotation is one of the things that can help with that, but we also wanted contributors to that dialogue to be willing to not be anonymous if they want to have that dialogue on our channel. We're not discouraging people from commenting on our articles publicly, anonymously or under pseudonyms, but we wanted some editorial control to ensure there's a high quality of dialogue on the eLife annotation channel. To that end, we need to be able to moderate.
We also put some effort in through our product team to engage with our users, even before kicking off the work with Hypothesis, to ensure that the annotation tool could work for them. We've been able to feed back some UX improvements, and that's led to us making some mild customizations to improve the experience. I am going to do a demo now. Hopefully the Wi-Fi holds; it's been holding up really well over the conference, and I just want to congratulate the people responsible, because that's a big stress for attendees. I'm probably jinxing it right now. Everything I'm about to demo, I didn't rely on side knowledge or quiet conversations to get working: all the documentation is already available on the Hypothesis website, in the Read the Docs. So I'm going to kick off the demo; just going to change my display. For those familiar with the eLife website, this isn't our current site; this is the next version that I've been actively working on with the other members of the development team, and it should be live in the next month or so. You can see we've got the Hypothesis client integrated into our site. But rather than having the Public group as the default, we want to lead people to the editorially controlled space, the eLife publisher group. That's a new feature: we can configure the client to direct people there. And then we encourage people to log in. This is directing us not to the Hypothesis login but to the login for our website. So I just log in, and the user is directed back to the article page. I'm now logged in, and I can highlight and annotate; that's not new functionality, obviously. That's published to our group, and it's public. That's the moderation level that we want: we will moderate after the fact.
But we understand that as other people integrate this functionality into their websites, perhaps they'll want a different moderation model; this one is going to work for us. So I've left a comment. I can flag my own or other people's comments, and that will alert a moderator on our side by email. I'll just log out and log in as a moderator to show you that experience, and then a couple of other UX improvements we've been able to make, and then there's not much else to show you. So the moderation happens in the same client; I don't need to log into Hypothesis to do it. I just log into my system, where one user has been designated as a moderator, and if a piece of content has been flagged, I can hide it. The other features I want to draw your attention to may be hard to see here, but the colors and fonts have actually been selected by us so that they match the branding of the website. And the comment count here, that's a new feature they've added: you can add a bit of markup on your site, and then the JavaScript can come along and bump that number up when there's a new comment. That's really it for the demo. I just wanted to say that I'm hearing a lot of interesting ideas, even when they're not directly in my field; my brain has been going everywhere, thinking about the possibilities, the links to our own data, and the models that already exist within publishing that would allow us to harness the annotation model in new ways. If you're seeing things that we can collaborate with you on, we're an open access journal and we're open in many ways. We like to share a lot of the code we write; all of the code for our website will be publicly available as well. So basically, if there's anything we're doing that you can use, go for it.
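For anyone who wants to wire up similar customizations, they map onto the Hypothesis client's embedder configuration: the host page defines window.hypothesisConfig before loading the embed script. The authority and colors below are placeholders; the keys themselves (openSidebar, branding, services, onLoginRequest) are documented in the client's configuration docs that Nathan mentions.

```javascript
// Sketch of a publisher-side Hypothesis client configuration.
// In a real page this function would be assigned to window.hypothesisConfig.
function hypothesisConfig() {
  return {
    openSidebar: false,
    branding: {
      accentColor: "#0288D1",       // placeholder: match the site's palette
      appBackgroundColor: "#FFFFFF",
    },
    services: [{
      apiUrl: "https://hypothes.is/api/",
      authority: "publisher.example.org", // placeholder third-party authority
      grantToken: null,                   // filled in for logged-in users
      onLoginRequest: function () {
        // Send the reader to the publisher's own login, not Hypothesis's.
        window.location.assign("/log-in");
      },
    }],
  };
}
// The comment-count badge is separate: an element carrying the
// data-hypothesis-annotation-count attribute gets its text replaced
// with the number of annotations on the page.
```

The grant token is what ties the publisher's account system to Hypothesis; the publisher's backend mints it for each logged-in user.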
And also, if you're seeing an interesting use case: we want all of our content to be as minable and as usable as possible, and it's published under a very permissive license. So if we're not doing things in a way that makes it easy for you to mine our data and get the most from it, then please talk to us. That's it.

Hi, I'm Drew Winget. I'm from a department of Stanford Libraries called DLSS, Digital Library Systems and Services. We're a group of a couple dozen engineers who write open source software for libraries. One problem we've been dealing with for a few years, and been working on around annotation, is annotating images. I am a maintainer and early developer on a project called Mirador, a piece of open source software that allows collections of deep-zoomable images to be brought in interoperably from multiple institutions, compared, and annotated. I'm going to talk a little bit about the background of this problem, what Stanford is doing to address it, and how it connects with publication and, hopefully, the future of annotation in research. The problem begins with the fact that images are really important for many types of research, and libraries have a lot of extremely high resolution image data that can't easily be given to researchers to use in their research. This sets up an asymmetry where some scholars at well-resourced institutions are able to go and deal directly with certain books, while whole fields of researchers are not able to have access to the same materials. You can see, for instance, that if you're interested in the materiality of a work like this, it's useful to have really high resolution imagery. But it's really, really big; too big to be sent over the web. So people began creating tiling viewers that could view these huge works in one seamless interface without having to download all the pixels.
You only download as many pixels as you actually need at the time you're using the work, as you scroll around. But the community of library developers kept re-implementing the same software over and over again, so we needed a standard for expressing this. And Rob Sanderson, who's here today, was basically the mastermind behind it. The standard that was created is called IIIF. There are a couple of APIs, but the one that lets us get these images is called the Image API. And the answer to unifying this kind of interface with other works ended up being annotation: we needed a way to represent these things as combinations of materials from all over the web with particular relationships. To get more into that side of things, I want to show some things that have recently been released in the Mirador project around annotation, and show specifically how they relate to images. In IIIF there's a notion of a canvas, a blank space onto which images can be annotated. This allows multiple images to be shown in a spatial relationship. In this example from the Bibliothèque nationale de France, we have a manuscript which had its illustrations cut out of it; they were sold on the streets of the cities of Europe for hundreds of years, and then a university finally found this piece. So someone can say something like, "this is missing, sad face," and that annotation can go off to a server somewhere, and if they're in that research group, they can know about it. But through the magic of annotation, another institution that has digitized that resource and found it separately can annotate it directly onto that canvas, and it is still zoomable in the same space. So this manuscript has been reconstructed digitally. And the same thing has been done to bring images from different time periods, images taken in the 90s, images taken in the early 19th century, multispectral imaging, into a single interface like that.
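The two pieces just described can be sketched concretely: a IIIF Image API request names a region, size, rotation, quality, and format in its URL path, and a IIIF Presentation 2.x "painting" annotation attaches an image (such as a recovered cut-out leaf) to a region of a canvas. The servers, identifiers, and coordinates below are hypothetical; the URL grammar and JSON shape follow the published specs.

```python
def image_request(base, identifier, region="full", size="512,",
                  rotation="0", quality="default", fmt="jpg"):
    """Compose a IIIF Image API URL: {base}/{id}/{region}/{size}/{rotation}/{quality}.{format}."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

def paint_onto_canvas(canvas_uri, image_uri, xywh=None):
    """A IIIF Presentation 2.x painting annotation: the image becomes part of
    the canvas, optionally placed on a region of it via a media fragment.
    This is how a leaf digitized elsewhere can be placed back into a
    digitally reconstructed manuscript."""
    target = canvas_uri + (f"#xywh={xywh}" if xywh else "")
    return {
        "@type": "oa:Annotation",
        "motivation": "sc:painting",
        "resource": {"@id": image_uri, "@type": "dctypes:Image"},
        "on": target,
    }

# Hypothetical example: fetch a 512px-wide rendering of one leaf, then
# paint it onto a region of the host manuscript's canvas.
leaf = image_request("https://iiif.example.org/iiif", "ms-123-leaf-7")
annotation = paint_onto_canvas(
    "https://example.org/iiif/ms-123/canvas/14", leaf, "100,100,800,600")
print(leaf)
print(annotation["on"])
```

Because the placement is just an annotation, a second institution can publish its own painting annotation against the same canvas URI, which is the reconstruction mechanism described above.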
And we have a really good example here with a book that has a particular page, which I'll just bring up, with many, many layers of multispectral imaging attached to it. You can see that in this case, this ink here has a certain chemistry that makes it invisible when exposed to infrared light. And that tells us something about where that ink was sourced, the history of the manuscript, even who wrote this and when it was written, and, if inaccuracies were introduced, who might have introduced them and when. To cover the publication part of this, I want to talk about how these annotations get brought back in, because I feel like this is the part that is missing from publication. Going back to this old diagram from the W3C, it's this reintegration of information that is really hard to achieve interoperably. One way that IIIF has addressed this is by representing the objects themselves as annotations. As a result, we can open something like this manuscript, and these aren't user annotations; rather, they're published directly with the object by the library as a transcription. That looks something like this as linked data; we can click that, and it's part of the published object. So I'm really excited by what I heard from the other panelists about this reintegration process and tightening this loop, and I hope that it will continue. At Stanford, we're hosting a version of Mirador for our scholars to use, and annotation is a major part of that. It's all in Open Annotation and Web Annotation format, so it will be interoperable with many of the other tools here, including Hypothesis; I believe that's a very recent addition to Hypothesis.
As a broader vision, I just have to echo what some of the other panelists have been saying: I would like to see this relationship between publishing and annotation dissolved a little, and the scope of the definition of publication expanded, so that smaller things can be published and scholars can take credit for them. And I think the work being done here really, really matters. Because if you think of the effect that a particular insight or a particular new piece of information, like a simple novel phenomenon seen under a microscope, has on the whole cascade of the tree of knowledge, then holding on to that small insight for three months to publish a paper, multiplied by thousands of researchers doing the same, means that millions of people will actually die sooner. It really is important to close that loop, so that whenever someone has an insight, anyone concerned with the topic that insight pertains to is notified, according to this full vision. So I look forward to talking about that with you all. Thanks so much to all the panelists. I think we have a little bit of time before lunch to take questions. Now I'll hand it back to Kristoff, the human mic stand for today. Welcome, Kristoff. Certainly there must be questions. Thank you to all the panelists. I have a question about the idea of the journal club that you were referencing in your presentation: the idea that, in a way, a variety of annotations can actually accelerate discussion of a given research product, and that we can now do this in the open, leveraging open annotation. Projects and platforms like Academia.edu have been bragging for years about how they provide superior journal club experiences. And we know what's happening right now with Academia.edu, so I'm not going to go over that.
But I think there's something interesting to be learned from the fact that these closed, walled-garden approaches to annotation and discussion have an advantage in social density, just social network density, that I've not yet seen in open infrastructures like the ones supported by open annotations. They also have the advantage of consistent UX, which is one of their selling points to people who join them. So I'm curious, from your perspective, given that we all believe these journal clubs, these discussions, should be happening in the open because they're part of scientific discourse: how can we build systems that provide a superior UX and a superior social density to achieve that goal? I'll just remind you all that I'm a developer at eLife, but I do have a perspective on that. The vision for integrating annotation into our website is not realized yet; we don't know what it will be. We want to facilitate. We always engage with our content providers, our authors, and internally we have a large reach, a lot of researchers. So we've got some interesting ideas that we know some of our authors will be super excited about, and others less so. I think once there are a few success stories, and we see it happen and work, the hope is that it'll snowball. I don't know if that answers your question; it's really vague, and I'm not committing to anything. But we really want to be agile about this. We've deliberately not implemented features that we're excited about and know will be cool, because we want the use of it, and the buzz around it, to determine what the next features and activities will be. I just have something really quick to say about that, which is that keeping the representation of annotations decoupled from the user interface will really address part of that problem.
I just came from a conference about Mirador, where a lot of researchers have extremely idiosyncratic needs, and many of them are capable of building their own interfaces, or of working with people who can build an interface for their need. So that decoupling, I think, is an important thing to make available as well. I'm not sure who this question is for. It's been a very interesting panel, because you have some folks who come from the more traditional reader slash book slash journal platform space and are starting to move into workflows as well, and all of those have an accompanying annotation need. And I'm wondering, whether you're a traditional e-reader starting to add more workflow features or a workflow solution from the start, what are the differences as you think about users and their annotation needs? What are the key things you're finding you need to change in how you think about annotation and what users need as they move out of the platform into workflow? That probably wasn't the best-worded question, sorry. Well, I want to say something. We built a product, like I was saying, to create workflows for individual users, students or teachers, and we're just exploring how that could break out of these gated areas and into the open annotation space. So I don't know how much I can add, but it's good to hear that there's a wider community out there around annotations, and we should start looking into that. And I just wanted to add that. Anyone else want to say anything? Kathy? Just for OpenStax, because you're doing more now than just creating a textbook. You're doing things like, let's see how the students are engaging with that textbook, let's see what they're annotating, so that we can build questions and frameworks and workflows around that and feed it back in.
So as you start to break out of thinking "I'm a content creator" and into "I'm a workflow part of this ecosystem," with annotations as part of that, how does that change things? What are the key things you're focusing on that you didn't focus on two years ago? Yeah, well, I would say, from the point of view of workflow, that really is the key to everything. When OpenStax started out, it was very focused on the workflow of a faculty member adopting a textbook and getting it to their students, and of a student either reading it or ignoring it. That workflow was the whole workflow we were thinking about. As we started to build more learning software around that, which also supports our research team and some of their investigations, we were still asking: what is that workflow? How does it fit into a classroom? One of the most interesting things we've seen came from trying to apply some research that has a couple hundred years of support behind it, say, spacing your practice out over time. We wanted to introduce this one tiny workflow change into what a classroom does, while keeping the classroom workflow traditional, because you're introducing change at a pace people can handle. And it wasn't easy to do. I mean, the first comments we got were students saying, this looks broken, it gave me a question from chapter three when I'm on chapter six. And teachers saying, I don't have time for my students to go back to chapter three, ever. We're like, really, it's the same exact amount of practice, and it's way more effective if you do it in bits instead of all the day before the test, right? As far as retention and knowledge go. So those are some ways that workflow is really important. And all those explorations I talked about with annotation have a huge amount of workflow embedded in them, and I don't know the answer to that.
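The spacing change described above, same total practice, redistributed over chapters, can be read as a simple sampling policy over a question pool. This is a toy sketch, not OpenStax's implementation; all names, the review share, and the pool structure are hypothetical.

```python
import random

def spaced_assignment(pool, current_chapter, n=10, review_share=0.3, seed=42):
    """Draw most questions from the current chapter, plus a spaced-review
    share from earlier chapters. The student answers the same number of
    questions; a fraction of them revisit older material."""
    rng = random.Random(seed)
    n_review = round(n * review_share) if current_chapter > 1 else 0
    earlier = [q for ch in range(1, current_chapter) for q in pool[ch]]
    picks = rng.sample(pool[current_chapter], n - n_review)
    picks += rng.sample(earlier, n_review)
    rng.shuffle(picks)  # interleave so review items aren't grouped at the end
    return picks

# Hypothetical pool: ten questions per chapter, student currently on chapter 3.
pool = {ch: [f"q{ch}.{i}" for i in range(10)] for ch in (1, 2, 3)}
assignment = spaced_assignment(pool, current_chapter=3)
print(assignment)
```

The "it gave me a question from chapter three when I'm on chapter six" complaint is exactly this policy working as intended: the review draws reach back to earlier chapters by design.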
I know it'll be key to figure out the workflow, because somebody cannot look at annotations for errata, added resources, and pop-ups about a nearby museum all at once in the same way, or it's just never gonna work. So yeah, we think a lot about workflow, and it's hard to get right. Sorry if I missed this, but this is a question mostly for NYU and eLife. These are really cool integrations, very exciting. How do we know, or how are you planning, to mobilize your communities to make use of them? Do you know that there are already people ready to annotate, or are there gonna be events? A "you've built it, will they come?" kind of question. Yeah, from talking with the eLife team, there are a couple of features I mentioned in the presentation, one of them being Ask the Author, that we're really excited about. I think with the current functionality that dialogue will have to happen outside a concentrated period, but we do envisage a time when we'll be able to invite people to a live session. We're pretty excited about what some other journals are doing: they get articles published and then host a discussion on Reddit about the article, inviting the author to engage with the readership. We just want that to happen in place, in a way that lets us give a strong status to the comments being added. So that's just an idea, but it offers a lot of different opportunities. Our executive director, Mark Patterson, one of the things he was really excited about, which I don't know can be met by the current functionality, so we might need to do more work on it, is the idea of private study groups committing a period of time to studying a piece of work.
And then, when they were ready, or if they felt it was appropriate, they could publish that, on a separate channel outside eLife as well, but we'd be willing to publish it if it was on one of our articles. I would add that... More? All right. As part of our grant planning for this project, we included a marketing budget that goes directly to our university press, and they know what to do with a marketing budget; it's what they usually do. So these collections will roll into their normal process for making contact with faculty and getting adoptions for this set of materials. The other thing we've done is pull a set of titles from the press that work well together. That was also intentional on the part of the press: they think they can manage adoptions better because this corpus interacts with itself. Just a couple of clarifying questions from my side. I know that timelines are always a little bit fluid, and I hate to ask people to commit to one, but generally, in what timeframe might we expect the Readium integration to be done? Well... Is everyone that excited about it? Yeah. Well, we're working with you and with NYU, and the timeline is kind of up in the air. We're hoping we can finish our integration and get everything contributed, but I don't really know; maybe the timeline is within weeks, not months, and not years, of course. That's good. We started exploring ideas in January, but the project really took off in March and April. We've done a lot of the work: we've gotten ourselves integrated into, and used to, the Hypothesis code base, and we're already pretty familiar with Readium, so we just need to get everything working together and contributed back. Then things will start to fall into place, and you'll start seeing the foundation-level support for Hypothesis on the EPUB side working.
And it could be used to target not just Readium in the end; EPUB.js could use this as well. So things are just gonna fall into place organically, I hope. Well, thanks everyone. Was this fun? Thanks everyone for a really interesting and diverse selection of use cases. In terms of the question of how you get authors to use a tool, again I'll put in a plug for an unconference session on that this afternoon, and remind you that if you have other unconference ideas, certainly during the lunch break and during the education panel this afternoon, don't hesitate to add them to the whiteboards in the back of the room, and Nathan will. Yeah, don't stand up and walk away yet, though. Thank you. [Applause]