This short presentation takes a shift away from all the fantastic tools currently being built out, with their ambitious aims and great hopes and aspirations, to talk about a failure to note: something we tried at PLOS for about six years. I'll run through this very quickly, so please come catch me afterward if you want more detail on any of these points.

I'm going to borrow Palo's opening gambit, or at least copy him, and offer some numbers to corroborate his claims that science is big and growing. I'm from PLOS, the Public Library of Science, an open access publisher of biological and medical research. All of our content is freely available to all at the moment of publication: free to access, free to reuse, OA. The data shown here is for just one journal, our largest, PLOS ONE, and as you can see, it's skyrocketing. There's a lot out there, and it's growing.

The next frontier for OA, as we continue to publish more and more open access research, is the complementary, sister goal: making all of this information effectively discoverable, navigable, and manageable. This is critical if we are not to groan and collapse under the weight of everything that is out there. And by content I mean not just the article of record we publish, but each of its versions, all of its component pieces, including the underlying data, and all of the conversations happening around the article itself.

We know that researchers engage with our primary content in a number of ways: research articles are viewed, downloaded, and cited. Here's a broader view, beyond usage and citation, of the ways researchers are also bookmarking, commenting, blogging, and sharing this literature.

One of the tools that represented a basic mode of engagement, alongside viewing and downloading, is annotation. We started our annotations program in December 2006 and called it inline comments, so that is the terminology I'll use to reference it. What happened over the six years is that usage was very, very low. The numbers show this in a stark manner: fewer than 6,000 inline comments in total over six years; only about 3,800 articles had any; only about 1,200 users ever made one; and the articles that did have inline comments averaged only 1.53 apiece. General end-of-article comments outnumbered inline comments three to one. Even these numbers are inflated, because our practice for marking up minor corrections was to display them as inline notes, and because many readers highlighted random passages simply in order to write general notes about an article.

Looking back, if you put on your investigator's hat, it doesn't take long to figure out why it didn't work very well. One of the main problems was design. It was very unclear to users whether the notes they created would be kept strictly private or made public. In fact, they were public, but the design never made that apparent.
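As a sketch of the design fix this problem implies (not how PLOS actually implemented anything; all the names here are hypothetical), the annotation model could make visibility an explicit, required choice rather than a silent default:

```typescript
// Hypothetical sketch: an annotation model that makes visibility explicit
// instead of silently defaulting to public. Names are illustrative only.
type Visibility = "private" | "public";

interface InlineComment {
  articleDoi: string;     // article the note is attached to
  selectedText: string;   // passage the reader highlighted
  body: string;           // the reader's note
  visibility: Visibility; // no implicit default: the user must choose
  createdAt: Date;
}

// Forcing the choice at creation time means a user is never surprised
// to find a note they believed was private published with the article.
function createComment(
  articleDoi: string,
  selectedText: string,
  body: string,
  visibility: Visibility
): InlineComment {
  return { articleDoi, selectedText, body, visibility, createdAt: new Date() };
}
```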
We also lacked deep integration with the user profile: once you logged in to create a comment, there were few ways to get back to it, to edit it, or to receive notifications when someone replied to it. And the overall workflow was very difficult to use. For one example, readers were prompted to make an inline note any time a passage of text was highlighted, so if you double-clicked on a word, or clicked and dragged without releasing, the text was highlighted and the prompt appeared. We know that highlighting does not necessarily imply an intent to elaborate on the text.

The second big problem was that the discussion ecosystem was in its most nascent stages. There is a wide array of options, and they proliferate with every year. We have no best practices, so researchers don't know the difference between an inline comment and an end-of-article comment. Perhaps these are false distinctions, but our former journal templates built both into the UI. Privacy controls are lacking across this larger discussion ecosystem, and the conversation is not portable.

More importantly, the failure to note is not a failure to think; we know this. Researchers are having discussions about the articles, and they are having these conversations in many places. Why do it on a single publisher's site? Or more precisely, why do it if you have to return to that same site just to get access to the comment and the rest of the conversation?

So I would say we are uncomfortably poised between the past and the future, and our new journal redesign showcases this: the situation is not ideal, and it's one we hope to move out of going forward. This slide shows how we had to translate the old inline comments into our new journal templates when we sunset the program at the end of December.

I'll try to move even faster now. We'd like to think that annotations proper are critical to the researcher's workflow, and we're also thinking about them more broadly within the larger ecosystem of commenting. Take just two axes: visibility and exposure on one, and depth and mode of engagement on the other. I've done a quick map using these two axes as dimensions, laying out a small sample of the ways researchers are both annotating and commenting on the research literature. The location of each item is not precise, and we could argue to the end of the day about every one, but it gives a quick view of how they may relate to one another.

So, with a view to a larger information environmentalism, two final points. First, it's important to capture the conversation, which is what we're here to do: fundamentally, it needs to be real time, integrated as metadata, and mobile, so it moves with the object. Second, it's important to measure, rank, and summarize the conversation, and we'd like both quantitative and qualitative summaries.
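To make that portability requirement concrete, here is a minimal sketch of what a self-describing annotation that moves with the object could look like. It loosely follows the W3C Web Annotation data model rather than anything PLOS shipped, and the DOI and selector text are made up for illustration:

```typescript
// A minimal sketch of a portable annotation, loosely following the
// W3C Web Annotation data model (context: http://www.w3.org/ns/anno.jsonld).
// Because the target is identified by DOI rather than a page URL, the
// conversation can travel with the object instead of being locked to
// a single publisher site.
const annotation = {
  "@context": "http://www.w3.org/ns/anno.jsonld",
  type: "Annotation",
  created: new Date().toISOString(),
  body: {
    type: "TextualBody",
    value: "This passage seems to contradict the methods section.",
    format: "text/plain",
  },
  target: {
    source: "https://doi.org/10.1371/journal.pone.0000000", // hypothetical DOI
    selector: {
      type: "TextQuoteSelector",
      exact: "the highlighted passage", // anchors the note to specific text
    },
  },
};

console.log(JSON.stringify(annotation, null, 2));
```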
Moving quickly forward: one example of where we have otherwise chosen to direct our efforts in this direction is article-level metrics, because, as I mentioned, researchers are engaging in many forms of commenting, not just at the article but in many other ways. Our article-level metrics program gives us analytics on the scope of these different modes of engagement. These are a few of the sources from which we capture this information, and this is only the beginning of the suite; we are building the program out with more and more sources to capture.

So how do we fully realize the potential of annotations? I want to revise my original title: it's not a failure to note, it's a failure to capture. We know there is an imminent need for this service. Let's make the next generation of annotation tools stick. It's very exciting to be here. Thank you, Dan, and Hypothes.is; I know this workshop will contribute greatly to this effort. Thank you very much. Access is critical. Keep it open.