For people who don't know me, I'm Tom Beyer. I'm the Director of Platform Services here at PubFactory. Hi, I'm Heather Staines, the Director of Partnerships for Hypothesis. This is the agenda for the webinar. We've done our introductions, and we'll start with a little bit about what PubFactory is and why we're partnering with Hypothesis. We're a publishing platform hosting a lot of journals and books, mostly in scholarly publishing, and we have an ongoing effort to invest in things that increase user engagement and enhance content presentation. We think Hypothesis is a really exciting tool that goes a long way toward helping with those efforts, and, as we hope to demonstrate, it's easy to implement, at least at the initial level. Just a little context on Hypothesis: they're an open-source, nonprofit organization with an industry-wide annotation infrastructure that we hope you'll find really cool. As a platform, we're always looking for these kinds of third-party tools because we want to give our publishers as many options as possible.

So, a little bit about Hypothesis, and then I'll do a quick demo for you from the researcher's perspective. As Tom mentioned, we are a mission-driven nonprofit. We are open source and strong believers in standards. We started working with publishers a few years ago; until now, most of our annotations have come from the researcher and education space. You can see some of our funders along the bottom of the screen. One thing to note: about a year ago, in February 2017, the W3C, the standards body for the web, formally published annotation as a web standard. What does that mean? It means that in future versions of browsers, just as you tell your browser what your default search engine is, you should be able to tell your browser which annotation client you're using. And if multiple annotation clients use the same standard, they should be able to work together: my annotation made with one client and yours made with another should be able to interact with each other.

I need to update this slide. Just yesterday we passed 2.5 million annotations. About a quarter of those are completely public, so if you get a free Hypothesis account, you can surface these public annotations that have been made across the web. A lot of annotations are made in private collaboration groups, and as the publisher functionality launches we're keen to start tracking that as a separate user segment as well. Here's a little screenshot of how Hypothesis sessions play out around the world. Not surprisingly, a lot of the sessions take place in English-speaking countries; you can see them on the left: US, UK, India, Australia, Canada. But if you look through the long tail, we're really excited to see users across the globe, from South America to the Middle East to Asia, and we're excited to see how this will grow as annotation proliferates.

A little bit about how Hypothesis works before I jump into the demo. We think of annotations in layers. Hypothesis annotations are created to anchor onto the publisher content itself, so the conversation doesn't have to be taken elsewhere, for example to Twitter or a blog post. And you can have multiple layers of conversation happening across the same document, depending on what you've actually come to the page to do.
So there may be one conversation taking place in a public annotation layer and a private conversation taking place for a class. Publishers are also using annotation to create additional content, to expose commentary by experts, for example, or notes from authors. I have some use case slides that will go a little more into that. So let me jump out of this and into my demo.

You should see in front of you some content that is hosted on the PubFactory platform, in this case from Edward Elgar, on Elgar Online. This is an open access article. At the moment, this view is using the Chrome extension in the browser, just to show you how a researcher can work on content today. As I mentioned before, you can get a free account and start to try this out for yourself. This is an article I was looking at earlier, the Declaration on Human Rights and Climate Change. When Hypothesis is not in use, it simply hangs out here on the right side of the page. Let me just close that back up for a moment. As a researcher, as I'm going along and looking at things, I find something that I want to take a note of. I simply highlight the text I want to annotate, and Hypothesis asks me whether I want to highlight or make an annotation. In this case I've selected annotation, and I can just start: "This is interesting." I can tag it "climate change," maybe "human rights." Then Hypothesis asks me whether I want to keep it private, just for me, or post it to the public for anyone to see. I'll post it to the public. Now I have an annotation that's stuck to this particular piece of text on the Edward Elgar page. I can share this annotation through various types of social media, or I can email someone the link. The person I share the link with doesn't have to have a Hypothesis account; they don't even have to know it exists. If they can get to the content, the link will pop open the client and scroll them down to the annotation. So it's a great way to share information with the people you're working with.

In addition to choosing between a public-facing annotation and one that's completely private just for me, I can create a collaboration group. Just select the little arrow next to Public. I have a lot of groups, and you can select the tab to make a new group. Hypothesis asks what you want to call the group, so let's call it Human Rights Group. I can put a description in there if I like, and it's just that simple: the group is created. This is my group dashboard; every group has one. You can see when the group was created and who the individual members are, and you can invite new members using this clipboard link. I'll just note that this is how invitations work in a private group; some of the things we're building for publishers are a bit more formal, with email invitations and dashboards to manage members and the like.

If I go back to my original article, I can continue reading along. I come to something else I want to make a note of, so I'll go ahead and select that text. Instead of making my annotation in the public layer, though, let's see here. I need to refresh to get my group to come back, just a second. When you have a lot of groups, they can start to stack up, so let's wait until everything opens up again, and I'll go ahead and find my text.
This opens up the passage I wanted to annotate, and let's hope my Human Rights Group has appeared. There it is; sorry about that. So now I can make an annotation that's aimed at members of my group: "Learn more about this." Again, I can add tags: human rights, climate change. And I can post it. Within a group, I can still make notes that are private just for me, but in this case I'm sharing it with members of my group. Someone who comes to the page who's not a member of the group won't even know the group exists; people only know about it if they're part of it. But group members can see this annotation, and they can actually come in and respond to me: "Thanks for sharing," if I can type.

In addition to supporting plain text, we support full Markdown. Let's say this was an article about human rights and climate change that talked about carbon footprints, for example, and I needed to add a mathematical equation. I add the math and I can post it, and now I've got a mathematical equation as part of my annotation. Similarly, we support images and rich media within the annotation pane. Let's say I've been looking at an interesting video about climate change and human rights and I'd like to drop that in for my group to see. I can go ahead and include it, and you can play it right in the browser. So this is an example of a detailed, threaded conversation happening right on top of one sentence of the publisher's content.

You can also connect content across the web. I was looking a little earlier at the Declaration on Human Rights and Climate Change as published by the UN, and I can actually take my group with me. Let's say there's something here in the document itself that I want to take note of: I can put this in my Human Rights Group as well, say "Check this out," and add it to my group, so other people in the group will be able to see it. I can also grab the link for that annotation from the clipboard, go back to the original annotation I made on our first article, and add some information there: "Also check out this article," and behind that I drop in the link. Now I've got two articles that are linked across the web.

I can also annotate certain types of data, as long as it can be viewed in the browser. This is actually a CSV file about earthquakes, not precisely about climate change, so forgive me there. I can select a cell in this CSV file, pose a question to my group, and put that in my Human Rights Group, and then someone else in the group can come along and answer me. So for anything where you need to ask each other detailed questions on top of content, it's a great way to work.

Now, if I go back to my group dashboard for the Human Rights Group and refresh it, you'll see that all of the annotations I've been making across the web automatically appear here. I can get back to any of my articles in context by clicking on this arrow, "view annotation in context." I can see what other members of the group, if there were any, are working on, and the tags I've been creating are great for filtering. If I only want to see the articles I've tagged "UN," I can add that to my filter at the top. So as a research tool, it's incredibly powerful. If I remove the different filters from the search, what I'll have access to is all of the publicly visible annotations that Hypothesis users have made across the web.
It's always really interesting to come in here. You see a lot of students annotating, and if you find an annotation that's interesting, you can explore other things that annotator has put together. I come in here and find annotations in Chinese, Japanese, Arabic, lots of cool things. These publicly facing annotations are fed into the Crossref Event Data project. Also, every user of Hypothesis gets what we call a profile page. If I click on this, I have my own page with access to all the annotations I've made anywhere on the web; you can see the annotations I added as part of the demo here. In November, I moderated a session at Charleston on artificial intelligence, so I can simply select the articles I've tagged "artificial intelligence," it'll add to my filter, and I can see all of the notes I made to myself and share them with my panelists. So it's pretty nifty from a use case perspective.

Since Heather showed all of the cool things you can do with Hypothesis, I thought I would just show you how easy it is to actually add it to the PubFactory platform. What we're looking at here is just a regular article, also from Edward Elgar, one of our publishers. As you can see, there isn't the little annotation widget that Heather was using for Hypothesis. When she was showing it, she had basically used a Chrome extension, but we can add it to the platform so that it always comes up for your content and gives you access to all the functionality she was showing. What we're seeing here is our page layout editor in PubFactory, and if I want to add Hypothesis as a feature of the layout, it turns out there's a simple little bit of JavaScript that I need to put into the template. Heather had given me the key line of code that I needed, and as you can see, it's super simple. So I created a little static block to add to the page. We just save that and come back to the article layout page, and we drop that static block onto the layout — grab the Hypothesis one. Now when we come back to this page, if we just hit play, we should see that the Hypothesis tool appears; here we see it up in the corner. And now if we come down and annotate some content, we immediately get the full Hypothesis interface. I'm already logged in, but again, we have access to all of the functionality Heather just showed off. So you can see it's just super easy to add to the platform. Having dropped it into the template, this will now appear on all article pages on the platform. So at this top level, it's really easy to add. We are of course interested in deeper integrations, and Heather's going to talk a little more about that and the kinds of things that are possible with deeper integration, so now I'm going to hand it back to Heather.
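For readers who want to reproduce the simple integration Tom just demonstrated, the "key line of code" is essentially the standard Hypothesis embed script. The markup below is a sketch based on the public Hypothesis documentation, not the literal static block used in the demo, and the optional openSidebar setting is shown purely as an illustration of where client configuration would live; check the current docs before deploying.

```html
<!-- Sketch of a static block for the page template (not the exact block
     from the demo). The embed URL is the one documented by Hypothesis. -->
<script src="https://hypothes.is/embed.js" async></script>

<!-- Optional client configuration; openSidebar is a documented setting,
     shown here only as an illustration. -->
<script>
  window.hypothesisConfig = function () {
    return {
      openSidebar: false // keep the sidebar collapsed until the reader opens it
    };
  };
</script>
```

Dropping a block like that into the template is all the top-level integration requires; accounts, groups, and the sidebar itself are handled by the Hypothesis service.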
Great, thanks so much, Tom. It's really cool to see that; I don't always get to see it from the developer side. I just have a couple more slides, and then I'm going to jump into use cases. The annotation dashboard functionality I showed you can be set at an individual user level, at a journal title level, for a collection of content on the same topic, or publisher- or domain-wide. It's very flexible. We also have a very robust API that's available now. There's a lot you can do with the API; it's kept pretty simple here, but some of the interesting things are that you can use it for text and data mining, or you can repurpose annotations on your website, for example delivering interesting annotations elsewhere to promote your content. There's full documentation available for the API. [A small sketch of one such API call follows at the end of this passage.]

In addition to integrating the Hypothesis client, which is the simplest integration and, because we're open source, free, we have what we call customized Hypothesis, where we work with you, and if PubFactory is your host, work with them, to enable it for you. We can talk more, if you're interested, about how pricing works. We can connect to existing publisher accounts, there is certain branding and moderation available, we can customize to fit your UI, and we work with you on a rollout program. Here's an example of some functionality for publisher groups. Understandably, publishers who are going to have annotations and information living on top of their content pages want to make sure those stay within their control. I showed the multiple-layer scenario before: publishers can have, for example, one layer that is a general discussion, another layer restricted just to authors, and maybe another layer that shows review summaries. It's completely flexible; the publisher decides who can create annotations and who can read them. If there's an issue with one of the annotations, a reader can flag it, it goes to the moderator for further review, and if they need to, they can hide it or take action against the annotator. We also support you along the way; one of my colleagues, Nate Angell from marketing, is on with us here today. We always want to talk with publishers from the outset to find out what they hope to achieve by adding annotation to the site, and that could be a number of things. If you're looking for training, even if you implement the free Hypothesis version, we're still happy to offer training and promotion on your behalf, and we'd love to work with you both at conferences and on putting together case studies.

So, Tom and Michelle asked me to talk in a little more detail about how Hypothesis is being used. The thing that probably pops into people's heads most readily when they think of annotation is some sort of discussion layer that happens after content is published, and we definitely see a lot of that happening: interactions with authors, interactions with collaborators researching their own articles, and the like. Having these conversations right on top of the content seems to be very effective at keeping users on site and increasing stickiness, as well as driving meaningful interaction on top of your content. We're also doing a lot of work in the preprint space; preprints are really designed for collaboration, and of course the results of that collaboration may be published, so there's interest in having information flow through into manuscript submission systems and ultimately perhaps being connected with the final published version of works. Here's an example where an author has annotated her own paper to provide additional updates and information. Marketing departments can also provide updates and connections to media mentions and the like. And here's just another screenshot, an example of publisher and institutional layers.
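A note on the API mentioned above: as a rough illustration, the sketch below queries the public Hypothesis search API for annotations on a single article. The endpoint and the uri, tag, and limit parameters follow the public API documentation, but the article URL is hypothetical and the exact fields should be verified against the current docs.

```js
// Sketch: fetch public annotations made on one article via the Hypothesis
// search API, then print who wrote each note and what it says.
const params = new URLSearchParams({
  uri: "https://www.elgaronline.com/example-article", // hypothetical article URL
  tag: "climate change",
  limit: "20",
});

fetch(`https://api.hypothes.is/api/search?${params}`)
  .then((response) => response.json())
  .then((data) => {
    // The search API returns matching annotations in a "rows" array.
    for (const row of data.rows) {
      console.log(row.user, "-", row.text);
    }
  })
  .catch((err) => console.error("Search failed:", err));
```

The same kind of query is what a text-and-data-mining workflow or a "featured annotations" widget on a publisher site would build on.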
The way we've created Hypothesis allows the client, which is the browser piece of the tool, to be separated from the annotation server and to talk to multiple servers if that is what the user wants. This is just a little example of an instance where annotations are happening in a publisher space, for example eLife, and also happening in a number of public and private groups. Let's say I'm a researcher at a pharma company working on R&D and I need my annotations to be absolutely secure behind my company firewall: depending on what I've come to the page to do, I can direct my annotation to the appropriate layer.

As mentioned, there's a lot of interest in annotation around manuscript submission systems and peer review. eJournalPress integrated Hypothesis into the GEMS platform. They trialed it last spring with a few of the AGU journals, rolled it out across all AGU journals in September, and my understanding is that it's being offered to anyone who uses eJournalPress for manuscript submission.

One of the things I think is the coolest use case for Hypothesis is automated annotation of entities. You may or may not have heard of something called an RRID, a research resource identifier. It's very, very critical for reproducibility purposes, if you need to know which stem cell line was used in an experiment or where a particular reagent was purchased. At least 125 journals in the neuroscience space currently use these RRIDs, so they're widely used. A group out of the University of California San Diego created a tool using Hypothesis called SciBot. If you have Hypothesis and you open up a paper, either the HTML or the PDF version, it doesn't matter, the tool looks for these RRIDs; you can see them highlighted here, a combination of letters and numbers. It pops up information from a number of external databases along the side of the article in the form of annotation cards, so you don't need to navigate away to find that information. It's a combination of auto-generated and human-curated content, and if one of these RRIDs has an issue or doesn't resolve, or you want to ask a question about connecting it to the text, you can use the reply function to get in touch with someone at the project who will help you. [A rough sketch of this detect-and-annotate pattern follows at the end of this passage.]

Another project we're doing, which we refer to as illuminated footnotes, is with the Qualitative Data Repository at Syracuse University. They're working with a number of projects in the social sciences to connect footnotes with their original source material in the form of annotation cards. So when a footnote or endnote refers to a specific snippet of text or a particular figure in an original work, that information can be available via an annotation card, and more information can be added, translations and the like. We have a workshop with them in February, and this should be visible to the public later this spring, in collaboration with Cambridge University Press.

Just a few more. Journal clubs are highly used; there are some great groups out there trying to put together toolkits of best practices for journal clubs, and we're happy to see Hypothesis included as part of that. As I mentioned on the metrics slide, private groups are a key driver of annotation creation, so it's great to see that. Here is a screenshot of graduate students at the University of Texas at Austin having a journal club on top of content from the University of Michigan.
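Looping back to the SciBot-style workflow described a moment ago: the sketch below only illustrates the detect-and-annotate pattern (scan text for RRIDs, then assemble an annotation payload for each hit). The RRID pattern is simplified, the article URL and text are hypothetical, and actually posting each annotation would go through the Hypothesis API as documented, which the sketch deliberately leaves out.

```js
// Sketch of a SciBot-style pass: find RRIDs in article text and build the
// payloads a bot would then post as annotations through the Hypothesis API.
const articleUri = "https://example.org/article"; // hypothetical
const articleText =
  "Antibodies were obtained from vendor X (RRID:AB_2298772) and analysed " +
  "with ImageJ (RRID:SCR_003070).";

// RRIDs are prefixed identifiers such as RRID:AB_2298772; this pattern is a
// simplification of the real formats.
const rridPattern = /RRID:\s?[A-Z]+_[A-Za-z0-9-]+/g;

for (const match of articleText.matchAll(rridPattern)) {
  const rrid = match[0];
  // The real tool enriches each hit with records from external databases;
  // here we only assemble the shape of an annotation for it.
  const annotation = {
    uri: articleUri,
    tags: ["RRID", rrid],
    text: `Registry record for ${rrid} would be summarised here.`,
  };
  console.log(annotation);
}
```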
Holding the media accountable: I've referred to experts creating annotations that become part of the content. We've been working for a couple of years now with a group called Climate Feedback, a network of university scientists around the globe. Whenever an article comes out, whether in the mainstream media or a scientific publication, they distribute it among themselves according to their specialties. They do a couple of things: they give the article a credibility score, which you can see here on the right (this one unfortunately has a low credibility score, though there are certainly articles that run the whole range), and they do in-line annotation, looking for information that might be out of context or incorrect and, more importantly, providing links to better information. We're working with some other communities annotating in the public interest to expand on this Climate Feedback example, so if you're interested in exploring something like that with some of your authors or journals, just let us know.

And finally, annotation in the classroom. Our annotations rise and fall like clockwork with the semester now. We have an integration with the Canvas learning management system that's being tested at a number of schools. Professors are assigning close readings and collaboration projects; there are proven pedagogical examples of how students working together in a group improve the outcomes for all of them. And with the Hypothesis technology and the creation of private groups, the instructor can see what all of the student annotators are doing, what they're getting stuck on, and the like. So we're really excited to watch that as it progresses.

Last slide, I promise. These are a few interesting use cases I've just heard about in the past six months. There's a group at Springer Publishing, actually a production team, that has put together a group to annotate right on top of their XML staging site. I thought that was a pretty cool use case: again, if you can view it in your browser and you can select it, you can annotate it. There's another set of publishers who were doing a big migration project with their platform host and knew that a number of landing pages were going to have to change, so they made a group with Hypothesis, asked each other questions on top of those pages, and made notes of the things that needed to change. We have editorial and sales groups working together: if sales is doing a campus visit, editorial can mark up a number of articles with important authors who might be on campus and important specialties those sales colleagues might want to reference. And there are even folks annotating back and forth with each other on invoices, like the CSV file example I showed: rather than emailing invoices back and forth and saying, "Hey, column C, row 23, what do you think about this?", you can actually select it and ask someone the question right on top of the content. So we'd be curious about other use cases you might think of that we could add to a future presentation. That's it for me, so we can open it up to questions. I know there were some before I started talking, so Nate, if you can manage the questions.
Yeah, Michelle and I have been answering some by text in the background, but I think it would be helpful if you and Tom both talked a little bit about who would actually implement Hypothesis for a PubFactory publication — we answered that by text already, but just to clarify — and then also how it might work with the PubFactory native annotation capability, and whether one should use one or the other, or both, or how they might interact. Those are a couple of the questions we tried to address by text, but it might be helpful to talk about them on screen as well. And then after that, Heather, if you could talk a little bit more about the Crossref relationship to the annotation data and go into a little more detail about what happens there. Tom, you want to kick off?

Sure. So yes, PubFactory does have its own annotation technology that we built long ago, and some of our publishers are using it, some of them extensively. What we liked about Hypothesis is its industry-standard nature and the fact that it allows for annotations that span the whole internet, as Heather showed. So I think over time we may decide to phase out our own internal implementation. Another reason I was initially quite intrigued with Hypothesis is that it's able to match up annotations across both the HTML and PDF versions of your content. That's something we had never quite gotten to with our own implementation, and for publishers who publish a lot of PDF content, it's a really nice and pretty key feature to have. So in terms of forward-looking capabilities, I think it's interesting. For end users, we would probably want our publishers to choose one or the other annotation technology — I think it would be confusing to have multiple technologies in place on a single platform — but we're happy to work through the implications of that.

In terms of who does it: for those publishers of ours who are using the layout editor I showed, you don't need us to turn on Hypothesis; you just need that little magic piece of JavaScript and it turns on, so those publishers can absolutely turn this on whenever they want. For other publishers, we'd just have to update the templates to include the Hypothesis link. For the deeper levels of integration that Heather showed, we would want to work with you, the publisher, and with Hypothesis to figure out what you want to do, what use case you're trying to solve, and how best to do it, and then work that out as a little enhancement project on the existing platform. So I think there are a lot of different opportunities, including opportunities to get in very easily and cheaply and then expand as you go.

I would just clarify, Tom: when annotations are made through Hypothesis, they're not also stored in the PubFactory native annotation capability, right? That's right; they are two entirely separate repositories of annotations. I would just mention that we do work with publishers who have had annotation capabilities available before.
In some cases it's a pretty straightforward mechanism to build a little tool that would bring those annotations over, so if you do have a lot of annotations you wouldn't want to lose, that's certainly something we can discuss. Yep, absolutely. And if we were working with a publisher that has existing annotations and wants to move to Hypothesis, we would definitely do what we can to make that as easy and painless as possible.

So I'll take the Crossref question. In May of last year, I think, or perhaps early June, the Crossref Event Data project launched, and annotation is one of the items Crossref had identified that they wanted to be part of it. Using the API that I mentioned, we were able to feed information to Crossref on those publicly visible annotations. And as the Crossref Event Data project is indexed by Google, those publicly visible annotations will be discoverable via Google as well. So for publishers who are interested in yet another channel for users to find their content, that's a great way for it to happen, and we're excited about that. Another piece of news in the Crossref camp: in the fall of 2017 there were a number of presentations that Jennifer Lin did together with us at a number of venues. Crossref has identified annotation as one of the new potential DOI use cases — among these cases are, for example, individual comments by reviewers as part of the peer review process. So we are in discussions about what best practices around DOIs for annotations might be. If you have thoughts on that, we'd love to hear from you, particularly around making annotations citable, annotations as research objects, as part of faculty work. We're also doing some work with ORCID; in some cases you might want particular annotations you've made to roll up to your ORCID. So there are a number of great cross-industry collaborations on that slide.

Heather, another question that's queued up here in the background is around the eJournalPress use case you were showing with AGU. The questioner is especially wondering whether they're already using annotations for peer review comments, or whether there are other use cases going on with eJournalPress. All of the manuscript submission systems operate a little bit differently. In the case of eJournalPress, they have the Hypothesis functionality, but they've tied it very specifically into their dashboard. There are a number of things they created that are useful in particular for peer review: different levels of permissions and roles, for example, so that the journal editor can see who the multiple reviewers are, but the reviewers cannot see who each other are, and at a certain point the author can be added in — the ability to do single-blind and double-blind with those different permission roles. eJournalPress also wanted a defined tag set that could be easily applied: major revision, minor revision, problem with a figure, examples like that. And, probably most pertinent to this question, they wanted to offer reviewers a choice between the traditional review process, where you return a Word document that says "on page five, paragraph four, line two, you said..." and the author has to chase it down to see what that was, versus making that comment actually on page five, paragraph four, sentence two. So both the reviewer and the author have some flexibility within their system.
They can look at the reviewer comments on top of the article itself, or they can pull up what's called a review summary, which basically lists those issues. The data is still very, very early; we had a call with Joel at, I want to say, the very beginning of December, and they did have a number of peer reviewers using it, and we're looking forward to the additional data that comes out of that. If you're working with a manuscript submission system other than eJournalPress — we have conversations going with ScholarOne, Editorial Manager, River Valley, BenchPress — just remind them that this is a conversation that could be useful for you. In most cases they take the open source code and build it into their systems themselves, so it's not something that needs to be scheduled on our roadmap; it's something they can just do.

It looked like there was a question about costs, and I just wanted to clarify that the little example I showed, where we just dropped the JavaScript into the template, would be entirely free. A deeper level of integration — the kinds of things Heather shared in the second part of the presentation — would probably involve some costs, which would depend on the project and exactly what we were doing. But at the simple level I showed, it's entirely free.

Hypothesis is open source, and one of the great benefits available to the community is that you can take the code and implement annotation. It's important to remember that when you do that, the annotations made in the public layer are going to be visible to anyone who comes to your content. If there's a problem with one of those annotations, it can be flagged; Hypothesis operates as the moderator for those public annotations. So a key differentiator between just enabling the client to be visible and incorporating a publisher group, like we're doing for eLife and some other publishers at the moment, is that the control is in the hands of the publisher — a moderator or moderators, as the publisher wishes — along with that multi-layered option, where you can have both groups that anyone can participate in and groups that are limited, say, to the authors or to the staff. That is something we charge for. We think it's important that publishers of any size should be able to work with annotation, so we have a simple pricing model that goes by how many documents you publish, as a proxy for publisher size. Some groups use different revenue bands; we look at the number of documents you publish per year. But we have conversations about, again, what you're trying to achieve and how many documents you want to deploy across, so you can certainly deploy across just a small range of books, or one journal at a time, or even a particular part of one journal. That's up to you; it can be that precise. The pricing includes essentially unlimited publisher groups and layers, connection to your own accounts if that's something you wish, and certain customizations that are available right now.
And if you wanted to deploy Hypothesis across all of your content, we would look at how many documents you put out in a year, but that pricing would cover deployment on your content back to the beginning of time, at no additional charge, as well as the training and success programs I referred to.

Heather, you might just also clarify why Hypothesis, as a nonprofit open source organization, charges fees and what those fees go to support. Yeah, so we are a nonprofit, and we believe in supporting an annotation ecosystem overall. That's one of the key reasons we worked with the W3C to get the standard approved: from the earliest days of the web, the idea that people could add their own information and comments was envisioned, but it took a while for technology to catch up with the desires and wishes people have. We started the Annotating All Knowledge coalition a couple of years ago; it now numbers more than 70 publishers, universities, and tech companies that feel that annotation, particularly open and interoperable annotation, is important. We also run the largest industry conference around annotation, called I Annotate. This year I think it's the sixth and seventh of June — it's usually in May, but this year it's in June — and it'll be in San Francisco. We can send you information on that. So as a community-driven, mission-driven organization, we feel the code base should be available to all and open to contributions, but in order to do that effectively, we need to be sustainable on our side. Scholarly communications is one of the market segments we're working with, education is another, and there are probably others coming down the line. When we come up with our pricing, we base it on what a particular publisher's proportion of the entire scholarly output might be; that's where we start from, because we want things to be sustainable on that side. If you want something we already have available, there's no additional charge. If you want something we don't have available, we can certainly have a discussion about prioritizing it on the roadmap, but that would be additional.

There are no other open questions right now, but somebody had asked for some very specific technical detail on the W3C web annotation standards, and I've linked them to the post on the Hypothesis blog that first announced the standards being published. I want to just mention that there's a lot of technical detail behind those standards; they've only been live for a year, so not every annotation service out there in the world is already conforming to every part of the standards, and the standards themselves will mature as they move on. There are specific components of the standard that relate to the kind of role an annotation plays on a page, which is what the questioner was asking about. There is a specific vocabulary term within the standards that has to do with the purpose or role of the annotation, and that may be related to what the questioner was thinking about, but I invite folks to explore the link to see the full vocabulary and standards that were published by the W3C. [A minimal example of an annotation in the standard's data model follows this passage.]
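For the curious, here is a minimal annotation expressed in the W3C Web Annotation Data Model just discussed; the property that captures the purpose or role of an annotation is called motivation in the standard. The URL and quoted text below are invented purely for illustration.

```json
{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "type": "Annotation",
  "motivation": "commenting",
  "body": {
    "type": "TextualBody",
    "value": "This passage is worth a closer look.",
    "format": "text/plain"
  },
  "target": {
    "source": "https://example.org/article",
    "selector": {
      "type": "TextQuoteSelector",
      "exact": "human rights and climate change"
    }
  }
}
```

Values such as commenting, highlighting, replying, and tagging are part of the standard's motivation vocabulary, which is likely the vocabulary referred to above.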
Heather, I don't know if you wanted to add any other comments to that. I just wanted to say that, talking to publishers and folks in this space, we don't have to sell you on standards; that should be a no-brainer. Our vision for the annotation ecosystem is that annotation should work like email. We don't all have the same email client, but we can still email each other; if we needed a separate account every time we wanted to email someone on a different email service, it would quickly become unwieldy. So the success of annotation in general will largely be determined by the uptake of the standard, and we're excited that there are other annotation services out there looking at and working towards it. If you look at one of the reasons why comments really have not been successful, it's that they're siloed on sites and closed off; there's no way to interact with them across the web using different services. So this should resonate with everyone on this call: the success really rests on interoperability, and we can all work together to make that happen.

We seem to have run out of questions, and I've invited folks to ask more. I'll just mention, if anybody's thinking of something else they'd like to talk about, that you will receive an email within a day linking to a post that contains the recording of the webinar, the link to the slides that were used, and links to any websites mentioned during the course of the webinar, so you'll be able to visit that. It's a public page that you can share with other folks in your organization or with colleagues at other publishers as well. We'll give PubFactory the last word. Sure, thanks — it's been great presenting here today. And thanks, everybody, for joining; I hope this proved useful. Please do reach out if you have further questions. We'd love to talk about it. We think it's really exciting technology.