Awesome. Good morning, everyone. I promise I'll let you go on a break in just a second. My name is Marek Huaska. I work for Research Square, a company based in North Carolina that offers publishing solutions. We've been on the market for 15 years now under the brand name AJE, which some of you might have heard of, mostly offering B2C editing services to academic authors. We have recently also engaged in more B2B work, and this is an example of one of those projects.

Everybody loves preprints, right? If you are in the publishing world, preprints have been making a storm. You've heard of arXiv; preprints have been around for a while, and the numbers look great. But if you overlay them on the total number of publications in the world (and I promise you, the preprint numbers are included in that graph), it clearly shows how little impact preprints have made so far. We feel part of the reason is that people don't understand the value preprints can bring to their scholarship. That's a shame, because the peer review that happens at the preprint stage, or somewhat past the preprint stage, is extremely valuable. We feel this process is scholarship in its own right and worth preserving as a valid piece of research.

We've been pushing this project because we want to establish the peer review process as a valid piece of scholarship of its own, and we also want to increase the transparency of the process. If you submit a paper to any biology journal today, you'll probably get published sometime in 2020 if you're lucky, so you can't expect your work to be public any time soon if you follow the old standards.

We were very fortunate to be able to convince Springer Nature to pilot a new platform with us called In Review. During the submission process, you get the chance to opt into preprinting in a fully transparent fashion and with no extra effort on your part. Many journals have agreed to do it, many more than we were anticipating at first, and I will tell you that opt-in from the authors has been very good as well. So it seems that if you make the process easier, people will want to preprint.

But we're here to talk about annotation. Our preprint platform supports multiple ways of annotating a preprint. Some of them come from us or from the journals, such as the editorial badges, which signify that the journal has done basic quality checks on the submission. Community comments come from you, people like the peers of the authors publishing the work. And then there is Hypothesis. Thank you, Hypothesis, for helping us make it happen. We wanted to integrate Hypothesis as a first-class player very early on, with the least effort possible, and you will see what happened.

Then there are the public comments. Everybody knows how this works. We wanted public commenting to be as frictionless as possible. All you have to do to comment on our platform is fill out the form. We don't validate your email. We don't validate your name. We barely validate your content, just a basic spam check, because nobody wants spammy links under their articles. So we do a bit of moderation, but the threshold is very low. We want to encourage people to comment and to engage with our content, whether through our platform or through Hypothesis.
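Since the question of effort comes up a lot, here is a minimal sketch of what a low-effort, client-side Hypothesis integration can look like. This is illustrative only, not the In Review implementation: the publicly hosted Hypothesis client is loaded by adding its embed script to the article page, and the `enableHypothesis` helper and the `openSidebar` setting below are assumptions for illustration, not details of the actual platform.

```typescript
// Illustrative sketch: embed the public Hypothesis client on an article page.
// Not the In Review implementation; the helper name and settings are assumptions.

declare global {
  interface Window {
    // The Hypothesis client reads optional settings from this function if present.
    hypothesisConfig?: () => Record<string, unknown>;
  }
}

export function enableHypothesis(): void {
  // Optional client settings; keep the sidebar collapsed until the reader opens it.
  window.hypothesisConfig = () => ({ openSidebar: false });

  // Load the hosted client, which enables annotation of the current page.
  const script = document.createElement("script");
  script.src = "https://hypothes.is/embed.js";
  script.async = true;
  document.head.appendChild(script);
}
```

The point is that the reader-facing part can be a single script include; the harder part, as you will see, is getting readers to use it.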
There are also public annotations that might actually go away. We want you to be proud of the fact that you are being considered for publication, so we do display the name of the journal your preprint came from. We also show the peer review status because, as I mentioned earlier, the peer review is itself an annotation of your work. It shows that something is happening to your paper, that this is not just stale work lingering on a server somewhere.

However, the first question we received from people participating in this was: what happens when my work gets rejected? Nobody wants a big red stamp across their document that says "rejected." And the only answer we could give that would satisfy everybody was that we will remove some of the annotations. So keep in mind that people will not participate in some of these annotation projects if you can't remove some of the annotations down the road when the outcomes are not what they were expecting. I don't love it, and I'm sure many people in this room don't love it either, but this is the reality of the situation. We were unable to launch without making that promise to the community, and having made it, we are sticking by it.

Moreover, some annotations are persistently private. We would love to make the peer review public as an annotation, and it does become public if an article is published, but it has been determined that during the process itself the peer review cannot be made public. We don't fully agree with this decision, but we have to do what we have to do. This is an example of a private annotation that we can't currently put into the public sphere and are handling in-house.

Let's talk about our numbers. We have had 1,007 opt-ins since November 2018. We were also able to rescue a preprint-like server that was threatened to go under, Protocol Exchange, where researchers can preprint their work. We have had 47 comments using the lowest-friction process possible. To give you an idea of scale, we get about 50,000 views on our articles every month at this point. Out of those 50,000 views, we had 47 comments, and we had three Hypothesis annotations.

So I think the question for this crowd is: we are trying to do the right thing, we are trying to encourage annotation, we are targeting a crowd that should be willing to discuss the work, and we are trying to be good citizens. Everything you see is completely free to the participants; there is no cost to participating in this work and obviously no cost to annotating it. And yet we are struggling to find the sources of annotation we would like to have. Thank you.

Can we get a quick question for Marek? So, thanks. I'm at Elsevier, and we've run some experiments with this as well. The thing that always comes up for us, which I think may be similar for you, is that we don't know how much annotation we should expect. We don't know whether 47 out of 1,000 is a reasonable baseline, or whether it's high or low. And the resource you're managing is not money; it's the cost in time and attention. That is what I would suggest you look at.

We are definitely interested in attention as much as in annotation. For this conference I was talking about annotation, and I think I'm also mirroring Gabe's argument from a few minutes ago: it's hard for us to push development in this particular direction if our early opt-in is so weak, because for the attention we are getting, we're getting very little follow-up.
Oh yeah, that's my point: you don't know if it's weak or not, because there has not been a lot of empirical work on what a reasonable baseline is.

How would you do it, then? How would you establish that number? Because all I'm going on at this point, my gut feeling, is showing 47 out of 1,000 and saying that's low, a ratio of 0.04 comments or 0.003 annotations.

Let's talk about that more. I have some ideas. Great. Thank you, Marek. Thank you.