My name is Beth Ruedi. You have seen me once a day for the entirety of this I Annotate conference, but I will introduce myself again for all of the new faces in the crowd. I run something called Science in the Classroom at AAAS. This, as a caveat, is not Science magazine; I'm not involved in the Science family of journals. What we do is annotate papers from the Science family of journals (and soon papers from outside the Science family of journals) for educational purposes, so that non-experts, or amateurs, if you would like to call them amateurs, can access this material a little bit more easily.

I was thinking about this while sitting and listening to our first panel, which was an amazing, broad-strokes, blue-sky discussion, and I realized that my discussion (I don't know about the rest of my esteemed panelists) is very much looking at the forest and the trees, and in fact may be focusing on the trees. I don't know if that's really what Heather had in mind for us, but what I'm talking about now is our workflow, and how annotation through an actual annotation software has helped us.

The vision for Science in the Classroom is relatively simple. We want to make real research accessible to non-experts slash amateurs, and thereby, hopefully, enhance scientific literacy. We use annotation as a tool to highlight the collaborative nature of research. A lot of people think the scientific method is a thing that actually happens in a linear fashion; anybody who has done science knows that is not in fact the case. What it is, really, is a collaborative effort across many years and many subjects, revising and iterating our knowledge. And finally, we want to empower educators. There are a lot of educators out there who would like to talk about science, who would like to talk about real research, but they also feel incredibly intimidated, because, as was mentioned in the first panel, scientific papers are really difficult to consume, and that's a barrier for the rest of the world.

As an overview of what Science in the Classroom has to offer: we currently have 84 published papers, 14 in our queue, and 90 and counting contributors. Those range from actual high school students to grad students to postdocs to educators and researchers, people who aren't necessarily very skilled, when they start the process, at communicating their science at an undergraduate level.
By the time we're done, we're hoping they feel a lot more comfortable with it.

Each resource comes with the unaltered article (we think it's really important to present the information as it was originally presented), but it has an interactive annotation fabric woven over the top of it. We also have tabs for all of the figures in the paper, and those tabs are used to describe the methodology, visualization, etc. that the authors decided to use to convey their information. We supply educators with educator guides. A lot of our resources have data activities that allow students to actually get their hands quote-unquote dirty and play with the data a little bit, plus news and policy and external resources. And finally, on our new website, we also have the ability to create collections: external resources, news and policy, and multiple annotated papers on one subject.

Here's what our past workflow looked like. We would ask an annotator to take an article, a PDF version of the article, and use the Adobe commenting tools to annotate it for us. As you can see, the highlights have different colors; we asked annotators to use those colors to indicate which learning lens they wanted to categorize each annotation under. Each of the little bubbles up there has a comment on it, and those comments are the annotations they wanted to put on the paper. Then we used a lot of copy-paste (it was really fun) to put each individual annotation directly onto the text using a Drupal 7 widget: we had to highlight the text we wanted to annotate, copy-paste from the Adobe file over into the Drupal widget, and that would put an HTML layer on top of it. And this is our old website, which we stopped telling anybody to go to starting about a year ago, because it was kind of embarrassing; it looked like we'd made it in 1995. It looks a little better now.

Once we started partnering with Hypothesis, we were still in a learning process (we're still in the learning process now). We either gave annotators the login for the actual administrator account so they could write the annotations within that account, or they created their own Hypothesis login, and then we had to go in, copy and paste all of the different annotations, and put them back in under the Science in the Classroom administrator account in order to get them into the learning lens, which is how we provide our annotations to the users.

And now, thankfully, working very closely with John (John, where are you? John, are you there? That's so sad. He is our savior), we have a situation where, as you can see, the annotator in this example is working under their own handle, "beyond roots." It's not the administrator account; it's their own annotation account. We have it set up so that now all we have to do is push a deploy on our end, on Drupal 8, and it will start bringing the annotations in through our learning lens.
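The pull step Beth describes, where the site fetches a single annotator's annotations for one paper, maps naturally onto the public Hypothesis search API. Here is a minimal sketch in Python, assuming the requests library; the account name and article URL are placeholders, not Science in the Classroom's real ones:

```python
import requests

API = "https://api.hypothes.is/api/search"

def fetch_annotations(article_url, username, token=None):
    """Pull every annotation one account has made on one article."""
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    base = {
        "uri": article_url,
        "user": f"acct:{username}@hypothes.is",
        "limit": 200,  # the API's maximum page size
    }
    rows, offset = [], 0
    while True:
        resp = requests.get(API, params={**base, "offset": offset}, headers=headers)
        resp.raise_for_status()
        batch = resp.json()["rows"]
        rows.extend(batch)
        if len(batch) < base["limit"]:
            return rows
        offset += len(batch)

# Placeholder values for illustration only.
annotations = fetch_annotations("https://example.org/article", "beyondroots")
for a in annotations:
    print(a["id"], a["text"][:60])
```

A deploy-time job along these lines is, plausibly, all the Drupal side needs to rebuild the learning lens without any copy-paste.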
So, our overall needs as a product (I hate referring to something that has educational value as a product, but it's the easiest way to refer to it): we need editorial control over our annotations. A lot of the time it's a good idea to let people annotate as freely as they would like, but because we're actually providing educational content, a layer of expertise put on top of our papers, we need that editorial control. If somebody says, "Hey, that's not correct," or an annotation is written for somebody who's a tenured professor instead of an undergraduate, we need to be able to go in and change it.

We need a streamlined annotation process. As you can see, there has been a whole lot of copy-paste going on in our lives, and I would ideally like there to be no more copy-paste ever. In fact, I'd like to set fire to copy-pasting and never do it again.

Another problem with our annotation process is that there are frequent breaks. Any time there's an annotation flub going on and we can't get our information to the people who are using it, that's a problem. We end up having an email chain of what seems like 500 to 600 emails to troubleshoot. Eventually (I know we're not there yet) I'd also like to set fire to those email chains.

We also need something that's hardy, meaning it's really difficult to break, and we need something that's flexible. Ideally, at some point, we would like to let there be a learning lens that's just user-generated content: more students coming on, reading, asking questions, interacting with the authors; annotators interacting with other annotators; that type of thing. And we'd like to be able to turn on different learning lenses based on what paper the person is looking at or what's relevant for that particular content.

Issues we've had so far: we have a lot of learning lens breakage with the Hypothesis integration. We've found all sorts of really cool ways to break it, and I'm sure we'll find more as we go. If there's a reply on any of the annotations, it breaks the whole thing. If there's an orphan annotation, it breaks the whole thing. If there's a page note, it breaks the whole thing.
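The breakage list suggests an obvious defensive step: filter out the annotation shapes the learning lens cannot digest before rendering anything. A hedged sketch, assuming annotation dicts in the JSON shape the Hypothesis API returns; orphan detection is the one case that cannot be decided from the JSON alone:

```python
def is_reply(annotation):
    # Replies carry a "references" list pointing at their parent annotations.
    return bool(annotation.get("references"))

def is_page_note(annotation):
    # Page notes attach to the document as a whole, so they have no selectors.
    targets = annotation.get("target", [])
    return not any(t.get("selector") for t in targets)

def keep_for_learning_lens(annotation):
    # Orphans (selectors that no longer anchor in the page) can only be
    # detected client-side by re-running the anchoring step, so this sketch
    # screens out what the JSON alone reveals: replies and page notes.
    return not (is_reply(annotation) or is_page_note(annotation))

# Applied to the rows returned by the search sketch above:
safe = [a for a in annotations if keep_for_learning_lens(a)]
```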
So we're working on that.

Then there's the learning lens presentation. We have this proprietary software that delivers content in a very beautiful and accessible way to our users, but it's stuck with a specific-sized block that pops up when you click on the annotations, and what that means is that the lovely Hypothesis automated integration of images and videos doesn't convey nearly as well within our own learning lens. That's something we need to work on with our developers to get it looking a little bit better.

And then one of the things that Shelby Lake (he's the program associate for Science in the Classroom) and I are always sad about is how much we rely on others to be able to do our job. I feel terrible every single time I bug John, and I feel terrible every single time I bug our Drupal developers. I realize that working with them is in fact part of our job, but at some point we would really like to have enough control that we don't have to rely on the developers deploying a change to the website in order to get the annotations pulled in automatically. Poor John has been awesome and has made us a custom Hypothesis Chrome extension, just for us, so we can do a few things to make our workflow better, but that's really not sustainable for him or for us in the long run. At some point we'd really like to be able to walk around on our own two feet, and it sounds like where Hypothesis is going, and where other annotation clients are going, is very much in that direction. I will leave it at that.

Next up we have Sebastian Karcher. And you are the associate director, correct, of the Qualitative Data Repository? Let me see if I can pull up your paper there.

Good morning, and thank you. Yeah, continuing on the tree-themed topic: this is again very hands-on, hopefully with a little vision, but probably low on the vision. So, the Qualitative Data Repository. We are a small social science data repository; we archive qualitative social science data. Anyone who is in the sciences more broadly, or reads about the sciences, has probably followed along with what many people call the transparency revolution in the sciences. The expectation today is that if you publish empirical science, you are transparent about what's behind the science you publish, and that includes publishing your data and how you analyzed your data.

Broadly speaking, with some caveats, we know how to do this with quantitative data. Not everyone does it, by any means, but we know how it should be done, right? You have some type of matrix-form data: a spreadsheet, columns, rows. Then you have some code, in some language, that analyzes that data; it could be R, it could be Python, it could be Stata, it could be something more fancy. That code produces a table, or, if you're cool, it produces a figure. And then you put the code and the spreadsheet (the matrix) in a data repository, and boom, you have transparency, and it's great.
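That quantitative workflow is compact enough to sketch end to end. A minimal illustration in Python with pandas and matplotlib; the file and column names are invented for the example:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Matrix-form data: rows and columns in a spreadsheet.
data = pd.read_csv("survey.csv")

# The analysis script produces the table in the paper...
table = data.groupby("country")["trust"].mean()
table.to_csv("table1.csv")

# ...or, if you're cool, a figure.
table.plot(kind="bar")
plt.savefig("figure1.png")

# Deposit survey.csv, this script, table1.csv, and figure1.png together
# in a repository, and the analysis is transparent end to end.
```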
That's not how qualitative research usually works. When you do qualitative research in the social sciences, oftentimes you write a sentence, and then you point to maybe one or two documents (that might be an interview, that might be an archival source), and then you discuss how you analyzed that, and then you move on in your story, point to the next document, and go through the process again. And you do that again and again and again, in, I don't know, 50, 60, 70 different passages in your text. So the nice "data set plus script, put it in a repository" model doesn't really work that neatly.

There is a traditional way scholars have dealt with that since, roughly (there's some debate about this), the end of the 16th century, and that's the beloved footnote, or, in this more horrifying example, an endnote. No one likes endnotes. The idea is pretty similar, right? You write something, and then you put a footnote in there, and you write a little bit about what that footnote does, and you point to the sources you used. And if you look at that footnote, or endnote in this case, two problems come immediately to mind. The first problem: the two sources, if you can see them (I don't know how legible this is, but they're kind of viewable), are books published in Russia and/or the Soviet Union in the 1920s. They are, I don't think, in any US library, so it's kind of hard for you to find them. Second, given that it's printed, your editor will not be very happy if you go into great length in your endnotes. There was a time when people published, you know, three quarters of their pages as footnotes, in history at least; in the social sciences, no journal editor will let you get away with that. So both are bad for transparency: you can't actually go to the underlying data unless you travel to all the places the researcher traveled to (I'm a comparative politics scholar; I travel a lot, so that's usually not feasible), and you have space constraints. Not good.

With annotations, we think we can get around a lot of this. Instead of having a footnote, we just put an annotation over the relevant passage in the article, and then the author has as much space as they want to talk about: why did I use this source? What is this source doing? What's the level of credibility of this source? Etc., etc. It's all digital, so space constraints aren't really an issue. And I can then link to a primary source that I've digitized and put into our data repository, and a reader, instead of traveling to Moscow, can just click through and look at the linked source. The cool thing is, this is not a vision; this is very much in practice. So: QR code, URL, if you want to try this out. I'm going to let you play with this a little bit. I should warn you, you will need to register for the repository to actually click through to the sources; there are a couple of sociological (not technical, actually) reasons for that. But otherwise I think it looks really fun, and it adds a lot of depth to the article.
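Under the hood, a footnote-style annotation like the ones on that page can be created through the Hypothesis API. A minimal sketch, assuming a developer API token; the article URL, the quoted passage, and the repository link are placeholders, not a real QDR record:

```python
import requests

payload = {
    "uri": "https://example.org/article",
    "text": ("I rely here on a 1924 Soviet pamphlet; a scan is deposited at "
             "https://data.example.org/file/123. The source is partisan, so I "
             "weigh it against the archival record discussed below."),
    "tags": ["source", "transparency"],
    "target": [{
        "source": "https://example.org/article",
        # Anchor the note to the passage the footnote used to hang off of.
        "selector": [{"type": "TextQuoteSelector",
                      "exact": "as party membership collapsed in 1924"}],
    }],
}

resp = requests.post(
    "https://api.hypothes.is/api/annotations",
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
)
resp.raise_for_status()
print(resp.json()["id"])  # id of the newly created annotation
```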
Vision (we were supposed to talk about vision), so here is some shorter-term vision. We started working on this about a year ago, and we're very proud that we actually have something tangible up in such a short amount of time; a lot of the credit for that goes to Hypothesis for providing annotation infrastructure that works nicely for this.

The biggest issue for us is that this is still very tedious for the scholars to actually do. So a lot of what we're thinking about is: how can we integrate writing these annotations into the process in which scholars create their work, rather than having them do it at the end, when they may have forgotten all about the research they've done? One of the principal ways we're thinking about this is integrating it with the tools that scholars use to store and annotate their research: reference managers, qualitative data analysis tools like NVivo, or, for the more adventurous qualitative scholars, tools like GitHub or the Open Science Framework, or both together.

The second thing, and that's obvious given who we are, is to preserve and protect those annotations. We are a data repository; the core of our mission is that the stuff is going to be there and accessible, not in five years but in fifty years. That has a lot of challenges, especially when you think about the two moving targets we have to deal with: (a) the annotation and (b) the underlying publication. Since we work with Hypothesis and with a publisher, we don't really have control over either of them, so that's a bit tricky for us.

And then, at least as an idea we want to explore ("socialize" may be a little strong a word), this is obviously a very interesting way to do interactive source criticism, right? Your researcher makes a point: "I've consulted this source, and it says X." And this is a great place to reply: "Well, no, I've seen this source, I've looked it up, and it doesn't." This debate is taking place in social science currently, but it's very isolated, because the only people who can engage in it are other people who went to the same archive. So it's very narrow, and we think this can expand it. Obviously there are all of the considerations we talked about yesterday: comment systems are very risky places, and academics, in spite of what those of you who aren't in academia might think, are not actually very nice people all the time. So there need to be a lot of safeguards in place to make sure this happens in a constructive way.

Okay, we were also supposed to talk about creating a better ecosystem, and that's where I get to scold people (and to compensate for that, there are going to be cute animal pictures). Creating a better ecosystem: we encountered a bunch of problems that we didn't really expect, and I figured I would showcase some of them. One of the bigger ones: if you embed a PDF in an HTML frame, I cannot annotate it. Please don't do it. This is the old Wiley layout, but a lot of the cool new kids on the preprint front, including the OSF framework, do this too, and it's not nice. Please don't do it: cone of shame.
Even worse is ReadCube. I'm just going to leave it at that: lock it up in a proprietary system, make it impossible to annotate with anything except the ReadCube annotation tool, and look how angry that guy is.

This next one is actually not Elsevier's fault, although I put up the Elsevier example; it's an obstacle I wanted to raise. If we want to link to an article, a lot of the time that goes through the Hypothesis proxy, which breaks the authentication to paywalled articles. That's really unfortunate, because a lot of the people we want to read this may not have the Hypothesis Chrome extension installed. So this is not blaming the publisher, necessarily (except for not being open access), but it's a problem we need to solve. The dog looks very sad.

The last things are begging for better metadata. One is that good metadata on article pages helps us tie the annotations to the right page, and these three meta tags are the entirety of the meta tags on one article page, which is not terribly helpful. The other thing is session-specific, cryptic PDF URLs. So horrible: if I want to point a reader to a PDF and say "look at that annotated version of the PDF," I can't do it, because the PDF has a session-specific URL. That's sad, so please don't do it, and this is how I'm looking at you. I think sad and happy dogs make everything better, so that was a good call.
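For the metadata complaint, the fix is old and simple: stable Highwire-style citation tags on the landing page let annotation clients tie the PDF and the article page to the same work. A small Python check with an invented example page; the tag names follow the widely used citation_* conventions:

```python
from bs4 import BeautifulSoup

html = """
<head>
  <meta name="citation_title" content="Example Article Title">
  <meta name="citation_doi" content="10.1234/example.5678">
  <meta name="citation_pdf_url" content="https://example.org/static/article.pdf">
</head>
"""

soup = BeautifulSoup(html, "html.parser")
for name in ["citation_title", "citation_doi", "citation_pdf_url"]:
    tag = soup.find("meta", attrs={"name": name})
    print(name, "->", tag["content"] if tag else "MISSING (sad dog)")
```

Note that citation_pdf_url only helps if it's a stable URL, not a session-specific one.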
Next we have somebody that you have already met, but I will reintroduce her: Jennifer Lin. She is the director of product management at Crossref, something that I love very much.

Cool. I don't use Macs anymore, on purpose. All right, as Beth mentioned, I'm Jennifer Lin, and I'm speaking here about Crossref. For those of you who aren't familiar with the organization, we provide scholarly infrastructure. Digital object identifiers are a big part of what we do, which essentially prevents link rot from happening to scholarly research that has been published. We do a lot of other things as well, some of which I'll cover over the course of talking about annotations and why all of this matters.

A big part of what we do is the connecting up, the linking up, of many things through metadata. This is not only research articles to research articles through, say, references, but even, within a specific research work, linking the contributor through an ORCID iD (thank you, Rob) to the funder, with a unique funder ID, through the Funder Registry, to the research article, to the data set, through to the software, etc. All of that linking up is enabled through metadata.
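That linked-up metadata is openly readable. A minimal sketch against the public Crossref REST API; the DOI is a placeholder:

```python
import requests

doi = "10.1234/example.5678"
resp = requests.get(f"https://api.crossref.org/works/{doi}")
resp.raise_for_status()
work = resp.json()["message"]

print(work.get("title"))
# Contributors, ideally with ORCID iDs attached:
for author in work.get("author", []):
    print(author.get("given"), author.get("family"), author.get("ORCID"))
# Funders, identified through the Funder Registry:
for funder in work.get("funder", []):
    print(funder.get("name"), funder.get("DOI"))
```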
We're here because of annotations, and annotations are, I think we can all agree, a very valuable asset that can enrich the research findings being shared, whether that finding is represented in the form of a research paper, a preprint, a data set, or even software. Annotations are a scholarly resource, and we need to figure out together, as a community, how we properly treat them as such. Part of that is getting them into the scholarly record. For what reasons? So that people can find them; so that researchers can cite them; so that the annotators who contribute, who are knowledge producers, can get credit for them; so that they can be linked up into the larger map of research production; and so that they can be discovered, whether because they can be text- and data-mined, or because they're linked up to the article, which is then also text- and data-mined. All of the things associated with an annotation are inlets to finding the research and answering the research questions we have.

So the point is that researchers have the potential to be annotating along the way: not just in the publication workflow, but across the entire workflow of the researcher conducting his or her work. That might start at the level of the grant, right? If the grant is published, it can then be annotated. And I know that many funders, by law perhaps, are required to publish their grants; private philanthropies are also beginning to move in this direction. These initial proposals can be very useful, not only to the funder and the research group involved but to the larger community. Annotations then would enrich the software and the data sets that are outputs of the research itself. These are obvious to us all, but it goes all the way from the early outputs that are shared (or perhaps shared within a small circle and kept private) down to the final publication, as well as all of the different reuse objects that come out of these publications. All of them can be annotated, and all of these are important enrichments we all need to know about.

So getting this into the scholarly record, by way of the metadata, is the mechanism by which we at Crossref understand we can help out. This diagram is rather unhelpful in the sense that it's article-centric, and that's not at all what we mean to say, so much as to illustrate that there are things that link up, and the things that count traditionally are only a subset of the other things that could count, and may count, if we properly support them as part of the formal scholarly record. You have the article connected to other articles. We have the data and software that underlie the research findings in that article, the preprint, the videos, the protocols, the published peer reviews. We also, of course, have the annotations here. By linking all of this up, we can create what is essentially a massive graph, a map of research in itself, and at Crossref, because we're open scholarly infrastructure, we make all of this freely available through our APIs to the entire community.

This map is then able to link the research works to the contributors (whether your contribution to a research paper is as author, editor, or reviewer, or, say, as the author of the annotation, or the curator of the data set, etc.) and then also to the activity surrounding it. This is one of the things we have been working on for two years to build, and the beta is actually going to start in a couple of weeks. It's called Crossref Event Data. We're going to be collecting activity events (what we call events) surrounding these research publications and knitting them up into this open scholarly map, so that any of the links between social media and, say, a book or a data set registered with DataCite, or a research article, etc., can be linked up to all the other things the research article is also linked to: references, mentions in Wikipedia, mentions in blog posts. When we release the beta, this will also include Hypothesis annotations that occur on any publications registered with Crossref or DataCite. So anything that has a DOI with a Hypothesis annotation on it, we are going to make available through our API. We hope this will be a useful thing for everyone as we begin to figure out: what can we do with all of these links? What can we do with this open scholarly map?
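Once the beta is out, querying for Hypothesis annotations on a given work could look something like the sketch below. This is hedged: it assumes the v1 events endpoint and parameter names Event Data has described, and the DOI and contact address are placeholders:

```python
import requests

resp = requests.get(
    "https://api.eventdata.crossref.org/v1/events",
    params={
        "obj-id": "https://doi.org/10.1234/example.5678",
        "source": "hypothesis",
        "mailto": "you@example.org",  # a contact address, per API politeness
    },
)
resp.raise_for_status()
for event in resp.json()["message"]["events"]:
    # subj_id is the annotation; obj_id is the registered work it points at.
    print(event["subj_id"], event["relation_type_id"], event["obj_id"])
```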
So again, it's not just the publication workflow itself that's important; it's the entire research workflow, all the way down. And with this open data, as I mentioned, the use cases are unlimited. How people interpret it is wide open; what we do is just make the data openly available to everyone.

I'll echo a point Sebastian made earlier: what we need is for all players online to play well and make the content available in a way where systems, robots, APIs, etc. can find it. There are very many ways some publishers or platforms have set things up so that there are lots of cookies which block you, or so many URL redirects that that also becomes a challenge. I won't go too much into it, but please don't do any of those things; make it easier for all of the systems trying to knit everything together and propagate this to everyone. Please make it easy for Hypothesis to know about the annotations that may happen on your platforms. And one last thing: as an infrastructure provider, we want to make sure we are supporting everyone. So if you have a platform where there are links, there are events, there are activities surrounding research publications, and you would like to be included in Event Data, please do talk to me. We would like to include you in the Crossref Event Data tracker. Thank you.

While we're trying to get Joel on the screen for all of us (you may have already actually figured this out): Jennifer, sound check, one two three. A question from the audience: how are you going to assign DOIs to the annotations, or are you going to assign actual DOIs to the annotations themselves? Sound check, one two three. Is it on? Yeah, please go ahead. Okay, I'll answer the question really quickly. Crossref Event Data actually tracks activity surrounding research publications with a DOI; the activity itself does not need a DOI. So a tweet about a research article will be an event that we share; the tweet does not need a DOI for us to capture it. That makes things easier.

Well, I would like to introduce Joel Plotkin. He is the CEO of eJournalPress, and he is joining us remotely.

Yep, yep. Hello, everybody, good morning. Thank you for taking a few minutes to listen to more presentations. What I'm going to talk about today is how we use the Hypothesis annotation tool in the classical peer review process, in a slide presentation, all in five minutes.

Basically, the use case we're trying to address is that journals around the world use basic online, web-based peer review software. Traditionally, editors invite reviewers to review the manuscripts, and reviewers provide reviews or feedback in web forms, and those web forms are based on a series of questions asking about the scientific merit: "Is the science novel? Yes/no" type questions. So the question we were working on with Hypothesis, eLife, and the American Geophysical Union was: how can we use annotation tools to help authors, reviewers, and editors redline the documents and collaborate a little better?
Here's a sample of the web-based forms asking those questions, with comments to the author and comments to the editor. If you've been a part of the peer review process, you're probably very familiar with this.

So what we did is we started looking at the Hypothesis tool, and we wanted to make it so that reviewers and editors could mark up the PDF article file. We also realized that we wanted to share this marked-up manuscript with the authors, but we needed to keep it blind as to who the reviewers are: the authors could see, "Oh, these are reviewer number one's and reviewer number two's comments or annotations," but the editors should see that reviewer number one is actually John Smith or Sally Waters. So there are different security models, which I'll show you today. Then also, as you start making annotations (we got this feedback from the American Geophysical Union), it would be nice to tag them: whether they're major or minor concerns, whether they're just small edits, whether they're made to figures, etc. And as you go through and start using the annotation tool, the Hypothesis tool, there start to be a lot of comments or annotations, so it would be nice to have a filtering system where we can say: show me only the major concerns, or show me only the concerns regarding figures, or show me only reviewer number one's comments. I'm going to show you what we ended up coming up with on the next set of screens.

Within our peer review system, you'll see there's a link to annotate the merged PDF and also to show a summary table; we'll come back to that in a second. When you click on the annotate link, we'll actually display the Hypothesis annotation tool, and you'll be able to highlight a specific piece of text in the tool. This is as if you were a reviewer and you wanted to mark up this manuscript: you can click on this annotate button after you highlight the text, the classical Hypothesis comments area will be displayed, and you can type in your comment. Underneath the comment you can tag it (whether it's just a summary, major, minor, or an edit), and all of that's configurable. You can also specify whether this is a confidential comment that displays only to the editors, or whether the annotation should display to the authors as well. In the Hypothesis tool, we've configured it to display the annotations in the side panel, and as you can see, we'll show the role of each person: reviewer number one, number two. This is the editor security model, so you actually see the full name of who's commenting; if this were being used by the author, they'd just see the tag "reviewer number one," not the full name. Here you can see there's a whole list of annotations going on, with the different tags and so forth. And as I was saying earlier, sometimes this becomes very long, so you'll want a filter: you can click on that "all annotations" link at the top and select a filter, like show me reviewer number one's comments, show me the major comments, show me the edits, and so forth.
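The filtering Joel describes is straightforward once each annotation is stored with its role, tags, and confidentiality flag. A minimal sketch; the field names are invented for illustration and are not eJournalPress's actual schema:

```python
annotations = [
    {"reviewer": "Reviewer 1", "tags": ["major", "figures"],
     "confidential": False, "text": "Figure 2's error bars are unexplained."},
    {"reviewer": "Reviewer 2", "tags": ["minor"],
     "confidential": True, "text": "Methods citation 12 is out of date."},
]

def filter_annotations(items, tag=None, reviewer=None, for_author=False):
    """'Show me only the major concerns', 'only Reviewer 1', and so forth."""
    for a in items:
        if for_author and a["confidential"]:
            continue  # editor-only comments never reach the author
        if tag and tag not in a["tags"]:
            continue
        if reviewer and a["reviewer"] != reviewer:
            continue
        yield a

for a in filter_annotations(annotations, tag="major", for_author=True):
    print(a["reviewer"], "-", a["text"])
```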
The next thing we had to work on was the user interface, because even though we're gathering all this data, we had to meld it back into the classical peer review form I showed you at the beginning, with all the questions and comments. What we did is we decoupled the Hypothesis user interface from our classical back-end database. All those annotations that the reviewers are putting into the Hypothesis toolkit actually go back into our database for that one manuscript, and then we can repurpose the annotations however we want. We store them in the database, and later on we query our database and build a nice web-based form that summarizes the annotations: it gives the context of where the annotation was highlighted, what the reviewer's comments were, what the categories were, and whether it was a confidential comment or not. This is very important, because we needed a clean way to say: well, if the person doesn't want to click on that annotations link, how can we still show the author a summary of this feedback in the decision letter? So we build this web form, and then we can convert it on the fly to a PDF file and attach this PDF summary to the author's decision letter, so the author can quickly see what the comments were and revise their manuscript. They'll also have the option to go back and interact with the tool if they want; we wanted to give authors both ways of interacting with the system. And that's the presentation for today. Questions?
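The summary step, going from stored annotations to a decision-letter attachment, can be sketched in a few lines. This is illustrative only, with invented field names; the real system renders a web form and converts it to PDF:

```python
from collections import defaultdict

stored = [  # what the database hands back for one manuscript (invented fields)
    {"reviewer": "Reviewer 1", "tags": ["major"], "confidential": False,
     "quote": "n = 12 in all trials", "text": "Justify the sample size."},
    {"reviewer": "Reviewer 1", "tags": ["edit"], "confidential": False,
     "quote": "there data show", "text": "Typo: should be 'their'."},
    {"reviewer": "Reviewer 2", "tags": ["major"], "confidential": True,
     "quote": "novel mechanism", "text": "Overlaps heavily with reference 9."},
]

def decision_letter_summary(annotations):
    by_reviewer = defaultdict(list)
    for a in annotations:
        if not a["confidential"]:  # editor-only notes stay out of the letter
            by_reviewer[a["reviewer"]].append(a)
    lines = []
    for reviewer, items in sorted(by_reviewer.items()):
        lines.append(f"{reviewer}:")
        for a in items:
            # Quote the highlighted passage so each comment keeps its context.
            lines.append(f'  [{"/".join(a["tags"])}] "{a["quote"]}" -- {a["text"]}')
    return "\n".join(lines)

print(decision_letter_summary(stored))
```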
Great. Questions: can anybody hear me through this? If you do, come over here. I'm sorry, I can't hear whether there are any questions.

Okay. I'm sorry. I'm Gail Clement; I'm a library administrator at Caltech. We have a lot of AGU authors and editors, and I myself was on the AGU pubs committee for a couple of years. I am really wondering what your expectations or hopes are for who is now training researchers in these additional tools and practices around annotation. That's part A. And then the loaded part, part B, would be: is there a role for libraries in particular, who are training authors on a whole bunch of other stuff around open scholarship, such that the training we're doing for researchers on our campuses could be integrated into the way you're pushing out these tools, as part of the research lifecycle, as part of the researcher workflow?

Okay, I'll try to answer. Right now we're doing a pilot with the American Geophysical Union, because it's not just new authors or new reviewers; it's even people who've been publishing for 20 or 30 years, who've been helping journals peer review for a long time, and this whole annotation process is very new to them. So what we figure is to do this pilot, gain the feedback, and then probably set up some web videos to show them how to use the tools, best practices. And also, at scholarly conferences like the Society for Scholarly Publishing or the Council of Science Editors, we'll be able to do presentations and educate people on how to use the tools.

Yeah, so my view on this is that I like technologies to require as little training as possible. Researchers are very busy; they don't like to spend time being trained. So my first step is always to try to do this in steps, or in ways, they already know, rather than requiring training. To the extent that training is required, or at least helpful, I think libraries are a great place. It's often where new scholarly technologies get introduced to researchers, because the other place is the senior faculty, and they usually use the tools that are 20 years old. So in that sense, to the extent that training is needed, I think libraries have a key role to play.

I think it's a very interesting question. You know, I build products; I'm a product director. From that standpoint, obviously you want to design the system so it can be integrated into workflows as easily as possible; user experience is a big thing, user design is another big thing. But we also know the same problem exists on the other side, which is that there isn't any such thing as "build it and they will come," right? So to the extent that there are new social ways of sharing and communicating, those need to be built into the system, and that may go above and beyond merely delivering a product that can easily be used.

I think that's a really good point: we've got all of these great tools out there, and people don't necessarily know they exist. So it's not just training people how to use them; it's how do you even make it known that they're available? That's an interesting question.

Heather: I just wanted to add that we're going to have unconference sessions this afternoon, and one that we hope you'll be interested in joining is adoption strategies. If you are interested in this topic (and I think every use case we've heard, and will hear later today, has the adoption-strategy aspect to it), I'm putting out a plug for that. It's not on the wall yet, but it will be.

Cool. Hi, so the SciPy conference has, for its proceedings, been having annotations via GitHub tags and markup. As was mentioned yesterday, being able to go to a specific line is extremely helpful. I'm somewhat in charge of making Hypothesis a thing as part of that process, but I'm somewhat afraid of the difficulty of integrating with the existing GitHub annotation services, particularly since this is super useful for reviews, because it's specifically about papers. Are there any efforts right now to integrate across these different platforms for annotating, including things that are, say, internal or proprietary?

That may be one of those questions that's actually better for the Hypothesis team than for us, but I could be completely wrong about that. Other services would be fine, too. Yeah, sure. What did you say the conference was? Sorry, I didn't hear. The scientific computing in Python conference. Ah, the scientific computing in Python conference. So, Joel, can you hear me talking right now? Joel, can you hear me? Did we mute him? Okay, well, that's okay; we can't hear him, but he can hear us. To sum up (there we go), the question was: how are we approaching integration of the multiple different platforms that are all trying to do the same thing? I hadn't realized until yesterday that GitHub already had an annotation process and was way ahead of the curve on that. And Joel, you may not be able to answer that; it may be a question for the Hypothesis team and the people working on the annotation software.
I certainly don't have a good answer to that. I think in general this is a new field, and people are looking at different ways to use Hypothesis, and it will continue to evolve. But Dan and Hypothesis are reaching out to different vendors and sharing their ideas, even as far as standard annotation formats and tools, and I think over the next couple of years there will be a convergence on the ways and techniques of using annotations. There's going to be a bit of a learning curve, and time to get there, both for the tool set and, as the previous question raised, for how we train end users.

As a follow-up, or somewhat related, in terms of code and software as actually part of the research process: there's that Donoho quote that the real work of science is not what is in the paper; the paper is just the advertisement for the real work of the science, and recording all of that matters. So what efforts are being made right now to really get that connection: having not just papers being annotated, but the entire research process?

Okay, so to sum up that. Joel, did you hear that at all? Can you repeat it one more time? Absolutely. I think this is a really good and interesting question, and something that Science in the Classroom is really interested in: not just annotating the paper that comes out of the research, but how we are going to use annotation to talk about the research process itself. You've got this paper, you've got these great things, they're publishing their results; but how can we use annotation to weave into that how those results were acquired in the first place, and the actual practice and nature of science, or of any research?

Okay, that's actually pretty neat, because some journals are very classical: they have these reviews, and I think of annotations as another way of getting feedback from the reviewers and editors to the author so they can improve their communication. But some journals are a little more innovative. If you look at what eLife is doing, they're actually publishing the reviews. They'll work with the author, there will be comments or annotations going back and forth, and when the manuscript finally gets published, they'll publish the reviews with the annotations downstream, so readers can understand what type of dialogue existed between the author, the reviewers, and the editors: what their concerns might have been and how the manuscript was refined. Classically, reviews and annotations were kept very private, and I think in the future some journals will share more of that information; by sharing the reviews and the annotations, it will help the scientific conversation about the topic.

I don't know if this is sacrilegious, but I don't think the answer to every problem in science needs to be annotation. I think the problem of better connecting data and code to papers is your t-shirt, right? Jupyter notebooks and R Markdown and related tools are much better suited for that purpose than trying to cram it into annotations. To the extent you want to, you can annotate your R Markdown or your Jupyter file, but for linking the software to the paper, use the tools that are designed for that; they're really kind of terrific.

No, I think that's a good way of looking at it: for the actual summing up of the software element of the research that went into producing the results, maybe use something else.
For Science in the Classroom, we definitely use annotation to help guide readers through how the whole research project was done (not just the software, but all of it), then how it relates back to the full body of work that informed the question in the first place, and finally how the information in that particular paper is used to inform the community going forward.

I guess one clarifying point here: the term "annotations" is a little bit vague. Using the loosest sense of "annotations," I can see them being very valuable across the board; whether they need to take a specific format and use a specific technology is another question. You know, gene annotations have been around since we had the genetic code, and those annotations obviously look very different from, say, a Hypothesis one. "Does it need to?" is a good question. But yes: annotate, please.

All right, we've got time for one more question. You are the lucky questioner.

All right, Nick. I want to speak now as a researcher who has reviewed many articles and had many of my articles reviewed. I just wanted to caution against getting stuck on the problem of having to train people to do annotation, and instead maybe focus the emphasis on this: all of you are building amazing tools, and I applaud you; how can you work together to make sure you're not creating 17 different, rather identical processes? No one trains anyone to do reviews; you learn how to do a review because someone asks you to do one, and you start doing them. So I wouldn't worry so much about training people. Your process is far more intuitive than something where you're reading a whole paper, taking out sentences and pasting them into a Word document, commenting on those, and then trying to create a synthetic review from everything. Annotations are far easier. Also, there's a huge thirst in the community for more of these annotations, and for something more like public review, where we actually get credit for making a paper better by giving constructive comments and annotations on the paper. So I would encourage you to look to all the upside and focus on the upside, because in the research community we want the tools you're building right now. Our process is kind of optimized for the Pony Express: you send a paper to an editor, they send it out to some other people, they send it back, and it takes a year. For science to iterate quickly, we need something like the tools you're developing. So please know that we love it, and focus on making the tools as similar as possible, so that when we do use them across different platforms, it's easy for us the second, third, and fourth time.

So, Joel, just to sum up: embrace the positive, try to make platforms similar across the board, don't reinvent the wheel, and keep going. Basically, from the research community's standpoint, the product that eJournalPress and the rest of us are kind of haphazardly cobbling together is much easier than trying to copy-paste into a Word document and create that kind of review document.

Did anybody have any closing questions? I thought that was a really great point to close on, so I will officially call an end to this, and I believe we now have a break.