So, what I wanted to do now is bring the level down from the very, very technical information that was just presented by the two previous speakers to something that we all do, which is publishing. I wanted to show you some of the things we're doing around publishing that will hopefully affect some of you in the future. We have been told by various organizations, including here in Nature in this special section, that we are not doing a fantastic job of reproducing the results that we publish. Some of those failures to reproduce have been things like the version of the Macintosh operating system actually affecting measured cortical thickness. This is real; it is one of those things we would like to avoid, but it does actually happen. And, as has been reported very nicely, there are a lot of problems with antibodies in terms of the ability to reproduce results, among other things. All of these affect science in a pretty significant way. Now, what I would like to show you here is a fairly typical paper. There are some antibodies buried in that teeny, tiny bit of the methods, which has been blown up here in the main section, and here's an antibody against actin. Now, what I might be asking as a researcher is: what are the studies that use my particular monoclonal mouse antibody? There it is, I found it in that particular paper. I go to Sigma, and lo and behold, there are multiple matching antibodies, all of which I would now have to purchase to try to recreate the study, right? This is bad, but surely this cannot be the state of the art. Well, our colleagues at OHSU took this on as a question, and the hypothesis was that, in fact, this was the norm. They looked across five different domains of biological science, across journals with different impact factors, and pulled out lots of antibodies, lots of journals, lots of papers.
And lo and behold, we are not doing all that well. What this graph is showing are the individual reproducibilities, or identifiabilities, of these reagents: antibodies, cell lines, constructs, knockdown reagents, and organisms. The fraction here is out of the total number of papers and the total number of, for example, antibodies found in those papers by a curator. Fewer than 50% could actually be traced back to the manufacturer's website. So this is not great, and these are fairly recent papers. So what we've done is create a pilot project, which also answered a very important question for me, one that INCF was kind enough to answer: the way to an editor's heart is actually through their stomach. We asked the editors-in-chief of multiple journals in the neuroscience domain to come in for a nice meal and discuss this topic. Out of this, and a couple of follow-ups at both the Society for Neuroscience and at the NIH, we got together a group of journal editors who were willing to tackle this problem. They agreed on the kinds of entities, the kinds of infrastructure, and the kinds of procedures that would be used for this pilot project. We established a working group; here are some of the members, and there are many, many more, some of them in this room. Thank you all very much. The pilot project involves looking at software and databases, at antibodies, and at model organisms, especially transgenic animals. And what we ask of all the authors participating in the pilot is to include the unique identifiers inside their methods section.
And this is what the editors ask these authors for. Participation has been voluntary for authors; the journals, we decided, should not have to modify their systems in order to participate, and we leave it entirely up to the journals when and how they ask. One of the technology pieces that had to be built, because we didn't want authors to go to 10 different websites from 10 different model organism communities and all of these other places, is a single portal. This is built on our new SciCrunch infrastructure, and it puts all of the data from the model organism databases, the Antibody Registry, and the NIF Registry together in one place. Authors basically go into the portal, find their particular resource, and hit the cite button, which opens a little dialog box whose contents they can copy and paste into their methods section. So this is uniform, and it's implemented across most of the major publishers, everyone who is participating. Now that we have concluded about the fifth month of this pilot project, what we found is that 100 articles have just appeared in Google Scholar, so you can search for these right now on your laptops. You basically go to Google Scholar, put in RRID, and sort by date, restricting to English; that usually works a little better. You also want to search everything, not just abstracts. What we find is that we can pull back all the papers that used a particular resource or a particular antibody. In this case, this is a Chemicon antibody used in these four different papers, all identified by this AB90755 antibody ID. You can see that these people cited that antibody from Chemicon, which, by the way, was just published within the last two months.
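The kind of lookup described above, finding every paper that cites a particular resource by its identifier, works because the identifier is a fixed, machine-matchable string. As a minimal sketch, here is how one might pull RRID-style identifiers out of a methods section with a regular expression. The example text, the antibody and software accessions in it, and the exact `RRID:` syntax (prefix plus an underscore-separated registry code) are illustrative assumptions, not the pilot's official specification.

```python
import re

# Hypothetical methods-section text; the specific RRIDs shown here
# are made up for illustration.
methods_text = (
    "Actin was detected with a mouse monoclonal anti-actin antibody "
    "(Chemicon, RRID:AB_90755) and images were analyzed in ImageJ "
    "(RRID:SCR_003070)."
)

# Assumed pattern: "RRID:" prefix, then a registry code and accession.
rrid_pattern = re.compile(r"RRID:([A-Z]+_\w+)")

def extract_rrids(text):
    """Return every RRID-style identifier found in a block of text."""
    return rrid_pattern.findall(text)

print(extract_rrids(methods_text))  # → ['AB_90755', 'SCR_003070']
```

Because the identifier is uniform while vendor names drift (Chemicon, Millipore Chemicon, and so on), matching on the ID rather than on free text is what makes the aggregation reliable.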
This company has been out of business for over 10 years. This paper published the same antibody from Millipore; it just so happens that Millipore took over Chemicon about 10 years ago. Now this person is publishing a Millipore Chemicon antibody, and in this one Millipore does not actually appear at all. But the analysis here shows that in all cases the ID is the same and is being uniformly applied, whereas the other portions of the citation are really a lot more free-texty. So what we have is 100 papers out of 15 different journals, in which we've identified 630 RRIDs used by authors. Amazingly, only three were removed by typesetting. 95% of the authors' own ID assignments were actually correct, which we feel is a very good amount of correctness. There is a false negative rate, roughly 14%, although that number is not as solid as the others. The Antibody Registry, for example, which holds 2.2 million antibodies with IDs, had about 200 antibodies added during this pilot phase, and in terms of software tools, about 75 were added. So the pilot is really driving registration to these registries and resources. There have also been several mice registered, and at least one rat that went over to the RGD. All of this data is available freely right now, pre-publication, at the FORCE11 website. And this next result I just got from Dr. Vasilevsky on Friday, so I thought I'd share it with you today. Dr. Vasilevsky was our colleague at OHSU, and what she was able to do is go back and apply her same standards for identifiability, and it looks like we're doing better with the journals that are actually participating in the pilot in terms of how much identifiability we get for all three of these types of tools. Really quickly: Elsevier has just added our resolver service to ScienceDirect, and that should be coming out in the next week. So these RRIDs are able to be put into papers.
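The resolver service mentioned above turns a bare RRID into a link that lands on the resource's registry record. As a rough sketch of the idea, one might build such links as follows; the base URL is an assumption modeled on the SciCrunch resolver discussed in the talk, and the exact production endpoint may differ.

```python
# Assumed resolver endpoint, modeled on the SciCrunch resolver;
# the real service's URL scheme may differ.
RESOLVER_BASE = "https://scicrunch.org/resolver/"

def resolver_url(rrid):
    """Turn a bare RRID (with or without the RRID: prefix) into a
    resolver hyperlink."""
    if not rrid.startswith("RRID:"):
        rrid = "RRID:" + rrid
    return RESOLVER_BASE + rrid

print(resolver_url("AB_90755"))
# → https://scicrunch.org/resolver/RRID:AB_90755
```

Embedding links of this shape in the HTML of a published article is what lets a reader click straight through from a methods section to the registry entry for the antibody, tool, or animal.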
They are able to be found, authors are not complaining too much, and it seems like this is a really good way to go. If you like this project, and I'm sorry I'm going over, please help. I assume that all of you are authors; perhaps you can add one of these identifiers. And if you are an editor, please come talk to me, because there is still time to participate. All right, thank you very much.

Who's responsible for making the RRIDs? The people making the RRIDs are essentially whatever community is in charge of producing those identifiers. So for mice it's MGI, for rats it's RGD, and for the Antibody Registry, the Antibody Registry does that, and that's us.

And what was the reason for using RRIDs instead of, for instance, digital object identifiers? Well, the databases already create these identifiers, and minting a DOI for a mouse doesn't make much sense, because a mouse is not a digital object. Mice are typically fuzzy objects. So, thank you.