As Linda mentioned, INCF has moved to a new initiative: the endorsement of standards and best practices. Maryann Martone, who chairs the Council for Training, Science and Infrastructure, among the many other things she does, will tell us a bit about that process.

Okay, this is on. Thank you, Matthew, for letting me present on this here. I've titled this talk Open, FAIR and Citable: INCF Standards and Best Practices Endorsement. I've been involved with INCF for many years; I served as the U.S. representative for a long time. One of the things INCF promises to deliver is exactly what Linda said: the standards and best practices we need for conducting global neuroscience initiatives, across many communities around the world. I've also been very active in FORCE11, the Future of Research Communications and e-Scholarship, which is where the FAIR data principles came from. And in the CTSI we started to really think about what distinguishes INCF from other organizations and initiatives, like Ebro or SfN. INCF has moved more and more toward open neuroscience; I think this is the place where open neuroscience is discussed most openly, and it is something we're all committed to. Also data sharing, and the need to make data findable, accessible, interoperable, and reusable. There's been a lot of work on this around the world; you heard a little from Michelle about the importance of things like unique identifiers. But the interoperable and reusable parts really require input from the community: they rely on community standards. And what are those community standards?

I put citable in the title because a big push at FORCE11 and elsewhere is to expand the notion of what constitutes research output beyond the article, to data products, code, and other things. There's been a lot of work around the world to say that these things need formal systems of citation, just as our articles have, so that we can track who uses them and give people credit. So we really wanted to brand INCF as the place for open, FAIR, and citable neuroscience, the place where these things get discussed, but also to help make them happen. Since community standards are at the core of FAIR, and it's up to a community to decide what those standards are, you really do need a place to work this through: what are the community standards we are going to adhere to? I've been very pleased at this meeting by the recognition, across all sectors, that it is standards that allow all the great science we've seen to happen, because they let data be exchanged far more easily than we could in the past.

In its initial incarnation, INCF was really meant to be the place where these standards were developed, and there has been some success there; Linda showed you some of what's gone on. But there are also standards being developed all over the place, by different people in the community, including many within INCF. And the tradition has been: let's bring all the groups together and create yet another standard, which will then replace the ones already there. From my experience in FORCE11 and elsewhere, I'm not sure that's the best way to go.
We should also take into account all the other standards and best practices that have been developed around the world. We should not be NIH, "not invented here," but PIE, "proudly invented elsewhere." One of the problems we have is that these things are available, but they're scattered all over the place, and we don't always know whether they work in our domain. If you were starting an infrastructure, or you were a funder interested in funding open infrastructure, or a researcher, or anybody else who said, "I want to find the best way to produce my data," where would you go to find this? Where would we describe and encapsulate what's available to you?

So in the CTSI last year, we decided that INCF should go a little further than just bringing people together and saying, here are some things we've developed, to actively taking on the role of a standards organization for neuroscience. We've done a lot of due diligence; a lot of people have been working very hard on this. There are organizations like the W3C and NISO, there are national standards bodies, and there is a way a standards body operates. We thought we should incorporate some of those procedures so we can actually evaluate these standards and best practices and then say: we've looked at these, we find that they are high quality, they serve a need, and we are going to actively endorse them.

A subcommittee of the CTSI, the Standards and Best Practices (SBP) committee, has spent the last few months working through this process: looking at things like governance, putting all of this in place, and testing it. You heard Russ Poldrack the other day say that BIDS was the first standard under consideration; we used it as our test case to develop this entire framework. There's a big "beta" on the slide because, again, we've only worked through this with one particular example. A call had gone out for SBPs to be nominated by the community, so anybody who thinks they have something that ought to be considered a standard for open, FAIR, and citable neuroscience is free to nominate it. The SBP committee has representatives from the governing and associated nodes; just like in standards organizations, if you're a member, you get a seat at the table, and if you're not, you don't. We decided to have two to three committee members review each nominated SBP in detail, against a set of criteria that I'll go through in a moment. But the idea was never that some committee over here decides all of this on its own. One of the reasons INCF is reorienting is to make sure the nodes and the broader community are involved, not just the people who happen to get selected for committees. So we put each nomination out for a period of community feedback. The feedback is considered, and then the committee votes: endorse, send back for revision, or reject if it doesn't meet the criteria we've laid out.
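To make that flow concrete, here is a minimal sketch of the nominate, review, feedback, vote pipeline just described, written in Python. Everything in it, the class names, fields, and decision rule, is illustrative only; it is not INCF tooling or an official encoding of the procedure.

```python
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    ENDORSE = "endorse"
    REVISE = "send back for revision"
    REJECT = "reject"

@dataclass
class Review:
    reviewer: str
    meets_criteria: bool      # passes the open/FAIR/citable/tooling/... checklist
    fixable: bool = True      # shortcomings look addressable in a revision

@dataclass
class Nomination:
    name: str                                # e.g. "BIDS", the first test case
    reviews: list = field(default_factory=list)
    community_feedback: list = field(default_factory=list)  # free-text comments

def decide(nom: Nomination) -> Decision:
    """Committee vote after detailed review and community feedback."""
    if nom.reviews and all(r.meets_criteria for r in nom.reviews):
        return Decision.ENDORSE
    if any(r.fixable for r in nom.reviews):
        return Decision.REVISE
    return Decision.REJECT

# A nomination reviewed in detail by two committee members.
bids = Nomination("BIDS", reviews=[Review("node A", True), Review("node B", True)])
print(decide(bids))                          # Decision.ENDORSE
```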
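And since BIDS is the test case that exercised this process, a quick hedged illustration of the kind of tooling that has grown around it (tooling being one of the review criteria discussed below): the pybids library can index and query a BIDS-organized dataset. The dataset path here is hypothetical.

```python
# Querying a BIDS dataset with pybids (pip install pybids).
# "/data/my_study" is a hypothetical path to any valid BIDS directory.
from bids import BIDSLayout

layout = BIDSLayout("/data/my_study")   # index the dataset's files
print(layout.get_subjects())            # subject labels, e.g. ['01', '02']
t1ws = layout.get(subject="01", suffix="T1w", extension=".nii.gz")
print([f.path for f in t1ws])           # that subject's T1-weighted scans
```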
So there is a period when the whole community is invited to weigh in, because even though the committee does its work, it won't catch everything. The idea, again, is that when something carries our endorsement, it's not just "some people here worked on it"; it signals a level of quality control that you can trust.

What happens if you get endorsed? The SBP will be listed as an INCF-endorsed standard on the INCF website. (We just noticed that the website's standards and best practices page needs a bit of updating; as usual we're behind, and we'll get to it.) We've created an "Endorsed by INCF" badge, and the author or steward, because a standard is often owned by many people, can display it. More importantly, INCF will take an active role in making sure the SBP is used in training materials, courses, and workshops; that is, we will actively promote it as part of the standard stack for open, FAIR, and citable neuroscience, and push for its broad use and dissemination. Also, as you'll hear in a moment and as Linda alluded to, there are community working groups that can work through INCF. Standards and best practices are never finished; they're constantly evolving, and, much as in the RDA, the Research Data Alliance, there are mechanisms for working groups to get some support to actively extend these things as the community wants.

I want to thank the SBP committee, all the members who have been actively working on this over the last few months; as you can see, we have representation from the different nodes. We were supposed to select a chair and never formally did, which I think is why USA is in there twice: I just shepherded the process along to get us going.

What are the review criteria? Open, obviously: as open as possible, recognizing that there sometimes need to be limits. FAIR: we go through the different attributes of FAIR and evaluate against each of them. Citable, which I already talked about; it's amazing how often people create these things with no easy way to know who contributed or how the thing should be cited inside a paper, and that's very important. But here's where the rubber really hits the road. We want to make sure there is relevant tooling and enough implementations that people who decide to adopt a standard could reasonably expect to be able to do so. We had a little debate about whether all associated tools should be open, and we decided no: we really want to encourage industry to use these things, so not every tool has to be open, but ideally some open tools would exist that anyone can use. We also ask for explicit governance. As Russ said when he talked yesterday, most of us do this organically: we all get together and it's a handshake. But at some point, if something is going to be broadly used, you need a governance model for how changes are made and how conflicts are adjudicated.
It doesn't need to be dictatorial, but you need to have thought it through and be able to say: there is a mechanism here for people to give feedback, there is something people can do. That's really important. Then maintenance and long-term sustainability. We know this is a huge problem everywhere; we talked about it a little yesterday, and it's something INCF will keep considering going forward. We know the funding for all of this is tenuous, but there are things you can do that make it more, or less, likely that something can be maintained and sustained in the long run. Many people on the committee have made these mistakes before, so there's a lot of experience here about what not to do and what you want to do. And then, and I think this is really important: clear evidence of adoption and use, where that adoption cannot come only from the group that developed it. There has to be evidence that it is actually in use by other people and that it is usable. That's where we have often fallen down: we produce these things and say, there they are, but if nobody can use them, if nobody can put them into action, they're not really a standard or a best practice. So INCF endorsement would ideally encompass all of these dimensions and say: yes, this is something you can trust. We can't guarantee that for every operating system and every condition, but part of what governance covers is that there's a help desk, there are tutorials, there are things that actually help you use it.

We also established a grievance procedure. As we all know, and you've heard the cartoon alluded to, I assume most of you have seen it before, the endorsement procedure is meant to be as open and transparent as we can make it, but conflicts of interest and competing standards are inevitable; it is almost impossible to get rid of them all. So you want to be as transparent as possible, but if somebody feels they were not treated well, or that something was not considered, or that something was pushed through at the expense of something else, there needs to be a procedure for saying: I would like to see this reviewed, because I think something went wrong. The SBP process aims at community consensus, but you don't always get it, so we wanted a formal grievance procedure to work through any conflicts that arise. We can't guarantee we'll resolve all of them to everyone's satisfaction, but we do want the process to be open and all the materials available. Right now we're working through Google Docs, though, as we'll discuss, we're debating whether it should move to a different platform.

So what are the next steps? We worked really hard to make sure that when we came to this meeting I could stand in front of you and say that all the basic steps of the process are in place, and I think that's true; again, I want to thank my committee for giving up their vacations and other things to give feedback. We've established a set of review criteria and we've established the process. Those are written up in documents, available on our Google Drive.
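Pulling together the review criteria described above (open, FAIR, citable, tooling, governance, sustainability, adoption), here is one way to express them as a simple checklist in code. The wording is paraphrased from the talk, and the function is just an illustration, not part of the actual review documents.

```python
# The review criteria from the talk expressed as a simple checklist.
# Wording paraphrased; this is an illustration, not an official INCF rubric.
REVIEW_CRITERIA = {
    "open":           "as open as possible, allowing for justified limits",
    "fair":           "evaluated against each of the FAIR attributes",
    "citable":        "named contributors and a stated way to cite it",
    "tooling":        "enough implementations that adopters can realistically use it",
    "governance":     "an explicit model for making changes and resolving conflicts",
    "sustainability": "a credible plan for long-term maintenance",
    "adoption":       "evidence of use beyond the group that developed it",
}

def unmet(candidate: dict) -> list:
    """Return the names of criteria a candidate SBP does not yet satisfy."""
    return [name for name in REVIEW_CRITERIA if not candidate.get(name, False)]

# Example: a candidate meeting everything except demonstrated external adoption.
candidate = {name: True for name in REVIEW_CRITERIA}
candidate["adoption"] = False
print(unmet(candidate))                      # ['adoption']
```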
I don't think there's a public link to those documents anywhere yet, but these slides will be available, and there is the standards contact at INCF, so if anybody can't find them, please let me know and we will get them out as soon as possible. We used BIDS as our first example of a standard to be posted for community feedback. We decided, although some of us did lobby for Google Docs, that F1000 was actually a better place, because we wanted some visibility. INCF has an F1000 channel, and we were rushing to get this up last night, but it turns out to be a little more complicated, so shortly after this meeting we will put our review of BIDS onto that site. There's a fast track and a slow track, and I suspect BIDS will be fairly fast-tracked, but we have to decide; in any case it will be open, and community feedback will be solicited. We will then get back to those who submitted it, and if it passes the endorsement procedure, we will endorse it and the stamp can be displayed. We are planning to write some manuscripts describing the INCF process, but we would really welcome anybody who is interested to take a look at what was produced and give us feedback and input, because this is all meant to be something that everybody in the INCF community is comfortable with, so everybody knows exactly how it works.

And I really do want to give a lot of acknowledgement. If I put all the individual names up here I was sure I would leave somebody off, so I decided to list the groups that have been involved. The CTSI spent some very intense, long meetings working this out; they put a lot of effort into looking at what other standards organizations do and into crafting a procedure that works for neuroscience. The SBP committee, obviously; the INCF community, as I mentioned; the staff and the governing board, who have been very involved, and Matthew in particular, who has helped support the committee; the standards organizations that met with us; and of course the people who responded to that first call and gave us our first set of standards and best practices. We want to assure all of you who did that they will be considered shortly, and we're hoping that by Society for Neuroscience we can display our first set of endorsements. We'll then be working on the website and how to present and organize these so that they're maximally useful. So I think that's it.

Any questions for Maryann?

Maybe not a question, but we talked about using the NPRC network to possibly lobby the journals to adopt the INCF-endorsed standards.

Yes.

And I think that would be a great idea, because obviously using those standards in workshops and training is a kind of advertisement, and that's good and useful. But no, we've got to go right to the journals. The strong arm would be a set of neuroscience journals saying: hey, if you're doing these sorts of things, you should probably adopt this.

I think the good news is that so many people are saying, yes, we want to be FAIR, yes, we have FAIR data. And if the neuroscience community, and why not INCF, which I think is uniquely poised to capitalize on this, becomes the place people look to when they ask "well, who says?", then we can answer: look, we've looked at this.
And I think the thing that allows that to happen is to expand beyond only what is invented here and say: look, these things are being used by people. Any projects, including some of the big brain projects, may have things that get nominated, and we do an independent review using these criteria. And while we're doing that, and it was interesting in this process with Russ and Chris, I'm sure they wouldn't mind me saying, going through these criteria raises questions like: how should we do governance? What is a good governance model? It's something we can all use to help each other make our things better. So I'm pretty pleased and excited about this opportunity, because I think we can be the place people go to find this. And then why wouldn't the journals use it? You said you wanted to be FAIR, and we're telling you what FAIR means here. So using that network is probably a good idea to promote these. Jeanette and I talked about this this morning; I think we discussed doing that at FORCE11, which is going to be here. I'm not sure we'll be ready yet, so we can talk about it later.

Yeah, that was the second thing I wanted to say: FORCE11 will be in Montreal, the 12th of October, just for those who are interested in those communities.

Any other comments or questions? I'm counting on you all to give us comments and feedback on our SBPs. Yes?

One comment: having a fast track and a slow track has the potential to give the impression that fast-tracked things are political in nature and less thoroughly vetted. So I don't know if it's a good idea to do that, but that's just a comment.

No, I think that's an interesting point.

In terms of your mechanism for endorsing standards: you said you take the community standard, review it, and vote, and the votes are endorse, revise, or reject. If it goes to revision, and whoever is behind that community standard doesn't accept your edits, what mechanism do you have in place? Do you just fork the standard?

So we worked out the process knowing there will be cases like this to consider, but what might happen is what usually happens with a paper or a grant: we're pretty good at getting feedback and then deciding what to do based on it. If there's something truly egregious that runs counter to the principles of INCF, we say: look, if this is not addressed, we really can't endorse you, and then they have the choice of doing it or not. That's the equivalent of major compulsory revisions. But in many cases it's: look, we raised this question, there isn't a fully satisfactory answer because nothing is perfect, and then one decides, if the community feedback is generally positive, whether to put it forward anyway. I think this is going to be the interesting part as this goes forward, because we can't anticipate all of these cases now, but they are almost sure to come up.

And with respect to the fast track and the slow track, that's a really interesting one, because any time you're allowed to make a decision, it becomes: well, who gets the fast track? We were thinking of things like ORCID, though: does it really have to go out for two months? Is one month sufficient for everybody, or is that not enough time?
Can we extend it? I think we have to see how this evolves, because I agree with you that having two different tracks is difficult. But if a standard is already used heavily in other areas, so we know it works, does it have to go through the long period of community comment? Talking to you now, my answer might be: well, why not, because it might not be right for neuroscience. But that was one of the reasons we had two tracks, to take care of things like ORCID.

Thank you. Yeah, thank you for the question. Yes?

Maryann, I'm interested in this issue you raised about grievance resolution. It seems relatively clear how you could get everybody's input into what should be considered as possible solutions, and mechanisms to let people say "what about me" and so on. It's not so clear to me where you go from there. You'd have to come up with some mechanism for objective comparison of performance, and then you'd have to deal with the issues of choice. I'm concerned about how one would operationalize that. We all know science doesn't operate in a linear fashion.

No, it doesn't. We've actually had this question; I think Stephen brought it up. If you have two competing standards in the community, and we have plenty of those, and they both meet the criteria, are both allowed to be endorsed? And Stephen said, well, they can't both be best practice, right? But I don't know. Again, I don't want to over-prescribe this, because we have to consider the cases. I could see a situation where the committee, between the community feedback and everything else, simply says: we can't come to a decision on this. And that's fine, because it's not as if these are the only things that will be listed; it just doesn't rise to the level of endorsement, because there isn't enough information, or there are three or four equally good options, and maybe the best solution is just to have translators between them, in which case, go ahead and pick one. These situations will come up, and what I think would be really useful is for the committee to summarize those discussions in the INCF channel and say: here's a recommendation, but right now there's no clear path.

Yeah, one of the things we're doing within the COMP, for instance, is trying to rise above the question of this one or that one, up to a level of APIs that let people develop in their own way while presenting a uniform front end to the user.

Which can be done, right, and is exactly what the user wants to see. When I've talked to the W3C, they're not claiming you have to translate everything into whatever they do; they're saying that if you put up a front end, you can export your things in their standard, and therefore you can convert. And that's typically what we've focused on here with Waxholm and others: you have an interchange language. But the fact of the matter is, these are going to be the interesting discussions, and that's why they should happen in a public forum where everybody can weigh in. It's not for the six or seven people on the committee to decide how this is going to work. Eventually it will shake itself out into something that is either doable or not doable.
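A tiny sketch of that "interchange language" idea: instead of endorsing format A over format B, each side maintains a translator to a shared representation. All field and format names here are invented for illustration.

```python
# Two invented metadata formats bridged through one shared interchange form,
# so neither format has to "win" for data to move between tools.
def format_a_to_common(record: dict) -> dict:
    return {"id": record["uid"], "label": record["name"]}

def common_to_format_b(common: dict) -> dict:
    return {"identifier": common["id"], "title": common["label"]}

def a_to_b(record: dict) -> dict:
    """Translate a format-A record to format B via the common representation."""
    return common_to_format_b(format_a_to_common(record))

print(a_to_b({"uid": "neuron-17", "name": "pyramidal cell"}))
# {'identifier': 'neuron-17', 'title': 'pyramidal cell'}
```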
But I think there is still a need to say: look, we've looked at these things, and we can say how this one compares with that one. When you get two that are equal, we might either say "no decision" or "you've got two good choices here," and maybe that's an opportunity for interchange rather than insisting on one or the other. The procedure really was a reaction to my own experience, having been in INCF for so long, of groups who felt they had something really good to contribute, and yet somehow everyone said: no, we're going to invent a new one, we're going to do number 11 or 15. We don't want that, but I also don't think we can be extremely prescriptive, which is why we said "standards and best practices" and why we developed our own procedure instead of simply saying we're going to operate like ISO or some other body. If we do develop things that are robust, there is a process whereby they could be pushed into the national standards bodies, and that would be really useful for instrument makers and others. Getting something endorsed by one of those bodies really helps move things into the commercial realm, and it's a real sign of the importance of what INCF does, that it helps foster standards-based commercial activity. Yes, up here?

Yeah, so I have a comment and a question. For SBPs, you showed that you will be looking for evidence of adoption and use, right? I see that as a bit of a catch-22: if it's already well adopted, does it need to be endorsed at all? I also think it would help to evaluate proposals on their potential for adoption, not just on demonstrated adoption; it is a lot of work to get something adopted, and helping people get it adopted is as useful as endorsing it at the end. Is that something to be considered?

I agree.

Right. The second question: getting community input is hard. Academics are notorious; nobody responds to surveys. How do we incentivize them to give feedback and input and actively participate? Because it's actually a lot of work.

It is, although I find that people will respond if you make it easy. One of the reasons I wanted to use Google Docs rather than Faculty of 1000 was to make it as easy as possible to give feedback: there, people will highlight something and say something quickly, whereas having to compose a formal comment really raises the barrier. So the jury is still out on that. We also wanted to make sure it was not just INCF, but that we would use all the networks people have in their various projects to put this out there. If there are groups we know of that are particularly interested in a given type of standard, we really want to target them and say: hey, let's do this. And we would certainly be open to any other ideas people have for incentivizing this. At least in the W3C and other places where we've done this, admittedly not in a neuroscience context, a really broad campaign does get you some feedback, provided you make it easy for people; if you make it hard, you drastically reduce the amount you get.
So I do think it's a problem we have to monitor, and if people have solutions, I would really like to hear them. As for the other point, if it's already used, why do we need to endorse it? In my experience we never get 100% coverage; first of all, people still don't know about these things. And it makes life easier for people coming into the field who are starting to create something new: I tell my developer to go off and do something, and what do they do? Whatever they already know. They don't know the field. So having a place where they can learn what the field's standards are matters. And then there are the journals, which have traditionally been reluctant to require anything based on one individual's say-so. Having some independent review lets them say: yes, we'll stand behind this, just as they did with the repository seal of approval. That seal is issued by an organization with no more power than any of us; they just said, we're going to come up with criteria for evaluating data repositories and we're going to issue a seal of approval. And all of a sudden journals were saying: well, this repository doesn't have the seal of approval, we can't use it. So I don't think that just because something is already in use, it isn't helpful to have an independent body brand it and say: yes, we've looked at this. We're not just an individual going to the journal saying "hey, you should use my thing," because they hear that all the time. So that's my answer, but good points.

Okay, well, anything else? Anybody else on the committee have anything to add? Yes, I did all the talking. No? All right, you'll hear from Wojtek next. Thank you. Thank you.