Thanks. In line with the INCF shift towards standards and best practices, I'd like to share some work we've been doing in Australia to create a national standard for neuroimaging informatics. This work was proposed as a project by the Australian National Imaging Facility, the national body through which a lot of the funding for imaging infrastructure in Australia flows. There are a number of different nodes, and most imaging centres in Australia get funding through the National Imaging Facility, so it's a good place to coordinate these kinds of efforts. In this project we had four universities contributing, looking at two MRI systems: one a small-animal Bruker 9.4T MRI and the other a human Siemens scanner, as two starting points to begin this process.

The aims of the project were to give users enough information that they could determine whether the data in these repositories is fit for their purpose. We also wanted to future-proof this, so that someone who comes back to this data in 10 years' time can decide whether it is of sufficient quality for whatever they want to do with it. To do that we need to make sure we're storing the appropriate metadata, and we've also started to standardise quality control and assurance, which is particularly important for the small-animal side of the project. On top of that, by working together to establish best practice we hope to improve the quality and reliability of the data being generated from our facilities, and through the project promote the FAIR principles and maximise the utility of the data.

It was a fairly short project for the scope we're dealing with here, so we're not saying we've got the final word on how everyone needs to acquire and store their data and keep all the provenance. It's a starting framework with which we can gradually build up best practices within the National Imaging Facility. We have documentation on how to certify the data as meeting these standards, we've documented some of the processes for acquiring and ingesting the data and how we store it in these repositories, and we've come up with two exemplar systems, one each for the human and preclinical scanners we were working with.

On the certification of the data: again, we're not trying to say that the data is of a particular quality. We just want to ensure there's enough metadata, and a link to quality control data, so people can make up their own minds about whether it's fit for their purpose.
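To make the certification idea concrete, here is a minimal sketch of the kind of per-dataset metadata manifest this implies: identifiers linking the data to the instrument, a cross-reference to QC data, and pointers to both raw and open-format copies. This is an editorial illustration, not the project's actual schema; every field name and value is hypothetical.

```python
# Minimal sketch of a per-dataset metadata manifest (illustrative only,
# not the project's actual schema). All field names/values are hypothetical.
import json

manifest = {
    # Unique identifier tying the dataset to the instrument it came from,
    # e.g. a persistent identifier in a national instrument registry.
    "instrument_pid": "https://example.org/instrument/bruker-94t-0001",
    "instrument_model": "Bruker BioSpec 94/30",
    # Cross-reference to quality control data for the relevant period,
    # so a future user can judge fitness for purpose themselves.
    "qc_reference": "https://example.org/qc/bruker-94t-0001/2019-06",
    # Pointers to the raw data (full acquisition protocol preserved)
    # and an open-format conversion for broad accessibility.
    "raw_data": "repository://studies/s0042/raw/",
    "open_format_data": "repository://studies/s0042/nifti/",
    "acquisition_date": "2019-06-14",
    # Link to a real-world person via institutional authentication.
    "depositor": "https://orcid.org/0000-0000-0000-0000",
}

with open("manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```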
We make sure there are unique identifiers tying the data to the instruments involved, and cross-references to the quality control data. Again, this is more critical for the small-animal data, which is typically less standardised than your clinical systems: we make sure there is always access to the raw data with all the acquisition protocols, so you can do whatever you need to do with the data, but on the other hand we also provide conversions to open data formats so the data is as widely accessible as possible.

On the acquisition and ingestion procedure, we initially looked at requiring an automated process, but in some situations that's not possible, so as a lowest common denominator we say that the process at least needs to be documented, and automated where possible. All the instruments the data is collected from are registered on the central Research Data Australia website, so you know exactly the model of the instrument the data was collected on. At the moment the data is collected that's probably not such a big issue, but if we're looking to store this data long-term, then in 15 to 20 years' time someone may not know whether a given instrument at Monash Biomedical Imaging was one particular model or another, so it's really about making sure this data is as useful as possible long into the future. We also require a regular quality control schedule, and some very basic quality assurance just to make sure the data is complete and reconstructable and there weren't any errors in the ingestion process (a sketch of what such a check might look like follows below).

We also looked to formalise the way the data is stored. We have this idea of a data repository service, not a particular repository, because we want to be flexible: if new technology comes along, you can switch your repository to something else, but the service itself is guaranteed for 10 years. This was part of the project, and it forced us to go to all the different universities and say, look, as part of this project we want to make sure this data is preserved into the future. It was a good way to have that discussion with each university and get the funding guaranteed for a longer period. We also wanted to make sure we have institutional authentication, so there is a link to a real-world person for all the data that was collected, and a way to identify whether the data in the repository meets the metadata standards we were setting.

Once we'd set up this documentation, and again we were just looking at the bare bones to start off with, in the hope that over time it matures into something more useful, each node looked at the setup it had and did a self-assessment against the CoreTrustSeal certification, which was set up by the World Data System and the Data Seal of Approval organisations. It's basically a checklist of about 10 or 12 points against which you can see how your data repository matches up, and it's essentially based on the FAIR principles, so it's a good process to go through to explicitly find out where the weaknesses in your repository or system may be.
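As a rough illustration of that basic quality assurance step, here is a sketch that checks each DICOM series for missing instances and then converts to an open format. It assumes DICOM input with pydicom and the dcm2niix converter available; the paths and helper names are hypothetical, not the project's actual code.

```python
# Sketch of a basic ingestion QA step: verify each DICOM series is complete
# (no missing instances), then convert to compressed NIfTI with dcm2niix.
# Assumes files carry a .dcm extension; helper names are illustrative.
import subprocess
from collections import defaultdict
from pathlib import Path

from pydicom import dcmread


def check_series_complete(dicom_dir: Path) -> dict:
    """Group files by SeriesInstanceUID; check instance numbers are contiguous."""
    series = defaultdict(set)
    for path in dicom_dir.rglob("*.dcm"):
        ds = dcmread(path, stop_before_pixels=True)  # headers only, faster
        series[ds.SeriesInstanceUID].add(int(ds.InstanceNumber))
    return {
        uid: instances == set(range(min(instances), max(instances) + 1))
        for uid, instances in series.items()
    }


def convert_to_nifti(dicom_dir: Path, out_dir: Path) -> None:
    """Convert to gzipped NIfTI with dcm2niix (-z y enables compression)."""
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["dcm2niix", "-z", "y", "-o", str(out_dir), str(dicom_dir)],
        check=True,
    )


if __name__ == "__main__":
    results = check_series_complete(Path("incoming/study0042"))
    if results and all(results.values()):
        convert_to_nifti(Path("incoming/study0042"), Path("open/study0042"))
    else:
        print("Incomplete series:", [u for u, ok in results.items() if not ok])
```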
We also looked to formalise commitments and guarantees around data access.

We worked on two separate exemplars, one for each modality, mainly because a lot of the data from the clinical side is in DICOM format, so a specialised repository such as XNAT makes a lot of sense, whereas on the preclinical side the data is much less structured. We worked from XNAT because it was a good starting point, and we developed some plugins to allow OpenID Connect authentication with XNAT so we can hook into the national authentication federation. We worked on ways to upload the raw data, and developed a few simple pipelines for extracting quality control metrics and doing quality assurance. Likewise, for the preclinical exemplar we used a different repository, MyTardis, which was developed at Monash University and is a bit more general, and again we worked on some post-ingest filters for data validation and QC analysis.

One of the key products of this project, which I think will have some real benefit to other nodes within the national facility, is that we developed Docker Compose scripts for both of these exemplars. For those who aren't familiar with Docker Compose, it's a script which instantiates a bunch of Docker containers and links them all together, so once you've installed Docker and Docker Compose on your system, with one or two commands you can pull up a data repository like XNAT, for example (see the sketch below). We were able to put together a very simple one-page set of instructions: go to the Australian Research Data Commons, which provides virtual machines and data storage, apply for that storage, install Docker on it, and run the scripts, and you're up and running with a data repository. It's like a starter kit that other nodes, which maybe don't have this level of informatics support, can adopt very easily. One of the benefits in our context is that if everyone is using the same configuration, then we can assist each other a lot better and help unify the National Imaging Facility, even though the nodes are quite separate.

In future, as I said, this is just a basic framework at the moment, but we're looking to really push the ability to publish data, to build de-identification into the ingest procedures so it's very simple for users to publish their data, and also to look at some more in-depth quality control and assurance. This was a joint project between the University of Western Australia, Monash University, the University of Queensland and the University of New South Wales. Thanks.

Q: Let's say you wanted to be interoperable with the BIDS format in your infrastructure. How much work, and what kind of work, do you think would need to be done?

A: Well, hopefully there will be some easy ways to export from XNAT to BIDS, for example. It should be easy to create a pipeline wrapped around something like heudiconv which could export to BIDS, and then users can download the data and run BIDS Apps on it. It shouldn't be too much work.
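As a sketch of the interoperability path suggested in that answer, the following converts a DICOM export to BIDS with heudiconv and then runs a containerised BIDS App (MRIQC here) on the result. The paths, subject ID, heuristic file, and image tag are hypothetical placeholders, not an existing pipeline.

```python
# Sketch: DICOM export -> BIDS via heudiconv, then run a BIDS App on it.
# All paths and the heuristic file are hypothetical placeholders.
import subprocess
from pathlib import Path

# 1. Convert DICOMs to a BIDS dataset (assumes a site-specific heuristic.py).
subprocess.run(
    [
        "heudiconv",
        "--files", "export/sub-01/dicom",
        "--subjects", "01",
        "--heuristic", "heuristic.py",
        "--converter", "dcm2niix",
        "--bids",
        "--outdir", "bids_dataset",
    ],
    check=True,
)

# 2. Run a BIDS App (MRIQC) on the resulting dataset via Docker.
#    Docker bind mounts need absolute paths, hence Path.cwd().
subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{Path.cwd() / 'bids_dataset'}:/data:ro",
        "-v", f"{Path.cwd() / 'mriqc_out'}:/out",
        "nipreps/mriqc:latest", "/data", "/out", "participant",
    ],
    check=True,
)
```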
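Returning to the Docker Compose starter kit mentioned earlier: the sketch below shows the shape of such a bring-up, loosely modelled on the public NRG xnat-docker-compose project. The service names, images, and settings are illustrative assumptions; a real deployment also needs the XNAT web application deployed into the servlet container and proper secrets management.

```python
# Sketch of a Compose-based "starter kit" bring-up. Image names, versions
# and settings are illustrative assumptions, not the project's actual files.
import subprocess
from pathlib import Path

COMPOSE = """\
services:
  xnat-db:
    image: postgres:12          # database backing the repository
    environment:
      POSTGRES_USER: xnat
      POSTGRES_PASSWORD: xnat   # placeholder; use real secrets in practice
  xnat-web:
    image: tomcat:9-jdk8        # servlet container hosting the XNAT webapp
    depends_on:
      - xnat-db
    ports:
      - "8080:8080"
"""

Path("docker-compose.yml").write_text(COMPOSE)
# One command instantiates and links all the containers.
subprocess.run(["docker", "compose", "up", "-d"], check=True)
```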
Q: [inaudible]

A: Yes, I think it works quite well. We worked with the Australian Access Federation on that, to have OpenID support, which basically means anyone with an institutional login in Australia can log into any of the repositories. You can also hook it up to Google or something like that, which simplifies things from a maintenance point of view at least, and is a nice way for people to access these repositories.
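For scripted access to such a repository (as opposed to the interactive federation login, which happens in the browser), a user might talk to XNAT's REST API along these lines. The hostname and credentials are placeholders; this is a generic sketch of XNAT's documented session and project endpoints, not the project's own tooling.

```python
# Sketch of scripted access to an XNAT repository's REST API with requests.
# Hostname and credentials are placeholders; OIDC logins happen in the
# browser, so scripts typically use basic auth or XNAT alias tokens instead.
import requests

HOST = "https://xnat.example.edu.au"  # hypothetical repository URL

# Open a session: XNAT sets a JSESSIONID cookie that authenticates later calls.
session = requests.Session()
resp = session.post(f"{HOST}/data/JSESSION", auth=("username", "password"))
resp.raise_for_status()

# List the projects visible to this user.
projects = session.get(f"{HOST}/data/projects", params={"format": "json"})
projects.raise_for_status()
for row in projects.json()["ResultSet"]["Result"]:
    print(row["ID"], row.get("name", ""))
```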