Here I would like to show you a tool that we developed together with a number of other people, so it has really been a collaborative effort to implement it in neuroimaging. Automatic analysis, or AA, is a neuroimaging pipeline developed in MATLAB, and it can process various modalities and image types: MRI data from structural, fMRI, DTI or DKI and magnetization transfer imaging acquisitions, as well as MEG and EEG data. It implements most functions from SPM, and the development actually started as a wrapper around SPM functions, but it also has interfaces to several functions of FSL and FreeSurfer and to other solutions and tools developed either in our lab or elsewhere. For example, we have our own non-linear fitting of the diffusion tensor, and diffusion kurtosis imaging is also implemented in-house, whereas, for example, the denoising and the motion fingerprint tools came from other people. AA is available as code on GitHub, so it is open source, and a Docker container is also in development in the context of the Center for Reproducible Neuroscience's BIDS Apps project.

AA provides automatic, transparent and replicable workflows, because it is a collection of best practices. These recipes, task lists or workflows, as we call them, can of course grow continuously. It also captures some provenance, as you can see in the figure, and this provenance, as well as the task list, can be recycled and published. It is very efficient: it supports parallel processing, so all modules that are independent, for example across subjects or across sessions, can be run in parallel if you have a cluster configured. It also tracks the processes, so if a run crashes for some reason, or because of network issues, it can simply pick up and resume from where it stopped. It supports site- or study-specific configurations and defaults. You can even connect particular workflows from another AA analysis: one typical use case is a multimodal study where you have separate pipelines for the different modalities and you combine them into one; another is a study where a common preprocessing workflow provides a processed dataset and you want to fit different models to ask different scientific questions.

AA is a high-level implementation and description of the analysis, which can easily be shared or published. It requires basically only two files to specify and run the analysis. One is an XML-based task list, which describes the series of modules to be executed. It is easy to edit and easy to reorder, and it looks like this: it describes all the steps, starting from identifying your data in your database if it is DICOM and getting the data, then the structural part, the functional part and so on. So if you want to do, for example, slice timing before realign and unwarp, you simply swap the two lines and that's it. The other part is the actual MATLAB script, the user master script, which is the executable part. It first loads the task list and the configuration settings, then you can modify the default settings, which are stored in the XML files: you can have a different smoothing kernel if you want, a different basis function for your model, and so on. Then you specify where you want to put the analysis, you specify the data, you specify the model, and you run the analysis.
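To make the XML task list a bit more concrete, a minimal fragment might look roughly like the sketch below; the exact schema and module names depend on the AA version, so treat this as an illustration rather than a verbatim recipe.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Approximate shape of an AA task list: each <module> entry names one
     processing step, so reordering the analysis is just reordering lines. -->
<aap>
  <tasklist>
    <main>
      <module><name>aamod_autoidentifyseries</name></module>  <!-- find the series in the DICOM database -->
      <module><name>aamod_get_dicom_structural</name></module>
      <module><name>aamod_get_dicom_epi</name></module>
      <module><name>aamod_convert_structural</name></module>
      <module><name>aamod_convert_epi</name></module>
      <module><name>aamod_realignunwarp</name></module>       <!-- swap with the next line to run slice timing first -->
      <module><name>aamod_slicetiming</name></module>
      <module><name>aamod_coreg_extended</name></module>
      <module><name>aamod_norm_write</name></module>
      <module><name>aamod_smooth</name></module>
      <module><name>aamod_firstlevel_model</name></module>
      <module><name>aamod_firstlevel_contrasts</name></module>
    </main>
  </tasklist>
</aap>
```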
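And the user master script might look roughly like the following MATLAB sketch; the paths, subject and event names, onsets and contrast definitions here are placeholders rather than values from the talk, and the exact helper-function signatures vary between AA versions.

```matlab
% Minimal AA user master script (illustrative sketch).

% Load the parameter defaults and the task list into the aap structure.
aap = aarecipe('aap_parameters_defaults.xml', 'aap_tasklist_fmri.xml');

% Override selected module defaults that are stored in the module XML files.
aap.tasksettings.aamod_smooth.FWHM = 8;          % e.g. a different smoothing kernel

% Where the analysis should be written.
aap.acq_details.root = '/path/to/analyses';      % placeholder path
aap.directory_conventions.analysisid = 'my_fmri_study';

% Specify the data and the model (all names, onsets and durations are placeholders).
aap = aas_addsubject(aap, 'sub01', 'S01_raw_data');
aap = aas_addsession(aap, 'task');
aap = aas_addevent(aap, 'aamod_firstlevel_model', '*', '*', 'stimulus', [0 30 60], 15);
aap = aas_addcontrast(aap, 'aamod_firstlevel_contrasts', '*', 'sameforallsessions', 1, 'stim', 'T');

% Everything above only fills the aap structure; this call runs the pipeline.
aa_doprocessing(aap);
```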
The modules are the independent units of the workflow. Each module has a header that defines the domain, that is, the level at which the module is executed, and it looks like this: it tells you that this is a realign and unwarp module, so it works on every subject independently; it takes the EPI data and the fieldmap as inputs and generates the realignment parameters and so on as outputs; and it holds the parameter defaults, which are loaded but can be customized in your user master script. The modules are written to be independent, so they can be processed in parallel on a cluster or in the cloud. The other part of a module is the actual M-file, which applies the settings to the data, and the modules are also written so that they can easily be recycled: if you perform the same operation, smoothing is a typical example, on fMRI data or on structural data, you can just write a new XML file that provides different inputs to the same script.

Streams, which you just saw, are the independent units of the data. They are explicitly defined by the modules as their inputs and outputs, they represent the workflow, and they also capture some of the provenance. Going back to the realign and unwarp module: we take the fieldmap, which comes from the fieldmap-to-VDM module, and the EPI, which comes from the convert-EPI module; we then generate the mean EPI, which goes to coregistration, the realignment parameters, which go to the first-level model, and the EPI again, which goes to slice timing. So the streams represent the flow of the data, and they also help to define how to distribute the jobs for parallel computation. And because we refer to the data as streams, we do not have to worry about file names: when you run SPM, for example, your files get a prefix of 'u' or 'r' after realignment, depending on which kind of realignment you are using, whereas in AA, if you switch the modules around, the next module simply looks for the EPI stream, whatever the file is called.
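To illustrate the module header described above, a realign-and-unwarp-like header might have roughly the following shape; the tags, default values and stream names shown here are approximations and differ between AA versions.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Approximate AA module header: the domain tells AA at which level the module
     runs, the defaults can be overridden from the user master script, and the
     streams connect this module to its neighbours in the task list. -->
<aap>
  <tasklist>
    <currenttask domain="subject" desc="SPM realign and unwarp" modality="MRI">
      <!-- Parameter defaults (illustrative values). -->
      <quality>0.9</quality>
      <separation>4</separation>
      <inputstreams>
        <stream>epi</stream>
        <stream>fieldmap</stream>
      </inputstreams>
      <outputstreams>
        <stream>epi</stream>
        <stream>meanepi</stream>
        <stream>realignment_parameter</stream>
      </outputstreams>
    </currenttask>
  </tasklist>
</aap>
```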
As for the input, an AA analysis can start from scratch with DICOM data, and it can also process NIfTI data, but it is BIDS-aware as well: you can simply take the BIDS dataset we heard about before into AA and do the analysis. With just two simple lines you set the root directory of your BIDS dataset and call the command that puts everything, structural, functional, field map and diffusion, into the pipeline. It will automatically add the subjects and sessions available; if you have multiple sessions, you can specify whether you want to treat them as different subjects or combine them in a within-subject model; it also allows you to sub-select different sets of subjects or sessions; and it adds all the events. AA also has a BIDS export, which means that if you process your data in AA starting from DICOM, for example, you can export all the raw data into BIDS with one command and upload it.

For running, if you use cluster execution, AA also lets you monitor all the jobs: there is a GUI that shows all the jobs that are currently running, gives information about the cluster you are using, and shows the individual jobs.

The outputs, the diagnostics, are probably the most important part and, I think, one of the key strengths of AA. It generates, for example, between-subject summaries and descriptive statistics to identify outliers. Take the motion correction summary: it shows box plots of the motion correction parameters across subjects. One rule of thumb says that if a subject moves more than 2 mm in one direction we consider it excessive motion, but what is more important is the heterogeneity in the data: if all the subjects show little motion, even below half a millimetre, and one subject moved, say, one and a half millimetres, that is also heterogeneity you may need to tackle. There is also a summary of all the registrations, so you can easily browse through it and pick out a subject that was not registered properly, and so on. AA also provides a graphical representation of all the first-level contrasts together with an efficiency estimate, and, most importantly, it generates the activation maps overlaid on slices along all three axes. It also has a FreeSurfer module, so if you set up a module to process the data in FreeSurfer within AA, you can combine the FreeSurfer surface with an activation map, and it produces an activation map projected onto the FreeSurfer surface.

Here you can find all the information about AA: we have a website, our paper is out, and you can find information on GitHub and on the Cambridge MRC CBU wiki pages as well. You can of course always ask the people who developed AA; they are keen to help you. Thank you very much for your attention.

Thank you very much, Tibor, fascinating, I love that last slide. Questions for Tibor?
Yes. Hi, it is very nice to see the development of AA; I was one of the lead users of the wrapper back in Cambridge. My main point, since we are really talking about bugs in software and things like that: isn't a wrapper, whether AA, Nipype or anything else, an additional layer for the end user and a source of easy mistakes? I appreciate the possibility of replicability, of keeping a record of your analysis with it, but what is your general comment about having additional wrappers around software where you do not know what is happening inside?

I think it is a very important issue, and it was also mentioned in GV's talk about comparing black boxes and glass boxes. Whenever you use a wrapper, whenever you use anything, you always have to know what is happening inside it. I think the wrapper has two roles here. One is the easy execution of the program: to stay with SPM, you do not have to know which button to click and where to find the parameters. The other is that with a wrapper such as AA you have all your parameters defined in one place. If you remember the user master script, the whole script basically does nothing but fill up the aap structure, and the actual run is just the last command, aa_doprocessing, applied to that structure, so you can always go back and check all the parameters that were used by SPM; you can always see what was happening and what you were using. But I agree, and that is why the diagnostics are so important. When I first did the FSL course, in 2009 in San Francisco, one of the most cited lines was 'always look at your data', and I think that is very important, because then you can actually see how your data change with each operation. AA also has a module, tsdiffana, which performs a kind of QA and can be repeated throughout the workflow, so you can see what your data look like before motion correction, after motion correction, after slice timing and so on.

Tibor, I have a contextual question. You described a workflow, a generic workflow environment. How would you distinguish its characteristics from other workflow environments that are already out there? The LONI Pipeline might be one example.

I think they all serve similar or the same purposes, and they are somewhat different efforts, so all of them have different strengths and different features. Take the LONI Pipeline, for example: I saw it as well when I was at UCLA, and I think it is quite impressive with all the GUIs and the clicking, and it probably has an easy start. But what I think is the most direct, you may say, competitor, the closest kin to AA, is Nipype, which is Python-based. I think it is more about what you are used to and what your main analysis is: Nipype, for example, is mostly based on FSL and the FSL functionalities, although it can use SPM too, whereas AA is more focused on SPM and uses FSL functions too. So it is more of a pragmatic choice, whichever you are more used to, and the most important thing, again, is interoperability. I do not think there is one solution for all, or one winner here: there is FSL, there is SPM, and we could just as well ask what the difference is between FSL and SPM or BrainVoyager. As far as I can see, BrainVoyager is, in philosophy, more similar to the LONI Pipeline. And I think
the main question is whether you can find some kind of interoperability between them.