Hi, I'm Richard Soner. I'm presenting a brief demo, more of a highlight, of the neuroimaging data sharing and processing environment that we've been working on with the INCF Neuroimaging Data Sharing Task Force. Essentially, we've put together a loosely coupled system based on the NIDM data structure that lets users quickly query a semantic graph, visualize some of those results, identify a cohort they want to filter or examine further, and pass that cohort on to a remote or local processing environment. We really don't care which: we've provided an environment for heterogeneous computing that can run on your own machine or on a remote machine on Amazon EC2, and in the future it will tie into the INCF Dataspace for hosting the data.

Beyond the NIDM data model itself, the key questions we've worked through are how to pass information around and how to encode provenance in the data set. Take an example where I run a brain extraction on some remote data. That data is run through a standardized pipeline using Nipype, and the provenance is captured, encoded, and stored back in a Virtuoso graph database. From there, anyone else can query that database, see what's been run, and actually get access to my files, because a service exposes them at a public IP address.

We're still working on authentication, we're still working on authorization, and there are still a lot of rough edges. But as NIDM grows, we'll have a robust processing environment that can be used by researchers around the world.
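The cohort-query step described above amounts to sending a SPARQL query to the Virtuoso endpoint. A minimal Python sketch, assuming hypothetical NIDM-style predicate names (`nidm:hadAge`) and an example endpoint URL, neither of which comes from the talk:

```python
import json
import urllib.parse
import urllib.request

# Illustrative endpoint; a real deployment would point at the
# Virtuoso SPARQL endpoint exposed by the service.
ENDPOINT = "http://example.org/sparql"


def build_cohort_query(min_age):
    """Build a SPARQL query selecting subjects at or above min_age.

    The nidm: prefix and hadAge predicate are illustrative stand-ins,
    not the actual NIDM vocabulary.
    """
    return f"""
    PREFIX nidm: <http://example.org/nidm#>
    SELECT ?subject ?age
    WHERE {{
        ?subject nidm:hadAge ?age .
        FILTER (?age >= {int(min_age)})
    }}
    """


def run_query(query, endpoint=ENDPOINT):
    """POST the query to the SPARQL endpoint and return parsed JSON results."""
    data = urllib.parse.urlencode(
        {"query": query, "format": "application/sparql-results+json"}
    ).encode()
    with urllib.request.urlopen(urllib.request.Request(endpoint, data=data)) as resp:
        return json.load(resp)


print(build_cohort_query(18))
```

The result set from `run_query` would then identify the cohort that gets handed off to the local or remote processing environment.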
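The provenance capture step can be sketched in terms of W3C PROV-style records, which NIDM builds on: the brain-extraction run is an activity that used an input entity, generated an output entity, and was associated with an agent. A stdlib-only sketch with illustrative file names and identifiers (a real pipeline would emit RDF via Nipype's provenance support rather than raw JSON):

```python
import json
from datetime import datetime, timezone


def record_brain_extraction(input_file, output_file, agent="researcher-1"):
    """Encode a PROV-style record for one brain-extraction run.

    All identifiers here are illustrative; NIDM encodes provenance as
    RDF using the W3C PROV ontology before storing it in the graph DB.
    """
    started = datetime.now(timezone.utc).isoformat()
    return {
        "activity": {
            "id": "ex:bet-run-1",          # hypothetical activity id
            "type": "prov:Activity",
            "label": "brain extraction",
            "startedAtTime": started,
        },
        "used": {"id": input_file, "type": "prov:Entity"},
        "generated": {"id": output_file, "type": "prov:Entity"},
        "wasAssociatedWith": {"id": agent, "type": "prov:Agent"},
    }


record = record_brain_extraction("sub-01_T1w.nii.gz", "sub-01_brain.nii.gz")
print(json.dumps(record, indent=2))
```

Once a record like this is stored in the graph database, the "see what's been run" queries described above are just graph queries over these activity/entity/agent links.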