to, as Paul said, the MASSIVE project and the Characterisation Virtual Laboratory. These are very much both infrastructure projects. The focus of these infrastructure projects, though, is very much on usability. And although they are not dedicated neuroinformatics projects, they have a very, very large group of neuroinformatics researchers. And we have worked very hard to adapt them in such a way that the long tail of neuroscience researchers begins to use them. And I think we've been reasonably successful as well. So the first project is the MASSIVE project, the Multi-modal Australian ScienceS Imaging and Visualisation Environment. This is a high-performance computing facility originally built specifically for imaging and visualisation, but very much focused on instruments, the data producers that just about every scientist is now using, with a very strong focus as well, as I mentioned, on neuroinformatics and molecular imaging. Those are two areas from which we have very, very large cohorts of users. The ARC Centre of Excellence for Integrative Brain Function is, as of recently, an affiliate member of MASSIVE. And then the Characterisation Virtual Laboratory, which is a cloud-based project funded by NeCTAR, the Australian research cloud program. And that's very much about connecting instruments with centralised compute. Whether it's cloud or HPC, it's about getting scientists to use national infrastructure more easily, including instruments, compute, and data storage. The user base of these two projects is literally thousands of researchers across hundreds of institutions, because we do capture a lot of users through the instruments. And again, as I said, a very large cohort of neuroscience researchers; probably about a quarter of our community is actually neuroscience.
Paul's going to present a little bit of this, which is the Monash research centre view of the 21st century microscope: a microscope is not just limited to the microscope itself. It now encompasses the analysis, the processing, the visualisation, the networks, and all of these sorts of things. And that's really where we place ourselves: MASSIVE is the engine for the 21st century microscope. So it's a high-performance computing facility. It has a number of partners (Monash, the Australian Synchrotron, and CSIRO) and affiliate partners including Centres of Excellence. It is a regular HPC facility; it does a lot of the regular compute that many other HPC facilities do as well. But it also has a number of special programs, which we run in order to make it as easy as possible for new users to use high-performance computing, and for a cohort of users that wouldn't otherwise use centralised compute to start using the power of high-performance computing. The first is interactive visualisation. It provides desktop access, which has been very successful, and I'll demonstrate that a little bit later. It also provides an instrument integration program. So we have actual experts who are dedicated to talking to facilities and understanding what value we can add to a new microscope, perhaps, or to a beamline at the Australian Synchrotron. Many of the things that we're trying to do are actually in-experiment. So this is a photo that I think best represents MASSIVE: it's a researcher called Andreas Fouras using the Australian Synchrotron, the IMBL, the Imaging and Medical Beamline, to do some lung imaging in this case. And he's using MASSIVE: he's doing processing, analysis, and visualisation of very high-resolution lung images during his experiment. And he does that because it's the only way that he can work out whether his experiment is working or not. So these are the sorts of workflows that we're dedicated to supporting.
We run a supercomputer. I'm not going to go into the technical details of it; it's not actually a very big computer, so maybe MASSIVE is a bit of a misnomer. But it is very much the way we run it. So we do run an instrument integration program. We have a wealth of instruments, microscopes, MRI, and CT equipment that are directly feeding their data to the MASSIVE environment for automatic processing, semi-automatic processing, or simply for the user's benefit, so that they're able to use the tools and services on MASSIVE to process their data. That includes NIF facilities that Graham Galloway talked about, it certainly includes all of Gary's facilities, and many, many beamlines at the Australian Synchrotron. We provide interactive capability. This is a program that we run for two reasons. Firstly, we are dedicated to imaging and visualisation, and it's very hard to do any sort of imaging and visualisation without some sort of interactive mechanism, so we provide a desktop environment for that purpose. But it's also been very, very successful in supporting new users moving to centralised infrastructure. And we're finding that more and more often, it's users who have very limited experience with centralised infrastructure, data storage, cloud, and HPC who are increasingly collecting very large data sets, whether it's MRI, CT data, or so forth. There's a new generation of researchers who have these sorts of very large data problems, and this is one way that we can actually help them. So we provide them with a centralised, very fast parallel file system and remote access, and we're moving their desktop tools to the data rather than the other way around. So in terms of the types of neuroinformatics that we support, we do a lot of neuroimage processing: a lot of MRI, CT, PET, and so forth.
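At its core, the instrument-to-MASSIVE feed described here is a watch-folder pattern: files that newly appear in an instrument's drop directory get copied to central storage where the processing tools can see them. The talk does not show any actual ingest code, so the following is only a minimal sketch of that general pattern; the directory arguments and the `seen` bookkeeping set are illustrative assumptions, not MASSIVE's implementation.

```python
import shutil
from pathlib import Path

def ingest_new_files(instrument_dir, central_dir, seen):
    """Copy files that have newly appeared in an instrument's drop
    directory into central storage (in practice, a fast parallel
    file system). Returns the names copied in this pass."""
    instrument_dir, central_dir = Path(instrument_dir), Path(central_dir)
    central_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(instrument_dir.iterdir()):
        if f.is_file() and f.name not in seen:
            shutil.copy2(f, central_dir / f.name)  # copy2 preserves timestamps
            seen.add(f.name)                        # remember so we don't re-copy
            copied.append(f.name)
    return copied
```

A real deployment would run something like this on a schedule or via filesystem notifications, verify checksums after transfer, and only then trigger the automatic or semi-automatic processing mentioned above.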
We also support more and more modelling and simulation work, though in the early days of MASSIVE that wasn't necessarily our bread and butter. But there are now more researchers doing that, particularly through the centre, Gary's centre. And we also do a lot of microscopy processing, particularly for researchers who are capturing data with some of the new-generation techniques, such as light-sheet microscopy and CLARITY, that are generating very large data volumes in a community that hasn't necessarily had the experience to deal with them in the past. So ASPREE-NEURO is certainly a big example. Parnesh did already talk about this, and I won't go into it in any depth, but MASSIVE is the engine room for the ASPREE-NEURO study. Parnesh in particular has been very active in developing a very sophisticated suite of tools and workflows that automatically use MASSIVE to process all of the data that comes with a very large cohort of subjects. We support a very large cohort of connectomics researchers as well, and I'll tell you a little bit about the work that we're doing on hosting a mirror of the Human Connectome Project, which is attracting a lot of interest amongst the community. We also have a dedicated program with the centre, the CIBF, the ARC Centre of Excellence for Integrative Brain Function, and there are a number of parts to that which we're going to be partnering with CIBF to achieve over the next few years. The first is that we're helping form a catalogue of CIBF models, tools, and data. These are obviously stored and managed across Australia throughout all of the CIBF nodes, but at the very least we want to collate some of the information about who is doing what, who is developing what codes, how to get access to them, and so forth, and the same with data.
We're in very early conversations about a data-sharing capability between CIBF and the HBP, perhaps using some of the imaging services that the HBP has developed. So, in effect, providing a way of exposing Australian data, Australian CIBF researchers' data, to the greater HBP European community. We provide a dedicated share of the MASSIVE facility for CIBF researchers, and most of those will already know that. But if you are a CIBF researcher, you're most welcome to access this. And finally, we are developing a mirror of the Human Connectome Project on MASSIVE, and I'll talk to you a little bit about that. So, I'll show you the MASSIVE desktop in just a little bit. I do want to do a quick demo, and it's really more of an advertisement than anything else, but I think it's worthwhile. We have just recently started to upload HCP data to MASSIVE and provide user projects and user accounts around that. We're working with the HCP team to manage that data. But we're also going to be installing a suite of tools around the data sets through the desktop environment that I'll show you. So, ideally, what this will mean is that it'll provide an Australian location for Australian researchers processing HCP data, and, in effect, being able to process many, many brains in parallel, as opposed to just one, perhaps, on your laptop. I'd like to now briefly tell you a little bit about the other project, the Characterisation Virtual Laboratory. This is a project that ran in full for around two years and finished up late last year. It is still lingering on a little bit in the sense that we still have a little bit of funding to continue it, and we are certainly still operating it. The core development has completed, but the project continues to operate. The real focus of this was to integrate instruments in Australia with centralised compute, and then also provide researchers with ways to access that.
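The "many brains in parallel" point is cohort-level fan-out: the same per-subject pipeline applied to every HCP subject concurrently rather than sequentially on a laptop. The sketch below is a generic illustration of that idea, not MASSIVE's actual workflow; `process_subject` is a hypothetical stand-in for a real pipeline stage, and threads are used only because per-subject work in practice usually shells out to external tools.

```python
from concurrent.futures import ThreadPoolExecutor

def process_subject(subject_id):
    """Hypothetical per-subject stage. A real pipeline would invoke
    tools such as FSL or FreeSurfer on this subject's data directory;
    here we just return a placeholder metric so the sketch runs."""
    return subject_id, len(subject_id)

def process_cohort(subject_ids, workers=4):
    """Process every subject concurrently. On an HPC system you would
    more likely submit one scheduler job per subject, but the fan-out
    idea is the same: N brains in flight instead of one."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(process_subject, subject_ids))
```

With this shape, scaling from one brain to a cohort is just a matter of lengthening the subject list and raising the worker count to match the cores (or jobs) available.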
So, we had a number of programs, and the first was, literally, just to capture and feed data. We had an instrument integration program, I guess you would call it, and we went out and talked to the instrument facilities about how they could contribute and how they could feed their data automatically to a central repository. We developed centralised environments to host tools and services. We had a very large cohort of developers in three specific fields: neuroimaging, energy materials, and structural biology, and that cohort contributed tools to this environment. So we basically created a scientific desktop, I guess you would call it. And then we developed means to provide easy access to that centralised compute as well. Finally, there were a number of research workflows developed by research groups to demonstrate this environment end to end. The focus of this very much recognises that the types of scientific data sets researchers have to deal with now are multiscale and multimodal. And we're dedicated to creating centralised environments where researchers can process and combine much of that data and also have available a number of the tools and services that they would need to use. The facilities involved in this project were the four Australian characterisation facilities funded under the national NCRIS program: NIF, which you heard about from Graham earlier today; ANSTO, the nuclear facility; the national microscopy facility; and the Australian Synchrotron. And again, recognising the fact that all of these facilities very strongly contribute to a range of sciences in Australia. But in particular, the three areas that we focused on were neuroscience, structural biology, and energy materials, and there is a very strong alignment between the data generated by the characterisation facilities and these three focus areas of the project itself.
So as an outcome of this project, there's now a very large group of instruments that feed data to the Australian research cloud environment. These are feeds that were either set up through the CVL or are still being worked on under new funding sources. And we've been very successful in creating the next generation, I guess, of our desktop environment, which runs in the cloud and is available to all Australian researchers. You really just need an AAF account, and I'll show you that in a second. We've also demonstrated that researchers are integrating this environment into their regular research workflows. Around about half of the researchers who we worked with and who started accounts have continued to use it, which I think is reasonably successful. So just one example, and this is the IMAGE-HD project, a Huntington's disease patient imaging study to identify biomarkers for Huntington's disease, led by Professor Nellie Georgiou-Karistianis at Monash University. They're looking at around about 100 imaging subjects, some pre-HD, some expressing HD symptoms, and a number of controls, and they're looking at structural connectivity and functional biomarkers. It's a fairly large data-processing project using a number of different techniques and a number of different tools. The reason I mention this as a case study is that it's one of our first examples where a new group that hadn't used high-performance computing or cloud computing before came in, and they're now entirely a digital group: they capture data to a data management environment, that data is exported to the cloud or to high-performance computing, they do their data processing there, and they don't actually do any of their research workflow on their laptops or desktop computers any more. And that's really what we're trying to achieve.
We're trying to provide these relatively simple environments for the long tail of Australian researchers. So I just wanted to show you a very quick demo. It's really an advertisement as much as anything else, because this is an environment that anybody in Australia with an AAF account can use. If you are an Australian researcher, you may not know that you do have an AAF account: the Australian Access Federation, the single sign-on, I guess, for Australian researchers and academics. So this is a new version of our workflow, and it's just in prototype now, but it's a web-based desktop environment. Over the past few years we have developed these remote scientific desktops, and this environment that I'm going to show you exists solely on the internet, in the sense that users can just log in to a web-based environment and access it from there. Now, what they're actually accessing is either a virtual machine or a node somewhere on a high-performance computing system. And so I already have one running, actually. From here, I can launch a new desktop, and I've got a number of different environments to launch. In this case, I've got four preconfigured: a small little one; a very large node with some proper memory and some proper grunt; a visualisation node with GPUs and so forth; and a Huygens environment as well, for deconvolution. So some options there. I've already got one running, so I'll just join it. And so there I am. So this is the Characterisation Virtual Laboratory; the MASSIVE desktop looks pretty much identical. This is running, like I said, either on a high-performance computing system or on the cloud. I have a number of preconfigured neuro tools in here, and actually a really quite large group of structural biology tools and so forth. It is running a little slowly because I'm running it off this, just for your interest.
So this is a tool that's been very, very successful in helping researchers access all of these tools and services. And at Monash, I guess one of the first places where we started deploying this, it has started to replace the big grunty visualisation or data-processing box, the lab machine that every big lab has in the corner of their offices. So we think that's been very, very successful, and we're very happy with that. We can also do nice, neat things with this: I can actually share these sessions, so effectively I can have lots of people looking at my desktop workstation environment here at once, joining this same workstation environment. And NIF, and Andrew Janke, if he's somewhere in the room, has actually started to use that for some of the training that he does. So it's been very successful in that as well. Thank you. Okay, we've got time for one question over here. Hi, so I understood you're trying to integrate neuroimaging data on the one hand and super-resolution microscopy on the other. So is there any learning, or would you comment on whether the super-resolution data that has been accumulated so far is just a distortion of the sample or a true representation of the sample in its lifetime? So the neuroimaging subproject of the Characterisation Virtual Laboratory did have some aspects of that, but I would probably have to throw that over to the expert, and that would be Andrew Janke, who I don't think is in the room, but I'm happy to put you in contact with him. Oh, there he is. I'll set you up with a conversation with him, because he's the expert in that space. Okay, maybe we've got time for one more question while the next speaker gets set up. Okay, would you join with me in thanking Wojtek again for his talk? It's my pleasure now to introduce our third speaker, Stefan Eilemann, who's at the EPFL in Geneva.