So, in addition to standards and best practices, we're also interested in infrastructure sustainability. Next we're going to hear from Wojtek Goscinski, the chair of the INCF Infrastructure Committee, who is going to talk about a meeting that the Infrastructure Committee held a few days ago.

Thanks, Matthew. My name is Wojtek Goscinski. I'm from the Australian Node and I am the chair of the Infrastructure Subcommittee. I've been asked to tell you about the workshop that we ran on Wednesday, so this is all very fresh content; this is a quick summary of that workshop and meeting. The meeting was seeded by the fact that INCF is moving very strongly towards FAIR, and it has been identified that there is actually a cost to being FAIR: a cost in people, in energy, and also in money. Infrastructure plays a strong part in that cost. So we acknowledge that findable, accessible, interoperable and reusable neuroscience is increasingly dependent on computing infrastructure.

The goal of this workshop, and it is the first workshop, is to bring together and build an active community of practice of cyberinfrastructure and e-research providers who share an interest in underpinning excellent neuroscience. We ran this meeting with a number of participants, and you'll know many of the names in the right-hand column, as well as many of the projects, organizations, universities, initiatives and databases they represent.

The first thing to acknowledge is that this is a genuinely wide selection of projects and initiatives. It includes everything from software, which we fundamentally see as infrastructure, to data collections, collections of collections, clearing houses for data and software, portals, gateways, virtual laboratories, and the high-performance computing and data storage projects on which neuroscientists depend.

But there are some common principles, and this is where we get into what we discovered in the workshop. All of these projects are committed to open science, to open source, and to underpinning reproducible science. The goal is the science, not the infrastructure for the infrastructure's sake. Sustainability is a challenge, and I'll talk a little about that. Partnership is critical, and compatibility is absolutely boring but really essential to everything we're doing.

I want to go over a couple of these in a little detail to give you a sense of the conclusions that came out. First, the easy ones: there are some readily identifiable shared technical interests. They include, increasingly, leveraging Git for data management, versioning, metadata aggregation and search, with tools such as git-annex and DataLad, and container technologies such as Docker, Singularity and Shifter; genuinely everybody in the room used these in some way, shape or form. There are shared technical goals as well: APIs for programmatic access, as we heard just before; interoperability of data repositories built on open standards; and the principles of being interoperable, scalable and distributed, whether that is distributed across data or across communities.
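To make the Git-based data management pattern concrete, here is a minimal sketch using DataLad's Python API. This is an illustration, not a workflow prescribed by the workshop; the dataset paths and file name are hypothetical.

```python
# Sketch: version-controlled data management with DataLad,
# which builds on git and git-annex. Paths are hypothetical.
from pathlib import Path
import datalad.api as dl

# Create a new dataset (a git repository with git-annex support)
ds = dl.create(path="my-neuro-dataset")

# Add a data file; large files are tracked by git-annex
# rather than stored directly in git history
Path("my-neuro-dataset/eeg_session1.dat").write_bytes(b"example data")
ds.save(message="Add raw EEG recording")

# A collaborator can clone the dataset and fetch only the
# file content they actually need, on demand
clone = dl.clone(source="my-neuro-dataset", path="dataset-copy")
clone.get(".")
```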
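Container technologies came up in the same breath, and the reproducibility argument behind them can be sketched briefly. The following uses the docker-py library; the image tag and command are hypothetical examples, and Singularity or Shifter would serve the same role on HPC systems where Docker is unavailable.

```python
# Sketch: running an analysis step inside a pinned container image
# so the software environment is identical across labs and sites.
# The image tag and command are hypothetical examples.
import docker

client = docker.from_env()

# Pinning an exact image tag means every site runs the same
# software stack, which is the reproducibility case for containers.
output = client.containers.run(
    image="python:3.11-slim",
    command=["python", "-c", "print('analysis ran in a container')"],
    remove=True,  # clean up the container afterwards
)
print(output.decode())
```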
There was a lot of discussion around user community and experience, and training is a major focus of that. Training was identified as really critical to the success and usability of infrastructure, and train-the-trainer is a commonly adopted model. But it was also discussed that there are potentially standard ways to onboard users onto infrastructure that we might not be leveraging, so there is an opportunity to standardize some of these things. Who does the onboarding, and how can we advertise resources? And there is potential to decentralize support bottlenecks, which ties directly into the train-the-trainer model.

It was acknowledged that retooling to a new standard is very painful and typically impossible for an average research lab, and that continual small changes, progressive refinement, are far more typical in laboratories. Compliance can be a problem: researchers will skip steps that aren't considered critical or important enough, and I think that's just natural. And the integration between research funding and research infrastructure funding, or research infrastructure as a whole, is quite poor.

Scale and scaling up came up continually. It's acknowledged that data growth is exponential, from our experience around 75% per annum (there is a quick compounding sketch at the end of this discussion). Life sciences and medicine is the most significant growth area for big data and high-performance computing across a number of the participants. It was acknowledged that it's important to provide various levels of on-ramping onto infrastructure that accommodate the different abilities of user communities, whether those communities are entirely new to the infrastructure or already expert. It was also discussed that peak HPC facilities can be quite inaccessible to neuroinformatics.

Sustainability was really the purpose of the workshop, and a number of things were discussed. There are sustainability issues on several levels. The first is funding sustainability, but there are also things as simple as software sustainability: software needs maintenance and rots, so that in itself is a sustainability issue. Large infrastructures have various levels of sustainability; smaller projects obviously don't have the same assurances. There is a big shift towards commercial cloud, but I don't think anybody in the room thought for a second that commercial cloud is a solution to sustainability; in some cases it is a far costlier, though far more flexible, option. It was discussed whether data sharing should fundamentally be free, or whether there should be a cost associated with it, in the same way you buy a laptop for your desk, and whether researchers should be expected to buy into data sharing. But really the key question that came out was: what is the best practice for developing sustainable infrastructure? That is something we would like to explore further.
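To make the 75%-per-annum figure mentioned above concrete, here is a minimal compounding sketch; the 1 TB starting point is a hypothetical illustration, not a number from the workshop.

```python
# Sketch: what ~75% annual data growth means over five years.
# The 1 TB starting size is a hypothetical illustration.
growth_rate = 0.75
size_tb = 1.0

for year in range(1, 6):
    size_tb *= 1 + growth_rate
    print(f"Year {year}: {size_tb:.1f} TB")

# Prints roughly 1.8, 3.1, 5.4, 9.4 and 16.4 TB: about a
# 16x increase in five years, which is why scaling up
# dominated the discussion.
```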
So there are some likely actions that have come out of this workshop. There is an intent to meet again in the very near future and to develop some offline discussions out of this. I call them likely actions because they need to go back to the workshop participants before we take any of them further. The first is a draft exploration of the shared technical commonalities. There is the idea of a best practices paper, a position piece, on sustainable neuroscience development. There is further workshopping on making open neuroscience infrastructure interoperable. And there are other discussions which maybe don't have a direct action but are further questions to explore: for example, could international groups come together and all agree to support neuroscience in some way, committing resources to some of these problems? Or could the private sector itself contribute? Thank you.

Any comments or feedback? You're amazing, you guys did a good job. We're not so amazing now. Was anyone from the Human Brain Project invited? Unfortunately not. They were a last-minute exit; they were meant to be there but couldn't come at the last second. So someone was invited? Oh yes, absolutely.