So, I'm Andrew Sallans, I'm from the Center for Open Science, and with me is Natalie Meyers from the University of Notre Dame Libraries. Today we're going to talk about the Open Science Framework and an institutional implementation that we've been working on jointly with Notre Dame for the past several months. I'm going to start by covering the Open Science Framework side: some of the problems we're trying to address, and how we're thinking about this as an institutionally oriented service. Then Natalie will go into specific use cases and details of how we're experimenting in the Notre Dame case in particular. Okay, so to begin: there is obviously much conversation happening in the library space, the university space, and many different disciplinary areas around greater access to research, to publications, to the outputs of the research process. We see this through federal mandates, through changes in journal policies, and in different associations and the communities at large. Most of this emphasis centers on discrete objects, things like publications, data, and materials, but in a discrete and not necessarily well-connected way. So what we're really thinking about with the Open Science Framework, the platform that we build at the Center for Open Science, is more than just access to data and sharing of that data: we're thinking in terms of a connected workflow. The diagram I have up here is one representation of a workflow, moving through the stages of coming up with an idea, developing that idea, designing a study, collecting data, interpreting it, writing reports, publishing reports, and all of the different aspects that relate to that. Within that, we're thinking about how to capture that workflow: how do we go from very separated, siloed approaches to connecting all of these different verticals?
In doing that, we have to think about how to make this an easier process for people: how do we avoid adding more burden, and how do we accommodate the workflows and practices that researchers already have as they work? We do this by trying to accommodate those practices and shifting our solutions toward meeting researchers' needs. It's obviously not a simple thing, because what we're talking about is behavioral change. This is partially a technical problem, but largely a social problem as well. One way to think about this is in terms of norms and counter-norms and how they vary. Norms are ideals that people subscribe to: things like communality, universalism, disinterestedness, organized skepticism, and quality, believed in as the values of how research is done. Counter-norms are the opposites of these: things that we may actually do rather than believe in. Many of the solutions we're trying to develop are about realigning these ideals with the incentives and practices that actually shape behavior. There was a study that looked at what people believe versus what they actually do. The top bar up here asks mid-career and early-career researchers what they believe: the gray portion of that top line shows that most people subscribe to the ideals, the norms, with a small minority subscribing to the counter-norms. The middle section asks about our own behavior, what we actually do, and you see that people shift a little bit: I say that I believe in this, but I'm doing something a little bit different, and the bar shifts a little to the left. The bottom part asks what others do, and this shows the greatest shift.
It goes from "I believe in this thing and I do this thing," or "I believe in this thing but I don't do this thing," to "what does everybody else do?" Clearly there's a perspective that this is everybody else's problem. So what we're trying to do is figure out how to make that change, and we're thinking about technology as the way to enable this type of behavioral change. All right. The Open Science Framework is a hosted web application; you can find it at osf.io. It's a free, open-source tool, and it has a number of features built in that try to address these kinds of discrepancies between values and practices while accommodating how people work now. Some examples. It's a place where you can put data, materials, and code: it's as easy as dropping a file in, storing that file, and making that file available. You can manage access and permissions: you can control who has access to those files, how they have access, and what type of permissions, and this can be done at many granular levels for different types of content. It provides automated versioning. One of the key elements to understanding and accommodating workflow is automatic versioning of the content: any file that gets dropped in automatically gets a version, and when that file is replaced, the system stores the new version along with content hashes. That provides a deep view of the differences between versions and of how the research process is changing over time, without putting any additional burden on the individual researcher doing the work. Another element is persistent identifiers. One critical piece of connecting the workflow and enabling more transparency and openness is providing persistent access to the things that are stored there.
So we issue persistent identifiers; there's an example here in red of what that looks like. It's a five-character identifier appended onto osf.io, and it is as persistent as a DOI: that's an assurance that we make, and it applies to every single object in the system. Another element is accommodating practice as practice is. Rather than saying that in order to be more transparent, more open, more reproducible, you have to be completely public and open, we know that's not a realistic expectation for everyone, especially at the beginning. So we make the default private: a researcher can add any content and keep it private for as long as they like, and all they have to do to make it public is hit a button that says "make public." This can be done in many granular ways to accommodate different types of uses over time. Another element is registration of a project. This is a slightly different concept: what we're talking about here is creating a frozen, persistent version of the content. One of the natural byproducts of a system where you can store things, change things, version things, and make things open or closed is that there's a lot of change. That's the reality of how researchers work, but it's not necessarily good for persistent, ongoing access, especially when content becomes part of the permanent scholarly record. So we offer a feature called registration, where you can create a permanent, read-only version of the content in whatever state it is in at that particular time. A last feature, at this point, is the ability to see impact. Analytics are really critical to driving this sort of behavioral change and motivating more connection, so we provide a deep level of analytics on projects, using an open-source tool called Piwik, similar to Google Analytics.
And this includes things like how many times a file has been accessed, where it's been accessed from, and file download counts: lots of different signals that give researchers an immediate reward and incentive for being a bit more open and transparent with their work and connecting some of their work. So those are some of the core features, but it doesn't stop there. To make the workflow a more connected thing, we have to accommodate tools and services that researchers are already using. It's not about coming over to our platform and abandoning other things; it's about connecting those other things with our platform and connecting the whole workflow in that way. The logos I have up here are some of the add-ons we've connected via our API so far: Zotero and Mendeley on the reference-manager side; Dropbox, Box, and Google Drive on the storage side; GitHub on the code-repository side; figshare and Dataverse on the repository side. These are the different points we're connecting across the workflow right now. Over the next several months, we'll be releasing a whole series of additional add-ons that connect across different aspects of the workflow: VIVO, the DMPTool, Evernote, a series of data collection tools, more storage tools, more code repository tools, ShareLaTeX on the authoring side, a series of more preservation-oriented repository tools, and lastly some publishing platform tools. At every one of these points, what we're trying to do is keep building connections across different parts of the workflow so that researchers don't have to change what they're doing. They can keep the tools that are important for their particular workflow, while also gaining other points of value through better integration of that process.
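These add-on connections are exposed through the OSF's API, which speaks the JSON:API convention. As a rough illustration of what consuming such a listing could look like, here is a small Python sketch. The payload below is invented for illustration and only loosely modeled on a JSON:API-style response; treat the field names as assumptions rather than the actual OSF schema.

```python
import json

# A miniature, invented JSON:API-style payload, loosely shaped like a
# listing of a project's files across storage add-ons. File names, IDs,
# and provider labels here are illustrative only.
SAMPLE_RESPONSE = """
{
  "data": [
    {"type": "files", "id": "abc12",
     "attributes": {"name": "analysis.R", "provider": "osfstorage"}},
    {"type": "files", "id": "def34",
     "attributes": {"name": "survey.csv", "provider": "dropbox"}},
    {"type": "files", "id": "ghi56",
     "attributes": {"name": "model.py", "provider": "github"}}
  ]
}
"""

def files_by_provider(payload: str) -> dict:
    """Group file names by the storage add-on (provider) they live on."""
    grouped: dict = {}
    for item in json.loads(payload)["data"]:
        attrs = item["attributes"]
        grouped.setdefault(attrs["provider"], []).append(attrs["name"])
    return grouped

if __name__ == "__main__":
    for provider, names in sorted(files_by_provider(SAMPLE_RESPONSE).items()):
        print(provider, names)
```

The point of the sketch is the shape of the integration: one uniform response format lets a single client see files that actually live on several different third-party services.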
So again, just to stress this: connecting the workflow is really critical to enabling the type of change I'm talking about. And it doesn't really stop there either. Most of what I've been talking about is centered on the individual researcher: it's about me, what tools I use, and how I can become more efficient and more connected with those tools, reducing the barriers and issues I've got without having to do more work. But one of the remaining barriers we see is friction at the institutional level. Much of the work I do might be with collaborators somewhere else, so I might use a lot of cloud services or other third-party services. But I might also have a bunch of tools I use at my local institution; some might be required, and some might simply be free and available to me. Many of the people in this room are responsible for building these types of tools, so we can all appreciate that. We need to think about how we connect with these as well, and what we need to do in order to do that. Some examples of what this maps onto: grant administration tools; IRB processes; custom data collection workflows (we know many projects have to develop new tools and services to meet particular, specialized needs, often as one-off solutions built in a siloed way at the local level for now, and we want to connect with these too); data management tools; local data stores (every institution has some local data storage service available); HPC, that is, high-performance computing; and lastly, of obviously huge importance, institutional repositories and the reporting elements that go with them.
So all of these boxes represent the things we haven't worked on yet, which we realize we need to attend to. What we're really trying to build is a new view on the OSF: OSF for Institutions. This shifts toward improving the workflow connections at the local level, and all of it is made possible by an API; obviously, that's how we're going to do this. In the past several months, we've released a beta version of the API, and you can find the docs here if you're interested in seeing more about what it can offer right now. This is really an enabling step. The other part I'm going to cover before Natalie talks about specifics is the other elements that need to be put in place to make this API, and all of the other integrations, an easier and smoother process. All right, so OSF for Institutions. This is the new view coming in the next couple of months, and it has been jointly developed with the University of Notre Dame Libraries and with the Center for Research Computing at Notre Dame. We've been partnered since June, working through development of a series of different use cases and trying to get the initial prototype together to lay the groundwork for much deeper connection of other tools and services. All right, so a highlight of the key elements of OSF for Institutions. The first is a custom URL for each institution. We've heard from many different institutions that this is a necessity for feeling more ownership of, and connection with, a service that's being jointly promoted and offered. An example of what this might look like is osf.nd.edu; there are many other formats we could come up with, but this is probably the starting point. And we can confirm that if you go there, it does re-route to the OSF right now, so it's a real URL. A second element is authentication and affiliation.
In order to make this type of service work, in order to provide the glue for all of the other integrations that could be possible, we need authentication and we need affiliation: we need to know that this person actually relates to that institution. There are multiple ways we can do this. We're looking at identity providers (we're using CAS), and at e-mail addresses, where we could have a list of approved addresses, a whitelist approach. And then, what sort of affiliation can this make possible? Individual users could choose which projects to affiliate, and you could see affiliated projects on a landing page. So if I affiliate with Notre Dame, then on a Notre Dame page you could see all of the projects I have affiliated with Notre Dame, the ones officially related to Notre Dame, assuming that's something I'd been approved to do. We could also create automatically affiliated projects, a sort of batch-loading process: if an institution wanted everybody who signs in through its identity provider to be automatically affiliated, that could be an option. And lastly, we want to make this as easy as possible. Part of the purpose of this authentication and affiliation is that the researcher's experience at the institution is not another account, and not an account on some third-party service, but the account they already use for everything else. We're trying to make that as smooth as possible and make it feel like an institutional service. Here's an example of what this might look like; this is just a wireframe mock-up, of course. On a project with its different elements (the wiki, the files, the citations), up at the top where the title is listed, you might see a logo for the institution if the project has been affiliated. Again, this is to provide some branding, some affiliation, some sense of connection with institutional services.
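To make the e-mail whitelist idea concrete, here is a minimal sketch of how an approved-domain check might work. This is purely illustrative: the function names and sample domains are hypothetical, not part of the OSF.

```python
# Hypothetical sketch of the e-mail "whitelist" affiliation approach
# described above: an institution supplies approved domains, and a user's
# sign-in address determines whether their account may be affiliated.
APPROVED_DOMAINS = {"nd.edu", "cos.io"}  # example domains, not a real config

def can_affiliate(email: str, approved=APPROVED_DOMAINS) -> bool:
    """Return True if the address belongs to an approved institutional domain."""
    domain = email.rsplit("@", 1)[-1].lower()
    # Accept subdomains too, e.g. library.nd.edu falls under nd.edu.
    return any(domain == d or domain.endswith("." + d) for d in approved)

if __name__ == "__main__":
    for addr in ("jane@nd.edu", "joe@library.nd.edu", "sam@gmail.com"):
        print(addr, can_affiliate(addr))
```

A real deployment would sit behind CAS or another identity provider rather than inspecting raw addresses, but the basic decision (is this person institutionally affiliated?) is the same.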
And then similarly, at the institutional level, we want to provide landing pages. As I was describing, if you have a series of affiliated projects, these projects could all be displayed as related to the institution. So if I went to osf.nd.edu, I might find this landing page, listing all the projects and representing everything on the OSF that is visible and related through that institution. This is a mock-up of a new landing page; it has the same content, reformatted in some different ways to give greater emphasis and visibility to different things. The last element is group permissions. One of the critical things we recognize, in order to make this transition a pleasant experience for everyone, is support for groups as collections of users, with the ability to apply a group to a project and specify its permissions. If I were a department head, perhaps, and I wanted everybody in the department to use this as a platform, but I also wanted some awareness of the projects on it, or the ability to manage those projects for persistence purposes if people leave, we could apply a particular group as an administrator group. This may be really useful for data management services, research offices, grant offices: any group that has some responsibility for awareness of, and access to, content supported at the institution. A last element here is that changes to groups are inherited by projects: for organization and management purposes, if personnel change, we want that change to push down to the affected projects. We're trying to connect as much as possible, and make things as smooth as possible, to facilitate other types of tool connections. All right, so just to wrap this part up, some high-level points.
OSF connects the full research workflow, and all of the different features I talked about first, as well as the add-ons, become available through OSF for Institutions as well. So this becomes a locally connected service that gets all of those benefits and all of that workflow integration. Coming in early 2016, OSF for Institutions will provide these types of features to make it easier to connect at a deeper institutional level. And lastly, we've done a lot of work gathering use cases with Notre Dame and with a number of other institutions, but there are many use cases that we obviously don't know about yet. So we'd love to hear more; we can talk about that here or afterwards. We're definitely open to hearing other ways in which you might use this to support services you have around data management, around institutional repositories, whichever areas you feel are most relevant. All right, this presentation can be found on the Open Science Framework here, and it's also on the conference listing. At this point, I'm going to transition over to Natalie, who will talk about some specifics in terms of things that have been prototyped and prepared for this type of connection, and we'll do questions at the end. Hi, everyone. I'm Natalie Meyers. I'm an e-research librarian at the University of Notre Dame, and I'm going to tell you a little bit more about connecting the workflow and supporting the research mission at Notre Dame using the Open Science Framework. I hope this cartoon reminds you of yourselves, because we drew it when we were trying to figure out what some of our problems were as we created programming around data management and helped researchers complete their data management plans.
What we recognized was that our system was imperfect: complying with federal mandates for even creating data management plans, let alone data sharing, was something we were doing imperfectly. We were curious about how we could improve that through education, through software, and through services. So at Notre Dame, we've pursued a handful of pilots and extensions with the Open Science Framework. I'm going to give you an overview, and we'll go into each of these in depth in today's presentation. We're piloting migration of software and files from our VecNet digital library, part of a Bill and Melinda Gates Foundation funded project to support modeling of vector-borne disease for malaria eradication. We're also enthused about early project progress on the National Data Service dashboard and an integration with the Open Science Framework, because it enables execution of files from the Open Science Framework on the National Data Service; that work was done by Ian Taylor in cooperation with our Center for Research Computing and its director, Jarek Nabrzyski. Finally, we are working with DASPOS, an NSF-funded project with some PIs at Notre Dame. It is a project for data and software preservation for open science that comes out of high-energy physics but also has participants from libraries and computer science. In the case of Umbrella, a tool that has come out of the DASPOS project, the student developer Haiyan Meng is testing the Umbrella software preservation tool with files on the Open Science Framework, and we think this represents a good way to store computer simulations and make them shareable and reusable between researchers.
Finally, we're looking at ways to integrate our institutional repository with the Open Science Framework, so that we can use the OSF during active data collection and sharing on projects and have it more deeply connected with our institutional repository for making preservation-level copies of scientific information that we want to share and cite. So how did all this start? With researcher questions: "I want to preserve my simulation method and results so other people can try it out." It sounds simple. You think, well, I just need an input file, a process, and the output file; bundle it up. How hard could that be? I'll put that in my institutional repository. Maybe I'll give everything a DOI. Maybe I'll wrap the whole thing, add a DOI, declare myself done, and people can cite and share it. Except: what if you have to repeat that an awful lot of times? What if you want to change your input values? It's just not that simple. What it really looks like when people start to do it is that they need to preserve more than their input data, their process, and their output data. They might want to preserve a mode of their simulation. They might need to preserve a config file along with an input file. They might need some libraries, an environment, a protocol, maybe even a particular version of an operating system, or the particular hardware configuration that allows that operating system to run. Preserving this entire environment is a much more difficult task than preserving just an input file, an output file, and a pointer to an executable. This is the problem that got us curious about pursuing solutions, and I think what we can take from it is that the challenges of reproducible computing drive us toward some of the solutions we're looking at. For example, your application might work perfectly well on your machine today, but do you know if it'll work next month? Will it work next year? How about ten years from now?
Will it even work today on another machine? These are the questions DASPOS was pursuing, and that we were trying to think about as we began to improve the way we preserved and cited computer simulations and the output of computational modeling. The DASPOS project team includes computer science experts from the University of Notre Dame, as well as high-energy physicists from the experiments at the Large Hadron Collider, and librarians and other data preservationists who have an interest in better preserving computational output for the future. The goal and scope of the project is broad, and I hope you'll take a look at it further; we'll have more upcoming workshops funded by NSF over the next couple of years. One of the important things that's come out of DASPOS for us is an opportunity to explore working with tools like the OSF to think about how we might better preserve computational output. Physicists aren't the only ones who need to preserve their simulations. Sometimes when people think about biology, they think about field work, but in my past few years' experience working on the Gates Foundation's VecNet project, what I realized was that vector biologists use computer simulations too: partly to share them with model developers, partly to share them with other scientists, and partly to explore their parameterization and their variables. I'd like to thank Steve Carroll, who created this illustration for The Economist, because I think it represents well the way that any computer simulation can have multiple variables that people want to explore over and over. To find the best malaria intervention for a particular place, under particular environmental or climate conditions, it becomes very important to be able to run and rerun that simulation many times, explore parameter sweeps, and think about ways to improve the delivery of malaria drugs and interventions to help save lives. So how do you do that? How do you share those simulations quickly?
Well, one of the first things we did was say, okay, we'll make a digital library, we'll make sure we put everything in there that people use to parameterize their simulations, we'll have links to the code and links to input files other people can use, and that will really support reuse and reproducibility. The good news is that it does, and you could think of us as early adopters of that "mutual learning for change" mantra, but a digital library alone doesn't quite preserve a whole simulation environment. Our library was originally built to support mathematical modeling of malaria transmission and eradication. Built on a Hydra-Fedora platform, it contains field, lab, survey, and simulation data, along with bibliographic, demography, and climate data that people might use to model malaria. But ever since we built the digital library, we've been iterating over the best ways to provide access to the resources used to parameterize models, to the models themselves, and to their output. The primary goal of the Earth Science Information Partners workshop last year was to explore the feasibility of implementing a data citation model developed by the Research Data Alliance for dynamic data citation. We tested that as a use case on the VecNet digital library, and the fact that we can explore, through our API and through tools like the OSF and its API, ways of better sharing computer simulations is a terrific way to move forward on scientific progress. We also thought we'd better take this dynamic data citation approach back to disease modelers themselves, so we showed it at the Institute for Disease Modeling Symposium in Bellevue last year and got further feedback from vector-borne disease modelers. Finally, we showed it again at the most recent RDA plenary, in the context of repository platforms for use cases: what are the features and limitations of the repository platform we're using now?
In our case, that's the Hydra-Fedora platform, and how does it support the entire scientific workflow? It's an important part of the workflow, but it's not a single-tool-for-a-single-need kind of scenario; as we saw before, preserving an entire simulation requires more than just a digital library. Along the road, we began to get to know the folks at the Center for Open Science and to explore the features of the Open Science Framework. We began thinking about connecting the scholarly workflow during a NISO conference; we attended Open Repositories this past summer; we were attracted to the OSF features; and then we hosted a panel at Notre Dame with the Center for Open Science on their reproducibility projects, and began to have conversations about how we could use all those add-ons you see in the OSF environment to improve things for researchers at Notre Dame. We thought, well, we'll integrate our institutional repository with the OSF, and we'll start with central authentication system (CAS) authentication. We embarked on the National Data Service dashboard project with the CRC and Ian Taylor. We began piloting registration of select VecNet malaria data files in the Open Science Framework, and we began testing the Umbrella software preservation tool's interactivity with the Open Science Framework. The idea was to work toward a reproducible software engineering environment where people could do reproducible development on the OSF. So, the first question: why OSF and an institutional repository? It helps you start staging data for preservation and enables initial sharing between collaborators. Also, as Andrew showed, it gives us an opportunity for institutional branding and central authentication that fosters ease of use and trust among our institutional researchers. The group role enhancements are things our lab groups need, and they preserve the nature of their hierarchical roles.
And the storage source configuration, and the flexibility of add-on storage environments like Amazon S3, Dropbox, and Box, fit what our researchers are using already. Finally, we needed integration with a computational environment and access to HPC and reuse: we wanted to be able to reuse our software and simulations over and over, but in a constrained and citable way. We were also attracted to the way we could do metadata enhancement in the OSF, incrementally and automatically adding provenance and metadata to each of our data snapshots over time before creating preservation-level copies of research data. And finally, we were attracted by the idea that we might be able to push an OSF project snapshot to CurateND, our institutional repository, and that this incremental approach would encourage institutional data preservation and more deposit into our institutional repository. The bullet items you see on this slide represent all of the benefits and attractors that encouraged Notre Dame to begin work with the Open Science Framework and the developers at the Center for Open Science. Here on this screen you see a diagram of our institutional repository and OSF integration, and you can see how we can take advantage of external data stores like Box, Dropbox, and Amazon S3, how our researchers can incrementally prepare their data for preservation storage, and how the OSF can interact with an institutional repository. You also see a cloud at the top that shows how we can connect the OSF to computation and to the reuse of simulations. These were the kinds of things that attracted us to this project. On this screen now you see a slide related to the National Data Service / OSF dashboard integration effort we've embarked on since this past summer. The lead developer on that effort was Ian Taylor. Ian has presented a lot of work on the National Data Service over the year, at Supercomputing and other conferences, and has been instrumental in bringing up the dashboard itself.
What he's done is integrate it with the OSF so that you can take a file on the OSF and run it over the NDS. Let's see if you can see my cursor; probably not. There's a green arrow on this slide, and if you go to our presentation on the CNI website or the OSF website after the conference, you can see a demonstration of the National Data Service integration. It will show you how we can take a containerized simulation from data input on the National Data Service, through the dashboard, to creating new output and its concurrent visualizations, as you see at the bottom right. Through the DASPOS project we also thought about another way of doing software and environment containerization. Unlike the NDS example, the Umbrella example relies on a portable environment for reproducible computing. The idea is that you can declare your whole environment in a statement, containerize that, and then allow someone to spin up a VM that has your simulation inside it, which you can cite, run, and reuse. There's a little more information about this at the Cooperative Computing Lab in our computer science department at Notre Dame, along with documentation for the Umbrella software itself. I'll tell you a little about the Umbrella features. What attracts us to them is that they make applications portable and reproducible. The execution environment can be clearly specified at the hardware, kernel, OS, software, and environment-variable level. You can materialize an execution environment at runtime automatically, with no need to configure it manually; it helps you do matching and evaluation; and you have minimal need to change things to work with one of these containers. It can be loosely coupled with other sandbox techniques like Parrot or Docker, and it allows you to construct a sandbox through those mounting mechanisms without copying all the files inside your container.
Multiple namespaces can also be constructed concurrently in the declarations in Umbrella, so it's a very powerful tool. And finally, it allows us to utilize more computing resources. You might remember that bubble on the diagram: we wanted people to go from a citable preserved object to a local machine, grid, or cloud where they could actually run a simulation. What we have worked up is an OpenMalaria case. OpenMalaria is one of the software models we used in the VecNet project; it's from the Swiss Tropical and Public Health Institute in Basel, and it is an open source malaria modeling tool. One of our developers at the Center for Research Computing, Alex Vayushkov, has developed a use case where he can implement an OpenMalaria model run that allows users to download their run as an Umbrella declaration. What this means is that while they're in the web portal for OpenMalaria, they can say, give me an Umbrella declaration. With that declaration, they can then have access to files either on our institutional repository, you see those in green, or on OSF, you see that one in yellow. In this case, in our declaration, we've declared the software and hardware environment and we've put the simulation in. We've left the executable and one of the libraries on our institutional repository because those are static; they're not changing for the simulation. And at the bottom, we've put our input file on the Open Science Framework, because that's something that might be actively changing over time: we might be changing our environment or changing our input file, the variables and how we parameterize them for the OpenMalaria run. In our declaration we can then, using Umbrella, invoke a VM inside a container that allows us to execute the OpenMalaria software, actually execute the run, and create an output file. When that happens, Umbrella inside the container creates the VM, the software is spun up, the input file is sent to the software, the output file is created, and the user gets their own output and has
reproduced exactly a simulation someone else wanted to share with them. You'll notice in the bottom left hand corner of the slide a little green button; if you want to see a demonstration of this happening in real time, you can play that animation after the conference. I think it will help you understand the data flow from preservation to reuse in the Umbrella declaration environment. For more information about Umbrella itself, the student developer is featured here on this slide, along with her advisor Doug Thain, who is a professor in our computer science department and a PI on the DASPOS project, and Alex in our Center for Research Computing, who stood up the OpenMalaria use case. Any of them would be happy to help you learn a little bit more about Umbrella and how it could interact with OSF. So what can we learn from DASPOS, Umbrella, the National Data Service, and these Open Science Framework pilots? Let's look at some of the questions and decisions that can inform ongoing efforts related to data and software preservation for open science. We need to think about repositories: do they take provisional data, active data, or preservation data only? That's why we are looking at this dual approach of OSF for active data and CurateND, our institutional repository, for preservation data. Are tools compatible? Do they plug into existing tools we already use, maybe JupyterLab notebooks, or even the ability to diff two files? What are our software preservation layers? Are we trying to preserve program binaries, sources, compilers, or something else like a container? We need to think about naming: what are we using, URLs, DOIs, or persistent identifiers of some other kind, and how are those implemented on our various systems? And how do the APIs of these systems allow us to invoke those and interconnect services? I think that's where the strength really lies, in the complexity of composition: can we connect systems together, like the National Data Service, the Open Science Framework, and our own institutional repository CurateND, in a
way that makes doing science easier for our researchers? And then, how do we cite that science? How do we make those simulations shareable inside a publication or a scholarly record? And do our users have to change their behavior? Finally, and that's the most important thing, are our tools close enough to the way people work today, or to their native tools, so that performance doesn't suffer, or ease of use doesn't suffer and drive people away from the opportunities for these integrated workflows? So some of our immediate next efforts will explore those topics, and here is their detail: we're going to be doing some enhancements to the National Data Service dashboard; we are going to automate the uploading of diffs of files, so that you can see the difference between two input files in a simulation environment; and we're going to support Jupyter notebooks in our scholarly workflow with OSF. We hope that some of these projects will further allow us to bring the benefits of OSF to our researchers. Potential future projects beyond that, for us and OSF, include using the OSF for master files management in a spatial repellents trial we're running, potentially using the OSF in the EU-funded SWITCH project, and possibly using OSF with the RACE project, where we could extend the National Data Service dashboard implementation and integrate it with the Pegasus workflow system and CurateND so that we can do better preservation of circuit design. So these are sort of our next wave of where we're looking toward OSF integration. And I want to leave you with something optimistic, which is that all of what I've shown you includes complex preservation of computational science. You don't have to be intimidated by ambitious integrations; those are possible for us because of the OSF API. But OSF for Institutions provides bedrock features that meet our data sharing, reuse, and citation needs for many of our scientific researchers right out of the box. So while we're exploring many ambitious extensions, I hope our OSF for Institutions effort will benefit more than just the researchers at Notre Dame, but will allow us all to take advantage of this terrific platform. Like Andrew, you can see a link to my presentation on the Open Science Framework itself, and I hope you'll take a little bit of time to explore some of the websites of the partners who have participated in these pilots and extensions. I'll close now so that hopefully we have time to answer questions; I'm sure you have some for Andrew about OSF, or perhaps for me about the extensions. Thank you very much.