I just want to welcome everybody to this webinar that the Center for Open Science is running today. It's part of a series of webinars where we're trying to highlight some really amazing use cases of our software infrastructure platform, the OSF, but also to highlight more broadly some best-in-show engagements with open scholarship at different institutions and organizations. Today we're going to be engaging with an amazing team at Virginia Commonwealth University, led in conversation by Gretchen Gigan, our product owner at the Center for Open Science for OSF Institutions, which is a suite of features on the OSF for different types of administrators; everybody will be getting much deeper into that in today's conversation. I just want to flag, as you come in, that if you attended or registered for this webinar, we will be following up with a recording and with contact information if you'd like to get in touch with us, or with our panelists if they're open to that. And we want to encourage you to ask as many questions as possible. We made the format of this webinar a little more open: when we get to the Q&A section in the last 15 minutes or so, feel free to turn on your video to speak, or, if you feel more comfortable, drop your questions in the Q&A section, and our panelists and Gretchen will make sure to answer all of your questions. Feel free to drop those in throughout the conversation as well; they might be able to get to them within the conversation. So with that, I'm going to pass the mic to Gretchen Gigan, who is our intrepid product owner for OSF Institutions, and a librarian, and who is going to be leading this conversation.

Great. Thank you, Nadia. Hi, everybody. As said, I'm Gretchen. I'll take a minute to have each of our panelists introduce themselves. I'll start with the person immediately to my right, which is Dana.

Hello, hello.
My name is Dana Lopato. I'm an assistant professor in the Department of Human and Molecular Genetics, and I'm really excited to be here today. Is there anything else you want me to add?

No, I think that's fine. Let's keep it simple and get into some good conversation. Going clockwise, the next on my screen is Tim.

Hi, good afternoon. My name is Tim York. I'm a professor at Virginia Commonwealth University in the Department of Human and Molecular Genetics, and I'm also the director of the VCU Data Science Lab. Thank you for the invitation from the Center for Open Science.

Great. And then last, but certainly not least, is Nina.

Hi, I'm Nina Exner. I'm an associate professor and the Research Data Librarian for Virginia Commonwealth University.

Great. Thank you all so much for being here today. I have some questions, and I want to talk with you about the work you're doing at VCU. We also met a little ahead of time and came up with some things we would love to get some conversation going on among the whole group, so we may flip the format and ask some questions of our attendees. But as Nadia mentioned at the top, you can also ask us questions. I'm monitoring the chat and the Q&A, and we would love for this to be a very informal and lively session. The reason we wanted to talk with VCU in particular is that VCU has been a member of our institutions program for a while, and we can talk a little later about what that really means. I see we do have a number of other institutions-program members on the chat, so hello, all of you. But we wanted to talk a lot about VCU's particular work in their Data Science Lab, and the relationship of that lab with the campus and the library in particular, because we know a lot of our institution members come from libraries, and I may be a little biased as a librarian myself. So I'd like to kick it off by just talking about the Data Science Lab.
What are the goals of the lab? What's the mission? How did you go about creating it, and how was that process? Maybe Tim or Dana wants to kick us off on that question.

I'll grab that one. Save the best for later. The VCU Data Science Lab started officially in 2016. It started, as most great things do, with an idea over coffee with one of my former graduate students, Dr. Aaron Boland. We would meet once a week just to talk about things data science. This was a little while ago, so we probably started in 2014, and we'd been noticing things that Johns Hopkins was doing, and you'd see Berkeley in the data science space, and we wanted to do the same here at VCU. So we contacted our department chair and said, hey, what a crazy idea: what about starting a data science lab? Really so we could have our own sandbox to do data science-y things and to teach our students more about data science. We met with our chair, and he said, oh no, you need to talk to the Dean of the School of Medicine; we're human genetics, in the School of Medicine. We said, OK, this is getting a little bigger than we thought, but we went ahead and did that. We met with the Dean of the School of Medicine, and he said, no, wait, you need to talk to the vice president for research at the university, in the Office of the Vice President for Research and Innovation. We were immediately feeling like we'd gotten in way over our heads. Next thing we know, we're at a meeting with all the higher-ups of research at VCU. We got three slides into our pitch talk about the VCU Data Science Lab, and the vice president told us: stop what you're doing, you've already sold us, we're going to do this at VCU. So that's, long story short, how we began. The VCU Data Science Lab is funded out of the Office of the Vice President for Research and Innovation. We have a modest budget, and we mostly have a teaching mission.
And we can talk more about that. So I'll stop there and take any follow-up direction.

Just to follow up on that: it sounds like that was a nice case of units across the campus working together, but I assume you had to think about and make a case for why this was a good opportunity for the university. Can you talk a little bit about how you pitched it?

Yeah, so it definitely came from the ground up, two faculty members thinking, hey, this might be a good idea, rather than coming from the top down. We pitched it as: we need to formalize data science training at VCU, especially for graduate students in the biomedical sciences. The VCU Data Science Lab now serves all departments across the university. Our main mission was education, and we started two data science courses to really teach students how to handle data, and to do it in a way that was open and gave their results a greater probability of being reproducible. Remember, this was about 2014, 2015. Then in 2016, I think a really influential paper came out in Nature from Baker et al., asking the question: is there a reproducibility crisis? And that, I think, sealed the deal at VCU. In that paper, about 90 percent of the roughly 1,500 respondents, mostly researchers and scientists, said that, yeah, there probably is a reproducibility crisis going on, and we probably need to do something about it. I think that really helped us, in terms of timing, in getting our administration and our leadership interested in funding something that could help the research community at VCU tackle this type of problem.

Right. So you mentioned that you teach two data science courses. What are the other kinds of programs and work of the lab?

We also do workshops. We try, though it doesn't always happen this way, to have one or two workshops per year.
I think the majority of workshops revolve around introducing the research community, and anybody interested at the university, to the Open Science Framework, because it's a really nice, general, easy-to-understand introduction to how to put open science methods into practice. I'm happy to open it up to Nina or Dana if they want to talk about any of the workshops or classes more specifically. But I will say we partner with VCU Libraries and our research data librarian, Nina, and it's been a very successful relationship. The VCU Data Science Lab is relatively small: it's me and usually one other faculty member, plus we try to recruit students to help out. So being able to team up with our libraries and utilize a lot of their resources, especially their ability to organize, advertise, and put on these workshops, has really been a successful partnership for getting the word out about what we're trying to do.

Yeah, one of my next questions was actually about the relationship between the library and the lab. So maybe, Nina, you can talk to us a little bit about how you view the cooperation and the relationship, and the benefits and maybe the challenges, of this kind of model of working with the campus community.

Sure, I'd love to. But let me actually close out the previous question a little, too, because we do some things as the Data Science Lab simply as a team of people who are interested in data science and the open data space. For example, we have a couple of guest and co-lecturing roles in responsible conduct of research classes, and we have a more robust one in our transparency and reproducibility course, which our NIH T programs, our institutional trainee programs, run to increase knowledge about reproducibility.
Those are things where other sponsors running programs have said, hey, Data Science Lab people, you do this reproducibility, transparency, rigorous data science, evolving-analysis sort of approach to things; could you come and give us a talk on your part of things? So we also give a fair number of guest lectures and workshops, and a fair amount of other teaching, in addition to the Data Science Lab's signature course series.

On the library side, there are a lot of different perspectives on what the OSF is useful for. And for me, the Data Science Lab is really the practitioner perspective, the faculty or researcher perspective. How do graduate students or new faculty (it could be more established faculty, but I think mostly we get graduate students, postdocs, new faculty members, and new labs coming in) learn to use a tool that can help them with their data sharing, with their pre-sharing data management (in a way that can be shared later if they want), with their collaboration management, with all the different solutions the OSF can work on, but very much focused on the researcher's side of why and how to use OSF? Which, of course, is really the starting place for everything, and the rest of us get our benefits from that. Why do we want things like a robust preservation plan? Only librarians and other preservationists really care about the robustness of the preservation plan. A faculty member just wants to be able to ask someone who cares about that stuff, is it good enough? You said it's good? OK, good for me then. The digital preservation process is very interesting to librarians and of not a whole lot of interest to everyone else; they just want to know it's good.

So I personally identify with that statement. Go ahead.
So one of the things that's good about our collaboration is that we can bring our various strengths but really focus on the researcher's learning process.

It sounds like maybe one of the things that makes it successful is understanding what your particular niches and strengths are, and being on the same page about that, as opposed to confusion over roles. And I'd imagine that comes about through communication and working together on a lot of these things. You mentioned OSF and how it fits into the picture, and I would love to talk a little bit about that. So maybe Dana would like to weigh in: how does OSF fit in, in particular? What do you think is its particular niche or particular strength as a tool? We recognize that OSF is one tool, and I think one of its strengths is that it can be one tool as part of a larger workflow. So what are your thoughts on how you use OSF and how you incorporate it into the work of the lab?

Yeah, sure. There are sort of two areas where I would talk about OSF fitting in, and both of them are in the education space. Specifically, with our data science class, we want the graduate trainees to use something that they could then also use outside of class, directly for their research, making sure that the training we're doing in class, the didactic coursework, is an investment directly applicable to what they'll go on to do. And the fact is that we have students in our data science class from all over VCU: the School of Medicine, the School of Pharmacy, the School of Nursing, the Graduate School. To pick one tool that they could all use, that is amenable to all of their resources and research areas, I don't think we could do much better than OSF.
We looked around; we tried to shop and find something that everyone would be happy with, and OSF by and large was the winner by a mile. You'll have folks say, well, I do computational work, how about we use GitHub? And you say, well, GitHub is great for what it does, but OSF can play nicely with GitHub, and folks who are not familiar with that specific tool can still interact and collaborate through the OSF. That third-party add-on ability is wonderful, just for saying: this is going to be agnostic to departments, agnostic to approaches; let's be inclusive; welcome, everyone; we can all work together here. It's a nice area there. And then, backing up to the RCR training that Nina and I do: again, we have lots and lots of different programs represented, and they are interested in pre-registration, they are interested in preprints, they are interested in how to make their research materials shareable when they're ready to share. OSF is just a natural conduit for those kinds of conversations, and for talking about collaborations and finding other people who are using similar tools. It's just so useful in that regard. It's been a natural, I want to say compromise point, but people don't have to give anything up; that's what's nice. If you're using GitHub, you can still use OSF, and the same goes for so many other programs. But GitHub has been the main sticking point, and that's why it comes to mind first for me: it's like, I don't want to give up my version control. You don't have to. It's all good; we can all work here.

That's great to hear. And I will say, over the coming year, be on the lookout for more and more kinds of OSF integrations. It's a big goal for us right now. Nina, did you want to add anything about the R stuff that you just put in the chat?

Yeah, I realized that I jumped straight to OSF; for some reason, OSF is on my mind right this second.
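The third-party add-on model described above is also exposed through the OSF's public REST API (v2). As a rough, hypothetical sketch, listing the storage providers attached to a project (OSF Storage, a linked GitHub repository, and so on) could look like this; the HTTP response here is a canned payload shaped like the API's output, so the parsing logic is visible without a network call, and the project id and provider names are placeholders.

```python
import json

# Hypothetical JSON payload shaped like an OSF v2 API response for
# GET https://api.osf.io/v2/nodes/<project_id>/files/
# (a real call needs a network connection and, for private projects,
# a personal access token in the Authorization header).
SAMPLE_RESPONSE = json.dumps({
    "data": [
        {"attributes": {"name": "osfstorage", "provider": "osfstorage"}},
        {"attributes": {"name": "github", "provider": "github"}},
    ]
})

def list_storage_providers(raw_json: str) -> list[str]:
    """Return the storage add-ons reported in a files listing,
    as given by each item's `provider` attribute."""
    payload = json.loads(raw_json)
    return [item["attributes"]["provider"] for item in payload["data"]]

print(list_storage_providers(SAMPLE_RESPONSE))  # ['osfstorage', 'github']
```

This is why collaborators who never touch GitHub can still see and fetch the same files: to the OSF, a connected repository is just one more provider in that list.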
But a lot of the Data Science Lab projects are really around R and RStudio use as well. We do a lot of workshops on getting people started in the OSF, but a lot of what we do (I should say mostly what they do; I don't really do a lot of the coding part) is fundamentally around how to make the shift to computational approaches instead of point-and-click graphical user interface approaches, and transitioning away from Excel. Outside of my Data Science Lab work I do a lot of Excel-based stuff; inside my Data Science Lab work, though, it's about helping people transition away from Excel, which is much more error prone. And also about using things like version control. I think I saw at least one person who works in RCR in the introductions, which are above my camera; that's why I'm staring above my head like a crazy person. Things like having a change log, too, so that you can see everything that has been done with the data in the past. Those kinds of benefits are a mix, because you can push the R computation to OSF so that it keeps track of your versions, and you can jump backwards if you had an error at some point, things like that.

Great. So I want to maybe switch a little, and we can talk about some larger goals and larger challenges. What are the challenges you encounter in your mission of increasing reproducibility? I assume that's one of the big north stars, right? How do you approach that, and what are the challenges that you see there?

Sorry, got distracted by the chat. So the question is, what are our challenges in introducing these reproducibility-oriented, modern computational tools and methods to the research community? I think that's a great question. Our angle has always been to get them while they're young; it's a bottom-up sort of approach, and that's why we keep talking about these courses.
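The change-log benefit mentioned above can be illustrated with a tiny sketch (the file names and actions are invented placeholders, not any specific lab's pipeline): each processing step records what was done and when, so anyone can see the data's history instead of facing an opaque, edited-in-place spreadsheet. In practice the OSF's built-in file versioning records each upload automatically; this just shows the idea.

```python
from datetime import datetime, timezone

# Hypothetical in-memory change log for a dataset; a real pipeline
# would write this to a file stored alongside the data.
changelog: list[str] = []

def log_change(action: str) -> None:
    """Append a timestamped entry describing one processing step."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    changelog.append(f"{stamp}  {action}")

log_change("Imported raw_survey.csv (placeholder file name)")
log_change("Dropped 3 rows with missing consent flags")

for entry in changelog:
    print(entry)
```

The same habit, kept in a script rather than done by hand, is what makes it possible to "jump backwards" to the state before an error was introduced.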
We have a sequence of two three-credit data science courses, open to anybody in the university and directed primarily at graduate students. Our philosophy is to teach the students; the students go into the lab and demonstrate to their PIs: oh hey, look at this really cool stuff we can do that's going to protect our data and hopefully help make our results more reproducible, and more members of our team can access, say, the scripts, the code, the data, the results, et cetera. And then the PIs say, hey, this is fantastic, and they send more students our way. In terms of challenges, I'd say that positive feedback loop, I'll call it, has taken a little bit of time to get going, but I think we're hitting our stride now, and the word is out about what we can do as a Data Science Lab. We're small right now: really two members and a handful of students. So I'd say the first challenge was getting the word out, and that was maybe primarily due to our strategy; but I'm still going to defend that strategy of teaching students versus trying to convince PIs that they need to change their workflow because we know better. And so I think the other... Yeah? No, please go ahead.

Sure. I think the other challenge is that more and more pockets of data science are popping up around our university, and one of the challenges is tracking all the different people who are interested in data science and who have data science skills, because we're likely not utilizing all the talent that exists at the university. For instance, in our computer science department on our academic campus, which is separated from the medical campus by about a mile, they've started a data science certificate and master's program, and in biostats they started a genomic data science program.
So one of our challenges is to keep track of all the innovations in data science that are happening at our university. Dana, Nina, any other challenges from your perspectives?

I would say two sorts of challenges. One, if we had more people, we could have more sections of the class; the class routinely goes to a wait list, and there are only so many seats in the classroom, and we just don't have the personnel. So if we were able to do more sections, that would be great. And then, when I think about one of the difficulties on the medical campus with encouraging reproducibility: you can find a lot of people who are eager, and they say, well, how do I do this with clinical data? How do I do this with certain protection levels? So it's about making sure we talk to people about the options that exist and walk them through them, making sure that we are cognizant of everyone's needs and data security issues.

Great. Excuse me. Sorry, remembering which way I'm going here. These are all great. I think one of the other big things we discussed, and maybe this can be the beginning of the audience participation, although I do have one other thing I want to talk about: we've talked about these challenges, the strategies you're using to address them, and what you're trying to achieve. But a big question is, how do we determine if we've been successful? What kinds of measures of success do we think are valuable? What are you looking for? And we'd love to hear, through the chat, ideas from the audience about things that they find to be real key indicators of success for your overall open science efforts.
Yeah, along those lines, while people are putting their ideas in the chat: for us, starting small, having students fill the seats in our courses and having a wait list gives us some indication that what we're teaching is useful. There's that positive feedback loop, where we hear stories about PIs and program directors saying, oh, you need to take the data science courses; they teach you how to utilize R, how to automate things and avoid manual mistakes like Nina was talking about, and how to utilize the OSF to your advantage like Dana was talking about. For us, those are indicators of success. The Center for Open Science also gives us a report with some metrics from the dashboard, which are still developing, and that gives us a sense of how many individuals at our university are utilizing the OSF. We always see a bump after we do a workshop, or when students in our class have an assignment where they have to create an OSF page. So to us, those are very simple metrics of success. But I would love to hear everybody's ideas about what they track, or other ideas for gauging whether or not we're getting the message out to our research community. It's a little hard to benchmark.

I think I'd also like to see how other people (it doesn't have to be the OSF in particular), how other campuses, are thinking about tracking, benchmarking, or assessing and evaluating their uptake of open science, reproducibility, or transparency practices. In addition to things like how many registered users we have, especially proportionate to the size of the campus, and how many people register for the class. Although the class series always maxes out, so if that's the goal, it's not going to represent growth necessarily.
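The normalization idea mentioned above, registered users proportionate to campus size, plus the "bump after a workshop," can be expressed as two simple calculations. All numbers below are invented placeholders for illustration, not VCU figures.

```python
# Hypothetical adoption metric: registered OSF users per 1,000
# members of the campus research population, so that campuses of
# different sizes can be compared on the same scale.
def users_per_thousand(registered_users: int, campus_population: int) -> float:
    """Registered users normalized to a population of 1,000."""
    return round(1000 * registered_users / campus_population, 1)

# The post-workshop "bump": percent change in new sign-ups in the
# period after a workshop versus the comparable period before it.
def workshop_bump(before: int, after: int) -> float:
    """Percent change in new sign-ups after a workshop."""
    return round(100 * (after - before) / before, 1)

print(users_per_thousand(420, 6000))          # 70.0 users per 1,000
print(workshop_bump(before=25, after=40))     # 60.0 percent increase
```

Per-capita figures make cross-campus comparison meaningful; raw counts mostly reflect institution size, and a maxed-out course, as noted above, measures capacity rather than growth.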
It's really a ceiling on capacity. I think the better question is how many people change their practices afterwards, which can be a small number in a way; but think about how many people you train on a thing who then actually go and implement it, with a major change in behavior afterwards. You would expect maybe one in a hundred people to totally change their approach to how they're handling their analytical pipeline, from ad hoc and closed to computational, reproducible, and open. I would normally expect that to be a very small number, so that's an important story: knowing how many people have made a major change to what they're doing.

And I think we get that information anecdotally. We hear it through the grapevine, or we get an email, or, like I said before, they send their students to our classes and our workshops, or they attend themselves. That's how we get that information. I've been hesitant to send out a Google survey to the university explicitly asking people about their behaviors, but I suppose that's an option too.

I was flabbergasted the first time someone said, I need methods help with my thesis, can you be on my thesis committee? Oh, and also, I need to pre-register. I was like, you want to pre-register your thesis? Really? It wasn't someone I had worked with in data science specifically; they weren't in data science specifically. They were actually doing social mixed methods, and they still wanted to pre-register, and I was like, wow, that's really cool; let's see how we can get through this.

That's really interesting. It ties together a couple of things I wanted to talk about, and also a question that we have in the chat. So follow me on this; my brain is pulling all these things together in real time. You talked a little bit about the institutional membership when talking about the metrics that you get in terms of who's using OSF.
There's a question in the chat also about the role you might play in facilitating institutional engagement, as opposed to encouraging individuals. Is that something that you think about? Is that something that you plan for? By that I mean an institutional message related to OSF usage, and/or the benefits of being involved as an institutional member, which provides you things on top of what you get with OSF for free: the metrics, the community events we're trying to build, the SSO sign-in, and those sorts of things. Is there any thought about the larger impact of that membership, and whether or not there's a plan for building that, or thoughts on how you could get more out of that institutional benefit?

Yeah, so here's one idea. I'm not sure it really answers the question, and it's not really contingent on an institutional membership, but I think the institutional membership gives this process legitimacy in some respect, when you actually sign in using your VCU credentials, your university credentials. The idea is that instead of targeting just individuals through the class, in a ground-up approach, the institution itself signals that it's important to utilize these open science practices. So we've thought about things such as, for institutional grants, pilot grant money: researchers who are awarded grants would be required to put the products of the research onto an OSF page, whether private or public, and that could differ for the different components within the OSF. The products of the research that the institution funded can then be available to other researchers more rapidly, because pilot studies don't always get published. That's a way to utilize the OSF at the institutional level to make sure that what they're funding is stored somewhere, not just on a PI's laptop, and available to the rest of the research community at VCU, so
that's one idea.

Our campus does have a data cataloging initiative, which is run somewhat separately; it's housed in Comtec, I think. No, it's not even totally separate; it primarily has a different goal than overall visibility of datasets. But institutionally, again beyond the DSL, though in collaboration with it, several of us have been in a lot of the same meetings, and there's been discussion. We're NIH-heavy, but even in the larger context of reproducibility, and also some of the White House science and technology policy discussions: data that is shared, that comes from research, needs to be in an environment that is harvestable by regularly used search engines. It needs to have a DOI; it needs to have some sort of JSON output of the metadata, which probably, for a lot of folks, just sounded like I went into a foreign language; but it needs to be compatible with the right kinds of things so that if someone is Google-searching for a dataset, it'll come up. For non-techy people, that gets into more gory details than anyone really cares about, but when we talk to the IT folks, they say, oh, it's compatible with this, and it's got that kind of output, and it's got this kind of benchmarking from a technological perspective. So in that sense, institutional discoverability is supported, and when we talk about institutional discoverability, usually with our tech heads on or our policy heads on, we tend to get good responses. So I wouldn't say it's been a driving part of the initiative, but people do tend to say, oh, we're already doing something with a thing that's schema.org metadata compatible? Phew, that's one less problem on our plate. So it fills a need, but I don't know that we've got a current cohesive initiative around it. That might not be an entirely satisfying answer, Nicholas; I'm so sorry.

Yeah, if there's a follow-up on that, Nicholas, please do put it in the question section or the chat. Would anybody
else like to weigh in on anything before we switch over to some more open discussion questions? Any other follow-ups to questions we've asked, or something we haven't mentioned? I've done an excellent job interviewing, I guess. Which is not true; that never happens. Okay, so when we met a little bit ahead of time, we talked about some questions we would love to get input on from the attendees. We already talked about one, about success. Another one we talked about is whether people are specifically supporting reproducibility, or just repository services. Nina, I don't know if you want to follow up or add some more detail to that, since it was your question.

Reproducibility and transparency mean so many different things to so many different people, and it's such a complex continuum. I see a lot of people in the participants list whom I respect a lot for their contributions and hard work in transparent data sharing, so I'd be interested to hear what kinds of advice, strategies, or starting points people have thought about. We can throw some of ours in there too, but I'd love to hear from the audience.

Why don't you answer your own question?

So, there are a few different things I think we do, and a lot of them are different approaches to the flexibility of how to use the OSF or other tools, especially RStudio as well. Things like: how do you transition to it as a lab notebook, a sort of collaborative note-taking environment for one or more people? That's one use case we tend to chat about with users, presenting a few examples of how this lab has done things this way (small-l lab). That gives some people a way to envision it: all right, I've got this wiki, which I could take notes in or do other stuff in; I've got this file-dropping area; I've got these plugins, or add-ons in OSF parlance; I've got this change log, and, from a lot of users' perspectives, I don't even know why it's there. But what do I do with those different parts? So one of the ways we work with that is by talking through use cases: if you wanted to use it as an electronic lab notebook environment, here's how you would do that; if you're just a basic science person looking for somewhere to share a couple of spreadsheets, a couple of Excel files, to link to your latest journal article, here's our use case for how you do that. Sometimes that's an OSF-centric way of doing things. At the opposite end of the spectrum, in a non-technological approach, we've had ReproducibiliTea discussion journal club events (that's Tea, as in the tea that you drink). So we've done journal club open discussions that are just, here's what's happening. Those are a few types of initiatives. I highly recommend ReproducibiliTea; it's a great starting place for people who want to do a small-size reproducibility-and-transparency something. But I'd love to hear other people's thoughts on what they're doing on their campuses.

We are still open to you adding your thoughts in the chat about how you're addressing reproducibility, in addition to repository services and requirements. We do have a related question in the chat, though, about advertising the OSF platform, especially OSF Projects. Specifically, Moriana is asking about advertising this to humanists and humanities departments. I know that may not be in your wheelhouse, but at some point humanities will have to share data if it's federally funded; that's part of the new White House mandate. So do you have, yourselves or in the chat, examples from other institutions about that particular audience and how you might reach out to them?

This question has exposed our soft underbelly. We struggle. We're based on the medical campus, so we don't have a strong presence on what we call our academic campus, which is geographically away from us, and that's where we need to move to. So any ideas, we would love to hear
them from anybody else how these technologies can be introduced to humanities departments we have a big art department here at VCU and I'd love to figure out how to use some of these techniques and tools how people in those departments might be able to use those tools we did do a blog on the Center for Open Science about replication in the humanities and case studies I'll just put that in the chat looks like we have a hand raised yeah thank you I didn't notice that Moriana yeah that's Moriana asked the question so please follow up I'm the one with the question yes I'm following up because we have other products for example for scientists like lab archives where they can you know create their lab environment but I have a science background too so what I struggle to think about how can I reach to the humanist I'm a scholarly communication librarian and what I think is the OSF projects can provide a platform where humanists can collect all the materials that they use for their research yes humanists do research it's just different it's full of images and articles and books and I don't know documents but it could provide them to a place to start collecting everything and sharing and collaborating with others that's where I'm focusing on the project so I just wanted to yes I think where is I'm going and the digital people working in digital humanities projects might be I think the prime targets for this but yeah and it would be a good if you do work at a university with the digital Humanities Center or some folks working in it I mean that's a great first place to try and do some collaborative effort with because it's data mining right in a lot of cases and it's a very data focus discipline in some some parts of it I actually started out in digital humanities so it has many many different kinds of ways it can be done but thank you I think it would be a lovely place to put maybe not the interface for a portfolio although it's feasible that way but it wouldn't have a sort 
of look and feel. But you could put all of the objects that you were going to put in a portfolio there, and that would seriously decrease the amount of storage you might need in whatever design platform you use for the interface part of the portfolio. Also, Project TIER, which is more social sciences than humanities, but for the social humanities and some of the overlap areas, Project TIER has an OSF starter arrangement that a person can clone if they want to use it. I will look that up and stick it in the chat.

We have an example of a... I'm sorry, Dana, please, you go ahead; I've talked enough.

I was just going to say we've also had good conversations through the libraries' open educational resources group, because that brings people together across the academic and medical campuses. So we've been chatting about whether OSF could help with some of these courses, and also with projects in the courses, to get folks used to using the OSF to organize their independent research projects.

Yeah, we've also heard from a lot of folks about OER over the last couple of years; obviously a very hot topic across higher ed.

I was going to say, I can think of one example in OSF, because you can redirect the URL: you can put your materials in OSF and redirect to a different homepage that may display or contextualize them better than the OSF does.

And one of the things, for institutional members but it will be available more broadly: very soon we're going to be starting a group to gather a lot of these support resources, examples, and templates, things to help folks who are trying to figure out all of these flexible ways of using the OSF, as a better starting place for understanding the possibilities. Does anybody else have anything they'd like to add? We do have something on metrics in the chat, but any other thoughts about humanities, or just promotion, engagement, and outreach?

I would just say that coming up, in terms of outreach, our partnership with VCU Libraries in particular, and I think you might be organizing this event, is Love Data Week in the middle of February, a whole slew of events over multiple days, and I think that's actually on the academic campus. Part of our Love Data Week is going to include an OSF workshop.

Glad to hear it; we'll follow up about that. Great, it looks like we have another hand up, from Leanne. Go ahead and ask your question.

Yes, hello everyone. It's been great to listen to this discussion, I think on both sides of how to expand this, but also in terms of looking at metrics. One thing that may be helpful is actually having these conversations with the departments: what does it look like for them? You all mentioned that reproducibility and transparency look very different across departments, but it's certainly something that, as a qualitative researcher, I know has been on my radar: how do you improve this? And although it may look very different from the data science groups, asking how these tools can be helpful for them, how they fit into their workflow and view of research, and what this means for open science within these disciplines, can be really helpful for expansion.

Mm-hmm, yeah, that's a really great point. We obviously are looking at OSF and reproducibility from a mile high, so understanding the details is important. And it all comes down to personal relationships in some sense, relationships across campus and with departments. In the chat, also related to metrics, Nicholas noted that encouraging institutional buy-in into things like HELIOS, the Higher Education Leadership Initiative for Open Scholarship, can be a great way of thinking about things and seeing who's there in the open. He listed a link to the members, and showing an administrator that their peer institution is involved in something that they're not is usually pretty persuasive. While we wait for another question, was there another question that the
panelists were interested in asking? I'm looking at what we had the other day. Any other questions about measuring success, or promotion and outreach? Yeah, Nina mentions in the chat the Scan All Fish project, which is our favorite example project here at the Center for Open Science. There's also a project that has hundreds of images of crocodiles, I think. They're really fascinating projects.

I think in terms of the institutional membership features specifically, being able to get some more metrics on our user base would be helpful for us: knowing where across the university OSF, for instance, is being utilized, and maybe which departments and schools we need to target. Maybe a year or two ago I started exploring getting some of that metadata about the registered users, and that's something I need to revisit. I do see now that in the public profile information on OSF for each user, there are fields to enter your department or institute and your job title, so maybe we could access that information. I think for people in my position, knowing where to target resources for getting the word out about, for instance, OSF would be useful. Right, like you've mentioned anecdotes about someone who's using it approaching you and you weren't aware, right? So knowing where that activity is happening.

And that is something that we can support. It requires a little bit of additional coordination to get that information out of your directory service, but it's possible to do, and we can follow up with you, and for anyone else who's a member out there, that is something we can work on. We're also going to be starting a project, hopefully in the next two weeks, to begin talking about how to improve our metrics dashboard and our metrics report: to see what other metrics would be really useful, and what else we can tweak to find what would be helpful. In terms of those metrics, particularly for the library audience, I sometimes like to pitch it as: the people on your campus are going to do what they want, and they're already using OSF, so you might as well know what they're doing and get a sense of where that activity is happening. Not that using OSF is a bad thing; it is a good thing. So it can be a virtuous circle, right? You find out who's already using it, and then you know where to target your efforts to get even more folks to use it. So those factors are key to success. And yeah, we have a link to Crockbase in the chat, and I recommend to anybody who's got some time to kill to go check out both of those projects, Scan All Fish and Crockbase; they're pretty fun.

We're getting close to the end of the hour, so I just want to put out a last call: if anybody would like to ask a question or make a comment in the chat, or raise your hand. And also to our panelists, if there are any last things you would like to mention, please go ahead now. But if not, we can move to thanking you for being a part of this. It was really great to talk with you all. We appreciate it, and we hope that folks out there are interested and inspired and want to learn more about OSF, and want to take some of these ideas back to their institutions to increase open science and reproducibility. So I'll stop talking if there's anybody else who would like to make any last comments.

I'll just end by saying thank you for the invitation. For use cases for the OSF, the sky's the limit. Our data science lab is small, and I think it's only going to grow and get better as we include other members from across the university to share what potential use cases could exist. We're focused on biomedical research right now, but expanding, I think, is the next stage for us, so that this OSF and open science outreach can propagate further throughout the university.

Yeah, great. Well, thank you, everybody. Just a note: we will be sharing the panelists' contact information and a recording of the webinar with everyone who registered, so be on the lookout in your email for that. If you have any questions about OSF or OSF Institutions, I am happy to talk about it. Other than that, I'll just wish everybody a good afternoon, and thanks again to all our panelists and all our attendees. So thanks, and we hope to see you at the next one.