Hi, everyone. Welcome to the Open Science Symposium hosted by Carnegie Mellon University Libraries. Thank you for joining us. My name is Melanie Gainey. I'm the director of the Open Science and Data Collaborations Program and a liaison librarian here at CMU Libraries. I'm just going to talk a little bit about our event today and how we expect it to go. Today is our fourth Open Science Symposium, and the second that we've held virtually. The day will be a series of short invited talks with panel discussions that address the opportunities and challenges of practicing open research. I'll describe these sessions briefly right now. Basically, the way this will work is that we'll have three short talks on a given theme or topic, and at the end of each talk there will often be time for one or two quick questions from the audience. Then, at the end of the three talks, we'll bring back our three speakers for a round of Q&A together. So if you have a question that's a bit broader, one that several of our speakers might be able to answer, you might want to save it for the panel Q&A sessions. We have a really diverse set of speakers here today, representing many different disciplines and research methodologies, and many of their topics are very interdisciplinary as well. Our speakers come from academia, industry, government, publishers, and nonprofits, so they really represent a lot of different perspectives on open science. Our attendees are also coming from a lot of different places today: we have many researchers from different disciplines attending from institutions across the country and abroad, and we also have many library professionals in the audience. We're really excited to have a lot of discussion today among this diverse group, and we really encourage questions. So don't be shy.
If any questions occur to you, please feel free to put them in the Q&A as you think of them, and our moderators will ask our speakers the questions when there's a chance to do so. The first Open Science Symposium was in 2018, and it was actually the very first initiative we had as part of the Open Science and Data Collaborations Program. Since 2018, our program has grown to include a pretty wide set of services that support transparent, reproducible, reusable, and publicly available research across disciplines here at Carnegie Mellon and beyond. We do this by providing tools, training opportunities, community-building events such as this one, and collaboration opportunities. You can learn more about our program at this link here, and we also have an open science newsletter, which you can get to on that page, that will let you know what we're up to at any given time. So we are five years out from our very first Open Science Symposium, and it's actually really interesting to look back at that initial program and see what has happened with open science in the meantime and what topics we've discussed over the years. I'm just going to give a brief summary of today's sessions and how they fit into the context of our other symposia. Session one will be talks about a few fairly new initiatives here at Carnegie Mellon that center open science in research and learning. Session two will be about the intersection of open science and communities, and here we're thinking about communities both in terms of how we generate open data sets for community use and how we bring communities together to generate open source products for communities. Session three will be about the impact of policies, a topic that has often come up in the Q&A discussions at our past symposia.
It's been noted many times that there are not enough incentives right now for data sharing, and for people to take the time and effort it takes to make their data sets reusable for others, and that institutions have a role to play in this. We've never directly addressed this at past symposia, but today we have speakers representing government and institutional perspectives on it, and we'll be talking about the role that these policies can play in driving and incentivizing changes in behavior around open science. Also note that there have been a lot more conversations on this topic in the last couple of years because of HELIOS, so this really felt like the right time for us to have this session. In session four we'll be talking about open access publishing. This is a topic we've talked a lot about at past symposia, but it's also interesting to see how much it has evolved in the last five years; it's been changing very rapidly. For example, at our very first symposium in 2018, we had a talk about the value of arXiv, the preprint server, given by a physics professor from Carnegie Mellon; it was about the value of preprints to the physics community. We had that talk because at that time bioRxiv was online but not yet widely adopted by biomedical researchers. There was still some hesitancy around it, and some journals discouraged its use. In the five years since that first symposium, the behaviors around preprints in the biomedical sciences have dramatically changed. We now have medRxiv as well.
A couple of key things have really driven these changes in behavior: one being the COVID-19 pandemic, where we saw a dramatic need for rapid publishing in the biomedical space, and the other being changes in NIH policy, where the NIH began allowing researchers to cite preprints in their grant applications and progress reports. That policy incentivized the behavior and has really dramatically shifted the norms. So open access has been evolving quickly, and we're excited today to have speakers talking about some very new things happening in OA and some initiatives that are really pushing the envelope of what's possible there.

I'm just going to talk briefly about how you can navigate this virtual conference. The whole day will be on Zoom, 9 a.m. to 4 p.m. Eastern Daylight Time, with short talks and panel discussions, and you can ask questions using the Q&A feature in Zoom. You can also upvote questions by pressing the little thumbs-up icon on a question. As I said, just feel free to put questions in there as you think of them; we're really hoping to have a lot of back and forth between our attendees and panelists today. We also have a community notes document, an interactive Google Doc you can use for links, discussion points, and comments, and you'll be able to access it after the event. You can also find all of the logistics info for the event at the top of that document. We have a full code of conduct for our event today; we hope that everyone will be respectful to everyone involved with this event on all of the platforms. If you go to this link, it will show you how to report a violation of the code of conduct, and if anyone is being disrespectful, we will remove them from the conference. And finally, before we get started, I'd really like to thank the organizing committee for this symposium.
These are all of my colleagues that I work very closely with on open science at Carnegie Mellon University Libraries. They all put a lot of work into this, and I'm very thankful to them. Some of them are going to be moderating today, so you'll see them on camera, and some of them are helping this run smoothly behind the scenes. So a huge thank you to them. We also thank our Dean of University Libraries, Keith Webster, for his support of open science and this event. Okay, so with that, we're going to get started with the exciting part of the conference: the actual talks. I'm going to turn it over to my colleague Chas Griego, who will be moderating the first session.

Thank you, Melanie. Hi, everyone. My name is Chas Griego, and I'm a science and engineering librarian here at CMU Libraries, so I'll be moderating this first session. You've already heard this a few times, but just one more quick reminder of how the session will run. Each speaker will give a short talk, and then we'll have time for one or two questions. After all three talks, we'll invite all of our speakers back on screen for a panel Q&A. So if you have any questions that might be good for all of the speakers, go ahead and hold those for the panel Q&A at the end of the session. For all questions, please put them in the Q&A and upvote any that you'd also like to hear addressed. Our first session is focused on new initiatives here at Carnegie Mellon University that promote transparency, reproducibility, and public access to information. Our first speaker is Subha Das, an associate professor of chemistry at Carnegie Mellon University and the director of the ChemZone outreach project. Professor Das, you can go ahead and share your slides when you're ready.

All right, does that show? Yes. Okay. Now hopefully there's not too much of a lag, and hopefully I'm unmuted as well. Okay. All right. Hi, everyone.
I'm just going to tell you a little bit about remote-control science via the cloud lab we've been using. I've used it a little bit for research, and the idea is to use it a lot more for research, but initially it's been for teaching a few classes. Just by way of acknowledgments: I've been teaching a class using the cloud lab since fall 2020, which worked out great in terms of the timing. I hadn't intended to teach it remotely, but that's how it worked out. Since then I've developed the class based on the initial training from the Emerald Cloud Lab. We have access to that thanks to Brian Frezza and DJ Kleinbaum, who are the founders and CMU alumni, and then Malav and Ben helped train me and helped get the classes going. I also want to thank Zach and Laura from the Eberly Center for Teaching, who helped, as I'll mention later, get some of the training modules into an online format so we could get this out to a lot more people than traditional formats allow, in that way opening this up to even more people. If you're not familiar with this, the cloud lab was started by Emerald Cloud Lab around 2017 and has mainly been used by pharmaceutical companies and other commercial interests. But through a partnership with Emerald Cloud Lab, CMU is opening the first academic cloud lab, if you will. Based on conversations, they were excited about CMU having a cloud lab because an academic environment is much more open. Mainly, there's so much data in the cloud lab: not just the data from the experiments themselves, but also data about the experiments and how they were conducted, because all of that metadata is collected through the shared, automated instruments. For their commercial clients, all that data is essentially not usable by others, right?
So in this case, as academics, we're also excited to have this data available to a lot of others. The cloud lab itself is an automated platform that can be operated remotely. It allows you to collect all of this information and, in a way, use new technologies like AI and ML to push how the science is done. It also gets people from a lot of different disciplines together, mainly because you're using a shared workspace. So the cloud lab is essentially a remote-control lab. There's a central code-based platform; this one runs on Wolfram Mathematica and its symbolic language, which forms the framework connecting all the instrumentation. That's the software platform that runs all of this. Everything is essentially traceable. Everything you do, you write the code for it, and everything has an ID: whether it's the room, the notebook page, some object, a data object, a piece of tubing, or an instrument, everything has an ID and everything is traceable. The cloud lab itself has about 200 different instrument types, everything from chemistry to biophysical and biochemical experiments, and you can also do other computational experiments. The CMU cloud lab will have a few more instruments than ECL's: more cell culture and more biological-type experiments, sequencing, and so on. There's a full list of instruments at emeraldcloudlab.com. I'm not sure if the CMU cloud lab instrument list is public yet, but you should be able to see that. So what's the advantage of the cloud lab? One is that the idea is to do reproducible science, and I now actually have data to show that, right?
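The point about traceability, that every room, notebook page, sample, data object, and instrument carries its own ID, can be illustrated with a small sketch. To be clear, this is not ECL's actual schema or API; the `LabObject` class and its fields are purely hypothetical, just to show how universal IDs let you walk the provenance of a result back to the sample and instrument that produced it.

```python
import itertools
from dataclasses import dataclass, field

_counter = itertools.count(1)

@dataclass
class LabObject:
    """Any traceable entity in the lab: a sample, an instrument, a data object."""
    kind: str                                   # e.g. "Sample", "Instrument", "Data"
    name: str
    links: list = field(default_factory=list)   # IDs of related objects
    obj_id: str = field(init=False)

    def __post_init__(self):
        # Every object gets a unique ID the moment it is created.
        self.obj_id = f"{self.kind}-{next(_counter):06d}"

# A measurement links back to both the sample and the instrument that
# produced it, so the full provenance of any result can be walked later.
sample = LabObject("Sample", "DNA stock, 100 uM")
plate_reader = LabObject("Instrument", "UV/Vis plate reader")
data = LabObject("Data", "absorbance curve",
                 links=[sample.obj_id, plate_reader.obj_id])

print(data.obj_id, "->", data.links)
```

In the real platform this bookkeeping happens automatically whenever a protocol runs; the sketch just shows why attaching an ID to everything makes every result traceable after the fact.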
It should allow active-learning-based methods for putting together these experiments. The other thing I'm excited about, and that we've already started doing, is how it allows collaboration. One aspect is that the data is homogeneous, because everything is on this platform. When you're working with collaborators who are also using the cloud lab to perform experiments, you get the data and everything else in the same format, and that's a non-trivial issue: you want people to share data, but you want them to share it in a similar format regardless of who's giving it to you. I've already covered the breadth of instrumentation and so on, so I'm going to skip this. I've mentioned reproducible science already. Something we wanted to do is describe our methods really well, and everything is in a notebook in the cloud lab. So in the cloud lab, once you share the notebook, another researcher will be able to get it; I'll show an example of what the notebook page looks like. The thing I'm really excited about, and that I wanted to train students in, is improving access to this remotely operated instrumentation for researchers and students at other institutions. This past summer, in fact, I had students from Spelman and Morehouse College in Atlanta. I visited for a conference at the end of April, and then, since I taught a course in the summer, we were able to get some of their students here. The idea is that having all this instrumentation available to others also frees those institutions from having dedicated instruments and labs. If you think about it, there's a lot of instrumentation in teaching and undergraduate labs that sits there but is not used when the lab is not in class, so the amount of time an instrument is actually used is low.
If you can increase that utilization, it's more efficient, and the instrument actually lasts much longer if it's kept running. So we hope to facilitate these partnerships and access, as well as other academic partnerships, partnerships with industry, and so on. This timeline is from an older slide; there have been construction delays, so the internal rollout is still ongoing, and the cloud lab will open in January 2024, per the information as of two weeks ago. One reason this has taken slightly longer is that Emerald Cloud Lab, which used to be in South San Francisco, shut down there and moved to Austin because they're expanding quite a bit: the old cloud lab was 14,000 square feet, and they're moving to a 102,000-square-foot facility in Austin. I mentioned DJ and Brian, and that we're doing this remotely; the picture behind me is actually a picture of the cloud lab being set up at Carnegie Mellon. This was in August, before I left Pittsburgh. So now they're just putting everything together and getting it online. If you want to take a virtual tour, you can go to emeraldcloudlab.com and walk through and see what the lab looks like, so I won't do that here. It's remote-controlled and automated, but, as you can see in the picture, there are people helping do things: technicians who move things around, but they do it only according to a specific script. So in that sense it's still automated, still based on the commands and the program and script that you set up. In labs everywhere in the world there's already a lot of automation, right? In my own lab we do a lot of DNA and RNA synthesis and related work, so we have automated synthesizers for DNA, and those are also computer-controlled.
The difference between that and the cloud lab is that now it's remote: I can run the DNA synthesizer, or any other instrument, from anywhere, and the cloud lab also collects a lot of data about how the work is done. When I do a DNA synthesis, it might sometimes fail; in the cloud lab format there are all kinds of other sensors and instruments tracking humidity and environmental conditions, and that helps in troubleshooting. So how does the cloud lab work? You set up the experiments from a computer; I can set up experiments from here. The experiments are run, the data is collected and stored in the cloud, and then you can analyze and look through your data. I'm going to skip some of the details of exactly how these are run. My goals in looking at this were really reproducibility, accessibility, and also efficiency, because if I can do experiments with just the right amounts, and not have to buy huge quantities of reagents that I don't fully use, it's much more efficient. In terms of using it for teaching, I had access to a large set of instruments; the only other thing was learning how to use the interface. So, to teach students, and frankly also myself, how to use this interface, I went with simple nucleic acid, DNA-based manipulations: how do you make a stock solution, how do you measure, how do you dispense. I figured that if students learn to do that, they can go on to do any other kind of experiment. These are the courses I set up, starting from fall 2020. I don't want to go into too much detail about the specifics, and I'm happy to do that in the Q&A, but the idea was to do simple things: run simple DNA-based experiments that any student, even without a strong background in chemistry, should be able to understand, and then run the experiments and analyze the data.
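The loop just described, setting up the experiment from a computer, letting the lab run it, then pulling the data back from the cloud, can be sketched as a toy client. Everything here is a made-up stand-in: the `FakeCloudLab` class, its `submit`/`status`/`download` methods, and the protocol format are hypothetical and do not correspond to the real ECL interface, which is driven from Mathematica notebooks.

```python
import time

# A made-up stand-in for a remote cloud-lab service; none of these
# names correspond to the real ECL interface.
class FakeCloudLab:
    def submit(self, protocol: dict) -> str:
        """Accept a protocol script and return a tracking ID."""
        self._ticks = 0
        return "protocol-000123"

    def status(self, protocol_id: str) -> str:
        """Simulate a run that completes after a few polls."""
        self._ticks += 1
        return "completed" if self._ticks >= 3 else "running"

    def download(self, protocol_id: str) -> list:
        """Return the data the instruments stored in the cloud."""
        return [0.01, 0.52, 0.98, 1.40]

lab = FakeCloudLab()

# 1. Set up the experiment from your computer as a script.
pid = lab.submit({"experiment": "AbsorbanceSpectroscopy",
                  "samples": ["Sample-000001"]})

# 2. The remote lab runs it; we poll until it finishes.
while lab.status(pid) != "completed":
    time.sleep(0.1)  # stand-in for a realistic polling interval

# 3. Analyze the data pulled back from the cloud.
curve = lab.download(pid)
print("max absorbance:", max(curve))
```

The design point is that the researcher's machine only ever exchanges scripts and data with the service; all physical manipulation happens at the remote facility.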
Since I taught that in fall 2020 and 2021, with the help of the Eberly Center and Zach and Laura Portmire, we were able to convert some of the assignments and course content into Open Learning Initiative (OLI) modules. The reason for doing this, I figured, was that if I wanted to scale this class, the initial training would need to be essentially instructor-less. That was the motivation behind converting a lot of the assignments to this open learning format, where students access the cloud lab, do the assignments and the setup, and do some preliminary learning about how the cloud lab works and how to set up experiments. The second part is then an instructor-led course where they do specific experiments. This is what the command center looks like when you log in; there's all kinds of documentation you can look up and use. And this is what a notebook looks like. One thing that was somewhat unusual, and that goes to open science, is that the notebooks are all shared. You can set up teams in the notebooks so that everybody has the data. Initially I wasn't used to that, so it was a bit of a learning experience for me, because I had to set up separate notebooks for each student, at least while they were doing assignments. Later on we did group projects, and this was really convenient. On the left side are all the different notebooks; I've just hidden the student names. This is what a notebook looks like, and this is how you set up an experiment: you can see the script, inspect the data, and everybody who has access to the notebook has access to all of that. So later we moved on to doing shared experiments. I don't want to go into too much detail on how we do the experiments.
But I'll just quickly show you how the students access this: from the documentation, they can look up what instruments to use. And this is just to show you how reproducible the data is. These are data curves; each curve is from a different student, and it's extremely reproducible. This is another set of data: again, all the different students have reproducible data, and I can put them all together. These are also not always perfect, so these are not simulations; this is real data, and you can see when there's a glitch. For example, on the bottom right you can see some data that are glitchy, so you can exclude that from the analysis. I'll stop here by showing you this. I was really happy with this one. I said, hey, we have some stock left over from 2020, can we use that in fall 2022? And a student in 2022 was able to take the sample that was there from 2020 and run it, setting up the experiment in 10 minutes, and got essentially the same data. So for collaborations and other things, it would be really great to have this kind of platform. This should be reproducible; you expect it to be reproducible, and that's why we use DNA and redo experiments, and it should be robust. But to actually see it, when the experiment is run and the data is the same, with stuff that's been just sitting in the fridge for two years, that's pretty fantastic. So with that I'll stop. There are other things you can see in the cloud lab, like how long the protocol took to run, which is down here; these are things you typically don't get to see. So I'll stop with that, and I can take questions in the Q&A part. Thank you.

Hey, thank you so much, Das. We only have time for one quick question here in the Q&A. Siobhan McCarthy is asking: what software is being used for the notebooks?
Yeah, the question is: what software is being used for the notebooks? We're using the Emerald Cloud Lab software, but that basically runs on Mathematica; it's Mathematica-based. So the programming we do to set things up is Mathematica, and the Emerald Cloud Lab has its own software that we log in to, to access the notebooks and the cloud lab.

Okay, great. Thank you very much. We're going to go ahead and shift to our next speaker, so if anyone has further questions for Das, you can save those for the panel later. Our next speaker is Saeed Choudhury. He's the Associate Dean for Digital Infrastructure and Director of the Open Source Programs Office here at the Carnegie Mellon University Libraries. Feel free to share your slides, Saeed.

Okay, so hopefully you can hear me and hopefully you can see my slides. Is that the case? Yes. Great. All right. Well, thanks so much. It's a pleasure to be here today. As Chas mentioned, I'm the Director of the Open Source Programs Office, along with Tom Hughes, and the Associate Dean for Digital Infrastructure here at Carnegie Mellon. I'm going to talk about a rolling wall of openness, and I'll explain what that means, obviously. But I do need to credit that term to Josh Greenberg, a program officer at the Alfred P. Sloan Foundation. He came up with it, or mentioned it to me, I should say, when I was talking about what it means to support open science as science becomes more automated and, in some sense, more complex, just as we heard from Professor Das. In terms of policy, Melanie mentioned the policy implications earlier, and there are two large cases worth mentioning. One is an announcement from the federal government, the White House in particular, that 2023 is the Year of Open Science. This builds on something NASA had started, but several federal agencies have since joined to support the initiative.
I did speak with a federal funding officer fairly recently and said, well, it's October, I guess November now; what does that mean? Is the year over? And he said, no, no, it's not just one year; this is the launch of a multi-year, long-term type of program. The other is a memo that came out last year from the Office of Science and Technology Policy in the White House, which built on a previous memo about public access. It basically asserts that the outputs of any federally funded research need to be made publicly available. There are all sorts of caveats and conditions, but fundamentally it now applies to all federal funding agencies, it removes embargoes on publications, and it mentions data very specifically. I've often thought about these types of policies and memoranda with this diagram in mind, which shows three research objects going from left to right: articles on the left, data in the middle, and software on the right. In some sense, the policies coming out of the federal government are moving left to right in this diagram. They started with papers, are now thinking more about data, and are starting to imagine software. But I will say that software has not been mentioned explicitly, at least in part because there's a concern about the capacity within universities to address any conditions or provisions that might come out. The Open Source Programs Office is, in some sense, a response to that. We see ourselves as a community convener, a clearinghouse of sorts, helping to better discover, manage, curate, and share open source software. The arrows coming off the right side of that diagram are the kinds of impact that may be possible if you have well-managed and curated software. The squiggly lines between the data and the software are my attempt to convey that the relationship between data and software is more complex.
Typically, the relationship between data and papers is citation. But if you think about data and software, the boundaries are blurred; it's a little hard to distinguish them. This is partially what I'm getting at with this rolling wall of openness. As we start to think more about data and software, about the new kinds of science being conducted, the new kinds of facilities being built, and the infrastructure being supported, it's going to be harder to differentiate between data and software, and I think it's going to be harder to be binary about what it means to be open. While there are some nuances, you can basically ask whether an article is open or not: do I have to have a subscription or go through a paywall to get to it? I think it's a lot more complex and nuanced with data. And while much of software is released as open source, there are different ways you can use it, different things you can do with the data, and so on. We just heard an excellent presentation about Cloud Lab; it's a very exciting initiative, and I don't need to go over everything you just heard. The one thing I will mention is that there's also interest in looking at Cloud Lab as a model for a national network of these so-called remote-control laboratories, or self-driving laboratories. There was a workshop just about a week ago at Carnegie Mellon that brought together several members of the research community, people from the National Science Foundation, and leadership at Carnegie Mellon to discuss exactly that. So this is a trend not only for Carnegie Mellon but, broadly speaking, for the life sciences, materials science, chemical engineering, mechanical engineering, and several other disciplines. As you heard, there are many kinds of artifacts and outputs being produced in Cloud Lab. My background is in systems engineering.
So when I heard this for the first time, it was an ah-ha moment; of course, it's obvious once you hear it. In many of the disciplines that use Cloud Lab, the environmental conditions do matter: what the temperature or the humidity may be actually affects how your experiment is run. So that kind of metadata, as Professor Das described it, is really critical from a reproducibility perspective. When you say we're going to make the data open, it's now important to start asking: should the metadata be open? And those notebooks, are they data? Are they software? Are they something in between, like Jupyter notebooks? So we now have all these different kinds of software artifacts; that's probably a better term. But then you also have machine learning models, algorithms, AI models, and hardware. There's a lot of equipment in this facility, and there's a very robust and healthy open science hardware community that's now asking: can the instrumentation be more open? What does that mean? Does the instrumentation have to be open in order for the work to be reproducible? Or do we influence the instrumentation makers and ask them to think about standards? These are complex questions that don't come up when you're thinking just about whether an article is open or not. It's also really important to note that, as you heard, the original Cloud Lab facility was built for startups, private-sector companies in the Bay Area, who deliberately did not want to share with each other, for obvious reasons. Moving to a university setting is very different, and there's great interest in sharing not only the research outputs but the teaching methods and so on that you heard about. So we're taking what is inherently a closed system, through no fault of Emerald Cloud Lab or any design decision per se, and opening it up.
And that has some very interesting implications for how that works. So I applaud ECL: they've open-sourced their software, their programming language for ECL. That's a great step. But we're working with them to better understand the implications of opening the system up. And mind you, it's important to keep in mind things like commercial potential, right? One thing that came up in these conversations was that there may be pharmaceutical companies interested in the protocols being developed. Do we open them up immediately? Do we open them up after some point? I don't have an answer per se, but these are questions that I think merit further exploration. I spent a good part of my career working with astronomers, who have very open data. It has no commercial value and no PII issues; it's sort of ideal in terms of data sharing. But even they have embargoes for research purposes: when a team of astronomers creates a new data set, they get first rights to analyze it before they even share it with the rest of the community. So it's not that the precedent doesn't exist, but it is a lot more complicated and complex in this particular environment. This is a screenshot from a National Academies report that looked at new automated research workflows and their impact, and, not surprisingly, it talks a lot about machine learning and artificial intelligence, which you also heard about from Professor Das. This is a very simplistic way of thinking about it, but in some sense machine learning is helping researchers analyze the results of the research, coming up with interesting ways of exploring where else to look for results, whereas AI might move us in the direction of actually helping design experiments that you don't necessarily think of in traditional ways of doing science. But this raises all sorts of other questions about openness.
When you're talking about large language models, for example, I'm hearing a great deal that everything in AI needs to be open. Well, what does that mean? Imagine the data, for example. Facebook's large language model, probably to no one's surprise, uses the data that Facebook has been gathering for years through their platform. But some of that comes from kids, people under 18, right? And some of it, while you shouldn't share medical information on Facebook, people do. So should those be open? Probably not. Again, it's not a clear cut question whether data should be open or not. Is it enough if the data are open? Is it enough if the code is open source? Is it enough if the weights are open? So this is that sense of a rolling wall of openness, where we don't necessarily go in and say everything is open or it isn't. We have to explore these in much deeper, more nuanced ways, understanding the interconnection between all the research outputs and artifacts. And we may even question, in some sense, the licenses that are typically applied to open source software that come out of the Open Source Initiative. I know Mike Blackhurst is going to speak next about the Open Energy Outlook. I've been talking with him about that project and the code. Many of the recommendations that come out of university tech transfer offices for open source licenses, understandably, are about patents and commercialization. But what if you want to build a community? What if, as I believe is the case with OEO, there's interest from the government? There may be interest from companies. There may be interest from other universities. Those are different communities. How do licenses impact all of those types of interactions? So I know there are things that remain to be addressed, even for something like open access to articles.
And I'm not trying to sound negative, but I do want to put a challenge out there: I think we have to think more systematically and comprehensively about all the nuances of a continuum of open, rather than a binary notion of what is open and what is not. So if we have time for a question right now, I'd be happy to take it, but I'm certainly looking forward to the panel discussion as well. Great. Thank you, Said. Yeah, we have time for a couple of questions. There are none in the Q&A right now, but if anyone has a quick burning question they'd like to put in, go ahead. Said, I'll go ahead and just ask you one quick question. So one thing I think about is, when it comes to opening data, as you said, there are a lot of concerns about privacy, and there are aspects that shouldn't be shared and some may not want to share. And a lot of people may not understand code or be able to read code. Do you anticipate similar concerns here? Aside from monetary things, would there be concerns about caution in opening this up? Yeah, I'm glad you asked that question. I did think about mentioning this during the talk. There are concerns, obviously, about unintended consequences or maybe even malicious uses, right? Think about a facility like Cloud Lab, where there are all sorts of materials. Or if you think about AI: there's a company called Collaborations Pharmaceuticals that published a paper about how easy it was to generate toxins using their AI models. So transparency is a really important part of reproducibility, right? But if transparency exposes potentially negative uses or negative outcomes, how do we handle that? I don't think the answer is to say no, we don't ever talk about it. But I think the way we talk about it has to be done in a very thoughtful and deliberate way. And it doesn't even have to be malicious.
You don't want people trying experiments at home based on things they're trying in the Cloud Lab. We have to make sure that awareness, education, and capacity are raised so that people understand the implications of what they're doing with these facilities and these tools. Great. Yeah, thank you so much. We're going to go ahead and move forward. So thank you again, Said. Our last speaker for this session is Michael Blackhurst, the Executive Director of the Open Energy Outlook Initiative in the Department of Engineering and Public Policy here at Carnegie Mellon University. Mike, go ahead and share your slides whenever you're ready. Sure. Are you able to see the slides and hear me? We can hear you, but we cannot see your slides. Yeah, let me... I didn't click that through, sorry. This is it here. How are we now? All good. Okay. Well, thank you so much for having the Open Energy Outlook group here. It was really interesting to see the first two talks. I really appreciate the nod from Said and the challenge that he set up with respect to making sure that open software has similar value to IP. That's definitely a challenge. It's not highlighted here, but I'd love to talk more about it if the group is interested. I'm going to talk a little bit about what OEO is, the model that we house, the software that Said mentioned, and why open science is important in this space. Some of it's going to be kind of traditional: here's a method and here's some example results. Some of it's going to focus on the open science aspect, but it's all intended to be at a layperson level so that people can understand what's going on. This is probably not new to you, but decarbonizing the US economy is going to be an unprecedented challenge. We're looking at possibly having to spend 4% of GDP per year through 2050 to do that.
And it will require the coordination of a lot of different actors with different motivations, some of which are private sector actors, some of which are policymakers, and some of which are everyday folks. And there are millions of different decisions that need to be nudged or incentivized to do this at scale. So why is open science so important for decarbonization? Well, like I said on the previous slide, we have to coordinate a lot of different decisions, and they really do have to be coordinated to do this well. We have to democratize the understanding of opportunities and challenges. We really need to build trust; that's an important part of this. Transparency, replicability, and community are required given the costs and risks. Decarbonization touches all disciplines. I'm an engineer, but I really appreciate social science and how important it is, and I'd like to see it more utilized in our space. And we need to sustain a commitment to the spirit of open science to ensure that we can meet our decarbonization goals. So, in the briefest terms, the Open Energy Outlook initiative examines US energy futures to inform energy and climate policy efforts by applying the gold standards of policy-focused modeling, maximizing transparency, and building a user community. Sorry, I had to move some windows around to be able to read my own slides. And I highlight this part in red because I think it's the most aligned with the spirit of open science. If you follow our work, you'll see a couple of different acronyms thrown around, and I get a lot of questions about how they're different. We use a method called the Tools for Energy Model Optimization and Analysis, which has an acronym pronounced "Temoa". This is a general method; it's essentially algebra. And the Open Energy Outlook refers to both a US instance of Temoa
that is, all the data we've collected to build the model, to make the algebra run, if you will, and the broader initiative. I know this is somewhat complex for those of you who aren't in the energy space, but you can simply think of this as a flow diagram from energy sources. These are all the choices we have in how we power our economy, on the very left hand side. And we convert those sources to energy services: we make electricity, we refine petroleum. And then we buy technologies that provide a service to us, like lighting or space cooling. Temoa reflects the network of how we move energy from sources to services. The heart of it is relatively simple: if you tell me the prices and the operating characteristics of the technologies we could choose, I will tell you the least-cost set of those technologies to meet your demands. So all these things with question marks on here, what sources should we use, what conversion technologies should we use, what end-use technologies like appliances should we use, those are things that Temoa will find for you. You just have to tell it what the prices of those things are and how much energy you need. If this is still a little abstract, you can think about designing a house using Temoa, where you tell Temoa: I need so much energy for lighting, and here are all the choices I have; I need so much energy for heating, and here are all the choices I have; and here are all the prices for those technologies and fuels. And it will tell you what the least-cost set of technologies is. I'll skip this for time, but this is a meta description of the publicly available information we collect to outfit Temoa, to make it representative of the US energy system. It gives you some indication of how we divide up the model in time and space, and, probably key to the open science community, it's written in Python.
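The house example above can be sketched in a few lines of Python, the language the model itself is written in. This is only a toy illustration with hypothetical technology names, prices, and demands; the real model solves a full optimization over a network of sources, conversions, and end uses rather than picking per service.

```python
def least_cost_choices(demands, options):
    """For each energy service, pick the technology with the lowest
    total cost (unit cost * annual demand). A toy stand-in for the
    least-cost selection described in the talk."""
    plan = {}
    for service, demand in demands.items():
        tech, unit_cost = min(options[service].items(), key=lambda kv: kv[1])
        plan[service] = {"technology": tech, "cost": unit_cost * demand}
    return plan

# Hypothetical house: annual service demands (kWh) and unit costs ($/kWh)
demands = {"lighting": 500, "heating": 8000}
options = {
    "lighting": {"LED": 0.12, "incandescent": 0.60},
    "heating": {"heat_pump": 0.04, "gas_furnace": 0.05},
}

plan = least_cost_choices(demands, options)
# Picks the LED and the heat pump, the cheapest way to meet each demand.
```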
Other things that Temoa does well: you can add all sorts of user-specified constraints. You could imagine that we are concerned about emissions as well. We can schedule custom emission reductions by year. We can introduce a carbon price and figure out how the least-cost set of technologies and sources changes in reaction to those different constraints. We can do policy analysis, like introducing a subsidy for a technology. I'll talk about how the Inflation Reduction Act introduces subsidies and how we've modeled that to estimate emissions in the subsequent slides. And we can do analysis with respect to uncertainty and variability, which is really essential in the climate and energy space. There are enormous amounts of uncertainty and variability, and being able to model that well is essential. Here are some example questions that the model can answer. What technology pathways are essential to decarbonization? What are the cost and emission implications of different policy approaches? I'll give you some results in the subsequent slides that speak to these questions. Which decarbonization policies are robust to uncertainty? How much flexibility do we have in trying to meet our policy objectives? These are just examples of how we can use the model. And some recent results: for those of you who aren't used to looking at these acronyms, I wouldn't worry about the various colors on the slide, but you can think of the top of them as being the total US emissions.
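A user-specified constraint like the carbon price mentioned above can be sketched the same way: the effective cost of each option becomes its direct cost plus the priced emissions. The technology names and numbers here are hypothetical, and the real model handles this inside its optimization rather than service by service.

```python
def least_cost_with_carbon_price(demands, options, carbon_price):
    """Pick, per service, the technology minimizing
    direct cost + carbon_price * emissions intensity ($/kg CO2 * kg/kWh)."""
    plan = {}
    for service in demands:
        tech = min(options[service],
                   key=lambda t: options[service][t][0]
                                 + carbon_price * options[service][t][1])
        plan[service] = tech
    return plan

# Per technology: (direct cost $/kWh, emissions kg CO2/kWh) -- hypothetical
options = {"heating": {"gas_furnace": (0.04, 0.20), "heat_pump": (0.05, 0.05)}}
demands = {"heating": 8000}

cheap_today = least_cost_with_carbon_price(demands, options, 0.0)
with_price = least_cost_with_carbon_price(demands, options, 0.10)  # $100/t CO2
# With no carbon price the gas furnace wins on cost; at $100/t the heat pump does.
```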
Some of these particular labels may be intuitive for those of you who have worked in the energy space, but what we did was estimate how the IRA, the Inflation Reduction Act, might impact emissions. And we see that during the IRA's active subsidy period, which stops around 2033, you can see the dashed line on the x axis there, we estimate about a 30% reduction in emissions. As the IRA subsidies expire, we see emissions rise a little bit again and then level off around 2040. A lot of the reductions that we get from the IRA come from the electric power sector, which is shown there in green, and the transportation sector, which is shown there in purple. Intuitively, those come from more renewables and from electrifying transportation to some degree, and a little more biomass used for powering our transportation system. But as you can see in these charts, even if you don't worry about the colors, we still have a bunch of emissions at the end of 2050. So we can use Temoa to say: what else do we need to reduce emissions to zero? And so we impose a carbon constraint in the model and we ask what technologies and sources are least cost in getting to zero. So we see more emissions being squeezed out of the transportation and electric power sectors. We see things like biomass to hydrogen, that means using bioenergy, biomass, for making hydrogen. And we see this technology called DAC, which is direct air capture, that's sucking carbon out of the atmosphere. So we see a lot more use of what we call negative carbon technologies, and we get to zero emissions by 2050. I just pulled these next charts as an example of how we can do some uncertainty and variability modeling. For the ranges in these charts, if you're not familiar with box plots, you might just ignore me for a minute or two.
The ranges reflect different portfolios that are really near least cost. So one of the neat things about our model is that we can tell you the least-cost set of things that will meet your goals, but we can also tell you a whole bunch of different sets that are very close to the least cost. So it avoids this lock-in effect that sometimes happens in these models. Policymakers might find certain technologies intractable for a whole host of reasons: maybe there's a jurisdictional issue, maybe there's an equity issue, a political issue. So we're able to say: what are some other pathways to meeting our goals that don't require a singular set of technologies? That's what the ranges shown here are. The key takeaway is that if you look at the upper left hand box, we're going to need a lot more electric power. You might have heard of electrification as being important in meeting our emission goals; we're going to need a lot more electric power for all sorts of things. And that's going to require that we coordinate it with increases in renewable supplies, as you can see in the other charts, and some battery storage as well. If we don't coordinate those things, then we're actually going to increase emissions as people try to electrify their end uses. And then, like I said on the previous slide, we are able to identify different pathways to achieving the same goals at essentially the same cost. And so we group those pathways into different scenarios, which are shown here in the various columns. We have a low hydrogen scenario, and that has a lot of carbon being captured from coal plants. You can see that in the dark gray. I wish I could use my mouse here, but the low hydrogen scenario requires a lot of sequestration of emissions from continuing to use coal.
Whereas if you look at the high electricity scenario, you see a lot more renewables and a lot less coal use, and therefore a lot less sequestration of emissions from coal. So the point of this is that one of the neat things about our model is we can avoid a lock-in effect, and we can identify a whole host of different strategies that have a similar effect on emissions. We have a whole host of online resources that invite potential users. These are intended to create a community. I will admit that the learning curve is still a little steep, and we're working on that; I'm happy to talk more about it. We published an annual Open Energy Outlook last year and we're working on a new one. You can find that at this link here. This picture on the left is the cover page, and I picked out a few charts from it that are similar to what we've already shown. And one thing that I really want to emphasize as a real challenge, but I think also a real opportunity, is that this model will identify a technology as an opportunity, but it's only a solution if we can coordinate with other actors, which is another reason we want to be open and transparent and accessible. We need lots of other researchers and stakeholders to engage with us in this community to be able to achieve our goals. And so we see our model as part of a bigger community and want to help grow that community. That's part of the spirit of why we're open. So here are some things we're doing to be more open. We're reactivating our advisory board, which will include about 20 to 40 members. We're looking to form a corporate consortium to inform model development and applications. We are streamlining and updating the model code; it hasn't been updated for some time, so that hopefully will make it a little bit easier for new users to use the model.
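The near-least-cost ranges and scenario grouping described above are in the spirit of what energy modelers often call modeling to generate alternatives. A toy sketch, with hypothetical costs, is to enumerate every portfolio and keep those whose total cost is within some slack of the optimum; the real model does this inside a large optimization rather than by brute force.

```python
from itertools import product

def near_optimal_portfolios(demands, options, slack=0.10):
    """Return all portfolios whose total cost is within `slack`
    (as a fraction) of the least-cost portfolio, avoiding lock-in
    to one 'optimal' answer."""
    services = list(demands)
    scored = []
    for combo in product(*(options[s].items() for s in services)):
        total = sum(cost * demands[s] for s, (_, cost) in zip(services, combo))
        portfolio = {s: tech for s, (tech, _) in zip(services, combo)}
        scored.append((portfolio, total))
    best = min(total for _, total in scored)
    return [p for p, total in scored if total <= best * (1 + slack)]

# Hypothetical unit costs ($/kWh): CFL lands within 10% of LED, incandescent does not
demands = {"lighting": 500}
options = {"lighting": {"LED": 0.12, "CFL": 0.13, "incandescent": 0.60}}
alternatives = near_optimal_portfolios(demands, options, slack=0.10)
# Both the LED and CFL portfolios survive as near-least-cost alternatives.
```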
We're looking to publish our results in a more user-friendly format, so that even if you don't know how to run the model, you can still make use of our results. As I mentioned, we're publishing a second Open Energy Outlook. We're imminently starting an open energy blog, which I'm excited about. And we're always looking for opportunities to collaborate. So if you know anybody who wants to collaborate with us along these lines, please reach out to me. As you probably know, a real challenge that we face is that few people want to support maintaining an analytical resource. We always do some innovation alongside maintenance, which can help at times, but it's hard to find people who are enthusiastic about maintaining a resource. In the spirit of openness, I'd love to talk to you more about how we could overcome that challenge. And finally, I just want to make sure I thank the Sloan Foundation for the opportunity it created for me and for our model. The Scott Institute is a big supporter, houses our website, if you will, and has been a really excellent collaborator. And here are the team members who help make the initiative successful and help make me successful. Great. Thank you so much, Mike. Again, if anyone has any quick questions directly for Mike, please go ahead and put them in the Q&A. For now, I'll go ahead and just ask one quick question. So, you know, we're all aware of climate change deniers, but there's that kind of middle section of people who don't think about these technical issues as much, but who do have really good questions. One question I sometimes get in regards to electric cars is: why should I invest in driving an electric vehicle when electricity is still being produced by fossil fuels, producing emissions?
So with the resources that the Open Energy Outlook is creating, Temoa, do you envision this can be some type of tool that anyone can refer someone to, point someone to, and say: here's a way that we can answer your question with models and data? Well, when I would teach concepts like this, I would try to help students understand the difference between a market taker and a market maker, and the influence of scale is really important here. So I would say that if you're an individual person and the only change that is happening is your individual car, it's not going to have any impact on supply. And so I wouldn't necessarily make that a center point of my decision, is probably what I would tell the individual person. But our model certainly could be used to help understand, as you scale up: if lots of people choose electric vehicles and we don't decarbonize the grid, what would the emissions look like? Or, if we decarbonize the grid and don't commensurately switch to more electric end uses, we'll have a bunch of excess renewables with nothing to do with them. Right? So those are the kinds of things that our model can do. But the feedback I would give to your individual friend or colleague is that an individual decision is not going to be a market making decision. Does that make sense? Probably not the answer they were hoping for, I understand, but the model isn't for individual decision making; it's at a different scale. That's great. Yeah, thank you. It's really great to hear that with this model you can assess these different variables, different outcomes, different technologies. Well, thank you again, Mike. So now we're going to go ahead and invite all of our speakers back on the screen, and they can answer questions together as a panel. And so again, everyone, please feel free to post questions in the Q&A and I'll be moderating.
Well, I'll have a question for Said if nobody else has questions, but I'll defer to others. If nobody has questions, I'm happy to ask one. Yeah, feel free. Well, I keyed on your anecdote that described Facebook having sensitive medical information as a potential challenge to transparency. But I also thought, on one hand, that Facebook and others like Facebook could use that as an excuse or a constraint to not be transparent, and that could further incentivize them to collect a bunch of protected information. So that's part of the challenge. How do you reflect upon that? That's a really good question, Mike. I think there's an understandable concern, particularly with AI, that the response from a lot of big tech players, like Facebook or Meta or Google and Microsoft, is along the lines you just described: we have these data we shouldn't share, and it's really too complicated for us to figure out how to do it. You could say, why don't you just strip out anyone under 18, why don't you strip out anyone talking about medical data, and they say: we have petabytes of data, how do you expect us to do this? So I think, and I'll issue this as a call, the challenge to our community is that we have to come up with ways to hold them accountable. You may be aware of the big announcement that came out of the White House this week about AI. I am not an expert in how these things are written and how they should be interpreted, but one of my takeaways was that there's nothing in there that a company has to do. There are a lot of recommendations, a lot of suggestions, a lot of please do this. These companies might look at that and say there's nothing in there that's actionable for us, right? So as a community, we need to come up with mechanisms to do that.
So in the case you just described, for example, one mechanism could be: even if you can't share the data, you need to produce some sort of data profile, right? In the software world we call them software bills of materials. So we can just understand what's in the data, how it's composed, what the profiles are. Alex London, a professor at CMU, gave a talk at the Libraries' summer retreat where he showed an analysis of classifying faces using AI models, and it was trained on basically white males. And when that was discovered, Microsoft went back and added data to make it more diverse. So just that little bit of insight, that your data was composed of the following types of images, allows some kind of response, some kind of remediation. So are there data curation profiles that we can produce to say: you need to at least describe your data in this way? If you can't share your model weights, you at least need to tell us what you were weighting. This is what I'm getting at: if you just show up and say, make everything open, then legitimately even a big tech company might say we simply can't do that. And if that's the end of the conversation, then we haven't made any progress. So I think we need to think more about this nuanced, rolling wave of how we can get insights into what's happening, even if we can't see the actual content. Awesome. Thank you. And I have a question for both of you if no one else in the audience does. Well, it looks like we just got one question. Okay, so this is from Cheryl: how do we balance openness with the current academic model of journal publication credit for promotion and tenure? I mean, I'll start; this is not something to answer simply, so I really will just start. It's a complex web that you're describing with that question.
And I think with articles, maybe this is somewhat controversial, we ended up with these sort of unintended, maybe even somewhat perverse, ways in which people are assessed in terms of the impact they're having: the impact factor, and so on. Reengineering that is very difficult. I'm not trying to dodge that, but what I am going to say is that with data and software we have new opportunities to get it right, to do it better. You may be aware of something that came out from GitHub, the public GitHub Innovation Graph, which aggregates GitHub data at a very large scale, at the national level, and looks at things like how many commits are made to repositories. It's a good tool, but I immediately contacted people at GitHub and said: for goodness sake, please don't start counting the number of commits to a repository as a metric. That'll just be a disaster. People will go fix documentation, right? They'll remove spaces from documentation just to make commits. So I think the lessons that we have learned on the article side we need to apply to metrics around data and software. And I will say the federal funders, and the Year of Open Science and so on, are giving us an entree and a pathway to think about better metrics for data and software. While we certainly need to keep working on how people assess articles, Melanie mentioned HELIOS, which is the Higher Education Leadership Initiative for Open Scholarship. It's a group looking at reappointment, promotion, and tenure practices, and it's trying to introduce open science into updating or modernizing some of the RPT processes. So I think that's a group we really want to try and engage with.
Yeah, I have been in various academic positions, from tenure track streams to administrative positions. On one hand, a rubric that might work is to get all your publications out and then make whatever comes from those publications available openly. But making it open requires a lot of time and energy that then distracts from further promotional activities. So I see people who really care about the visible public impact of their work making that a priority, and accepting the opportunity cost that comes at the expense of investing in proposals and publications. I think it's really nice to see some sponsors focus on open science and to see standards for accessible research products come out of what they fund. Those are really helpful. I think that will reprioritize things: if sponsors say we really care about open science, then support will flow toward people who want to provide open products, and that will help with their promotion. But I also think, and I'm definitely not an expert here, I wonder whether the study of what makes things open, and successfully open, could become a whole other branch of science that could really help align traditional academic tenure evaluation with keeping things open. It's really hard to figure out how effective a certain decision might be in the open science space. We're going to start a blog, and we have gotten mixed advice about that. I'm excited about it, but that's not something that is necessarily going to help somebody with promotion. So getting a better understanding of what is successful in the context of open, using the traditional metrics of science, could be a good sea change. I don't know if the others here have expertise in that they could comment on, but I'd be interested in learning more about that as an opportunity to merge those two and overcome that constraint when it comes to tenure.
Any other thoughts about this topic from the panel? Yeah, I'll just quickly say that I think people recognize that it's not just publications that are part of tenure and promotion; that is one part of it, but I think people recognize the issues with that. Great. Yeah, thank you all. And it looks like we're getting some good discussion on this question in the chat. Again, if anyone else has other questions for the panel, feel free to put them in there. I think we may have a question from Melody. Yes, hi. And for anyone who's interested in that topic, we will have a panel later this afternoon on promotion and tenure and incentivization and whatnot. But Mike, you noted that there is kind of a steep learning curve for using some of these resources, and I've also heard that in the context of the Cloud Lab. And it reminded me of a comment and some discussion at a round table we hosted in September around data sharing. One of the biology professors here at Carnegie Mellon, Joel McManus, suggested that we might need to rethink the way that we traditionally do graduate education in biology, where it's very focused on reading and discussing papers, but the students don't necessarily access the underlying data or think about how to reuse that data. So maybe the way we traditionally do that education hasn't really kept up with the advances in open science. And so I'm curious whether you think there are broad implications for how we think about graduate and undergraduate education to prepare this next generation of researchers for participating in open science, and the technical skills that they might need to do that. Well, yeah, I hope my slides were indicative of this, but I have focused on trying to be a better science communicator with age, and I've had some formal training in science communication. And I agree 100% that, if we could make a little space in, say, ABET accreditation,
we could possibly offer some additional electives that complement the opportunity for science communication, to help people become better thought leaders as they grow throughout their careers. And then, I think there's been an appropriate and healthy focus on technology and society at large that I think is also aligned with this. What should engineers and scientists know about ethics and equity? How can we bring in the quote unquote normative dimensions of technology and make sure that we're educating long term thought leaders? I think science communication is one of those important elements to preparing long term thought leaders, especially in the technology space. Yeah, I'll just add to that in the context of the Cloud Lab as well. We do have to evaluate, I mean, even without the Cloud Lab, evaluate or reevaluate how we do education at a different level, just to keep up with technology but also with advances in science. For example, with the Cloud Lab, the question is: how exactly are you teaching students to do experiments, and what's important in an experiment itself, right? Is it the ability to do something well ten times, or is it the ability to design the experiment? So if you're doing Cloud Lab experiments, does that count, and this goes toward accreditation as well, does that count as a lab course? Right now, the courses I've taught are not classified as lab courses; it's a computer course. But these are things that we'll have to figure out as well. I just want to add and give appropriate credit here that this comes from prior work I've done with Chris Borgman and Carole Palmer in the information science community. Both of them, and I'm paraphrasing of course, in essence said to me that one of the important things about open science is that it makes the implicit explicit.
So a big part of graduate training, or even advanced undergraduate, or any undergraduate training I should say, is conveying tacit knowledge. Right? I mean, that's ultimately what happens when you go into a lab or a research group: you're trying to learn the tacit knowledge that makes that group work, and the science from that group work. And things like the cloud lab and other new types of scientific systems are getting better at making that implicit knowledge explicit. And the more open it is, the more anyone can come along and learn from your lab and see how research is done, how an experiment is done, what methods, what data, and so on. So I do think openness, as a way of lowering that threshold for bringing in the next generation, is a really important driver and reason why open science is important. Thank you. Okay, it's still a little bit quiet in the Q&A, but I'll go ahead and try to pick everyone's brains for the time being. So, one thing that I've been thinking about lately is the future: anticipating technology and predicting the advances we're going to make. We have a pretty good idea about some things, with AI and so on, but as we're all aware, right now we're in a virtual webinar on Zoom, and this is something that we wouldn't have even thought about as recently as 2019. So I'm just curious if anyone has any thoughts about whether there could be any kind of disruptive technologies or events, and do you think they might set us back in our progress on open science or any other initiatives that we're making? And I'll apologize in advance for a very, very deep question. I think you should be applauded for asking deep questions and trying to articulate what you're thinking. That's a good thing. Well, have you seen Black Mirror?
Have you watched the show? That might be a good place to start if you haven't seen it. There's a lot of deep, deep thinking that comes alongside it, but that would be the easiest way for me to sum up some type of response. So, Jazz, I'm not sure if this is the direction you were thinking that question might go, but one other thing I've heard in the cloud lab context, and I think it applies even with the Open Energy Outlook, is not losing touch with the physical. Even if a grad student can come along and just go into the cloud lab and learn how to perform experiments just by looking at data and notebooks and models and so on, there's value in handling materials, right? So way back in the Stone Age, when I was an undergraduate in a lab, we took these lacrosse balls and put them in liquid nitrogen, and then, you know, they would shatter. And then as you held them in your hand, the heat of your hands would start to warm them up and you could feel them get soft again. Is that something you can emulate easily in a virtual environment? So there's the importance of what's actually happening physically in terms of what you're modeling and experimenting on. And if you think about decarbonization, it's like, that's a great model, but if someone experiences poor air quality, that's a very different way of thinking about it than looking at models. So I'm hoping disruptive technologies can actually move us in the direction of making sure we don't lose that physical experience, which I think is an important component of open science. Yeah, just to follow on briefly with a similar thought: I think it's really important to remember the personal narrative that comes alongside whatever technology you study, in the context of decarbonization.
I grew up in Appalachia, and it's a very challenging story for that community. It has probably been to our shared detriment, and even their detriment, to try to force major societal change with disproportionate costs and benefits upon a community. So I think part of the opportunity with being transparent is that being transparent also means being inclusive and building a community. I think that's a good thing. I think what I heard in your question, to some degree, is whether the push to be super transparent is going to have unintended consequences. And I don't think, in this particular case, that's true. Even more broadly, we have major asymmetries in who has access to information, such that being more transparent and inclusive seems to me intuitively unlikely to have, you know, drastic negatives. But a really nice additional pivot in how we do and fund research would be to make sure that we're telling those personal narratives alongside the innovation, so that we can think about not just the technological change that engineers create but how that plays out in a broader societal context. Yeah, I'll just add to that. I have a few different thoughts. One is about losing the physical touch and things like that. So, you know, we are trying to do things where students do work in the lab and then in the cloud lab. But there are also different levels, because at a certain point, you're not going to be able to do something in the cloud lab unless you have the domain knowledge to do it. On the other hand, this summer and now, I'm working with a student, well, not a student, a former student. He graduated with his PhD from the chemistry department a few years ago. He's a brilliant computational chemist, but he has muscular dystrophy. He's always lived in a wheelchair, but now he can actually run experiments.
That would not have been possible before, but because of the way he can interact through the computer, he can actually do things, and he doesn't necessarily need that physical access. So, having that physical experience, I'm coming around to thinking that it may not be totally necessary. It will be necessary in some ways, because we always need people who know how to build instruments, but that's a whole different aspect. That's very touching to hear, and that's a great story. I was planning to ask, you know, are the cloud labs making computational researchers consider doing experiments themselves? I myself was a computational researcher who didn't want to pour liquids; I was terrible at that. It's great that we're even seeing examples of this technology bringing higher accessibility to those practices. Okay, well, it looks like we're at time. So, thank you all again for being here as a panel. For now, this wraps up our first session. We're going to go ahead and take a 15-minute break, so feel free to come back around 10:40 and we'll get started with our second session. In terms of format, each speaker will give a short talk, then we'll have some time for one or two questions. After all three talks, the speakers will be invited back for a panel Q&A. For questions that might be good for all of the speakers, please hold those for the panel Q&A, and all of your questions should be put in the Q&A box. Our next session is focused on open science and communities, both open data for use by communities and community-generated open source resources. We do have one program change in this session: unfortunately, Janelle Knox-Hayes could not be here. But we have three great talks lined up, and our speakers are Monica Granados, Taiwo Lasisi, and Malvika Sharan. Our first speaker is Monica Granados, an assistant director at Creative Commons, working on the Open Climate Campaign. Monica, whenever you're ready.
I think Monica is still trying to get into the meeting, so we're going to start with Taiwo, actually. Thank you. I'm sorry, if you're deciding who you want to start with first, I managed to get in through the general link. So, you let me know, whatever you prefer. Okay, I'll go. Thanks, Taiwo. Hi, everybody. Sorry about that. My name is Monica Granados. As was introduced, I'm an assistant director at Creative Commons, and I want to talk to you a little bit about the pieces that we need in an open policy to solve the world's greatest challenges. I'm going to start with this big statement that we have here at Creative Commons, and it's really the ethos of the Open Climate Campaign: if we're going to solve the world's biggest problems, then the knowledge about them must be open. A great example is what happened in 2020. At the end of 2019, the World Health Organization realized that there was something happening in China: a number of cases of pneumonia of unknown origin. This was initially localized to certain regions of China, and then ultimately the whole country. But eventually it was labeled a pandemic, meaning that everyone in the world was affected by this outbreak. COVID-19 affected us all. I don't think there's any human on earth who wasn't somehow affected, either gravely or at least minorly, by this new virus. And something really interesting happened, because there was a recognition that we were facing something we probably hadn't faced in 100 years, which was when the last really big pandemic came to bear. What were normally pretty closed practices all of a sudden became openly shared information, as quickly as possible, because there was a reaction to the fact that everyone on earth was being affected by COVID-19. China publicly shared the genetic sequence of COVID-19. The National Science and Technology Advisors from a dozen countries called for open access to COVID-19 publications.
And researchers responded: 77% of COVID-19 related papers are open access. Journals responded by making their COVID-19 related papers open access with no fee to publish. There was also a recognition that preprints were a pretty good outlet for disseminating information not only freely but very rapidly. You didn't have to wait the 12 months that your paper has to go through in the usual scholarly communication pipeline. All of that resulted in a lot of information about COVID-19 being as easily accessible as possible, which generated treatments and helped us develop a vaccine for COVID-19. What were once pretty closed practices became open, because there was recognition that this is a world-level problem, and to solve that challenge the information about the problem needed to be open. But open sharing of research is really not the default. Here's just a snapshot that I've grabbed from the United States and Canada. If you take the authors of papers from the United States, only about 41% of those papers are open. For Canadian authors, only about 38% are open. Researchers are also not really using open repositories. There was a paper that came out from Federer and colleagues in 2018 showing that after the implementation at PLOS ONE of data availability statements, which required you to state where your data was available, only 20% of papers actually had data in a repository. So how do we move towards open? How do we move towards recognizing that we need to have all knowledge be as accessible as possible, save for privacy, Indigenous data, or species-at-risk data, for example? How do we get access to that knowledge, particularly in cases where we're trying to tackle world-level problems? I really like to think about it in three avenues that need to work in concert to get us towards open. The first is training. A lot of researchers don't really even know that open practices are available to them.
Many have misconceptions about the costs of open access and of using repositories, so training is a really important component. It could happen at the institutional level, but there are also many organizations, including some from colleagues that we'll hear from today, like Open Life Science, that provide training for researchers. It's really about how we connect them with existing training and resources so they know that open access, open science, and open knowledge are available methods for them. We also need to support it with infrastructure. There needs to be a place for people to put their preprints, and there needs to be a place for them to put their author accepted manuscripts into a repository. There needs to be a place for data that has critical information to be accessible to others, and there needs to be investment, again at multiple levels: at your institutional level, at the national level, at the international level, a recognition that to support open we need infrastructure. The third is incentives and rewards. We absolutely have to change the way that we reward scientists and researchers for the research that they produce. Right now they're incentivized not to do things openly. That's why we see those statistics. That's why we see 20%. That's why we see 38%. There are no incentives right now to practice openly, or the ones that do exist, researchers are unaware of, which links back to training. So one of the things that we need to think about is how we change tenure and reward structures, how we change policies to encourage a culture change towards open. But what I really want to talk to you about today is the policy aspect. So I want to talk to you a little bit about open access policies and what an open access policy should look like.
An open access policy, or an open science policy, is something that is more and more being used by national governments, funders, and international organizations to encourage producers of knowledge to work openly. Usually the policy will say that if you're going to take money from this organization to do your research, then a stipulation is that you have to make the work open. But open is a really big word. What do we mean by open? How can we operationalize open into something that is equitable, functional, and effective? This is a snapshot from the second French Plan for Open Science. You'll see that there are a couple of different high-level headings here. I want to point out three pieces that are absolutely crucial for a policy to be effective in moving us towards open in a way that is equitable and effective. You'll see a provision for open access, you'll see requiring open licenses, and I'm going to talk a little bit more about what we mean by open data deposit. So starting with open access: again, open is a really big word. Open may just mean free to read. We want to go further than free to read. We want access to be immediate, with no embargo, and to stipulate that there are many, in fact more, options for free open access where there is no cost either to the reader or to the author of the manuscript. A preprint, as I mentioned, was a tool used a lot by researchers during the COVID-19 pandemic: a version of the manuscript that has not finished the review process, but there is a lot of infrastructure being built on top of preprints to allow for review and ultimately the creation of versions of record on top of a preprint base. It is absolutely free for you to upload your manuscript to many different preprint repositories that are specific to your discipline.
Green open access is also being used a lot presently. That's where you're putting your manuscript, formatted or sometimes unformatted, but having gone through the peer review process, into a repository. Here we're really talking about how we make sure that it's the author accepted manuscript and that there is no embargo, using tools like rights retention: keeping hold of your copyright so that you can take your product, your intellectual property, and put it into a repository as soon as it is accepted. And increasingly we're hearing more and more about diamond open access, which works much like the traditional scholarly communication workflow except that there is no cost to publish openly. The costs to publish are borne through government funds, private funds, or collective funds, to make sure that neither the author nor the reader pays. So to bring all that together: it's important that rights retention is part of the policy, making sure that authors know they have to keep their rights, because that enables this entire ecosystem. If you sign over your rights to the publisher, you no longer have the ability or the right to use these other forms of open access, at least not immediately. We also want to make sure that the paper or the data has an open license. Again, if it's just free to read, that doesn't allow the data and information to live up to its full potential: to be remixed, reused, translated, or text and data mined. You want to make sure that your policy requires an open license that stipulates that you want this information or data to be reused, not just read. You want to allow for the full reuse of the publication and data, like I said, for text and data mining, and in a standardized format that enables machine readability.
Lastly, we want to make sure that there's a way to deposit open data, requiring open data and then providing examples of what infrastructure you can use to deposit it. There are many institutional repositories that may be available to you, but you can also search for a repository in the Registry of Research Data Repositories. They have a very neat tool that allows you to look by subdiscipline if you want a very specific repository where you know your colleagues will go looking for that data. We're taking all of this and integrating it at the Open Climate Campaign, where we're working with national governments and environmental organizations to create policies that make work open. Climate change research is not open right now. We want to make sure that it gets opened. And this can happen at many levels, including at your institution. Here's an example from the University of Ottawa that shows that at the institutional level you can create open access policies that can be implemented. You don't have to wait for your national government to do it. There's more information about the Open Climate Campaign at this website, and thanks for inviting me. Thank you so much, Monica. So we do have a question in our Q&A, and the question is: who is paying for this open access, when impact factors are so often important in certain disciplines? Yeah, so to answer the question: there's no article processing charge required for a preprint, or if you're depositing, for example, an author accepted manuscript. So let's say you retain your rights to your paper, and you're part of an institution that requires you to retain your rights, so you've got that sort of organizational backing behind you. If you publish in a high impact journal, say you're publishing in Nature, you got a really cool study in Nature.
By retaining your rights, you can still put a version, formatted or unformatted, most likely unformatted if it's an author accepted manuscript, into an institutional repository for free. It can appear with no paywall in that institutional repository, and it can still appear on nature.com as well, so you're still getting the benefit of having a paper in Nature. Really, it's important for me to express that there are so many ways to do open and still support the important research that you're doing and the incentives that require you to publish in certain outlets. Thank you. Thank you so much. So I think we'll go ahead and move on to our next speaker, and if you have any additional questions for Monica, you can put them in the Q&A for the panel session. Our next speaker is Taiwo Lasisi, the CLIR Postdoctoral Fellow in Community Data Literacy at Carnegie Mellon University Libraries. Whenever you're ready. Thank you so much, Kristen. Let me just quickly share my slides. Can you all see my slides? Good. Just to be sure. Yep. Okay. Good morning. My name is Taiwo Lasisi, and my presentation title today is Creating Community Data with Community Access for Community Needs. I have segmented this talk into three categories. I'm not oblivious to the fact that a lot has been and will be said about open science and open access data today. I will briefly speak on how community data can be created with community members. In this context, I would conceptualize community data as any form of evidence or information that is relevant to local communities. After this, I will briefly touch on community access to community data and how that can be facilitated. And lastly, I will speak on community data and how we can effectively position it for the use of communities. So communities across the U.S.
face a flurry of complex challenges, community development challenges especially, and addressing these usually requires robust collaboration to create valid data that can facilitate community action. International organizations like UNESCO have also identified creating knowledge systems and data that can help advance local communities. One of the ways to create such data is through a subdivision of open science that I know some of us are very familiar with, called citizen science. This basically involves the collection and analysis of data relating to the natural world by members of the community or the general public, usually as part of a collaborative project with professional scientists or researchers. Some of the ways that we as researchers can help advance citizen science are through community engagement and recurrent outreach about the work that we actually do, our ongoing projects, to create awareness and facilitate community participation. And one thing I would also say, when it comes to creating community data with communities, is: define, define, define. Define what your project is about, define what you want to do, so that community members can get acquainted with your project well enough that those who are interested are able to partner with you. Another way to create useful community data is through community review. I'm currently working on a collaborative project with some other CLIR fellows, from Johns Hopkins University and the University of Virginia to mention a few, where we're trying to explore the concept of community review. This can involve including community champions or community members as study participants.
Then, as our projects are ongoing, or after the completion of the project, we send the projects or reports back to them to give their opinions, more like a community peer review. That way, we help participants look into the community research study and review it, to be sure that it conveys their ideas and perspectives. That way, even the end product of our research will be improved in quality, because it's really and truly from the heart of the community we are hoping to serve. So open access data is crucial to community members and partners because it helps empower communities and build their capacity when it comes to knowledge accessibility. One of the questions that always comes up, and we hear it often, is: what does it mean for communities to have access? This can come in many ways and be seen in different ways, especially because not every member of the community is trying to achieve the same thing. So in facilitating open community data, one of the crucial things is to prioritize being descriptive in a way that translates to multiple community audiences. For instance, when talking about data management or metadata, it is pivotal that we consider the question of how we are making sure data is not just reproducible from the perspective of open science, meaning useful for other researchers, but also what it means for data to be useful for community members who do not have our backgrounds. So the question would be: how are we making sure that community data, or data for the community, is accessible, not just from the open access perspective of open science, but in a way that community members can understand? I would also like to quickly touch on the issue of integrating community voice in data sharing and policy reform activities.
We have really seen in the past years that the pace at which states and community-based organizations have developed and advocated for data sharing initiatives across sectors has increased. We have seen how it has moved forward. However, community members are often not included in these efforts. Although, recently, an example I've seen is that the Robert Wood Johnson Foundation's learning, action, and policy partnership initiatives have taken this up, and groups like the Center for Health Care Strategies are working with states and community-based organizations. And I want to emphasize community members here: they are not only working with community organizations but with actual local people, the community members, to integrate community voice in data sharing and policy reform activities, to better understand their project's aim, which is health equity challenges. These particular efforts and projects portray to me how community voice can be prioritized and integrated in the accessibility of community data. Now, thinking about community needs: some of the benefits of providing community data with community access are that it provides community stakeholders with access to scientific data, it increases citizens' trust, and it generally improves their engagement and active participation throughout the data lifecycle process. One other thing it does is help improve community storytelling, whether they want to use that in the form of grant writing or just to pitch their ideas. As a result, it also helps them communicate their project outcomes effectively, and I'll get into that in a little bit.
So when it comes to storytelling, community-based organizations can make use of community data to build a strong case for things like grant proposals, like I mentioned, or to facilitate other projects. They can use such community data, whether qualitative or quantitative, to make a good story for themselves by having statistical facts or qualitative quotations that support the community project's stance. Just yesterday, I actually taught a workshop on community data and storytelling, and how community partners, or people who are involved in community research, can integrate data storytelling and link it to organizational goals when creating a strong proposal or anything related to their community needs or project goals. When it comes to communicating project outcomes, like I mentioned, community organizations and partners who do scholarly work are able to get more clarity for project communication and to set realistic demands and expectations because of the data and knowledge they have. I think that is a very powerful thing: being able to link the data you have at hand to your communication to the public, or to your audience, as the case may be. One last thing that I want to make sure I mention here is one of the questions that we, as scholars, researchers, and professionals, should be asking: how are community members using data? Within communities, there are those who are citizen scientists, who just want to do research or use data for analytics, but there are also community groups and members who want to access data because they just want to understand it. Having this knowledge can help us researchers venture into projects and create data that actually supports diverse community needs.
This will also help us blur the disconnect between community data creation and actually meeting community needs. And I would just quickly mention that the three categories I mentioned today really form a cycle: community needs can define the kind of community data created, and that can translate into ensuring community access, which goes back to identifying new needs of community members based on their current knowledge of data and their unique abilities and experiences. So, with that, I would say: these are some of the projects I'm currently working on with community members. I'm also studying the effects of flooding in Pittsburgh, so please feel free to reach out. And I'm also teaching some workshops. I mean, they're over this semester, but please feel free to reach out in the spring and I can give you more information on them, if you want to learn more. So, thank you very much. Thank you so much. Any questions for Taiwo, please go ahead and put them in the Q&A. I'll just start: I'm wondering if you can talk a bit about the initial outreach to community members. How do we find these partners, and how do we ensure we're reaching out in an equitable and accessible way? Yeah, I think one thing that is very important is for us, especially researchers or professionals, to be willing to actually go out there. When I came to CMU and I wanted to start my journey of finding community partners, I actually started attending their town hall meetings and just engaging, facilitating that communication, making sure they know what I am doing and why I'm here and how they can collaborate with CMU and do amazing work.
So just put yourself out there as a professional, as a researcher, so they can know what you're doing, and genuinely care about them and what they are doing, and seek ways to collaborate where it's mutually beneficial. Thank you so much, I think that's a really important point about connecting. I think we'll go ahead and move on to our next speaker. If you have any additional questions, please feel free to put them in the Q&A. The next speaker is Malvika Sharan, a senior researcher for the Tools, Practices and Systems research program at the Alan Turing Institute in London, and co-lead of The Turing Way project, which we're big fans of here at CMU. Whenever you're ready, Malvika. Thank you so much for having me here. I'm really delighted to always hear who's using The Turing Way, who's reading The Turing Way, and who's building The Turing Way. So thank you so much for all your contributions. I am going to mainly talk about The Turing Way, but in a way that makes sense of the fact that The Turing Way isn't just a project: it's a vehicle to build community and connect communities. This is an open source, open science project and a community-driven handbook on data science and research practices. Our goal is to involve and support a community of diverse actors in data science and research, and to build reproducible, ethical, and collaborative practices for everyone. Let me just start, similar to what Monica was saying, with what it is that we are trying to address: what are some of the world's biggest problems, and how can open science actually contribute to solving them? The goal of open science is not openness itself. This is something that I believe when we're building community and working with them on resources that they can use. It's not just about creating something, but really about achieving knowledge equity for diverse actors, who should not just be able to access knowledge, which is great, but should have a say in the production and direction of knowledge building.
Here I want to show that over the years we have been working in open science, and with the Turing Way community we have really actively tried to align our work with the knowledge commons. The knowledge commons can be defined as information, data and content that is collectively owned and managed by a community of users, without depleting its quantity or quality. The digital commons is a part of the knowledge commons: global digital resources produced and maintained together in a decentralized manner. Creative Commons is a classic example of that. Collective ownership and decentralization are achieved by promoting licensing, authorship, peer production, and governance in a participatory way to foster equitable access to resources, which is a very resonant definition of open science. The participatory process of managing any commons for shared benefit is called commoning; there is no commons without commoning. A commons always requires a community, people who care about, can, or want to access the commons and its resources, as well as governance, a set of rules for caring for the resources and for the community members around them. Very much aligned with that, the Turing Way is a digital commons: a book, a public resource, and an open source community that accesses and supports the project. We value and support the idea of openness, the diversity of knowledge that people bring, and the local realities that each of our contributors brings. We foster a community that takes up advocacy and intervention work, not just writing chapters but pushing forward the conversation in their own communities to achieve collaboration, equity, and access in the knowledge system. The Turing Way started as a book on reproducible research in 2019.
The initial group of people who came together were writing about computational reproducibility, mainly thinking about what it means to build open source tools, practices, and systems. How do we apply version control and licensing? How can we apply research data management approaches, code quality, code testing, and so on? Of course, as we have heard from previous speakers in the session today, it's not just about one thing or another; it's not just about the technology itself. There are so many things associated with how we achieve openness and reproducibility. So reproducibility is one of the goals, and doing our work openly allows people to look at the processes, but it has to start from the very beginning and continue throughout the lifecycle of research. In that process, we are also actively thinking with our community about practices for project design. How do we communicate it for people of diverse knowledge? How can we make sure that people understand it? Knowledge that is written down isn't automatically accessible. How can we ensure that people can communicate about it in their own communities? We are building collaborative practices within the community of the Turing Way, but also extrapolating them into the research infrastructure. We are thinking about ethical considerations and at what stages we should apply them; of course it's not a one-time effort, it's something that needs to be integrated throughout. We also maintain a community handbook, ensuring that all the practices that we are applying in the Turing Way can be reproduced by someone else in building different communities. The project was started by Kirstie, and I joined just a few months later; now I co-lead this project with her. Our community manager is Anne Lee Steele and our project manager is Alexandra Araujo Alvarez. The project has been running for over four years, with over 300 chapters and many, many resources in there.
And I also want to acknowledge that the project is supported by the Alan Turing Institute, where we are based; it's the national institute for data science and artificial intelligence in the UK. But I should also say that although you see these names a lot, they are just enablers of the work that's happening in the community; the community itself is what makes it happen. These are individuals from different organizations all across the world who have come together to share practices from their own communities and the projects they are involved in. So here I want to leave a nugget of the messages that we have learned through building the Turing Way. One of the biggest ones is that open science is a process. The purpose of open science, again, is not confined to the one community that we work in; it's about the ripple effect that it creates. Open science also gives us the infrastructure through which we build this kind of work. It's a process for the development, maintenance, and sustainability of digital commons, while making sure that the people involved in it share the benefits. The project has grown quite a lot. We have over 450 direct contributors to the project and 5,000 monthly users. We have been building governance in different ways, and we have over 25 core members involved in its development. In the last couple of years, we have also received some recognition, being referenced in a lot of peer-reviewed articles; thousands of different resources have referenced us. But also, there are many communities and projects built on the model of the Turing Way. The Turing Way is built in solidarity with other projects, where people, rather than re-explaining what the Turing Way already has, build case studies in their respective domains.
So for example, we have an Environmental Data Science book, which is creating case studies on environmental data and at the same time referencing back to the Turing Way for practices that its users need to learn about. I am also a co-director of Open Life Science and involved in various communities, and the Turing Way has become one of the places where we can convene these different communities and involve them in conversations that we all care about. So you can find all the resources that you'll be hearing about on Zenodo, but you can also find us on social media and in other spaces, and I really invite you to connect with us. The project is developed openly on GitHub, and we recognize that GitHub comes with its own barriers, so we also provide lots of training in how you can use GitHub, version control, or the different kinds of technology that we have applied. And our sneaky purpose in doing that is that, through the process of getting involved in the Turing Way, people understand what it is to apply open source and open science practices. They can use the Turing Way as a playing field, bringing all the practices that they see and experience back into their own work. Just to give a shout-out to some of the resources that we use a lot: of course, thanks to Zenodo, infrastructure maintainers who provide persistent identifiers, so all the things that we are sharing with the community can be centralized in one location. We are using Git, Jupyter Book, Binder, and different kinds of bots to make our resources useful, welcoming, and accessible. We are also hosting the book itself on Netlify. I showed you that we have five guides and a community handbook, and 450 people writing hundreds of chapters. It is quite overwhelming if you have never seen the Turing Way; there's just too much. And often I say that you don't know what you don't know.
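As a concrete illustration of the publishing stack just described: the Turing Way is built with Jupyter Book, and a minimal book of that kind needs only two configuration files alongside the content pages. This is a sketch with hypothetical titles and file names, not the Turing Way's actual configuration:

```yaml
# _config.yml -- top-level book settings (hypothetical values)
title: My Community Handbook
author: The Community
execute:
  execute_notebooks: "off"   # don't run notebooks at build time

# _toc.yml -- the table of contents, listing content files in order
format: jb-book
root: welcome                # welcome.md is the landing page
chapters:
  - file: chapter-one        # chapter-one.md
  - file: chapter-two        # chapter-two.md
```

With these files in place, `jupyter-book build .` renders the HTML site, which a host such as Netlify can then serve, and Binder links can be layered on so readers launch chapters as live notebooks.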
So maybe begin with something that you want to know right now, either something that you're working on currently or something that you want to learn about to apply in your own work. Then, by default, you will also browse different things around it and learn about practices that you may not have known about originally. This is also a place for you to see: is there any practice that you know that you think the community should learn about, any gap in the Turing Way that you can help us develop? We understand that a book like this can get outdated, especially in data science, which is evolving constantly. We want to acknowledge that five years down the line, the practices we are sharing today may not be fully relevant, so the book should be taken as a work in progress. The book belongs to the community. Everything that we do is really for and with the community. This is not a project of the Turing Institute, although we get lots of support and financial investment for maintaining and developing it. As I said, this is a work in progress, evolving with the needs of the community. We are creating the resources together. It's about the way we do things together, the journey, and not a set of rules. Anything that's written in the book can change, and you can change it. So this is just a nice map of where the book is being used, which is worldwide, which is really fantastic to see. It also makes us believe that a resource like this, a little commons like the Turing Way, is quite important for maintaining and perpetuating the practices that we want to see. We also have community members who are translating the book. We should not forget that the book is written in English, and not the entire world speaks English, at least not as their primary language.
And we have some really fantastic people in our community who have been translating the book, and not just translating text to text. It's really about contextualizing, internationalizing, and building cultural awareness among us, making sure that we're not just advancing the English hegemony but also thinking about what a technology means for users who were not originally involved in its development. They use Crowdin for localizing all the translation materials, and some languages they are working on currently are Arabic, Turkish, Portuguese, Spanish, and French. And you can definitely come and join us, not just to translate the Turing Way; there are many communities doing translation work, so come and work with us. But we need to provide these pathways clearly. It's not enough to say: here is a book, you can work on it, you can use it. It's really about being intentional: do people really know how to get involved? So our community members are working really hard to make sure that people get onboarded, that they have different ways to contribute to and use the book, that their concerns are resolved, and that they are heard and listened to when we are not doing well in justifying our practices, so they are able to challenge us. Some of the easy ways people can get involved are, of course, fixing links, helping us fix typos, and making sure that their resources are represented in the book. The book's purpose is not to reinvent the wheel; it's to centralize the practices that other people should know. We have community members reading and reviewing each other's work. We are also translating, as I mentioned, but the main thing is to think about: what are some best practices from your community that you would like your fellow community members to know? There is a lot of work happening. The book has become one of the ways to share it with the world, but the community is where a lot of the activism and intervention work is happening.
So beyond translation, we have people who have been working on research infrastructure roles, making the case for how different kinds of roles should be formalized in institutions so they can prioritize open science practices and integrate those considerations into projects. They recently published a paper called the Manifesto for Open Research Infrastructure Roles. We have many people doing training and outreach based on the Turing Way resources. We have infrastructure maintainers who make sure that our book doesn't break and that we are keeping it accessible and useful for everyone. We have an accessibility working group. We have the Environmental Data Science book. We also have our yearly Book Dash sprints and events, and lots of different things that you'll hear about. And I'm currently working on something called the Practitioners Hub, where we are working with specific organizations from different sectors who can share open science practices from their respective sectors and tell us how the Turing Way can be useful for their own work. The purpose of creating these different pockets of work is to decentralize power. These people are the leaders in the community; it's not just the four faces that I showed in the beginning. It's not about just informing people that they should read the book and apply it, but making sure that these people can collaborate with us, and in fact lead it in their own worlds. It is also extremely important for us to think about acknowledgement and incentives, something that previous speakers have already touched on. We recognize very strongly that open science and open source have in the past been built on volunteer labor. That's not sustainable. And it's also been recognized that it's not just about who gets acknowledged at the moment in research: there are many, many kinds of work happening in the community that just do not get the same recognition and stay hidden.
And in fact, the work that stays hidden often marginalizes already marginalized communities even more, because these people are behind the care infrastructure of the community. So we are trying to reimagine what recognition for all contributors to the Turing Way looks like, and from that we can build processes that other communities can take forward. One of the easy tools we use is the all-contributors bot. It's a bot that you can install in your own GitHub repository, and you can give credit to anybody who has contributed to your work, not just those people who push to GitHub directly. We also have a narrative record where people can write about what they are doing; they each get a dedicated link that they can use in their CV. Finally, all the people who contribute to the Turing Way are listed as authors. It's not the quantity of contributions but the quality of contribution that we want to recognize. Again, I've been talking about purpose a lot, and the reason for talking about the purpose of open science, and recognizing that it goes beyond openness, is that we want to build foundational skills for people and give them tools and practices through this openness. There is a huge emphasis on outcomes, but open science practices are integral throughout the research process to achieve those outcomes. It's worth remembering that openness does not work in a vacuum. It requires a combined approach that takes reproducibility, ethics, and collaborative and inclusive approaches into account at all stages. You can do open and unethical research, or collaborative and ethical closed research, but putting it all together takes skills, mentorship, and understanding. It's not about doing everything all at once; it's about taking the right step at the right time. Together, these are considered foundational skills. This is something that we're really trying to push forward through the Turing Way. I want to just mention OLS, formerly Open Life Science, which I'm a co-director of.
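As a concrete aside on the all-contributors bot mentioned above: it reads a JSON file at the root of the repository and is driven by comments on issues and pull requests. A minimal sketch of that file, with a hypothetical project and contributor:

```json
{
  "projectName": "my-community-handbook",
  "projectOwner": "my-org",
  "files": ["README.md"],
  "imageSize": 100,
  "contributors": [
    {
      "login": "example-user",
      "name": "Example User",
      "contributions": ["doc", "review", "translation"]
    }
  ]
}
```

Commenting `@all-contributors please add @example-user for doc, translation` on an issue or pull request asks the bot to open a pull request updating this file and the contributors table in the README, so that documentation, review, and translation work is credited alongside code commits.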
It's a training and mentoring program and a capacity-building organization. This is one of the places where people who don't know how to apply specific open science practices in their own project can come and learn with us. We offer a four-month free training program, and we also offer micro-grants and honoraria for anybody who is involved in this work. It's a different way of doing open science, because as a nonprofit it allows us to be a lot more flexible than an institution can be. Finally, I'm going to take a deep breath, and I want to give you one message to take away. It should be to recognize that open science is a way to transform the way we conduct our research. In an era where we're facing a lot of challenges globally, ranging from pandemics and climate change to natural disasters and conflicts, we need to acknowledge that open practice is our duty as researchers and members of our society. The many facets of open science that we learn about should not be taken as restrictions or obstructions, but as scientific freedom. We can all learn to challenge, dismantle, and rebuild research infrastructure so that it prioritizes reproducible, ethical, and collaborative research for collective benefit. Through this radical reimagination of open science every day, we can actually change the way that we work. With that, I would like to thank all the people who have helped us get here. Thank you all for inviting me. Thank you so much, Malvika. Any questions, go ahead and put them in the Q&A. I have a question for you regarding your point about communicating to community members with diverse knowledge: I'm wondering if you can elaborate on what that entails, especially with respect to research roles and responsibilities. I want to go back to what Melanie was saying in the opening: we bring a very interdisciplinary perspective when we come together as open science practitioners.
Suddenly, you know, we're no longer just environmental researchers or bioinformaticians or biologists; we're all thinking about how we can make our work open. So that is what I mean when I say diverse knowledge owners and producers. We sometimes need to go underneath the surface. I should have shown it, but we have a sketch of this as an iceberg: on the top we see research outcomes and specific people and specific policies, but underneath we're all doing the same thing, designing our work, integrating our practices, following the same processes. It takes some time for us to recognize, when we come together as interdisciplinary researchers, that we are all doing the same thing, although on the surface it looks very, very different. The differences are not as big as they appear at face value; it's not as complicated as it seems the moment we start working with different people. Thank you. Yes, thank you very much. I'd love to invite all the speakers back on screen to answer some questions as a panel. So we have a question from Tatiana Osova about what the incentives are for individual institutions to develop open science policies, and I think that question could be for any of our panelists. Yeah, maybe I'll start, and I'll take one perspective, and that is advocating for your own researchers. There's been a lot of metascience, or research on research, showing that open practices actually do end up benefiting researchers themselves, and ultimately your institution. So you could look at it from different perspectives depending on who your audience is and who you want to convince that you should institute a policy at your academic or research institution. If you're talking to researchers, there's a clear citation advantage for open articles.
And that goes across disciplines; there's been some work from Erin McKiernan and colleagues showing that there's an overall citation advantage, really across disciplines as well. So for researchers themselves, you're advocating for them to move towards open because it's better for them. You could also take that up one level and say it's actually really good for institutions and leadership that your institution has a bigger impact; these metrics are important to leadership, and sometimes that matters in terms of funding decisions. Having more impact from the work being done at your research institution is good for researchers and good for the institution itself. So that's one perspective; I'll turn it over to my panelists to talk about others. And I would just quickly add that, as an institution, facilitating open science also helps with engaging the communities around us. As a university or any type of institution, we are talking about facilitating collaboration, and in this aspect even cross-sector collaboration from the university to the community; that would also attract diverse stakeholders to the university, and that way we might even get to venture into other valuable projects. If I may add to that, I think it's less about individual organizations building their own open science policy than about the implementation framework, because at the higher level, nationally speaking, a lot of countries are trying to build policies across countries, but the implementation itself is very important for institutions to invest in, because there are a lot of people who are doing open science but they're doing it in a
fragmented way, unsupported, which dilutes a lot of their vision. So our libraries are doing a huge service in trying to bring them together, and that needs investment, and these kinds of policies really help them get the investment and support that they need. Thank you all for those answers, and I think the next question touches a little bit on where you were going, Malvika. That question is: are there any federal changes that you believe need to happen to implement some of the open science policies that have been suggested? This is something that I suppose others on the panel would be better placed to speak on, but I want to begin by saying that I feel like things are happening, and a lot of the work so far has unfortunately happened at the grassroots level, with people acting as activists. And now suddenly a Year of Open Science gets launched, and your government says that within the next five years you have to open everything, and funders have been mandating it, and suddenly it is a priority for everyone. It's no longer a bunch of grassroots activists shouting on the street. So I feel like that kind of policy and government mandate definitely helps, but I'm going to pass it to Monica. Yeah, I mean, as Malvika said, I don't think we would have the Nelson memo, which was a memo handed down by the White House Office of Science and Technology Policy requiring that all federal departments and agencies put together their own specific policy requiring immediate and free open access to the publications and data produced by those departments and agencies; that would not have happened without that grassroots support. It wouldn't have happened without the work that's been happening at SPARC,
Creative Commons, the Turing Way, all of it. That momentum wouldn't have been there to push the decision makers to mandate it. It is, however, important how that gets executed. The memo is very broad: it says you need free, immediate open access to the data and publications, but how that gets implemented can have very different effects on researchers and on community members. We need specific language, as I mentioned in my talk, around rights retention and around the many different avenues for open access: open access does not equal article processing charges, and there are many free ways to get your publication open that still allow you to publish in whatever journal you decide is the most appropriate place for your work. We need those specifics in the policies that get created for each department and agency, and it's going to be a challenge, because yesterday I was on another webinar and there are something like 400 different departments and agencies; the OMB actually doesn't even know how many departments and agencies exist in the federal government because it's so big. So how do you make sure that each of those policies works with the other policies across those 400 organizations, and that they center equitable ways of doing open? Because what happens here in the United States will have a really big impact across the world as well. If the US moves to paying article processing charges as a default, that's going to create a very inequitable situation for other countries that cannot follow suit, that cannot afford to pay $10,000 to make something open.
And I will just quickly add to that, in terms of thinking about open reviews and open evaluations and things like that: I'm a community junkie, and I like to see community voices being integrated in this sense. That was one of the things I was mentioning in my presentation, something like a community peer review, giving the community a voice to actually have a say in things like data sharing or policy reform. I think that would also have a significant impact. Thank you very much. This has been touched on a bit throughout, but I'm wondering if any of you want to elaborate a little on what changes you would like to see in research practices, policies, and so on that would increase the global accessibility of research. I've been thinking a lot about this, and I always attribute it to my own identity. I am an Indian with citizenship from Europe, living in London; I don't belong to one country, I belong to too many communities, which means I can't think for one single country. I need to think for multiple countries, having the experience of all the privilege that I have here versus my colleagues who don't have access to the degree that I do. I need to think about how we can build a borderless open science, not so much "this is our national mandate"; we need to think about how it affects the rest of the world. Very much what Monica was talking about: we shouldn't need a pandemic to remind us that problems are universal. And I would just add to that: it goes to the point of whether people, ranging from the federal government to local leaders and community champions, are willing to invest in making this universal.
I think that's really the point: we can talk about it as much as we want, but we need people who are willing to help build capacity and invest. And I would dare to say that, when we're talking about making this internationally available, it might not necessarily have a direct benefit to the investors, but at the point of just wanting to see our world thrive, we need people who are willing to invest to facilitate this. Yeah, I'll just echo what was just said about investing in the people, but investing in the idea as well. I work for a nonprofit that is grant funded, and every year we have to go out and see who wants to support what we think is a public good; the licenses are a public good, and every year we have to convince people that we need these licenses and that there should be stewardship around them. There isn't consistent funding that goes in to do this type of work, and so it's draining. And I know Malvika can talk about, from the OLS experience, what it's like to go out there and grind out convincing people to fund something that is a public good. So we need to reframe our idea of how we maintain these systems, systems that really should be considered a public good; we should have investment from national governments, and continued investment, to maintain these infrastructures and the people that build community, that build tools, that maintain tools. Thanks for this really important discussion. I think that we've cleared out all our questions, so we'll go ahead and break for lunch. We'll take about an hour and come back at 1pm Eastern time for our session on federal and institutional policies. Thanks again, everyone. For those that are just joining us:
My name is Melanie Ganey. I'm the director of the Open Science and Data Collaborations program at Carnegie Mellon University Libraries. Our next session is going to be about the impact of policies, and we've already talked about policies quite a bit today; these themes are very overlapping. But in particular, this session will address the impact of policies from the federal government as well as from institutions and departments. And as I said before, we've never really done a full session dedicated to this topic in the past, so we're really excited about this. If you're just joining us, the way this session will work is that we will have three talks of about 15 minutes each. There will be time for a couple of quick questions for each speaker after their talk, and when they're done we'll invite all of them back on screen to answer some questions together in a panel Q&A. So if you have any questions that are more general, or might be addressed by more than one of our speakers, you might choose to hold on to those until the panel. With that, we are ready to get started. Our first speaker is LaKeisha Harris. LaKeisha is the dean for the School of Graduate Studies and Research, where she has oversight of the university's 28 graduate programs, and she is the co-lead of the institutional and departmental policy language working group at Helios. You can share your slides when you're ready, LaKeisha. Thank you, everyone. Thank you so much. We're going on a new computer, so can everyone see? Thank you, Melanie, for the introduction. Can everyone hear me okay? Yep, we can hear you. Okay, great. Thank you. So again, yes, I'm LaKeisha Harris. I am the dean for the School of Graduate Studies at the University of Maryland Eastern Shore, which is a historically Black college and university on the Eastern Shore of Maryland. Before I begin, I just wanted to acknowledge my co-leads for the Helios institutional and departmental policy working group.
They are Alzada Tipton, the provost and dean of the faculty at Whitman College, and Chris Bourg, the director of libraries at MIT. The three of us have been working really closely with the Helios team and the other working groups to really dig into the policies at the universities we are in and how they are affected by this move to open science. Part of our goal was to develop a collective action plan for embedding open science into the promotion and tenure process. As those of us who work in higher ed know, the promotion and tenure process is important because it is basically how you are evaluated as a faculty member. And so we wanted to really look at each of the institutions and their differences. As I said, I'm at a historically Black college, so our mission is different. Excuse me, LaKeisha, sorry, we're seeing your slides in presenter mode. Oh, I'm sorry. Okay. Oh, no worries. Let me stop sharing. I'm not sure how to quite do that; give me one second. My apologies, I'm not sure how to take that off. That actually looks good. How do you see it now? Does it look better now? Yes. Okay, thank you. All right. So yes, my apologies. So again, our goal was to develop a collective action plan while respecting the differences of the universities each of us resides at, recognizing the differences between the faculty, the missions, and the interdisciplinary challenges that we have. We have been engaged in discussions for more than a year now on how to engage our campus stakeholders in this mission of open science. And I can say, for someone like myself, this was a new discussion for us: not new to the system in which my university resides, but new to the university, the idea of open scholarship.
I was having conversations with faculty members on campus who were not really open to the idea because they did not quite understand how their individual work would be used, and so we've been engaged in a number of conversations about this over the past year. To this end, we started off by developing an RPT joint statement in which we asked universities to immediately sign on and say that they would engage in the process of advancing open scholarship on their campuses. We talked about the importance of open research and scholarship in shaping a positive research culture on campus. We talked about the promotion and tenure process. And let me just back up and say that we have more than 80 universities who have signed on to the Helios group to complete this work, so we have representatives from a wide range of universities. But what we found during this process was that immediately asking people to sign on was not quite the right way to go, because many were hesitant given the types of universities involved and the faculty discussions required. As I said, many people were really hesitant to sign on. So we took a step back and said, well, instead of asking you to sign on and say that you will immediately make changes to your promotion and tenure process, how about we just ask you to commit to having those dialogues on campus? So far, only one university has signed on to say that they would make those changes. That was Whitman College, where Elzeda Tipton is the provost. But we've had a lot more positive discussions about, hey, let's just commit to having the discussion about whether or not we can influence our promotion and tenure processes. And so one of the things we did was talk about the components of engaging the key campus partners, and that looks different at every university. Some universities have a faculty senate or a university senate. Do you start with the faculty? Do you start with the provost?
Do you start with the president? What we're finding is that everyone has a different way of engaging those stakeholders on campus and conveying how important this is. And then we want to ensure that everyone has the resources that they need in order to make these decisions or changes. So we've decided to start having small conversations where Helios leadership is willing to come and talk directly to the university and to the faculty, and to develop materials that we can send out to our faculty and campus stakeholders, just so they'll understand how important this is. This is an ongoing process. In January of 2023, the members of the working group contributed to an issue brief, and we disseminated this information to campus leadership and engaged the stakeholders to really further these campus discussions on open scholarship. And again, as I've stated, we're in various parts of that process. I know for me personally, I've started the conversation with our faculty leadership. We have a really strong shared governance body on our campus, and so I knew it was important for me to talk to the university faculty leadership and staff groups, because this impacts everyone. And then we talked about the core concerns about aligning with emerging federal directives in light of the Nelson memo, in which the White House declared that 2023 is the Year of Open Science. And so, what I found, and this is just anecdotal data, is that some of the faculty said, well, I don't want other people seeing my research. I don't want people being able to share my results, or to share my work with other people who don't really understand it. So as I dig a little more deeply into what the concerns are, I'm finding that some people are hesitant just because of things such as that. Whereas on the other end of the spectrum, I have faculty who are saying, yes, I want my research to get out there.
We are a smaller university, and we do a lot of great work at our university, and so we really want the work that we do to be shared widely and openly. And so we're using this opportunity to really show our campus administrators that this is really important for us to engage in, and for them to advocate for resources for us to continue this work. So, last but not least, to that end, the Helios team applied for a NASA training and conference grant, and we're working alongside other professional societies to really bring all of these campus leaders, the presidents and provosts, to the table. In January of 2024, we will be bringing presidents to Miami, Florida, where they can really learn more about the Helios group, what we're doing, and the importance of open scholarship and leadership. We just had a planning meeting; members of the working group won't be attending the meeting itself, as it's reserved for the presidents, and we have about 30 presidents who signed on, so we're really excited. We're continuing to advocate for them to come learn more about the group. And we're hoping that once they get that knowledge, they will be able to share it with their campus stakeholders, and we'll have more individuals signing on to continue the work of changing our promotion and tenure processes so that we can advance open scholarship. I think that was all that I had today. So thank you. Great. Thank you. We have time for a quick question or two for LaKeisha if anybody has a question specifically for her in the audience. I have a couple of questions, but I think that they would make sense to ask the panel, so I'll save those. Okay, with that, there'll be more time to ask LaKeisha questions as people think of them, and so we will move on to our next speaker. That is Michael Doherty, a professor and chair of the Department of Psychology at the University of Maryland, and you can share your slides. Great.
All right, so I hope you all don't mind, I'm going to use the floating head method. I'm going to start off by talking a little bit about the work behind this presentation. Actually, what I'm going to talk about today just follows directly from the stuff that LaKeisha talked about. I happen to be working on the same working group with LaKeisha, and so hearing everything that she presented really teed me up for some work that I'm going to share today, which is aligning incentives with institutional values. What I'm going to talk about is our efforts in the Department of Psychology here at the University of Maryland over the last several years to reform, and I'm going to use that word because that's really what we've done, to overhaul and reform the way we think about faculty evaluation, and I'll go into a little more detail as we go on here. If you're interested in learning more about what we've done at Maryland, I'll share some links; actually, there's a link in the shared document that can send you directly to our policies if you want to see them. I've also created an OSF website where I have a bunch of resources that I've used and other people have used in this space, and if anybody has any questions about anything we've done, I encourage you to reach out to me directly and we'll set up a meeting; I'm happy to talk about this stuff. Okay, so I think it goes without saying, but I must say it anyway: incentives matter. People respond to incentives, no matter what they are. Now, the problem with incentives is that they can also be gamed, right? Once you set up an incentive system and you identify some metrics or measures, those metrics and measures then become the target. That's the problem, as we know from Campbell's law and also Goodhart's law: once you set up those metrics, they become the target.
They also cease to measure the thing that you think they are designed to measure; they cease to be good measures. So we know faculty will game the system no matter what the system is. They'll game it either implicitly or explicitly, and there's nothing necessarily wrong with that. But it does become problematic when the things they're trying to game are not good for science or good for fulfilling the mission of the university. And so what we've been trying to do in our reform of our promotion, tenure, and annual review processes is to use metrics that, first and foremost, reflect the core values of our university and institution and the core values of science, and to set up our incentive system, our promotion documents and annual review process, such that if people do game those incentives, it's actually going to result in pro-social behavior. That's what we're aiming for. Now, whether or not we actually achieve that is a whole other ball game, but I think we're getting closer. All right, so the important thing here is that the promotion and tenure policies are probably the most important policies on any campus for tenure-track faculty. These kind of codify the incentives. I'm going to answer two questions today: one is why we reformed our policies, and for the other I'm going to describe what we did. There's a third question that I sometimes dive into, which is how we did it, but we don't have time to go into all of that; I'm happy to answer those questions later. So why did we reform our policy? Well, this is actually an effort that I started back in 2017 when I became the chair of the department. One of the things I said I wanted to do was to overhaul or reform our policies to really build a more reproducible science. I'm in psychology, and most of you probably know that psychology's dirty laundry has been out there for everybody to see.
Okay, we had some pretty high-profile cases of academic dishonesty and fraud, and a lot of the work on questionable research practices sort of emerged from what we found out was going on in psychology. It's not unique to psychology, but this is something that has been out there, very publicly, in our field for about 10 to 13 years. And so when I took over the chairship, I thought, you know what, we really need to do something about this. For one, I don't want to have any fraud cases in my department. But beyond that, what we wanted to do was build an incentive system that is going to support trust, transparency, and reproducibility. So the reproducibility issues were the impetus for how we got started on this. The other thing that really drove our ultimate reform was that the more I dug into our existing policies, the more I realized that the way we go about incentivizing and rewarding faculty bears very little similarity to what universities say is important, and I think this is probably true across academic disciplines. We talk about community engagement, we talk about making our work public, we talk about solving social issues or our grand challenges. And yet when you dig into the incentive systems, none of that is in there in a meaningful way. So we really felt it was necessary, for a lot of reasons, to bring these things into alignment. University mission statements tell you a lot about what universities are supposed to be about, but when the rubber hits the road, i.e., tenure, those core values typically aren't interwoven in a meaningful way into those policies. And the third issue here is what happens when you start digging into the metrics commonly used in promotion documents, things like impact factors and citation counts.
When you start digging into that literature, you realize that a lot of those metrics are seriously problematic, and I'm happy to share data on that for anybody who isn't already familiar with the problems with those metrics. Okay, so I said one of the impetuses for our reform efforts was the reproducibility issues, and if you weren't familiar with this, many of you might recognize some of these headlines. It literally took me about 15 minutes to identify these 15 headlines, simply by googling "academic dystopia" or something like that; I can't remember what my Google term was. But the problem is that this is what the public sees. Right? So we ask questions like, well, gee whiz, why doesn't the public trust academia? Why is there this eroding faith in higher education? Why are there individuals out there who just don't believe in science, or who are saying things that are sort of anti-scientific? Well, we don't help our case when these issues of fraud or questionable research practices emerge. And so this is the problem: whether it's an actual problem or a public perception problem, we need to solve it. So that's a very brief snippet of why we did what we did. But what did we do? Well, as I alluded to earlier, this was a multi-year effort. I started down this path in roughly 2017, and over the last five or six years, we've done everything from reforming how we advertise our jobs onward. If you look at our job advertisements, they now include things like asking our applicants to address what they're doing to ensure the reproducibility and transparency of their research. So that's in our job ads. When I develop startup packages, I explicitly put in money to support open access publishing.
It's something that our candidates can't get out of; they can't say, "I don't want that money, Mike." I give them the money and it can't be repurposed. We also did an overhaul of our annual review and merit process, we overhauled our promotion and tenure documents, I developed internal funding mechanisms, and more recently, just in the last year or so, we've rolled out, or are in the process of rolling out, some awards, all of which embody essentially the same goals and reinforce the same set of core values. We want to see people doing work that will advance fundamental science. And if they're not advancing fundamental science but they're doing work in the community, we want to reward them for doing that work and for making their work public. We also want to make sure that across all these evaluation points we're rewarding people for doing work in ways that make it more transparent, more reproducible, more open, so that more of the work products that people are producing are available to the general public. The other thing we did, and I won't go into all the details here, is that we wanted to focus on the behaviors of the scientists, not exclusively on the outputs. We wanted to focus on the things that our faculty have control over: they have control over how they do research and how they make their work accessible. They often don't have control over what journal ultimately accepts their publication, and they don't have control over what random things come up in review panels for grants, but they do have control over how they carry themselves and how they conduct their work. And so we really wanted to refocus on those core components of faculty behavior that support good research practices, research integrity, ethics, and a variety of other things that we do. So, again, building on this, what did we do?
Well, I think a very concise way of saying it is that we developed a more modern, inclusive, and fair approach. If you look at our tenure and promotion documents, words such as inclusiveness are not just put in there for show; they're actually in there for the purposes of people being evaluated. We wanted to build a document with incentives that encouraged high-quality, reproducible science. We wanted to reward people who were doing work that benefited society and people who were doing work that engaged with the community. Not that we're requiring people to do these things, but we wanted to give them avenues for promotion that recognized all these other things that people could be doing for the benefit of society and which fulfill the university's mission. We wanted to give those people a pathway for promotion that didn't force them into the sort of singular way in which most tenure documents that I've looked at envision success. And I won't go into all the other little things here, but the bottom line is that we have problems to solve in this world, and we can't do it by holing up in our little bubbles and not sharing our work publicly. So we really wanted to incentivize people to take risks, to share their work as widely as possible, and to really accelerate science. Here are a few examples of some of our criteria; if you take a look at our full document, you'll get the full picture. I'm just pulling these from one small section of our overarching criteria, under the category of quality and potential for impact. The first line up here really talks about community application of basic science for addressing real-world problems or societal needs, so this relates to the issue of solving problems that are relevant to our communities. Then there's the second bullet point here.
This is something that I think is really important in psychological science, because historically underrepresented groups really haven't been part of the scientific process. We haven't really addressed the problems that are pertinent for a lot of historically underrepresented groups, and so we wanted to give people an avenue to tenure that recognizes that if they're going to do work that addresses these historical gaps, this is valued and important, and this is a pathway for promotion. Number four here really talks about the openness and transparency component, so that fits with the theme of this seminar: we recognize the development of research tools, code, and data, and the open sharing of those resources. And finally, we wanted to encourage people to engage in transparent, ethically sound, and reproducible research. So these are all criteria, and then as a mechanism for how people can demonstrate them, we use sort of a new version of a CV where we have people annotate their research CV. A citation isn't just a single line that tells you the name of the article and the publication location; it actually has details associated with each one of those articles that talk about how they're hitting each one of these different components within our criteria. We call that an annotated CV. Okay, so what did I learn throughout this process? I'm going to wrap up here in two slides. Number one, and I think this is great, is that faculty see value in making their work transparent and in reaching a broad audience. Faculty want to do the right thing. And the great thing is, many faculty are already doing it.
The sad thing is that they aren't always getting rewarded for it, which means they probably could do it more if it were part of the reward structure, but many faculty are already making those pro-social choices. Faculty are saying, well, I can publish in this journal, which is highly prestigious but isn't going to be accessible to people in the Global South, or I can publish in this other avenue that is accessible to the Global South. One thing I learned is that we've been doing it one way for so long that it's really hard to imagine something different. I mean, who knows where our current system came from? But just because we've been doing it one way doesn't mean it's the right way. And fortunately, I was able to coax my faculty out of the notion that "we're doing it this way, so we should keep doing it this way; that's the system I went through, so everybody should go through this system." We've gotten ourselves out of that loop. The third thing here is that if you want to do this type of thing, it takes time to socialize and educate; LaKeisha talked a little bit about this. This was a five-year process. I didn't just walk into a room and say, here's what we're going to do, guys, let's do it. In fact, what I learned is that that doesn't work. What I ended up doing was spending a lot of time educating my faculty in very subtle ways. I learned there's a lot of intentionality and persistence involved. And here's the best part: if you talk to administrators, they're cool with it. They are interested in new ways of doing things and they're open to it. So this shouldn't be a real impediment to changing the way we do things, because I think administrators are generally open, though there will probably always be disagreements about how it works. So just to wrap up here with one last slide: LaKeisha mentioned Helios, and I've been on the working group with her.
And there's a lot of great stuff going on within the Helios sphere. One of the things I've been doing is working with a few people who are part of Helios and the Open Research Funders Group to run some workshops. We've run workshops for various psychology departments, one at the annual meeting for chairs of graduate departments of psychology, and one at the Association for Psychological Science. And I'll leave you with an open invitation: if you or your community are interested in running a workshop, we will engage with you and try to put one together. I hope Greg, Aaron, Caitlin, and Eunice from ORFG don't kill me for this, but we would love to be able to do this. And again, if you have questions about any of this stuff, feel free to reach out. Thank you. Great. Thank you so much. This is so interesting. We have time for one or two questions for Michael individually if anybody has any. I actually have a question. I was curious about the reaction of other departments at your university. You mentioned you've talked to other psychology departments and noted that psychologists are very aware of these issues of reproducibility, and I think a lot of us think of them as being at the forefront of these open science practices, so I'm curious if you've had conversations with other department heads where you are about these policies. Yes. There's no straightforward answer or commentary on that. We went through, as a college, a round of revisions, so we were passing our promotion documents back between various departments, and I think some of the other disciplines just aren't ready, quite frankly, for making these changes. We're doing something very different from the other departments, and I think a lot better. I'm not too sure that everybody's there yet, and that's the education side, to be honest. That makes sense. We have a question in the Q&A.
How do these new guidelines integrate, if at all, with ideas around slow science, allowing more space for thoughtful engagement with science, research involving community participation, etc.? Yeah, that is an excellent question, and in fact one of the issues in the background that we were thinking about when we developed these guidelines. If you take a look at them, you'll see some of this language built into them. We don't mention slow science per se, but what we have done is we've gotten rid of metrics, so we don't look at citation counts and impact factors; those have been expunged from our criteria. We've also been careful to reframe things in terms of substance, not quantity. The focus of our review process is substance over quantity. That's something I've had to drill into people over time, and it's an ongoing process. But I think that very much fits with this idea that getting it right should be the first thing we're concerned about, and getting a lot should really be on the back burner. So I totally agree with that. And on the issue of community participation and participatory studies: psychology is a very diverse field, and a lot of people are doing that community-engaged participatory research. We recognize that that type of work can be much more labor intensive and take much more time to curate. And so part of the reason that we're backing off of numbers is because we want to be able to give people space to do that. Great. Thank you. We have a couple more questions. Wajin Wong, our former colleague here, says: thanks for the great talk. I wonder what your thoughts are on extending the work on faculty evaluation to student success, and what the points of entry are where rigor and transparency can be built in for students. Yeah. Gosh, that's a great question.
I'll admit I haven't given the evaluation process for students much thought, but one of the things on my agenda right now is rolling out training, principally in research integrity, that really hits on a lot of these issues, though not as part of the evaluative process. But I think it's a really important aspect, particularly when we consider that those students are the people who will be applying for faculty jobs in the future, and we really want them to be teeing themselves up to be successful in all possible ways. I wish I had a better answer for you. Thank you. And we have a comment from our associate dean of academic engagement here, Nikki Agate, who says that she's been working with the HuMetrics team, with a number of Michael's colleagues including the dean of arts and humanities and various department chairs at UMD, to expand this excellent values-based assessment work beyond psychology. So, very great to hear that. With that, we will move on to our last speaker in this session, and again, if anybody has more questions for Michael, there will be another chance to ask him during the panel. Our final speaker in the session is Jamaica Jones, the program coordinator of the NASA TOPS mission and the executive secretary of the White House Office of Science and Technology Policy subgroup on the Year of Open Science. Thanks so much. Hi, everybody. Hold on, give me one second to queue up my slides. I would love your help in knowing whether or not you're seeing the participant view or the presenter view. We practiced this a few times, but this always takes a few seconds on my computer, so I think I might have frozen there. Yeah, now we're seeing the speaker view, but that looks good.
Okay, so now you can see the view that is intended for the participants, and not the one with my secret notes, correct? Yes. Wonderful. Thank you, and thank you for that introduction and for bearing with my technical glitches over here. I am indeed Jamaica Jones. I'm the program coordinator of the NASA TOPS mission, as mentioned, and I will be talking about our work to transform to open science at NASA and beyond. Thank you to LaKeisha for your kind introduction of the Year of Open Science; I'm really pleased to be able to talk to you about that as well. I'm going to briefly go over open science writ large at NASA, then move into TOPS, which is short for Transform to Open Science, and then I really will be spending the bulk of the time here talking about the Year of Open Science and what we've been doing with OSTP in the White House. So, at NASA. Hold on one second; over here on my end, my screen is quite small. Our commitments to open science are evident across the landscape of NASA science, woven into the research and community engagement efforts across NASA writ large. A lot of this is integrated into internal policy regarding the sharing of scientific data and other research and mission outputs. Internally, these commitments extend across the Science Mission Directorate, which is kind of what people think of when they think of NASA; it's referred to in short as SMD. The Science Mission Directorate is central to the NASA mission, engaging the nation's science community, sponsoring scientific research, and developing and deploying satellites and probes in collaboration with NASA's partners around the world to answer fundamental questions requiring the view from and into space. My slides aren't advancing over here. There we go. I'm sorry that took so long.
Well, in all that time, I should have pointed out that at the bottom of the screen there was a green rectangle that said Chief Science Data Office, with a little arrow pointing up. That's meant to indicate that the Chief Science Data Office, which is where TOPS is housed, is the office at NASA tasked with making the most of the science data that emerges from NASA research by advancing these three goals, and as you can see, open science is centered right at the top of this list, supporting some of NASA's highest-level priorities. Within CSDO sits a smaller unit called the Open Source Science Initiative, which is NASA's means of operationalizing open science, and that's where I sit in my work at TOPS. OSSI works primarily across four areas, and the TOPS mission is aligned with the community engagement focus. As many of you probably already know, TOPS is a five-year mission to accelerate the adoption of open science both within NASA and beyond. As has already been ably discussed here, open science principles embrace transparency, collaboration, and participation. Recognizing this, TOPS has been designed to support the community through engagement opportunities, resources, incentives, and coordination. The TOPS mission is to inspire and empower scientists, researchers, and communities to embrace open science as a catalyst for positive change. Toward its first objective, which is increasing understanding and adoption of open science principles and techniques, the TOPS team has been hard at work developing Open Science 101, a community-developed introduction to core open science skills. The training has been piloted at some conferences throughout the past year and is currently in development as a five-module course. It's in the last stages of its beta testing and is targeted for release next month.
Once it's launched, it'll be taught either through a self-paced online course or synchronously through online and in-person workshops. It's a really excellent opportunity not only to develop the skills necessary to participate in open science effectively, but also to demonstrate those skills when you're applying for NASA funding and other funding opportunities. You can sign up, if you're interested, using the QR code on the screen. I've also provided that same link in the community notes document that Melanie and her team were so kind to set up. Okay, so we were super excited about Open Science 101 this year, but that's not the only thing that we had to celebrate. As I'm sure you all know, on January 11th earlier this year, we were delighted to receive official White House recognition of 2023 as the Year of Open Science. As part of the Year of Open Science, NASA has been working with partners across 17 federal agencies and offices to spark change and inspire open science engagement through initiatives that will advance the adoption of open science across the federal sphere, and ideally beyond. At the federal level, the Year of Open Science has been coordinated and advanced by a subgroup of the NSTC subcommittee on open science. NSTC stands for the National Science and Technology Council, which is part of the broad policy apparatus that supports the science policy initiatives advanced out of the White House. I enjoy the tremendous honor of being the executive secretary of the subgroup on the Year of Open Science. I've been in that role since the very inception of the group, which was actually the middle of last year, and I'm really happy to share some of our major accomplishments thus far. First, as already mentioned, we've secured official participation from over 17 federal agencies and offices.
It's a really diverse bunch, which we're proud of, including representatives from NASA, from NSF, from NOAA, but also the Smithsonian, the National Endowment for the Humanities, the State Department, and many others, as you can see scrolling off the bottom of the slide there. Together, our participating agencies represent over $100 billion in federal science funding. The group is co-chaired by NASA, NSF, and NOAA, and in its first few months set forth four goals for the year: establishing strategic approaches toward open science; increasing openness and transparency of review processes; accounting for open science activities in review, recognition, and incentives; and doing all of the above while engaging communities that have been historically underrepresented in the practice of science. The intent behind these goals is not that they would be achieved by the end of the year. A running theme throughout the last two segments has been that this is really a culture change, and that the things we're talking about can take a lot of time. So rather, the intent was that participating agencies, each of which is quite unique and responsible to a diverse research community, would develop individualized approaches to these goals as appropriate within their home cultures and missions. Supporting this work, we drafted a federal definition of open science, which goes like this. Hold on, I've got to move my thing out of the way again; got a small screen. Open science is the principle and practice of making research products and processes available to all, while respecting diverse cultures, maintaining security and privacy, and fostering collaborations, reproducibility, and equity. So note that this definition really embeds those commitments to equity, to reproducibility, and to respect for a diverse array of knowledge types and sources. 
Okay, so our subgroup has done a lot and achieved a lot, leading to something we've all been really proud of, which is the extent and productivity of our interagency collaboration. This kind of collaboration can be really difficult to achieve across federal agencies, but by coming together and sharing resources, insights, and lessons learned, we've been able to support our participating agencies in advancing open science individually as well. Toward that end, we were proud to announce the series of early-career researcher listening sessions that OSTP held earlier this year, engaging over 1,000 participants in a consideration of the opportunities and roadblocks faced by scientists who are just getting started in their careers and who want to be, or already are, engaged in advancing open science in their work. Meanwhile, actually a few months prior, the US Geological Survey itself engaged a broad community of researchers in an Open Data for Open Science data integration workshop, working toward that discipline-specific capacity building that's so necessary to move the needle here. Funding is, of course, also essential in moving the needle, so we were delighted to announce, right at the start of our work, NSF's investment of over $12 million in its new FAIROS RCN program. That is another acronym, of course, because it's the federal government; it stands for FAIR, as in the FAIR data principles: FAIR Open Science Research Coordination Networks. Now, hand in hand with funding, as Michael just discussed, incentivization is absolutely crucial in advancing open science across our communities. Recognizing this, we were thrilled to announce the White House OSTP Year of Open Science Recognition Challenge, which was just announced a couple of weeks ago. It was designed to celebrate stories of team-built open science that benefits society, addresses a challenge, and advances the solution to that challenge, all while embodying open science principles. 
It's administered through challenge.gov; I put a link to it in the community notes. The challenge is structured such that teams can nominate themselves. There are six categories of consideration, including open science in service to communities, open science to advance education, and open science to advance solutions to pressing global challenges. There are also categories recognizing technical advancements that themselves enable open science, and open science advancements that enable innovation. And last but not least, open science to advance interdisciplinary collaboration. We're really thrilled about this challenge because it presents such a great opportunity to recognize scientists who have been practicing, supporting, and enabling open science, often without recognition, for so long. We hope that you'll consider nominating yourself if a project you've worked on meets the criteria, and we'd welcome your help in sharing word of the opportunity across your networks. This is my final slide, just FYI. The challenge.gov page offers a ton of information about the challenge. There'll also be a short information session about it, featuring my subgroup colleagues, including Maryam Zaringhalam, OSTP's assistant director for public access and research policy. That'll be taking place next Wednesday, November 8th, at 3:30pm Eastern time, and you can sign up via the link that you see on the screen, which is of course also in the community notes document. So that brings my time to a close. Thank you again for the introduction, for the invitation, and for this time to talk to you about TOPS and our work with the Year of Open Science. Thank you so much, Jamaica. Does anybody have any questions specifically for Jamaica before we move on to the panel? I actually have one. And it's possible LaKeisha and Michael could also comment on this, but they can do that during the panel if they want to. 
But you mentioned the early-career researcher listening sessions, and I was curious whether there is any concern among early-career researchers about the money it might take for them to adopt these data sharing practices. I know that they can write it into their budgets, but it seems like that's money they're probably taking away from something else if they're running a lab. I know Michael noted that they include money for OA publishing in their packages. So the money is obviously very important, and there's a cost to this. I'm just curious if there has been any concern from early-career researchers on the financial aspects of this. There were, if I recall, four sessions, and about 1,000 people participated, and what were perceived as financial roadblocks emerged as one of the most consistent and regularly voiced concerns that our attendees raised. There was broad concern about the financial costs and potential limitations, and also the repercussions that would have for equity. It was a very real concern. I didn't link it in the community notes, but following this session I'll add a link to the readout of those sessions. They weren't recorded, but they were nicely captured in some extensive notes that the White House put up on their blog, so I'll include that link in the community notes so that you can read a little bit more about it there. But yes, definitely. Thank you. And we have a question from a colleague: for the Open Science Recognition Challenge, do the submissions have to come from individual researchers, or could institutional projects also be considered? We are hoping to recognize team-based projects, so absolutely, we welcome nominations of projects. The challenge was designed around the expectation that there would be more project-based work than individual work. Great. Thank you. 
And we have a comment and question from Amy Koshifer: this is something we talk a lot about in our outreach as well, this term "science." I wonder if the term "science" may exclude some researchers; it is STEM-focused. How do you see all researchers participating in this effort? This has been an ongoing conversation across the subgroup and with OSTP, about how to phrase this and whether we want to talk about science, research, or scholarship. I've found in the last year and a half of working with the subgroup that we tend to think of and refer to science and research as broadly as possible, so as to include participation from, and a little self-recognition by, people who don't generally fit a traditional understanding of science. I mean, I wear multiple hats. I work for NASA. I also work as a liaison to the White House. I'm also getting a PhD across the road from CMU, at Pitt, in an area of focus that has me engage a lot with exactly how science is defined. And in my work, I find that my own understanding of science is quite limited. But we have made efforts in our language, in our Year of Open Science communications, to be as broad as possible and to extend that understanding, so that people like me can continue to see ourselves in a community that we might not have otherwise. Thank you. Yes, I know we often go back and forth between open science and open research to be broader at Carnegie Mellon, and I will say that it's been really nice having the definition from the White House and something we can point to. That's wonderful to hear. Okay, so with that, we'll bring all of our speakers from this panel back on screen so we can do some panel questions. And I'm going to kick it off with a question that I'll direct to LaKeisha first. So, I think you mentioned that Whitman had, as an institution, committed to updating their policies. 
And then we've also heard from Michael, where they enacted this change at the department level. I'm curious if you have any insight as to whether you think one of these approaches will end up being more common as this work goes forward. Yeah, that's a great question. Kudos to Michael, because he did a great job of breaking it down at the departmental level. I don't know that my institution would be able to do that at the departmental level per se. I think it worked out really well at Whitman because it was led by the provost. She has a direct line to the president and, you know, direct lines to faculty and deans, and I think that's probably why it worked out really well for her. For me, it's about scheduling meetings with the deans and the provost, having them talk to the president, and then, you know, bringing it to the faculty simultaneously, because we're all at the shared governance meetings monthly. So that's what we've been trying to figure out, basically: what is the best approach. I don't think there's any one way, but I would say that if you have your provost on board, you should be good to go. Great. Does anyone else have anything they'd like to say, or are there any other questions from the audience? I'll give you a minute to type. Okay, somebody is typing right now, so take your time. I can follow up a little bit on LaKeisha's comments. I've had the opportunity to work both within my university, mostly within my department, but also with other department chairs within psychology. And I think disciplines will play a very important role, because, you know, the first line of evaluation is really your colleagues, and so having communities of scholars solving the problem within a discipline is going to be really, really important for pushing culture change. People look around; they want to say, well, who else is doing it? And no one wants to go first. 
No one wants to go last, but no one definitely wants to go first within a field. So I think that's going to be a crucial component of it. And just one last thing: oftentimes I hear people say things like it's really hard to push culture change. It's true. But it's not solving climate change; okay, that's a hard problem. So if we put this in perspective, really what we're asking people to do is to engage in a prosocial behavior, right, and people want to do the right thing. So that's not hard. It's just getting the mechanisms in place, and the bureaucracies can sometimes, you know, impose barriers that don't need to be there. Great, thank you, and I have a follow-up question to that, a little bit similar to a question from earlier. In my work as a librarian here, you know, it's pretty noticeable that sometimes postdocs and graduate students are interested in engaging with open science, but the PIs of the labs, at least in the sciences, really drive the culture of the work and how it's done in the labs. This was my experience as well, as a graduate student and postdoc. We do have some graduate students and postdocs in the audience today, and I'm curious if you have any words of wisdom for them. Is there anything they can think about, aside from the fact that, you know, when they open their own labs, if they continue on in academia, that is a chance for them to have the work culture, the open science culture, that they would like with their trainees? Is there anything they can think about in the meantime? Who's that one for? 
I know you both work with graduate students, and Jamaica, you're a graduate student yourself, so if anybody has any insights. I'm thinking particularly of some of these very successful PIs: they're used to studying things a certain way, and they've had a ton of success, so they might not be incentivized to change their practices unless they're forced to by the funding agencies and whatnot. I mean, I'm a department chair, so I'm kind of in a weird place. But my sense, in listening to my own colleagues, is that the graduate students are beholden to their lab PIs; I recognize that. So a lot of it probably really does fall on leadership to send the right signals, to reinforce the types of things that they want to see their universities do, and that is not just department chairs but, I think, deans and provosts especially. Those messages are really, really important, because ultimately we're responsible for training the next generation, and I've been very consistent and persistent with the messages I give to my department. It's taken some time, but you hear the faculty talking about these things now, and you hear the students talking; a lot of people are using open science methods and things like that. I'm also in psychology, where it's been kind of out there for a while, so that might be a little different. I mean, I'm a grad student, but my hope is that this will be the end of my time in academia and then I'll continue to work in policy. And, you know, having had this perspective from working with TOPS, and particularly working as a liaison to OSTP during, like, the Nelson memo era, I can't help but wonder about the effects that policy guidance will have on a change in culture as well. I mean, as the Nelson memo guidance goes into effect. 
Federally funded researchers will be required to have an ORCID iD, will be required to deposit their research data, will be required to do all of these things that will help to effect that culture change over the long term. It may not be immediate. It may not affect today's graduate students, but it might well help the case of those who are coming five, six, seven years down the line. Okay, we have a comment from Wajin: NIH just had a funding opportunity to implement open science at the department level. That's really interesting, because we have heard, yeah, there's a need for funding for the implementation of this in some disciplines. Okay, so we have comments and a question from Matthew Humphries. One aspect of encouragement for open science that has come up throughout the symposium, but is especially apt for this panel, is the order in which engagement efforts happen. We could train graduate students, for example, to care about and be proficient in open science practices, but if the systems they move into do not reward that, their real-world education will only teach them that that part of their training was maladaptive. So we can try to include faculty in these discussions, especially where there are faculty senates, but that can lead to policies becoming stalled over faculty hesitancy and a lack of knowledge about open science. Education combined with dialogue seems to be the first step in helping encourage open science, but what does the order of events look like beyond this, if one were to try to blueprint this encouragement? So, that comment really hits on what we have been discussing for, you know, the past few years. For me, for us, I would say the starting point is just committing to having those discussions and having certain individuals trained. Like I said, for me, I'm being trained on all of this, and I've had to take time to really learn about open science practices. 
And now I feel that I can go and educate the faculty and have those conversations with the faculty senate. I do agree with you that things can get held up there, so I'm making a commitment to just keep moving those conversations along, putting myself on the agenda at least every other month, if not every month, because of my schedule. I know at my university we are really starting to take a look at the postdocs and how to prepare them for careers outside of our particular university. So this is actually, that was a great question. Now I'm looking at how I can engage them in the open science discussion, which is something that I don't think we've collectively talked to them about until now. So yeah, I agree: just start by committing to have the discussion at your university. I think that's the first step we should take. If you want to say anything before I open my mouth again? I can share some thoughts. No, please go ahead. Okay. I think it's a great question, and, you know, part of me worries about acting as if training is so important that we can just skip over these reproducibility issues and hope that they'll be fixed later, because we know that's just not going to happen. So there's some aspect of this where we have a moral responsibility to ensure that our students are doing things the right way. And I can share some anecdotal things, like when students come to you and say, I want to be part of the solution, not the problem; literally, graduate students say this. They oftentimes want to be the solution finders. But we shouldn't be putting students in a place where they say, oh, well, I'll do what I need to do now to get a job, and then when I have my own lab, you know, I will do things differently. Because once they get that new job, then they're going to be thinking, okay, now that I have this job, I need tenure. 
When I get tenure, then I'll start doing things differently, and then it's going to be something else, and that's just never going to solve the problem. So part of what I try to do, internally to my department but also on my campus, is that every single time I have a lever to pull, I will use that lever to point out issues. For example, when I write letters for promotion or whatever, I include a few sentences that state, you know, here's why we don't use citation counts: because they are biased, and they do not reflect the thing that people think they do. And that goes to our dean, and that goes to the provost. So every time they see one of these, they're getting that repetition of what's wrong with the system. I know that's just internal to our department, but when I have levers that I can pull outside the department, I try to pull them as well. That's really what we all should be doing, because the more times we say it, the more often we say it, the more likely it is that people will start changing their minds. If you'd like to share, Jamaica? 
No, I was just reflecting on the common thread in Michael and LaKeisha's answers, which was sort of starting where you are and doing the little bit that you can. I actually noted in my presentation, when I mentioned just getting people to commit to having the conversation, that getting agencies to coordinate is very difficult, even at the federal level, maybe especially at the federal level, because they each have internal cultures to navigate. So whereas we sort of came out with guns blazing with these great hopes for the Year of Open Science, it turned out that the best place to start, from an interagency perspective, was to have people make moderate commitments: to start the conversations within their agencies and figure out how it worked best for them to move forward individually. So I suppose I don't have anything to say other than to notice that that seems to be a theme that's emerging here. Yeah, and I would say the type of university definitely matters. Chris, who is one of my colleagues, works at MIT, and she said they're not going to make any changes. But she still shows up, and she's active in the meetings. And so I think just by having these discussions, there is a pathway for them to make some changes before, you know, the mandates come down and you have to make the changes. So I think changing that culture and the mindset is really the toughest part. Great. Our next question is about this idea of qualitative versus quantitative metrics for open science. The question, from an audience member, was about whether there is a way of making some of the criteria for tenure, for applying for positions, or for advancing a graduate degree, as it relates to open science, quantitative. 
And Nicky is pointing out, echoing an earlier comment, that this can lead to people gaming the system: dashboards are easy and people tend to be drawn to them, but there are some issues with that, and an approach built around openness as a prioritized process is what we need, in the spirit of slow scholarship and slow science. If we really value something such as openness, it might be that we need to accept that assessment criteria also require a slow approach. So I'm just curious about thoughts on this idea of quantitative versus qualitative metrics. I don't want to take all the time, so does anybody else want to go? Yeah, actually, this is very much relevant to the way we do things in psychology now here at the University of Maryland. You know, quantitative metrics tend to boil things down into numbers so you think you can compare apples to apples. But the problem is that the underlying units are not apples, right? Different people are doing different things; no two researchers are the same. And, don't take this the wrong way, but I think sometimes the metrification of these processes can fool us into thinking that we have some really great tools for measuring success. I think that's a problem, because the things we're trying to measure are not unitary. Even two faculty studying similar topics within the Department of Psychology here, or anywhere, could take very, very different approaches, and their dossiers are going to look very different depending on whether they're doing community participatory work, or online studies, or fMRI work, even if they're trying to answer the same question. So I'm hesitant to say, okay, let's get rid of the nuances and just think about numbers. 
And, you know, we've approached this as: we don't want to mandate. We actually started in 2017, before the OSTP memo, and we're trying not to mandate that people do things in particular ways, because we recognize things will evolve over time. We want to give people the flexibility and agency to meet the criteria in the way that best fits how they do their work, if you will. So I'll leave it at that. I think this is a debate that's come up in our previous symposia as well, particularly around this idea of how you measure and acknowledge data reuse. We've had speakers in the past who argued that you might need an extra dimension to the current citation system that allows people to be recognized when their data sets are reused. But then we've had other people suggest that that just plays more into the current system, which is very quantified, and that you really need an entirely different system for acknowledging open science. So I think it's something that continues to be grappled with. Any other comments on that topic before we move on? Well, just one: one of the issues is that many of the things we use to evaluate faculty really can be boiled down to reputation metrics. And the problem with reputation is that reputation begets reputation, which begets reputation, which means that unless you come into it already ahead of the game, you're going to fall behind, right? It's a classic Matthew effect, and that will happen with citations and publications, but also with data reuse: you will tend to use the data from labs that already have that reputation. So I think it's important, as we think about these new systems, to recognize there's a potential to accidentally amplify the inequities that already exist within academia, and there's a need to be intentional about not making those even worse. 
Yeah, I was going to make just a quick point: that really ties into what we're trying to establish, which is what resources are needed, because every university is resourced differently and has a different mission. So it's really important, as you look at your own university and your own capabilities, to ask, before you make drastic changes: are you able to handle that? And I know that's something we're looking at right now. There's more comment and discussion in the Q&A here. David is also commenting on this idea, and he links to an article, which we will put in the community notes document if it's not already there, that is an important overview of open access publishing from the global south. It looks at how well that scholarship is indexed by Web of Science, EBSCOhost, and Scopus, and so using those types of metrics can arguably ignore a great deal of research and literature from the global south. That's an excellent point. Cheryl says maybe trying to measure provides a common language for discussing what should be measured. Okay, we have a few more minutes for this panel Q&A. Oh, we do have another question: is team science something that our metrics should support, and if so, how can we best support that work? I can say yes. I think the CRediT system, the authorship crediting system, is important for documenting how people make their contributions to team science. But generally, yes, and that's one of the things that motivated some of the work we did: modernizing recognition of secondary data use and collaborative research. It's not the same as it used to be, when we were all just kind of doing our own thing. Any other comments on this idea of team science and how we acknowledge it? Do you think that CRediT is a good start? I had a fellowship with the Society for Scholarly Publishing and did a very limited study of how CRediT was being implemented by publishers, and implementation was limited. 
So there's some uptake still to be accomplished for that to be a totally effective system. But, as we were talking about earlier, maybe the most important step is the first one, so it's good that it was taken. As for team science, it seems like it would really depend on the discipline. Yeah, and Michael, I think you noted that there's a lot of difference in how the disciplines have adopted these practices and in their attitudes around them. And I would imagine, you know, team science is becoming more interdisciplinary as well, so there's a real opportunity there: as researchers in fields that have not adopted some of these practices collaborate with, say, psychology researchers, that can help normalize the practices in their own home disciplines. Okay, we have just a few minutes left in this panel. Does anyone else have any questions or comments? Well, I just want to say thank you for all of the questions, because it's giving me a lot to think about as we prepare. We're rounding out the semester, but I've been taking some notes on things I want to take back to our leadership, really pushing this a lot. Yes, people are putting some very nice comments in the chat. Great speakers. Yes, we were super excited to have this panel today, because, like I said, this has always come up a lot in the discussions at these symposia, but we've never really directly addressed it. And I think, you know, before HELIOS, it would have been hard for us to even find people willing to talk so openly about this, so I think this was really the right year for us to address it, especially with the Year of Open Science. So, with that, I want to thank our panelists, LaKeisha, Michael, and Jamaica, again for some really excellent discussion around this idea of incentives and policy. It's a very important topic. 
And with that, we are going to take a short break before we come back for our last session of the day, on open access publishing. Thank you, Melanie. Thank you.