So welcome everyone to the GraspOS webinar on the Open Science Assessment Framework. Today we will be with Clifford and Josephine, who will be presenting the ongoing work on the development of the Open Science Assessment Framework in the GraspOS project. Before we start, a few housekeeping rules as usual. You'll notice the webinar is being recorded, so please keep your camera off if you don't wish to be seen. There will be a dedicated time for questions and answers at the end of the presentations, so please write your questions in the chat; I'll collect them and we'll make sure to address them after the presentations today. And finally, you can raise your hand if you wish to speak, but please make sure to keep your microphone muted the rest of the time, to avoid background noise and to ensure a smooth webinar.

A quick introduction before we start: this is the first in a series of webinars on the Open Science Assessment Framework. The Open Science Assessment Framework is one of the main outputs of the GraspOS project. GraspOS is a three-year Horizon Europe project focused on creating an open and federated data space for research assessment. The aim is to provide tools, services, and guidance to support and enable policy reforms for open-science-aware responsible research assessment, at various levels, including researchers, institutions, organizations, and countries. As I was telling you, the Open Science Assessment Framework, which I'll call the OSAF, is one of the main outcomes of the project. Its aim is to assist research funding and performing organizations in tailoring and implementing new-generation open-science-aware research assessment approaches, which Josephine and Clifford will introduce today. The materials will be made available after the event to our community and on the project website, and I'll share the links to these pages in the chat later on.

A quick presentation of our presenters today, Josephine and Clifford, who are both working on the development of the Open Science Assessment Framework in the GraspOS project. Josephine is a senior open science specialist at CSC, the IT Center for Science in Finland, in the data management office. She is a senior open science and FAIR data policy and practice specialist in international contexts. She is also leading work on open scholarship and access, and is the Finnish partner representative in the collaboration network Knowledge Exchange. Finally, she is very keen on finding ways of incentivizing open and FAIR research, and she provides expertise on a variety of FAIR matters, including maturity assessments, incentives, and practices.

The second speaker today, Clifford, is a researcher at the Centre for Science and Technology Studies (CWTS) at Leiden University. He works in the social studies of science, with a particular focus on open science in the context of responsible research assessment. Clifford is leading the development of the Open Science Assessment Framework in the GraspOS project. He is also co-leading one of the CoARA working groups, Towards Open Infrastructures for Responsible Research Assessment. Finally, Clifford is a member of the CWTS focal areas on Information and Openness and on Evaluation and Culture.

So I will let the presenters start the webinar, and I wish you all a very interesting event. Thank you.
Welcome everybody, and thank you Lutti for the introduction. Can we advance to the next slide, please? First, I want to give you an idea of the content of this webinar. We'll give some additional context on the GraspOS project, and, as we're introducing a new framework, we'll look at a couple of related frameworks to help situate where ours fits. Then we'll discuss in some detail, but at a high level, what the Open Science Assessment Framework is. That will be followed by Josephine, who will introduce a use case of an element of the framework. And finally we will end with a Mentimeter-facilitated discussion, and I hope at the end we can also take any questions you might have.

Before that, we would like to get to know some of the contexts you are coming from. So if you want to use the QR code, or the code itself at the Mentimeter website, you can answer the first question and we will show the results. There is only one participant at the moment, and the code has also been pasted in the chat. Okay, we have a nice impression: "a little" is leading, in a close race with "some". There's a fair amount of "a lot", some "not at all", and "I don't know". Shall we advance to the next question? I think there was a second Mentimeter question; is it possible to use the arrows in the lower left corner to advance to the second question? There we go. Okay, I think we've seen them. Next slide, please.

A bit about GraspOS. Lutti already did a good job of presenting that, so I will just add that we are 18 partners across three different areas of expertise: infrastructure experts, responsible research assessment and open science experts, and communities. So we have quite an effort focused on engaging communities outside of our project. In addition, among these partners there are nine pilots, who serve a quite important role in conducting research assessments using our approach and our resources, across different national and disciplinary contexts. So they are co-development partners. Next slide, please.

Our project is guided by the idea of open-science-aware responsible research assessment. That is a recognition, among other things, that there are two movements intersecting here, and we look at the broad idea of responsible research assessment as the context for embedding open science contributions. So on the one hand, assessments need to reward open science practices. At the same time, in our view, the infrastructures used for assessment need to be open. This second piece, the bottom part, is, we think, a quite novel contribution. Next slide, please.

Thank you. I don't need to mention most of this, but I will just go through the core components of the project. One is the Open Science Assessment Framework, which we will be discussing today. Another is producing assessment data, tools, and services that will be used by the pilots, but also made available to the public. These resources will benefit from a federated open metrics infrastructure. The GraspOS pilots are the next component. And finally, as I mentioned, we spend time and effort on engaging the community of practice, and joining this community of practice is the CoARA working group on open infrastructures for responsible research assessment. Okay, on to frameworks. Next slide, please.

So let's start with a definition, as the word "framework" maybe raises questions. To be clear, by framework we mean a basic structure underlying a system, concept, or text.
In my reading, it doesn't imply anything particular about what we're doing or what these other frameworks are doing; it's just that they are structured in a way that the activity can be presented as a cohesive whole. So these are the relevant examples, which we will go through somewhat briefly. Next slide, please.

Most of you are probably aware of this one. It came out in 2017, I believe, and it was the first large-scale effort to articulate what we mean by contributions to open science, and more importantly, what the ways are in which contributions can be made, really focusing on going beyond research outputs. You can see that the OS-CAM covers research outputs, including datasets and software, and funding. Next slide, please. But it also covers a much broader view of academia and research: contributions to peer review, teaching, mentoring, and consulting. Many of these are not the traditional ways in which people are assessed.

The NOR-CAM, the Nordic region career assessment framework, builds on this idea, contextualizing it for their values and interests, and also adding in tools and databases and other elements of their research community. Next.

The next framework, which also builds on the work done before it, both of these and other resources, is the OPUS research assessment framework. OPUS is a sister project to GraspOS, and they've done a nice job of distilling this into a taxonomy of contributions that can be either open or not, divided into research, education, leadership, and valorization. It's interesting that, together with this taxonomy, they include what they refer to as policy interventions. A policy intervention might be, for example, addressed to the director of an institute, to ensure that if you're going to assess things like open science, there is capacity for doing that and appropriate training available. So this is a resource that we will include in our framework, for our pilots and beyond. Next one, please.

This indicator frameworks report came out from an expert panel for the European Commission. I'm putting it here because it pays a lot of attention to context, the context of the evaluation, and how it informs, in this case, what sorts of indicators you might select. Next slide, please.

This is the INORMS SCOPE framework, which we've actually adopted for use by the pilots and in our framework as well. It's a high-level framework addressing responsible research assessment, high level so that it can be used across different contexts and different levels of aggregation. It basically has five steps. Start with what you value, so that you can then assess or measure what you value. Take seriously contextual considerations, which also determine in part what sort of assessment approach you will take. Then, once you have those, look through the options for evaluating, and determine whether evaluation is the right tool for what you're aiming to do. Once you select an approach, probe deeply on the possibility of introducing unintended consequences, things that may privilege some groups and exclude others, for example; you want to be aware of that. And then finally, evaluate your evaluation. Next slide, please. And the next.

So we'll now start with our framework. Our framework is guided by a number of principles, primarily through CoARA, the Coalition for Advancing Research Assessment, in Europe.
But the framework itself, we believe, is also well situated to facilitate some key principles. First is the diversity of contributions. Including the vast range of contributions beyond material outputs is quite a challenge, and we're trying to take on that challenge in this framework. Related to that is the idea of primarily qualitative evaluation, supported by quantitative indicators; these two things are connected, and we think we can make some headway on them. And also, looking at the SCOPE principles, there is one we think we can address, which is to evaluate with the evaluated. To us, this means including those who are being evaluated through the full process, from the beginning, in designing the assessment and deciding on what counts as evidence. Thank you.

So the Open Science Assessment Framework has three components: a method, an assessment portfolio, and an assessment registry. The method here we call SCOPE+I, and the "+I" means infrastructure. So we introduce assessment-specific infrastructure into the process. That is operationalized as a number of resources, like templates, guidelines, and checklists, to help evaluators work their way through designing the assessment, and it's focused on responsible research assessment in general and open science in particular. The assessment portfolio (I'll just go through these three, then jump to the next slide) is a digital portfolio to capture all the information about an assessment event: not only the narrative and evidence, but also the outputs from the SCOPE process, so that you have all the information that shapes the assessment. It will enable including a diversity of inputs and roles, at different levels of aggregation (researcher, group, and institution, for example), and it's positioned as a collaborative resource, open to those being evaluated as well as to the assessment team. And then finally, an assessment registry, to publish an assessment protocol, which I'll come back to. Next slide, please.

So we have a lot going on in this framework, and I want to break it down by connecting the pieces on the basis of assessment event phases. We have four phases: the readiness phase, the design phase (I'm going down the left column), performing the assessment, and then evaluating the assessment. Back up to assessment readiness: this correlates with the first two steps of the SCOPE process, start with what you value, and context and purpose. From the method column, we provide a number of templates and guidelines to facilitate this process. And already, in the right-hand column, we instantiate an assessment portfolio and begin collecting this information, so that the portfolio travels with the process through each phase. In assessment design, which for SCOPE is options for evaluating and probing deeply, there are again templates, guidelines, and checklists relevant for this phase, and we begin to populate the assessment portfolio, which, again, we position as a multi-actor object. It is populated with the evidence and narratives appropriate for the approach, and the resulting protocol is also published. Next is to perform the assessment. Here, the assessment portfolio serves as a way to distribute all the same information and content to the stakeholders involved in the assessment. And then finally, to evaluate the evaluation at the conclusion of the event, and to publish the protocol: not the people evaluated or the evidence used, but basically a transparent record of what the values, context, and purpose were, and what sort of approach was then enacted, so that it can be a resource for others.
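To make the idea of a portfolio travelling through the phases concrete, here is a minimal sketch in Python. Everything in it, the field names, the phase model, the redacted protocol, is an illustrative assumption of mine, not the GraspOS portfolio specification or the RAiD data model.

```python
# Illustrative sketch only: names and phases are assumptions,
# not the GraspOS assessment portfolio specification.
from dataclasses import dataclass, field
from enum import Enum


class Phase(Enum):
    READINESS = "readiness"  # SCOPE: start with what you value; context, purpose
    DESIGN = "design"        # SCOPE: options for evaluating; probe deeply
    PERFORM = "perform"      # distribute content to assessment stakeholders
    EVALUATE = "evaluate"    # evaluate the evaluation itself


@dataclass
class AssessmentPortfolio:
    """One portfolio instance accompanies a whole assessment event."""
    values: list[str] = field(default_factory=list)     # what the unit values
    context: str = ""                                   # contextual considerations
    purpose: str = ""                                   # why the assessment is run
    evidence: list[dict] = field(default_factory=list)  # narratives, outputs, PIDs
    phase: Phase = Phase.READINESS

    def advance(self, next_phase: Phase) -> None:
        # The same object moves through every phase, accumulating
        # information rather than being recreated per phase.
        self.phase = next_phase

    def publish_protocol(self) -> dict:
        # Only values, context, purpose (the design of the assessment)
        # go to the registry; evidence and evaluated people do not.
        return {"values": self.values, "context": self.context,
                "purpose": self.purpose}
```

The point of the sketch is simply the design choice described above: a single multi-actor object travels through all four phases, and the registry sees only a redacted view of it.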
So, with all that in mind, we are still quite conscious of the fact that context, purpose, and values inform assessments, and that each context is different, so flexibility is still a part of this approach. One more slide. Next slide, please. The slide is not turning, by the way; I think it crashed. I'll keep talking in the meantime. Yes, it's crashed for some reason, so I'll just start talking about the last slide. It provides a bit more detail on the assessment portfolio, specifically the openness profile, which is what Josephine will present in a moment. Basically, the assessment portfolio will have a specification and then multiple templates to address different levels of aggregation, from individual researcher to institute, institution, or community, and we will experiment with a country-level portfolio. At the individual level, there's the individual assessment portfolio, for hiring, tenure, or annual review, for example. But the openness profile is different. It's also for individuals, but it's more of an ongoing account of one's contributions to open science. So I wanted to give you that background to set up Josephine's presentation. Okay, now it's back; there it is. Next one, please. This one; this is the slide I've been talking about. The final thing I'll say is, if you're interested, RAiD, the Research Activity Identifier, is the underpinning infrastructure for this portfolio, and if you want more information, it is linked on this slide. So I will pass it on to Josephine.

Right, thank you very much. Yes, so I'm here to tell you about the openness profile that you've already heard briefly mentioned, and especially the pilot ambitions of the Finnish Research.fi service when it comes to this openness profile. One more, please. Thank you. So what is it all about? An openness profile is a digital resource in the form of a personal profile, which lists activities and outputs related to open science, located and accessible in one single place, making it really easy for the end user. The concept of the openness profile was originally created, and worked on quite extensively over a few years, in the collaboration network called Knowledge Exchange, and in this GraspOS project we are furthering that work. Two reports were published on this topic by Knowledge Exchange, the first of which defines the basic concepts, and the second of which outlines the reference model and goes a bit further into the requirements of an openness profile. I have included the links to the reports towards the end of my presentation, in case you want to read more.

When it comes to the technical side of the openness profile, as Clifford mentioned, it is covered through the RAiD service, the Research Activity Identifier, which is a PID system. It is currently being further developed by the FAIRCORE4EOSC project, and the aim is to add a responsible-research-assessment-enabling extension to RAiD, and in this way to be able to satisfy the requirements of the openness profile. And there is another PID system that will be leveraged for this work, ORCID, which enables the automated processes involved.
And in the GraspOS project we rely heavily on the work done in the nine pilots that Clifford also mentioned, which consist of national, research performing organization, and thematic level pilots, and these nine pilots will feed very valuable input into the work in terms of the requirements and wishes for the openness profile. Next slide, please.

So, some of the basic requirements for an openness profile, and why we think it's a good idea to have one in place. The main idea is to reduce the administrative burden, and to allow for both metrics-based inputs as well as narratives. This is really important, as not all things related to open science are necessarily measurable, but they should still be able to be considered significant contributions to research. So it allows recognition for various different types of research outputs. It is also important to assure provenance in all aspects, which in turn fosters trust. There needs to be a balance between the automated processes and manual checks; we cannot automate everything. There is also an urgent need to involve the communities in the work, to reach consensus when it comes to, for example, working on taxonomies, workflows, and standards. PID-based automated workflows are also important, allowing linking between research objects, such as between people, organizations, and outputs. And lastly, using APIs makes it possible to retrieve information for the creation of, for example, knowledge graphs (see the sketch below). Next slide, please.

So the mockup would probably look something like this through its integration into ORCID, where you can see on the right-hand side, in short, the types of input you could include in the profile, ranging from ORCID records that come in structured formats equipped with a PID, to manual entries in the form of text, for example one output of a type that might not necessarily have a PID attached to it. Other types of inputs are mentioned in the OS-CAM that you also heard Clifford present just now. On to the next slide. Here you can see some more concrete examples of the types of outputs and activities that could populate an openness profile, in case you would like to familiarize yourselves with these outputs later on; I'm not going to go any deeper into these right now. On to the next slide, please.

So this is what Research.fi looks like. It's a research information hub, and this is where the use case part comes into play in this presentation. You can consider this service as Finland's national CRIS system: it compiles information on Finnish research from institutional, national, and international sources, and its main purpose is to provide an overall picture and a comprehensive information base of the research produced in Finland. One of the functions of the researcher profiles in Research.fi is to enable the transfer of information to research funders or organizations. This is all based on the researcher's permission; nothing is transferred without the researcher having their say on it. This enables using the information for evaluation purposes, but only in such cases. And in terms of national monitoring of open science and research, Research.fi acts as a platform for presenting the results. The monitoring indicators are used to determine organizations' openness profiles and their levels of open science and research.
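As a small illustration of the PID-based, API-driven retrieval just described, the sketch below pulls a researcher's works from ORCID's public v3.0 API. The endpoint and response shape are ORCID's; how GraspOS or Research.fi actually ingest such data is my assumption, and the printed titles and PIDs are only the raw material one might feed into a profile or a knowledge graph.

```python
# Sketch of PID-based retrieval from the public ORCID v3.0 API.
import requests

ORCID_ID = "0000-0002-1825-0097"  # ORCID's public example record

resp = requests.get(
    f"https://pub.orcid.org/v3.0/{ORCID_ID}/works",
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

for group in resp.json().get("group", []):
    summary = group["work-summary"][0]
    title = (summary.get("title") or {}).get("title", {}).get("value", "untitled")
    # External PIDs (DOIs etc.) are what allow linking works to
    # people, organizations, and other outputs in a knowledge graph.
    pids = [eid["external-id-value"]
            for eid in (summary.get("external-ids") or {}).get("external-id", [])]
    print(title, pids)
```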
And on to the next slide. So, briefly, about the researcher profile. It's a public profile where researchers can populate information from ORCID and from their home organization. The researcher can choose what he or she wants to publish; as I said, it's completely on a voluntary basis. The content of the profile can include, for example, the name of the person or contributor, a description of research subjects, keywords, affiliations and titles, education and degrees, and other research activities and merits, for example memberships and awards. It can obviously also include publications, research data, and so on. So it can include quite a large variety of researcher-related information. Next slide, please.

So, about the pilot ambitions of Research.fi. The researcher profile would list open science activities as a separate section, and it would allow a more diversified inclusion of open science elements and activities than it does now. There will also be a test bed for this, where researchers can try out this functionality themselves on a voluntary basis. And there is also a plan to collect further feedback on the usefulness and user-friendliness of the openness profile, through interviews and a survey. And lastly, it is very important to point out that everything that goes into the researcher profile is totally up to the researcher him- or herself. I mentioned this already, but just to stress it: it is not mandatory to populate this profile at all. This is based on the MyData principles, a new approach in personal data management and processing that seeks to move away from an organization-centric system towards a more human-centric one, where personal data is considered to be in the hands of the individual, who can decide on access and stay in control of this data (a minimal sketch of this idea follows below). Next slide, please.

So this is another mockup of the researcher profile, which might give you an indication of what it looks like and where the open science activities would be positioned. Next slide, please.

And it is always good to also bring forward some considerations and maybe some limitations involved. It's good to know that Research.fi is not an evaluation tool, and for that reason it is also not designed to support evaluations; rather, its purpose is to collect and disseminate information on various research activities. So the openness profile would not be used for evaluation purposes either; it would merely be put in place to showcase openness. There is still some dispute around the meaning of openness, and this is also something that should be clearly defined in the project and agreed upon among all partners involved. This is especially important in a context where openness is to be considered a merit. Talking about merits, it's also good to know that openness cannot in all cases be considered a merit, because there are always exceptions to the rule, as the scientific landscape is itself very diverse. For example, when it comes to sensitive data, there are not equal opportunities to openly share all data. So we have to be quite careful if we start measuring openness. We also need to make sure that we only include indicators that are realistic across disciplines. Only then can we start to include open science achievements as metrics to be evaluated. And as a last consideration, we need to find a reliable way of bringing forward the open science activities that are not currently very visible, for example team science efforts. Next slide, please.
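Returning to the MyData principle mentioned a moment ago, here is a minimal sketch of what consent-gated transfer could look like: nothing leaves the profile unless the researcher has granted permission. The permission model, names, and fields are illustrative assumptions, not the Research.fi implementation.

```python
# Illustrative sketch only: the consent model and names are
# assumptions, not the Research.fi implementation.
from dataclasses import dataclass, field


@dataclass
class ResearcherProfile:
    orcid: str
    sections: dict[str, list[str]] = field(default_factory=dict)
    consents: set[str] = field(default_factory=set)  # controlled by the researcher

    def grant(self, recipient: str) -> None:
        self.consents.add(recipient)

    def revoke(self, recipient: str) -> None:
        self.consents.discard(recipient)

    def transfer_to(self, recipient: str) -> dict[str, list[str]]:
        # Human-centric rather than organization-centric: the consent
        # check lives with the individual's profile, not the receiver.
        if recipient not in self.consents:
            raise PermissionError(f"no consent granted for {recipient}")
        return self.sections


profile = ResearcherProfile(orcid="0000-0002-1825-0097")  # ORCID's example iD
profile.sections["open_science_activities"] = ["open peer review for Journal X, 2024"]
profile.grant("example-funder")
print(profile.transfer_to("example-funder"))  # allowed
profile.revoke("example-funder")              # a transfer would now raise
```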
So, on to the added value for Research.fi. All things related to advancing research support and policies are best done in the international arena, together with like-minded colleagues and organizations; I think we can all agree on that. This is especially important for smaller countries such as Finland. The GraspOS project also emphasizes convergence when it comes to defining best practices within open science, more concretely by developing tools and services to merit researchers and organizations on their open science activities. The project also supports the implementation of the core commitments of the Agreement on Reforming Research Assessment, which is on the agenda of most higher education institutions in Finland. Next slide, please. And I believe this is the last one, including only the links to the Knowledge Exchange reports on the openness profile I mentioned earlier. That was all from me. Thank you very much.

And I think we have the Mentimeter next. You probably noticed that there is a different code for this part of the Mentimeter: it's 29444583. So, how would you describe responsible research assessment using only one word? This could also be interesting to see in a word cloud setting; we can probably facilitate that. It is interesting to see very different types of input coming in. Yeah: complicated, difficult, but also necessary and important, I see coming across quite many times. Someone thinks it's a little bit fuzzy as well. And if you want to open up what you had in mind when typing anything into the Mentimeter, just raise your hand and we'll let you speak.

So the next question: please put the following on a scale from one to five, ranging from no interest to highly interested. Interesting. We want to pick your brains on a few topics here related to the OSAF. So there is almost a tie between "would the SCOPE+I method be useful at your organization" and "would assessment portfolios be of interest". So there seems to be an appetite and interest for both of them.

Okay, the next question: are open science efforts valued and encouraged at your organization? Here are a few options to choose from, but also the category "other" if it doesn't quite fit your situation. So, most of you say that it's encouraged and valued, and that it's done through good guidelines. So it's good to hear that these are available. Here I would encourage you to write in the chat, if you wouldn't mind, if you would like to provide longer input, or also to speak up, because it would be interesting to hear more about this topic. (I wonder if we can also take questions or comments while we're doing this. Yeah, both: either in the chat or raise your hand.) There are many different inputs coming in on how open science is being assessed at your organization. One says that it follows the CoARA recommendations; others mention that open publication is encouraged, assessment against the national strategy and CoARA commitments, and transparency and reproducibility. It would be interesting to hear how this is concretely done at your organization. Can you go a bit further down? Oh, there are more inputs there, but I don't know if you can go back. Yes, can you scroll down so that we can see the rest of the comments? Yeah, thank you.
So some say that they have a plan in place to include various aspects in their new assessment framework, and there are discipline-independent assessments taking place according to established recommendations. Okay, I think this was the last question we had in the Mentimeter this time. Right. We are opening the floor for more questions, if you have any, for me or Clifford or for anyone in this room, really; unless everything was really clear, I guess.

Hi Josephine and Clifford, thank you very much. There are still no questions in the chat; I think everything was very clear. Very nice presentation. While we wait for questions, can I ask for a few minutes of your time and share the feedback form with you in the chat? If you agree, we can see if any questions arise in the next minutes. Okay, there is a question in the chat from Julia: "Do you think to add software or other research products in your evaluation?" Hi Julia. So, do we think about adding software as a research product? Yes, such outputs will be included. I guess there are some challenges, too. As we know from one of our pilots, there are some challenges to evaluating contributions to software. One is that it's not so easily tracked; it's often, say, in a GitHub repository. The other, which I find also quite compelling, is that there are versions of software, so you would often need to be able to specify which version. And I think Laurent and his computer science pilot are looking at ways of collecting that in an automated way, but it's not always structured in terms of what the contribution was and to which version. But yeah, it's a good question. Thank you.

Thank you, Clifford. Do we have any other questions? I see in the chat that Kumar, who is working with Laurent, wrote: "We are working with Laurent on identifying software in publications as well." So, for information, Kumar and Laurent are working on the pilots linked to this activity. Another question from the chat, from Ivan: "Is GraspOS focusing only on researcher academic career assessment, or does it include other aspects of research assessment, such as projects, teams, and institutions as well?" Another good question. We're trying to cover the full range, from individual researchers to groups, faculties, and institutes, for example, and across our pilots I think we cover quite a range. There are pilots that are looking at individual researchers, but usually within a higher level of aggregation, such as a department. There are a couple of national pilots, so at a national, funder level. And we have three that are community based or thematic; it's hard to classify them, but, for example, the computer science community is one of them, and the humanities and social sciences is another.

Thank you, Clifford. Next I have a comment from a participant, who says that the SCOPE method is also meant to provide the context; I believe that's a comment rather than a question. Yeah, just as a comment: I think GraspOS as a project will give you the opportunity and the infrastructure to assess different contexts. Of course, with the SCOPE method, I think each organization, team, or department can define what needs to be evaluated, and I think that's the conversation we're having: we are not defining what needs to be assessed. We're trying to cover as many research outputs and practices as possible.
But in the end, it's down to the individual institution and organization, I think, to work with our material and also, for example, to set up their own research assessment exercise. Unless I'm not putting it correctly; Clifford, feel free to add to what I said. Yeah, I would only add that addressing context, and incorporating it into assessment decisions, seems to be a fairly complex exercise, the latter part especially; but that is one of the resources we will develop, a template for accounting for contextual factors.

Thank you for this, Josephine and Clifford. Next up is an announcement for the next community of practice meeting of the GraspOS project, which will be on assessing open science in the context of computer science, so it will be about the computer science pilot. The link is in the chat, and if you have not yet joined the mailing list, you can do so here.

And the next question is from Ivan, who asks whether and how GraspOS activities are connected with the CoARA working groups. Maybe, Clifford, you want to talk a bit more about the CoARA working group you are co-leading? Yes. So, two of us from GraspOS are co-leading the open infrastructures for responsible research assessment working group, so we're directly connected to that; it's directly related to what we're doing, and we see some synergies there. Beyond that, we're paying attention to the others. It's quite a nice initiative to have so many working groups focused on research assessment. So it's mainly the one that we're working directly in, but they're all, I think, quite relevant; for example, some of them are focused specifically on early careers. And yeah, I think that answers your question; if not, please let me know.

Thank you, Clifford. I've added in the chat the link to a blog post on the way forward for our working group, in case anyone is interested in reading a bit more about the plans. Okay, I don't see any other questions in the chat. We'll just wait a second, but it seems like we've finished the questions. So I'll put back the link to the feedback form at the bottom of the chat; please take the time to provide your input, as it helps us make the next webinars even better. So I'd like to thank Josephine and Clifford for the webinar, and also Zeynep, who contributed, and Zeynep, who is hosting on the technical part. So thank you everyone, and we'll share the links with you when the materials are available. Thank you very much. Have a nice day.