Okay, I think we're ready to go. Thank you for joining us for today's webinar, Bringing FAIR Research Data Management to Researchers at Scale. I'm Katherine Unsworth. I manage the Skilled Workforce Development team at the Australian Research Data Commons, and I'll be your moderator for today's webinar. Just a brief word on the ARDC, if you're not already familiar with us: we run programs and facilitate partnerships that ensure Australian researchers are internationally competitive through having access to high-quality data assets, platforms, infrastructure, policies and training to transform lives. Moving along, it is with great humility that I acknowledge the traditional owners of the unceded land on which I come to you today, the Wurundjeri Woi-wurrung peoples, who have cared for and protected this land since time immemorial. I pay my respects to their elders, past, present and emerging, and extend that respect to other Aboriginal people present today. Now, before I introduce our two speakers, I would like to cover a few housekeeping topics. Today's webinar is being recorded, and we will be able to share a link with you after the event; we welcome you to revisit the content yourself and share it with colleagues. We also invite you to make comments and pop your questions in the chat box. If you think of a question for the speakers at any point, just type it in there and we'll hold it over for the Q&A part of the webinar. Please remain muted throughout the webinar. If you have a question you'd prefer to ask directly during the Q&A session, please raise your hand and unmute yourself. And now I'll introduce our speakers: Dr. Katarina Hale and Dr. Frederick Coppens. Katarina is Elixir's Programme Manager, Communities and Training, and past president of the Erasmus Mundus Association.
She has operational and strategic oversight of Elixir's communities, bringing together European experts to develop standards, services and training within specific life science domains. Additionally, Katarina coordinates Elixir's Training Platform activities to strengthen and grow bioinformatics training capacity and competence across Europe. Our second speaker today is Dr. Frederick Coppens, who is the head of the VIB Data Core, a new core facility in the Flemish Institute of Biotechnology (VIB) founded in 2023. So it's very new, and it'll be exciting to hear about that. The Data Core provides hardware, software and data-related services to all VIB centres and cores, for both non-sensitive and sensitive data. Frederick is also Head of Node of Elixir Belgium, a multidisciplinary team whose focus is on developing infrastructure services for data management and analysis. This encompasses the adoption, development and implementation of standards for interoperability of data, metadata and technologies. At this time I'm going to hand the floor over to Dr. Katarina Hale, who is going to start today's presentation. And I know, Katarina, I absolutely did not pronounce your name properly, so you might like to say it as you start your talk. It's all yours, Katarina.

Thanks very much, Katherine. And this, this worked really well. Yeah, my name is Katarina Hale. If you, Ellen, can stop sharing the slides, I can move on to bring up our slide deck as well. Here, I'll go full screen. If somebody can give me a thumbs up that you're seeing that. Great. All right. It's a great pleasure that Frederick and I will talk you through some of the FAIR research data management tools and how we bring them to researchers at scale, from the Elixir perspective. And for all of us to be on the same page, I have a couple of slides to introduce you to Elixir, a European research infrastructure in the life sciences.
As Katherine said, we're very happy to take any questions during this webinar and also later on; our contact details are on the very last slide. So, very much looking forward to the next minutes, almost up to an hour's time, I think. As I said, Elixir is the European research infrastructure for life science data. What do we do? How do we function? Elixir is an intergovernmental organization bringing together life science resources such as databases, software tools, training resources, interoperability resources, and also compute resources as well as data management support. What we really aim to do is to bring together all of these resources, tools and support across Europe to form a coordinated, single infrastructure federated across all these countries, which is accessible to everybody across Europe, but really also around the globe. The way we are structured is that we are made up of 24 nodes; that number is fluctuating a little bit at the moment, as we're growing slightly into other regions of Europe as well. Across those nodes, more than 240 institutes belong to Elixir and form that federated network of institutes and also people, really. We are located at EMBL-EBI, and "we" in that case is the Elixir Hub. That's really the secretariat that coordinates the activities for Elixir, so that the infrastructure can function and we all come together across the different regions in Europe and beyond. Together we accelerate the understanding of life, and we really function as a foundational data infrastructure serving the life sciences. Life scientists in academia and industry are our key stakeholders, and we connect international infrastructures to mobilize data for the life sciences in the European Research Area. As I said, all of our resources really are FAIR; that also means that they're accessible and free in many ways.
So that really makes it a resource for everybody who's interested in the domain and wants to interact with it. Just a couple of words on our structure. Elixir's technical expertise, and the infrastructure per se, is built on top of five platforms, as we call them. They bring together experts from the nodes to develop Elixir's technical vision, coordinate activities and define technical areas. Those five platforms look at compute, data, tools, interoperability and training. To go into a little more detail: we cover aspects such as cloud compute, storage and access to services, long-term sustainability of data resources, and finding, registering and benchmarking tools, but we also think about best practices for tool development per se. In the space of interoperability, we're looking at discoverability and accessibility of data, but also how to integrate and analyze biological data, thinking around standardised file formats, metadata and vocabularies. In the space of training, we really want to build capacity in our nodes, so that national training can also be promoted in its own right. So again: training capacity and competencies across Europe, to empower our users to actually benefit from and use what the infrastructure has to offer. Our platforms and technical experts are joined by our Elixir communities. In that sense, we are connecting infrastructure and life science across Europe and beyond. Our communities are formed around domain experts in Elixir nodes. And as you can see, the portfolio is very wide. We started off with four of them; at the moment, we have 17. The topics range from proteomics to biodiversity, microbial biotechnology and human copy number variation, and some are even a little more technical, such as Galaxy and research data management.
Those communities provide a mechanism for long-term collaborations, also with other European and international infrastructures, as well as large-scale initiatives. They drive service developments in the Elixir platforms and they provide a framework to develop and maintain community standards. With this, I'm finishing the introduction to Elixir, but I have linked a couple of resources here for the communities more specifically; there's a handbook, and we've also had a very recent webinar series where the whole platforms and communities portfolio has been introduced. So anybody who's interested, feel free to have a look at that. And with this, coming to today's topic: why do we need FAIR research data management? Why has Elixir decided to dedicate time and effort to this? Why do all of our nodes get on with the topic as well? Why do we do this together, and where are we at this point? I've just pulled up two statements here, and you could probably add the Australian counterpart to this. There is a really, really big need for data sharing and reuse. I think as researchers and people working in that space, we also have made, or should be making, a commitment that this is happening. So you can see it's a very hot topic and high on the agenda, both in Europe and the US, as you can see based on the article. And there is a buzzword that we probably all know of, but just to remind ourselves and keep us thinking along the same lines: we're looking at findability, accessibility, interoperability and reusability of data. So "as open as possible and as closed as necessary" is probably something that resonates with all of you as well. Moving on to research data management itself, and data management planning as well: it's a very complex landscape and it's often very overwhelming. So where do we start? What can I do? And who is this important for?
As you can see here, there's a number of different groups and stakeholders that will need to engage with the topic. Those range from the bioscientists and researchers to the data stewards, and potentially the investigators, lab managers or coordinators of larger projects, and also, last but not least, the funders and policymakers. They're probably the ones not so much doing the hands-on work, but they really see a need for all these data management plans, so that we're all doing the right thing. So on the one hand, support is overwhelming, because there's a lot of data initiatives out there. But that also makes it very hard sometimes to find the entry point, because a lot of what is available is relatively generic, and it might be a bit of a leap to get to where you want to go. In that sense, support is underwhelming as well. How do we in Elixir support FAIR data management? This slide captures six different angles on how we do that, and I'm going to start with the FAIR services and resources. We're really looking at promoting registries, standards, ontologies and identifiers, but also data management platforms, stewardship tools and templates, all of this aligned with the FAIR data principles. FAIR data techniques within our research infrastructure are covered in areas such as workflows, reproducible processing, transparent reporting and provenance, but also FAIR assessment, evaluation and verification methods. Most of this is specific to communities, because it really is the users that need to be able to use and apply FAIR data management in their own domains. I've added a couple of comments and examples in the slide. Again, this ranges basically across all the life science areas: human data, structural bioinformatics, rare diseases, plant sciences, microbial biotechnology, you name it.
And we have, as I said, very recently also added a research data management community that will, across the scientific domains, support researchers in thinking about FAIR data management itself. We're looking at trusted repositories; this is also something that is happening together with the Global Biodata Coalition. So Elixir has a number of recommended deposition databases and portals, and we look at scalable curation and sustainability. FAIR data policy and advocacy is also key: the FAIR principles, which we've mentioned a couple of times already, but also thinking around FAIR leadership and partnering at the global, European, national and international level, to really bring all these principles across and bring them to the ground, to the people that should be using them. It also comes down to expertise and training. We have to acknowledge that this is not something you just know how to do; you have to learn it and grow into it. So we're looking at capability frameworks, skills, data manager networks and training portals to support all these initiatives. And with this, we have the people in the centre of the chart, and I will pull up a couple more items here. They live in a space that is very complex, but also simple at the same time. There's a number of stakeholders, repositories and other things that need to be considered when a user is thinking about FAIR data management. You have institutional repositories, but also public and general repositories, and there are also specialized and national research data management platforms and repositories that need to be considered. So really, again, it's the complexity, but also the need to be compliant with many of these things. As you can see, there's a lot of different links out into those different areas and spaces.
The questions that somebody will ask themselves start around the file store, the institutional file store, but then also the general repositories that are available and could be used, together with the specialized repositories and potential national regulations that need to be considered. And last but not least, it also comes down to thinking about trusted repositories: where is my data safe, where are those FAIR standards being supported, and so on. With this, we ask ourselves the question: how can we help the researchers, the data stewards and the project managers to navigate and contribute to this FAIR data repository landscape? That's where Elixir, together with all the members and the nodes, has started to work on supporting those practices with products and processes. If you're interested in any of those, you can start off on the Elixir Europe web page, where you will see a page that tells you what we offer. And as part of the guidelines there, you can see the three research data management tools that we will be presenting today. And with this, I'm handing over to Frederick to move on with some more details on all of those.

Yes, thank you, Katarina. If all goes well here, you should hear me now. So I'll be going over some of the services that we actually provide to do data management in practice, because that's really what we're trying to enable. There's a lot of policies out there, a lot of requirements that researchers need to fulfil. But how can they actually do this? I'll talk about three services that we've developed really in the last couple of years; we've had a focus in Elixir on data management since about 2019, and this has come out of that. So I'll go over, first of all, RDMkit. What we wanted to do here is provide a toolkit for researchers on how to do data management. As Katarina said, it's a very complex landscape.
For people that are in it, we know how things work, and there's a lot of tacit knowledge of where to go and what to do. But if you're new to this, it's very hard to figure out what the best approach is, what the best practices are. So what we want to do is provide this guidance to the landscape, specifically in the life sciences. You can go to the web page and find everything that we've done. What we've tried to do is provide this local support for data management. Elixir always mentions that we have half a million life science researchers in Europe, and if you extend that globally, even way more, of course. We can't train them one by one; we just don't have the capacity. So we built this toolkit, and we built it alongside an expert network that we have across all of the Elixir nodes: a lot of people that have the same problem, and we brought them together, and they've been contributing to this resource. We've also focused on training and capacity building, really trying to professionalize data stewardship and how we handle data. And we also have a strong focus on getting those data into the open. We need brokering pipelines where we can get the data from the researcher into the public repositories, and this needs to be as easy as possible. So we've tried to describe this all in this toolkit that is written by the bioscience community for the bioscience community. And this is an evolving thing. How did we structure it? There are a number of aspects to this. We want to provide support through the whole life cycle, from planning and collecting data to processing and analyzing it, and finally preserving it for the future and sharing it, so colleagues can reuse it. This needs to end up in data management plans, but every phase has its peculiarities, which we have described.
This toolkit serves as the focal point where you can find all of the guidance, information, best practices and also examples of how people are doing this. So you can really go into depth and find the right information for your problem. We try to make this as sustainable as possible. That means that we are not going to duplicate information: we want to provide the context, and we want to signpost to other services so you can easily find them, but we're going to refer to those services for the details. Here we really want to make sure you can find what you need, and then end up in the right services to provide you with all of the details. We started basically in the lockdown; the project started in March 2020, and we launched the first version about a year later. Right now, and I think the numbers might already be outdated, we have over 150 contributors to the content, with 350 tools and resources, more than 100 pages, and quite good usage numbers. I'm not going to go into detail, but the backbone of this system is GitHub and GitHub Pages, which makes it very easy to contribute. To make it even easier, we often also just use Google Docs that we then translate or copy over into GitHub; we have an editorial board that helps do this. This is the outline. I'm not going to go into detail, we don't have the time for that, but we have different views depending on what you want. We have the life cycle, which takes you through the different steps. We also provide guidelines; these can be very generic, beyond the life sciences, like how to store data or how to collect metadata, but they can also be very specific: what do you do if you have transcriptomics data? We provide an overview of tools and resources that are relevant, and we always put them in context. So it's not just a very long list; you always get some information on when something can be relevant. And we also do this for different roles.
So as a researcher, you'll have a different question than a project coordinator or a data steward. As I said, we refer to different registries, and we try to make it as concrete as possible. I'll just give you one example of what we call a tool assembly. This is where we try to make it very concrete: learning what is available is still different from seeing how people have actually deployed it. So you can know that, if you adopt this, it works, because others have done it; and there's also a name attached, so you can contact them. There are a lot of examples by now. One is in translational biomedicine, where Luxembourg set up a system to cover the whole process for sensitive data, from data access to providing authorization, literally every single step, and they've detailed this in this tool assembly with a nice graph that, I appreciate, you can't read here, but you can go to the webpage and find it. Here you can see how all of these components work together, and you have a list of all the relevant tools that they are using. So this really helps in adopting technologies that have proven their worth. Moving on to a second resource, the FAIR Cookbook, which goes into a bit more detail. This was a product from the FAIRplus project, a collaboration also with industry, where they developed concrete recipes to cover the operational steps of FAIR data management. So where RDMkit provides context, this really goes into the weeds of how to do things. Again, here we've worked with the community; there are over 100 people from both industry and academia that have contributed. And again, there are different ways of interacting: you can just use the recipes they provide, or you can join and contribute; the same goes for RDMkit. And you can use this to drive your policies in your institute, your way of working, or your trainings. Also here, this is a broad collaboration and a recognized resource in Europe.
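To give a flavour of the kind of step-by-step, runnable guidance these recipes contain, here is a minimal sketch of a FAIR-assessment-style check: verifying that a dataset's metadata carries a minimal set of fields supporting findability and reusability. The field names and the check itself are illustrative assumptions for this webinar writeup, not an official FAIR Cookbook recipe or checklist.

```python
# Hypothetical sketch of a recipe-style metadata check.
# REQUIRED_FIELDS is an illustrative assumption, not an official list.
REQUIRED_FIELDS = ["title", "description", "identifier", "licence", "creator"]

def check_metadata(metadata: dict) -> list:
    """Return the required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not metadata.get(f)]

if __name__ == "__main__":
    record = {
        "title": "Arabidopsis RNA-seq time course",
        "description": "Transcriptomics data from a drought stress experiment",
        "identifier": "",          # no persistent identifier assigned yet
        "creator": "Example Lab",  # hypothetical name
    }
    missing = check_metadata(record)
    if missing:
        print("Not yet FAIR-ready, missing:", ", ".join(missing))
    else:
        print("Minimal metadata present")
```

A real recipe would go further, for example checking that the identifier actually resolves and that the licence is machine-readable, but the shape is the same: ingredients, then a concrete, repeatable step.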
For those recipes, they've made everything citable and credited, so you can acknowledge the work people have done. Here are some examples of what this looks like. There's always the accreditation of the people through their ORCID, and this is also done through GitHub and web technologies. If I just give an example of such a recipe: for a specific task that you want to do, or for verifying a specific data set, it will give you an overview of what is there, of all the tools and skills that you need (the ingredients of your recipe), and then a step-by-step process. This can go into real detail, as you can see on the right, with coding examples: how to run things, how to set things up, in every nitty-gritty detail. They also refer to further reading material, and they make the link back to RDMkit to provide context. We do the same in RDMkit: we provide the link to the FAIR Cookbook to go into the details, and RDMkit then provides the context. The objective here is really to put this in the hands of researchers, so that they have exemplary data sets that they can use to improve the FAIRness of their data, discover new technologies, assess how FAIR they are and how FAIR the technologies they use are, and find out, for example, what skills are still lacking and what challenges they will be facing. This has been published, so you can find this information online as well. The third one that I wanted to mention is the Data Stewardship Wizard, or DSW. This is a data management planning tool that is developed in the Czech node of Elixir. Contrary to a lot of data management planning tools, where the ultimate goal is to produce a PDF that you can send to your funder, this really focuses on guidance and exploring options, and it provides a lot more versatility. I'll go into a bit more detail.
The software is essentially, if that's not too disrespectful to say, a kind of Google Forms on steroids. It provides an additional layer that allows you to make interactive questionnaires that go into much more detail and provide feedback to the researchers. So you have a question, in this case: where are you going to store your data? There's immediately some text explaining what we mean by this, to put it in context, because a short question can sometimes be a bit confusing. And there are references to where you can find more information; here they refer to, for example, FAIRsharing, but they can also refer back to the resources I just mentioned. You'll typically have a couple of possible answers in bullet points, and they will be tagged with different levels of FAIRness. They indicate on the one hand what aspect they solve, in this case the findability of the resource, and through the colour coding you'll know how much they contribute. If you put data in your own hosted repository, it will be more difficult to find than in a commonly used repository in the domain. Both can be perfectly valid solutions; you just need to know what to use and where the differences are. So these FAIR metrics are integrated, and the system keeps track of who provided which answers, so you can work together with different people, and it gives you some advice on the answers. As you can see, depending on what you answer, you might get some follow-up questions to provide more detail. And in the end, you get a full data management plan. What's underpinning this are knowledge models. These describe all of the questions and all of the details provided. We know that these evolve, so there is provision for them to be updated and for the content to be migrated. So you don't need to start over again whenever a new knowledge model is provided by data stewards or your institute.
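Since Frederick goes on to mention that the output can be JSON linked into institutional systems, here is a hedged sketch of what such a machine-actionable DMP fragment might look like. The structure loosely follows the RDA DMP Common Standard; the specific field values, and the assumption that this resembles DSW's actual export format, are illustrative only.

```python
# Hypothetical machine-actionable DMP fragment, loosely modelled on the
# RDA DMP Common Standard. Values are illustrative assumptions, not a
# verbatim DSW export.
import json

dmp = {
    "dmp": {
        "title": "Example project data management plan",
        "language": "eng",
        "dataset": [
            {
                "title": "RNA-seq raw reads",
                "personal_data": "no",
                "sensitive_data": "no",
                "distribution": [
                    {
                        "title": "Deposition in a domain repository",
                        "host": {"title": "European Nucleotide Archive"},
                        "license": [
                            {"license_ref": "https://creativecommons.org/licenses/by/4.0/"}
                        ],
                    }
                ],
            }
        ],
    }
}

# Serialize for hand-off to an institutional system or funder portal.
payload = json.dumps(dmp, indent=2)
print(payload)
```

The point of a structured export like this, as opposed to a PDF, is that an institutional system can read fields such as `personal_data` or the deposition host programmatically, rather than a human re-reading the document.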
And this really makes for the versatile and future-proof platform that we need for data management planning and, ultimately, machine-actionable data management plans. The output of this can be a PDF, because funders require this, but it can also be JSON that you can then link into your institutional systems. This is, for example, what the Norwegian node did in their Norwegian e-infrastructure for life sciences, where DSW is one of the components used to make the data management plans. And with that, I'll hand it back to Katarina.

Great, thanks Frederick. It's always nice to see these things together, and I hope that was already useful for everybody. I wanted to follow up really briefly to highlight how all of these link together and how all of the tools are necessary when we think about data management. We're looking at specialized, complementary resources, so whoever wants to engage with them should most likely be thinking about all three components. Again, in summary: we're looking at the RDM support, guidance and context that you can find in RDMkit; then the more questionnaire-based Data Stewardship Wizard with FAIR data stewardship guidance; and our detailed recipes for making data FAIR as part of the FAIR Cookbook. It's all linked back and forward; the connections are there. And if you'd be keen to know how to contribute, do get in touch and let us know. What is important in all that space is really to build on real examples, to keep supporting the community. That's also why we work with the community: not just experts in data management and all its aspects, but really the people that have a need. So we're thinking of a user asking: how do I use FAIR research data management to solve my problem? We have different layers here: the actions, the benefits and the impact. And if you look at the pyramid on the left, deployment really is the top part of it.
But it goes down to the use cases and the case studies, and to be able to implement those, we need the maturity assessments, processes and best practices, and the tools to actually do all these things. So what is next? Among other things, how are we as Elixir going to keep supporting all these movements? How are we committing to keep supporting the tools that exist now? One of the things we have done recently is to establish an Elixir research data management community. What they will be looking at, as the community, is the network of RDM professionals. So they are thinking about RDM knowledge exchange and capacity building; the management of RDM knowledge per se; the coordination of content management and editorial functions for the RDM knowledge ecosystem, which is exactly the three tools and approaches we've been talking about, or that Frederick has introduced to you; and the management of RDM training resources and expertise, so a network of RDM trainers. We're looking into establishing learning paths and training resources. And last but not least, there are also external stakeholders we want to engage with all these initiatives and the materials themselves. This is around coordinating the interactions with external infrastructures, organizations and communities active in the space of RDM. So if we look at it once more from the Elixir angle, and the landscape perspective per se, we want to align, influence and apply solutions. This really goes hand in hand with our Elixir platforms and the services and resources, but also the external stakeholders: the infrastructures, organizations and communities. You can see the logo of the Australian BioCommons here already; that collaboration is ongoing, and we hope to broaden the list. There could be many more logos that we haven't added at this point.
When it comes to the knowledge exchange and capacity building, the capacity building really goes into our nodes, into the network and the people that we're working with, and then also directly and indirectly to the national stakeholders and the Elixir communities. As I said, when it comes down to the scientific domains, you are more than welcome to join this community. It's an open community; they will be meeting on a monthly basis, and they will start to work on a smaller internal project, but we're more than happy for everybody who has an interest to just come along and join the conversation. And with this, if I'm not mistaken: thank you very much for listening. I see there's a couple of comments in the chat already. We're more than happy to take your questions and have an open conversation about the topics we presented.

Thanks very much, Katarina and Frederick. Fantastic presentations, thank you for both of those. I guess, Adalyn, are there questions in the chat at this stage? It looks like we haven't got any questions so far. Can I then kick off with one, please? You mentioned 17 Elixir communities. I was just wondering, how do you actually manage that number of communities and keep tabs, I guess, on their levels of maturity, where they're at and what support they need, through those various steps of the growth of a community?

That's a very interesting question. Our communities are mainly bottom-up driven, so it's the community that caters for the community. But we as an infrastructure do support them in some of the structural setup, so it's really more on the operational side on the one hand. We will help them to be able to have their monthly meetings; we will help them to have an agenda. We do financially support them as well, in the sense that we enable them to meet on a yearly basis, face to face.
And then I think a really good activity is that when a community gets established, they will write their own white paper, and that includes some sort of roadmap. So the people in the community will identify what is needed in the space at that very moment. The more mature a community gets, the more we incentivize them to also revise their roadmaps. So we'll do a little bit of ushering, and a bit of pushing sometimes as well. And every community is a bit different. I think we also need to acknowledge that the different scientific domains have different dynamics, and we want those communities to be able to adjust to the dynamics they need to function as a community-driven activity. So there isn't one answer to this; we can explore it a little bit more if anybody's interested, but it really comes down to the drive of the individuals. And if we look at it a bit more from the bigger European angle: in most of these areas, projects are being funded and are happening. So where Elixir comes in as well is to really incentivize joining the efforts and coordinating the different research outputs that are being created, not just by individual institutes, but also by larger initiatives and projects that are being carried out at a given point, across Europe mostly at this point, but we're not in any way closed towards broader collaborations.

Yeah, I'm particularly interested because we do this ourselves within the Australian Research Data Commons: we have a framework, a communities framework, and there's an evaluation process as well. So we wouldn't just set up a community because of the idea that, oh, this community might be a great one to establish; they have to meet certain criteria and, you know, have, I guess, a certain level of sustainability around them as well. So is there an evaluation?
Do you come back and revisit those communities a year on and say, oh yes, this is working fantastically, we'll throw in a bit more support; or, this has just died a natural death, so we'll wind this community up? Is there that kind of approach as well?

We're starting to grow in that direction. I think the initial focus was: we want the communities, we need to engage with them, let's set them up. But we definitely share the one commitment you just mentioned: we don't want to set something up that might not go ahead. We will not establish a community if there is no community driving it. We don't assess them on a yearly basis, and the way we try to think of assessment is based on their impact. So what is impact? I think that's where we are trying to find different approaches and solutions. Some of our communities are very large. They have a very strong focus on, say, bringing people together, meeting, incentivising information exchange, and also giving more junior individuals possibilities to present and to have posters. So that's a bit more: we really focus on our meeting, we all do our research in our own right in our own areas, and we use this community to bring all of it together. Other communities, you might say, are a little bit smaller, but they have actually managed to develop standards. They developed standards, then started to get them out there in Europe, and now they are the go-to standard. Which is better and which is worse? I think for us both of these are very, very valuable. So we are thinking rather: do we need to sunset them? Do we congratulate them that they made it to the next level? We're really working out what are the things we want to keep giving them, and what aspects they need to keep existing as a community and to keep feeding into our infrastructure and the technologies. The communities still exist, and they will keep existing; their necessities and needs might change.
And specifically when we set them up, we do support them with a little bit of an internal project budget, which I think definitely incentivises them, for the first two years really, to work on something very concrete. And that also often shows us how things will kick off. Leadership will be formed; there might be other people stepping up into those leadership roles. It all rotates a little bit as well. So we are aware internally of their successes and potential struggles, but that's also how we really try to think about what's the best support we can give them at any given point.

Thanks for that. That's fantastic. Yeah, very interesting. We'll have to have a wider discussion about that at some stage too. Are there any questions? Please feel free to pop your hand up if you prefer to ask your question rather than putting it in chat. But if you're not happy to talk to your question, please put it in chat.

Maybe we can also ask if you can give us a show of hands: how many of you are involved in research data management? I think from our end, we'd also be really keen to hear from you whether those tools are potentially interesting. Are those entry points that you could be using? Or what else would you need from us to actually be able to engage with them? Because maybe something we didn't address much is that a lot of these things, I think all of them, are not just in Europe. I mean, we're working, for example, with the NIH in the US, who have contributed some of these resources. And so yeah, we need to find different ways in some cases to work together, just because of sheer distance and time zones and the different ways things are structured. But in general, we all have very similar challenges in data management, and specifically in life science, which is what we're doing. And so yeah, if you want to engage, do reach out; we've done this also in other domains.
We will find some way of looping you in, giving you the information that you need, and seeing how we can collaborate and take this further. Everything we do is fully open, and the communities, as Katarina said, are open to externals too. So yeah, we're very happy to work with you, get your information into these platforms and see how you can use this for your communities and your researchers.

Very good. Thank you. I think Lindy... Sorry, Adalyn. I was just going to flag Lindy's hand as well as a question from Robin in the chat. Thanks, Katherine. Lindy, did you have a question, or was your hand still up from just before? We can take the one in the chat first, then. So I don't know, Robin, did you want to unmute and talk to yours, or do you want us to just go from your chat?

Either is fine. I can speak to it. I'm Robin from the ARDC. My question is related to FAIR, of course. So you mentioned how important sharing and reuse are when it comes to FAIR and RDM. I was wondering if you have found any of the components of FAIR particularly hard to incorporate within research data management for researchers, because I work heavily with researchers. So it'd be interesting, from your perspective, if there's a component that you found the hardest to engage researchers around.

I can maybe take this. I would highlight two aspects. On the one hand, there is this brokering aspect. So we have repositories of all kinds for specific data types; the European Nucleotide Archive is one example for sequencing, and PRIDE for proteomics. There are loads of them, and they make data available. But for researchers that are not in our kind of bubble, it isn't trivial to get the data in there. There are tutorials, and people have described how to do it, but even then you need to know a lot of background before you can actually use them effectively.
And we've seen this specifically during COVID, where people outside of our normal environment tried to make data available, and they basically gave up because they couldn't figure out how all of these technical components work together. So making that as easy as possible is really needed, and it is a focus area, because the number of researchers generating data is growing every day. It's not just a few labs anymore; it's literally everybody. And that brings me to the second point: we really need these systems inside institutes, and that's what we're lacking. We didn't really talk about this today, but we're also working on platforms that can help researchers day to day in the management of these mainly large data sets, and in capturing all of the metadata you need to, in the end, make them FAIR and adequately described, specifically for the domain that you're in. And really getting it on the agenda in institutes that it's not just the kind of library information that you have in terms of metadata, but the domain-specific information, so that people can actually make sense of your data. Getting that mindset change is really, in my mind at least, the game changer, because if we can do this, then we can link up a lot of the information we have in an at least semi-automatic way.

Thank you, that's great. Cheers. Katarina, you might like to talk to Melissa's comment there about the workshop.

Sure. So yeah, the community aspect came up quite a lot, and as Katherine mentioned when introducing me, I initially started my role as the Elixir communities coordinator. So that was really about thinking about the Elixir communities portfolio. So I was invited by the Australian BioCommons to share a couple more thoughts on the topic. That will happen on Friday. Melissa, please correct me if I say the time wrong: somewhere around midday.
So anybody who's interested in joining that conversation, I think you do need to register, but that should be free and easy to do. I'd be keen to hear your thoughts, and I know that many of you have also been attending a course on community building. So that will happen on Friday this week as well.

And I believe there's also a question from Phil: are there any known Australian groups using Elixir services? So yes, definitely. I would need to see the stats for the services we talked about today to check whether Australian researchers are using those; I didn't have them at hand to look that up. But there are a number of other things we've been working on, and some of those people are online here. For example, around workflows, we talked about reproducibility of workflows, and we have a similar exercise with WorkflowHub, a registry for workflows that Johan Gustafsson from BioCommons has a leading role in. There's also the example of DReSA, a training registry built on the TeSS infrastructure we developed within Elixir; DReSA is developed by the ARDC together with partners here in Australia. And there's the Galaxy community, where we've been working together, I think, for a decade by now on providing these services. So there are definitely a number of examples, but I think we can make more connections. Again, there's always a little bit of a hurdle, and we're very happy to be here in Australia now; we've experienced that there is a hurdle getting back to Europe, both physically and virtually. But we find ways around that, and so we're very happy to accommodate people here to take part in more topics. Then we need to figure out how to do that, what the best approach is, what makes sense for you and what is feasible on our side.

Thank you for that answer. That's great. Alexis has also popped in. Now, Alexis, did you want to ask your question, or are you happy for me to read it?
I'm happy for you to read it if you like. But yes, as I said, I've still got to go through the resources.

Yeah, so Alexis is interested in the fact that these resources are life-sciences based, built for the life sciences communities, and is wondering how applicable they are across domains.

A lot of what we do, at least the principles and the technologies, is transferable. I mean, the RDMkit is based on GitHub Pages. We spun this out as an Elixir toolkit theme, which is being used by, at last count, 12 different projects. So it's not just RDMkit. Of course, the content in this case is specific, but the way of working and tagging things is more generic. As for things like WorkflowHub, most of the things in there are life-science specific, but there is literally no reason that has to be the case. We can accommodate any workflow type, and we are already working with different domains. So it depends a bit, but I think in general we try to find ways of working that are generic and then apply them to the life sciences to make them concrete. And we are also working with different domains. In Europe, you have these clusters around life science, energy, environment, and two more. And we're working together more and more with them around foundational technologies, which are then implemented specifically in each domain, with us taking on the life science part.

Yeah, I think oftentimes, Frederick, just having the structure of the resource or material in place is useful, because obviously that's not necessarily domain specific, and once there's a structure there, it's much easier to overlay your own specific domains or disciplines onto that training or those resources. So we've kind of done the hard yards for everyone, is what I'm trying to say.

And the way of working is also something... I mean, with RDMkit, we necessarily had to do this all online.
The good thing was that we already knew the partners well within the Elixir network, so that worked out quite well. It was one of the most successful projects, at least, that I have run within Elixir. But there's now a way of working around gathering together, and you'll see more toolkits coming out of Elixir because of that. It's something we can adopt, and we've done similar things with communities: some ways of working work and others don't, and then we learn from that and continue with the good ones, right?

And maybe just on the cross-research-domains point: it all starts somewhere, but Elixir also doesn't function in an isolated space. So it's really about reusing technology that has been developed in one place and applying it to another. A lot of the topics are interdisciplinary, and they will, or we will, end up collaborating with others. And as Frederick said, the backbone is usually adoptable. How much work it then is to actually get it implemented, or whether it is the right thing to use in a different domain, is probably for others to judge as well, but in principle, the guidelines and the structure should be there.

Yes. And we've learned to avoid 'bio' in the titles of tools, because we've burned ourselves there. You now have FAIRsharing; it used to be BioSharing, and the reason for the name change is obvious. So we want to reflect that now, and we're much more conscious about it.

Very good. Are there any other questions? There don't appear to be any in the chat. Anyone want to put their hand up for any last questions, I guess?

Katherine, we'll be happy to share our slides and have them shared with the attendees as well. So if anyone's interested, I mean, they're on the video, but it's sometimes easier to just click. They will be made available from our site, so then it's up to you to make them accessible.

Yes, we'll pop them in Zenodo and they'll get a DOI, and people can use them and cite them.
So that'll be all good. Alexis has just said, thanks everyone, looking forward to exploring these resources further. So yes, I think we're all going to enjoy doing that.

So just to finish up, I wanted to put in a plug for the ARDC's new impact booklet, which has come out over the last month or so. It's now available online, and there's the bit.ly link there for anyone who wants to go and have a look at it. This is the new Partnering for Success impact booklet. In this booklet, we explore how the ARDC provides researchers with competitive advantage through data, obviously, and we share how we partner with research, industry and government to unlock the potential of data for research, and also highlight our future direction through the development of our thematic research data commons. So if you're interested in our new direction, the three thematic research data commons, Planet, People, and HASS and Indigenous, are very much highlighted in this new impact booklet. So it's worth a read if you are interested.

So just wrapping up. Next slide, thanks, Adalyn. Just a big thank you, a very warm thank you, to everyone who joined us today, and particularly, obviously, to Katarina and Frederick. They're a long way from home, and it's wonderful to have them here. Thank you for sharing your insights and expertise. It's been an absolute pleasure hosting your incredibly informative presentations today. If anyone has any additional questions or would like to connect directly with the ARDC skilled workforce development team, you can reach out using the contact information on this slide. And if you haven't already, please subscribe to our newsletter. And with that, have a great rest of your day, everyone, and see you next time. Thanks all.