Hello everybody. Welcome to the FAIR Data 101 Express virtual course. My name is Liz Stokes and very soon I'll stop being so nervous and relax into the first of our live Q&A sessions for this course. Thank you so much for joining us. Looks like we've got 99 people here, which is a nice number. In Australia we have a convention of acknowledging the traditional owners of the land on which we meet. For me, living in the inner west of Sydney, this means acknowledging the Gadigal people of the Eora Nation. I'd also like to pay respect to Elders past and present as the traditional custodians of knowledge in this land, and welcome any First Nations people who are here today. So I'd like to extend a warm welcome to everyone here today. I know that the FAIR data principles can appear a bit like a jungle, but if you look closely you'll notice there's a little path in there, and I hope over the next four weeks we'll be able to encourage you to enjoy the wide variety of ecosystems we find among the FAIR data principles. If you can hear some noises, it's actually my dog, who has noticed there might be some activity outside. I apologise for that; she's very nice otherwise. So let's go for a walk among the wonderful, rich jungle that is the FAIR data principles. Today we're going to focus on findability, and we have assembled a fine panel of people from the ARDC who are going to tell you a few stories about the FAIR data principles. Before we get into that, though, I'd really like to share our hopes for you in this course and give it a bit of an outline, so we have a sense of what's happening and what to expect over the next four weeks.
So our hopes for you are that you'll become familiar with the concept of FAIR data and its application in research, that you'll gain experience with technologies, services and tools for making data FAIR, many of which are provided by your very own Australian Research Data Commons, and also that you'll be able to identify best-practice examples of FAIR data management. This is the second time we've run this course and we're doing things a little differently to keep it interesting. We've opened up unlimited registration and also shortened the course, so hopefully this shorter format will keep things manageable for you. What it does mean is that we have a wonderful array of expertise and different experiences across your cohort. We know that people are joining both to learn about FAIR data for the first time and also to undertake a refresher. Let me share my screen. So these are the components of this course. There is a course website, and I encourage you very strongly to bookmark that link, which I hope you have access to already, because it has links to all of the course materials. Over the next four weeks there will be two webinars, a set of activities and a quiz for you to do, and we will run a live Q&A session just like this every Wednesday. There's also a Slack workspace, and several people have joined already; I think we've just passed the 100-participant mark there, and it's really lovely to see people introducing themselves and starting to get stuck into some really good discussion on findability. You'll notice on this screen I've also included a link to our Code of Conduct. Because we understand that learning is a social activity, we have a Code of Conduct that we require all participants to follow.
So I recommend you copy this link and have a read of our Code of Conduct, because I think it's up to all of us to uphold a safe and respectful learning environment and ensure that our social interactions create spaces in which everyone feels empowered to learn. If you feel the Code of Conduct has been breached in any way, there's a link on that code to a reporting form which goes direct to the ARDC, and we'll follow up on that promptly. So, our findability experts today: Natasha Simons, who's the Associate Director of Data and Services at the Australian Research Data Commons; Siobhan McCafferty, Project Manager for the ARDC and PIDs enthusiast; and Keith Russell, who's the Manager of our Engagement Team. So now it's probably time for me to stop blathering on. This concludes the guided portion of our introduction, and I'm going to hand over to Natasha, Siobhan and Keith to share some stories about how the ARDC facilitates findability in relation to the FAIR data principles. I'll run their presentations all together and then we'll open up for questions, so feel free to pop questions into the question component or the chat in your GoToMeeting and we will take it from there. Thanks, Liz. I'll wait for you to make me a screen sharer. Yes. Oh, that's right. I need to scroll. Just let me go into present mode. Okay, great. So thank you, Liz, and welcome everybody. Really nice to have you here. So, the State of Open Data Survey is a survey conducted by Figshare and Digital Science, and it's been run each year since 2016. In 2019 they had more than 8,000 participants from more than 190 countries, and they asked the question: which circumstances would motivate you the most to share your data? What do you think the answer was? The top answer, at 62%, was the increased impact and visibility of my research, and coming in next at 60% was for the public benefit.
In other words, researchers want the data they share to be findable, and this is where the FAIR data principles come in. The FAIR data principles give us a framework for making data findable, specifically mentioning rich metadata, the use of identifiers, and exposure of data descriptions in an index or searchable resource. So I'm going to talk about Research Data Australia, which is an ARDC search engine for Australian research data collections and speaks to a number of points on the findable aspects of FAIR. Research Data Australia helps you find, access and reuse data for research. It caters for researchers, policymakers, educators, business people and the public. It's more than just a search engine: it enables you to reuse existing data, to explore beyond your discipline and to assemble data resources to solve big problems. It's interesting that the number of the day appears to be 99. That was the number of people who came in in the first part of this webinar, and I think it's increased now, but 99 is also the number of contributing sources into Research Data Australia, contributed by organisations across Australia. There are 144,126 collections of data records in Research Data Australia, as well as records for software, researchers, services, grants and projects. It's important to note that we don't store the data itself in Research Data Australia. The records are provided by those contributing organisations, and the links go back to the data sets held in their collections. Not all of the data is open, either; it's a mix of descriptions of data as well as links to open data. Okay, so for the fourth year in a row since the State of Open Data survey and report were initiated, data citations were listed as the holy grail in terms of rewards for data sharing.
So researchers who share their data want it to be seen and heard; they want to stand out from the crowd, they want it to be understood and to have impact, and they want to be acknowledged as the creators, because that helps improve the visibility of themselves and their research. Research Data Australia helps researchers improve their citation counts by making their data more findable, and we include a 'Cite this' feature on each data collection page. We also have PID services at the ARDC, PID meaning persistent identifier, which is also mentioned in the findable aspect of FAIR. So we can issue DOIs, Digital Object Identifiers, which you can see with the arrow there: it's the number beginning with '10.', and that is key to data citation, as DOIs are used to track data impact. On this page (sorry, my control panel's just in the view of it) you can also see that we show in Research Data Australia the number of data citations, and that's through a link between Research Data Australia and what the slide calls the Thomson Reuters Data Citation Index, though it's now owned by Clarivate. Okay, so Research Data Australia also improves the connections of Australian research data, because it's really important that data is discovered in context. If you just find a data set and you don't know much more than the data set description, it's not going to help you much. You really need the link to the publication that discusses the findings and analysis of the research data. You need a link to the researchers who produced the data and the article; a link to the samples that were collected, or a description of them; a link to the research organisation the researchers are connected to; a link to the underlying software; and links from the data collection to the project itself, as well as to the bodies that funded the research.
And Research Data Australia enables all those things to be collected and connected in the metadata records. Research Data Australia also takes information as inputs: the data collection records come in from institutions, as well as records from funding organisations like the Australian Research Council and the National Health and Medical Research Council. And then that's pushed out to places like Google, so data becomes more findable in Google just by being in Research Data Australia. We also feed out to DataCite, the organisation that mints Digital Object Identifiers, which also has a search interface. And through those things we're also able to exchange information with publishers, so that when somebody searches for an article in the Scopus database, for example, the link goes back to the underlying data held in a repository. Contributing that information to Research Data Australia feeds it into all of those services. So just to wrap up with a data impact story. I started by saying that the top motivators for sharing data were the impact and visibility of research as well as the public benefit, and this is a really good example. Professor Anne Cust from the School of Public Health at the University of Sydney led a study into the link between sunbed use and melanoma, and she found that young people are especially sensitive to sunbed UV radiation. Her team did some modelling and estimated that banning sunbeds would reduce the number of melanoma cases in New South Wales alone by 120 per year. As a result, that research was pivotal to the New South Wales government banning commercial sunbeds from the end of 2014, with other Australian states and overseas jurisdictions following suit. This research won the Sax Institute's Research Action Award in 2015, recognising the impact the study had on improving public health.
It features in the ANDS (Australian National Data Service) data impact stories if you're interested in following that up. But it's a real reminder of why data is so important and the impact it can have on the world. Research Data Australia helps with that by improving the findability, citability and connection of data, which helps make more data more FAIR, which means more impact from research. So that is it for me. I don't know, Liz, if you can... I think I've stopped showing the screen, and you hand over to Keith. Excellent. Keith, are you there? Oh, but we might need you... I'm muted. Wonderful. I'll be quiet now. The floor is yours. Do you need me to make you a presenter? Yes, please. Then I can share my slides. Yes, let's see. Can you now see my screen? Yes. Okay. I would like to build a little bit on one of the FAIR guiding principles, principle F2: data are described with rich metadata. But I'm going to make it a little bit more specific for today and talk about discovery metadata. In this current period of lockdown, sadly not being able to reach out to the larger world out there, I do want to bring in a few international pictures, at least to maybe taunt and tease us; they certainly taunted me. Because research is an international endeavour, it's all about making sure that research is findable not only for Australians, but for the whole world. Metadata are crucial to making sure that data is findable and, in the end, reusable; they're not just the icing on the cake, they're actually a crucial underlying factor. There is a huge array of data out there, and trying to find the right data set as a researcher or a potential re-user is like looking for a needle in a haystack. This is where metadata comes in really handy, as a signpost to actually get you to the right data set. If nobody can find your data, they're never going to be able to reuse it down the track.
Metadata can also perform a lot of other functions down the track, but today I'm going to focus on the discoverability aspect and how metadata enables discoverability. When describing your data set, you need to consider which metadata standard to use, and that will usually depend on the platform you're putting the data set in. There is a huge array of metadata standards out there. There's a lovely visualization of the metadata universe; if you ever get a chance, have a look, it's mind-blowing how many standards there are. There are a lot of those very lovely acronyms like ISO 19115 and RIF-CS, etc. I won't go into all these different standards and all these different details. There is also a very useful metadata standards directory which can be helpful for mapping out which standards exist. There are reasons for these different metadata standards, and a lot of that is because they are for specific disciplines or specific purposes. So imagine a data set about keas, those lovely, beautiful, endangered birds in New Zealand. When you're describing a data set about keas, you probably need to be more specific about what is really in the data set, for what purpose the data has been collected, and what information is in there. It could be directed at the habitat of the kea and how it interacts with that habitat, or it could be about completely different aspects, such as the genomes of the keas. So when describing your data set, add a rich amount of metadata so that you can identify what the data set is exactly about. That's also really important to the researcher who wants to make use of the data set, so they can quickly judge whether it's useful to them or whether it's a completely different type of data that's not relevant to them.
Now, I mentioned earlier all these different metadata standards, and that's all very well, but if you put metadata in one specific standard, it's going to be discoverable to those who are looking for it and understand the language of that standard and that discipline, but it might not be understandable for somebody looking from a different perspective, a researcher using a different language or asking a different type of question. For that you need translations between these metadata standards, so the data can be found by different groups. We call that crosswalking, and it is actually possible to crosswalk between these metadata standards to make sure that information in one metadata standard gets carried across into another system, is discoverable in that system, and then again in another system. So, for example, you can have a data set in your local institutional repository (I've put up the CSIRO DAP here as an example), and that holds the metadata in a specific metadata standard. That gets moved across to Research Data Australia, where it's captured in RIF-CS, and that is then also exposed through schema.org and can be found through Google Dataset Search. That means a researcher can find the data by looking in the DAP, in Research Data Australia or in Google Dataset Search. Similarly, oceanographic data probably makes best sense deposited in an ocean data portal; it goes in again in a very discipline-specific standard, and that will also be harvested by Google Dataset Search. So making sure your data goes into the right repository, using the appropriate metadata standard for discovery, makes it much more discoverable across these different settings.
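To make the crosswalking idea concrete, here is a very rough sketch in Python. All the field names and values are invented for illustration, and real crosswalks (say, ISO 19115 to RIF-CS) are typically XSLT transforms over XML rather than dictionary renames; but the basic move is the same: carry each source field across to the matching slot in the target standard, here ending up as schema.org JSON-LD of the kind Google Dataset Search harvests from landing pages.

```python
import json

# Toy crosswalk table: institutional-repository field names (invented)
# mapped to their schema.org equivalents.
REPO_TO_SCHEMA_ORG = {
    "title": "name",
    "abstract": "description",
    "responsibleParty": "creator",
    "landingPage": "url",
}

def crosswalk(record, mapping):
    """Rename fields per the mapping, dropping any field the target
    standard has no slot for. Real crosswalks also handle one-to-many
    mappings and value conversions."""
    return {target: record[source]
            for source, target in mapping.items()
            if source in record}

# A hypothetical repository record.
repo_record = {
    "title": "Kea habitat observations 2019",
    "abstract": "Field observations of kea habitat use in New Zealand.",
    "responsibleParty": "A. Researcher",
}

# Wrap the crosswalked fields as schema.org JSON-LD for a landing page.
schema_record = {"@context": "https://schema.org", "@type": "Dataset"}
schema_record.update(crosswalk(repo_record, REPO_TO_SCHEMA_ORG))
json_ld = json.dumps(schema_record, indent=2)
print(json_ld)
```

Note that the source record had no `landingPage`, so nothing is carried into the schema.org `url` slot; in practice that is exactly the kind of gap a crosswalk surfaces.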
So, wrapping up: make sure that your data set is well described with rich metadata in the relevant metadata standards, because that enables discoverability for a range of researchers, and make sure it's discoverable through multiple platforms and portals. Okay, thank you. That was my quick pitch. Awesome. Thanks very much, Keith. I'm going to hand over to Siobhan McCafferty now to talk to us about the glory of PIDs. Great. Can everyone hear me? Excellent. Cool. I've just managed to wrangle my microphone settings. So thank you for that introduction, and I'm going to talk to you a little bit about PIDs and about DOIs. Can everyone see my screen as well? Yes. Yes, we can see. Great. Cool. Okay, let me get my notes as well. By means of introduction: my name is Siobhan McCafferty, I'm a project manager at the ARDC, I work in a few areas, and, as Liz said when she introduced me, I am an avid PIDs enthusiast. I think PIDs have a really quiet and powerful role in the plumbing of joined-up data, and they're sometimes a bit of an unsung hero in FAIR data, particularly the F of FAIR. So I thought I would talk you through the ARDC identifier services and really concentrate on DOIs. The ARDC takes a really strong stance promoting and developing PID infrastructure and services. For example, we bake PIDs into our national-level services and research infrastructure. We support and co-invest in programs that have a mandatory FAIR component for the data. We support the AAF, who are the ORCID consortium lead in Australasia. And last but not least, we have our suite of PID services, which you can see on the screen here: DOIs, Handles, IGSNs and RAiDs. The most widely used of these is our DOI service. So what is a DOI? Well, it's a unique digital identifier for objects; it kind of does what it says on the tin. It's a persistent link to an object's location, and they're used to facilitate discovery and the tracking of citation metrics.
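That "persistent link to a location" idea can be sketched in a few lines. This is an illustrative Python snippet, not ARDC code, and the DOI value is made up; the point is just that a DOI is the part after the resolver, a prefix starting with '10.' plus a suffix, and the global doi.org resolver turns it into a clickable, persistent link.

```python
def doi_to_url(doi):
    """Turn a bare DOI into its resolver URL. A DOI has the shape
    <prefix>/<suffix>, where the prefix always begins with '10.',
    and resolves through the global Handle-based resolver at doi.org."""
    prefix, _, suffix = doi.partition("/")
    if not (prefix.startswith("10.") and suffix):
        raise ValueError(f"not a DOI: {doi!r}")
    return f"https://doi.org/{doi}"

print(doi_to_url("10.1234/abcd.5678"))  # hypothetical DOI
```

The persistence comes from the registry, not the string: when the data set moves, the owner updates the URL the DOI points at, and every existing citation keeps working.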
A DOI has six essential metadata elements: identifier, title, creator, publisher, publication year and resource type. That's not a lot of essential elements, really; it's just enough to make it really flexible and very, very useful. What do we use DOIs for? A whole heap of things. You might recognise them from the top of published journal articles; for example, the DOI at the bottom here is for an article, and you can click on that and have a look. You'll probably recognise the '10.' at the beginning of all DOIs. But DOIs can also be used for research data sets and collections or repositories; for software and models, so it's a digital identifier that can be used for digital-only outputs; and for grey literature, so theses, reports, conference papers, newsletters, creative works, preprints. This is really important in a world where journals are often expensive to access. Also technical standards and specifications: the standard for DOIs is itself accessible through a DOI. DOIs can also be used for instruments, and this is a new area where things are moving: telescopes, synchrotrons, sequencers, what I call shiny thingatrons. Anything you like, really, can have a DOI attached to it. Whether it should have a DOI attached to it is a slightly different discussion. DOIs use Handle technology, and you might have noticed a few screens back that we also offer a Handle service. Handle is an underlying PID service, or infrastructure, that's used globally, and DOI uses Handles along with some additions to the Handle standard. DOI is an international standard, which is really important for improving things like findability: because everyone is using the same standard, we know exactly how it's going to work. It's overseen by the International DOI Foundation, and DOIs are allocated locally by globally distributed registration agencies. What that really means is there's an overarching governing structure.
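To make those six essential elements concrete, here is a minimal sketch of a DOI record and the data citation you could build from it. All the values are made up, and the citation format shown is a generic one rather than any publisher's house style; the point is that six fields are already enough to produce a complete, trackable citation.

```python
# The six essential DOI metadata elements (placeholder values).
record = {
    "identifier": "10.1234/abcd.5678",   # the DOI itself
    "title": "Example Survey Data 2019",
    "creator": "Smith, Jane",
    "publisher": "Example University",
    "publicationYear": 2019,
    "resourceType": "Dataset",
}

def format_citation(rec):
    """Assemble a simple data citation from the six essential elements."""
    return (f"{rec['creator']} ({rec['publicationYear']}). {rec['title']} "
            f"[{rec['resourceType']}]. {rec['publisher']}. "
            f"https://doi.org/{rec['identifier']}")

print(format_citation(record))
# -> Smith, Jane (2019). Example Survey Data 2019 [Dataset].
#    Example University. https://doi.org/10.1234/abcd.5678
```

Because the citation ends in the DOI's resolver URL, citation-tracking services can aggregate every mention of the data set under that one identifier.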
And underneath it, there are local or geographically separate organisations that allocate the DOIs. For our purposes, the two organisations we need to talk about are Crossref and DataCite. Crossref is one of those registration agencies, and they do DOIs for scholarly and professional research content: journal articles, books, conference proceedings, reference linking and searchable metadata databases. So go and have a look at Crossref; there's really interesting stuff there, and they do a lot of work. DataCite is the other one. DataCite is also a not-for-profit, and it works on research data and other research outputs. The two organisations work together to make sure that DOIs cover everything, but they do different parts of the everything. So how does the ARDC's DOI service work? We provide two means of interface: one is a web interface and one is an API. Our DOIs come through DataCite in the main, and we have an allocated amount of those. You'll use the web interface to make a single DOI, or use the API plumbed into an appropriate platform, something like a repository, or some kind of software that you're working with, possibly even a repository for publications within your university; that will have the API plumbed into it. So we can mint lots and lots of DOIs for you, and you can then allocate them to whatever is appropriate. So that's how it works and what you can use them for, and if you have any more questions, please feel free to contact me. Awesome. Thank you so much, Siobhan. All right. Well, that concludes our panellist mini-session this morning, or this afternoon, wherever you are. Now I'd like to open it up for a Q&A discussion, so can I invite our panel to turn their microphones and video cameras on. I'd also like to welcome Richard Ferris from Victoria, who is also on hand to talk about some of the intricacies of DOIs and repositories.
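As a sketch of what "plumbing the API into a platform" looks like, the snippet below assembles a request body in the general shape the DataCite REST API expects when minting a DOI. Treat this as illustrative: the repository ID, credentials, prefix and metadata values are all placeholders, this is not ARDC or DataCite client code, and you should check the current DataCite API documentation before relying on any field names.

```python
import json

def build_doi_payload(prefix, metadata):
    """Assemble a JSON:API-style payload for minting a DOI.
    Attribute names follow the general shape of the DataCite schema;
    all values here are placeholders."""
    return {
        "data": {
            "type": "dois",
            "attributes": {
                "prefix": prefix,  # your allocated DOI prefix
                "titles": [{"title": metadata["title"]}],
                "creators": [{"name": metadata["creator"]}],
                "publisher": metadata["publisher"],
                "publicationYear": metadata["publicationYear"],
                "types": {"resourceTypeGeneral": metadata["resourceType"]},
                "url": metadata["url"],  # landing page the DOI resolves to
            },
        }
    }

payload = build_doi_payload("10.1234", {
    "title": "Example Survey Data 2019",
    "creator": "Smith, Jane",
    "publisher": "Example University",
    "publicationYear": 2019,
    "resourceType": "Dataset",
    "url": "https://repository.example.edu/datasets/42",  # hypothetical
})

# A real integration would POST this to the DataCite REST API with the
# repository's credentials, along the lines of:
#   requests.post("https://api.datacite.org/dois",
#                 json=payload, auth=(REPOSITORY_ID, PASSWORD))
print(json.dumps(payload, indent=2))
```

The key design point is the `url` attribute: minting registers the identifier and points it at a landing page, and that pointer can be updated later without changing the DOI.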
What I'd like to do is go through some of the questions people have popped into the question box so far and put them to our illustrious panel. If you have more, please put them into the questions or chat sections; I'll try to keep ahead of both. So let's see. There was a question about whether there's a preferred hashtag for the Twitter discussion, as you may have seen in our previous webinars. The best hashtags to use are ARDC Training or FAIR101. There was also a question about links to the metadata standards Keith was talking about. We'll make these slides available so you can follow up on those links later in the week; hopefully we'll have them on the Slack and on the course materials website quite promptly. There's a comment on the Riley visualization of metadata standards: it's amazing, but most of the metadata schemas and standards in it are applicable not to describing data sets but to other digital objects. Would anyone like to comment on this? I think that's true to a certain extent, although it also depends a little on your definition of data. For example, the wide array of materials produced in the humanities does require quite a wide array of underlying metadata standards; think of moving images, audio recordings, artifacts, etc. They often have quite different metadata standards and fields associated with them. So yes, the Riley visualization does contain a huge range of different metadata standards, and some of them may not be directly applicable to research data, but if you take quite a broad view of research data, you might find there's more included than you would expect at first. Awesome. Thanks, Keith. There's another one that's perhaps more of a comment than a question, but that's the order they've come in, so I'm going to go with that.
I love Natasha Simons' comparison of the FAIR data principles to IKEA instructions. I have one for metadata: they are like toothbrushes. Everybody thinks they're a very good idea, but everyone wants to use their own. And I'm updating it for pandemic times: metadata are like masks. Which is very lovely; thank you very much for that comment. For those who don't know the IKEA analogy, I said that the FAIR data principles are like opening an IKEA flat pack. You read the instructions and you think it's going to be easy, but once you start, it's actually a lot harder, takes a lot longer and is much more complicated than you originally thought. But I've come to think of FAIR as a continuum rather than a little checklist of instructions to follow, because there are different degrees to which you can implement FAIR. Conceptually, I think it's good to think of it that way. All right, thank you. Can you talk a little more about how the language is translated between different collections to be findable by researchers from different disciplines? Would anyone like to take that one on? Yep, happy to talk about that a little bit. Different discipline repositories and systems often use different discipline metadata standards, and nowadays there are machine-actionable crosswalks between the metadata standards. A crosswalk basically says: okay, in ISO 19115 this field is described like this, and that translates to this field in a more generic metadata standard like schema.org or RIF-CS. It sounds dead easy; if it were really that easy, it would just be one-to-one. In practice it can be a little more complicated, because one field can map to several fields in a different metadata standard, or it might need conversion, etc.
But the basic principle is that there's an XSLT between two metadata standards that allows a translation from one to the other. We've done a series of those to enable harvesting from a number of repositories into Research Data Australia, and those crosswalks are publicly available if you're interested. And if you need a crosswalk, please get in touch; happy to have a chat to see if there's something already available. Awesome. Thanks, Keith. So we've got quite a few questions now coming in about DOIs, and some about ranking in Google Dataset Search, so let me move towards those. The first question I have is: say multiple organisations have ownership of an asset. How would a DOI be created? Which organisation would be represented in the DOI? Perhaps I can comment on that, Liz. Projects that span multiple institutions are complicated around managing the data. One of the things we recommend is that at the start of those kinds of projects, the parties should make clear, particularly around the data, who has responsibility for what. You don't want, say, five universities working on a project together and all of them minting a DOI for the data set. They should agree as a group who will take responsibility for that activity and who will be curating that data in the long term. It's likely one institution would hold the data and then make it available for everybody in the partnership. So they really need to agree on who's going to take responsibility, and there should be some kind of contract that outlines the roles and responsibilities of each party. I'm just going to draw that out slightly more.
That is, it's ideal to have just one DOI for that data collection, even if it's referenced across multiple institutions, and for it to resolve to the one place belonging to whoever has been decided to be the home caretaker or the lead institution. I'm just going to skip over to a very recent question about distributed data sets. In that case, could one DOI be used, even though there may be sub-data sets within it? Is a hierarchy required? I don't think a hierarchy is required, but you can decide the level of granularity you want. Really, think about it this way: DOIs are there to cite data. So at what level is your data best cited? At the collection level, the whole data set, or at different subsets of the data? If it's both of those things, then you can legitimately have different DOIs for each of them. They all have to resolve to a landing page. You could potentially nest them, but each has to resolve to a landing page that describes that thing, so that when someone cites it, it goes back to that data set. So have a think about the granularity and the relationships that would exist. You can make relationships in the metadata of the DOIs to say this data is related to that data, and in Research Data Australia you can do that nested collections thing as well. Great. And I'll add just a little bit more about working data. DOIs in particular are often used for end-output data: a data set which is attached to a publication, for example, will have a DOI. Often, though, there's working data underneath that which people are still using for their research, and it's changing so much that DOIs aren't appropriate for it. So in a way it's horses for courses.
Is this something that's finished and whole as a product, which people need to be able to access for citation? Or is this working data that I'm still currently using? Something like a Handle will be more appropriate for continually evolving data, also because it has less compulsory metadata attached to it. Those six fields for a DOI, while they're not a lot, sometimes can't be satisfied, so you'll need to accept that you can't put a DOI on that data set yet; it's still working data. Yeah, thank you. What prevents duplication in this? So, thinking about making a choice about when to put a DOI on something: for example, if a DOI is minted for an output by a repository, but that same output is also given to and launched by a different publishing platform, which in turn mints a DOI. I'm just going to add to this question: how do you sort out that kind of spaghetti? You don't. I wrestle with this too. Sorry guys, there's no easy solution to that. If you think about ResearchGate, where a lot of researchers put their data, the issue is you can't stop that. Sorry. It's just going to happen. Ideally we don't want that, because, again, it comes back to citation: you want to be able to link back to that one data set. But it's really difficult; you can't really prevent that proliferation. I think at the start of a research project, if you're careful and you work out 'this is where the home of the data is going to be, and we're going to have one DOI for it, and that's the one that's going to link back to the article', then that's the ideal situation. So it's about being careful and checking and being a responsible citizen like that. But you can't really stop it in reality. Right. Maybe I could add to that a little bit. Would a project want multiple DOIs for its data or a single one? We prefer a single DOI for a data set, because then the citations can collect around that DOI.
If a project has lots of DOIs for the same thing, then those citations are all scattered across all those different DOIs, and it's more work to bring them all together. So our strong preference is that one DOI is minted for a single data set. One more thing on top of that is this culture of ownership of data. The idea is that if one person owns the data, then they can put a DOI on it, which is problematic because realistically in research it's not normally one person doing the research. It's fairly rare for large data sets to have one researcher working on them. So I would like to throw a little grit in there and ask: why are we continuing with this model of "you own it, so you can put a DOI on it"? Perhaps it's more appropriate for the institution or the lab or whatever to have ownership over that data, and therefore it's theirs to put the DOI on. That's very interesting, Siobhan. I'm going to jump over to a question about looking at the personal level, or, you know, at somebody's CV. The question reads: it is much more beneficial for me if people cite my papers rather than my data. Therefore, when posting my data online, I ask users to cite the paper that discusses the data collection slash analysis method rather than the data DOI itself. People will cite both the data DOI and the paper if I ask them. So what is your proposal? When will data citations be equivalent to article citations when it comes to my own CV? And what should I do in the meantime to ensure data sharing is most beneficial to my career? I think that's really, really common. Sorry, I'm going to jump in and be a hog again. But yeah, that is actually the challenge of data citation: it is cited in really different ways. Citing the article instead of the data is really, really common.
And then the article itself can contain links to the data, or the metadata record that the journal creates can contain links to the data, especially if it's got an integration with a repository that provides that DOI. So there are more and more integrations happening between publishers of articles and data repositories, so you can say here is the article and here is the underlying data. But it does make tracking the citation count of data very, very tricky. Sometimes it's cited in text, and often the data doesn't have a DOI; it's just a link. It can be a link to a website. I'm sorry to say that a lot of data is just dumped on a website, not in a repository, and a lot of it is in the supplementary sections of journals. Practices vary a lot, but we are moving towards a world where we are improving that practice. It's gradual; it's not like you say, here are the Fair Data principles, and magic happens. It's going to take a little while to get there. So I think the practice you're describing in that example does reflect general practice at the moment; I don't think there's anything wrong with that. But I think we are moving towards better guidelines from publishers about how to cite data, when to do it, and how to link it through the metadata records. I don't know if anyone else wants to add to that. I'm sorry, I feel that I probably need to wrap this up at this stage, because I'm aware we've gone over time. I do appreciate that people have only prepared for a 45-minute webinar and have quizzes and activities to fit into their busy schedules as well, as much as I would love to keep talking about this and answer all of the questions. So I'm feeling quite an internal conflict as well.
We will undertake, if your question hasn't been answered, to answer it and put those Q&As up on the Slack channel, with a link on the course materials website as well, by the end of this week. So I apologise if we haven't been able to get to your question just yet, but we'll respond to all of those in the fullness of this week, if I can say that. Thank you very much, Natasha and Keith and Siobhan and Richard, for being here with us today and answering all these questions. And thank you, everybody in the course who's joined us, for giving this a red-hot go, and I look forward to seeing you on the Slack. There'll be a little post-webinar survey that appears as soon as this webinar ends. We're really interested in how this is going for you, and it's very short, so please fill it out and let us know how you're going. And I encourage you to jump onto the Slack, dig a bit deeper into some of these discussions, introduce yourself and get to meet other people here. So I'm looking forward to seeing you all next week. Thanks very much, everyone. Thanks for letting us talk about the things we love most.