So today the topic is "Hear from the experts: sensitive data can be published". This webinar follows on from the launch of the new ANDS Guide to Publishing and Sharing Sensitive Data. A couple of weeks ago, of course, we had the first webinar on this topic, which introduced the guide. Today we're hearing from two advisors to the guide, who offered their expertise in research ethics and in legal matters, and these, of course, are the two topics on which we receive the most queries, comments and concerns. So we're very lucky today to have our two speakers. First up we've got Professor Michael Martin, who I'll hand over to in just a minute. Michael is a professor of statistics at the Centre for Higher Education, Learning and Teaching at the Australian National University. He's also the chair of two Human Research Ethics Committees at the ANU: the Humanities and Social Sciences Delegated Ethics Research Committee and the Science and Medical Delegated Ethics Research Committee. Michael's going to guide us through the ethical questions that arise in sharing sensitive human data: what we need to consider, and how to plan ahead if you're thinking of doing research in this area. After we hear from Michael, we're going to hear from Baden Appleyard, who's a barrister and lawyer and the national programme director of AusGOAL, which is, of course, the Australian Governments Open Access and Licensing Framework. Baden will share his expertise on the legal aspects of sensitive data publication, in particular the Privacy Act, and on licensing your data before publication and sharing.

Great, thanks very much, Sarah. One of the things that people might assume, and it isn't true, is that sensitive data can't be shared. I guess this document that ANDS has produced is a bit of an eye-opener for a lot of people; I think there's this feeling around research that the words "sensitive" and "sharing" can't go together. In fact, a lot of the questions that surround ethics arise in the context of a document produced by the NHMRC, the National Statement on Ethical Conduct in Human Research. It's been around for a few years, and it's the set of guidelines under which all of the ethics committees at universities and other research bodies operate, so we're principally guided by statements made there. And there's a statement in there that I think is actually very encouraging of sharing data; it almost has me feeling that it's unethical not to share your data. Gathering data imposes a burden on participants, and burden equals risk to some extent. As long as you can minimise the risk of that sharing, then sharing is encouraged, because it means that someone else won't have to go out and gather data from exactly the same people in exactly the same way as you have. It's a way of spreading out that risk for those people. So the National Statement basically wants you to reuse data and to make it available for others, provided you cover off on the risk factors for your participants, and you need to be able to facilitate this.

Now, underlying all of this are some judgments about what the ethics clearance process is about, so I thought it might be useful to start with what I do when I sit on one of these ethics committees. As Sarah said, I'm the chair of both of the Delegated Ethics Research Committees at ANU, and these are what other universities might call the low-risk committees. So there's low-risk research and there's high-risk research.
Well, many researchers tell me that they do no-risk research; no-risk research practically doesn't exist when you've got human subjects. But nevertheless, the low-risk committee gets to see easily 80% of the ethics applications at the university. And what we do is the same thing the high-risk committee does: we sit and we think about what risks are involved for participants, and we think about whether or not those risks have been properly identified by researchers. Then we think about whether or not the researchers have done enough to inform participants about those risks. Now, these aren't issues that are peculiar to data sharing, of course, but the data doesn't come from nowhere, and so for it to be born, I guess, this process needs to have been done. And of course, it gives you an opportunity at the get-go to let people know what you might be doing with the data in the future, and that's a really important component of what we're going to talk about.

The other aspect that people don't necessarily think about is that some people try to get around these sorts of issues by being very vague in information sheets. As an ethics committee, we don't really like that, because ultimately, if you're vague about something, you come across as being evasive, and people don't tend to trust you quite so much. So when researchers say to potential participants, "I'm just going to use your data to finish my thesis, or publish a paper, and to do future research," I think it's important that researchers are mindful to be as specific as they can about what that future research might entail. In particular, they ought to say whether it's only for their own use, and they ought to say whether they might pass the data on to other people. In the same sense that the National Statement on Ethical Conduct says that sharing data is a good thing, I also think that researchers who intend to pass on their data ought to be upfront about that. I don't think you have to be upfront in order to pass it on, and we'll talk about why you can protect the data anyway. But the more upfront you are, the more you can put your hand on your heart afterwards and say, "I've been a good ethical researcher."

When you gather data, you also have to think of what the National Statement refers to as the principle of beneficence: the idea that research carries risk, but research also has benefits, and there's a long tradition in the conduct of ethical research that the benefits should outweigh the risks. Now, what's not stated in most codes of ethics is who makes that judgment. If you leave that judgment solely in the hands of researchers, you end up with one view of risk versus benefit. If you leave it in the hands of ethics committees, you get another view. If you leave it in the hands of the participants, you probably get a third view, although ethics committees would probably see ourselves as the guardians of participants, so our view should be closer to theirs. I have a lot of researchers put in proposals that say there is no risk. And I almost always, I'm going to say always, go back and say, "Well, I can think of five; have another go." Researchers then tend to come up with, usually, the five that I've come up with, because when they say there's no risk, what they're really saying is, "There's no risk I want to tell you about, because you're an ethics committee."
Unfortunately, being on an ethics committee means I can think of risks, and generally do so pretty readily, so it doesn't work as a strategy. The process of gathering the data is, of course, the first step in sharing it. And for that, the more consent you can put around what you'll do with the data, the better, because once you've got a signature on a piece of paper that says, "I'm happy for you to do this," then the barriers to you doing it are effectively gone. Having done that, you then need to maintain the rage, and make sure that as you go about the research, when risks arise, whether they're things you've anticipated or things that you haven't, you work to mitigate those risks. For example, you're often collecting data from vulnerable populations, and you know that the data might cause them to have someone knock on their door later with bad news, or some requirement that's suddenly upon them because they've participated in your research. You've got to try to do something to make that risk less likely, if you can. You can't always, hence "if you can". In conducting research in other countries, for example, you don't necessarily have a whole lot of control over what security forces might do once you've left. The one thing you'd be sure of is that they probably won't be talking to you, but they will be talking to your participants, and you ought to have thought about that beforehand.

That partly gets to the question of how data becomes sensitive. In the ANDS document, there's a great list of personal details that might cause your data to be regarded as sensitive. Researchers need to think through that list not just from the perspective of doing domestic research: once you head overseas, even very, very small things that we'd have no problem giving out to anybody on the street, you probably wouldn't want made any more available. So those lists are definitely contextual.

So that's the baseline. We talk about best practice, and it's a good idea to talk about best practice, but we recognise that best practice lives in a fantasy world, and the real world is where the research is actually done. Having said that, you can put in place measures that make your work likely to adhere to best practice. It's not necessary to have explicit consent for data reuse, but it sure helps, because it simply removes the question about whether you're doing something you're allowed to do: you've got the permission slip right there. There is this issue, and again, this might not always be possible, but if you do have a future use for the data, even though you might not be able to specify it at the time the research is done, that future use might introduce risks that you didn't think of before. This particularly happens when you're sampling in small populations, which is a case I'll talk about later. And if it is possible to go back and get ongoing consent, that's something researchers should do. Now, we live in a world where the data might have got to you in a way that means you can't do that, because you don't know who the participants are, and of course that's going to be the case sometimes. But nevertheless, it's something that every researcher should think about. One of the great things about being an academic is you often think about things that you can't ever do.
And, you know, as soon as you rule that practice out, I think you rule out some of the best and most creative parts of humanity. Having a good imagination is always a good thing. The question of whether data is sensitive has been answered in the ANDS document, which tends to define sensitivity around two things. One is features of the data, features that are likely to make it sensitive; these generally get to things like identifying details. But you'd be surprised what could be an identifying detail, and it's not always the obvious thing: you don't have to know my name to know who I am. More practically, I think what makes data sensitive is the harm that might be done if it got out in the wild in the form you collected it. So those are the two features of sensitivity that most concern us in this ethical space: what are the features of the data, and what's the risk profile? I don't think you can ever really separate features from risk when you're talking about ethics.

And if the data is sensitive, you can't just give it away; you have to do something to it before you ever let it out of your hands. One of the things we get people to do in their ethics protocols is articulate a data storage strategy. That strategy has to involve secure storage where access is very tightly controlled and everyone with access is known. Whoever's going to have access to the data has to be advised to participants before you gather it, so that when they give consent, they're giving consent under full information. That's why the data is sensitive. But of course you can make data less sensitive, okay? So with any sensitive data, the best thing to do is to ask first; if you haven't asked, or in any event, you should think about desensitising the data. There are lots of words for desensitising, and I like that the ANDS document displays them all. When I first took on the job as an ethics chair, I used to occasionally use the word "anonymous", feeling that I understood what it meant. And I don't, or I didn't; I do now. A lot of researchers tell us that their data is anonymous even though they were sitting opposite the person they collected it from and therefore know who that person is. So anonymity is one of these in-the-eye-of-the-beholder things. If you know the person from whom you're collecting the data, then it's not anonymous. You might say in response, "Well, I'm not going to use their name." And the answer is, well, you might be standing in a court of law where you have to, in which case you're not going to be able to tell the judge it's anonymous, because the judge will tell you that you're in contempt. Of course, we have a legal expert here who will try to reverse that. I agree with that. So "anonymous", in any practical sense, only exists where you never, ever have any access to anything that could identify the participant. And oftentimes, if you didn't ask for consent first, and you didn't get an explicit decline of consent, you still need to go and make sure that the identity of the respondent is appropriately removed, because at the end of the day, one of the sorts of risks that always exists is unintended risk. You may not intend it, but you can still anticipate it, and part of that anticipation is to remove the identifying information. Before I talk about confidentialising things, I want to say one thing that I think is very important, as a bit of a headline, around the use of shared data that's been desensitised.
And that is: it really does change the beneficence equation. When most research is initially done, you're going out to gather data, there is risk to your participants, but you've got to be able to put your hand on your heart and say the benefits of my research are such that this risk is appropriate. That's a central plank of decisions around ethical conduct. Once the data has been desensitised, what you've essentially done is break the link between participant and data, and therefore it will no longer be the case, under any conditions, that someone can go and knock on the participant's door and say, "I know what you did last summer," because that link is just no longer present. So the risk essentially disappears. And that's a really massive benefit when you come to the beneficence question, because it's all benefit and no risk. This is one of the reasons why the National Statement says sharing data is a good thing: the National Statement is all about balancing risk against benefit, and once the risk is gone, it's very easy to do ethical research. That's something that I think comes through very clearly in the ANDS document, but I wanted to make it clear here again.

In terms of how you confidentialise data, this is covered beautifully in the document, but I thought it was worth talking about. A lot of people think that anonymising data, and that's the word that's often used, is all about removing names. I guess from a strict dictionary definition, that's what anonymising means. But it's not good enough to just remove names if you leave in other trails that take you right back to the same person. So the ANDS document points out the notion of what are called direct identifiers, and all of us know what they are: name, address, telephone number, email. A lot of researchers collect data in photographic, audio or video form, and one of the things that, as chair of this committee, I always push back on is: why? Because photo, audio and video basically identify you without too much effort at all. I'm one of those people who, when I watch A Current Affair, tries to de-pixelate the faces in my head to work out who's saying that. And sometimes it's not that hard when you put it together with information you can Google very easily. So at the end of the day, there are things that'll give you away no matter what. A lot of researchers propose using initials in the field to confidentialise their data while it's in their briefcase. We almost always say you can do better than initials, because everybody knows who BZ is. And that's a problem: it doesn't take a lot to put stuff together.

I'm going to give you some information about myself here freely, some of which I'd be proud for you to know, other bits less so; you can guess which is which. Indirect identifiers are things that don't immediately tell you who I am, but it doesn't take too many of them to work out who I am and build a case. So you might know what I do: I'm a statistician. You might know I'm very tall. Once you also know I was born in Toowoomba, which is a reasonably large town, I'm guessing there are probably not too many people there who satisfy the other two things. Suddenly you're zeroing in on who I am. So it will quite often be a feature of desensitised data that my name will be missing, my address will be missing, my phone number will be missing, and my email address will be missing.
But the place I was born might not be, and how tall I am might not be, and my occupation might not be. Any two of those three pieces might not give me away, but all three definitely do. So there has to be some thought about how much indirect information might betray me. Again, this is a feature of how the data gets cleaned before it gets shared. But one of the things you must do when you get data that's been desensitised is to be aware that the person desensitising it might not have removed enough. I'm sure here at ANDS there'll be no problem, but there are a lot of people around, I guess, who desensitise data. It's like going to the dentist, right? They try to desensitise your mouth, and it's not always successful, and they'll be digging around in there and they'll say, "Do you feel this?", and unfortunately you've got so much in your mouth you can't say yes, and so they continue to hurt you. So desensitising is one of these things that, again, is a bit in the eye of the beholder. You're in good hands here, which is good news, right?

Okay, so what makes sensitive data sensitive? What makes it risky? A few things. And sometimes best efforts don't always get you where you want to go. So there's inherent risk. When I say risk, I don't necessarily mean someone dies; risk, again, is one of these things to which people have differential tolerance. There are some people, and I'm probably one of them, who, when called up on the telephone by a party who hasn't identified themselves adequately and asked questions, are very quick to say, "I'm not going to tell you that," even if it's "What's your favourite ice cream?", because I want to know why they want to know. So there's low-risk stuff. And then there's very high-risk stuff, and this can happen in overseas countries where you might be doing research on something like insurgencies, and such research is being done at the ANU. If other parties got hold of the identities of your participants, then death is a possible result, and possibly even a likely result. So risk can go from negligible to the very, very extreme.

And don't be surprised at how willing others are to guess identities. It seems to be part of human nature: give me enough hints and I can work it out. Academics are well aware of this. We get our papers refereed by people who are anonymous, but most of us can work that out, and we have a short list of names: you know, who did me in. This is what's called third-party identification, and it's particularly problematic in small populations. So even if the data's been desensitised, there is a possibility that it raises risks for participants. That's why it's better, if you can, to get explicit consent around reuse beforehand, because even though you can use the data in the absence of that consent, you wouldn't want to inadvertently cause someone pain of any kind through your use of that data. So even when using desensitised data, I'd encourage people to be sensitive about their use, always. Ideally, when data's confidentialised, risk is reduced; hopefully it's gone. Generally the data remains useful, and that's, I guess, the big-ticket item here: someone won't have to go out and do brand-new data gathering. But do remember that confidentialising isn't just a matter of going into Excel and removing column A because that's where the names are, and there's your data.
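To make the indirect-identifier point concrete, here's a minimal sketch, in Python with pandas, of the kind of check a data custodian might run before sharing a table: drop the direct identifiers, then look for combinations of indirect identifiers that narrow down to very few people. The file name, column names, and the threshold of five are hypothetical illustrations, not anything prescribed by the ANDS guide or the National Statement.

```python
import pandas as pd

# Hypothetical identifier lists for a survey table; adjust to the data at hand.
DIRECT_IDENTIFIERS = ["name", "address", "phone", "email"]
INDIRECT_IDENTIFIERS = ["occupation", "height_band", "birthplace"]

def drop_direct_identifiers(df: pd.DataFrame) -> pd.DataFrame:
    """Remove direct identifiers (name, address, phone, email) before sharing."""
    return df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])

def smallest_group_size(df: pd.DataFrame) -> int:
    """Size of the smallest group of respondents sharing the same combination
    of indirect identifiers. A value of 1 means someone is unique on those
    fields alone: statistician + very tall + born in Toowoomba."""
    quasi = [c for c in INDIRECT_IDENTIFIERS if c in df.columns]
    return int(df.groupby(quasi).size().min())

shared = drop_direct_identifiers(pd.read_csv("survey_responses.csv"))  # hypothetical file
k = smallest_group_size(shared)
if k < 5:  # the threshold is a judgment call, not a rule
    print(f"Smallest indirect-identifier group has {k} member(s): rework before sharing")
```

A check like this only flags the problem; deciding which columns to coarsen or remove is still a human judgment.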
And I call this notion of putting information together to find out who you are a "jigsaw attack", and people are adept at doing it. Generally, the people you don't want doing it are exactly the people who will be doing it: security organisations, for example.

Okay, so from the ethics committee standpoint, here's how you can be ethical. Be aware of what participants were told. You might not be able to be; if you're a recipient of shared data, you might not get much information about the participants. But if it is possible to pass on that the consent was there, it can ease your life as you try to push your ethics proposal through a committee, okay? But also be aware that it is sometimes the case that participants are told, "We will not reuse or share your data." Once that's been signed off on by participants, the data is not shareable, no matter what you do to it. The sort of thing this might happen with is genetic-type data, which is often collected these days. One of the things that comes up on ethics committees is: what are the risks? Because we don't necessarily know them. There's always this spectre that genetic data will prevent you getting a job in the future, if it's found out that you have a particular genetic feature that's not presently thought to be a defect. It's one of these funny things. So quite often in research of that type, participants are explicitly told: we're gathering your data, we're going to use it for this purpose, we have to keep it for seven years under guidelines, but then we are going to destroy it, and we will not under any circumstances give it to anybody else. And that binds its future use. So that's important. Again, by the time you get the desensitised data, hopefully it will have satisfied all of the things that people were told it would: it will only have been accessed by the people they said it would be, stored appropriately, and deleted if you said it would be.

And of course, this is one of those funny things that pops up on information sheets I see pretty much every day. The information sheet says, "I'm going to use your data to write a paper, give a conference presentation, and for future research," and then later on it says, "I'm going to keep your data for five years after publication and then destroy it." I quite often wonder how those two coexist, because it certainly puts a clamp on your future research if, after five years, the data is gone. What the National Statement requires, by the way, is not that you destroy your data, only that you keep it for at least five years; it doesn't say what you should do with it after that. A lot of people just think, "I'm going to destroy it," and that's a clamp on reuse. So you should consider, right at the get-go: ask "Can I reuse it?", don't say you'll destroy it, and then work with a party like ANDS to have it desensitised so it can be reused, which is consistent with what the National Statement wants you to do.

Sarah mentioned this as a particular question that comes up all the time, and I can see why it does, because best practice isn't always followed, and best practice is to ask first, like your mother always told you, but a lot of people don't. So what happens if nobody asked? You didn't say "Can I?", you didn't say "Can't I?", and your information sheet is just completely silent about what future use of the data might involve. The answer to that question is really simple, luckily.
If you didn't ask, it's still okay, and you don't have to go back and ask, provided you don't just spray the information around un-desensitised, which is not a word, but we're going to create it for the purposes of this webinar. So in that circumstance, it is perfectly okay for the data to be shared. You have to confidentialise it, so you have to take out the bits that are going to cause harm, and then you just have to do a reasonable check that there's no residual harm that could be done. The sort of residual harm that could be done features here: this is the sharp end of the risk stick. If you're in a small population, no matter what you do, you can pretty much get some identification going. Indigenous populations are the best example I can think of: they're oversampled ridiculously in Australia, so a lot is known about them and a lot is asked about them all the time, and it's usually pretty simple within such a population to work out exactly who's being talked about. You go into a small Indigenous community and you ask about unplanned pregnancies, and these are the sorts of questions that get asked; you might be asking about one woman whom everybody knows, and you can take off pretty much whatever information you want, and everybody still knows who's being talked about. Now, the Australian Bureau of Statistics knows this, and so when they're asked to give out data on Indigenous populations, they actually go in there and deliberately alter and make up stuff, quite legally, so that the information that gets out can't be traced back to that one person. So Indigenous populations are a very difficult area, but equally the same is true of rare conditions, and here I'm thinking of, say, rare genetic or medical conditions where there might only be three people in Australia who have the condition, and therefore you are identifying them if you say anything about them. And there are other things like unusual or memorable features. Although they sit on a slide, which makes them look like small problems, a lot of research is done on small things, and as a result we need to be aware of that as we go forward.

And finally, from me, the main message, and this is something that every researcher should do, whether they're intending to share their data or not, and whether they're using shared data or not: participant first. That is the central message of every ethics committee. The people who are doing your bidding and giving you data don't have to, and are doing it out of the goodness of their hearts. We ought to repay them by thinking about what we're doing to them in the process. This involves always doing a worst-case analysis: think of the worst thing that might happen to your participant when you're gathering your data. You know damn well it won't happen, but you ought to have thought about it, okay? And then the third thing, which I think is the central message that ANDS is delivering, is that data reuse is what we need to be doing. It actually makes our data so much more valuable than just collecting it once and then summarily destroying it. So if you can seek consent to do that, it's a great idea, but even if you can't, don't give up: you can still reuse the data.

Thank you so much, Michael, for those insights into the workings of the ethics committees that many of us submit our application forms to. We're now going to hear from our next expert, Baden Appleyard from AusGOAL, and Baden's going to discuss the legal aspects of sharing and publishing sensitive data, around privacy law and licensing.
What I might do is just talk about where ethics and privacy law meet, and perhaps where they might depart, because they are two different beasts, though they seem to occupy a very similar space in people's minds. Firstly, I might start at the beginning and say that the privacy laws in this country were never couched to the community on the proposition that they are only about locking information up. Their policy position is to strike a balance between what should be released and what shouldn't be released. They're not intended to be interpreted as "everything will be locked up, or else". That said, I can see why people come to that conclusion, because that's often how the acts are constructed. In particular, going around government agencies I find many government people are very, very familiar with the locking-down part of the act, but they never seem to get to the exceptions part at the rear of the act, which actually says you can release personal information in certain circumstances; some of those circumstances you've touched on, and others are a little more obscure.

Privacy law is very much a default law. There are other laws that govern how we deal with what might be regarded as personal information. I'm thinking, as I mentioned earlier, of the Aboriginal situation. Most jurisdictions have something equivalent or akin to an Aboriginal heritage act, which places particular security around government databases of culturally sensitive sites and locations. And indeed in Queensland, for example, there are very few laws that trump the Right to Information Act, which is the new FOI law, but a particular section of the Aboriginal Cultural Heritage Act, governing who can access that information, does, on the basis that release is usually not in the public interest. So the Aboriginal Cultural Heritage Act, or the Aboriginal Heritage Act '82 I think it is, trumps the Right to Information Act and all the things attaching to it.

The other aspect to this is that, depending on where you are in the country, different privacy laws may apply to you. So not only do you have the issue of special laws about special kinds of information, but the privacy laws around the country are not identical. They're similar, but not identical. So, for example, the Commonwealth Privacy Act covers, in a research case, ANU. It doesn't necessarily cover Bond University, depending on what they're engaged in; it would probably be the Information Privacy Act in Queensland that binds those organisations in Queensland. So different jurisdictions will have different laws. They'll generally refer to privacy principles, and those will generally be the same, but they're not identical. The other thing, too, is that businesses can be captured by privacy law, but many businesses are not. If you're working in a private institute and you're not doing health and medical research, it may be that the Privacy Act doesn't apply to you: if, for example, the turnover is under $3 million, you're not captured within those particular provisions, and you're not, strictly speaking, an agency within the meaning of the Privacy Act, which may or may not include a university. The other important thing to note is that privacy law only deals with personal information.
So I frequently find in government situations that information is collected from the community, and it may even be from an individual, but it's an individual representing a company. A company can't have personal information within the meaning of the Privacy Act, so where you're collecting information from a company, it won't necessarily be captured. Another thing is that the act only affects records. If, for example, you're in an office and you've got a database full of records, but you've also got a telephone book beside you that contains everybody's phone numbers, the phone book is clearly not going to be subject to any privacy arrangement, unless that information exists within a record that you happen to hold and that record is subject to privacy scrutiny. Of course, as you pointed out with the ethics clearance documentation, privacy rights can be modified by agreement. I think everybody here has at some point consented to having their privacy, or their strict rights, ameliorated to some extent by agreement, whether by entering into a mobile phone contract or even signing up to some website on the internet to sell you goods or services.

I'll tell you what does keep me awake at night about confidentialising or desensitising data, and that is degrading it. I'll give you an example. I did a fair bit of work with a police service in Australia, and they were looking at publishing a crime map, which carried the locations of crimes that had occurred in that particular state. The locations were taken straight off the system, which recorded exactly where each crime took place. Now, leaving aside crimes of a personal nature, domestic violence or sexual assault and those sorts of things, they're excluded; let's just talk about break-and-enters, for example. The police service decided that it would be infringing upon the privacy of those people to publish the exact location where a particular crime took place. I happen to be of the view, rightly or wrongly, and I'm happy to have the argument, that there is no personal information attaching to a location where an event actually happened. But what the police decided to do was degrade the data, so that instead of dropping the pin on the map where the crime took place, it might be 30 or 40 metres down the road. So maybe you're obscuring the personal information, if there is personal information about where the crime took place, but on the same theory you're also then casting an aspersion on somewhere else, by asserting that something happened where it didn't (there's a small sketch of this idea below). So I do get nervous when I see that; I frankly would prefer a well-reasoned debate about whether those sorts of things are personal information or not.

The other thing that I find happens in government, and I suppose it may be similar in your work in ethics, is that the documentation is often too restrictively constructed: "we will only use this for a particular point in time or a particular study", and you're effectively wiping out the benefits and opportunities of reuse of that information. But similarly, I also get a little concerned when I see things like, "I'm not going to give this to anybody; it won't leave my office." It may well leave your office. It may well leave your office if you're subpoenaed to provide it. So I would prefer to see text around those sorts of things that says what the situation actually could be.
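Purely as an illustration of the degrading technique Baden describes, and not anything AusGOAL or the police service prescribes, here is a minimal Python sketch of jittering a published crime location by up to roughly 40 metres; the coordinates and the offset are hypothetical.

```python
import math
import random

METRES_PER_DEGREE_LAT = 111_320  # rough conversion; fine for offsets this small

def degrade_location(lat: float, lon: float, max_offset_m: float = 40.0) -> tuple[float, float]:
    """Shift a point by up to max_offset_m in each direction, so the published
    pin no longer marks the exact address. Note Baden's objection: the map now
    asserts that something happened at a place where nothing did."""
    d_lat = random.uniform(-max_offset_m, max_offset_m) / METRES_PER_DEGREE_LAT
    metres_per_degree_lon = METRES_PER_DEGREE_LAT * math.cos(math.radians(lat))
    d_lon = random.uniform(-max_offset_m, max_offset_m) / metres_per_degree_lon
    return lat + d_lat, lon + d_lon

print(degrade_location(-27.4698, 153.0251))  # a hypothetical Brisbane location
```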
Can I interrupt there and just say that the most common thing I write back to ethics applicants is: please insert the phrase "to the extent permitted by law".

That's it; that was my next line. But I agree with you: similarly, being too vague, or not mentioning it at all, is probably an equal if not worse sin, absolutely. So my main message, I suppose, is to be reasonable in the circumstances. I don't think a law has ever required perfection of anybody, but it certainly does require you to be reasonable. Finally, I'll talk about licensing, and that's a feature of the ANDS document, which I agree is a fantastic document; I've already started showing it to some government people around the country. If you're going to publish your data, if it's going to go anywhere in the world, it needs to have some sort of licence on it, full stop. That's about the end of the discussion. And if it is still confidential information, then odds are it's going to be some sort of restrictive licence. Under the AusGOAL framework we have restrictive licences, and in fact a lot of our licences are restrictive, but we have a special one, for example, just for health and medical records. But primarily, in terms of opening access to publicly funded information, which includes publicly funded research information, what we would like to see is for the data to be desensitised and published on an open basis. And when I say open, I mean, in terms of licensing, a Creative Commons Attribution licence, or, if there's no copyright subsisting in the data, a Public Domain Mark, though I grant you it's more often going to be the case than not, where you have datasets dealing with individuals, that there will be copyright subsisting. I think that's about all I need to say, other than: if you want more information about Creative Commons licences or the Public Domain Mark, go to ausgoal.gov.au; we've got a special tab for research on that website as well. And as a bit of a shameless plug, we're going to have a new website very soon, February is the date, and we're going to be releasing some new tools, including a Microsoft add-in to electronically insert licence attributes into documents, both visibly and in the embedded metadata. So thanks very much.

Thank you very much.