So first of all I'd like to give a brief introduction to the A in the FAIR data principles, and the A stands for Accessible. The way FORCE11 describes the principles is that data and metadata are both retrievable by their identifier using a standardized communications protocol. That identifier is the one we talked about last week: a DOI, a Handle, a PURL, something that's persistent, and by using that DOI, Handle or PURL you should be able to get access to the data or the metadata. The protocol to get there should be open, free and universally implementable. The thing to think about there is that it's a protocol which is standardized and can be used by anybody; it's not something bespoke, home-built or badly documented. The classic example is HTTP, the normal way of accessing materials and data over the internet. It should not require some specialized, expensive software.

Another point the principles make is that the protocol should allow for an authentication and authorization procedure where necessary. A common misunderstanding is that when people read "accessible" they think that means they have to make their data open. If you actually read the principles, that's not what they're saying. What they're saying is that accessible does not actually have to mean open or free, but you are expected to give the exact conditions under which the data are accessible. So even heavily protected and private data can be made FAIR. If you implement the FAIR data principles properly, then a human being can see that the data may not be openly available, but also what steps they need to take to get access to it. And the FAIR data principles also talk about machine access to data.
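To make the identifier idea concrete, here is a minimal sketch of how a persistent identifier can be resolved with nothing but standard HTTP. The doi.org resolver genuinely supports content negotiation for metadata, but the specific DOI and metadata format below are illustrative assumptions, not part of the lecture:

```python
# Sketch: fetching machine-readable metadata for a persistent identifier
# over plain, standardized HTTP. The DOI and Accept format here are
# illustrative assumptions; any resolver works the same way.

def doi_resolution_request(doi: str) -> tuple[str, dict]:
    """Build the URL and headers for resolving a DOI to metadata.

    Resolution needs nothing bespoke: a GET on the resolver URL, with an
    Accept header asking for a metadata format instead of a landing page.
    """
    url = f"https://doi.org/{doi}"
    headers = {"Accept": "application/vnd.citationstyles.csl+json"}
    return url, headers

url, headers = doi_resolution_request("10.1000/182")
print(url)  # https://doi.org/10.1000/182
```

The point of the sketch is that nothing here is specialized or expensive: any HTTP client, human-driven or machine-driven, can make this request.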
If a machine goes hunting around looking for the data, the machine should be able to recognize that the data is not open and what steps need to be taken to actually get to it. I'll come back to that a little further on. If the user, whether human or machine, has been granted access to the data, then it should be accessible through some sort of standard authentication and authorization procedure. The last point the FAIR data principles make about being accessible is that, in the case in which data is no longer available, at least the metadata should remain accessible. This is of course not ideal, but in some cases it is necessary to take the data down: for example, if consent for use was only granted for a limited period of time, or there has been a legal takedown notice, or something along those lines that makes it impossible to keep the data available. In that case it is valuable to still keep up a metadata record describing the data and explaining that the data is no longer available.

Now, just to reinforce that accessible does not always mean open: there are clear cases in which data cannot be made openly available. The obvious example is where data refers to human beings and specific characteristics of those human beings, like information about their health, income, religion, attitudes or political persuasion. That's not the sort of information you can make publicly available. Another example, and one that's probably worth remembering, is data about threatened species. The location of threatened species can be data you do not want to make openly available, because that could mean the last few individuals of those species are hunted down or collected. A famous example is the Wollemi pine: the locations of those specific trees need to be protected.
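The "keep the metadata up" idea is sometimes called a tombstone record. Here is a minimal sketch of one; all field names, values and the contact address are illustrative assumptions rather than a formal metadata standard:

```python
# Sketch of a "tombstone" metadata record: the data itself has been taken
# down, but the metadata stays online and says so, in a way both a human
# and a machine can act on. Field names and values are assumptions.

def tombstone_record(identifier: str, title: str, reason: str) -> dict:
    """Minimal metadata kept online after the data it describes is removed."""
    return {
        "identifier": identifier,         # the persistent identifier still resolves here
        "title": title,
        "availability": "unavailable",    # machine-readable: the data is gone
        "reason": reason,                 # human-readable: why it was taken down
        "contact": "data-custodian@example.org",  # role-based address, not a person
    }

record = tombstone_record("hdl:10.99999/abc", "Household survey 2019",
                          "consent for reuse expired in 2024")
print(record["availability"])  # unavailable
```

A machine hitting this record can see from the `availability` field that the data is not open, instead of just getting a dead link.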
So finally, another example where data cannot always be made openly available is where there are commercial interests in the data. Maybe the metadata can be shared, but the data itself is sensitive because of those commercial interests, and in that case it would not be appropriate for it to be made openly available.

When considering making data accessible, we do argue to make it as accessible and as openly available as possible. A possible angle there is to provide just the metadata as a starting point: if the rest cannot be made available, at least the metadata can be. Slightly more useful, perhaps, is making the data available through mediated access. In that case it's valuable to be clear about how the user can actually get access, for example by providing an email address, name and telephone number; and if the user has to go through an ethics procedure to get access to the data, then clearly describe that procedure and what sort of information is required to apply for it. While we're on mediated access and providing contact information: one thing to keep in mind is that if you list a specific person within the organization, have a think about whether that person is ever going to leave. If it's a researcher who might move to another organization, have a fallback, some sort of mechanism such as a more general email address, so that when that data custodian leaves, somebody else can still answer the question and grant access to the data.

Another possible angle in making data accessible is creating a de-identified version of the data and making that public, as long as it's properly de-identified. That can be useful for data users to at least get a better view of what's in the data set, and for some purposes a de-identified version can be enough.
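As a very rough illustration of that last angle, here is a sketch of removing direct identifiers from a record before publishing it. This is only the first step of real de-identification, which also has to deal with indirect identifiers and re-identification risk; the field names are assumptions for the example:

```python
# A minimal sketch of de-identification: strip direct identifiers before
# publishing a record. Proper de-identification needs much more (indirect
# identifiers, aggregation, expert review); field names are assumptions.

DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

raw = {"name": "A. Respondent", "email": "a@example.org",
       "age_band": "30-39", "income_band": "medium"}
public = de_identify(raw)
print(sorted(public))  # ['age_band', 'income_band']
```

The published version keeps the analytically useful fields while the sensitive direct identifiers stay behind mediated access.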
Finally, a good point to keep in mind is that if you do want to make the data accessible, plan for this in your consent forms, because coming back afterwards and trying to get consent is not easy. Another angle worth keeping in mind, and something I've invited Jingbo to talk about in more detail, is that making data accessible can happen through various routes and various protocols. In some cases it doesn't make sense to have a large data set available for download; it can make much more sense to offer services over the data which allow users to interrogate parts of the data, pulling in just the specific subsets that answer their requests. That can be useful for a human being, but especially for a machine it can be extremely useful. One thing to keep in mind there is that you need some sort of community-agreed standards around those services.
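The "services over the data" idea can be sketched as a tiny query function that returns only the records matching a request, the way a web API endpoint would, instead of shipping the whole data set. The dataset and field names below are illustrative assumptions:

```python
# Sketch of a service over the data: users (or machines) request a subset
# matching their conditions rather than downloading everything. The data
# set and field names are illustrative assumptions.

DATASET = [
    {"station": "A01", "year": 2022, "rainfall_mm": 512},
    {"station": "A01", "year": 2023, "rainfall_mm": 498},
    {"station": "B07", "year": 2023, "rainfall_mm": 611},
]

def query(dataset: list, **conditions) -> list:
    """Return only the records matching every requested field=value pair,
    so a caller pulls a small, specific subset instead of the whole set."""
    return [row for row in dataset
            if all(row.get(k) == v for k, v in conditions.items())]

print(query(DATASET, year=2023))  # two records, one per station
```

This is exactly the pattern where community-agreed standards matter: the conditions a client may ask, and the shape of the answer, have to be agreed so that machines can use the service without bespoke code.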