What are the FAIR principles, and what is FAIR data? FAIR is a set of guiding principles that enable and increase the reuse of data by humans and machines. FAIR is an acronym that stands for Findable, Accessible, Interoperable, and Reusable. The FAIR principles originated in the life sciences but can be applied to all disciplines. They are increasingly gaining traction and are becoming a requirement of many research funders, among others. Let's have a look at what each FAIR principle means.

To enable its discovery, data should be described with rich metadata and assigned a persistent identifier, such as a digital object identifier (DOI). These metadata should be available online, in a searchable resource such as a data catalog or repository.

Metadata and/or the data themselves should be retrievable via their persistent identifier using a standard communication protocol, such as HTTP or HTTPS. This means that following the persistent identifier should take you to the metadata or data. However, keep in mind that accessible does not mean that data must be open, in the sense that there are no access restrictions. Rather, it means that if data has access conditions, these are clear to both humans and machines. Therefore, the protocol for accessing the data should allow for an authentication and authorization procedure where necessary. In addition, metadata should remain accessible even if the data themselves are no longer available.

Whenever possible, metadata and data should use recognized standards. By using formats, terms, or vocabularies that a community has agreed upon, we make sure our data is understandable by others, and we also make it possible for data to be exchanged and combined across computer systems. Interoperability also involves providing context by including references to other relevant metadata and data, for example by linking to another dataset on which your dataset is built.

Data should not only be available, but also effectively reusable.
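To make the idea of retrieving metadata via a persistent identifier and a standard protocol a little more concrete, here is a small sketch. It builds an HTTPS request to the DOI resolver and asks for a machine-readable metadata format through content negotiation; the DOI string itself is a made-up example:

```python
import urllib.request

def metadata_request(doi: str) -> urllib.request.Request:
    """Build an HTTPS request that resolves a DOI to machine-readable metadata.

    Content negotiation at the resolver: the Accept header asks for a
    DataCite-style JSON metadata record instead of the HTML landing page.
    """
    return urllib.request.Request(
        "https://doi.org/" + doi.strip(),
        headers={"Accept": "application/vnd.datacite.datacite+json"},
    )

# Hypothetical DOI, used purely for illustration.
req = metadata_request("10.1234/example.dataset")
print(req.full_url)  # https://doi.org/10.1234/example.dataset
```

Sending this request (e.g. with `urllib.request.urlopen`) is exactly what "following the persistent identifier" means for a machine: the same identifier serves both humans, who get a landing page, and software, which gets structured metadata.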
To achieve this, data should be richly described and documented in accordance with community standards. Metadata and documentation should be able to answer the W questions (who, what, when, where, why) to help others understand what we call the provenance of the data. In other words, where did the data come from, and what happened to them along the way? All of this is needed if we want others to understand the context of the data and judge how relevant and useful they are. It increases trust and the likelihood of reuse. To make data reusable, we also need to let others know what kinds of reuse are permitted, by including a clear data usage license.

Given the multiple aspects of FAIR, data is not simply either FAIR or unfair: FAIR is a spectrum. In other words, data can be FAIR to a greater or lesser extent.

So, how can you make your data FAIR? Unfortunately, there is no one-size-fits-all solution. But note that much of the work of making your research data FAIR can be addressed by depositing your data in a trusted data repository. By choosing an appropriate, trusted, and preferably domain-specific repository, you can score many points in the FAIR game. When you upload your data to a repository, you will typically need to provide metadata by filling in a form. The elements of the form comply with a specific metadata standard, so your metadata will become machine-actionable and searchable in an online resource. The repository should also generate a persistent identifier for your data, and it will provide the possibility to include references to other data or metadata, for example to link to related datasets or to your ORCID. In addition, trusted repositories will have authentication and authorization procedures in place to make sure that appropriate access conditions for the data are respected or enforced. And repositories also allow you to choose from machine-readable licenses, enhancing the reusability of your data.
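Here is a minimal sketch of what such a machine-actionable metadata record might look like once the repository form is filled in. The field names loosely follow DataCite-style metadata, and every value is a made-up example; the completeness check mirrors the kind of validation a repository performs:

```python
# Minimal metadata record; field names loosely follow DataCite-style
# metadata, and every value here is a hypothetical example.
record = {
    "identifier": "10.1234/example.dataset",         # persistent identifier (DOI)
    "title": "Example survey dataset",
    "creators": [{"name": "Doe, Jane",
                  "orcid": "0000-0000-0000-0000"}],  # link to the researcher
    "license": "CC-BY-4.0",                          # machine-readable license id
    "relatedIdentifiers": ["10.1234/source.dataset"],  # dataset it builds on
}

# FAIR-relevant elements a repository form would typically require.
REQUIRED = ("identifier", "title", "creators", "license")

def missing_fields(rec: dict) -> list[str]:
    """Return the required fields that are absent or empty."""
    return [f for f in REQUIRED if not rec.get(f)]

print(missing_fields(record))  # -> []
```

Because the license is given as a standard identifier rather than free text, and the related dataset and ORCID are given as identifiers rather than names, software can act on this record without human interpretation.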
Domain-specific repositories tend to make use of discipline standards and controlled vocabularies, increasing the interoperability of your data. Data repositories are indeed a key infrastructure enabling FAIR data. However, they won't do all the work for you. After all, you are the one who knows the data best, so you are still responsible for providing rich metadata and documentation to make the data understandable. Besides, if a discipline repository requires the data to be in a certain standard format and to use controlled vocabularies, the standardization process is still your job. Therefore, the sooner data are collected and managed in a FAIR way, the easier it will be to keep the data FAIR in the end. This is sometimes referred to as making data FAIR by design. That is why planning for data management, even before you start collecting data, is essential. So, are you ready to make your data FAIR?
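Before you dive in, here is a small, hypothetical sketch of the standardization work mentioned above: mapping free-text entries onto an agreed-upon controlled vocabulary. The vocabulary and synonyms are invented for illustration:

```python
# Invented controlled vocabulary: canonical term -> accepted free-text synonyms.
VOCAB = {
    "homo sapiens": {"human", "h. sapiens", "homo sapiens"},
    "mus musculus": {"mouse", "m. musculus", "mus musculus"},
}

def normalize(value: str) -> str:
    """Map a free-text entry onto its controlled term, if one matches."""
    cleaned = value.strip().lower()
    for term, synonyms in VOCAB.items():
        if cleaned in synonyms:
            return term
    raise ValueError(f"no controlled term for {value!r}")

print(normalize("Human"))  # -> homo sapiens
print(normalize("Mouse"))  # -> mus musculus
```

Doing this while the data are being collected, rather than just before deposit, is what FAIR by design looks like in practice.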