Just to introduce myself, my name's Keith Russell. I work for the Australian National Data Service, and I'm your host for today. My colleague Suzanne Sabine is behind the scenes, co-hosting the webinar with me. Just a little bit of background: the Australian National Data Service works with research organisations around Australia to establish trusted partnerships, reliable services and enhanced capability in the research sector. We work together with two other NCRIS-funded projects, RDS (Research Data Services) and NeCTAR, to create an aligned set of joint investments to deliver transformation in the research sector. This webinar is part of a series of activities we are undertaking which aim to support the Australian research community in increasing our ability to manage our research data as a national asset. As I mentioned earlier, this is the third in a series of webinars around FAIR: we've already had the webinars on findable and accessible, today is interoperable, and next week is reusable. So today I will give a brief introduction to what "interoperable" means as described under the FAIR data principles by FORCE11. And then I'm very grateful that Simon and Jonathan have made themselves available to talk about what they did in practice in the OzNome project to make their data interoperable. I think it's a great example of how this quite complex topic can actually be carried forward in practice. So this is what FORCE11 says about interoperable, and first of all a few things to keep in mind, just reiterating points I made in the very first webinar. As you look at these headings you'll see that they talk about data and metadata: interoperable applies both to the metadata describing the data collection and to the actual data itself.
Another point to keep in mind is that throughout the FAIR principles they think a lot about data being usable not only for humans but also for machines, and that provides huge benefits in bringing together disparate data sets, in bringing together bits of knowledge that are distributed over different data sets. Interoperability is a key element there in making sure that data can be brought together, so that we actually get those benefits: new knowledge discovery, new relationships discovered, new patterns recognised, all those pieces of work. So as we look at the three headings listed under interoperable, the first one is that data and metadata use a formal, accessible, shared and broadly applicable language for knowledge representation. The thing to keep in mind there is that it's not only you, as the researcher who created the data, who needs to understand the language you've used; another researcher who wants to understand and use the data also needs to, so it's useful that it's a standardised language, something other users can pick up and use. That is definitely the case for the metadata, and ideally it would also be the case in the actual data itself. A very basic example: if a researcher has observed a magpie, they can write "I saw a magpie", but it's much more useful for a researcher somewhere on the other side of the world if you write that it's an Australian magpie, and that that is Cracticus tibicen. That means a researcher on the other side of the world, using a standard language, will actually be able to better understand what you meant and what that description is about.
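The magpie example above can be sketched in code. This is a minimal, hypothetical illustration of the difference between a human-only note and a machine-readable record; it assumes Darwin Core terms (`dwc:scientificName`, `dwc:vernacularName`), which are one real community vocabulary for biodiversity observations, and the field selection here is illustrative, not a complete Darwin Core record.

```python
import json

# Human-readable only: another researcher (or a machine) cannot tell
# which of the world's many "magpie" species this refers to.
informal = "I saw a magpie"

# Machine-readable sketch using Darwin Core term names (assumed here):
# the standard scientific name disambiguates the species for anyone,
# anywhere, and lets software match records across datasets.
structured = {
    "dwc:vernacularName": "Australian Magpie",
    "dwc:scientificName": "Cracticus tibicen",
    "dwc:country": "Australia",
}

print(json.dumps(structured, indent=2))
```

The point is not the particular serialisation (JSON here, purely for illustration) but that the species is named with a term another researcher's tools can recognise.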
Now it's not just about the actual wording, the vocabulary used; it's also useful to have a framework around that which allows the data to be machine readable, picked up by machines, used and interpreted. One obvious example which gets mentioned quite a lot is using RDF and ontologies. That is quite common in the life sciences, and a number of life science researchers were quite active in the FORCE11 group, but one thing they emphasise is that it doesn't have to be through RDF and ontologies; there might be other solutions, and they don't want to make it exclusive to those technologies. So that's something to keep in mind regarding making data interoperable, and it's what I've invited Simon and Jonathan to come and talk about; they'll be able to cover it in much more detail. The second point is around using vocabularies, and the emphasis is this: first of all, try to use a vocabulary that already exists or is agreed on by the community. If you have terms that are not in that vocabulary but it otherwise fits, try to get them added to that vocabulary. Finally, if that is not possible, then and only then start creating your own vocabulary. So please don't go out and create vocabularies for everything; rather, look for an existing community-agreed vocabulary. Also make sure that the vocabulary itself is FAIR, so findable, accessible, interoperable and reusable: in your data set you should have a reference to the vocabulary you are using, and that vocabulary should be findable just as your dataset can be found.
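To make the RDF idea mentioned above concrete, here is a minimal sketch of knowledge represented as subject-predicate-object triples. The observation and person URIs under `example.org` are hypothetical; the predicate URIs are real community vocabulary terms (Darwin Core's `scientificName` and Dublin Core's `creator`). No RDF library is used; plain tuples stand in for triples purely to show the shape of the model.

```python
# Each fact is a (subject, predicate, object) triple. Because the
# predicates are shared URIs from community vocabularies, triples from
# independently produced datasets can be merged and queried together.
triples = [
    ("https://example.org/obs/1",                       # hypothetical observation
     "http://rs.tdwg.org/dwc/terms/scientificName",     # Darwin Core term
     "Cracticus tibicen"),
    ("https://example.org/obs/1",
     "http://purl.org/dc/terms/creator",                # Dublin Core term
     "https://example.org/people/researcher-a"),        # hypothetical person URI
]

# A machine can now find every record that names a species, regardless
# of which dataset the triple originally came from, by matching the
# shared predicate URI rather than guessing at column names.
observations = [s for (s, p, o) in triples
                if p == "http://rs.tdwg.org/dwc/terms/scientificName"]
print(observations)
```

In a real system these triples would live in an RDF serialisation (Turtle, JSON-LD) and be queried with SPARQL, but the interoperability comes from the shared, resolvable vocabulary URIs, which is exactly the point made above.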
The final point they make is that data and metadata should include qualified references to other data and metadata. What they mean is that there shouldn't just be a reference to another dataset, for example, but also an indication of what that relationship is. So not just "it's related somehow to this other dataset", but perhaps it is a subset of another dataset, or it builds on another dataset, expressed using standardised terminology. A little more on qualified references: from the perspective of the metadata especially, it's valuable to refer to other parties or elements around your dataset using identifiers. For example, if you are describing your dataset and saying somebody was involved in creating it, provide a qualified reference stating that the person was, for example, the author of that dataset, and if possible also use an identifier to identify that person. That allows other relationships to be made, further connections to be made, and that information to be picked up and used, especially when being analysed by machines. So just a list here of possible identifiers; these are just examples, and there are more identifiers out there. If you're referring to an author, include their ORCID. If you're referring to a publication, use the DOI related to that publication. If you're referring to software, nowadays you can assign a DOI to a software package and refer to that DOI, etc.
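The idea of qualified references can be sketched as metadata in which every related resource carries both an identifier and the type of the relationship. This is an illustrative sketch only: the relation names follow the DataCite style ("IsDerivedFrom", "Cites"), and the ORCID, DOI values and author name are hypothetical placeholders, not real identifiers.

```python
import json

# Dataset metadata with *qualified* references: each link states what
# kind of relationship it is, and uses a persistent identifier (ORCID
# for the person, DOIs for the related resources) so machines can
# resolve and connect the records. All values below are placeholders.
metadata = {
    "title": "Magpie observations",
    "creator": {
        "name": "Jane Researcher",                          # hypothetical author
        "identifier": "https://orcid.org/0000-0000-0000-0000",
    },
    "relatedIdentifiers": [
        {"relationType": "IsDerivedFrom",                   # not just "related"
         "identifier": "https://doi.org/10.0000/source-dataset"},
        {"relationType": "Cites",
         "identifier": "https://doi.org/10.0000/related-paper"},
    ],
}

print(json.dumps(metadata, indent=2))
```

Contrast this with an unqualified reference, which would be a bare URL with no `relationType` and no indication of who the person is in relation to the dataset; a machine harvesting the qualified version can build a graph of authors, sources and citations automatically.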