Great. Thanks very much, Keith. And I'd like to thank ANDS for its support of PARADISEC over the years. So PARADISEC has been running for some time, and as you can see, it's become a significant collection: around 31 terabytes of material, most likely more than that because it's increasing almost every day, representing over 1,100 languages, 162,000 files and 7,500 hours of audio. So it's a significant collection, and there's a huge management task involved in that. One of those tasks is making sure that this material is findable by the people we want to find it. We have a catalogue that we've been working on for a number of years. We built our own; unfortunately, we didn't find one off the shelf that we could use. The catalogue allows you to look at material with a geographic point of entry and through a faceted search. We have OAI (Open Archives Initiative) and Dublin Core-based metadata. We try to be as lightweight as possible with the metadata because of our experience. We're all researchers. I'm a linguist, my colleague Linda Barwick is a musicologist, and our experience was that people just won't enter metadata if it's too complicated. So we've tried to make it as simple as possible and to make the catalogue do as much of the work for you as possible: using controlled vocabularies, doing predictive data entry, and having a minimal number of fields. As you'll see here, we have a screenshot of the catalogue. We have the possibility to make the metadata private. So as Keith was just saying, FAIR doesn't mean that everything has to be made publicly accessible. If you're constructing a collection, you can keep all the metadata private and then publish it when you're ready. You can also assign various kinds of access conditions, including open subject to normal conditions, or closed subject to whatever conditions you want to specify.
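To make the "lightweight Dublin Core" idea concrete, here is a minimal sketch of what such a record might look like when serialised for an OAI feed, built with Python's standard `xml.etree`. The field values (title, language, coordinates) are invented for illustration; only the namespaces are the real OAI-PMH/Dublin Core ones.

```python
import xml.etree.ElementTree as ET

# Namespaces used by OAI-PMH Dublin Core ("oai_dc") records.
DC = "http://purl.org/dc/elements/1.1/"
OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
ET.register_namespace("dc", DC)
ET.register_namespace("oai_dc", OAI_DC)

def dc_record(fields):
    """Build a minimal oai_dc record from a dict of Dublin Core fields."""
    root = ET.Element(f"{{{OAI_DC}}}dc")
    for name, value in fields.items():
        ET.SubElement(root, f"{{{DC}}}{name}").text = value
    return ET.tostring(root, encoding="unicode")

# An invented example item: a handful of simple fields, nothing more.
xml = dc_record({
    "title": "Elicitation session, village X",
    "language": "lkn",           # ISO 639-3 subject-language code
    "coverage": "-9.43 159.96",  # point coordinates
})
```

The point of keeping the field set this small is exactly the one made above: depositors will fill in three or four fields, and the catalogue's controlled vocabularies can do the rest.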
Because our project is really focused on language materials from small languages, that is, all of the 7,000 other languages that are out there in the world, we include language identifiers for the subject language and content language of items in the collection. This is the linchpin that lets us feed a number of different harvesting services that I'll show you in a minute. Our online catalogue lets you specify geographic coordinates, which then also allows you to search using that geographic information. Because of the work we're doing, we have lots of connections into the region, in particular the Pacific, and we're actively seeking collections there: collections of analogue tapes that need to be digitised. You can see the various agencies that we've collaborated with and continue to collaborate with, digitising hundreds of tapes, putting them into the collection and making them accessible. So when we talk about findability, we can talk about the granularity of finding: we can find collections, we can find items, and we should be able to drill down into a collection to find the things we're particularly interested in. We can characterise findability on a scale, if you like, from 0 to 10. If we talk about primary research materials that people have in their offices or in their homes, typically the findability of those things is about 0. It may be 1 if your colleagues know that you've done this work and you have these tapes sitting in your office. But a speaker of the language trying to locate recordings that you made with their grandparents is not going to be able to find that material. From our point of view in PARADISEC, we infer that these records must exist because we know the research has been done, so we can go looking for them. And then what we can do is add records to our catalogue pointing to analogue materials, and we do this in some instances.
We also point at websites that we know exist. There are some fine websites that have language materials on them, but websites can be transient, so what we do is point at the Wayback Machine, the Internet Archive's entry for that page. Here's an example of a text that was produced in the Solomon Islands and put online by Project Canterbury, which is an Anglican online archive, but it's a website, and there's no guarantee of longevity. By putting it into our catalogue, we make it available and findable via the search engines that we'll see in a moment, so we increase its findability to perhaps three out of ten, and we use the language identifier. There you can see the three-letter ISO 639-3 code for the language, in this case lkn. What we've also done is provide images of manuscripts. This is a collection of papers produced by Arthur Capell during his life. He was a professor of linguistics at Sydney University, and when he died he left a huge number of papers, which we then digitised: we just set up a camera and took images of all of these papers. As you can see in the bottom right there, there are a lot of handwritten original manuscripts, which are really valuable from a research perspective but, sitting in a box in his executor's house, completely unfindable. So we put entries into the catalogue, and we put this through the Heritage Data Management System to put an HTML framework around it, and you can then find these items and resolve down to the level of the image. Now, you can't get to the transcript of the image, because at the moment all we have there are images. But one of the next things we do to increase findability is to include transcripts together with recordings. So here's an image from our catalogue, and what we have is time-aligned transcripts of recordings.
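Pointing a catalogue record at the Wayback Machine rather than the live page is just a matter of URL construction: the Internet Archive resolves `https://web.archive.org/web/<timestamp>/<url>`, where a partial timestamp (such as a bare year) redirects to the nearest capture. The target page below is illustrative, not an actual catalogue entry.

```python
def wayback_url(original_url, timestamp):
    """Build a Wayback Machine link for a page; a partial timestamp
    (e.g. just a year) resolves to the nearest archived capture."""
    return f"https://web.archive.org/web/{timestamp}/{original_url}"

# Hypothetical example: a Project Canterbury page we want to keep findable
# even if the live site disappears.
url = wayback_url("http://anglicanhistory.org/oceania/", "2019")
```

Storing this archived URL in the catalogue record is what lifts the findability of a transient website: the pointer keeps working even after the original host goes away.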
These allow us to play the recording, and you can imagine, because I won't show it to you, that as the recording plays, it scrolls through that transcript. This increases findability significantly: you can resolve down to the level of words and find them in the context of the recording. One of the other things we do is embed some metadata into the header of the WAV files in our collection. We create a Broadcast WAV format file, which is the European standard for archival audio formats, and you can see a little snippet of XML there, which is extracted from our catalogue and inserted into the WAV file before it's all sealed up and put into our collection. We use persistent identifiers of various kinds. Because the collection started, as I say, some 15 years ago, we have an internal persistent identification system, which is a collection ID followed by an item number. More recently, in the last couple of years, we've put DOIs through the whole collection, so we have DOIs from the level of each file up through items and up to the collection level. You can see also that we have Zotero and Mendeley integrations, which also makes things findable: people will cite these items using this form, and they can click and insert them into their Zotero and Mendeley databases. We have an API, and we produce two feeds so people can link into our collections. RIF-CS is at the collection level, and that's what's harvested by Research Data Australia and other services; Trove also harvests that material. The OAI-PMH feed is primarily targeted at the Open Language Archives Community (OLAC). Linguists have been very good at setting up services based around these language identifiers, and the OLAC page allows you to look at all the material produced by any one of their 60 member archives for any given language, so it's a fantastic resource for finding information about the world's languages.
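Embedding catalogue XML inside the audio file can be sketched as appending an XML-carrying chunk to the WAV's RIFF structure and fixing up the overall size field. This is a minimal illustration, not PARADISEC's actual pipeline: real Broadcast WAV writing also involves a fixed-layout `bext` chunk, and the `axml` chunk ID used here for catalogue XML follows EBU practice. The demo XML payload is invented.

```python
import struct

def append_axml_chunk(wav_bytes, xml_text):
    """Append an 'axml' chunk carrying catalogue XML to a RIFF/WAVE
    file in memory, and update the top-level RIFF size field."""
    assert wav_bytes[:4] == b"RIFF" and wav_bytes[8:12] == b"WAVE"
    data = xml_text.encode("utf-8")
    pad = b"\x00" if len(data) % 2 else b""  # RIFF chunks are word-aligned
    chunk = b"axml" + struct.pack("<I", len(data)) + data + pad
    new_size = struct.unpack("<I", wav_bytes[4:8])[0] + len(chunk)
    return b"RIFF" + struct.pack("<I", new_size) + wav_bytes[8:] + chunk

# Demo on an empty WAVE shell, with an invented scrap of catalogue XML.
minimal = b"RIFF" + struct.pack("<I", 4) + b"WAVE"
tagged = append_axml_chunk(minimal, "<item>AA1-001</item>")
```

The appeal of sealing the metadata into the file itself is that the identification survives even if the file is copied away from the catalogue: anyone who finds the WAV can still recover which item it belongs to.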
And if we update an item in our catalogue, then the nightly harvest from OLAC will update the OLAC record the next day. As you can see, Research Data Australia takes our feed and presents it in interesting ways. The benefit for us is not only that our material is more findable, but that some of these services present the information in our catalogue in ways that we don't: you can do faceted searches in some of these services, and they link into all kinds of other services and data providers that let you do interesting new searches. There's the Open Language Archives Community page: they have a faceted search on the right and a whole lot of the services they provide advertised on the left. If you're interested in languages at all, it's really the one-stop shop for finding information about what's in any archive in the world through their harvested system. This is the Virtual Language Observatory, a European service funded by CLARIN; they also take our feed, and you can see that you can search our collection through that service as well. And WorldCat, the international catalogue of all libraries, also takes our feed. So that's the big-picture side of it, the international search engines. On the other side are the people we want to find this material, out in the Pacific, and we've been working very hard to get material available in forms that can be accessed by people in the Pacific. On the top right, there's a really interesting little project that was run in Madang, where they took recordings and played them at a local market, asking people in the market to comment on the recordings and enrich the metadata in that way. They then sent that to us in a spreadsheet, which we were able to import into our catalogue. At the bottom, you can see a speaker of one of the languages who happened into my office in Melbourne, went through the collection and found his grandfather speaking, and he was quite amazed by that.
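What a harvester such as OLAC does each night can be sketched offline. The fragment below is an invented response in the shape of an OAI-PMH `ListRecords` page carrying Dublin Core, and the helper pulls out the `dc:language` identifiers that harvesters index; a real harvester would fetch pages over HTTP and follow resumption tokens.

```python
import xml.etree.ElementTree as ET

# An invented fragment in the shape of an OAI-PMH ListRecords response.
SAMPLE = """\
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
 <ListRecords>
  <record>
   <metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
     <dc:title>Text from the Solomon Islands</dc:title>
     <dc:language>lkn</dc:language>
    </oai_dc:dc>
   </metadata>
  </record>
 </ListRecords>
</OAI-PMH>"""

NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}

def languages(oai_xml):
    """Collect the dc:language codes from an OAI-PMH response."""
    root = ET.fromstring(oai_xml)
    return [e.text for e in root.findall(".//dc:language", NS)]
```

It's this per-record language code that lets OLAC group material from 60 archives under a single language page, so that one search surfaces everything held anywhere for that language.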
So there's an example of how unfindable the material can be: he had to come into my office to find it. And that's one of our big problems, how to make the material in our catalogue accessible to people who aren't necessarily looking around on the web, because they just don't expect to find material in their language. On the left, there's a man who's working in our office in Sydney. This was an ANDS-funded project to enrich the PNG metadata in our collections, and he's going through listening to material and adding metadata where he can. One of the other ways that we're promoting the collection is by building a virtual reality project. What you're looking at there is a map of Vanuatu, and each of those shards of light coming up represents a language. Where there's a little symbol, you can listen to a snippet of the language, which comes out of the PARADISEC collection, and you can see some information about how much we know about the language: whether there's a grammar, whether there's a lexicon, and how many speakers there are of that particular language. Now, this is generating a lot of publicity, as you can see on the right. There's an article from the Papua New Guinea Post-Courier, and on the bottom right there's an article written about this in Pursuit at the University of Melbourne, and so on. Getting this publicity is important precisely so that people will go to look in the catalogue to find information, or think about collections they have that need to be digitised. So it's an investment of time and effort to build the virtual reality, but it's captured a lot of public attention. And it's also a research output, in that it is driven by well-formed data in the PARADISEC collection: we've automatically snipped 20 seconds out of audio files and used the naming convention and the metadata in the catalogue to feed this virtual reality display.
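The automatic snipping step can be sketched with Python's standard `wave` module. The function name and the 20-second default are just for illustration, not the actual pipeline, but the mechanics are the same: read enough frames for the snippet, write them out with the original audio parameters.

```python
import wave

def snip_wav(src_path, dst_path, seconds=20):
    """Copy the first `seconds` of audio from one WAV file to another."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        n = min(src.getnframes(), int(seconds * src.getframerate()))
        frames = src.readframes(n)
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)   # frame count is corrected on close
        dst.writeframes(frames)
```

Run over the whole collection, with the output filenames derived from the archive's collection-item naming convention, this is enough to generate the audio layer of a display like the VR map.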
So ultimately we do want to get this material out to the Pacific, and what's amazing really is that now most people in the Pacific have mobile phones that access the internet. On the right you can see a poster advertising the internet on your phone in Port Vila in Vanuatu, and on the left you can see a church, but above the church there's a mobile phone tower, which is now the way that people are accessing all this kind of information. So we want to make our material findable for people in these remote locations, even in the highlands of Papua New Guinea or the most remote parts of the Pacific. The catalogue is findable to them through various means, including of course Google, but we also need to make the data accessible, interoperable and reusable for them, though I'm not going to talk about that now. PARADISEC has created a standard metadata set which means that, as the data comes in, it's described with a light touch. As I say, we apply as much metadata to items as possible, but for some of the legacy material there's just very little metadata and we have to infer what we can. We also rely on people entering that metadata online if they can, or sending information to us; we're always open to enriching the metadata that's in the collection. The main point of the metadata is that you can then locate the primary records and have them played to you, or see them, or download them if you have the privileges. All of that makes the material more accessible and findable, and publishing the metadata through APIs for our discipline-specific and more general search tools makes it more findable as well. We do many things to try and publicise the existence of the collection, including what may seem gimmicky, like virtual reality or augmented reality, but all of this goes to increasing public knowledge of the collection, which increases findability and also helps us locate analogue data that needs to be digitised.
Part of all of this also requires data management training, so that people know how to build their own collections. We do a lot of training of researchers here in Australia but also in the Pacific, and we have a lot of engagement with community agencies in the Pacific, where we try to get funding to run digitisation programmes with those agencies. So that's our story about findability. Thanks.