Welcome, everybody. It's Jerry Ryder speaking from ANDS. I'm sitting in for Karen Visser today, who's unfortunately unwell, but it's my pleasure to welcome you all to the webinar today, and particularly to welcome Doug, Rahul and Dave, who are going to be presenting with us this morning. Today's session is, as the title suggests, the second in our data citation series; we have four in the series altogether. This one is going to be a really interesting session because it presents some real case studies from the Australian National University and the Australian Antarctic Division about their journeys, if you like, in implementing DOIs and data citation at their institutions. We'll start off with a presentation from Doug Monker, who's the repository manager at ANU. Then he'll hand over to his colleague Rahul, a systems developer at ANU, to talk about their experiences. Then we'll hand over to Dave Connell down in Hobart at the Australian Antarctic Division to talk about the experiences at AAD, and then we can come back to some question time after that.

Okay. Why should we mint DOIs programmatically? A little context is helpful. We had a number of data capture projects, and what we were actually doing in them was building a generic solution. In other words, rather than building individual tailored solutions for each project, we were building a solution that could be adapted to provide a generic data capture platform. This is basically a bit like having automated self-deposit by machines: the machine provides some information, which is the metadata, uploads the dataset, and it goes into the workflow. It's essentially the same solution as we use for the Data Commons and for our data repository, where people come along and deposit data; it's all the same basic code. The code is based around Fedora Commons. Rahul wrote the actual self-deposit code for machines; it's nothing subtle.

Because we're thinking about automated data capture, we're really trying to automate as much as we can, so all the way through we need to make things as automatic as possible. This is actually quite beneficial when we talk to human beings about adding and depositing data themselves, because people absolutely hate entering metadata. I've worked on a number of such projects and I can tell you that professional data maintenance people hate doing metadata, and scientists and researchers really hate doing it; they simply won't do it if they can avoid it. So the whole point is to humanize it as much as possible, to make it as simple as we can get it. It's really got to be simple, it's really got to be easy: pre-filled fields, all the rest of it. And this also works very well for automated deposit, because it's the same thing. Computers are very good at generating lots of data, and they like doing it in a structured manner. So what we're doing is making things as simple as possible, for people and for machines alike, as sketched below.
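To make that concrete, here is a minimal sketch of the kind of automated machine deposit just described: an instrument or pipeline supplies the metadata and the dataset, and the repository workflow takes over from there. The endpoint URL, field names and token are hypothetical illustrations, not the actual ANU Data Commons API.

```python
# Minimal sketch of an automated machine deposit: the capture pipeline
# supplies pre-filled metadata plus the dataset file, and the record
# enters the repository workflow without a human typing anything.
# The endpoint, field names and token below are hypothetical.
import requests

REPO_DEPOSIT_URL = "https://datacommons.example.edu/api/deposit"  # hypothetical
API_TOKEN = "replace-with-a-real-token"

metadata = {
    "title": "Survey run 2013-04-02, instrument A",
    "creator": "Automated capture pipeline",
    "description": "Nightly data capture; deposited without human intervention.",
}

with open("survey_run.tar.gz", "rb") as dataset:
    response = requests.post(
        REPO_DEPOSIT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        data=metadata,               # the pre-filled metadata fields
        files={"dataset": dataset},  # the captured data itself
        timeout=60,
    )
response.raise_for_status()
print("Deposited; record id:", response.json().get("record_id"))
```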
The other great thing was that among our data capture projects we had disciplines like astronomy and genomics. They have a culture of making data available for the substantiation of research, which is a nice way of saying they like to have the data available in support of the research papers, and they had a real demand for a persistent identifier mechanism.

Now, there are a number of persistent identifier mechanisms out there; I'm sure we've all played with handle.net and so on in our time, but we needed to make things simple and straightforward, that is, easy. Remember what I was saying about humanizing the whole process? Because we had control of the workflow, we could make it look like something that just happens, which means people could actually request a digital object identifier and go through the whole basic process without any special effort. If people have to go to a special website, stick their fingers in their ears and dance three times around the fire, they simply won't do it; they just hate doing that sort of thing. So having told you the message about making things easy, I'm now going to hand over to Rahul.

Hi, everyone. I'm Rahul, and I'm going to be taking over the presentation from here. As Doug mentioned, we're trying to make the process of requesting DOIs as simple as possible. What we've done is this: once a record gets created, a number of people have a look at it to review the metadata in the record, and then all a user has to do is click on a button that says Mint DOI. The system does all the background processing, all the checks and verification; it gets the DOI from the service and updates the record. It's as simple as that. Of course, we keep logs of all the requests and responses for reporting and auditing purposes. So that's the technical aspect of how we implemented the solution to make it as simple as possible; there's a rough sketch of the flow at the end of this section.

Our solution can be further expanded; it's not perfect, and we have a long way to go to accommodate all user requirements. One of them is the ability to version datasets. The issue with versioning is that DOIs are minted at the collection level, not at the level of individual files within the collection. When a user comes to us and says, "I've got an updated dataset and I'd like to mint a new DOI for it," the entire collection needs to be duplicated: the old data as well as the new data needs to be uploaded to the new collection, and a new DOI minted for it. This, while inconvenient, meets the requirement of making sure DOIs point to a complete dataset. Also, for the moment, we have restricted the ability to mint DOIs to system administrators only, although we do have the capability of allowing researchers and research officers to mint DOIs themselves.

Moving on, just to give you an example of what the screen looks like: it's simply a matter of clicking Mint DOI, a button on the right side, which mints a DOI for the collection. And our implementation is open source, and we're using open source libraries as well; the source code is available on GitHub.
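As a rough illustration of what the Mint DOI button described above might trigger behind the scenes, here is a sketch: validate the record's metadata, request a DOI from the minting service, write the DOI back onto the record, and log the request and response for auditing. The function, field names and service interface are hypothetical; the actual implementation is the open-source code on GitHub mentioned above.

```python
# Hypothetical sketch of a "Mint DOI" handler: validate, mint, update, log.
import json
import logging

logging.basicConfig(filename="doi_requests.log", level=logging.INFO)

REQUIRED_FIELDS = ("title", "creator", "publisher", "publication_year", "landing_page")

def mint_doi(record: dict, minting_service) -> str:
    """Validate a record's metadata, mint a DOI for it, and log the exchange."""
    # 1. Verify the metadata is complete before asking for a DOI.
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    if missing:
        raise ValueError(f"Cannot mint a DOI; missing metadata: {missing}")

    # 2. Request a DOI from the external service; the payload includes the
    #    landing page URL the DOI should resolve to.
    payload = {f: record[f] for f in REQUIRED_FIELDS}
    logging.info("DOI request: %s", json.dumps(payload))
    doi = minting_service.mint(payload)  # hypothetical service interface
    logging.info("DOI response: %s", doi)

    # 3. Update the record so the DOI appears in the collection metadata.
    record["doi"] = doi
    return doi
```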
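And here is a sketch of the versioning workaround Rahul describes: because DOIs are minted at the collection level, a new version means duplicating the whole collection, re-uploading the old files alongside the updated ones, and minting a fresh DOI, so that each DOI always points to a complete dataset. The repository interface here is hypothetical, for illustration only.

```python
# Hypothetical sketch of versioning a collection-level DOI by duplication.
def new_version(repo, old_collection_id: str, updated_files: dict) -> str:
    """Duplicate a collection with old plus updated files and mint a new DOI."""
    old = repo.get_collection(old_collection_id)

    # Duplicate the collection record with the same descriptive metadata.
    new_id = repo.create_collection(metadata=old.metadata)

    # Re-upload the old data alongside the new, so the fresh DOI points
    # at a complete dataset rather than a diff.
    for name, content in {**old.files, **updated_files}.items():
        repo.upload_file(new_id, name, content)

    # Mint a new DOI for the new collection; the old DOI keeps resolving
    # to the old version of the dataset.
    return repo.mint_doi(new_id)
```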