Thank you very much for the invitation to share what we're doing here at Newcastle. I'm going to talk about going from data to discovery, in terms of our research data storage and the connections that we have. In telling the Newcastle story I'll tell you a little bit about the systems and tools we have, then talk about the three workflows those systems and tools make up, and the connections and integrations between them.

So, to start, the systems and tools in this space at Newcastle. For research data storage we have ownCloud; we run an enterprise version of ownCloud here. For data archiving and publishing we're using an app that was created to run on our ownCloud, called Cr8it (pronounced "crate it"). For the data management and metadata curation workflows we're using ReDBox and the Mint, similar to what Chris was just talking about. And for publishing and discovery we go via our institutional repository, which is NOVA here at Newcastle.

I'm going to describe the workflows and how they all connect, and after that I've got two short videos that show it in action, so I'll tell you about it and then actually show it to you. Unlike Chris, I wasn't keen to do a live demonstration, because it would probably go wrong.

So these are the three workflows, and I'll talk about the connections between them. First, research data storage: that's our ownCloud, an Enterprise version (version seven, I think, at the moment), and it sits on, I think, a petabyte of storage. On that we have the Cr8it app, whose development began in 2013.
Cr8it was born from work that Peter Sefton was doing at the University of Western Sydney (Western Sydney University, I should say) at the time, and it was a collaboration between the University of Newcastle, Western Sydney, Intersect, who were doing the development, and, in those early days, the University of Sydney as well. Cr8it was about a problem we had identified in the library: wanting a connection from research data storage into our data management and publishing workflow in ReDBox and the Mint. Since development started back in 2013 there have been a few development cycles along the way, a few sprints and agile iterations to get to where it is now, and there's also some future development coming, which I'll tell you about at the end.

So in the research data storage workflow, that's what's sitting there. In the data management and publishing one we have, similar to Chris, ReDBox and the Mint. ReDBox is our metadata store and descriptive curation workflow, and it's hooked up to the Mint, which is our name authority service for our party records (our staff members, our researchers) and also for information about our grants. And then that's connected to NOVA for discovery.

I'll just run through it quickly. In the research data storage workflow, the first one, what do researchers or users do? They log into the ownCloud environment that we have. They create a crate; a crate is a data crate. They add files to that crate, the files they're working with in ownCloud. Then in Cr8it they also have the opportunity to add metadata, and from there they can review the metadata and publish the crate.
When they publish the crate, a couple of things happen. One is that it comes to the library, into the next workflow, data management and publishing, and the researcher receives an email containing a lot of that metadata. In the data management and publishing workflow, the one sitting in the middle, the library works on the metadata that's come across from Cr8it as an alert. The alert arrives in that system and the library works on it: we augment the metadata, we add metadata, and we usually have more conversation with the researcher, particularly around permissions and descriptions. When we're happy with that, we publish the record and it goes across into NOVA for discovery, and up through to Research Data Australia.

This is Vicki's highly sophisticated systematic diagram; it's just a way of very simply demonstrating what's happening. We've got ownCloud, where the researchers are working with their storage. I should say that ownCloud is just one of the storage options we have at Newcastle, but if you want these publishing connections, ownCloud is where we have the capability to do that.

When a researcher uses Cr8it and publishes or submits a data crate, they press the button, which I'll show you shortly, and two things happen. First, a metadata alert goes across to our ReDBox system; it's like the stub of a record, an alert carrying the information collected while the researcher has been working in Cr8it. Second, the data crate itself, a zip file that uses the BagIt specification that came out of the California Digital Library, goes into our storage layer. So the metadata alert goes across and is ingested into ReDBox, where more work happens: in ReDBox we augment it.
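Since the crate lands in storage as a zip following the BagIt specification, here is a minimal sketch of what a BagIt-style layout contains: a declaration file, a `data/` payload directory, and a checksum manifest. This is stdlib Python for illustration only, assuming made-up file names; it is not Cr8it's actual packaging code.

```python
import hashlib
import pathlib
import tempfile

def make_minimal_bag(bag_dir: pathlib.Path, payload: dict) -> None:
    """Lay out a minimal BagIt-style bag: bagit.txt, data/ payload, md5 manifest."""
    data_dir = bag_dir / "data"
    data_dir.mkdir(parents=True)
    # Required declaration file: version and tag-file encoding.
    (bag_dir / "bagit.txt").write_text(
        "BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n"
    )
    manifest_lines = []
    for name, content in payload.items():
        (data_dir / name).write_bytes(content)
        digest = hashlib.md5(content).hexdigest()
        manifest_lines.append(f"{digest}  data/{name}")
    (bag_dir / "manifest-md5.txt").write_text("\n".join(manifest_lines) + "\n")

# Hypothetical crate matching the frog-study example later in the talk.
bag = pathlib.Path(tempfile.mkdtemp()) / "green-frogs-crate"
make_minimal_bag(bag, {"population.csv": b"site,count\nHV1,12\n"})
print(sorted(p.name for p in bag.iterdir()))  # ['bagit.txt', 'data', 'manifest-md5.txt']
```

The manifest is what later makes fixity checking possible: any consumer of the bag can re-hash `data/` and compare against `manifest-md5.txt`.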
Then from ReDBox we send a record, a RIF-CS record, across to NOVA. Embedded in that metadata alert, travelling with it all the way through the process, is the URL of the data crate in the storage layer. The institutional repository has a private interface into the storage layer, so it's able to be the gatekeeper for access to the data: the data isn't publicly available, it's only available to NOVA through that private network access.

Now, quickly, this is a three-minute video. I'll demonstrate what I've just told you in terms of ownCloud and Cr8it. The researcher logs in and sees all their files in ownCloud. They can toggle across and they'll see a little icon called Cr8it. They have a default crate for their data, but they can create a new one, so I'm going through the process of creating a new crate. This is my study on green frogs, and I'm filling in the metadata as I go; this is the description of my crate. When I fix my typo and click to create, I have a crate, and it tells me at the top there, in yellow, that I've got a new crate. Now I'm toggling back to my files in ownCloud, and I can add files by right-clicking them to add them to the crate. I'm adding my data dictionary, my population information on frogs, and environmental information, and I've got some images. Basically, as the researcher, you pick and choose what you want to package up into that data crate, and as you go it tells you it's adding things to the crate, so you can actually see it. Then we go back to Cr8it and we can see the files have gone into our crate. On the right-hand side the researcher, or the user, has some ability to add metadata around the files that will go into that crate, and this is what goes across to start the stub of a record for the library to augment for publishing.
There are a few things here. We've entered crate information and the title, and we're adding the creators now: it's hooked up to our Mint system, so it does a lookup against the Mint and brings the names back. It's hooked up to the Mint again for grants, so we can search for and select the grant, pull that information back in, and so forth. There's some more work going on around what metadata should actually be here, which I can tell you a little about. There's a feature to check the crate, to make sure that all the items are still valid and still there since you added them. When you hit the button to submit, you get to review all the metadata you've entered; at that point you can go back and change it, or you can hit the submit button. That submit button sends the data crate information from research data storage to the library. You can also send an email to additional people you're working with, to say that that's what's happened. So that's how Cr8it runs on our ownCloud. The researcher can also zip up the data crate themselves, as a copy to save elsewhere if they want to.

So the submit button has done two things: it's sent that data crate, that data set, to storage for archival purposes, and it's sent the alert across to the library. This is our ReDBox instance, which isn't public-facing (it's for library users), so it's fairly plain, and I'll just start the process to show you what happens here. When you log into the system, the very first thing you see in the alerts is an alert, the Hunter Valley study on green frogs, and it's arrived with the source showing ownCloud-Cr8it, telling the library where it's come from. So we start the process of looking at that record: we go into it and start working on it, and as Chris showed you before, there are various things the library works on.
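The "stub of a record" that travels from Cr8it to ReDBox can be pictured as a small structured payload. To be clear, the field names, grant identifier, and URL below are illustrative assumptions for the sketch, not the actual schema Cr8it sends:

```python
import json

def build_metadata_alert(title, description, creators, grant_id, crate_url):
    """Assemble a stub-record payload of the kind Cr8it might send on submit.
    Every field name here is illustrative, not the real ReDBox alert schema."""
    return json.dumps({
        "title": title,
        "description": description,
        "creators": creators,        # names looked up against the Mint name authority
        "grant": grant_id,           # grant looked up against the Mint grant records
        "source": "ownCloud-Cr8it",  # tells the library where the alert came from
        "dataLocation": crate_url,   # URL of the data crate in the storage layer
    })

alert = build_metadata_alert(
    "Hunter Valley study on green frogs",
    "Population and environmental data from the field study.",
    ["Example Researcher"],
    "G1234",  # hypothetical grant identifier
    "https://storage.example.edu.au/crates/green-frogs.zip",  # hypothetical URL
)
```

Note how the crate URL rides along inside the alert; that is what lets the record in NOVA point back into the storage layer at the end of the chain.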
So we can add lots of information there, through conversation with the researcher as well. That's basically how it works: this is just demonstrating that the information from the crate comes over, as entered by the researcher, and is populated into various sections of the system. When we're finished, we hit the button to publish the record. The record is published across to our institutional repository, and it shows here that it's been published. So finally, after we hit the publish button, it arrives in NOVA. Behind the scenes the record goes over as well, and it's harvested from there; we harvest it up to Research Data Australia.

I guess the last thing I'd say: that's the process, those are the three workflows, and that's how they're connected, from the researcher's side in ownCloud, through ReDBox and the Mint in the library, through to discovery at the other end, facilitated through NOVA, with that connection back into research data storage where applicable. Lastly, I mentioned there have been a number of development iterations on the Cr8it tool; there's currently funding for further development and enhancements to the tool, which will be trialled with CloudStor+. There's a group working on that: AARNet, Intersect doing the development, and also Western Sydney University and the University of Newcastle, because we've been working on this for quite a while now. So that's the end of my presentation. Thank you very much.

Fantastic, thank you Vicki. If anybody has any questions, can you put them in the... oh, there's one here already. It says: once a project is complete and all cratable data is packaged up, published and archived, how do you ensure researchers go back and delete all remaining redundant data in ownCloud? I have to actually...
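The harvest step described above, NOVA up to Research Data Australia, is the kind of exchange RIF-CS repositories typically do over the OAI-PMH protocol. As a sketch of what such a harvest request looks like (the endpoint URL and set name here are hypothetical, not NOVA's real configuration):

```python
from urllib.parse import urlencode

# Hypothetical OAI-PMH endpoint; a real repository exposes something similar.
BASE_URL = "https://nova.example.edu.au/oai"

def list_records_url(metadata_prefix="rif", set_spec=None):
    """Build an OAI-PMH ListRecords request of the kind a harvester issues
    to pull RIF-CS records from a repository."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec  # optionally restrict to one record set
    return f"{BASE_URL}?{urlencode(params)}"

url = list_records_url(set_spec="datasets")
```

The harvester calls this sort of URL on a schedule, pages through the results with resumption tokens, and that is how published records flow onward without any manual push from the library.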
That would be a policy or a business rule within IT, and I actually don't know the answer to what we do there.

When you're selecting the files to add to a crate, Vicki, does it track where they are? If a researcher moves them around, does the crate become disconnected from them?

Yes. It was fairly quick on the screen, it flashed up: one of the icons across the top of the navigation in Cr8it is a button to check the crate, which I clicked during the demonstration, and it validated. The purpose of that is exactly this: checking whether file names have changed or files have been removed, because the crate references where those files live.

So I presume the advice would be to structure a location pretty much where you're going to keep the files, set it, and not change it too much?

Yes, but if you do, you just have to do a little bit more work when you go to package it up.
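The check discussed in this answer, whether crate items have been moved, renamed, removed, or edited since they were added, amounts to comparing the current files against a manifest recorded at add time. A minimal sketch of the idea, assuming the crate stores relative paths and checksums; this is an illustration, not Cr8it's implementation:

```python
import hashlib
import pathlib
import tempfile

def check_crate(manifest: dict, root: pathlib.Path) -> dict:
    """Report crate items that are missing (moved/renamed/deleted) or changed.
    `manifest` maps relative paths to md5 digests recorded when files were added."""
    missing, changed = [], []
    for rel_path, recorded_md5 in manifest.items():
        f = root / rel_path
        if not f.exists():
            missing.append(rel_path)   # file moved, renamed, or deleted
        elif hashlib.md5(f.read_bytes()).hexdigest() != recorded_md5:
            changed.append(rel_path)   # content edited since it was crated
    return {"missing": missing, "changed": changed}

# Hypothetical crate root with one intact file and one that was moved away.
root = pathlib.Path(tempfile.mkdtemp())
(root / "frogs.csv").write_bytes(b"site,count\n")
manifest = {
    "frogs.csv": hashlib.md5(b"site,count\n").hexdigest(),  # still valid
    "moved.csv": "0" * 32,                                  # no longer at this path
}
report = check_crate(manifest, root)
```

A check like this is also why the advice at the end of the answer holds: if files stay put, validation passes untouched; if they move, the researcher just re-adds them before packaging.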