But please, Adam, take it away. Awesome, thank you. So yeah, kia ora, good afternoon. Thank you for coming along. Second day, last stream session, just after lunch, so thank you for making the effort. I'm actually wearing two hats at Auckland Museum right now. The one I've been wearing for slightly longer, and the better fitting one, is Digital Collection Information Manager. So I look after Collections Online, some of the data analysts in the museum, and our Online Cenotaph database. That job is all about getting our data out of our source systems, publishing it online, and encouraging remix and reuse of that content. And I've also just put on a slightly newer hat, which is Acting Head of Information, Library and Inquiry Services; that's managing the documentary heritage collection and the whole information ecosystem at the museum. As Lucy mentioned, this was originally a seven-minute lightning talk. That got extended to 25 minutes. And then 10 days ago, Dave Sanderson, one of the conference conveners, rang up and said, hey, do you want to do a full hour? I said no. But here we are. So we've got an hour together, isn't that fantastic? A crisis of capacity: this is one of those things that's been working away in the back of my head for about, I don't know, 18 months. So it's actually really, really great to be here with you. What we're going to do today is I'm just going to walk you through my journey through this problem that I keep coming back to at the museum. And my journey is just one way of getting through this. I'm not saying it's the only way, and actually I'm really interested in whether you've done anything different, anything similar, or what you would do in this space. So I'm really excited to share it with you. So, a crisis of capacity: what am I talking about? Hopefully I'm talking, if this works. Oh no, I used the keyboard instead. There we are.
And I think this is true for my bit of the museum, the collections and Collections Online: it's not dropping visitation numbers that's the problem for us. It's the crisis of capacity, that we're being asked to do a lot more with a lot less. And we've been doing this for a long time. At Auckland Museum we've been doing this for about 163 years and counting, and we've got a lot of stuff. Art, archives, a research library, collections, Online Cenotaph with the records of 200,000 servicemen and women who went overseas. And this has created this huge backlog. And that's what keeps me up at night. I think about the backlog. For all of you who are pretending you don't know what the backlog is, it's the unprocessed collections that we haven't yet got to. I was speaking to our pictorial curator and he reckoned there are 3 million unprocessed photographs in the collection. At our current rate, that would take 75 years to get online. And that sort of boggles my mind. So that's what I've been thinking about: how do we tackle this backlog? How do we make that 75 years five years? What technologies and tools can we use to do that? I should quickly point out that Auckland Museum is doing amazing stuff in this space anyway. For those of you who were on Barb's tour, you'll have heard about the Pacific Access Project. We closed one gallery, the Oceans and Coastal Gallery on the second floor, if you've been into the museum, and turned it into the Collection Readiness and Access Project, where we put 16 people who are just working on backlogs from across the collections. You've got Dave Sanderson's team in there, I think six photographers, sort of productionising how we take our images. We've got copyright specialists and data analysts just cleaning up the backlog and trying to get as many new records online as fast as possible. And we're doing amazing things in that space.
We put 2,000 new records online every month, 2,000 new images. And every day we make about 4,000 updates to our collections online. The way our system works is that every five minutes we take a data dump from our three source systems and push that data online. One of my favourite graphs to show people about this is this one, from a couple of months ago now. It runs from 6 a.m. through to 8 p.m., and it shows the number of records updated on Collections Online. You can see at 6 a.m. the first brave souls reach the museum and start making the first couple of record updates, then more people come in, we go on through the day, and then there are these people who just won't go home, you know, there they are, making a difference right up until 8 p.m. My favourite thing about this graph is it shows that we're active, that we're not dusty and old, that we're generating new knowledge constantly. I also love this diagram for one brilliant reason: I've worked in about four or five museums, and all of them have this pattern. I don't know if anyone can guess the one I'm about to show you. Morning tea. You can see this absolute peak as everyone saves their work, then it drops down, we see nothing going online, and then a peak afterwards when everyone's got full bellies, full of coffee, and they're ready to hit the floor again. You can kind of see lunch and an afternoon tea hidden in there as well. For me that sums up every museum I've ever worked in: it all dips down towards 10.30. So we've got all these collections and we're trying to put them online, but we've also got this problem of changing expectations. Our audiences are expecting us to do this fast. I mean, these are people who don't rent a video any more, they go and get Netflix. I haven't watched an advert in like three years. People want their stuff and they want it now. You want some food, you get it delivered on Uber.
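As a rough illustration, here's one way that hour-by-hour graph could be built, assuming the update feed is just a list of ISO timestamps. The museum's actual pipeline and log format aren't described in the talk, so the shape of the data here is invented:

```python
from collections import Counter
from datetime import datetime

def updates_per_hour(timestamps):
    """Bucket record-update timestamps into hour-of-day counts."""
    return Counter(datetime.fromisoformat(ts).hour for ts in timestamps)

# Hypothetical entries from an update log (ISO 8601 strings).
log = [
    "2017-11-21T06:05:00",  # the first brave souls at 6 a.m.
    "2017-11-21T09:12:00",
    "2017-11-21T09:47:00",
    "2017-11-21T10:31:00",  # the morning-tea dip sits around here
    "2017-11-21T19:55:00",  # the people who just won't go home
]
counts = updates_per_hour(log)
```

Plotting the counts from a real, full-sized log this way is what would surface the 6 a.m. ramp-up, the 10.30 morning-tea dip, and the 8 p.m. tail described in the talk.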
So how does the museum work with that, with people wanting it instantly? My favourite story is from when we did Collections Online, which we presented here two or three years ago. We put a million records online, and we released them overnight. It wasn't built overnight, but we released them overnight. And the first emails I got weren't 'oh my God, you're amazing, you deserve a medal', which were naturally the emails I was expecting, but ones like 'oh, you didn't do this collection' or 'when's my donation going to be online?' So instantly, people are expecting that now we're starting to put things online, we should be doing it quicker, and they should be seeing their stuff up in five minutes. So that's the background: how do we deal with this? And then at last year's NDF, I don't know who was here and if there were many people here. Yeah, Dave Brown from Microsoft gave the keynote and he showed us CaptionBot, which is this awesome little tool where you upload an image and it uses the Microsoft Cognitive Services to create auto captions and tags, and it gives you some metadata that you can add to your images. And I honestly left last year's NDF thinking, this is it, here's our solution. We're going to throw all of these images through AI, we're going to get all those captions, I'm going to put them online, and then we'll be done, sorted, and I'll go home and have a cup of tea. I know Paul covered some of this yesterday, but what is computer vision and AI? I'll try and sum it up really quickly. You train a computer by feeding it visual information, so that when you show it something, it recognises what it is. I did have a diagram, which I've taken out, of the way my toddler draws cars. You keep showing them the same picture, and that's how he learns what it is. It's the same kind of thing.
And it seemed like a really great way for us to move forward. I should quickly point out that I'm not suggesting we use computers to replace our staff, but rather that they let us tackle that backlog head on while allowing the staff to do their jobs more efficiently and work on the gnarlier problems that we come up against. So I got back from NDF last year and I threw some images in. The first one came up: 'a living room filled with furniture and a fireplace'. This is using Microsoft Cognitive Services, 94% confidence, and some pretty good tags: indoor, living room, table, window, furniture, ceiling, wood, view, chair, fireplace, large, decor. I mean, that's cool. That's as good as I think we would have done if a human had done it, and I said, perfect, that record could go online. Of course, obviously I have to show you some bad ones. 'A sheep in front of a building.' But what's great there is we've got 9% confidence, so we know this one's a bit crap. And the tags start getting really weird: building, outdoor, house, grass, front, sheep, pier. Carry on a little longer: horse. Where did that come from? We get to train, zoo, field, riding, herd and track. No idea. But we know from that 9% that it's a bit rubbish. Maybe that's one that has to go back into some other system. And this one, which is even better because of yesterday's talk. So it's not a cell phone, as in Paul's examples yesterday, but I don't know what AI has against babies. It works really well with architecture, but you can see how we could really offend people. But again, quite a low confidence rating. And the way I'd done this was just one record at a time, using that CaptionBot AI, one by one. And of course, with three million records, that was still going to take me quite a while.
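For anyone wanting to try the same single-image workflow, here is a minimal sketch against the Cognitive Services Computer Vision 'describe' endpoint. The endpoint region and API version are assumptions, you would need your own subscription key, and the sample response values below are illustrative:

```python
import json
import urllib.request

# Region and API version are assumptions; check your own
# Cognitive Services subscription for the real values.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v1.0/describe"

def caption_image(image_url, key):
    """Ask Computer Vision to describe one image (live network call)."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_caption(json.load(resp))

def parse_caption(result):
    """Pull the best caption, its confidence, and the tags from a response."""
    best = max(result["description"]["captions"],
               key=lambda c: c["confidence"])
    return best["text"], best["confidence"], result["description"]["tags"]

# The rough shape of a 'describe' response (values are illustrative):
sample = {"description": {
    "tags": ["indoor", "living", "room", "table", "window"],
    "captions": [{"text": "a living room filled with furniture and a fireplace",
                  "confidence": 0.94}]}}
text, confidence, tags = parse_caption(sample)
```

The confidence score that comes back with each caption is exactly the 94% and 9% figures mentioned above, which is what makes the later triage possible.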
So I thought, I can do a little bit of Python, and I sort of know my way around APIs; I'm a little bit dangerous with them. And I opened up the documentation. And I just couldn't make it work. I couldn't get our API to work with their API to get a response that was actually usable to put back into our source systems. I spent a bit of time on it, then I just sort of went meh, and I went back to my day-to-day job, which looks something like this. And then I kind of forgot about this problem for like six months. It was just something I'd played with, had a go at, and then put to the side. And actually it was another little side project, where I was trying to make a Twitter bot, that got me going again. I ended up on a forum and someone was telling me I just needed to go and read the documentation, that I shouldn't be asking such stupid questions and needed to go and read the book. And I was over it, and I thought, well, I wish I could just pay someone to do this for me. How much easier would that be? And that's how I stumbled across the gig economy. I don't know if anyone's been using this or venturing into this space, but it's kind of weird and awesome at the same time. When we think about these things, we often think of Airbnb and Uber, the sort of platform-as-a-service model, where the platform acts as a go-between for the user and a provider. These sites allow what we'd call micro-interactions or micro-jobs, costing somewhere between $5 and $50, completed by a freelancer using an online marketplace. And I forgot to remove my image caption thing. You're hearing a lot about it in the news, things like Uber and Airbnb, and this ability for us to access a workforce that's online. I mean, we already use freelancers, but this allows us to access them anywhere in the world.
And so I ended up looking at three sites, Fiverr, Upwork and Freelancer, which are essentially sites where you can post small jobs costing anywhere between $5 and $50 and invite people to help you out. And so I put my API question up there: hey, can anyone take Auckland Museum's collection API, run it through one of these Microsoft services, the AI system, and give me the response as a JSON file that I can then import into our systems? I wanted to see if I could get that workflow going really quickly. And you literally put up a job, and you can see some of them here, from $15, $5 — this was just when I searched 'Excel' to see what kind of things were coming up. I put it up, and 20 minutes later someone had done the job. I actually asked three different people to do it in the end, because I wanted to make sure I wasn't getting ripped off, and they all provided me exactly the same thing. So I just paid for the same thing three times. And it cost 15 bucks. But then I was able to throw our images through, and instantly I'd cleared the first thousand images: they'd gone through and got captions and tags. You can see the caption, the link to the Auckland Museum collection record, the image, and then some tags. And this is in a format that we could quickly ingest into our source system and start putting these images online, taking that first cut and providing access to collections that had previously been hidden. But we obviously have that problem of some of the captions being a little bit naff. And so I thought, well, could we use this same kind of system to solve that problem? And I ended up looking at Amazon Mechanical Turk and CrowdFlower. Has anyone used or heard of these two systems before? Yeah, they're kind of the same as the other sites, but the jobs are tiny.
You're paying cents for really small jobs, and they're being distributed to thousands of people. So it gives you a huge workforce that you can access, who can work through these really simple jobs. In the end — I'll show you CrowdFlower first. This is essentially what we asked. We gave them that Google spreadsheet in a slightly different format. So now you've got the image; the caption; does the caption match the image, yes or no; interior or exterior, which was just something we wanted to check, I guess to make sure people weren't just clicking through the images really quickly; how many people are visible; and transcribe any text. And this was getting done for about 18 cents an image. So we could take all of those images that scored under 20% confidence from the AI system, throw them through this system for a small charge, and get them proof-checked. And then all the ones that came back 'no', we could maybe send back into the pile, into the backlog, to sit for that 75 years. Up until this point I'd kind of gone: NDF, yeah, we're going to use AI, it's going to be amazing. Then, oh, it's really hard. And then, yeah, crowdsource, gig-source, we're going to be amazing. And I'm still at this point. Then I went to Amazon Mechanical Turk. I didn't end up using it, and I'll talk about why. But here you can see the job that I'd signed up to do, just to give it a run: we've got a Tesco receipt, we've got the contents of the Tesco receipt, and they're asking people to fill it in. So it's really simple tasks that people can do. And of course, there are huge advantages to this. We really are just embracing the changes that digital allows us. We've got a global workforce that we can access at any time.
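The triage rule described here — publish the confident captions, send the low-confidence ones off for human checking — is simple to sketch. The 20% threshold and the confidence scores come from the talk; the record fields are illustrative:

```python
REVIEW_THRESHOLD = 0.20  # captions below this go out for human checking

def route_records(records, threshold=REVIEW_THRESHOLD):
    """Split AI-captioned records into publish-now and needs-human-review."""
    publish, review = [], []
    for rec in records:
        (publish if rec["confidence"] >= threshold else review).append(rec)
    return publish, review

# Hypothetical record shapes, echoing the examples in the talk.
records = [
    {"id": "obj-1", "caption": "a living room filled with furniture",
     "confidence": 0.94},
    {"id": "obj-2", "caption": "a sheep in front of a building",
     "confidence": 0.09},
]
publish, review = route_records(records)
```

Everything in `review` is what would have gone to CrowdFlower at 18 cents an image; everything in `publish` could be ingested straight away as a first-cut record.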
So I posted that job on Fiverr at night, and 20 minutes later, at 10 o'clock at night, someone had done it. I was accessing a workforce in Europe that was able to do this for me. We're only paying for the work we need, and we're gaining expertise at a fraction of the cost — five bucks. And it ticks all those keywords: it's lean and agile, and we're doing problem solving. And straight away we've got collections online that were searchable and usable. But, and obviously there's a but, that Amazon Mechanical Turk site gave me a bit of a weird feeling. I didn't quite like it, and I'll just go back to it. To do this transcription, we're paying 8 cents for the first 20 items, and then a one-cent bonus for every additional four. And I was sitting there going, man, that's so cheap. We could pay for this using the office swear jar. We could actually pay a lot more. And that money just seemed, I don't know, like there was something a bit dodgy about it. It seems like you're getting quite a lot for not very much money. So I started doing research into what it actually is. And of course, you can't go online and start searching about the gig economy without articles like these coming up: yeah, the robots have arrived, but they're actually made of flesh and bone. And some of the articles reckon it's actually really hard work. I tried, with both of the systems, to work out what I was actually paying someone. Because when you're paying someone 8 cents for 20 items and then a one-cent bonus for every four items, it's really hard to work out an hourly wage. It works out at about $1.45 an hour. And then I started feeling really bad. Because we have some really good codes of ethics in the museum sector, things that we're held to working in this amazing sector.
There's one that says we will provide appropriate financial rewards for the duties specified, for the museums out there. And also ICOM: there's a principle there that members of the museum profession should observe accepted standards and laws, and shouldn't be drawing the public into illegal or unethical practices. And just quickly, bearing in mind this is recorded and goes on YouTube: I'm not saying that these services are illegal or unethical or doing anything wrong. What I'm saying is I didn't know, and it's really hard to find out. And is this something we should be going into, should be trialling? Or is it something where actually we go, let's take a step back? Because we don't know where that money is going. We don't know who we're paying. We don't know if they have the same labour laws. Or, I don't know, maybe we're just funding someone in a little digital factory somewhere who's churning out my captions for me. And so after that initial 'this is amazing, we're going to really churn through this', it became, no, maybe it's a little bit scary, and maybe we need to take a step back. Maybe we need to look for the fair-trade version. Or, since we have the money, maybe we shouldn't be paying the absolute bare minimum of 8 cents, and should be paying slightly more for it. And so we still have the problem. I still have all those images. They're all tagged, and I need to check them before they go online. So naturally, after the gig economy, I went back to something slightly more traditional: crowdsourcing. Again, has anyone used the Zooniverse project builder? Yay. And obviously the Smithsonian Transcription Center — amazing services that give you a crowd of people who do this for free, who are dedicated, awesome people who want to help us. I guess the concern I had is that it takes a while to build the system and to get the crowd, and you're also competing against all these other awesome projects.
And my project was kind of a bit rubbish, because I just wanted you to say, does this picture match this caption? And I also wanted immediate results, because I'm part of the problem: I want everything now. And I was sitting at work going through this, sitting in an all-staff meeting, which is when all the staff meet, and I was watching everyone come in. And I realised that actually I had a crowd. I had this really awesome, passionate crowd all around me, who wanted to help, and who were in here getting paid and meeting all those ethical standards. A crowd of people I just had to work out how to engage. There's about 300 of them, and they all work at Auckland Museum. But the problem is they're all busy. They're a beautiful bunch of people, but they're already cataloguing images for the organisation, they're making digital content, doing research, staffing the desks. So they're already busy. How could they help with this really small task? How could they help us get through that backlog? And then I remembered something. I remembered my graph. And I remembered that a bunch of them were doing nothing here at 10.30, because they're just drinking tea. So what could I do in this space to get them to help me? And I went through this kind of weird phase of thinking about it. I went and stood in the staff room, and I thought, well, here are all these people, coming and going, looking for things to do. And at one point I was designing a little tower where people would drop their teabags, and that would decide which was the best caption. But that would have involved me having to empty out teabags, and there was still quite a physical element. So I was literally standing in the staff room thinking about how I could utilise this space.
How could I take the people who are in here, mindlessly flicking through their phones while they have their cup of coffee, and get them to help us tackle the backlog, help us work with the collection? And it kind of struck me: there they all were, on their phones. And so I started looking at chatbots. I've just got my facts here. This is from a study which says we spend five hours on smartphones every day, and about 65% of that time is spent on communication-related activities like social media, texting, emailing, and phone calls. That's three hours 15 minutes. Three hours 15 minutes every day where we're sitting on our phones just browsing and flicking through, mostly on social media apps. I'm going to use Facebook as my example for this. So a chatbot is essentially a really simple computer program that lets you play a sort of volleyball — or I think they use tennis — of preset conversations: we throw you some text, you give a response. It's a bit like the old pick-a-path books you had as kids. You just have to build that map, then you give it to people and they pick their own path. And as they do that, we can collect data and use it to enrich our collections. So I looked at three examples: Chatfuel, Dexter, and Sequel. And to be honest, all of them are awesome. I went with Chatfuel just because it has some natural language processing, which means that as users type in their text, we can recognise roughly what they're saying and help them pick their path. And it also lets you add some really funny little touches, because people ask funny questions and you can give them funny answers. The other two do the pick-a-path thing really well too. So what does that look like? It really is as simple as building these text blocks, and you can take the user attributes that come from Facebook and build them in.
And you even pick how long you want those little typing dots to appear for. And we could set up some buttons to help people through. And as people enter text, we capture it as user attributes that can then be emailed and stored in a Google Sheet. So we could really simply create this Facebook bot, already on everyone's phone because it uses Facebook Messenger, that people could talk to. And at the same time they'd be adding keywords — the bit that isn't shown here is the section that asks 'does this caption match the image?', because we'd send them the image. To make this work, I realised a photo of me saying 'help us tag collections' wasn't going to work. So I decided to use Carbine, the stuffed horse from the collection, as my kind of mascot, and put up a bunch of posters around the museum asking people to talk to a horse. I gave the chatbot the personality of Carbine and invited staff to talk to it and help us tag some of our collections. You can scan this — it's Facebook's version of a QR code; I don't know if it's actually called a QR code, but it must be — and jump straight to the chatbot, which is still up and running. Or you can search for 'Carbine the Image Tagging Horse'. And I made sure that I stuck these posters up between the dirty dishwasher and the bins, where I knew people congregate waiting for the microwaves. For us that's our central point. And in a 10-day period, 33 members of staff helped complete about 120 images. Which, in 10 days, isn't that bad? That's work that was never actually going to happen otherwise. And the great thing about it is that because these were staff members, we didn't have to upskill them on anything. They know how to catalogue a record, or most of them have some idea of how to do it. And we didn't have to throw each image past multiple people, because we could accept that they knew what it was.
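The pick-a-path mechanics behind a bot like Carbine can be sketched as a tiny state machine. This is only an illustration of the idea, not Chatfuel's actual data model; the state names, prompts and replies are invented:

```python
# Invented state names and prompts; Chatfuel's real blocks differ.
FLOW = {
    "caption_check": {
        "prompt": "Does this caption match the image?",
        "next": {"yes": "add_tags", "no": "thanks"},
    },
    "add_tags": {
        "prompt": "Great! Type any extra keywords you can see.",
        "next": {},            # free text: any reply falls through to default
        "default": "thanks",
    },
    "thanks": {"prompt": "Thanks for helping tag the collection!"},
}

def step(state, reply):
    """Return the next state for a user reply, pick-a-path style."""
    node = FLOW[state]
    return node.get("next", {}).get(reply.lower(), node.get("default", "thanks"))

# Walk the flow with a scripted conversation, storing replies the way
# a bot builder stores user attributes (later pushed to a Google Sheet).
answers, state = {}, "caption_check"
for reply in ["yes", "gothic revival architecture, Dunedin"]:
    answers[state] = reply
    state = step(state, reply)
```

The whole bot is just that map of prompts and transitions; the enrichment data is whatever accumulates in `answers` as staff pick their path.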
And if a member of staff says this caption doesn't match, we can take that as given. It's not like when we're paying eight cents for something, where we'd maybe have to run each image past multiple people to make sure we were getting the right answer. You're more than welcome to have a look, but that's what it looks like as it comes up. We throw someone an image, we show them what the AI captioned it as, we ask them whether the caption matches the image, and then we ask them for additional metadata that we'd add in. And so if we go back to this one — what the AI captioned it as, remembering our awesome train and horse and pier tags from before — when this went back to the staff, of course we got a scientific name, why wouldn't we? And we got versions of the modern names and common names for the grass. And we also got some really cool things. I haven't included them here, but people — the non-cataloguers, the people who were just coming to this because they worked in exhibitions or in the visitor services team — were adding things like 'beautiful' or 'pretty', you know, 'blue sky' and 'balcony'. The sort of terms that wouldn't necessarily be captured through our standard cataloguing process. And here we have one the AI captioned as 'a castle-like building in the city'. Most people agreed that was actually quite good, but people also added the additional data: we got addresses and names, and Gothic Revival architecture, and Dunedin. So we were utilising that pool of knowledge that we have in our staff, in a fun and engaging way, asking them to help us get through some of this stuff. Yeah, I didn't put the baby one in in the end, because we just said that was a no. And so, the so-what. That journey there: I took the inspiration from NDF to go and do some AI work. I got stuck.
I went and found the gig economy and used an online marketplace to try and help solve the problem. Realised that was a bit dodgy, a bit ethically murky, and I wasn't sure it was something we as a museum wanted to be involved in. Went back to crowdsourcing, decided not to use that, and settled on niche sourcing, which is a term I've just made up: using the crowd, that niche audience, that already exists in the museum, and asking them to help us. And so really for me it was like putting a jigsaw together without knowing where all the pieces were — some of the pieces are far away, and maybe some of them are from a different puzzle. But I do think this is the kind of stuff where we shouldn't be sitting idly by and letting these technologies pass us. We need to be jumping in and giving it a go, seeing if we can use them. And as some of the talks today have said, give it a go; if it fails, that's fine, at least we tried. And so, as I said at the beginning, that was the original talk I'd planned, but I just wanted to see what people thought about those ethical issues: around using the gig economy, the global economy, even around using things like Facebook. I mean, if you use Carbine the Image Tagging Horse, you can type the word 'creepy' and it will send you all the data that we've collected from your Facebook profile, which gets provided with your user attributes. It's not too much, but it's still kind of creepy. It's your profile picture, your name, where you're from. Is that the kind of information we want to be taking off people? Should we be using these kinds of sites? And yeah, the ethical question: should we be using these tools? And has anyone got any other examples of where they've been using AI, and how they've managed to cross some of those bridges around accuracy?
And do we still need to have a human come in and proof-check everything? Yeah, and I'd say the only other comment that came through a lot, through all of the captions and through the chatbot, is that next time, if you're going to make a chatbot for a museum audience, you probably shouldn't use a horse. We should know our audiences. Go for a cat, everyone. That's honestly the most common comment that came through, but thank you. Awesome. Yeah, so we ran it for 10 days just because we wanted to trial what it would look like, realising this isn't a long-term project — it really was just a pilot. And in those 10 days we had 33 people come and tag about 120 images. We found that it was the same people, just scrolling through and doing it in their breaks — we could plot the times, and it was during breaks. And 33 people is still 10% of our staff who got involved. I think we first need to get through some of those challenges around using these platforms, and I guess decide as a museum that it's something we want to be doing. For me, this little pilot showed that we could use an existing system — all those systems I showed you have free-tier accounts; I think up to 500,000 users you get a free tier, and that's not going to be a problem for us. It was just a way of testing: would people do this on their phones? Would they engage? And what was the quality of the data coming out compared to what the automated systems were creating? Going forward, I think we want to get into crowdsourcing more and look at how we can utilise those existing big platforms like Zooniverse. This was just a way of us kind of cheekily checking whether it would work. I mean, the great thing about these systems is you can change them on the fly.
So we were able to do some really simple testing of the language: when the first couple of users said they hated one bit, we were able to quickly jump in, change it, and really slim it down so that people could just play along, and see if we could make it a little bit fun.

I was just wondering, given that you're using these third-party services and you're effectively uploading your data to them, what terms might they ask of you with respect to rights in your images?

So, rights, and the rights management of uploading content, was something we were really concerned about, and what we're actually doing is just linking to the image via our API. We're trying not to upload content, which was one of the reasons, when we looked at those three different systems, we wanted to avoid anything that was storing the data. We didn't want to give Facebook all of our collection images; that's not what we're about. So yeah, it's definitely something we need to be mindful of, and to keep checking what those services and their terms are. And that's why I think — I mean, at this point I don't have the answer — it's part of that discussion when we start looking at these: what do we need to be aware of in this sphere of tools and technologies? It's the same as the point from yesterday about the shiny tool: we're quick to jump on it, but we need to make sure we're taking everything into account. It's the same as when we uploaded things into CrowdFlower, and one of the reasons we didn't go forward with Amazon was that it was just so technical, we didn't want to get into that. Thanks, Anna.

What's your plan, pretending that you're going to go forward, with those hundred-odd records that got tagged? What's your plan for that data? How is it going to go back into your collection management system in a meaningful way?
There were two ways we were looking at dealing with that. One is a straight ingestion into our source system, but tagged as an automated record, flagging to the user and to our collection management team that this has never been seen by a human: it's just a stub, skeletal record there for access. The other option, once someone has gone through it, is adding them as user tags. In the same way that we allow users to come in and add their own commentary to records online, we remove the museum from it: we're saying we didn't write this, it's essentially a user contribution, and in this case the user just happened to be a Microsoft AI system.

I think the other thing to bear in mind is that a lot of this tagging is about discoverability, and you can't get feedback from people if they can't see what's there. So maybe it's just the first step.

That's it. It's that first cut, just to get the data online so that people can find our content, and then, you're right, they start that feedback loop. At least those records, even though they said zoo and sheep and farm, were findable. At least someone could use them, remix them, and tell us if they were incorrect so we could fix them. Better than sitting in that backlog.

We did a similar thing with DigiVol, and we constructed something to actually get people to tag, even though we use a controlled vocabulary. We've just run a 12-month pilot under an agreement between CSIRO, the National Gallery and us. Interestingly, on what you said, we actually got a massive fail in terms of the data we got back. But what we did find is that it was a really great access project, and I think there's a massive opportunity in what you're doing there.
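The two ingestion routes described above, flagging machine-made records as automated stubs, or storing the output the way public user tags are stored with the AI named as the contributor, could be sketched roughly like this. This is a sketch only; the field names are hypothetical, not Auckland Museum's actual schema.

```python
# Sketch of the two ingestion routes for machine-generated tags.
# All field names here are illustrative, not the museum's real schema.

def ingest_as_stub(record_id, ai_tags, source="Microsoft Cognitive Services"):
    """Route 1: create a skeletal record flagged as never human-reviewed."""
    return {
        "id": record_id,
        "tags": ai_tags,
        "provenance": "automated",   # surfaced to users and collection staff
        "human_reviewed": False,     # stub record, published for access only
        "tag_source": source,
    }

def ingest_as_user_tags(record, ai_tags, source="Microsoft Cognitive Services"):
    """Route 2: attach tags the way public user contributions are stored,
    with the 'user' identified as the AI system."""
    contributions = record.setdefault("user_tags", [])
    contributions.extend({"tag": t, "contributor": source} for t in ai_tags)
    return record

stub = ingest_as_stub("PH-1234", ["zoo", "sheep", "farm"])
enriched = ingest_as_user_tags({"id": "PH-5678", "title": "Farm scene"},
                               ["sheep", "paddock"])
```

Either way, the key design point is the same: the provenance travels with the tag, so a later human pass (or a user correction) can promote or replace it.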
So we were able to take it out to a market we never would have been able to capture otherwise, one issue being that we're in Canberra. And second, a lot of the people on DigiVol are natural scientists, people interested in natural history collections, which is not generally an audience a visual arts collection can engage with. So more than anything, my suggestion, and I'd love to talk to you about it further, is that sometimes you can see these projects as getting the collection open, and sometimes that's as much of a benefit as the data you actually get out of it.

Yeah. One of the silly things about this was that we had people from the exhibitions, visitor host and marketing teams looking at our collection items who had never seen them before, and they were just like, this is awesome. And there's also that reaction of "I can catalogue, I'm a collection manager now, this is easy," so we're managing that expectation as well, and making sure we walk our collection teams through the fact that this isn't replacing them in any way, shape or form. It's just helping enrich.

Hi. I can also say it's awesome, although that wasn't what I wanted to ask. I guess my question's really my problem, not your problem, but I wonder how many people in this room would be able to pay the eight dollars in their institution. I have a manager who's got a P-card, but we'd have to find something to charge it to; we don't have that magic line to charge it against. And there's a lot of things out there that I'd be happy to pay eight dollars for, or sixteen dollars, or whatever. But can other people just do that? Have they got the credit cards?

I guess I didn't allude to this, but because this was a side project I did mostly in my own time at home, the quickest way around that was using my own credit card to get it done.
And yeah, that problem of how we fund this kind of thing: I've got nothing more to add on that.

Hi, thanks for that. I'm just curious about the effect in institutions of potentially concealing this sort of work in things like break periods, or in the gaps between existing work, and what effect that has on selling to management, or to more senior people in the organisation, the idea that there is a problem of capacity here that might need some longer-term solutions.

Yeah, I'm not entirely sure. For us, we chose the lunchtime thing because, to be honest, it felt right. It was fun, and we noticed there was this group of people sitting browsing the old magazines that have been in the staff room for the last 12 years, and we just thought, is this a way of getting in front of them? I think it helps to be able to pull out some of those metrics and that data about what our systems are currently doing (4,000 updates across a team of 30 is pretty insane when you see it), and to show those massive drops when people are exhausted and need their coffee and tea. I don't know; has anyone else got a suggestion of how to do that? Open to the floor.

This may not be that suggestion, but hearing you talk about niche-targeted crowdsourcing makes me wonder about those times in our institutions, especially the big ones like Te Papa, where there are queues and people waiting for things, and about the opportunity to look at your immediate audience: people waiting for their coffees, people waiting in lines for stuff, literally here at Te Papa. There's an opportunity there to get an engaged audience to feel like they belong and are participating in the museum in those environments.
And again, because of those upload cycles, we can show that it's quite immediate: their records appear online quite quickly. One of the ideas around niche sourcing we were playing with was, where we have, say, botany students coming in, could they help us tag? Because we've got scientific names in those pictorial images. Could we find those groups that are already coming into the museum with really specialist knowledge, show them something that has nothing to do with their specialism, and see if they can enrich it; in that case, adding botanical names to pictorial images? When we have collections of botanical prints, or of fish, which we have, could we show them to people who wouldn't normally see them? They get exposure to our collections, but they also help enrich them. I think it's a really interesting question, how we start using that niche audience and putting our content in front of them.

On a similar subject, at the Christchurch Art Gallery we used the Friends of the Gallery and saw some real benefits from their help. We brought them in and gave them scones.

Awesome. Just on a personal note, I thought the tea-time thing was really interesting, because of the sacredness of my coffee time in the morning. I'm a person who not only works in a museum but also crowdsources other museums' stuff outside my work hours; I'm one of those obsessive types of people. But if you took over my staff tea time, I'd be really like, oh! So it's just interesting that you've got uptake for that as well. Coming into November and December, it's also a bit of a burnout time for people in our sector, so it's worth recognising that too. That said, I also wanted to say it's perfect for crowdsourcing, so you should put it out there: I'd love to contribute in the evening. Awesome. Yeah.
And we were inviting people; I definitely wasn't standing with my stick next to the dishwasher getting people to sign up. It was really interesting to see that the first few people to take it up weren't the collection staff, who I thought would instantly be checking in on me, like our pictorial team; it was just people who work in the building. Like I say, the objective was that you can hopefully tag an image in the one minute thirty it takes to heat up a pot noodle or microwave your quinoa salad. So let's try to use that dead time while people are waiting for the coffee to brew.

I think over this side. Hi. Okay. I'm interested in the misclassification of the person with a baby as someone holding a baseball bat. That is unusual, rather like the image of a baby being misinterpreted as a mobile phone the other day. The baseball bat specifically struck me as slightly incongruous in a New Zealand context, because it's not really a New Zealand sport, and it struck me that perhaps this indicates the AI hadn't been trained on an appropriate corpus.

Yeah, that's right. And we've seen that with another project we've just done, extracting longitudes and latitudes of places from text, again using Microsoft Cognitive Services, where everything defaulted to the Cambridge in America, not the local one. So the question is how we can help train our own models going forward.

That's what I was going to say. It seems to me that with a hybrid approach, where you then train these AIs on the captions you've already created, either by your actual cataloguing staff or through a crowdsourcing process, you can get the best of both worlds.

That's it. And as we said, we've got a million images in that collection already online, so we've got that data ready to throw across and start building with.
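The "wrong Cambridge" problem described above is a common geocoding ambiguity: a generic service ranks candidate places by global prominence rather than local relevance. One common mitigation, sketched here with made-up candidate data rather than any real gazetteer lookup or the museum's actual pipeline, is to re-rank matches toward a preferred country before accepting the service's default:

```python
# Sketch: prefer New Zealand matches when a place name is ambiguous.
# The candidate data below is illustrative; a real pipeline would query a
# local gazetteer alongside the generic geocoding service.

CANDIDATES = {
    "Cambridge": [
        {"name": "Cambridge", "country": "US", "lat": 42.37, "lon": -71.11},
        {"name": "Cambridge", "country": "GB", "lat": 52.21, "lon": 0.12},
        {"name": "Cambridge", "country": "NZ", "lat": -37.89, "lon": 175.47},
    ],
}

def resolve_place(name, prefer_country="NZ"):
    """Return the preferred-country match if one exists, else the first hit."""
    matches = CANDIDATES.get(name, [])
    if not matches:
        return None
    local = [m for m in matches if m["country"] == prefer_country]
    return local[0] if local else matches[0]

# With the country bias, "Cambridge" resolves to the Waikato town rather
# than the globally prominent US default.
result = resolve_place("Cambridge")
```

The same re-ranking idea extends to training a local model: the preference weighting is just the simplest stand-in for a corpus that actually reflects New Zealand place names.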
And you can see there's definitely... oh man, that's going to be this year's thing I take away from NDF: I need to build the New Zealand model. Because you can see, with architectural subjects and so on, there's definitely a niche gap there that we could start filling.

Hi, David. Just on the point of invading people's tea breaks and so forth, I suspect there was a bit of novelty in this experiment. But it does suggest to me that there's an opportunity for us as museum managers (and I am one of those managers, so I'm saying this with full disclosure) to think about the rigidity of people's job descriptions. There's an opportunity across an organisation to foster interest in other people's areas of work and make it a legitimate part of the job, so that 15 minutes of anybody's day could legitimately, as part of their work, be spent doing this kind of thing: a way of finding out what else is in the collection, and what the hopes and aspirations of colleagues across other parts of the organisation are. Realistically, 15 minutes less of your core task is not going to make a hell of a difference, but it might inspire you to be more connected to the place you work for. So I'd be really keen to follow that up.

On that point, and coming back to the finances and the fact you had to use your personal credit card to be a bit innovative: following on from the leaders' talk, I'm just wondering whether a clear message has to be sent back to our managers about the financial barriers to playing around and innovating with even a small amount of cash, and how hard that is. I too am a budget manager, and the hoops you have to go through to spend eight dollars are quite extensive. How can we as a sector, within our own institutions, have a small pot of money that we're allowed to play with? Innovation doesn't have to be the Mahuki-style $20,000 kind; it can be five bucks.
And if it all goes horribly wrong, it was only the price of a coffee that we've wasted. Yeah.

Hi, two things. First, to answer your point, our council has just developed an innovation fund. If you can come up with a clever idea that costs 200 or 300 bucks and you think it'll save a whole lot, or head off a problem that's getting out of hand, they will literally hand the cash over; you can go and buy the staff a bike or whatever it is you think will help. So that's one solution, and it's quite an attractive one to councils, because of course they can then tout the ideas as their own. And secondly (it's Adam and myself, Paul; we're from Masterton) was there any intended significance in that band photo you showed for the gig economy? Because that's our senior archivist, and we're a bit weirded out.

No way. No, I used Digital New Zealand for all the images, and I just searched "gig". That one? The guy on the right? He's one of your contributors? He's Neil Francis. Oh, there we go. Honestly, that was just a Digital New Zealand find, page one. Sorry about that. No worries.

As more collections get opened up and become more digitally accessible, that's great. But for me as a researcher, having something that just says "house" is not helpful, do you know what I mean? If you search that on Digital New Zealand, you get 100,000 pictures. As a way of actually researching, I get worried sometimes that losing the specialist knowledge from those descriptions is going to hurt us. For people who just want an image of something, maybe for a Christmas card, it opens up the collections, but in some ways it also really limits us.

I think it comes down to how we start showing this information online.
We need to know whether a collection record has been through the hands of a curator, collection manager, collection technician or volunteer who has made all those links and connections that make search powerful, you know, "show me everything depicting x from y". Maybe when something has been generated by a process like this, there needs to be a tick box to remove it from search, because we know that if you're looking for photos of babies, the baseball bat one isn't going to help. So it's about showing that a record is a stub created automatically, and then allowing users to exclude those from their search if they don't want them, and to use only the records we know have gone through the standard museum cataloguing process. We kind of have that on Auckland Museum's collections online with a scale of completeness. Nothing gets to 100%, but the records at around 90% have had that curatorial research and been used in exhibitions, and they're kind of awesome; the lower-percentage ones, as you look through, you know are stubs, there just for access, and you can scroll right past them. Yeah, awesome, cool. Cheers.

I'm curious whether you did this work in the presence or absence of an institutional policy around user contributions to metadata.

We have Online Cenotaph, which is a database of 200,000 servicemen and women that allows contributions. So we'd already started having these conversations (we've been having them for the last three years) around allowing people to add their own content to our collections in the form of biographical notes, new photos and other data. And with our own collections online we already allowed that tagging and enrichment; this is kind of just the next step of that.
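The completeness scale and the tick box for excluding automated stubs, both described above, could combine in search roughly like this. The scores and records here are made up for illustration; this is not the actual collections-online code.

```python
# Sketch: filter and rank search results by completeness and provenance.
# All records and scores below are illustrative only.

RECORDS = [
    {"title": "Portrait, mother and child", "completeness": 0.9, "automated": False},
    {"title": "Person holding baseball bat", "completeness": 0.2, "automated": True},
    {"title": "Farm scene",                  "completeness": 0.5, "automated": False},
]

def search(records, include_stubs=True, min_completeness=0.0):
    """Optionally hide automated stub records, and rank the rest so the
    most fully catalogued records surface first."""
    hits = [r for r in records
            if r["completeness"] >= min_completeness
            and (include_stubs or not r["automated"])]
    return sorted(hits, key=lambda r: r["completeness"], reverse=True)

all_hits = search(RECORDS)                      # stubs included, ranked last
curated = search(RECORDS, include_stubs=False)  # tick box: hide automated stubs
```

Ranking by completeness keeps the stub records findable (the whole point of publishing them) while ensuring the curated, exhibition-quality records surface first.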
So we'd started having those conversations, and this was just the next step in the progression: instead of a user sitting at home typing it, what does it mean when it's a member of staff, or a computer, doing it?

Cool. I actually didn't think we were going to make an hour; I thought this was going to be 33 minutes and we were all going to be out drinking coffee. But I don't know if anyone's got a last question... oh, two. Howdy. We got there.

It's not really a question, or maybe it is. I'm just wondering whether the NDF community itself could become a niche community to help people, especially from smaller institutions, to come up with ideas and iterate and test them with a specialist audience, who can then learn in the process from what other people in the community are doing.

To be honest, if we'd only got to 33 minutes, I was going to force you all to use it, just to get another 50 people tagging my images. But no, we do have this niche; it's the same as using the museum, and you would all make great critics, because we're all going through the same things.

Yeah, I was just interested in the gig economy aspect: essentially, if you paid those 8 cents, you're sort of contracting out some of your work, and I wondered whether that was something you'd thought about discussing with the union, or whether you were going to pursue it further.

Yeah, so after the first little bit, I did check in with our P&O colleagues, our People and Organisation team, just to make sure I wasn't doing anything I definitely shouldn't.
And it's being aware that even though something is only costing 8 cents, you're right, you are essentially contracting out: you're paying someone to work on a museum collection, and that's kind of weird. Which is why, after that inflated "this is amazing, I'm going to do everything, it's $5 and I've solved 75 years' worth of work" moment, there's a reality check: there is a huge ethical and legal problem here. With the terms of reference on those sites, it's really hard to work out exactly who you're paying, what you're paying them, and what hourly rate you're effectively giving. Maybe until we know those things, we should take a step back and look at other solutions, where we might only be looking at a New Zealand audience, or only looking to pay a fair wage.

Yeah, I think that's a good reminder that our work may have ethical dimensions that we don't always think of immediately, so thank you.

Awesome, thank you. Cool, I think... oh, one more?

Some of those platforms have profiles of the people doing the work, and that might be a way of looking at it ethically and evaluating with more information; Upwork and so on.

Awesome. And sorry, David, did you?

Just to add to that: thinking about contracting out, and whether we're taking away work that our staff would have done, we've already dealt with that with the 300 volunteers that we have. We have quite clear guidelines as to what's suitable for volunteers to do without taking away from paid work, and this sits within that context. It's territory we've already covered; it's just a different version of it.

Awesome, cool. Thank you very much; I'm actually amazed we made an hour, I can't believe it. Thanks for listening, and yeah, have a good one. Thank you.