This talk is called Art Seeker, and it's about using headless Drupal to power an AI art recognition tool that we run. Before I start, an acknowledgement of country. I acknowledge the traditional owners of the land where we meet today, the unceded lands of the Gadigal people of the Eora Nation. I pay respect to Aboriginal and Torres Strait Islander Elders past and present, and extend that acknowledgement to all First Nations people here today. In the spirit of reconciliation, I acknowledge the immense creative contribution First Australians have made to the art and culture of this country.

That's me, so you can trust that this is the right speaker. I'm the Digital Transformation Manager over at the Queensland Art Gallery | Gallery of Modern Art, and that's me in front of the Art Gallery. I live on Bundjalung land right now. Here is a picture of our art gallery. It sits on Kurilpa Point on the banks of Maiwar, and you can see this beautiful artwork on the side of the building, called Night Life, by Terrell, which lights up in an 80-minute slow-looking experience. I bring up physical place because it is so important to the digital project I'm talking about today.

I also want to talk about our institution, because this solution is grounded in the requirements of my organisation. Our vision at QAGOMA (I'll say QAGOMA rather than Queensland Art Gallery Gallery of Modern Art, only one G) is to be Australia's most inspiring and welcoming gallery, and a global leader in the contemporary art of Australia, Asia and the Pacific. I bring that up because we are a contemporary art gallery. We are not an art gallery of Queensland.
We are a gallery of Australia, Asia and the Pacific; it is the confluence of those geographic regions. The purpose of what we're trying to do is connect people with the power of art and creativity. Again, this vision and purpose is what's driving this technology, together with the fact that there's a physical place where we want this digital experience to happen.

I run a program called the Digital Transformation Project. Ultimately there are two major components to it, one of which is to digitise our collection. When you go to an art gallery, only about five, at most ten, per cent of the collection is on display; 90 to 95 per cent is behind the scenes. So what we want to do is photograph those artworks, working with the artists to capture the best portrayal of the work, one that conveys their artistic meaning, and make that available through digital channels. We also want to improve our back-of-house processes so that it's easier for people to connect with the digital stories that represent this art.

So this is my system architecture. It's a lot of specialist applications connecting together, from our point-of-sale software to our event ticketing, and we're not going to dwell on it. There's no code in this slide; I'll talk about the technical architecture later. What's important is where our Drupal solution sits: above where a lot of our content is captured. We capture content and put it in our digital asset management system. We study our art and record it in the collection management system, which confusingly uses the acronym CMS. That all comes together in this Drupal site, and it's this decoupled architecture that enables what I'm about to share.

So we had a problem space: how do we make it easier to access the art and digital content when you're on site? How do we remove the barriers between the work and the content?
How do we make art more accessible? In the past, lots of art museums have looked at solutions like putting QR codes next to works, or little specialist apps where you punch a number in and get some content back. But they're not very welcoming things. If you're intimidated by the art and don't understand it, getting up really close, scanning a code and then visiting a website requires you to have already engaged; it's one of the more passive experiences. Then there have been things in the museum space like beacons, which push content to you, but I wanted visitors to engage a little. So that's where we came up with this.

We're going to start with a little demo, because that will illustrate the point better. So let's see Art Seeker in action. I've got the web app up. I activate the camera, get the work inside the crosshairs, take a quick photo, and it pretty much instantly recognises it. Yes, this is the work. It then comes back and starts returning information, and we get a bit of a description about it. The colour profile is really cool: the app simplifies the artwork down to its five most dominant colours, then searches the rest of the collection. It takes a couple of seconds to find any other works that have that same balance.
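The colour-profile idea just demonstrated (reduce a work to its five dominant colours, then find other works with a similar balance) can be sketched roughly as follows. This is a hedged illustration, not the gallery's actual code: the function names are invented, the k-means is a naive pure-Python version, and the pixel data is synthetic rather than real collection images.

```python
# Illustrative sketch: dominant-colour extraction via naive k-means,
# plus a "balance of colours" distance between two palettes.
# All names are invented for this example.
import random


def dist2(a, b):
    """Squared Euclidean distance between two RGB tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def mean(points):
    """Component-wise mean of a list of RGB tuples."""
    n = len(points)
    return tuple(sum(channel) / n for channel in zip(*points))


def kmeans_colours(pixels, k=5, iters=10, seed=0):
    """Cluster RGB pixels into k dominant colours (fixed-iteration k-means)."""
    rng = random.Random(seed)
    centres = rng.sample(pixels, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(range(k), key=lambda c: dist2(p, centres[c]))
            buckets[nearest].append(p)
        # Keep the old centre if a cluster ends up empty.
        centres = [mean(b) if b else centres[i] for i, b in enumerate(buckets)]
    return centres


def palette_distance(pa, pb):
    """Symmetric distance between two palettes: works with a similar
    colour balance score low, regardless of palette ordering."""
    def one_way(p, q):
        return sum(min(dist2(c, d) for d in q) for c in p)
    return one_way(pa, pb) + one_way(pb, pa)
```

With palettes precomputed for every digitised work, "find works with this balance" is then just a sort of the collection by `palette_distance` to the scanned work's palette.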
So it's not looking for a single colour, it's looking for a balance of colours. We ask "how does this make you feel?" and you can leave a comment in a discussion thread. We also surface any resources and materials we have about that particular work, and you can browse them right there. And we can show what's nearby: if you don't want to go through the effort of scanning again (because, you know, you don't want to wait those 200 milliseconds), you can browse everything else that's in the room and find content about those works too. So that's what we ended up developing.

Now I'm going to go through a timeline of how we got there. Back in December 2021 we took on an intern from QUT, someone who had been working on autonomous cars, and I wanted to look at the role of AI in removing cultural bias from image classification. The reason: back in 2021, if you threw any kind of Asian, Pacific or Indigenous art at these AI image classifiers, they had no idea what was going on. Going back to our mission, contemporary art of Australia, Asia and the Pacific, these classifiers were built on out-of-copyright Western art: useless for us. So that's why we wanted to look at what we could do.

Here are a few of the experiments we ran. First we tried to train some models ourselves, doing the classic machine-learning thing where you tag up lots of different works so the model can identify things. It was okay, but because there are so many different artistic styles and so many different nations, it really wasn't good for Indigenous art.
Then we looked at Asian art. Here are some Japanese prints. On the left is a heat map of what an untrained model identified as the points of focus inside the artwork, and on the right it's correctly identifying more things of interest, objects within the art that you could search for and find. The only issue is that the only way we were able to validate this was that a lot of US museums have digitised their collections of Japanese prints, so we could throw lots of information at the model to validate it. And that only really worked because of the similarities: they're prints, and often we held the same prints that they did. So maybe not so good.

Then we tried something you saw a bit of in the demo. Rather than exploring by one or two colours, what about a different way of exploring art, through balances of colours? One of the issues is that if you're not an art historian, and I'm not, it's very difficult to look at a work and say "I know why I like that." So we thought: if something is visually appealing to you, and we can analyse the balance behind why it's appealing, maybe you could search by aesthetics. That was quite promising.
Then, right towards the end of the project, we found this: if we built a model of what the art looked like from different angles and then threw some really bad photos at it, it was very good at identifying them. That became the genesis of where we headed next. All of that was essentially an internship project with a PhD student.

When that finished in July 2022, we gave ourselves a deadline of April 2023 to build a progressive web app on a shoestring. We wanted to narrow down to the right algorithm, because recognising a work means using an AI model: how to train it, how to scale it, the UX design, the technical architecture, and what actually makes it engaging. Why we aimed for that date: in April 2023 we launched Creative Generation, where we invite the top Year 12 art students in Queensland to show their artworks. It's a younger audience and there were only 32 works, so we could have a one-to-one relationship, talk with the students, and ask what they wanted to say, so that when a visitor scanned a work the artist could actually talk back to the public. The showcase night went really well: it got scanned a lot, and a lot of people left comments.

Here, excuse the quality, is footage from the opening weekend, which gives you an idea of how it was going on the opening night. This is a quick demo of the Art Seeker app. The idea is that we just point the camera at a work and it identifies it. So I'm in the app, I line up the work as best I can, and it pretty much instantly finds it. Then I can get the colour information and the artist statement, I can leave a message for the artist, which goes into moderation, and I can find out more about the work. So that's good for a flat work.
So that's good for a flat work So let's try that out now on something a little bit more three to go Instant so there we've gone and once we've seen a work we can then look back at history We can potentially favorite a work if we want so I'll put that in my favorites And that way I can kind of personally collect a show as I go through That was the idea to that you collect collect the works that you really loved and take it home with you rather than taking lots of photos Of labels and so on so that you had this personalized experience for when you got home So this note you already heard about you now where we are now is we did a soft pilot for A new show called small figures It was I'll go in the lessons later It wasn't the right show for it because these are these tiny things there isn't much information So you scan it you get the novelty one thing I learned in this process is when you get visitors using this up They're blown away for 20 seconds They're like wow it just instantly recognized it and then it's just standard like I should be able to do this everywhere, right? Even though it's really hot anyway So we've quietly launched this now and that works everywhere in the gallery So anyway apart from a ticketed show because copyright reasons every work in display in any of our buildings will work with this app So now I'm going to go behind the scenes was it built with well, we are here at Drupal South so it does I'll speed it up a bit Effectively we have a collection management system and digital asset management system that feeds into a Drupal 10 site That's hosted on platform Then there's an expo progressive web app, which is a react native opinionated framework You use that to take a photo of an image of a of a painting it uses it hits AWS An API endpoint that hits an inference engine that finds the most likely match to your artwork sends an ID back You go to Drupal get all that information You have your result So that that's the guts of it. 
I'm now going to talk about each part. For Drupal 10, we have a traditional monolithic Drupal 10 site which runs our Collection Online. You can go and visit it if you want; I highly recommend it, it's very nice, I may have made it. It is the anchor and source of truth for the whole project. Every day, when we make changes in our collection system, they flow into here. When there are new images in the digital asset management system, a webhook fires and populates the site. Additionally, content publishers use this site to make things called digital stories, which weave together the different art pieces. We've got a few hundred pages of these now, and they create context beyond just the raw works, so we bring them in as well.

More recently, in addition to that monolithic site, we also have an API that provides all of that content. So on top of the monolithic website, everything is available via an API, and that's what we use for the app itself. It is now delivering hundreds of thousands of API lookups, which affected our hosting, and I'll talk about that a bit later. Most of those go through a custom module: the API we built.

Now, why did we use Drupal 10? There are lots of apps out there that galleries can buy off the shelf, but they all involve copying and pasting your information into that app and maintaining it, and that is guaranteed to work only for as long as you've got project funding. What we needed was to build something that sat on top of everything else we're doing, so that every time a registrar adds new information into the CMS, the collection system, it flows through automatically.
It updates Art Seeker and it updates Collection Online. So Drupal is the middleware: it aggregates content from everyone's existing workflows, and rather than adding work, it enhances everyone's workflow and enhances the visitor experience.

We chose Expo as our opinionated React Native framework, and it's very cool. There are two of us on the project. I work part-time and look after the rest of the stack you saw earlier; we've got Drew here from Gaia Resources, shout out to Gaia, and I have one day a month with Gaia to do my code reviews. Then Nick, who you'll see shortly in another video, builds the front end and does the AI work. I highly recommend Expo if you want to do some React Native work; it was really fun getting into it. We had some issues with the camera library on old versions of Samsung Android that were nightmarish, but apart from that it's been pretty seamless.

I'm going faster than planned to make up time; I did have more to say. Next up is our fast.ai inference engine, and this is where I get a little bit into the AI. When I say AI, I'm talking about deep learning networks, not large language models. During the early stage of the project we used a Siamese neural network. The way to think of a Siamese neural network is that it's a bit like fingerprinting: you have a big database of fingerprints and you find the closest match. The issue is that it does double lookups.
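The fingerprint analogy can be sketched as nearest-neighbour search over embedding vectors. This is a hedged illustration only: in the real system a Siamese network would produce the embeddings from photos, whereas here the vectors are hand-made so the lookup logic itself can be shown.

```python
# Illustrative "fingerprint" lookup: rank artworks by cosine similarity
# between a query embedding and stored embeddings. In practice a Siamese
# network would produce these vectors; here they are hand-made.
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def nearest(query, database):
    """Return artwork IDs ranked most-similar-first.

    Note this is a linear scan over every stored fingerprint, so cost
    grows with collection size; that growth is the scaling problem the
    talk describes hitting at around 1,000 artworks.
    """
    return sorted(database,
                  key=lambda art_id: cosine(query, database[art_id]),
                  reverse=True)
```

The virtue of this approach is accuracy, as the talk notes; the cost is that every lookup touches the whole database, which is what made it untenable at collection scale.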
To simplify: the bigger it gets, the slower it gets, and exponentially so. It was fine when we did Creative Generation, where there were 32 artworks, but when we scaled to 1,000 artworks the inference time was going up to 40 seconds. And to build the inference model that you look up against, we were up to 72 GPU machines on AWS, and sometimes they were timing out. Big shout out if anyone's from AWS: thank you so much for giving us the credits, because there's no way we could have afforded those machines. So it just got slower and slower, but it was really accurate, and that made it very hard to let go. Still, we knew visitors weren't going to wait 40 seconds.

Then fast.ai came along, which, if you can read it on the screen there, describes itself as "making neural nets uncool again". Picking up on Dries's note about foundations, going to fast.ai was a bit like that: very foundational, solid, fast, but not cool. It's even based in Queensland. And it is great; it just blew our minds how good this thing was. To give you an idea, it scales linearly. We were now using regular machines to build the model, and it was taking an hour, not a day.
It was taking dollars to rebuild the inference model, not hundreds of dollars. And we found that to make an accurate reference point we only needed six images, not 60. It also got far less confused by unusual shapes. That may not sound like much, but take a three-dimensional work like a sculpture: photograph it from one side and there's a whole bunch of paintings behind it; photograph it from the other side and there's a different bunch. You need to be able to identify both. The reason fast.ai works so well is that it takes the lighting and the background colours into account.

We still have the Siamese neural network available to us, and we can switch to it if we lack confidence in the fast.ai result. The disadvantage is that fast.ai is about 95 per cent accurate. If it doesn't know, it's a bit like ChatGPT: it just goes "oh, this is good enough", so you can scan some works and it will give you the wrong result. That's why we have the card method of showing you the most likely matches so you can flick through them. But we figured 200 milliseconds versus 40 seconds was worth it.

We're almost through the tech stack. The last part is our AWS work. The front end of it is a Lambda function; we have a DynamoDB database that holds a lot of the inference training information; and we use SageMaker, an AWS service for building AI models, every time we take more photos or rehang a gallery. One of the issues with Lambda, though, is that it takes a little while to wake up, so the first visitor in the morning was waiting 30 or 40 seconds for a result. So we now have a little service that pings it during business hours.
That keeps it awake, and it's still a lot cheaper than running a server. One really cool thing: although we still need SageMaker to build the models that we check artworks against, we can now host them on tiny EC2 machines for the inference lookup. So we've just got some EC2 smalls powering the API endpoint that sends the information about the art back.

Okay, I went through that really fast because I was making up time, but one thing I really wanted to bring up is the audit app we developed. One of the issues we had is that everything was fine with 32 works for Creative Generation, but when 50 works go on display each week, going around with our phones photographing them all, loading them onto a laptop, dragging them into an S3 bucket, putting them in the right place and training on them takes a lot of time. So Nick, who did the Expo work, built this little app that lets us train our AI models much more quickly. I think it's impressive that you can point your camera at any artwork in our galleries and get all that information back, but I actually think what he built here is just as impressive.

I'm now going to show how we train. We built some tools to assist us, and I'll give you a demo now. If I open this internal app, I can do the training for a work by just finding it in the list, so I'll do a quick search. Here it is. Then I go down and add training data. First I tick this flag to take a reference image; this is what we use to verify against the training data that it's the correct image. Then all I do is take five images:
one from the middle, and then I step around the artwork and shoot from slightly different angles, to give it some coverage of the angles people might photograph this artwork from. And that's all we do to get the 200 millisecond response time. That was amazing, because when we were doing 30 works it was fine, but now, when we walk through the gallery each morning and there's been a rotation by the curators, it's just "great, take five photos", and when we've had enough rotations we rerun the model.

So now we're on the other side of the pilot and into the soft launch. What are some learnings, in my last couple of minutes? What's going well: the small agile team. I was saying to Drew earlier, this is the most successful agile project I've ever worked on. Agile has become a bit of a dirty word, you know, fixed budget, scope changing all the time, but here I have one day a month of Gaia's resources to get my back-end code done, and that's all we do. We do one day's worth of work, really compounded, and we release a new version of Collection Online on that day. The rest of the time Nick is working on the app and building the front end. Because of that, we just meet once a week for 30 minutes, pivot, test it, and it's done. The operational overhead is tiny versus the build time.

Making it work within our workflows is so important. The reason so many projects of this style fall over is that they're additional to everyone's BAU. This takes BAU and adds a visitor experience on top of it.

And look how well the tech works.
We didn't write the fast.ai library, but we are regularly dumbfounded by how well it works. Sometimes we go outside and it's raining and we're taking a photo of a sculpture, the light is completely different, or it's night, and it just works. We're like, "you should not work off five photos," but it is just amazing how well that thing performs.

We've kept everything really agnostic. I talked earlier about the API calls: when we started going headless, the Pantheon bill went through the roof, because Pantheon counts each API call as a visit, so we re-platformed to Platform.sh pretty quickly. Being mindful of not embedding ourselves too deeply in any particular area has been really good. And working with the kids, I mean the young adults, was a really good pilot phase, because we had this really interactive audience that wanted to use it, that wanted to leave voice memos, and that actually gave us feedback and said, "hey, my old Android doesn't work." So that was good.

What wasn't good: we started with cool tech, but we forgot that there needed to be an engagement hook, so we've always been catching up; the tech is actually better than the content we're delivering. Content is king. There has to be a good payoff when you scan a work. Yes, it's great that we leverage our existing work, but we really need to pivot the way we create content so that it's a bit better.

Then there's ongoing funding. Nick is on contract, and we're running out of funding, and innovation in a traditional organisation can be really hard. We are an art gallery; our sole KPI is visitation. Yes, we're using digital to augment that sole KPI, but we're also a very traditional organisation, and this is a very different way of thinking about art. It's not thinking of the main outcome as an exhibition brochure,
I mean a publication at the end that you buy; this is about continually delivering content. It's a very different mind shift.

Now, I have one more video, but I'm making up time, so I'll keep it quick. We're at the end of this talk; I hope you enjoyed it. I'll demonstrate a few more works. Have a look at this one here: it gives you an idea that 3D works scan straight away as well, and that's a very asymmetrical sculpture. I'll go forward to the next part; I'm looking at some historic photos now. I wanted to demonstrate on these in particular because they've got similar frames and similar colour schemes and are of a similar time, and I wanted to show how well it can still tell them apart. So let's have a look at this one. So we get the idea. Sorry I had to race through that; there's a lot of work that went into it.

I don't think we've got much time for questions, but I did want to try to get a couple in, so let's do one or two. Are there any questions from the audience?

[Audience question about onboarding visitors to the app.] Yeah, that's a really good question, and it's actually a wider gallery question altogether. One of the few positives that came out of COVID for visitor experience and QR codes is that in Queensland everyone had to check in to go anywhere, so we went to all this effort of training people how to use QR codes, and then immediately said, "don't use them." The main way we're doing it right now is through the floor officers. A lot of people have questions about the artwork, so you talk to someone working there, and we're training them to show visitors how to use the app and giving them materials to onboard people. We're introducing wayfinding as well: how do I get between rooms? So our main way in is actually through physical people.

[Audience question:] When users are taking the photos, is that also feeding back into the learning, in terms of the accuracy?
That's another very good question. We monitor when people don't correctly identify a work. At the start the app asks, "is this the work?" If they say yes, that gives us reinforcement that that particular set of training data is working well. If they say no, it flags to us that we probably need to retrain that work.

[Follow-up:] Have you noticed an increase in the accuracy of the learning models? Can you track it? I think you said it was about 95 per cent accurate.

To be honest, because it's soft-launched, the usage hasn't been high enough. The areas where we've hard-launched it have been galleries where the works are so different from one another that they're quite distinguishable. We hope accuracy does improve, and we definitely take the error reports.
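The yes/no feedback loop described in this answer can be sketched as a small log that aggregates confirmations per artwork and flags works whose confirmation rate drops too low for retraining. This is a hedged illustration: `FeedbackLog`, the threshold and the minimum-answer count are all invented for the example, not QAGOMA's actual implementation.

```python
# Illustrative sketch of the "is this the work?" feedback loop: record
# each visitor confirmation, flag low-confirmation works for retraining.
# Class name, threshold and minimum-answer count are invented.
from collections import defaultdict


class FeedbackLog:
    def __init__(self, threshold=0.8, min_answers=5):
        # artwork_id -> [yes_count, total_count]
        self.counts = defaultdict(lambda: [0, 0])
        self.threshold = threshold      # minimum acceptable confirmation rate
        self.min_answers = min_answers  # ignore works with too few answers

    def record(self, artwork_id, confirmed):
        """Log one visitor's yes/no answer for a recognised artwork."""
        entry = self.counts[artwork_id]
        entry[0] += int(confirmed)
        entry[1] += 1

    def needs_retraining(self):
        """Artwork IDs with enough answers but a low confirmation rate."""
        return sorted(
            art for art, (yes, total) in self.counts.items()
            if total >= self.min_answers and yes / total < self.threshold
        )
```

A morning walk-through could then consist of re-photographing just the flagged works and rerunning the model build, matching the rotation workflow described earlier in the talk.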