Kia ora everyone. I'm Paul Rowe, the CEO of Vernon Systems. I've been investigating options for the automatic analysis of images, and so today's a chance for me to share what I, or maybe more precisely, R2-D2 has found.

So the first question is: why might you do this? It can provide access to information that's already in those images. For example, is the image in portrait or landscape orientation? You could then provide those as a way for people to search collections and filter those searches. Here I've got all of the Gretchen Albrecht works in the Sarjeant Gallery, and I've filtered them to show just those in landscape orientation.

You might also use these tools to create text that you haven't had time to enter manually, such as subject keywords and captions. In this example I've used the free single-image uploader that Google Cloud provides, and we can see that it's even managed to detect the breed of cat correctly.

We can also use these tools to find patterns that would be harder to spot with traditional cataloguing. The Barnes Foundation used a product called RPI to find patterns in the colours, lines, light and shapes of their collection. Here, for example, are all of the works that have horizontal lines in them.

And some of the options still feel like magic to me. Tim Sherratt used the open source computer vision library OpenCV to find all of the faces in a set of source documents, and through that he provided a dramatic visualisation of the people who faced discrimination under the White Australia policy. He provided a new interface into those source documents. A rough sketch of this face-detection technique appears below.

As a side note, you can do this not just with image files but also with audio and video files. For example, if you upload a video, YouTube can automatically create closed-caption text, and this helps make the video more accessible. Those captions aren't necessarily perfect, so you may have to do some manual editing afterwards. This particular one is from the Bad Lip Reading version of Star Wars. IBM's Watson Speech to Text service does a similar thing for audio files: it can automatically create text transcripts of those files. And that's a huge one if you want to increase the searchability of things like oral histories.

So we'll move on to image analysis. Some images are hard to decipher, even for people. Don't be surprised if your precious painting of a chihuahua gets tagged as a muffin. The UK web development company CogApp have put together a site comparing three of the popular tools, from Google, Microsoft and Clarifai, and the website provides a good comparison of the strengths and weaknesses of those three tools. Here we have a painting of the Princess of Sweden, and Clarifai has added a whole lot of tags, including "renaissance" and "cavalry". Now, the Renaissance cavalry weren't very effective, but they had the best uniforms.

Creating a full-sentence caption is particularly difficult, and this shows a typical example. Here we have a description of somebody using, perhaps, an early model of cellphone that just happened to be shaped like a baby. It is a cellphone, I swear.

But the results can be amazing. Here Google Vision has correctly tagged all of the men with facial hair in the Sarjeant Gallery collection, just in time for Movember. Now, because the tags aren't perfect, we gave the curators the option to delete individual tags. And even when those automated tags aren't perfect, they can create interesting results. These are all the works that have been tagged as "circle".
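As promised, here's a minimal sketch of that face-detection technique in Python, using the Haar cascade that ships with OpenCV. The filenames are hypothetical, and this is an outline of the general approach rather than Tim Sherratt's actual script.

```python
import cv2

# Load the frontal-face detector that ships with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("source_document.jpg")       # hypothetical scanned document
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector works on greyscale

# Returns one (x, y, width, height) rectangle per detected face.
faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

# Crop each face out of the original image and save it as its own file.
for x, y, w, h in faces:
    cv2.imwrite(f"face_{x}_{y}.jpg", image[y:y + h, x:x + w])
```

Each rectangle that comes back can be cropped out and saved, which is essentially how you build a wall of faces from a whole set of scanned documents.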
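And for the curious, here's a similar sketch of how tags like these can be requested programmatically, assuming the google-cloud-vision Python client and credentials already set up in the environment; the image filename is again hypothetical.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # reads credentials from the environment

with open("artwork.jpg", "rb") as f:    # hypothetical collection image
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# Each label carries a description ("circle", "cat", ...) and a confidence score.
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```

Swapping label_detection for text_detection on the same client asks the API for any text it can find in the image instead, which is a feature I'll come back to in a moment.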
Now, they're not all strictly circles, but this is opening up new connections between those works based on the shapes and lines that are in them. And in some cases, even the automatically generated captions of artworks can be as good as what a cataloguer might have entered manually.

You can also use these tools to extract text that appears in an image. In this case it's the same uploading tool, and it's managed to correctly pick out all of the text. A good example of where that can be particularly useful is the Papers Past website, which uses text extraction (OCR) on scanned newspapers and documents so that their full text becomes searchable.

Now, I looked at a number of colour analysis tools, and they were all pretty good. The challenge is that they pick out the dominant colours based on the original palette of the image, which is usually around 16 million colours. So unless you've got a really enormous collection, very few works will be connected by the same colour. What we did in this case was map those colours to a much smaller palette, the 140 named web colours, which means a lot more works are connected by the same matching colour. The trade-off is that the smaller the palette, the further the displayed colours drift from the precise colours of the original image. A small sketch of this mapping step follows at the end.

We could then search on several of these elements together: the subject keywords, the named colours, the image orientation. Here I'll search for "house", "white" and "landscape", and all of these results are missing at least one of those elements from their original catalogue records.

Now, I've put together a list of links to all the websites and tools that I've mentioned, and I'll be putting the slides up on SlideShare. So, to sum up: the tools that are available are still in their early stages, but they are helping to make it easier to search for images, and they are creating new connections between collections. Thank you.
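As a final footnote, here's a minimal sketch of that palette-mapping step, using Python and Pillow. Only a handful of the 140 named web colours are listed here, and the filename is hypothetical; a real implementation would load the full named-colour table.

```python
from PIL import Image

NAMED_COLOURS = {  # small illustrative subset of the 140 named web colours
    "white": (255, 255, 255),
    "black": (0, 0, 0),
    "red":   (255, 0, 0),
    "green": (0, 128, 0),
    "blue":  (0, 0, 255),
    "grey":  (128, 128, 128),
}

def nearest_named(rgb):
    """Return the named colour closest to rgb by squared distance in RGB space."""
    return min(NAMED_COLOURS,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(rgb, NAMED_COLOURS[name])))

def dominant_named_colours(path, count=5):
    # Quantising collapses the ~16 million possible colours down to `count`.
    small = Image.open(path).convert("RGB").quantize(colors=count).convert("RGB")
    colours = small.getcolors(maxcolors=count * 2) or []
    # Sort by pixel count so the most dominant colour comes first,
    # then snap each dominant colour to its nearest named web colour.
    return [nearest_named(rgb) for _, rgb in sorted(colours, reverse=True)]

print(dominant_named_colours("artwork.jpg"))  # e.g. ['white', 'grey', 'blue']
```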