And if you have any questions, particularly about Jina, you can join their community to learn more about AI and neural search. What I'll be sharing is really from the perspective of play, just toying with these tools for fun and for educational purposes. So what I'd like to do is start the presentation by building the search app on the fly, so you can see how easy it is and how quickly it can be done. Then I'll talk a bit about why we decided to do this, what the advantages of neural search are, how embeddings enable cross-modal and multi-modal search in any problem domain of interest, how you can go from POC to production, and some thoughts on the implications of AI as we move into the future.

So let me quickly hop over to the Jina AI Cloud interface and start setting up a search app. I've actually already uploaded my data to Jina beforehand, so the demo we're about to build is based on close to 4,000 images from the NGS collection. What we did previously was convert that collection into a DocArray, which is Jina's data structure for handling and processing multi-modal data, i.e. images, text, videos, sound, and so on. That's what enables us to subsequently build search on top of it.

All you have to do on the Jina AI Cloud interface is create a new search app, and we'll make use of the dataset I preloaded: the data source is DocArray, and then this particular dataset. You can choose what to index over; for the purposes of this demo I'll use the title of the artwork, as well as the image of the artwork itself. You can also filter over the different fields you have in your dataset. And then you hit deploy. What happens after this is that Jina AI Cloud takes care of indexing and encoding your data for you, so all we have to do now is wait for the server to be spun up.
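While that deploys, here is roughly what the earlier data preparation step looks like in code, a minimal sketch using the pre-0.30 DocArray API from the Jina ecosystem. The image glob, the title value, and the dataset name 'ngs-artworks' are hypothetical placeholders, not the actual collection details.

```python
# A sketch of converting a folder of images into a DocumentArray and
# pushing it to Jina AI Cloud (docarray < 0.30); names are placeholders.
from docarray import DocumentArray

# Wrap each local artwork image as a Document.
da = DocumentArray.from_files('images/*.jpg')

for doc in da:
    doc.load_uri_to_image_tensor()     # read the pixels into doc.tensor
    doc.tags['title'] = 'Untitled'     # a metadata field to index or filter on

# Push the collection to the cloud so a search app can be built on top of it.
da.push('ngs-artworks')
```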
Now let me backtrack a bit and explain more about what we were doing exactly. I think you'll find that getting your projector working when presenting at conferences is always more challenging than building a neural search app. So those are essentially the two things we just did, and now let me talk a bit about why I was inspired to work on this problem in the first place.

I'm actually part of a team running a culture tech accelerator at National Gallery Singapore, so for those of you who are in town, or if you are local, please visit us. The problem we're trying to solve is that the gallery has an existing online collection search portal built on Elasticsearch's symbolic, keyword-based search. That essentially means that if you do not already know what you're looking for, in the title, the artist, or whatever, you won't be able to find the artwork you may be after, which kind of defeats the whole purpose of search. And the reality is that we have a very limited set of metadata about our collection, which is close to 10,000 works. Our curators don't have the time to tag everything to make it more searchable for keyword-based methods, not to mention that it's hardly the most interesting job in the world for a human to be doing, right? So we're faced with this problem: how do I take the existing data and the very limited metadata I have, and make it more accessible and more searchable for people?

There's also the fact that audiences and users are becoming accustomed to new ways of searching, not just limited to text; sometimes you may want to find something that's close to something you've seen before, for example by uploading an image. These are all things we weren't able to do with the current collection search portal. Also, because everything is in British English, the moment you spell something in another form of English, you won't be able to find what you're looking for either. So those observations are what got me interested in how neural search could potentially be applied to this problem. And unlike larger institutions, we're faced with the question: do I spend two years having an entire team of engineers and AI developers build a re-ranking algorithm, or do I find something that lets me prototype and go to production as quickly as possible? That's why I turned to Jina to start exploring the possibilities of neural search for locating artworks.

The core idea behind all of this is what an embedding is. I decided that cats are cuter than dogs when I realized that you do not need to walk a cat. Essentially, anything we can think of, in any number of dimensions, can be transformed using mathematical functions and represented in a 2D space like this. Text and images can be represented within the same space: the text "a cute cat" will sit closer to the two images of cats that we see here, and some distance away from the dog. That's the fundamental idea behind the similarity search we're doing across the different modalities of content.

To give you a better visualization, this is an embedding projector applied to about 500 works from our collection. Using the t-SNE algorithm, we project the embeddings into a 3D space, which iterates and starts to cluster visually similar images together, based on the images alone. You can see that the machine starts to put our calligraphic works together, while portraits form another cluster, and so on. This can easily be extended to include other dimensions or attributes as well. The whole idea is that when the user issues a search query, it gets embedded and compared against what we already have in the embedding space, and then a similarity search is performed. That's the gist of what drives the neural search in our context.
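To make that concrete, here is a minimal sketch of the same idea using OpenAI's CLIP through the sentence-transformers wrapper. The model name is real, but the file names and the averaging trick for mixed queries are my own illustrative assumptions, not necessarily what the deployed app does internally.

```python
# A sketch: embed text and images into one CLIP space and compare them.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('clip-ViT-B-32')   # CLIP text+image encoder

# Text and images are encoded into the same vector space.
text_emb = model.encode('a cute cat')
cat_emb = model.encode(Image.open('cat.jpg'))    # placeholder files
dog_emb = model.encode(Image.open('dog.jpg'))

# Cosine similarity: the text should score higher against the cat image.
print(util.cos_sim(text_emb, cat_emb))   # closer
print(util.cos_sim(text_emb, dog_emb))   # further away

# A mixed image+text query can be approximated by averaging the two
# embeddings before the similarity search (a common heuristic, assumed here).
mixed_query = (text_emb + cat_emb) / 2
```

The 3D clusters shown in the talk can be reproduced by projecting the stacked embeddings with something like scikit-learn's `TSNE(n_components=3).fit_transform(...)`.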
And just to take a step back to what was done before setting up the search app: this can all be done in less than 50 lines of code, and I'd like to credit Max from the Jina team for helping me with this as well. Essentially, Jina abstracts away a lot of the complexity of deploying a neural search app in production. What we're doing here is using the dataclass API to represent multi-modal documents; again, it could be an artwork with an image, text, and so on. You create this dataclass, and then you're able to leverage DocArray's API to do everything you just saw us do: embed, search, store, and transfer the documents. After that, we instantiate this dataclass with our actual data and cast the instances to Documents.

It's also worth noting that there are more complex features in Jina's DocArray. You can nest your dataset, so I can have an artwork that's part of a collection. I believe they're also working on something that lets you search over multiple collections, but at the moment it's limited to top-level search. And of course, as we know, an article generally has different levels of granularity: there are paragraphs, then sentences, then individual words. If you want to search over different granularities, depending on your problem of interest, you can also do that with Jina.
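Here is a minimal sketch of that dataclass pattern, assuming the pre-0.30 DocArray API; the Artwork fields and the file path are hypothetical placeholders.

```python
# A sketch of DocArray's multi-modal dataclass API (docarray < 0.30).
from docarray import dataclass, Document
from docarray.typing import Image, Text

@dataclass
class Artwork:
    image: Image   # a file path here gets loaded and preprocessed for you
    title: Text

# Instantiate with actual data, then cast to a Document so it can be
# embedded, searched, stored, and transferred.
doc = Document(Artwork(image='artwork.jpg', title='Portrait of a Lady'))

# Different granularities (collection -> artwork, or article -> paragraph
# -> sentence) are modelled as nested sub-documents in .chunks.
print(doc.chunks)
```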
Later we'll take a look at the demo that was actually built. And of course, after you've built your demo, you may want to think about how to actually put it into production. My background is in economics, and I tend to look at these things from the point of view of a trilemma: unless you have an unlimited budget, you usually have to pick two out of three. In our case, that of a public organization without an in-house tech team of engineers and developers, we tend to fall on the cost-effectiveness and performance side of the triangle. You lose the sovereignty of being able to build and maintain your own models in-house, and the trade-off is that if the API or service you're using goes under, and that has happened before, your entire thing falls apart. That's exactly why I'm much more interested in open-source models and open-source products like this one: at least there's some way to recover what you need to recover even if the company goes under, because it's all in the open. Cost-effectiveness is really important to us, because the cost of a managed service will generally be lower than maintaining an in-house team of engineers and developers. That said, these new forms of search are, as everyone knows, still pricier than older forms of search, so there's definitely a lot of room for improvement in this area as well. And as far as performance is concerned, to me anything that improves upon the current search portal is high-performing, so that checks off the performance box for me.

So again, the cool thing to me about Jina is that it abstracts away a lot of what you see over here, and all we have to do is focus on defining the document, understanding our data, and representing it properly. With that, let me do a quick demo; in the interest of time, I'll just play this video. So this is the app that was deployed. When you do a text-based search such as "rabbit", you'll see that it also understands that a bunny can be a rabbit. You can also look for more abstract concepts like "angry men marching", and it returns results about people going on strike, people asking "where's my job?". And you can perform searches with images, and actually mix image and text as well. So if you've been to the recent Van Gogh exhibition and want to look for flowers from the National Collection, you can do that, and you can also refine the query with text: maybe you're looking for flowers, but you want flowers in the style of Van Gogh's work, and you can do that too. You'll see that it's matching against both text and image, and that's how the whole multi-modality and cross-modality comes about.

So I hope that's given you an idea of what can be built with Jina. Essentially, if there are other search apps or search problems you're trying to solve, there's a lot that these platforms can do. If Yungi has a lot of crazy smirks in a drama and you want to find which episode a particular screenshot is from, you can build that. If you want to buy something based on a photo of a thing you've seen, you can do that too, and so on.

To conclude, I think we're moving from an age of single-modal search and AI to one that is a lot more multi-modal. It's also worth noting that the text-image embeddings used in the search I demonstrated are based on OpenAI's CLIP, which is also used in Stable Diffusion and Midjourney. And if you're into that sort of thing, Runway has recently released a model that lets you generate videos from text and images, and so on. We're living in a time where, in the not too distant future, a single person will be able to generate a whole K-pop group that can sing and dance and whatever else, with the help of multi-modal AI. But I hope that won't be the only kind of use case we can think of; there are plenty of problems and challenges in the world that we can use multi-modal AI to solve. So if you're interested in building multi-modal AI, you can join the Jina community. And for those of you based in Singapore or overseas, if you're interested in solving problems pertaining to AI as applied to art, and in fact you do not need to be passionate about art to build with AI for art, you can also join the Y-Lab Discord, which is the culture tech accelerator I was talking about. I'll be more than happy to have further conversations about how we can use AI for good together. Yeah, thank you.