My name is Skaj, I am a media designer from the Netherlands, and I am finishing my master's degree in Future Design at Praxity University. Over the last two years I have specialized in artificial intelligence and in applying machine learning to art and design. My graduation project is called N-Visage, as in "envisage": to form an image inside your head. N-Visage is a piece of software that chains together multiple AI models and lets you generate an AI-animated video from a piece of text, or even from a photo of text. My prototype is built for fiction books: you can take a photo of the back of a book, and N-Visage will create a video based on the keywords, the emotions, and the sentences of the book description. I started my research with the question: how can I use artificial intelligence to enrich the library experience of the future? Pretty quickly after I started my research and began talking to people, I realized that I wanted to narrow my challenge to fiction books and to how people decide what their next book will be, the decision-making around books. And after speaking to a few people I found something very interesting: even though people say you shouldn't judge a book by its cover, for about 90% of the people I spoke to, what the cover looks like is actually a really important part of choosing their next book. And the cover is usually the only visual you get from a book. Movie adaptations, however, are often not what a reader expected, for example in what the characters look like or how the story unfolds, and that can be pretty frustrating. So how do you visualize a book without spoiling too much and without taking away the reader's imagination? Because what someone imagines while reading is very different for each person, and this is why I used a lot of user-centered design methods in my research. I met up with a group of people online and hosted a few workshop sessions.
Where in the library would you find yourself most often? What area, what kind of genre is your main go-to when you pick a new book? It was actually a lot of fun, and there were quite a few interesting results that helped me continue my research. It turns out that a more abstract video of the mood and setting of a book can really stimulate the imagination, as long as it follows a few guidelines: for example, no character should be visible or recognizable in the video, and no full storyline or plot should be given away. By watching a book trailer that is in line with the book and its description, a person might actually choose a book better, because they can already connect a bit more with the story and get a sneak peek of its setting, so they know a little more about what they are choosing to read next. I created this first prototype especially for fiction writers, so they can use it to enhance their story, to expand on what they already have in written form, and to keep control over, for example, the cover of a book. It can actually be very useful for writers to have an AI analyze their text, because the AI has learned, from a huge amount of data from the internet, what people in general imagine when they read certain words. Beyond book descriptions, using AI to visualize text has a much bigger potential. Imagine having an important medical test you don't understand visualized, or the journal you keep turned into images. I can even see use cases where illiterate people could really be helped by this tool. So how does it work, you ask? It's a pretty technical story, but I really enjoy this edge between creativity and technology. Here it is in short: the software takes an input, either a text file, or it uses OCR to extract the text from an image.
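That input step can be sketched as a small helper. This is a minimal sketch, assuming pytesseract and Pillow for the OCR branch; the talk does not name the actual OCR engine N-Visage uses, and the function name is my own:

```python
from pathlib import Path

def load_text(path: str) -> str:
    """Return raw text from either a plain-text file or an image.

    Plain-text files are read directly; for images we fall back to OCR.
    (pytesseract/Pillow are stand-ins here -- the talk does not say
    which OCR engine N-Visage actually uses.)
    """
    p = Path(path)
    if p.suffix.lower() in {".txt", ".md"}:
        return p.read_text(encoding="utf-8")
    # Image input: extract the text with Tesseract via pytesseract.
    import pytesseract
    from PIL import Image
    return pytesseract.image_to_string(Image.open(p))
```

Keeping the OCR import inside the image branch means the text-file path works even on a machine without Tesseract installed.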
Next we have a piece of text whose emotions will be classified, whose keywords will be extracted, and which will also be cleaned up. That means names have to be removed, special characters have to be removed, and the text is split into sentences, splitting at the commas. From the keywords and the emotion classifier you can generate a soundtrack with an AI model, and the soundtrack can then drive keyframes that control the camera movement in the final video. The processed sentences, together with the keywords and the emotions, are then used for prompt engineering. This means we create new lines of text that drive the video frames, so the video can move from one scene to another, and another, and so on. These new sentences, together with the camera movement, are fed into an AI diffusion model, and this diffusion model creates the new video frames. The individual video frames are then post-processed and stitched together in a template that I based on research into trailers and teasers. After that, sound is added and the video is good to go. You now have a fully AI-generated animated video based on the book description.

At the moment I am working very hard to prepare for the exhibition in June. It's really going well, but it's a lot of work. It's also been really fun. At the exhibition I will showcase my project in an interactive installation. I'm working with sensors, projectors, and books, and I don't want to give away too much of what it's going to look like, because I really want you to come and take a look for yourself. The exhibition opens on Thursday the 9th of June at 7 pm, and I really invite you to be there. It will be in the Pragovka Art District, where a group of ten fine arts and future design students will be showing our work and what we've learned during the last two years.
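For those curious, the text-processing steps described above can be sketched roughly like this. The function names, the prompt format, and the audio-to-zoom mapping are all my own illustrative assumptions; the talk only describes the steps, not their implementation:

```python
import re

def clean_and_split(text: str, names: list[str]) -> list[str]:
    """Clean a description and split it into short fragments.

    Mirrors the steps from the talk: strip character names, drop
    special characters, then split at commas and sentence ends so
    each fragment can drive one scene of the video.
    """
    for name in names:
        text = re.sub(rf"\b{re.escape(name)}\b", "", text)
    text = re.sub(r"[^A-Za-z0-9,.\s]", "", text)   # remove special characters
    fragments = re.split(r"[,.]", text)            # split at commas and periods
    return [" ".join(f.split()) for f in fragments if f.strip()]

def build_prompts(fragments, keywords, emotion):
    """Assemble one diffusion prompt per fragment (an assumed prompt
    format -- the real template is not described in the talk)."""
    style = ", ".join(keywords + [f"{emotion} mood"])
    return [f"{frag}, {style}" for frag in fragments]

def camera_keyframes(amplitudes, max_zoom=1.2):
    """Map per-frame soundtrack loudness to a camera zoom factor (an
    assumed mapping -- the talk only says audio drives the camera)."""
    peak = max(amplitudes) or 1.0
    return [1.0 + (a / peak) * (max_zoom - 1.0) for a in amplitudes]
```

Each prompt from `build_prompts`, paired with a zoom value from `camera_keyframes`, would then go to the diffusion model to produce one video frame.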