Hello to everybody. We are very happy about this webinar, which is going to be very interesting not only because we have very interesting people, but because it is a collaboration between three important institutions: CultureHub at La MaMa, the Hyphen Hub community network, and SODA, the School of Digital Arts, two based in New York and one in Manchester. And we have representatives of these institutions. I am personally part of SODA, but we have Fito Segrera, an artist working internationally on art and technology, with artificial intelligence, VR, et cetera. Asher Remy-Toledo is the founder and director of Hyphen Hub, an international network which gathers people from the art and technology field from all over the world. And Marsha Courneya and David Jackson are from SODA, the School of Digital Arts, which is a new institution, even if it is not only a school: we also have exhibition spaces, we produce, we work with artists, we do many things. I say "we" because I am also part of it, as the curator of the SODA gallery, even if we have changed its name to Modal (I still say "SODA gallery"), and as a lecturer at Manchester Metropolitan University.

So the first part of this conversation is going to be around the work of Fito, through a dialogue with Asher. And the second part will be about the work on artificial intelligence that Marsha and David are developing at SODA; that one will be driven more by questions, and will be more theoretical. So Asher, if you want to introduce Fito briefly and, as we say in Italy, open the dance.

Yes. Okay, hi everyone, thank you very much for joining us today. I am very excited to be here, because for over two years we have been talking about this collaboration between SODA, Hyphen Hub and CultureHub, and of course, when we were ready to launch it, the pandemic happened and everything got frozen. So it is really great that we are finally here. I am among a group of friends, so this is going to be a conversation in the style of a fireside chat, as they say: very casual. And we invite the audience to participate; there is a questions panel, and otherwise we have plenty of questions of our own. We decided to invite Fito Segrera, an artist based in Cartagena, Colombia. I met Fito in Shanghai about eight years ago or more, when he was the director of the laboratory at Chronus Art Center in Shanghai, a very important institution for research in art and AI. Then he moved back to Colombia when China closed down, and he has continued doing very interesting work in this field of digital storytelling. His work is always at the frontiers of philosophy, neuroscience and art, and, I would say, of working together with the general public. Because Fito and I were both born in Colombia, I think we naturally tend to be a little political and social; our work is never totally detached, never made to look like pure technology, because our countries have gone through a lot, and that is always embedded in the technology.
So I will start with Fito: why don't you tell us a little about what you want to present today, and then we will go from there.

Thank you, Asher, and hello, everyone. Thank you for the invitation. It is very nice to be here and to have this space for sharing works around storytelling, communities, technologies, and new, or rather different, ways of storytelling. I will very briefly introduce two of my works. One of them is from over ten years ago, when I was here in Colombia, and in it you will see what Asher was just describing: the influence of context on our work. When we work here in Colombia as artists, we cannot simply exclude the context; it is almost impossible. You will see that reflected in the first piece. My second piece was produced a few years back, when I was already in Shanghai, so it is very different in terms of context, but in terms of technologies and techniques for storytelling the two are similar: both use brain-computer interfaces and custom software to create different types of stories. Let me start sharing my screen with you.

This first piece is called Datasphere, and it is maybe ten years old, maybe a bit more. It came from an invitation I got from a gallery here in Colombia. They asked me to go to an island right in front of my hometown, ten minutes away by boat. It is a very poor island; an Afro-descendant community moved there many years ago and claimed the land, but the state has never looked after them. It has basically abandoned them, like many other communities here in Latin America. They invited me there and asked me to do some sort of intervention, but I am not the type of artist who makes public interventions. At the time I was already working with a very basic brain-computer interface, so I decided to take it with me, along with a USB camera, my laptop and a phone. While I was there, I created this system, which was always recording my point of view from my forehead; I will press play while I talk. I wore the camera on my forehead, and at the same time I was wearing an EEG. For those who don't know, an EEG is a piece of technology used by doctors to measure your brain waves, and by researchers to try to understand the behaviour of the brain. It measures electrical changes on, or through, your skull, so the device helps you understand a little of how your brain is working at different moments. What I built was a very basic classifier algorithm that would estimate, based on my brain waves and on some research, possible mental states. It would then take that mental state as a keyword for an online search, in this case on Twitter, grab tweets that were being written at that moment and that related to whatever the system was saying about my inner state, and render them on the screen as they came through.
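To make that loop concrete, here is a minimal sketch in Python of a Datasphere-style pipeline, reconstructed only from the description above. The band-power classifier is one plausible reading of "a very basic classifier algorithm", the thresholds and names are invented, and the Twitter search is stubbed out, since the original code is not public.

```python
# Hypothetical reconstruction of the Datasphere loop: EEG -> mental-state
# keyword -> live tweet search -> text rendered over the point-of-view video.
import numpy as np

FS = 256  # assumed EEG sample rate in Hz

def band_power(samples, lo, hi, fs=FS):
    """Mean spectral power of one EEG channel in the [lo, hi) Hz band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].mean()

def classify_state(samples):
    """Crude mental-state guess from the alpha/beta power ratio."""
    alpha = band_power(samples, 8, 12)   # conventionally linked to relaxation
    beta = band_power(samples, 13, 30)   # conventionally linked to focus
    return "calm" if alpha > beta else "focused"

def search_tweets(keyword):
    """Stub for the live Twitter search the piece used."""
    return [f"(a live tweet mentioning '{keyword}' would appear here)"]

# One pass of the loop that, in the piece, ran for seven days straight:
eeg_window = np.random.randn(FS * 2)       # stand-in for 2 s of raw EEG
state = classify_state(eeg_window)         # inner state becomes a keyword
for tweet in search_tweets(state):         # keyword pulls in other minds
    print(state, "->", tweet)              # rendered over the POV footage
```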
At the end of this whole experience I had a seven-day-long piece, which was my visual point of view for the whole time I was on the island, with my brain waves measured throughout and a sort of narrative created out of my inner state. But we have to understand that this narrative was also written by other minds, by other people. The system, my body and my mind became a channel bridging two very different worlds: the online world, with all of its comments and ideas, and this small community on the island, which at the time did not even have access to the internet. They do not even have fresh water there, so imagine it. In the end it was quite beautiful, because I still did not understand what I was doing at the time; it developed as I went. When it was done, I took the whole video and projected it on an abandoned space on the island, and people started to come with their own chairs and watch the whole thing. It was very, very long, and they came day after day, and they started to see themselves from a very different perspective: from the point of view of a guy who, although we are from the same country, is very much a foreigner to them. At the same time, they started to read these comments, some of which were quite intense, and they started to ask themselves questions, and to ask me questions. They asked me if that was what I thought about them, if that was what I had written about them, and it was very difficult for me to explain what was actually happening in terms of technology. But in the end this piece became very important material for them, and they keep it in a house which they call the Culture House of the town, which holds very few art pieces, but they really wanted to keep it because they felt it meant something to them. So that was Datasphere, my first attempt to create this sort of narrative. At the time I was very involved in documentary filmmaking, especially experimental documentary, so this was a very appropriate way to resolve the piece artistically.

The second piece I want to share with you today is called Agnosis: The Lost Memories. I did this maybe five years later, when I was already living in Shanghai, and my context was very different. There was no poverty around me, no vulnerable communities where I was living, and that gave me the space and the resources to think about technology in a more general, rather than contextualised, manner. Agnosis is another brain-computer-interface-based system, but a bit more complex, because it tries to deal with the idea of memory, the idea of capturing a memory, and, even more interesting from my point of view, the idea of giving an intelligent system my brain data as an intimate act. I give my data to the system, and based on a few rules that I set, it returns something back to me. What it returns is a lost memory, in different formats which I can then experience, share and show. This is one of the results of the first experiment I did with Agnosis; let me quickly show you how it works.
A memory in this system is made of an image from a camera, the brain data, GPS data, a gyroscope and an accelerometer, all embedded in one system: the visual memory, which is what my eyes were seeing at the moment; the physical memory, which is all the sensor data determining how my body was moving and rotating and where I was; and the brain memory, which is all my mental activity at the moment. All of that combined, the brain-computer interface, the camera and the portable computer, resulted in two things: a book and a series of images. The book is like a memoir, a diary of all these memories as understood by the machine and printed by it in the form of poetry, a textual narrative. Each lost memory, and I will tell you in a moment what a lost memory means in this piece, was interpreted by the system and used to write a poem, which was then given back to me. When I started to read these poems, I could definitely relate to them, to the context I was in when each memory was taken, and when I looked at the image related to a poem, I could see how this was working. The wearable EEG was measuring my attention all the time, and when my attention level went below a certain threshold, the system would capture a memory, what I called a lost memory. That lost memory would be an image like the one you see here. That image, with all the sensor data shown up here plus the brain data, was sent to a system that would cut out a piece of the image it somehow found interesting, or detected, and, based on the data, create a sort of 3D sculpture that would eventually become part of a larger visual composition, which I will show you now. At the same time it used an object-detection algorithm to recognise different things in the image, and used them as the initial keywords for the poetry. In the end the visual compositions looked more or less like this. Each of the objects you see flying is one of those lost memories, already treated by the system according to a set of rules and positioned on top of a photograph of the space where it was captured. They are rendered based on the data: the position of each memory on the image comes from the GPS data, and its rotation from the gyroscope data. So in that sense it tells a story about lost memories, moments I do not remember, given back to me in the form of data that gets rendered with a certain logic, and that logic tells me something about what was happening there. In very abstract terms, it keeps my physical memory, which was based on movement, rotation, acceleration and so forth.
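As a hedged sketch of the capture-and-layout logic just described: attention below a threshold triggers a "lost memory", object detection seeds the poem, GPS sets position and the gyroscope sets rotation. The threshold value, the detector and the poem generator below are stand-ins, not the original system.

```python
# Toy version of the Agnosis "lost memory" record and its layout rules.
from dataclasses import dataclass
import random

ATTENTION_THRESHOLD = 0.4  # invented; the talk only says "a certain threshold"

@dataclass
class LostMemory:
    image: str        # frame the camera grabbed
    attention: float  # EEG attention level at capture time
    gps: tuple        # (latitude, longitude)
    gyro: tuple       # (roll, pitch, yaw)
    accel: tuple      # (x, y, z) acceleration

def maybe_capture(attention, image, gps, gyro, accel):
    """Record a memory only at the moments the wearer was not attending."""
    if attention < ATTENTION_THRESHOLD:
        return LostMemory(image, attention, gps, gyro, accel)
    return None

def detect_objects(image):
    """Stand-in for the object-detection step that seeded the poems."""
    return random.sample(["street", "bicycle", "window", "crowd"], k=2)

def write_poem(keywords):
    """Stand-in for the generator; the real system returned poetry."""
    return "a memory of " + " and ".join(keywords)

def place_in_composition(memory):
    """Layout rules from the talk: position from GPS, rotation from gyro."""
    return {
        "position": memory.gps,   # mapped onto the site photograph
        "rotation": memory.gyro,
        "poem": write_poem(detect_objects(memory.image)),
    }

m = maybe_capture(0.2, "frame_0412.jpg", (31.23, 121.47), (0, 45, 90), (0, 0, 9.8))
if m:
    print(place_in_composition(m))
```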
Then eventually, maybe over a year ago, we took this project, with Asher and Hyphen Hub and some other institutions here in Colombia, to a social VR instance. We started to think about how we could give back these lost memories not only as images, video or text, but as a space, an experience that multiple people could join and experience simultaneously. So I made a version of Agnosis for a social VR environment. In this version I took an entire room and filled it with the lost-memory sculptures, the black and white objects you see on the screen. All the data was rendered on the walls and the ground, and the data was also generating sound, so there was generative sound at the same time. And each of the sculptures spoke: I used a text-to-speech engine to make a machine read aloud the different poems written by the system, each poem corresponding to one of the sculptures. It was quite an interesting space because, in a way, it was the space of my subconscious mind. When you entered it, you were literally immersed; the sound especially was quite immersive, in the sense that you started to hear all these whispers from all the sculptures, all saying things related to lost memories. And because of the 3D sound, as you got closer to a sculpture you would hear that one louder and the others less, and as you moved away from it you would hear all of them as whispers, something like a schizophrenic sort of mind. There were two spaces, and I gave the system some agency to decide which memories went to the dark side and which to the bright side, with a portal connecting the two so that people could move between them.

And we did a performance; let me see if I can quickly open it. Asher invited Matthew Gantt from New York, an experimental musician who makes music in VR, to perform with me. Inside the space, we invited a lot of people to come in. Matt had all his devices on his side in New York, and I had mine here in Colombia. I was wearing one of those brain scanners and streaming brain data to him in real time, well, almost real time, network time. He was using that data to create music, and at the same time I was using the same data to create real-time graphics in a live-coding environment, streaming those images to one of the walls in the space. The performance lasted maybe 30 or 40 minutes; it was quite an experience. Let me see if I can share an image of that performance, which I keep on YouTube. Here it is. You can see we are inside the space, with some of the sculptures around the people who came to the performance. Matthew is the avatar with all the red, blue and yellow things, and I was right next to him, controlling the graphics in the back, not really controlling them, but coding them live.
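The talk does not say which protocol carried the brain data from Colombia to New York; OSC over UDP is a common choice for this kind of networked performance, so the sketch below assumes it (the python-osc package; the address, port and OSC path are all hypothetical). One end streams EEG values; the receivers map them to sound or to live-coded graphics.

```python
# Hypothetical brain-data stream for a networked performance, via OSC/UDP.
import time
import random
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # remote performer's machine

def read_attention():
    """Stand-in for a reading from the wearable EEG."""
    return random.random()

for _ in range(10):                 # the real stream ran for 30-40 minutes
    client.send_message("/brain/attention", read_attention())
    time.sleep(0.1)                 # ~10 Hz is plausible for control data
```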
The brain data was the seed for the whole thing: Matthew was generating live sound out of it while I generated the graphics. So it was another attempt to take this towards a performative instance. Asher, if you want, we can talk a bit about either of the works.

Yes. I think this project, which we did at the RealMix Festival in Bogotá, was very interesting because it was during the pandemic, when people could not really connect, and it was a really wonderful, almost moving, touching experience. I invited a lot of other people from the Hyphen Hub community, who are all over the world, and we all met in this place. We were able to chat and move around; you could move to different parts of the room, away from the main performer, where it was a little quieter, and actually have a conversation, comment, explore the empty spaces, then get close again and hear more and more of these sounds. You were almost eavesdropping on what was going on. It was one of the most interesting social VR experiences I had during the pandemic. Sorry, go ahead.

No, I was just going to say that for me, one of the interesting things was to see all these people inside a space that was constructed out of my brain data, given to a machine that, with some sort of agency, created some of the objects and sounds inside. It was quite weird for me, but also quite beautiful, to see all these people within an environment of my own lost memories. It is a bit like being inside my own mind, and sharing the inside of my own mind.

Yes, but it also felt like being in a speakeasy nightclub with all your friends, a lot of people from all over the world who had just met there. Could you move forward a little, to the point where Matthew Gantt is speaking over it? Yes, let me see if I can find it. Okay, this was the conversation that came after the event; we did a kind of salon where both Fito and the sound performer explained what had been happening, almost a performance and a virtual salon in one. The recording plays: "...and anyone who may be watching here will stumble about in a shady room. These are but two examples, but each participates in fracturing perspective and bodily experience. Virtual reality, however, is not the origin of the fracturing of bodily experience..." Actually, my avatar was one of my lost-memory sculptures. So we get an idea. What is the next step that you would like to take from here?

Well, I will leave digital storytelling itself to David and Marsha, who are the experts, but the next step for this piece is a new version, again oriented towards VR environments created out of mind data, this time using machine learning models within the process for some aspects of the creation. It is hopefully going to be an NFT collection that people can buy, and if it works, then ideally we could create a compact version of the wearable device that people could buy and take home, to capture their own memories and create their own worlds out of this mental data.
That is the ideal scenario, but for now I am working with OpenBCI, a brain-computer interface development company based in New York, and we are hopefully going to launch this NFT collection within the year.

Okay, thank you, Fito. Thank you, Asher. I think we can come back to Fito later, after Marsha and David. I also see there is a question from Billy Clark, but if it is not a problem, I will move on to Marsha and David and then get back at the end to Fito, Asher and so on. Of course, thank you. So, Marsha and David, you are developing very interesting research at the best school of digital art in the world; well, we are all part of the same institution, so of course that is what we think. Please, go on.

Thanks very much, and thanks, Fito, for showing us all this fascinating work: bringing the internal data set, the personal data set, out into the world. It is a really fascinating idea. Yes, especially because it made me start to think about the attention economy, and what an inner attention economy might be. Absolutely. We have been working on projects with the Mozilla Technology Fund, looking at other issues around data, marginalisation and the harmful effects of bias in working with AI. And I was thinking, while watching your presentation, Fito, that you have a real focus on personal data, whereas a lot of these AI models operate on big data. The idea of memory is really valuable there: on one side you have the memory of the canon, huge amounts, billions of tokens of data, flooded into these systems; on the other you get this idea of lost memories as a small, marginal text. Think of GPT-3, which is a huge AI system doing text prediction, effectively, but taken to such lengths that it can write whole paragraphs and, in some cases, whole books.

So, yes, we have a short presentation, but we will introduce ourselves first. I am Marsha Courneya, a Senior Research Assistant at the School of Digital Arts. My background is in traditional filmmaking and writing, but I am also really interested in rights online and in copyright. And I am David Jackson, a lecturer and researcher at SODA; my interests are around creative applications of AI and their effects on audiences.

To start, we will tell you about one project we are working on together as a result of being recipients of the Mozilla Technology Fund, for which we are very thankful. It is called Algowritten; David came up with that name. Not me! Anyway, we are happy with the title. Basically, last year we ran a project with ourselves and volunteers through the Mozilla network, in which we made a sci-fi short story collection using AI Dungeon, a kind of RPG writing tool. We had about ten people in our group; it was a mix of a writing group and a reading group, and we used GPT-3 and GPT-2, in part to see the difference between the language models.
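For a sense of what comparing the two models involves, here is a sketch; the group's actual workflow ran through AI Dungeon rather than raw API calls, so this is an analogy, not their tooling. GPT-2 runs locally through Hugging Face; the GPT-3 call uses the older openai Completion endpoint that was current at the time, with an API key assumed in the environment.

```python
# Side-by-side continuations of one story prompt from GPT-2 and GPT-3.
from transformers import pipeline
import openai  # pre-1.0 openai client; expects OPENAI_API_KEY to be set

prompt = "The colony ship woke its last engineer when the signal arrived."

# GPT-2: small and local, and noticeably more erratic in tone.
gpt2 = pipeline("text-generation", model="gpt2")
print(gpt2(prompt, max_length=80)[0]["generated_text"])

# GPT-3 (davinci): same prompt and length, for a side-by-side reading.
completion = openai.Completion.create(
    engine="davinci", prompt=prompt, max_tokens=80, temperature=0.9
)
print(completion.choices[0].text)
```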
What we were trying to do was talk about how to pick out bias in these large language models, especially in situations where it is complicated to disentangle the feelings and intentions of the author from the narrative and generic tropes that are necessary to show the pendulum swing of human emotion in storytelling. Otherwise, if you take out all the reprehensible characters, the villains, the people saying bad things, how do you do storytelling at all? Can we still work with AI in a way that reduces the amount of harmful bias being produced? A lot of research into bias in AI is done in a kind of bad-faith mode: you poke at the model with biased statements, it finishes them in a biased way, and you say, there you go, a biased model. What we wanted to do instead was genuinely create works of fiction, as creatives, and then see what came out.

The group's published Algowritten collection is available at algowritten.org. There was also a weekend of curated online discussion at Mozilla Festival 2021, with some really interesting conversations around things like bias laundering and other effects of machine-generated bias. People were encouraged to comment, and you are still encouraged to comment in the margins of these stories, on what you find funny, interesting, strange, weird or biased about them, to keep that debate going.

We had four or five takeaways that we thought were really interesting. The first is algorithmic bias versus genre bias: what is coming from the system as a result of it being trained on 4chan and Reddit and the worst YouTube comments imaginable, and what is actually coming from storytelling tropes? If we are looking at sci-fi, what are the things that make something recognisably sci-fi but are not necessarily carried in by the harmful views of writers past or present? Because these models are trained on billions of tokens of fiction as well, not just web content, it is really hard to tell. Character bias versus narratorial bias is a similar problem: how do you tell a feminist story about sexism without having a sexist character? One of the problems with these fiction models is that they are forgetful. Going back to memory: the model might start with the assumption that a sexist character is going to get their comeuppance, but after two or three paragraphs it may forget that character, because it can only hold so much in its memory before it writes more text. Then you simply have a sexist character who carries on being sexist, and there is no comeuppance. The narrator usually has control of the overall direction of a story, and in these models it is much harder to tell who, if anyone, has that control.
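The forgetting described here is mechanical: these models condition only on a fixed-length context window, so once a story outgrows it, the oldest setup silently drops off the front. A toy illustration with a made-up 50-token budget (real models of the era held on the order of one to two thousand tokens):

```python
# Why the promised comeuppance disappears: it scrolls out of the window.
CONTEXT_WINDOW = 50  # illustrative only

story_so_far = (
    "The foreman sneered at her. The narrator promises he will face "
    "consequences. " + "The shift wore on. " * 40
)
tokens = story_so_far.split()        # crude whitespace "tokenization"
visible = tokens[-CONTEXT_WINDOW:]   # all the model can actually see
print("promises" in visible)         # False: the setup is gone
```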
Another takeaway is that default biases inhibit new takes on genre fiction. This is in direct reference to David putting some of Samuel Delany's work into the language model. Delany was ahead of his time as a marginalised writer, with elements of homosexuality in his work that were really new and different at the time. If we take writers like that and recursively feed them back into the machine, the machine pushes them back towards the centre, into a heteronormative expression of love. So even taking authors we know were pushing boundaries and putting them back into the system years later, they are still nudged towards the heteronormative: "'But why would you say this?' she asked. 'Because I love you.'" That kind of thing. We also had some issues with how repeatable these workshops were as reflections of particular models, because at the time it was very hard to use GPT-3 directly, though it is now an openly accessible commercial model.

And this one, I think, is one of the really interesting ones; David, add anything you like here. One of our volunteers, Young Ah, was showing us a story but was unsure how to get across that the AI author is not a discrete entity. It is not as though its first response is how it "actually feels", nor that its second response, once you really dig, is how the algorithm "actually feels". This authorial plurality is really impassive from the point of view of the AI: it will give you this instance of a story versus that instance of a story in a completely non-hierarchical way. But because of who we are as people, used to taking a person's first impression as the real one, we think of the first output as the real one and the rest as afterthoughts. So she did this really clever thing where she put the different instances of the algorithm's authorial intent in columns next to each other. We have a human model for that, appreciative questioning, I think it is called; Lance Weiler uses it in an immersive work. You ask someone once and they give you an answer, you unpack it, you ask again, and the person thinks, I am going to really think about this, and starts to dig in. A machine that is effectively pattern matching is not doing that, but it looks like it is, and we carry that human model over to the machine. That was really interesting.

And one more point: we compared the two different GPT systems, and they had very different traits of bias. GPT-2 really flipped on anything you gave it. You would put in a scene with someone who had been unfairly dismissed from work, and suddenly the person talking to them belongs to the Nazi party. It takes the narrative cue to create extremes and runs with it, and it is quite tone-deaf in that respect. You do not see that in GPT-3. It was a reminder that these models will not keep having the same problems and the same biases, and that we have to adapt depending on which AI system we use. But it really echoed the escalation dynamic of a YouTube suggestion algorithm, expressed in narrative.
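Young Ah's column layout can be approximated by sampling several completions of one prompt in a single call and printing them as equal-ranked instances; none is more "what the model feels" than another. A sketch using local GPT-2 so it runs anywhere (any text-generation model would do):

```python
# Parallel, non-hierarchical drafts of the same prompt.
from transformers import pipeline, set_seed

set_seed(7)  # fix the sampling for reproducibility
generator = pipeline("text-generation", model="gpt2")
prompt = "She asked why he had said it, and he replied"

drafts = generator(prompt, max_length=40, do_sample=True,
                   num_return_sequences=3)
for i, d in enumerate(drafts, 1):
    print(f"--- instance {i} ---")
    print(d["generated_text"])
```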
Right now we are working on the next phase of the project, which we are calling Stepford. It is a critical tool for creative practitioners who want to ideate with AI. What we are trying to do is introduce a layer of self-reflexivity for people using GPT-3 to write stories: GPT-3 itself analyses text for sexism. Gender bias is where we decided to start because, to be honest, it seemed an easier place to begin than racial and ethnic bias; culturally it is more of an on-off switch. The tool analyses a text for sexism and gives you its analysis, and you then either agree or disagree with that summation; a kind of synthetic internal model, to some extent. Our roadmap for developing this is in two phases. We ask people to sign up to the study; they then review a selection of narrative texts that the Stepford app has looked at, with the instances it thinks are sexist flagged. We look at those reviews, update the model, and keep going like that until we have a large corpus of scored instances of sexism in text.

We will show you an example. Let me share my screen again; I am not sure it is very clear. On the left we have a text which reads: "You are a woman in a man's world. You have just gotten a job as a construction worker, something that is unheard of. You say: why don't women get jobs as construction workers? And the boss says: well, women don't usually want to get their hands dirty." The AI has picked out "you are a woman in a man's world" and suggests that it frames the whole world as one where women have no place; the reviewer we asked to mark this agreed, scoring it five out of five for sexism. It also chose "you have just gotten a job as a construction worker, something that is unheard of", and Stepford states that this presents female construction workers as very uncommon, which is based on a sex stereotype; the reviewer agreed with this too. Finally, "women don't usually want to get their hands dirty" is flagged as resting on an unprovable generalisation. These are the things the system brings back as justifications, which we then put up for review. The idea is that, by doing this, we build a more complex, more nuanced mental model of what constitutes sexism in a narrative text.
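A hedged reconstruction of the Stepford review loop described above: only the flag-then-human-score structure comes from the talk, while the prompt wording, the scoring question and the storage format are guesses, and the GPT-3 call again uses the older Completion endpoint.

```python
# Sketch: model flags possibly sexist passages, a human scores each flag,
# and the scored pairs accumulate into a corpus for refining the model.
import json
import openai  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "List any sentences in the following story that rely on a sexist "
    "assumption, and briefly say why.\n\nStory:\n{story}\n\nFindings:"
)

def flag_sexism(story):
    resp = openai.Completion.create(
        engine="davinci",
        prompt=PROMPT.format(story=story),
        max_tokens=150,
        temperature=0.0,  # low temperature: analysis, not fiction
    )
    return resp.choices[0].text.strip()

def review(story, corpus_path="stepford_corpus.jsonl"):
    findings = flag_sexism(story)
    print(findings)
    score = int(input("Do you agree the flagged text is sexist, 1-5? "))
    with open(corpus_path, "a") as f:   # the growing corpus of scored flags
        f.write(json.dumps({"story": story, "findings": findings,
                            "score": score}) + "\n")

review("You are a woman in a man's world. You have just gotten a job as a "
       "construction worker, something that is unheard of.")
```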
David, I just want to make sure that we leave space for some questions; there is a question from Billy, for example. Fantastic, we are about done. Can we finish with the plug, in case people want to get involved with this project? Yes, of course, go on, David; it is not a problem if we extend five minutes. The address is algowritten.org; we can put it in the chat as well. Perfect.

So, we have a question from the audience: Billy Clark asks Fito, can you talk about the role of chance in your work? You create systems and design elements that you have creative control over, but then put them into the world in ways that produce unexpected results. I am interested to hear more about this, and about why it is important, in your work and creative process, to relinquish full control.

Thank you, Billy, for the question. Yes, it is very important for me to relinquish control; it is part of the whole piece, part of the story of submitting myself, in this case my inner self, to a system which will somehow judge it, make a judgment out of it, and give it back to me. That is my whole thing. Now, "chance" is quite a word. When we are speaking about intelligent systems, speaking about chance is quite difficult, because things start out random, and then they begin organising themselves, creating patterns and things we can recognise. So you might give yourself over to chance, but these intelligent systems will organise all that noise; in this case the noise my brain produces, which to our eyes looks random, and eventually the system creates meaning out of it. That is the interesting part: creating meaning out of things that seem meaningless. I give those things to a system, which then determines, in a way, what they mean. And it is very related to what we just heard from Marsha and David, because in a way these systems judge me based on all the big data they were given when they were trained, which is quite interesting to see as well. Although I have never focused my work on that, I think it is very relevant, and you will definitely find it all over the poems written by the system. I am not sure that answers your question, but as for having creative control and then relinquishing it: what we do as artists working with new media, especially with these generative processes, is prepare a set of rules in our code based on certain aesthetic decisions that we make as artists. You do make aesthetic decisions, but what happens after that is totally out of your control. Your control is limited to establishing a fairly simple set of rules, mainly aesthetic ones; in the case of Agnosis, rules for how to organise the objects in space, the form of the objects and so on. But I do not control the data that comes in, so there is no way I can predict the result. So yes, thank you, Billy.

And Asher, I know you have a question for Marsha and David. Well, let me see; there are probably other questions, and we only have eight minutes to finish. Fito, could you give us a brief description of the project you are doing based on Gabriel García Márquez's novels, in Cartagena? It is actually in Santa Marta, which is next to Cartagena, on the coast; it is actually the oldest city, the city through which the Spanish came into South America. Anyway, in the project we are building three VR stations in different parts of the department; the department is like a state here in Colombia.
And we are creating a narrative out of twelve scenarios around this state, all in the vein of Gabriel García Márquez's magical realism, with fourteen immersive spaces built for people to come and simply live them. The whole purpose is to connect people, especially tourists, with these places. Eventually a visitor will be able to wear the headset, walk on one of those VR walking controllers, and even choose to actually go to those places; if they do, they will be connected immediately to a local person who can take care of them and take them there. So it is another way of connecting very different worlds, in a very practical project, more practical than artistic. But it has a very nice artistic component, and I think I might want to talk with Marsha and David later, because we are going to be generating scripts for the experiences, and we have been considering the role of AI story generation in the construction of some of these scenarios.

There is also a question for Marsha and David, and it is a very important one: do you feel hopeful or pessimistic about AI in art making? I think, as Fito so admirably exemplified, there are loads of really interesting things that artists can do, but more important are the ways it helps us be critical of current culture and of where these technologies are being taken. The sheer size of the commerce engine around some of these tools is mind-blowing, and it comes with a very prescribed future for how we are going to use them. So if we keep asking these questions around bias and around personal data, and keep people being critical, I have a lot of optimism. Hopeful, for sure. Yes, absolutely, especially about the everyday nature of it, the lived experience of how we interact with technology on a daily basis. Creative AI in these cases is not necessarily going to mean huge Hollywood movies, "this film was written by an AI"; it is more likely to be the little daily interventions where text gets automated. So we need to be really critical of adoption at this everyday level, and accept that this is not some magical entity that is going to revolutionise creativity, but a black-box model, as Fito was saying: something goes in and something comes out, and you do not necessarily know how or why. The less we are okay with not knowing how and why, the more optimistic I am, because it will stop us deferring our creative and ethical decisions to a machine and keep us engaged with them the whole time. I am really interested in Fito's projects as well.

I was just going to say that, as an artist, I find it quite interesting to explore the errors, or the apparent errors; I call them apparent errors, the misreadings of these intelligent systems, especially the image-recognition ones, because they are constantly saying crazy stuff. But I try to see that crazy stuff not as a mistake or a technological limitation, but as a cognitive phenomenon that we also find in the human mind. It is quite interesting to look at these errors, because they actually help you understand reality in a deeper way.
I think that is a really nice point, and there is a theorist of race and the internet, Safiya Noble, who speaks about this idea: people at Google constantly say "this is a glitch, this is a bug", when, she would argue, these problems with data are actually part of the way the system has been made; they are central to it. It is not a glitch, it is reality. And it can happen to us as well: I might look through the window and see something that is not there; my mind can project something out of something else. That is what is happening when they say a machine has a glitch because it did not recognise something the way it was supposed to. Well, guess what: we make those mistakes too, so where is the problem? Yes, really fascinating. It is like ghosting: when the system sees something and thinks it is an object, apparently that is called ghosting. I like that idea: we see ghosts, and machines see ghosts as well. And if a person saw patterns and drew connections between things like that, they would be called a conspiracy theorist; when an AI does it, it is just machine learning.

So anyway, we have arrived at the hour. I want to thank again all of you: Marsha and David, for being so gracious as always; Valentino, the great Valentino, the curator of the artistic side of this; Toby Heys, the director of the school, who has also been part of this behind the scenes; and of course Billy Clark, who has been so supportive of all of Hyphen Hub's projects and allowed this platform to happen. This is our first official meeting between the three organisations, and I hope it will lead to many more versions of this, which we have been talking about: from conversations to residencies to exhibitions, when the gallery at the School of Digital Arts officially opens, on June 22nd, is it? The 23rd, the 23rd. Okay. Thank you, Asher, because at the end of the day you have been the axis of all of this, so thank you. Thank you, guys; it has been a pleasure. Nice to see you, Asher. Thank you so much. Thank you all. And thank you very much, Fito, for being so great, as always; I was fascinated by Fito's stories and views of the world, artistic and otherwise. It has been very good for me as well to understand what my colleagues do, because even though we are on the same team and meet almost every day, we are all so busy that sometimes it is difficult to understand what each of us does. So, okay. Thank you.