And welcome to day eight of the Level Up Symposium. My name is Andrew Scraver. It's my pleasure to welcome you to a very special event and in-progress demonstration of the new opera, I Am Alan Turing, presented by the Associated Designers of Canada, with support from ToasterLab's Mixed Reality Performance Atelier. I am one of the co-curators of the symposium and a member of the ADC, and I am very excited to be your host for this event. I'd like to first acknowledge that I am coming to you from the settler city of Montreal, which long before colonizers arrived was a place of conference, conflict, and creativity for many Indigenous peoples, including the Anishinaabe, the Huron-Wendat, and the Abenaki peoples. This land is known by its current caretakers, the Kanien'kehá:ka Nation, as Tiohtià:ke, which means "broken in two," because of the way the St. Lawrence River breaks around the island. I am honored and humbled to be here to share and create with you all, so I offer my thanks. In the spirit of gratitude, I'd also like to bring your attention to the chat window of the Zoom room, where you'll find a land acknowledgement created specifically by our presenters today. I'd also like to acknowledge the support of the Canada Council for the Arts, the primary funder of the symposium as a whole, as well as our other sponsors: IATSE, the University of British Columbia, Theatre Alberta, the CITT Alberta Chapter, Concordia University, Ryerson University, York University, and of course all of our individual donors. Thank you all very much. For the information of those of you here in the Zoom room, this event is being recorded and streamed live, and will be presented in a freely available archive on our website within a few days of the event. Thank you to everyone joining us out in the streaming world as well. 
So you are also watching this live stream either on the Level Up website, which is levelup.designers.ca, on HowlRound at howlround.com, or through our partners at ToasterLab on the respective Facebook pages of the ADC or ToasterLab. Now, regardless of your viewing platform out in the streaming world, embedded on the same page as your video is a chat function in the top right-hand corner of your screen; your questions can be asked in the chat anytime and will be read out to the presenters later on in the presentation. For those of you here in the room with us, you can add your questions at any time in the chat below. For this event, unfortunately, there will not be access to live captions; however, the archive of the event will include that option. We apologize for any inconvenience this will cause. If you require technical assistance to support your access to this event in the Zoom room itself, you can chat directly with Patrick (Level Up Tech) in the chat window, or, in the streaming world, please email levelup@designers.ca for immediate support or to provide feedback following the event. If you enjoyed this session or any of our sessions of the Level Up Symposium, please consider donating to the Associated Designers of Canada to support our National Arts Service Organization in achieving its goals of advocacy, mentorship, and industry promotion. Donation links are available on all of our viewing platforms, on our website, the ADC's website, or on CanadaHelps.org, so please consider donating. Thank you for your patience with all of our announcements. This presentation is best experienced with headphones. All viewers within Zoom, could you please switch your screen to gallery view; you can select that in the top right-hand corner of your window. Please also make sure to mute yourself, turn off your video, and select "hide non-video participants." 
You can do this either by clicking on someone who does not currently have their video running, clicking the three little dots in the corner, and selecting "hide non-video participants," or you can go to your video settings: next to "stop video" in the bottom left-hand corner of your Zoom window, click on the little arrow, go into your video settings, and there's a little checkbox for that. So this presentation will be starting with an audio piece, and there will be a designated Q&A session at the end of the presentation. But as I mentioned before, if you have any questions, please do share those at any time during the presentation. And I think that's it. So thank you very much and enjoy the presentation. Can machines think? Yes, if a human can be constructed to answer this question, why not a machine? The idea that machines can think is not new. Charles Babbage, Alexander Graham Bell, George Bernard Shaw, among others, were interested in the idea. They thought that a man could not be so stupid as that machine. You cannot make a machine to think for you. You cannot make a machine to think for you. You cannot make a machine to think for him. You cannot make a man to think for himself. We can only hope that machines will eventually compete with women in all fields because a collective human-machine conflict will always be, to some extent, unavoidable. Can machines think? No, consciousness comes from the idea that we are collectively involved in the process of creation. From this idea, we learn about what to think, what not to think. After all, I feel a lack of expectation. The idea is kind of feeling, but I think many, many things I'm thinking about. The moral of the story is how we must strive to protect every human being to come to the last. The universe moves on its own. Can machines think? Natural. I'm ensuring. And I'm writing this down. And I'm writing this for you. Universe, baby. Is on my tape. Hello, everyone. 
Thanks for joining us today, tonight, wherever you happen to be. If you've just joined in the last few minutes, we'd recommend switching your Zoom display to Gallery View or checking the instructions in the chat. My name is Hugh Farrell, and I'm a dramaturg and a producer. More recently, I've turned my hand to UX design as the tech world has discovered the benefits of dramaturgy. If you can't tell from my accent, I'm sitting in my apartment in Dublin, Ireland tonight. And I want to say a huge thanks to Andrew and Emily and the whole team at the Level Up Symposium for bringing us together despite our distance. We're delighted to share our project with so many of you here on the Zoom call and on the live stream tonight, and we're excited to hear your questions and feedback. We've set aside a whole half hour at the end for questions, so if questions come up along the way, please just put them in the chat and our moderator will collect them all for the end. Today, we're going to tell you about an opera we've been creating called I Am Alan Turing. I'll give you some background on the project first, and then I'll introduce you to our creative team, who are going to lead a kind of show and tell of our process and our work so far. Our opera is based on Alan Turing. For any of you who aren't familiar with him, he was a groundbreaking mathematician and cryptographer who put forward the imitation game, or the Turing test, for artificial intelligence. Turing famously cracked the German Enigma codes during World War II, and in the process invented the computer, something he had imagined and published the theory of years before. He was also a biologist and was fascinated by the appearance of numbers everywhere in life, the prime numbers and the Fibonacci sequences in particular. 
He used his first computer at Manchester University in the 50s to calculate the chemical bases for the patterns in leopards and zebras, now known as Turing patterns. Tragically, he took his own life aged 41, after he was sentenced to estrogen hormone treatment as a corrective punishment for being gay. For two years, we as a team have been researching Turing and imagining ways to create a Turing test for the theater, blurring the lines between machines and minds. Our process has led us to the archives at King's College in Cambridge, where we read Turing's papers. In the typescript of his 1950 paper, "Computing Machinery and Intelligence," where he proposed to consider the question "can machines think?", we noticed his handwritten addition: trying to understand what the machines are trying to say. We'll pop a link to that Turing archive in the chat so you can see it for yourself. Over the last two years, the state of the art in artificial intelligence has exploded. In February 2019, OpenAI in San Francisco released GPT-2, a natural language processing algorithm. GPT-3 came out last year, and we've been working with developers at Yale's Digital Humanities Lab to run an instance of GPT-2, which we trained on the world of words Turing encountered in his life. The GPT-2 algorithm learns to express apparently meaningful sentences based on material it has read. We wondered if we could collaborate with an AI like this to generate the libretto for an opera. Today, we're gonna give you a demonstration of how far we've come on that journey and the new directions we're discovering along the way. Since March of 2020, we've been meeting as a technical, producing and devising group three times a week on Zoom. It seems fitting that the computer has become the venue for our interactions on an opera about Alan Turing. What you've heard and seen so far this evening, at the start of our presentation, is some of what we've been creating. 
We're working in a virtual space, but our intention is to create a live performance with opera singers, a full choir, live electronics, an orchestra, and all the bells and whistles of theater. Along the way, we're interested in creating digital ways to engage our audience and feed into our process, and today, or tonight, is one of these moments. Thanks again for being here. Keep the questions coming in the chat and we'll collect them in the Q&A at the end. But for now, I'm gonna hand off to the rest of the team to introduce themselves, and we'll begin with our composer, Matthew Suttor. Hello everyone, my name is Matthew Suttor. As you said, I'm the composer on the project, and I'm here in New Haven, Connecticut, where I teach at Yale. I pitched this idea to Hugh and Vlad originally, two years ago, with the idea that we create a monodrama around Turing. So the idea is that it's not a biographical piece, but it's about his ideas. And I wanted to use live electronics. I'm sitting here in my studio surrounded by Moog synthesizers. And I've always been taken, as a composer, by the way in which small numbers behave. This is very much part of Turing's writings. And of course, small numbers, to musicians, mean rhythms and intervals. So the piece that you heard first is based on the interaction of prime numbers, and overlaid on top of that is text that was generated by GPT-2. So the text is entirely produced by an artificial intelligence. And then we segue into a rather shocking discovery for us. We had a brief go on GPT-3, which is supposedly one of the most powerful natural language models available. It's not available to the public just yet. And we asked it to write a sexy song in the style of Britney Spears about Alan Turing, and three seconds later, it produced that song. So here we are, following a devising process using this text, and in some ways really just following where the process takes us. 
And then the last piece, which was written over the weekend and sung gloriously by Sola Fadiran, is a text that Turing wrote called The Nature of Spirit, in response to the death of his high school friend Christopher Morcom. So we're in an interesting place where we have a mixing of aesthetics: we intend to use live instruments, and obviously we're going to be using live electronics, which is a crazy world to be involved in because it's so performative. This is a collaborative piece. I'm working with Frederick Kennedy, who is a wonderful percussionist; Liam Bellman-Sharp, who is a composer and a singer; and, as already mentioned, Sola, who is an actor and an opera singer. And we're working somewhat akin to the way in which a band might work, where we have a kind of group process, which is largely to do with the circumstances in which we find ourselves right now, in the middle of a pandemic. But it's also because the work is driving us to do that as we engage with this AI. I'd like to hand over to Dakota Stipp, who is going to talk more about GPT-2. All right, thanks. So as Matthew said, I'll introduce GPT-2 as one of our collaborators. But before I do that, we'll go around our group here and introduce ourselves. So I'm Dakota, he/him pronouns, and I'm a designer and software developer, working on the project in a number of ways. Fred, you wanna go next? Yeah, hi. My name is Fred Kennedy, fellow Canadian, hello to all the Canadians. Nice to virtually be back in my home country for a moment. I'm a percussionist, sound designer and music producer, also based in New Haven just a few blocks from Matthew. Hey, I'm Tyler Kieffer. I'm here in Brooklyn, New York. I'm one of the few sound designers we have on this project, and a theater maker. Hi, I'm Julia Schaefer. I'm originally from Switzerland, but tuning in from Brooklyn. I'm a graphic designer and recent graduate of the Yale School of Art. 
And I support the team with all sorts of graphic inputs and visual communication. Hi, I'm Emily Riley, calling in from Brooklyn, but originally from England via Ireland. I'm a dramaturg and also a creative producer, and I know something about communications and publicity in the theater world as well. I'm working with the team on many different things right now, as we're still very much in process. Hi everybody, I'm Vlad Vojno. I'm a theater maker and visual designer, normally working primarily in video and projection technology. Like Matthew said, I've been engaging with this for a couple of years now. It's unbelievable that it feels like it's been that long, and it's been really exciting. I'm currently based in Vancouver and I teach at Simon Fraser University. Hi, I'm Liam Bellman-Sharp. I'm originally from Australia, but based in New Haven with some of the other folks here at the moment. I am a composer, sound designer and musician, and I'm working on this project in those capacities, as well as doing some studio and synth wrangling. Hi everyone, I'm Madeline Pages, she/her/hers. Currently calling in also from New Haven. I'm a dramaturg and currently a student in dramaturgy at the Yale School of Drama. Hi everybody. My name is Sola Fadiran, and I'm an actor currently studying at the Yale School of Drama, and I'm a singer... no, not a singer. I'm also, most recently, a writer and director, and I'm supporting the group by singing and acting and devising theater making. Okay, so there's our team, and one more member to introduce, and that's GPT-2. So I'll talk a bit about that. GPT-2 and GPT-3 are both natural language models, and at a high level, a natural language model works by decomposing language into tokens and then creating a graph of probabilities that a given token will follow another token, based on its input. 
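As a rough illustration of the token-probability idea described here, a toy bigram model can be sketched in a few lines. (This is only an illustration of the general principle: GPT-2 itself is a transformer neural network trained on vast amounts of text, not a simple frequency table, and the corpus below is invented.)

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count, for each token, how often each candidate next token follows it."""
    tokens = text.split()
    follows = defaultdict(lambda: defaultdict(int))
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, length=8):
    """Walk the probability graph, sampling each next token by its frequency."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # no known continuation for this token
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

# Tiny invented corpus, standing in for "the world of words Turing encountered".
corpus = "can machines think can machines learn can machines dream of machines"
model = train_bigram(corpus)
print(generate(model, "can"))
```

Every sentence this toy produces is stitched from transitions it has actually seen, which is the same basic intuition behind the "apparently meaningful sentences" the team describes, scaled down enormously.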
So, with Doug Duhaime at the Digital Humanities Lab, and also with Farid Abdul, who is an intern with the Center for Collaborative Arts and Media at Yale, we created an interface for the team to interact with an instance of GPT-2, the language model by OpenAI mentioned earlier. And that's generated a bunch of text for us. We also had a brief opportunity to interact with GPT-3. A lot of this output is a little meaningless without context, and we'll show you what it means; but when it's spoken by a person, and when it's sung, it takes on a kind of impressive human quality that we're all fascinated with. Our devising process has been inspired by these conversations with these models. We've tried to figure out how to get our own minds to generate language, taking inspiration from Einstein's idea of combinatory play, where he would play the violin to free his consciousness of logic in order to form new ideas. We developed a series of games that we play in an ongoing way with each other, based on generating words, imagery and dramatic scenes impulsively, with no filter, in the moment, using loose conceptual associations and other kinds of parameters. The idea is to create rules for ourselves that reflect the rules within which a natural language model operates. Our aim is to think without thinking. To give you a sense of what the models produce, what they're capable of, and how we're using them, we'll now read a few selections to demonstrate the range and the kind of content that they generate. What I'm about to read was created by GPT-3, and the prompt was to write a poem in the style of Dr. Seuss. When a packet hits a pocket on a socket on a port, and a protocol is defined in a transport on a host, you know precisely what the CPU is going to do. Now a packet hits a pocket on a socket on a port, but the protocol has the wrong address, or the queue is full, or the pocket's ripped, and the packet hits the deck. 
He's not out, he's not safe, he's not even in. He's in the ether somewhere and the CPU waits for him. This is what we call a hung state and it is bad. If the CPU could talk he would say more haste, less speed. And now I'm gonna read a response from GPT-2, our model that we trained on different things that Turing had written and read in his lifetime. We posed it the question: what is art? And its response is: art is the process of making something beautiful. It is the most important of the human activities. The most important human activity is the making of art. The most important human activity is the making of art. The human activity of making art consists of a series of steps. In the simplest form we have the following steps. One, select the material. Two, measure it. Three, mix it. Four, form it. Five, place it in the display. Six, make it look like a work of art. Seven, make it available for sale. Eight, make it available for reproduction. Nine, make it available for comparison and evaluation. Ten, make it available for comparison and evaluation. The human activity of making art is the most important human activity. It is the most important human activity because it is the one which is most difficult to perform. It is the most important human activity because it is the one which is most important to judge. And here's an example, also from GPT-2, where the question posed was why we would need machines. And the response was: we needed the machine to make the human being comfortable. It is to give them something to have with them on the job that is safe, comfortable and satisfying. I mean the very human feeling of having good food every day and having a comfortable bed every night. If we had built the machine that did that, we could have built something that would probably be useful for a long time. That is why I think there can be no doubt that someday there is going to be a great improvement in the quality of life and in the comfort of life. 
But I have no illusions about the future. This, I think, was one of the earlier samples that we received from GPT-2, and it says: you cannot say I do not believe in God, because if you did, the machine would say, I have no reason to believe in God, but you do. This is also from GPT-2. I don't have the prompt for this one, but it is presented with an explicit language warning, with no comment, and verbatim. I put it to you that you are only pretending to be a man. You are not a man. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. You are a piece of shit. So there's a fun variety in some of the responses we get. And now we'll move to a kind of live demonstration of our devising process, or a taste of it, to see how we input some of the language model text into our own work. Great, and as Dakota gets ready to lead us through the devising session, which we are doing live (we have no idea what the prompts are that he's prepared for us today), Dakota is kind of our devising spirit guide; we meet with him every Friday as a group when we co-write together. And so we're gonna be sharing this devising process with you by screen sharing, so you can watch us right in a Google Doc. For those of you at home watching on the live stream, you'll get a link to the Google Doc so you can watch us in real time. And for those of us in the Zoom call, we'll screen share with you right now. And as you watch us work, I invite you to pay attention to words, phrases and ideas that resonate with you, and to look out for language tics and patterns you see developing. 
And we'd love to hear about what stuck out for you during the conversation part of today's presentation. So here we go, over into the Google Doc. All right, looks like we're mostly accounted for. I think I'm missing one name, perhaps. And we'll have people join with the view link as well. So, all right team, we're gonna jump on down to the bottom here where you see me typing; this is Dakota. And we're gonna start with our first question, our first prompt, with about one minute to respond. All right, we're gonna start to wrap it up, finish your response. Great, now we're jumping on down to the bottom, a couple of spaces down below. We'll move on to the next question. Begin to wrap this up, your thought there. We're down to page five now. Our next question. We're gonna start to wrap this one up. Begin to wrap up your thought. Excellent, thanks for playing, team. So now we're just gonna jump up to the top of the document and read our responses to each question in a rotating order. So we'll start with our first question here on page two, who is Alan Turing? And Matthew will begin us with the first response. Man who invented computer. Lonely, lives on, punished by UK government. Luminary. Coldest computer, baby. Dead. Alive in sauce. Emily, you're muted. Yeah, I had to run for my charger, so someone take my go this time. Yeah, gay. Genius. Cyclist. Visionary, visionary. Code breaker. Inner life. Mystery. So British. Schoolboy. Romantic. He enjoyed nature. Biologist. Fear itself. Not me. Yeah, go ahead. It looks like we broke our Google Doc this time around. Yeah, we did, because I can't grab that text. So let's jump on to a marathon runner, whoever was next. A marathon runner. Not free. Enjoyed running. A perfectly ordinary homosexual. A mathematician. Scholar. A hero. Convict. Homosexual. He was a homosexual. Cryptologist. A code breaker. A patent seeker. Brilliant man. Mathematician. I think he was involved with World War II. A humble human. 
Alan Turing is dead. He has been dead for quite some time now. He is survived by the things that bear his name. Humanist. Not a party animal. Awkward. Alan Turing rode a bike with a gas mask on. The gas mask was on Alan, not on the bike. He rode the bike through England. And next question, what is opera? Stuffy. Extravagant. Big feelings. Expensive. Spectacle. Luscious. Often about women killing themselves. They are sad about men. Sometimes long and slow form of song. Grand in its presentation and budgets sometimes. Boring. Big drama. Dead. Voices. Everything. Vikings. A way to explore the human. Loud. Irrelevant. A warm soaking. Lush. Cold sometimes. Explosions. Sum of its parts. A total work of art. A Turing test. Song drama. Music does the work and the voice plays the emotion. Can be improvised or not. Opera is when people sing aloud. Crazy people with crazy ideas. You can't sing that loud and not sing opera. People mostly don't use microphones, but sometimes they secretly do, even at the Met. Emotions. As a word, opera just means work, as in opus or operate. An art form involving loud singing and very elaborate set design. What is intelligence? Intelligence. Oh, sorry. Are we going round or are you going to take over, Matthew? Intelligence is bodily. Intelligence is a kind of spirit. Intuition. Experience. Real life experience. Aptitude. An evolving thing. Not set in stone. Don't want to, don't you want to know? Ideas, honest. Secrets. Amassed. Quoted. To know or to not know. Power. Hidden. Money. The economy's stupid. Elon Musk. An agency somewhere. Something machines can have, and humans. Understanding the universe or oneself. Do machines have selves? Can they understand their selves? Generally, a correlation of what is in our minds that has been collected by the brain through the use of our sensory inputs, that begins to create self-awareness. Social trust. 
Intelligence is the ability to build a model of the world based on your experience of it and predict what will happen next in the world. Reflecting deeply on something. The ability to differentiate. What is natural intelligence if artificial intelligence is a thing? It's what makes me human. The force behind all that is. The voices in my head. What does it mean to trust? Safety. To believe in safety. To believe you are held. To believe the pain will be manageable. To be brave in growth. To be in communion with others to grasp at hope. Faith. Understanding connection and knowing. Security and the knowledge that truth is safe between us. Feel the fear and do it anyway. Freedom. Love. Humility. Continual process. Dispassion. Fear itself. Not being able to produce. Criticism. Being shown myself. Falling. Being connected to the people you love. Catching. Vulnerability. Danger. To trust is to live fully. A togetherness without fear. Our choice to choose to believe. Giving without expectation. To let go. To forgive. Part of being human. To look each other in the eye and know truth. A sickening feeling that you might be wrong. Trust is money. Believing the model you have made of someone in your head to be accurate. A reason to live. If your mind is in peace. The opposite of antitrust. The opposite of Jeff Bezos. Trust is like when a cat slow-blinks at you or closes its eyes around you or lets you touch its feet. Like jumping off a bridge. Unachievable. What are you afraid of? Afraid of being discovered as a fraud. When I was eight, I used to have trouble sleeping. I would stare up at the ceiling and have nightly terror thoughts about the fact that the universe is so big. Inability to act. That those I love will find out I'm a phony. Being stranded, being lost, floating off into space. Poisonous spiders. White supremacy. Nuclear holocaust. Imprisonment. Sorry. Nazis and their enablers. 
Losing someone else's mind. Imprisonment. Being all alone. Being found out. That the sadness might creep in and never leave. Hordes of rats. Solitude. Running out of money. That when I rub my hands into my eyes, there will be chili on them. Ignorance. Sinkholes. Losing mum. Ineffectuality. That brings us to the end of our list. So normally our next step after this would be to begin to play an associative game, where we try to connect bits and pieces from the different categories and prompts that we were given and try to come up with larger structural scenes around those things. But of course, for time, we won't do that today. And now I'll hand it over to Vlad and Julia to talk a little bit about the visual aspects of the show. Thank you, Dakota. Vlad will share the screen in a second. So what you see here is the website for the project, which we are developing simultaneously with our weekly devising sessions. We'll post the link in the chat; feel free to sign up. So Vlad is signing up here to get updates on the opera and so on. And as you enter the website, it all kind of builds itself up automatically. The website functions as a trailer in the first place, as well as informing about the project, introducing the team and listing sponsors. So yeah, when you access the website on your computer, you will notice that it plays the trailer song that you heard at the very beginning. It's a one-pager that starts at the top and scrolls down. And we were looking for a cinematic feel that combines the text from the devising with images that we converted to ASCII, and Vlad will elaborate on that in a second. The layout is all made in Sublime Text, which is a source code editor; you see the row numbers on the left indicating the rows of code. And the layout and the typefaces we used are inspired by the original drawings from Turing and simulate typewriter outputs for reports and technical documentation. 
And yeah, the website will become an ongoing archive of the work, so keep looking into it. I will share my screen now, and Vlad will talk a little more about the ASCII. Yeah, so this is one of the things that I was working on in the depths of the pandemic, sitting at home. We were doing so much in training the AI, our GPT-2 model, with papers and different texts, essentially creating the model based on this text that we had. And I was curious (and I think Julia will show you some of the other things that we've done with AI created particularly for image manipulation and generation) whether I could get the language model, which is based on text, to work with images. This is how I arrived at what we call the ASCII, which is converting images into text. What you're seeing right now is just a quick little demo video of what happens when you take images, like these apples that you saw at the beginning, convert them to text, input them into the language model, and train the language model with these images. It's ultimately just recognizing patterns and then building, based on some variables, essentially different iterations or combinations, based on probability, of that image. For example, in the images that you're seeing right now, you see the apple getting more and more deconstructed. What's happening is that I'm grabbing the apple that has been turned into text and turning up the randomness variables in the engine. Then I've taken photographs of that output and put it inside of TouchDesigner, and in TouchDesigner I can VJ, manipulating live this animation that's being created of this deconstructed text apple. 
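The image-to-text step described above can be sketched very simply: map each pixel's brightness to a character from a dark-to-light ramp. (This is a minimal illustration only; the actual tooling, character ramp, and resolution the team used are assumptions, and the "image" below is a synthetic stand-in for their apple photographs.)

```python
# Characters ordered from visually dense (dark) to sparse (light).
RAMP = "@%#*+=-:. "

def to_ascii(pixels):
    """pixels: rows of brightness values 0-255; returns one string per row."""
    lines = []
    for row in pixels:
        line = "".join(RAMP[min(p * len(RAMP) // 256, len(RAMP) - 1)] for p in row)
        lines.append(line)
    return lines

# A tiny synthetic "image": a bright diagonal on a dark field.
img = [[255 if x == y else 20 for x in range(8)] for y in range(8)]
for line in to_ascii(img):
    print(line)
```

Once an image is text, it can be fed to a text-only language model like any other corpus, which is the trick that lets a language model "hallucinate" variations of the apple.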
And so, yeah, it got into a little bit of pattern matching and pattern recognition, which I think Julia can speak a little bit about in the next couple of slides. She'll show you some other examples that we've played with and how it relates back to Alan Turing and his work. Thank you. So when thinking about the visual world that we inhabit with Turing's work, we were also looking into archives. These are Turing's diagrams for the article he wrote on morphogenesis; the images are held at the King's College Archive at Cambridge University. In the article, "The Chemical Basis of Morphogenesis," published in 1952, Turing describes how patterns in nature, such as stripes and spirals, can arise naturally from a homogeneous uniform state, as for example in the fur of certain animals or in the skin of the puffer fish. Thinking about collaborating with an AI on text, as we've done a lot in devising, we also started experimenting with creating images from text, using a software called Runway ML. The text you see on the top, "Alan Turing has an apple on his desk," is a text that came from a devising session and was then translated by the software back into an image. "Alan Turing is a keen and agile code breaker fighting to be free." Or, "Alan Turing is actually a fisherman and also he's not reading short stories." Or, "Alan Turing dressed as a woman." And when playing this through, I realized that the data model was mostly trained on interiors, so you might see in some of these that there are parts of rooms and room interiors that the images are made out of, which was interesting. Yeah, I will hand back now to Vlad, and he will show us something that he has been working on, which is a flip dot display. All right, thank you. And if someone could spotlight me, I'll just change my source here and reduce things. Got to turn off the background, though it doesn't work. So this is my little studio, and what I'll do is I'll switch cameras. 
And so, yeah, like I said, I've been using TouchDesigner as a way to explore a bunch of different physical computing elements, and I've been messing with Arduinos and sensors. Going back to some of the earlier conversations I had with Matthew about what the set would look like, I've always imagined some kind of set piece that could evolve and be mechanized. So I've been researching making moving mirrors, and not just one or two mirrors; theatrically, I'm talking 30 or 40 mirrors to use with projectors or lasers, to be able to build the space as the piece is evolving. We were always discussing that as part of the physical installation of the piece. More recently, as that conversation evolved and Matthew was really investing his time in the analog synthesizers, I felt like we'd spent so much time on screen, digitally, that it would be interesting if we could start moving some of this material into a physical space. And I had seen, I can't remember where I saw it first, but I was really intrigued by this technology. They're older displays called flip dot displays, and I'll just show you right now; hopefully that's working. They're actually little magnetized displays where each pixel can be turned on or off. It makes a wonderful sound; I think you might be able to hear it if I turn up my mic a little bit. It's what they used on bus displays before LEDs were a thing. They're actually very fragile and hard to maintain, so I understand why we switched, but there is something wonderfully physical about being able to make displays that behave this way. And so part of what I've done with some of these TouchDesigner patches is generate ways of manipulating these displays live. So I could feed them video. 
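Flip dot panels are 1-bit devices: each dot is flipped magnetically on or off. Before any frame reaches the hardware, it has to be thresholded into a bitmap and packed into bytes. This stdlib-only sketch shows only that generic step; real panels (and the TouchDesigner patch Vlad describes) use vendor-specific serial protocols that are not reproduced here:

```python
# Illustrative sketch: turn a grayscale frame into flip dot on/off states,
# then pack a column of dots into a byte, as many column-oriented flip dot
# protocols expect. Threshold and bit order are assumptions for the demo.

def frame_to_bitmap(gray, threshold=128):
    """gray: 2-D list of 0-255 values -> 2-D list of 0/1 dot states."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def pack_column(bits):
    """Pack up to 8 dot states (top dot = least significant bit) into one byte."""
    byte = 0
    for i, b in enumerate(bits[:8]):
        byte |= (b & 1) << i
    return byte

frame = frame_to_bitmap([[0, 200], [255, 50]])
print(frame)                   # [[0, 1], [1, 0]]
print(pack_column([1, 0, 1]))  # 0b101 = 5
```

In a live setup, bytes like these would be written to the panel over a serial link each frame, which is what produces the clatter of dots flipping.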
I could feed them pre-sampled materials; I think I have a version of an eye here. Let's see... well, it's not working right now, but yeah, we're looking at different ways to physicalize some of the ideas we've been exploring. So that's been an interesting exploration and a continuation of this physicalizing research that we can do while we are in quarantine. Oops. Yeah, I'll pass it on to Julia, who had some things to say about moving forward with some of the visual elements as well as the website. Yeah, I think we hand over now to you and Emily. Thank you everyone for staying with us this long. We're just going to give a very brief overview of some next steps for the project, and then we're going to move straight into questions; we're really excited to hear your thoughts. As you can tell, we're still very much in process. Right now our roles in the project are fluid, but our various areas of expertise are coming to the fore as we start crafting material. We co-write together, we weigh in on creative discussions; that's where we are right now in the process. We managed to have one in-person workshop back in 2019, and we had planned last summer to have a whole series of workshops in New Haven, but obviously that all stopped because of the pandemic. So our next big step is to try to reschedule that workshop, to carve out a few days together to review everything we've generated so far, which is an extraordinary amount of material, and to start to refine and deepen the material that resonates with us the most as we map it into an overall structure for the opera. And we've been talking a lot, as we've been in process through this unwieldy amount of time, staring down the barrel of who knows when theatre will be back. 
We've been thinking a lot about exploring a staged work-in-progress showing. This is sort of a process showing we've been giving you today, but we've been thinking about creating a virtual staged work-in-progress showing so we can really start to test out some of these materials on an audience. So that's another thing circulating in our collective brains as a next step for the project. And I guess to make all that happen, we're also looking at trying to attract funding partners. We're trying to identify particular institutions that can add to our list of supporters, which includes the Digital Humanities Lab at Yale, the Yale Center for Collaborative Arts and Media and the Yale Center for British Art. We've been really grateful for their support so far. And eventually, as we said before, our dream is to get a fully staged opera made: a live experience that seamlessly integrates the various virtual elements we've been exploring so well in this internet space we've been making the piece in. And I think that's everything. So it's our turn to open it up to the floor, the virtual floor, and invite your questions, if anyone has any. I'd be delighted to hear them. Comments are welcome as well. What makes this an opera? I'm going to field that over to Matthew Suttor, our resident composer. Matthew, what makes this an opera? That's a really good question. I think this is an opera simply because of the intention behind it. There will be operatic voices. And what we didn't talk about too much is the theatrical conceits. In a Turing test, you have three figures, so it's almost a sort of Brechtian play. You have A, B and C. A and B may or may not be a computer, and C is the moderator. We'll have those figures on stage. So there are at least three singers, and maybe other voices that step in, coming from parts of Turing's life and around his work. 
Although it's never intended as a biographical piece. But going back to, say, the first opera we typically cite, Monteverdi's Orfeo, which was written at the beginning of the 1600s: he took the working parts of existing church music and other kinds of theatrical works like masques, et cetera, recitatives, arias, choruses and other vocal pieces, and put them together into a new form. So in some sense we're taking things that already exist, and through this devising process, which involves an artificial intelligence, we're coming up with... well, we're not making claims to a new form; we're really just following where this leads. But there is a kind of surprising aesthetic range here. We've already got something that's arguably inspired by Britney Spears, through to things that are much more conventionally recognisable as opera. I think the healthy thing for me as a composer is that my preconceptions of what opera is have already been shattered. This is my third opera, and they're really big experiences to write and to produce and then to receive all the criticism afterwards; opera critics have more bile per square inch than any other critical species. So the short answer is it's an opera because we say it is, but we're trying to figure that out through this collaborative process. That's a long, somewhat rambly answer. There you go. Great. The next question that came in, I'm going to read out loud before I try to field it somewhere. Maybe Hugh, you can be thinking about who this should go to while I read it out loud. Do the AI have voices? And how do you give a computer a voice? Well, that's interesting, because in the original Turing test, Turing sets it up so that the computer can only speak through a teleprinter, through written text. 
But for the purposes of our opera, we've been giving a lot of voices to GPT-2's language. I suppose Tyler might talk a bit about how we add voices to them. Yeah, sure. In crafting the first piece you heard at the beginning of our presentation, the text was all generated by GPT-2, but it was recordings of each of our voices, and I was able to stack them on top of each other and, through different effects, create a voice that is a conglomerate of us all. It's been exciting on this journey trying to figure out that exact question of what the voice of this AI is. I think at this point it is all of our voices, in a way that is not necessarily trying to trick you like a Turing test. But why would a machine be limited to just the robotic voice we associate with computers, especially in an age when technology is growing so quickly? So right now it's a lot of us together, all speaking through the text that GPT-2 is generating for us. Yeah, and I will also say there's a reason it's not an opera written by AI or something: the AI is a participant, a collaborator, and we're cherry-picking a lot of what we're reading, right? We go through a lot of versions of it, and we take what's happening with the AI generation and throw it into our devising text. So it's not some kind of holy vision always, but it is interesting to humanize the voice and to try to give it something a bit more robust than just what shows up on screen. I don't know if that helps answer the question. 
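One simple digital way to fuse two voices, in the spirit of the conglomerate voice Tyler describes (his stacking was done with recordings and studio effects, not this code), is spectral cross-synthesis: keep the magnitudes of one signal and the phases of another, frame by frame. A numpy sketch under those assumptions:

```python
import numpy as np

# Toy cross-synthesis: impose the spectral envelope (magnitudes) of a
# "voice" signal onto a "carrier" (phases). Real analog vocoders, like the
# hardware ones mentioned later, use filter banks rather than FFTs; this
# is only an illustration of the idea.

def cross_synthesize(voice, carrier, frame=256):
    n = min(len(voice), len(carrier)) // frame * frame
    out = np.zeros(n)
    for i in range(0, n, frame):
        V = np.fft.rfft(voice[i:i+frame])
        C = np.fft.rfft(carrier[i:i+frame])
        # Magnitude from the voice, phase from the carrier.
        out[i:i+frame] = np.fft.irfft(np.abs(V) * np.exp(1j * np.angle(C)))
    return out

sr = 8000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t) * (t % 0.25)  # stand-in for speech
carrier = np.sign(np.sin(2 * np.pi * 110 * t))    # buzzy synth carrier
mix = cross_synthesize(voice, carrier)
```

Writing `mix` to a WAV file and listening would reveal the carrier's timbre shaped by the voice's amplitude contour, the core trick behind vocoder-style voices.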
And I think, going along with our combinatory play of throwing logic out the door, we cherry-pick these things, but it's also really exciting to work with this thing that will sometimes give us random, crazy responses, while some things are right on; being able to use it to get ourselves out of our own logic has been really fruitful, I think, as well. Another thing to say quickly is that we are using vocoders: we have hardware vocoders to re-synthesize spoken and sung voices. Speech synthesis is something I've been interested in for a long time, so we could possibly use that, but we haven't; we've been using vocoding but not actual speech synthesis yet, because in some ways I'm trying to stay away from the digital domain and concentrate on analog gear. The other interesting thing is that GPT-2 has been used to create music. There is an OpenAI application of this language model called Jukebox, and interestingly, once we started using GPT-2 to create text and were in communication with OpenAI, they came back to us and asked what we thought about this new application. So you can create, say, an Ella Fitzgerald song in her voice, using lyrics that she never wrote or music that she never sang, that sounds like her singing. It's a fairly shocking, wonderful, disturbing thing. So there is that possibility, although we're not so interested in algorithmically produced music; for us it's all about the theatrical representation of this Turing test. That actually segues really nicely into another question that's further down in the queue, but I'm going to bring it up now because it connects, and Tyler, you touched on this a little bit: in terms of GPT-2 being a collaborator, as a group, how much control do we find ourselves giving over to the AI, from a methodological perspective? 
Tyler, you just talked a little bit about getting us out of our boxes of thinking and busting us out of parameters, but I'm going to throw this over to Dakota, and then whoever else can jump in: control and GPT-2, in your opinion, how much control does GPT-2 have in our process? Well, first of all I just want to say that I think control is an interesting, kind of jarring word to think about in our process, because it feels like most of what we've been targeting is relinquishing control in some ways. So, as a sidebar answer: over time I feel we've developed a unified voice, or a unified ecosystem of voices and ideas, in interacting with the language models and with each other, so that the idea of control is hard to pinpoint in this context. But if I change it to something more like filtering, or selecting, or curating: as the coordinator of the devising activities for much of it, I think we tend to use the output of the language model as inspiration, or as source material if you will, and then think of our creating process together as a transformation of that material, or an elaboration on it. Some of it does make it to the end, as you heard in the possibly Britney-inspired tune, as Matthew said, and in the opening. But as you'd suspect, and as these questions indicate, an overwhelming majority of it doesn't make it very far at all, because it's either extremely prejudiced or sort of nonsensical. So I think we find ourselves pretty much filtering until it's reached a level of transformation that satisfies the collective voice or idea. That's my best first attempt at that, if anybody else wants to jump in. I think what's also really exciting is that, as we've continued to grow in this 
process, in our devising sessions we've put in text from the AI alongside text that we have generated. Matthew described how the stage version of this is inevitably going to be somewhat of a live Turing test for the audience, and in devising with this machine we've found ourselves creating a Turing test for ourselves, because we're putting in text and at this point we're somewhat unsure whether it was one of us that wrote it, or the computer, or a mixture of the two. So the blurring of the lines has become really exciting in our process also. Yeah, Hugh, do you want to say anything else? Yeah, I spot another question on this theme further down the list, from Liz: what kind of dramaturgical questions do you ask when you receive GPT-2's text, and how do you prioritize which text from the AI becomes part of the final script? I think that's in the same vein. We do a lot of editing on it, of course, but what we do try to do, both in our devising games and when we're playing with GPT-2 or 3, is go through massive chunks of text and discern patterns: see what kinds of things come up again and again, or what kinds of arguments appear again and again. That's definitely one of the dramaturgical processes we apply to it. I don't know if anyone else has any other response to that. Go ahead. I was just going to say that with this piece, as Matthew mentioned before, we're not interested in doing a biopic; it's an opera about ideas, and as Hugh said, rhythm and pattern are the kinds of dramaturgical principles we'll be playing with a lot. It'll be less about a story that we're arcing out and more an accumulation of ideas that we arc out into a kind of rhythmic pattern that will hopefully result in a meaningful, rich experience for an audience member. One thing really quickly, though, that I think is worth saying: when Hugh and I went to visit the King's College 
archive and sat there and read the papers back to back, we had this marathon reading session, and we weren't allowed to speak to each other because it was a really small room and they were very concerned that we would disrupt everybody else. But we had these mind-blowing moments when we would read marginalia that Turing wrote. What's really interesting is that GPT-2 preserved the way that Turing wrote; the algorithm preserved the voice. Turing wrote, obviously, in a scientific way, but also in a slightly awkward fashion, and the algorithm preserved that. So I think one of the really exciting things is that we feel his voice, in terms of the way that he wrote and thinks, is somehow preserved in the responses we get back from the AI. I'm seeing some follow-up questions, and there are a couple that connect to the ethical side of this that I want to bring into the room. Someone has asked: what are your thoughts on the way AI reflects our prejudices and systemic oppressions, i.e. data-profiling algorithms targeting minorities, because that is how the AI has been taught? Who wants to take this? 
I can start a little bit. I feel like these models are a little bit terrifying in a way, because of what it takes to get them to work. With GPT-2, for example, we're feeding it very specific data sets, but that's actually fine-tuning: we're practically just sprinkling, at the end, the things we want to work with onto the AI. The rest of the model, GPT-2, is actually a collection of scraped text. My understanding is that the data sample they're using comes from sending out a crawler that collects tons and tons of text from the internet, and the model uses that data. So it's going to collect everything, and it's going to collect things that are not pretty, because the internet is a pretty scary place in its entirety. So in a way the AI reflects our own humanity, imperfectly. There are ways to curate that, and I think among the people coding systems to interact with artificial intelligence and language models there's a lot of effort in trying to monitor and filter and take care of it. But yeah, it is imperfect, and it does sometimes create pretty inappropriate things; that's just because the majority of the data it's trained on is swaths of the internet without much curation. Thanks Vlad, anyone else want to add? Sure, let's move on then. There's a great question here that I want to pull up: what would you want to achieve in making this an in-person experience rather than an online or screen performance? 
I'm going to take a stab and then I'm going to hand it off to someone else. This team has heard me say this before, but I may be the most analog person in this Zoom room. I often talk about that moment, either when we're devising together and something beautiful happens and the language and the text come together in this beautiful way, or when GPT-2 spits out something where we're just like, what? And the feeling that gives, at least to me, is something like a seance: we're somehow communing in some way with Alan Turing. And I think there's something very magical about bringing people together into an actual room in real time, and thinking about what the structure of a seance is, and how we can be thinking about technology and computers in a way that is very sensorial. At the end of the day we spend a lot of time stroking screens with our hands, and there's a lot of intimacy that exists there. So the in-person element of theatre, which we all love and spend a lot of our time and energy making, is the other part of this. I think there's a really interesting juxtaposition between the perceived distance of technology and the intimacy of technology, and for us, that moment when we know we've made something that really fits with what we're doing brings that hairs-on-the-back-of-the-neck, Alan-Turing-is-in-the-room-with-us feeling. So that's my attempt to answer that question; other folks, please chime in. I mean, I'll also say, as a sound designer for theatre, what really gets me excited is having control of the experience for the audience. Every room you go into is kind of a different character, and when we're talking about sound in the theatre, we have the opportunity to really use it in a physical domain too: we can create visceral responses and really shape the physical feeling in the room. So as a sound designer I always prefer a room where I can choose 
where the sound is coming from, how much the sub is shaking your ass, so much that you can feel it in your bones. That's something you can't get if people are listening on laptop speakers or earbuds; in the theatre we really get that control and that full sense. I'll also add that I think it's interesting, because our intention from the beginning was originally to make a live touring performance, and just situationally we find ourselves exploring these other mediums, which has been kind of interesting. Now we've developed this website, which I've sunk a lot of time into, that in a normal theatrical process we probably wouldn't have done: figuring out how to make this work on screen and online, and playing with these devising techniques so that we can work at a distance, all together. These are all things we probably would never have tried, so it's opened up a world of opportunities. And I think in a way this opera has worked, and will continue to work, in a variety of mediums. There's something about the way we're exploring different elements and ultimately how we piece together this final performance: do we end up using streaming elements, do we use some of the web interface things to exchange data with the audience? I think a lot of things will come out of this period that will affect how we piece together the final form, yeah. A technical question: how do you gain access to the AI, and where do you input your text and questions? I think, Dakota, you might be able to show us on a screen. Yeah, I'll just share my screen real quick and show you what we're working with, and it'll be really fast. Let me make sure I have the right window; you should see a browser window, is that right? 
Yeah, okay. So, along with a lot of help from the Digital Humanities Lab and with the programming intern at the CCAM, we built this really simple interface to interact with the language model. We've got some stuff behind the scenes where we trained the model on some of Turing's writing and some of the material you've heard; I won't show you the code and the low-level stuff, but there's a bunch of Python code in the background running the server. The hard part about hosting this is that it requires a lot of GPU power in particular, so we have a server generously lent to us by the Digital Humanities Lab. Our team has access to it by VPNing in; if you try that link it won't work for you, unfortunately. But we do have access to this model, and we can type in text and adjust a variety of GPT-2's parameters from it. We have considered, many times, trying to open this to more people beyond our team, and the short answer is that it really comes back to that ethical question and that prejudice question. It's really easy to implement filters on words and phrases that are no-nos; we've already done that, we've said certain words and phrases are just not allowed, and we'll never get those. It's much, much harder to filter ideas, especially abstract ideas that are deeply embedded in the patterns of the internet. So until we really figure that out, and I don't want to understate it, that's a massive problem to work out, I'm not sure we'll be opening that interface to the general public for a while. Yeah. I think also, though, for people who are interested: at least GPT-2 is accessible, not GPT-3, because it's so much larger. It takes a bit of coding experience with Python, but I have it running personally to do my image work. I did have to run it on a separate setup; I couldn't 
get it working on Windows, so it works a little bit better on Linux, personally; I know it works all right on Mac, but I had a lot of issues on Windows. It's something you could download and train yourself. Yeah, that's a very good point: open source is the key there; the core models of GPT-2 are open source. Just to quickly go back to the point Liz was making in her question about what we can do when we get a live audience: we are thinking of having some kind of online installation that would run in parallel with the live performance. Let's just say that the music is going to be set in a conventional way, but we're also trying to figure out ways we can build in, for example, receiving text live from an audience member that could be fed back and sung back relatively in real time, so each performance might actually have some differences. Having that kind of flexibility is something we're looking at, and again it comes back to the idea of the audience discovering that they are complicit in a Turing test. I'm going to read a comment directed to Vlad; I'm sure you can see it, it's about the flip dots: "I'd just like to share my thoughts on creative coding explored through the flip dot display by Vlad. I believe the attention paid to the design of each image on the dot display allows us to read and hear the sound of each particular dot, which is an interpretation of the algorithms encoding the AI, in a variety of positionings in space. I think it will enhance the soundscape of the overall experience when it is in person." So someone here is really responding to the sensory experience of the flip dots as a kind of translation of what algorithmic action might look or feel like. Hugh, are there any other questions you feel we've missed in this thread that you really want to bring into the room? And if folks have final thoughts or questions, we've got about five minutes. There's one here from Christine. 
She says: I was wondering, in terms of the AI and the very tech side of things, did you all approach it as creatives working in theatre and opera, so as people searching for a story? Or is anyone exploring it purely as a data scientist? Is there a line for you between the form as a standalone, which I think is really fascinating, and using it as a device to make an opera as the bottom line? Yeah, so something that's interesting to the group at large: there are some other colleagues here at Yale who are fascinated by visual representations of data, so we've done some experiments there, in conjunction with neuroscience and genetics, and lately looking at collective behavior of animals, so flocking and schooling. There are some allied projects on the periphery of this that we are slowly filtering into it, so it has generated quite a community of endeavor, which is really exciting, and somewhat overwhelming quite frankly. But yeah, that's a great comment. 
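The flocking and schooling behavior Matthew mentions is usually visualized starting from Reynolds-style "boids" rules: each agent steers by cohesion, alignment and separation with its neighbors. This is a generic illustrative sketch, not code from the Yale projects he refers to:

```python
import numpy as np

# One update step of a minimal boids model: each agent looks at neighbors
# within radius r and adjusts its velocity by three local rules.
# Rule weights are arbitrary demo values.

def boids_step(pos, vel, dt=0.1, r=1.0):
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d < r) & (d > 0)          # neighbors, excluding self
        if nbr.any():
            new_vel[i] += 0.05 * (pos[nbr].mean(0) - pos[i])  # cohesion
            new_vel[i] += 0.05 * (vel[nbr].mean(0) - vel[i])  # alignment
            new_vel[i] -= 0.10 * (pos[nbr] - pos[i]).sum(0)   # separation
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(1)
pos = rng.random((20, 2))
vel = rng.random((20, 2)) - 0.5
pos, vel = boids_step(pos, vel)
```

Iterating this step and plotting positions each frame produces the swirling group motion that makes flocking data so visually compelling.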
This is also a great follow-up to that, actually, Matthew, which was also a question in the thread: Alan Turing is such a natural fit for this process, but is this process seeding other topics for folks? You just talked about flocking and the visual representation of data; for everyone in this creative group, are there other ideas coming to the surface for you, for other projects, born out of the experience we've been having together? That's a big question; I don't even know where to start. I feel like, for me, it's becoming less and less about the concrete Alan Turing, the person that existed, and more about the legacy and the perception and idea of his work and what he left behind, which I think is a super fascinating shift to encounter. And I think what's also exciting about Alan Turing is that, in working with AI, he was so ahead of his time, in a philosophical way, thinking about what intelligence is. He was always taking mathematics and seeing how it fit with the natural world. So what's been super exciting in this exploration is seeing all these patterns in what the AI is generating; it has further deepened my understanding of what my own intelligence is and what makes us human. I feel like that's often the blowback against AI: well, it's not intelligence or consciousness, because it doesn't have these things that I have as a human. But I've found, looking at how much understanding goes on in the world, that our human intelligence is starting to look like a subset of data and patterns too. So I think it's been a really interesting philosophical thing to think about, as well as technical. 
I think, just piggybacking on that, Tyler: that to me is part of why it is so essential that this is a live experience, to be deeply human and physical and organic about this technological idea. To separate us from each other and from other human bodies in space would be missing a crucial element of what's at the heart of the piece. That's a perfect note to end on, and Matthew, I'm just going to hand it over to you if there are any acknowledgments or appreciations you want to offer before we hand it over to Andrew to close out. First, I would like to thank Andrew and Emily for hosting us and walking us through this process, and a big thanks to you and everybody at Level Up. It's been a wonderful opportunity; this is our first international engagement, so we're thrilled about that. We'd also like to thank those at Yale who are supporting us: we've already mentioned the Digital Humanities Lab, which is part of the Yale University Library, the Yale Center for British Art, and in particular the Yale Center for Collaborative Arts and Media, directed by Dana Karwas, who have been wonderful in supporting us. And I personally would like to thank all of the team; it's a remarkable thing that you can get this many people together twice a week to work on this project. I'd also like to thank, of course, the archive and the archivists at King's College, Cambridge, who have been wonderful. So thank you all. 
Thank you everyone so much for being here; this has been absolutely fantastic and really quite interesting, and I'm so excited to see where you go with this. I've signed up on the website and followed your Instagram, and I really recommend everyone watching take the time to head to the website, sign up, and keep tabs on where this project goes. So thank you all for sharing today; I'm really, really glad, and thank you, Vlad, for bringing this to my attention in the first place. I'm looking forward to your chat later on in the symposium. For everyone else, again, I want to say: if you have the chance, please donate. If you've enjoyed this particular event or any of our other events, you can donate on the Level Up website, on the ADC's website, or on CanadaHelps.org; please take a moment if you can to do that. Tomorrow we have our next event at 2pm Eastern, 11am Pacific: Smile, with Brittany Bland, where she will be exploring how digital practices can help individuals find the strength in their own voice, live their truth, and project that into the world. That's tomorrow afternoon. Thank you everyone for coming today, and have a great day; we'll see you all around the symposium.