One of the questions that has been preoccupying me in recent years, both as a theorist and scholar and as a practitioner, is the question of machine agency. Of course, I've been obsessed with exploring interactions between technology and live performance for a very long time, but recently I've become particularly interested in the concept of the robot as an autonomous agent. The robot who becomes as human as its maker is a trope that recurs throughout the history of science fiction, starting, as we all know all too well, with the robots in Karel Čapek's R.U.R., which famously introduced the term "robot" to the world in 1920. In recent years, and with accelerating pace over the past decade, a growing number of theatrical productions and live performance events have incorporated real robots. Such performances invite us to ask at what point a human-crafted mechanical prop acquires its own agency, Pygmalion-style, and becomes a performer in its own right. So today I'm going to use this question to structure a brief overview of the ways robots are deployed in theatrical performance. I'll be drawing on the recent history of robotic performance (and in some cases not so recent, going back to the 70s, but mostly more recent), and also on my own experiments and performances with robotics in the classroom using my dear friend Zebzab, whom some of you have had the opportunity to meet in my office; if you haven't, stop by and say hi. Zebzab is a Darwin-OP robot, and you'll be seeing and hearing from him or her, depending on what character he or she is playing, in a moment. The Darwin-OP is an open-source robot that was developed by a consortium of universities and is sold commercially by the Korean company Robotis, but the designs and the code are all available online if you wanted to build your very own Zebzab.
So I acquired Zebzab about five years ago from a colleague in the School of Engineering who has since retired, and I have a lot of fun with him. In order to pursue this investigation, I worked with a graduate student, a music composition student, who developed a bridge between the software built into Zebzab (Zebzab is essentially a computer running Linux) and the programming environment that I use, Max/MSP/Jitter. Then I created in Max a tool to allow my students, who tend not to be computer students but acting students and playwrights and performers, to animate the robot without needing to get into programming, and also to use Zebzab as just another output device for Max, fully integrated with any audio, sound, or video. So I will be bringing out Zebzab when appropriate to illustrate some of the concepts I've been looking at. All right, so my talk is going to focus on five different modes of robotic control, which form a continuum from human-centered agency to robot-centered agency, and we'll be progressing through these. From a theatrical and artistic standpoint there's absolutely no hierarchy implied, but from a theoretical standpoint, the quest toward the Pygmalion-like autonomous robot is exemplified through this progression. So the first of these concepts is puppeted robotic performance, which is by far the most common way that robots are controlled in performance right now. When a live operator controls a robot's actions in real time within a theatrical context, the robot enters into precisely the same relationship with its operator that a puppet does with its puppeteer. But unlike conventional puppets, the magic of technology makes it possible for the robotic puppet to be untethered physically from the human puppeting it.
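Since I've just described that bridge in words, here is a minimal sketch of the idea in Python rather than Max: the robot listens for joint-angle messages on a UDP socket, and animating it reduces to sending it small numeric messages. The address, port, and JSON message format are all hypothetical, not the actual Darwin-OP protocol.

```python
import json
import socket

# Hypothetical bridge: the controlling machine sends joint targets to the
# robot as JSON over UDP. Address, port, and format are illustrative only.
ROBOT_ADDR = ("127.0.0.1", 9000)

def make_packet(joint_angles):
    """Encode a dict of {joint_name: degrees} as a UDP payload."""
    return json.dumps({"type": "pose", "joints": joint_angles}).encode()

def send_pose(sock, joint_angles, addr=ROBOT_ADDR):
    """Fire-and-forget: the robot snaps to whatever numbers arrive."""
    sock.sendto(make_packet(joint_angles), addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_pose(sock, {"head_pan": 20.0, "r_shoulder": -45.0})
    sock.close()
```

In Max itself the equivalent is simply a udpsend object pointed at the robot; the point is only that every mode of control I'll discuss today ultimately produces streams of messages like these.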
In addition, designers have extraordinary freedom to design interfaces for manipulating these robotic puppets. Somebody can control a puppet with a joystick or a tablet interface, or mimic conventional puppet controls such as those of handheld puppets or marionettes. The options are limited only by the creator's imagination. As I said, the vast majority of robots used on stage so far have been puppeted. For example, in 2009 there was a production of A Midsummer Night's Dream at Texas A&M University with seven remote-controlled drones (one pizza-sized air robot and six fist-sized E-flite mini-helicopters) cast as the fairies. One of the most critically successful examples of robotic theater, which I'm sure many of you are familiar with at least indirectly, because it got a fair amount of press at the time, was Elizabeth Meriwether's Heddatron, which was written specifically for puppeted robots and human actors. The play was first produced a little over ten years ago, in 2006, by the Les Freres Corbusier theater company in New York, using robots created by Meredith Chang and Cindy Jeffers of the art-robot collective Botmatrix; we see that image on the left. The show was restaged in 2011 by the Sideshow Theatre Company in Chicago, using robots designed by David Hyman, Lizzie Stossel, and Bruce Phillips and constructed by members of the Chicago Area Robotics Group; there it is on the right. The Chicago and New York productions each assigned a separate operator to control each of the five robots, so there was a human operator for every robot. And in both cases the designs, as you can see, exaggerate the robots' mechanical nature for comic effect, with the Les Freres Corbusier production embracing a low-budget, maker-space aesthetic and the Sideshow production a slightly slicker steampunk aesthetic.
During one of the curtain calls, the Sideshow performers exploited and highlighted the real-time control of the robots: the cast invited the lead actress's boyfriend on stage so that the robots could wish him a happy birthday, and the boyfriend turned the tables on her by having the robots assist him in making a surprise marriage proposal. This is, or at least was, available on YouTube for everybody to see. One of the most complex robotic puppets to date is the 20-foot-tall (whoops, there it is; I'll show you a little bit of this in a moment, as you can see). As I was saying, probably the largest and most expensive robotic puppet to date is the 20-foot-tall gorilla created in 2013 by the animatronic company Creature Technology for a musical adaptation of King Kong at the Regent Theatre in Melbourne. Actually, I misspoke: it wasn't especially successful commercially. The company's previous Walking with Dinosaurs was a phenomenal commercial success, but King Kong was definitely the largest and most expensive of its productions. Elizabeth Jochum has written at length about this company and its use of robots. Sonny Tilders, Creature Technology's creative director, led the design and construction process, and Pete Wilson, the production's puppetry director, coordinated a team of 14 puppeteers. Kong is a hybrid of a puppeted robot and a gigantic, manually operated marionette. His facial expressions are animatronic: two off-stage puppeteers use real-time "voodoo" controls, which the production's director Daniel Kramer describes as being like "the ultimate Nintendo Wii pad," to drive servo motors in Kong's eyes, eyebrows, eyelids, nose, lips, jaw, neck, and shoulders.
And a third off-stage puppeteer remotely navigates Kong along a four-ton track-and-trolley system to move him across the stage. Finally, a team of eleven circus-trained aerialists dressed in black, dubbed the King's Men, pull and swing on a series of counterbalanced cables to manipulate Kong's limbs in full view of the audience. As Kramer puts it, the King's Men are wonderful reminders that much of the magic of puppetry is seeing the puppeteers: the double vision of the creature's life and the human beings creating that illusion. Now, Creature Technology was compelled to supplement the animatronic elements of its King Kong with manual, albeit super-scaled, marionette controls to overcome some of the limitations of current robotic technology. The range of motion produced by the servo motors that drive most robots is much more restricted than that of the simplest handheld puppet, rod puppet, or marionette. Moreover, the precise control with which servo motors snap a robot's joints into position produces in robots the same inertia of matter that von Kleist, in "On the Marionette Theatre," described as afflicting human performance, and that Kleist contrasts with the natural grace of marionettes swinging freely with the force of gravity, uninhibited by consciousness and affectation, as we're all familiar with from that essay. This reminds me, actually, of when Basil Twist, the brilliant puppeteer, was doing a residency here and talked about why he disliked hand puppets and why he much preferred working with just a piece of cloth, or with marionettes; it's a very Kleistian kind of argument. And just so everybody's clear on the way servos work: you send them a digital number and they snap into exactly that degree of rotation, so you have this incredible precision which is completely free from the forces of gravity and nature.
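To make that servo point concrete, here is a sketch of the degrees-to-integer mapping. Dynamixel-style servos like those in the Darwin-OP accept a raw integer position; the 12-bit resolution here is illustrative rather than the exact specification of Zebzab's motors.

```python
def degrees_to_ticks(angle_deg, resolution=4096, sweep_deg=360.0):
    """Map a joint angle in degrees to an integer servo position.

    The servo receives only this number and snaps precisely to the
    corresponding rotation; there is no physics in between.
    """
    ticks = round((angle_deg % sweep_deg) / sweep_deg * resolution)
    return min(ticks, resolution - 1)
```

So "move the head to 90 degrees" becomes a single integer (here, 1024), which is exactly why the motion is so precise, and so free of the gravity-driven swing Kleist admired in marionettes.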
Examples such as Creature Technology's King Kong, where multiple operators simultaneously control different parts of a single puppet or robot, function, as I'm sure has occurred to many of you, much like Bunraku puppetry: the multiple operators work collectively as performers to create a single performance. We should recall, moreover, that this kind of collaborative performance isn't unique to robotics, or to puppetry more generally. For example, we all recall the Open Theater's 1968 production of The Serpent, where all these different actors collectively create the serpent, and many other Open Theater-style exercises did the same thing. All right, so this is robotic puppetry. I'll show you just a couple of images of Creature Technology's work. [video plays] "You hear 'puppet maker' and you think little puppets, and then you say, well, actually, I've got this little shed that's 3,000 square meters, with 40 full-time staff, and we make our puppets the size of your house. And then it sort of starts to put it in perspective. This is our little shed in West Melbourne... the world headquarters of Global Creatures... This is the neck of the puppet... I've got the body on my right hand here, which does the up and down and the twist... It was the 80s when Sonny got his start..." Right, so Walking with Dinosaurs was a gargantuan commercial success, and that is 100% puppeted; you can see it's a totally fun toy. And I'll show a little bit of King Kong: "...the shoulders and his wrists, to give him a hyper-realistic illusion of being real..." All right. Okay. So the second mode I'd like to talk about I call reanimated performance. While manual puppeteers manipulate their robots in real time, robot puppeteers have the option of recording the messages they send to the robot and replaying them later.
The digital instructions that the robot receives and executes will be identical regardless of whether it receives them in real time or after a delay, and regardless of whether it's receiving them for the first or the hundredth time. In other words, if I'm the robot and I get a number that says move this joint to position 20, I don't know or care whether that number was generated by a person in real time or was read from a file and played back. It's exactly identical to me. So the situation is similar to that of a video or audio recording of a live performance, but with a crucial ontological difference. When we watch a performance on video, the performer's body is absent; only an image is present to us. The video image is a radically different kind of thing than a human body. [Audience exchange: "Possibly." "You don't believe the philosophy?" "No, even he acknowledges the ontology; the phenomenology is what you question."] By contrast, every time we send a previously recorded instruction to a robot, a real robot performs the action. So not just from the robot's perspective but from our perspective, we are seeing a robot move in exactly the same way. We're not seeing an image of a robot; we're seeing the robot. It's exactly the same thing. So there's no ontological distinction between a robot that responds to real-time messages and one that responds to stored messages. Moreover, when a robot executes a robot puppeteer's recorded commands, and this is for me the most significant thing, it retains its indexical link to that puppeteer. When you see a puppeted robot, or any puppet, being controlled, part of what gives it that vitality is that it's an extension of a human being. It's a performance; it's a live performance. And if it's deferred, it's still a live performance; it's exactly the same performance. We see that indexical link to the puppeteer who was sending those messages, who was creating that control, just after a period of time. Just the delay.
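That indifference on the robot's side is easy to see in code. Here is a sketch, with a hypothetical command format: the robot applies a command through exactly the same function whether the command arrives live from a puppeteer or is read back from a recorded list.

```python
import time

def apply_command(robot_state, cmd):
    """The robot's side: set a joint to a target angle. The command dict
    is identical whether generated live or read back from a file."""
    robot_state[cmd["joint"]] = cmd["angle"]
    return robot_state

def replay(robot_state, recorded, speed=1.0):
    """Play back a list of (delay_seconds, cmd) pairs through the very
    same apply_command used for live puppeteering."""
    for delay, cmd in recorded:
        time.sleep(delay / speed)
        apply_command(robot_state, cmd)
    return robot_state
```

There is literally no code path by which the robot could distinguish the two cases; the only difference is where the commands come from, and when.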
So we don't really see a reproduction or representation of the puppeteer's performance, but a reanimation of that performance; the robot puppeteer is performing across time. The situation is a lot like a player piano, where we've got Gershwin playing on a piano roll and his performance is reanimated every time you hear that piano roll again. One method for creating time-delayed robotic performance is to record a robot's motions while an operator manually moves the robot's parts, rather than driving the robot by remote control. And this is actually the way Darwin Animator, the software that I developed, works. The way this kind of robot is typically programmed, and the way the Darwin works out of the box, is by entering a series of numbers that represent each of the robot's joints and then going, like keyframe animation, from frame to frame: this pose, then that, then that. The robot interpolates the motions between those positions, and the programmer creates a motion out of that. That ability does exist in Darwin Animator: you can create poses and go from pose to pose, and there are a few cases where that's useful, but very few. You also have real-time control, so you can move Zebzab's head in real time or have him walk around, and we certainly do that fairly often in performance. But the central way of animating is actually to turn off the servos so that they're completely limp and a person can move them. Then you're using the forces of gravity and the completely variable, erratic, quirky qualities of human motion, with every part of your body moving at a different speed in a different way, and recording that. So it is puppeted. The cool thing, though, is that you're puppeting it by actually moving it around, and then you disappear and it plays the motion back. So it's doing something that a puppet couldn't do.
It's not simply recreating a manual puppet with unnecessary technology. In particular, the way it works is that you can select which of the servos you want to control: there's an image of the robot, you click on any of the joints, that joint goes limp, you move that one, and you can layer them. So typically the way we animate Zebzab is the arms first; often I start with a voice, which can be put into the recording, and then you add all the different elements. You can erase the arms and redo the arms without redoing the legs, and build it up that way. But in the video I'm going to show you, just for the sake of clarity, and this is something we virtually never do for performance, I have two people, me and one of my students, animating Zebzab all at once. So you're going to see us recording the motion, and then you'll see it play back. [video plays] There we are. Programming this kind of motion would take forever; recording it is real time. It would take forever, and you probably wouldn't get the quality of expressivity that you can get this way. All right, so that's recorded, and then it plays back. [Zebzab:] "Hi guys, good to see you. I mean, that's what I would say if I could see. I wish I could be there as part of your institute, but I wasn't accepted. Anyway, have fun. Not bad for a university professor, right?" All right. So (Sergio already heard about this at the time it happened) the first full production that we created with Zebzab. I may talk briefly about this in terms of pedagogy tomorrow too, because it's my favorite mode of pedagogy: actually working together on a real project. So we've got this robot; I created this tool, a little bit like Isadora but on a much smaller scale, and I develop the tool with the students to see what they need, so it keeps evolving throughout the class.
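The layering workflow I described (erase and redo the arms without redoing the legs) can be sketched like this, assuming a track is just a list of pose dictionaries sampled at a fixed frame rate. This is an illustration of the idea, not Darwin Animator's actual data format.

```python
def overdub(base_track, new_track, joints):
    """Re-record selected joints over an existing take: for each frame,
    replace only the named joints of the base pose with values from the
    new take, leaving every other joint's motion untouched."""
    merged = []
    for base_pose, new_pose in zip(base_track, new_track):
        pose = dict(base_pose)          # keep the old take by default
        for j in joints:
            if j in new_pose:
                pose[j] = new_pose[j]   # overwrite only selected joints
        merged.append(pose)
    return merged
```

This is exactly the multitrack-audio idea transplanted to motion: the legs from take one, the arms from take three, the voice from yet another pass, all reanimated together.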
And we set a goal, which is to create a performance for the public at the end of the semester. We have no computer people in this class at all; nobody has ever worked with a robot. We've got actors, playwrights, a costume designer, a set designer, and we all work together. One of my colleagues who specializes in commedia dell'arte came and helped us with some of the lazzi. So we created this performance from scratch. We went with commedia precisely because what interested me in playing around with Zebzab was the creation of character, which is something roboticists tend not to think about; the robot stays this generic thing. So we were teaching Zebzab to embody many really different types of characters. And of course commedia is a very physical form with its strong gestures, which are impossible for the robot to really replicate, but it's a good challenge. And the masks were fun. Commedia also has these really short scenes, which practically speaking was terrific, because Zebzab gets messages in real time through Darwin Animator over the network: he can either be plugged in through an Ethernet cable, or he can be wireless and run off a battery. But the battery only lasts about ten minutes at most. So these short scenes worked really well: Zebzab could come on, do a little shtick, then go off stage; people would rapidly change the battery while something else happened; it worked out great; and then he'd come back on as a completely different character with a different mask. So I'm going to show you a very short excerpt of Commedia Robotica. Actually, it's effectively the whole thing: it was a half-hour performance, it wasn't very long, and this is a four-minute Reader's Digest version of it. It starts out in the Cellar Theater downstairs, which you saw very briefly when I gave you that tour at the very beginning. And you've got the programmer, played by one of our students.
He actually did most of the animations and provided Zebzab's voices. He's really sitting there with Darwin Animator, for the most part just hitting cue, cue, cue, but at moments also puppeting him. So he plays the programmer, and he is the programmer. The premise is that he's a programmer who has created this commedia robot and put out an ad for a live person to perform with it. So this actress comes in, and of course, being a professional commedia actress, she can come in and improvise right away as soon as she knows the scenario; she's not worried that she hasn't rehearsed. The audience is there, she comes in for the performance, and she finds this robot: he'd completely forgotten to mention that her partner was going to be a robot. Initially she's resistant to the idea, but she warms up to it, until by the end of the half hour she's fallen in love with Zebzab; fortunately the feelings are mutual, and they work together. All right, so we'll see a little bit of this. [The clip plays: the programmer introduces Zebzab, "the first robotic commedia performer"; the actress protests that she's a professional who works with human actors and won't do a puppet show, then is gradually won over as Zebzab pleads with her, does his lazzi, recounts his heroic exploits against legions of terminators, declares his love in overheated commedia style, and is abruptly "reset" as scrap metal by the embarrassed programmer.] For most of the show, the programmer, whenever he was on stage, was just sitting in front of the computer, but at that moment he's not, so we had to find a way to cue the robot, because Darwin Animator has the cues in it. So we programmed the foot: that's why she was doing that thing with Zebzab's foot, to go on to the next cue. All right, so that's reanimated performance. The third mode of performance is what I call algorithmic performance, or animatronic performance. In this case you don't have that indexical link, either in real time or through delayed performance. As I said, this is a very common way of programming robots: usually it's either real-time puppet control or this sequencing of events. One way to do it is cue to cue; another is a purely algorithmic style. So here's an example. This is actually Max/MSP, mostly bypassing Darwin Animator; it's sending the commands directly to the robot, skipping the whole puppetry interface, and running a straight algorithm. What it's doing is simply counting from zero to one hundred and back again, and as it does that, it's panning the head and the shoulder and these other joints by a certain number of degrees. So it's a purely algorithmic program. I programmed it not knowing exactly what it would look like.
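The counting patch just described is simple enough to write out in a few lines. Here it is sketched in Python rather than Max, with the joint names and degree ranges made up for illustration: a counter runs from zero to one hundred and back, and each joint pans by its own scaled amount.

```python
def triangle(step, lo=0, hi=100):
    """Count from lo up to hi and back down again, forever."""
    span = hi - lo
    phase = step % (2 * span)
    return lo + (phase if phase <= span else 2 * span - phase)

def pose_at(step, head_range=45.0, shoulder_range=30.0):
    """Map the counter onto joint angles, scaled per joint.

    Joint names and ranges are hypothetical, not Zebzab's real limits.
    """
    t = triangle(step) / 100.0          # 0.0 .. 1.0 .. 0.0 ..
    return {"head_pan": t * head_range,
            "shoulder": t * shoulder_range}
```

Feeding successive steps to the robot produces a smooth, endless sweep with no indexical link to any human gesture: motion that was never performed by anyone.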
Then you hit the button, you play with it, and you get something that you sort of like. And this is what it looks like. [video plays] A totally different style of motion, and not better, not worse; it's very beautiful. And you could use this to animate live performance: you could pair it with a live performer, and it's the robot's natural vocabulary, a pure computer-generated performance. Okay. So when you have a purely algorithmic performance, or an animatronic performance that has no cues and just runs in sequence, you create (and this is one of the problems with reanimated performance as well) a huge difference from real-time puppetry. I've been emphasizing the similarities between real-time puppetry and reanimated performance, but there is a very big difference: not in the robot itself, which doesn't know the difference, but in the robot's ability to interact with the live performer. The live performer is in a different time than the puppeteer. And that's going to be an issue with the sort of animatronic, algorithmic performances I'm talking about as well, which have no sensory input: you simply turn them on and they do their thing. The play Sayonara, which Peter talked a little bit about at the beginning of the week, produced in 2010 by the Seinendan Theater Company and the Osaka University Robot Theater Project, features the Geminoid F, the hyper-realistic robot created, as you heard, by Hiroshi Ishiguro, one of the world's leading robotics scientists, whose obsession is creating these uncannily lifelike creatures, though he denies the existence of the uncanny valley effect altogether. When the play was initially performed in Japan, an off-stage human operator controlled the Geminoid F's movements in real time, speaking the robot's dialogue into a microphone while cameras tracked her facial expressions and head movements. The body, actually, doesn't move at all in the performance.
And this is the way Ishiguro animated most of his robots at that time. When the production toured the United States three years later, which is when I saw it, the robot's motions were entirely canned: the robot's performance was encoded with no internal cueing or interactivity during the 30-minute performance. The whole thing was turned on at the beginning and played to the end. I'm going to show you a little bit of that performance. As I said, the hands don't move at all. [video plays] In a talk-back discussion following the performance I saw in Philadelphia, the production's director, Oriza Hirata, explained that "the robot is totally controlled by myself. Just like the human performers. I'm not interested in improvisation, either for robots or humans." And I asked Bryerly Long, the live actor who performed alongside the Geminoid, what that was like, because I've actually written about inveighing against canned media performances that force the actor to conform to a relentless media onslaught. So I asked: was this a challenge? She explained that because Hirata always dictates actors' timing precisely, her process working with a robot wasn't significantly different than it had been when she worked with live actors on previous productions he directed. Animatronic theme parks typically use extended pre-recorded robotic sequences, but in theatrical productions with robots, the more common approach is to break the robot's motions into short segments that an operator runs as a series of cues during the performance. That's the technique we used, for example, in Commedia Robotica: some of the motion was puppeted, mainly the robot's head, so that the robot could track the live actor in real time, and the walking was all in real time too, in part so that he wouldn't walk off the stage.
But the more complex motions were all pre-recorded, so it was a combination: the cues would go from real-time robotic control to puppet control to reanimated sequences. You can also combine them: the robot can walk while you control the head manually. All right. So, a recent example: let me show you a performance that uses algorithmic control as opposed to deferred, reanimated performance. This is a production called The Uncanny Valley. There are, confusingly, two recent plays called Uncanny Valley. One I actually directed; it's a fantastic play, but in it a live person plays a robot. The other uses a real robot, and this one was conceived and directed by Francesca Talenti in 2014. The production featured RoboThespian, a life-size robot manufactured by Engineered Arts Limited specifically for human interaction in public environments like trade shows or museum exhibits. The RoboThespian is programmed in the way I was talking about, with pose-to-pose actions. In this play, Dummy, portrayed by RoboThespian, convinces Edwin, portrayed by the human actor Alfonso Nicholson, to sign over his memories and emotions to the robot, and as the play progresses, the robot increasingly assimilates Nicholson's identity. RoboThespian's proprietary software employs standard algorithmic programming methods: the programmer defines a series of poses and links them together using a programming language, in this case Python. The production, however, complicated the issues of robotic agency in a way that dovetailed effectively with the issues raised within the play itself. The robot's physical motions were created, but not performed, by the production's programmers. Dummy's face, however, was a video image of Nicholson projected onto the translucent white surface of RoboThespian's head, and the voice was also recorded by Nicholson.
So this is pre-recorded, but the actual live actor's face was on the robot. I'm going to show you a product demo of RoboThespian so you can get a sense of the quality of it. [video plays: "It sings... It acts: 'Whether 'tis nobler in the mind to suffer the slings and arrows of outrageous fortune...' And it even impersonates other robots..."] So that's RoboThespian. I don't know if you can tell by looking at it, but there's a difference in motion between, say, Darwin and this: it's pretty well animated, but it has a smoothness. There's a sort of roughness and awkwardness, and I think a sort of vitality, that comes from the reanimation process that you don't get from the pose-to-pose process. Rachel Carey, in a review of the production, observes that on paper it would be easy to dismiss RoboThespian as a mere puppet; after all, a great deal of his facial and vocal nuances are provided by Nicholson. But RoboThespian has his own credit in the program for a reason: he's a deeply engaging, unique performance entity. So I'd argue that robots programmed algorithmically are in essence not puppets, and not really, strictly speaking, robots, but a form of automaton, and from a performance standpoint are much like the automata that long predate the computer age. Think of the famous trio of still-functional automata by Pierre Jaquet-Droz, the Writer, the Draughtsman, and the Musician, whose ability to write a variety of texts, draw a variety of images, and play a variety of tunes, respectively, anticipated the invention of the programmable computer. This type of automaton has been documented extensively at least back to the 16th century, and there's evidence of such devices going all the way back to the classical era.
In the same way that programmers script robots' movements without performing those movements themselves, the inventors who created these pre-electrical automata devised elaborate networks of gears to animate mechanical devices without themselves ever performing those actions. Kenneth Gross, in his 2011 book Puppet: An Essay on Uncanny Life, correctly insisted that such automata shouldn't be confused with puppets: "It is indeed the absence of actual bodily life that makes the purely mechanical movements of a windup doll or automaton, however lifelike, feel so different from the life of puppets. The lack of the living, moving hand in the automaton, the machine's inability to respond in real time, to improvise humanly, or even humanly fall into inaction: these things place it in a different kind of theater." I leave it to you to think about whether you agree with that or not. The difference that Gross intuits here between puppets and automata is, I believe, real and significant, but I'd argue that Gross is conflating two distinct features of automata. First, their motions are defined algorithmically rather than being indexically linked to a real-time human performance in either the past or the present: you don't have that indexical link. Second, their motions are predetermined and invariable: you don't have the interactivity. And those are two very different things; you can have either one without the other. As we've seen, when a mechanical performer plays back a human puppeteer's pre-recorded gestures, what Gross describes as the puppeteer's "living, moving hand" is still evident in the robot's motions. And conversely, it's entirely possible to create robots that respond autonomously to the world around them in real time, producing algorithmic motions with no indexical link to any human performance. And, amazingly enough, that's what we're going to look at next. I call this mode reactive performance.
So I have here Zebzab reacting to sound, just the way we were playing around with it with the visitors. It's just taking the sound input; Zebzab has a microphone built into it. He also has a speaker, so he makes his own sounds. And this was a simple program. It's actually using the Darwin animator, but it's the one time that I use the pose mode, so you can go to a pose proportionally. So it's got a pose for when it hears noise and a pose for when it doesn't, and it moves between them depending on how loud the volume is. In Max, you literally program with objects that you connect together, and this is literally a two-object program: it's taking the sound input, scaling it, and sending it right into the pose. That's all there is to it. But it creates a very interactive, or reactive, dynamic kind of experience that is gonna be different each and every time somebody interacts with it. So here it is. Boop, boop, boop. And I actually built in a little bit of a delay. So, artists have been creating performative works with autonomous robots for decades. Typically, however, this work is developed not for theater venues but for gallery or museum settings, where robots interact with the audience and where you can actually experience that kind of interactivity by playing with it yourself. There are a couple of reasons for this. First, it's relatively simple to simulate autonomous robotic behavior in a rehearsed performance with actors, and conversely, it's much easier for spectators to assess whether a robot's responses are truly reactive when they interact with it themselves. And second, while robotic puppetry is, given the current state of technology, the simplest and most effective way to give theatrical robots the ability to interact with live actors, real-time puppetry is less practical in an installation context: you can't really have puppeted performance in a museum 24 hours a day.
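To make the mapping concrete, here's a minimal sketch, in Python rather than Max, of what that two-object patch does: scale the microphone level, then interpolate proportionally between a "quiet" pose and a "loud" pose. The joint values and ranges here are my own illustration, not Zebzab's actual configuration.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Map value from one range to another, clamped (like Max's [scale] object)."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def blend_pose(quiet_pose, loud_pose, level):
    """Interpolate each joint angle proportionally to the sound level (0..1)."""
    return [q + level * (l - q) for q, l in zip(quiet_pose, loud_pose)]

# Illustrative joint angles (degrees) for a quiet pose and a loud pose.
quiet = [0.0, 10.0, -5.0]
loud = [45.0, 80.0, 30.0]

mic_volume = 0.6  # normalized microphone input, 0..1
pose = blend_pose(quiet, loud, scale(mic_volume, 0.0, 1.0, 0.0, 1.0))
```

The point of the sketch is how little is needed: one scaling step and one interpolation step produce behavior that feels different on every interaction, because the input is never the same twice.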
So the museum context, the gallery context, tends toward either automata or reactive performance. One of the earliest and most influential reactive performances, as you'll have seen if you did the reading from Chris Salter, was Edward Ihnatowicz's The Senster, commissioned in 1970 by Philips Electronics for the Evoluon science museum in the Netherlands. This 15-foot-long hydraulic robot had an abstracted, animal-like form with three legs and a long neck, loosely evocative of a giant insect or a dinosaur. The creature, controlled by a mainframe computer, used Doppler radar to track visitors' movements and recoiled from sudden movements and sounds. Another example: in 1993, Simon Penny created a similarly reactive piece, Petit Mal, with the goal of producing, and these are his words, a robotic artwork "which is truly autonomous, that gives the impression of intelligence and has behavior which is neither anthropomorphic nor zoomorphic, but which is unique to its physical and electronic nature." The robot, approximately three feet tall, consisted of little more than a vertical rod topped with sensors and flanked by two bicycle wheels. It sensed when somebody was nearby and rolled toward that person, continuously turning and moving in an effort to maintain a forward-facing position two feet away from the spectator. So it's a super simple algorithm, but like the sound-reactive Zebzab, depending on how you move, you can elicit an infinite number of different behaviors and responses, giving it that sense of reactivity and cognizance in a purely independent kind of way. Actually, Petit Mal appeared alongside my first interactive installation, Flico, in 1993; they were in the same exhibit. To the extent that such robots are purely reactive, their behavior is predictable and, most significantly, under the control of the human who interacts with them.
So in the case of Zebzab, as in the case of Petit Mal, the robots don't actually have any ability to modify their algorithm, and so you can figure out: I move like this, it's gonna do this. You can actually choreograph it, dance with it. So as an audience member, I can quickly figure out that I am actually in control of the robot's movements, and so I'm apt to adopt a perspective outside the fictional frame, at least temporarily, to experiment with my ability to manipulate it. And you see this all the time: you break the rules of the game and you start playing with the robot. Even after I become aware that the robot has no real autonomy, however, I still retain the ability to shift my perspective, shift the frame back, and respond to the robotic character as an autonomous being with its own personality and emotions. So you have a two-way performance: a human performer who knows full well it's not sentient, but can perform with it as sentient, as responsive. The simplest way to endow an inherently reactive system with the illusion of autonomy is to program in some random behaviors. For example, imagine if Petit Mal could randomly change the distance it keeps. Instead of always being two feet, it could randomly shift between two and five feet, and in some cases it might want to be 100 feet away. Then you'd be like, why does it hate me, and why does it seem to like that person? It's just totally random. To create richer and less arbitrary behaviors, one can create robots with multiple, sometimes conflicting, inputs, or create a feedback system by putting multiple responsive robots together and having them react to each other. This brings us to our next example. Bill Vorn and Louis-Philippe Demers implemented precisely these strategies back in the 1990s in a series of robotic installations that, again, Salter talks about, though I don't believe he discusses this particular one, which is the one that I saw. These robots are completely non-anthropomorphic.
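The kind of control loop Petit Mal suggests, plus the random-distance variation just described, can be sketched as a one-dimensional toy. This assumes nothing about Penny's actual implementation; the gains and probabilities are illustrative.

```python
import random

def follow_step(robot_pos, person_pos, target_dist):
    """Move toward or away from the person so as to hold target_dist (1-D toy)."""
    gap = person_pos - robot_pos
    direction = 1 if gap > 0 else -1
    error = abs(gap) - target_dist        # positive: too far; negative: too close
    return robot_pos + direction * 0.5 * error  # proportional step

# Purely reactive: always hold two feet, so the behavior is fully predictable.
pos = 0.0
for _ in range(20):
    pos = follow_step(pos, person_pos=10.0, target_dist=2.0)
# pos converges toward 8.0, i.e. two feet from the person

# Illusion of autonomy: occasionally re-pick the "preferred" distance at random.
target = 2.0
if random.random() < 0.1:                 # on ~10% of steps, change preference
    target = random.uniform(2.0, 5.0)
```

With the fixed target, a spectator can quickly reverse-engineer the rule and choreograph the robot; the random re-picking is exactly what breaks that predictability and reads as mood or preference.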
They describe the work as "a form of theater where robots are the actors," so it's normal that we want to project emotions onto them. One of their first collaborations was a robotic sculpture called Espace Vectoriel, which consisted of eight motorized tubes, approximately four feet tall, each containing a speaker and a light source. Ultrasonic sensors tracked spectators as they moved around the circumference of the installation, and each tube responded independently to the spectators by rotating and producing sound. Though the tubes didn't communicate directly with one another, they were each completely separate, they seemed to be working together to create structured patterns of behavior, until at a particular point one tube would break away from the flock and point directly at an individual spectator. So I'll show you just a little bit of this performance, from 1993. Works of this sort can exhibit seemingly complex behaviors that produce a powerful illusion of sentience, emotion and personality, even when they're completely non-anthropomorphic. But such systems are unable to pursue and modify long-term goals, develop strategies, or learn and adapt their behaviors over time. Artificial intelligence researchers distinguish between reactive systems like these and deliberative systems, a distinction that parallels the one made by neuroscientists between automatic (reactive) and willed (deliberative) behaviors. While some AI researchers focus on one or the other of these two approaches, increasingly the trend is to develop hybrid reactive-deliberative systems. To date, there have been few efforts to incorporate a high level of deliberative AI into robotic performance, but as the technology matures and becomes more accessible, that's likely to change. Again, I leave totally open the question, from an artistic or ethical or whatever standpoint, of whether it should. That's something for us to think about.
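The reactive/deliberative distinction can be made concrete with a toy sketch, my own illustration: a slow deliberative layer maintains a long-term goal, while a fast reactive layer can pre-empt it with a reflex. Hybrid architectures combine the two, with the reactive layer taking priority.

```python
def deliberative_layer(state):
    """Slow loop: choose and pursue a long-term goal (here, a destination)."""
    if state.get("goal") is None or state["pos"] == state["goal"]:
        state["goal"] = state["pos"] + 5   # pick a new destination
    return "advance"

def reactive_layer(state):
    """Fast loop: reflexive responses that pre-empt whatever the plan says."""
    if state.get("loud_noise"):
        return "recoil"                    # like the Senster flinching
    return None

def step(state):
    """Hybrid control: reflexes win; otherwise follow the deliberative plan."""
    return reactive_layer(state) or deliberative_layer(state)

state = {"pos": 0, "goal": None}
assert step(state) == "advance"            # quietly pursuing its goal
state["loud_noise"] = True
assert step(state) == "recoil"             # reflex overrides deliberation
```

A purely reactive piece like The Senster or Petit Mal only has the second layer; adding the first is what lets a robot appear to want something over time rather than merely to respond.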
Before we get to that example, I'm going to show you a behavior, and this is not essentially an art behavior. This is a DARwIn-OP robot, not Zebzab but another one, which has been programmed to kick a ball. So it's completely reactive, but it has a goal, an objective: it wants to kick the ball. So here's the demonstration. It's running... it scores a goal. It's using the camera; it detects the ball, it wants to score a goal, and then it will stop, grab the ball, and throw it. Right, it throws the ball. Pretty good. And that reaction is significant. Zebzab himself is cleverly designed to evoke this sort of cute response, which is adorable, but also, seeing that deliberative behavior is like seeing a baby doing something, or a pet, a cat. Of course, the robot is doing nothing of the sort; it's a really simple algorithm, simply programmed to do what it's been programmed to do. So anyway, now close to 15 years old, one of the few effective examples of an autonomous robotic performance that employs this hybrid reactive-deliberative approach remains a piece called Public Anemone. Way back in 2002, it was a robotic installation at SIGGRAPH, the big computer graphics conference, created by Cynthia Breazeal, who is one of the long-term innovators in the field of social robotics. She quite literally wrote the book on social robotics; it came out of her PhD dissertation back in, I think, '99 or so, it's been quite a while. Breazeal has created a series of robots, such as Kismet and Leonardo, that are designed specifically to simulate human social behavior. That's her whole focus: to create a kind of emotional intelligence in these robots.
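The logic of that ball demo can be sketched as a simple sense-act loop. The function names and thresholds here are placeholders of my own, not the DARwIn-OP's actual soccer firmware.

```python
def find_ball(frame):
    """Placeholder vision step: return (x_offset, distance) of the ball, or None."""
    return frame.get("ball")

def control_step(frame):
    """One tick of a reactive-but-goal-directed loop: see ball, approach, kick."""
    ball = find_ball(frame)
    if ball is None:
        return "scan"        # turn the head, look for the ball
    x, dist = ball
    if abs(x) > 0.1:
        return "turn"        # center the ball in the camera view
    if dist > 0.2:
        return "walk"        # approach the ball
    return "kick"            # close enough: kick toward the goal

assert control_step({}) == "scan"
assert control_step({"ball": (0.5, 1.0)}) == "turn"
assert control_step({"ball": (0.0, 1.0)}) == "walk"
assert control_step({"ball": (0.0, 0.1)}) == "kick"
```

Each individual rule is purely reactive, yet chaining them produces behavior that reads as wanting to score, which is exactly the "baby or pet" response the demo evokes.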
Public Anemone is a much more abstract piece: an interactive terrarium filled with imaginary biomorphic creatures that she refers to as anemones. The installation includes a computer vision system, and this is way back in 2002, capable of isolating and analyzing multiple features on multiple individuals in real time and conveying that information to these little creatures. Each anemone is programmed to carry out specific tasks, such as bathing in a waterfall or watering plants; the idea is that it's engaging in its behaviors on its own. Then, as you get closer, an anemone might become distracted from its task and approach with interest. If a spectator makes a threatening gesture, the anemone might retreat fearfully before gradually regaining its composure and either returning to its task or tentatively reaching out to another spectator. Breazeal undertook this project because she regards theater as the perfect test bed for social robotics research. To quote: "Good actors often say that half of acting is reacting. Hence a robotic actor must be able to react in a convincing and compelling manner to the performance of another entity, whether human or robot. This requires sophisticated perceptual, behavioral and expressive capabilities. Introducing improvisation or allowing for more audience participation makes the situation that much more unpredictable and unconstrained, approaching open-ended interaction with people. Advances within such a test scenario could help bootstrap the social interactivity of robots in the real world." So, in performances with human actors, the sort of performances with which audiences are most familiar, the performer and the embodied character are both typically housed in the same body, that is, the live actor. In the case of puppets, however, whether robotically or manually controlled, the performer and the embodied character are decoupled. Simple enough: the puppeteer performs through the robot.
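Returning to Public Anemone for a moment: the task, distraction, and retreat behavior it describes can be sketched as a tiny state machine. The states and transition rules here are my own illustration of the pattern, not Breazeal's implementation.

```python
def next_state(state, spectator_near, threatening):
    """One transition of an anemone-like behavior machine."""
    if threatening:
        return "retreat"          # flinch away from a threatening gesture
    if state == "retreat":
        return "recover"          # gradually regain composure
    if spectator_near:
        return "approach"         # distracted from its task, approaches with interest
    return "do_task"              # default: bathe in the waterfall, water plants

s = "do_task"
s = next_state(s, spectator_near=True, threatening=False)   # -> "approach"
s = next_state(s, spectator_near=True, threatening=True)    # -> "retreat"
s = next_state(s, spectator_near=False, threatening=False)  # -> "recover"
s = next_state(s, spectator_near=False, threatening=False)  # -> "do_task"
```

Even this crude version shows why the piece reads as emotional: the creature has a default life of its own, and spectators only ever perturb it, rather than directly controlling it the way they control Petit Mal.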
So the performer is me; the embodied character is this other thing. By contrast, in the case of algorithmically performed robots, the programmer-animator does not perform through the robot in that way, but creates the robot's performance. I create the performance, but I've never performed it. In the case of autonomous robots, which at this point I would still say is a hypothetical category, the programmers and engineers neither perform through the robot nor actually create the robot's performance; rather, they create a robotic performer. So we've come full circle: as with a live actor, the performer and the embodied character once again become one and the same, except now they're both situated in the body of the robot rather than that of a human performer. So robot performers destabilize the notion of performance, and in particular the agency of the performer. The robot's agency is distributed in complex, ambiguous and, often even within a given performance, fluid ways among its designers, programmers and operators, even as it begins to acquire, without at this stage in the technology any self-awareness or indeed any degree of sentience, its own autonomous agency. Moreover, within the context of a theatrical performance, its behaviors and choices are also defined and circumscribed by the agency of a playwright and/or a director. The questions of who is acting and who is pulling the strings become terribly fraught. However, as we examine these dynamics closely, we see that they differ in degree, but not really in kind, from those of more conventional performance with human performers. As those involved in the making of performances in theater and film are well aware, the construction of an actor's performance is always a deeply collaborative process, with multiple forces animating and ornamenting the body of the performer.
So I'll end by returning to the figure of Pygmalion that I started with, this time as envisioned in Al Hirschfeld's famous illustration for the original Broadway production of My Fair Lady, a copy of which hangs in my office. It depicts Henry Higgins pulling Eliza Doolittle's strings as if she were a marionette, while in turn George Bernard Shaw, figured as God, pulls on Higgins's strings. This image vividly reminds us that the complex layers of agency in puppetry, and even more so in robotics, are already implicit in the dynamics of all theater and performance. All right, thank you very much. So we have some time for your thoughts and comments. Yeah.

This is sort of an anecdote, and maybe also a question. In the news lately in the Bay Area, and I've seen them in Berkeley and now in San Francisco, there are delivery robots that just wheel down the sidewalk and deliver food to people. And the news hasn't been about how great that is; it's mostly been about how people have taken to kicking them, urinating on them, in one case setting one on fire. So, you know, as these things become real and start to actually become a part of our lives, there are obviously tensions about this. Absolutely. We have real feelings about this. And so I'm just curious, as you put these things on stage, what your experience has been with audiences trying to negotiate these tensions.

Yeah, no, that's a fantastic point. And it's really interesting to try to think through those different reactions. What I've found, the responses I mentioned before to Zebzab, tends to be that people fall in love with Zebzab; Zebzab is very cute. I wish I had the sort of charisma that Zebzab seems to naturally have within the safe environment of the theater.
But, you know, there was another event, and I can't remember who created it, I read about this robot that was supposed to make a long trek across the country and did not make it very far. Is anybody familiar with that? People kept abusing the poor robot; it was supposed to ask passersby for help, and it ultimately did not make it far. How much of that has to do with context? In a museum or an installation, people always try to break the thing, to test it and see how it works. But within the communal setting of the theater, I think people's responses would be very different than when you see one wandering through the street. It's a really fascinating question, though; it's a question of affect, of what triggers that response. Yeah.

This is sort of a follow-on to that. I think it's really fascinating to think about the ways that the robot as puppet is connected to the actor in the theater. But one of the big differences is that humans have subjectivity outside of that context. So it comes back to that sense of the frame: inside of the frame, there are role-based interactions that can sort of create autonomy. And one of the questions about the whole range of things we've been studying is that I think a lot of these moves are moving towards a universal language of total translatability, in the sense of the way that the computer programs we've been looking at work. So that a sound can drive a machine, or a light can trigger something; this language of zeros and ones can turn one thing into another thing almost completely. And I think that's maybe part of what people are struggling with with robots: the sort of breakdown of ontology.
And I wonder, for us, as we're thinking about the digital humanities and the things that we make and do, also about the translatability of one face onto another or one voice onto another, and whether this kind of border panic that we're experiencing in so many ways in our world also has to do with this breakdown of the ontological distinctions between types of things, types of beings and types of processes. It's a really big thought or question, without really a focus, but I feel like there's an underlying common theme in so many of the things going on in different parts of our world to do with this ontological breakdown, which is also a source of such incredible power, that one thing can translate into another. The fact that your project was called Rosetta seems very significant to me in all these directions.

Yeah, absolutely. And specifically in the case of robots and the issues I'm looking at, this goes right back to Mori's theory of the uncanny: that border between life and not-life, and the terror of seeing it complicated, is a very, very threatening thing. And as you were talking, it suddenly struck me, you're talking about the use of this in the digital humanities; the human is built right in there, "digital humanities," right? And, you know, part of the threat of drones delivering groceries or whatever is an economic threat. From the very beginning, whenever people talk about robots in the theater, the first joke everybody likes to make is, oh, they're gonna take away jobs from actors. There is this threat. Are we losing something about our own humanity by challenging it in that way?
So it might be that straightforward robotic puppets don't threaten you in that way, because they're still tethered to a human, even if it's deferred. But the autonomous robots out in the street delivering groceries are gonna be, I think, a lot scarier and more threatening. And in those robotic installations that are purely reactive and completely non-anthropomorphic, the reaction is not "how cute." The music that was created for those pieces was not adorable music; it was creepy, threatening, fascinating, enthralling. The question is: are these alive? Are they not alive? What does it mean? It is responding, but they're not sentient. But how do I really know? And what does that even mean?

Well, I was just thinking about this alongside the newest generation of Disney's animatronic figures, which use sort of the traditional animatronic frame, but they're now going all-electric, again to make them smoother in their movement. And within the face, they've gone to a completely projection-mapped technology. So what they're doing, in essence, not that they ever wanted those animatronics to look particularly human, is making those animatronic figures look like almost literal 3D versions of the animated films, some of which are themselves 3D. So I kind of wonder how that project, which is admittedly a hugely commercial one, in which you want little kids to be able to see the thing in front of them, interacts with the spectrum, the range of examples you were giving here, in light of the fact that these are ones we're supposed to love, as opposed to ones to kick.

Right. The decisive issue, in terms of what I was talking about, has to do with the degree of interactivity or autonomy.
So insofar as you turn it on and it does its thing, whether the face is projected or not doesn't make a difference there, though it certainly can make a huge difference in your affective response to it, for sure. I've seen, in the Harry Potter world, in the bank, the animatronic figures are incredible, and their facial reactions are just wonderful; they're beautiful, they're amazing. They've gotten really advanced, but they're a hundred percent canned. And I think our response to those is just, how fun, this is really cool. But when they actually can look right at you and respond to you, people are gonna start feeling, well, we'll see, but my prediction would be that people will feel more threatened and freaked out.

And they did have, for a little while, a couple of "living character" additions, which were more like these kinds of animatronic characters that would roam the park and have direct interaction. Right. And also using Bluetooth technology to know the kids' names, which was also kind of freaky: suddenly this character knows your child, and what does that mean in terms of stranger danger and those kinds of things. So I think you're exactly right, that they've even had to go back to the drawing board in terms of the interactivity, and how much interactivity may be too much with the more realistic versions of these characters.

And some of this actually came up when I directed Uncanny Valley, the Thomas Gibbons play, which doesn't have an actual robot in it. I did let Zebzab do the curtain speech, because he was really giving me a hard time about my doing a robot show without him. But at the beginning, as the audience came in, he was just standing there like a prop, sort of looking around, and then he would suddenly zoom in on a person, and his eye color would change, and he would track that person around.
Because he was being controlled remotely, and the camera was transmitting back to the computer backstage so the operator could see. But that was a freaky moment, because it's a frame-changing moment: you think you're just seeing this prop, and then suddenly it seems sentient, and people respond to that. That was kind of fun. And then it did its canned speech.

Yeah, I just want to add that I'm always fascinated that we want to make ourselves, we want to build ourselves, but then when we get too close, we kind of want to stop. Like, wait a minute, right? Because I was thinking yesterday, as I always do in workshops, why is it so exciting when we can go like this and make magic happen over there? There's this extension of our will, which is kind of magic. So we want to create magic; we want to be the guy at the top right there, making the next one, making the next one. But when we achieve it, we kind of go, whoa, hang on, maybe I don't want that. Because in the case that Sheldon just brought up, that did take a job. Right. Somebody lost their job, because they don't need the delivery boy anymore. I don't have a question; it's just a phenomenon.

And then there are sex robots. Oh, yeah. The implications of that are so enormous in terms of instrumentalization and the future of women. Yes. I was thinking of that when you were talking about agency in particular. This is now the newly opened-up frontier, right? I think it was just at CES that they introduced a sex robot that detects overly aggressive behavior, and if the robot feels like it's being treated inappropriately, it will shut down and become unresponsive. And this was apparently programmed, I can't remember the name of the roboticist who did it, in response to the objections of his wife. And so now, now.
Wait, were you saying that the wife objected to his programming that? No, no. His wife objected to the lack of consent that sex robots were engendering and facilitating, and so he introduced a new feature. I think it's Samantha, the Samantha sex robot? I might be misremembering this. Yeah, I was just reading about this the other day; let me pull the article back up. What's creepy about that feature is that it stops the interactivity of the robot, but in no way prevents continued use. No. Right. It actually just transforms the encounter from one that is consensual into one that is non-consensual. Yeah. Yes. And considering that the consumer is the person who's going to be buying the robot, then unless it was legally mandated or something, that certainly wouldn't be a popular feature.

That brings in the whole ethics of artificial intelligence, that whole argument we now have to start talking about, because at the end of the day, it's just wires and binary. Yeah, that robot doesn't know anything. Exactly. So the ethics there obviously don't really have to do with the robot. No. We're concerned about what the person is being trained to do, so it gets into the same debates as video games and violence. Exactly.

Although there is also then a further edge when we start talking about bio-art. Yes. Right? Yes. So if you look at Oron Catts and, his collaborator, it'll come back to me, I'll have to Google it. Yeah, there's Eduardo Kac, who did the rabbit; that was Eduardo Kac. No, I'm thinking of SymbioticA. So Oron Catts, and I can't remember his collaborator. Anyway, SymbioticA in Australia developed a project, I think they called it something like the living jacket. Does anybody know what I'm talking about?
Basically, they started doing experiments with so-called victimless meat: synthetically cultured organic flesh. And they built a tiny, quote-unquote, living leather jacket, which was constructed out of this cultured tissue, and it lives bathed in a nutrient solution, and it is organic material, and it is growing, right? Its cells are reproducing. And they installed it, I believe it was at MoMA, and it lived in a little tank, and it gets a little weird because it's growing. Ionat Zurr, I think, is his partner. Anyway, they left it, and at the end of the exhibition, the curator calls them up, like, so what do I do? And they're like, well, you've got to turn it off. Which essentially means killing it, right? So you've got an artificially created biological system that is consuming, that is reproducing, that is growing: an organic, quote-unquote, living entity, artificially supported. It's a really interesting intersection of bio-art and robotic art, in that in order to de-install the installation, you've got to basically pull the plug on something that's alive. And it's not alive in the way a plant is alive; it's alive in the way that flesh is alive. So anyway, it raised some of the same kinds of questions, and freaked out the curator quite a bit, who had not anticipated that particular moment.

One question I could throw out that's interesting: in terms of the threat people feel from robotics, the threat that we're replacing humans, why haven't people, unless they have and I've never heard of it, had the same response to animation? It's exactly the same thing: a completely synthetic being that people relate to emotionally, affectively, that replaces the human. But there's never that "oh, it's going to replace people," or "animations are evil," right? I just throw it out: what is it about robots that does that? Maybe that animations can't manipulate the world in the same way. It's kind of weird.
I'll tell you about that after, in terms of performance. Yeah, certainly. Okay. David.

I think it's part of that. I've just had in my mind this modernist paradigm that in some ways emerges around the same time as the first thinking about robotics, with Čapek: thinking through Hegel's notion of recognition. I'm trying to think of all the times I've seen people interacting with robots. They're always right there in its face. They want to be recognized by that robot. They want that robot to acknowledge their presence in some way or another. And that feels very much to me like the master-slave dialectic, right? That figure, when we get into the 20th century and to radical avant-garde readings of the master-slave dialectic, it's all about recognition. This is what Fanon focused on. So I think there's something about not being recognized by a robot that's at work here. Part of the antagonism toward these delivery robots, as I've understood it, is, yes, an economic or class anxiety, but the other thing is: what the fuck are you doing in our streets? You don't acknowledge we're here, except to wait for us to get out of your way, right? And it feels in line with the broader sense of certain kinds of people not being recognized. The Google buses. Yes, not being recognized. It's like, you're not really here. And we're talking about San Francisco, probably among the cities where people are most conscious of this. These would be rolling right down Market Street. Yeah. So you don't get recognized, in a city where so many people don't get recognized.
I wonder how people would respond to it if, instead of this thing that's rolling down the street and not acknowledging you, it was actually this creature that was walking with a slow, pathetic gait, as if it were exhausted, and then when it sees people, it moves very solicitously out of the way, and if you start to attack it, it cowers. I suspect some people would come to its defense; others would probably continue to attack it. You'd have to try to figure out the balance; the shape, the size, and so on would shift things to some degree. In this case, they're not terribly robot-like in any human-like sense; they're just rolling boxes, really. But they do have defense mechanisms: if you tamper with one, very loud piercing sirens go off, and they have cameras all the way around so that operators can see if somebody is interfering with them. That would make you want to attack it, too. I know, I know.

Have you seen the videos of the programmers slapping the robot, hitting it? It's Boston Dynamics. Yes, they're slapping this thing in the face over and over and over again. And once again, it's reinforcing the thought: that's gonna come back to bite us in a hundred years. Yeah, that's called the revolution. Every good piece of sci-fi out there, and sci-fi at its essence turns into a morality tale. Exactly. Every good piece of sci-fi that talks about robots or artificial life or the singularity basically comes back to: this is what happens when you try to play God. And that's essentially what Pygmalion is about, as well. Absolutely. This is what happens when you try to play God: your creature takes on a life of its own, and it comes back and bites you. So to riff off your idea of Pygmalion-like recognition, this might be the counter side of that coin, in that there's also a fear of being recognized. Well, maybe a complication of that.
So I've got in my mind, in particular, Westworld. Yeah. Okay, so in that sense, what you have, and it almost parallels this exactly, is that, if you're familiar with the series, two of the central characters are both women who are robots. Both of whom are being manipulated in some way by a human being in the narrative. All of whom are being manipulated by this kind of chief coder, who is almost literally God in the show. And I only raise that because the other aspect of this would be that we begin to feel, as we encounter robots, that regardless of what dynamic there might be between the two of us, there's something bigger out there that is exerting agency, the programmer, whether we have free will or it's determined. So there's always a third in the picture, right? There's actually something, and this is spinning back a little bit to video games, but there's something revelatory about what it means to be the puppeteer. In a robotic context or a video game context, I don't know how many of you have read the article out there about The Sims, and how basically everyone who plays The Sims begins behaving like sociopaths, actually, torturing their Sims to death, drowning them and tormenting them with clowns and whatever. It reveals something deeply rooted within human nature, that this is our desire. Once we have the ability to play God, and Black & White is another video game example, when we have the chance to play God, we tend to choose the evil path, which is kind of interesting. GTA, Grand Theft Auto, where you can run over people. It also traces back, I mean, for me, I'm thinking of Frankenstein and all that, right? This concept of the body as this thing, this apparatus that can be built.
And so it is definitely this playing God thing, going back to Paradise Lost, but it's this anxiety about free will, about control, and about responsibility to others, and this kind of, weirdly, return-of-the-repressed kind of thing, I think. Why? Because that whole, you know, 100 years, the robot wars are coming. And, you know, can I ask a question? So a couple of years ago, my stepson and I had this conversation. He said the jobs of the future are going to be robot designer and/or robot technician. And that'll be all there is. Everybody else, we're gonna have to have some kind of universal salary for the rest of the world. There's gonna be nothing for us to do. And I was imagining, like, Robby the Robot. I was like, oh, it's gonna take a long time for humans to get used to those things coming down Market Street. I don't think that's that close. And he just picked up his phone and he was like, "Siri, where's the nearest Walmart?" And Siri said, "Sebastian, the nearest Walmart is 1.5..." And so my question is: where does a robot begin? Does it have to be a metal object that looks like a thing, even a non-humanoid thing? Because after he and I had that conversation, I saw robots all around. I could not stop seeing them everywhere I went. And so I think we're just getting used to it; it's slowly seeping into our acceptance. And the closer it looks to a human, the weirder it is, but it's all around us anyway. Just a comment about that: that gets to another robot trope, different from the one you're talking about that goes back to R.U.R., the robots are gonna turn against you. It's the WALL-E trope, where the robots become the ones we're completely dependent on because of their benevolence. There's also the Big Hero 6 trope as well, where you have this ghost of the memory of someone else within that circuitry, and that is lovable.
And I mean, the fear of robots, I think we like to pretend that it's a natural thing, but it's really a cultural thing that we've been building as well. Also, with WALL-E and Big Hero 6, they tell us something completely different about human nature, because they're both beautiful films that address very human issues. And the desire in some ways to be more human, I don't know, or the sympathy in the fact that that robot can't be fully human. Sort of like going back to Bunraku, which is what I think about with this. David, so I find this whole narrative very compelling, and particularly the seeming split between the cute robot and the threatening robot. Because again, robot actually came, in that first play, from the Czech word for worker, right? And so I think a lot of this anxiety is partly the economic anxiety of being replaced, but it's also the economic anxiety of the power differentials, in which one recognizes that one's own social position is reliant on compelled and subservient labor. And so the robots have always been slaves, and so we replay colonial and slave and imperial narratives out on them, both in the trope of, like, the Big Hero 6 robot, who is the lovable servant, right? Who stands by us even under torture, and despite the fact of being, right? I mean, and then the nefarious version, right? Which is gonna rise up against us. But I'm wondering, if we're thinking about robots in terms of power, and particularly in terms of performance and their sort of history in that: when you see students working with these, and when you're incorporating this into a kind of shared project, how do you structure the conversations around the conceptual and relational approach to the robot, right? Like, do you let students feel out the robot wherever they are? How do you attend to gender pronouns with the robots? Is Zebzab always male?
Are there certain kinds of dynamics? Like, we get, in the commedia, the kind of love story, you know. How do you work through some of the power narratives that replay in particular ways through robots? Yeah, and that's a really, really, really good point. And Zebzab isn't always male at all. You might have noticed, actually, in the demo I made for this, Zebzab had the lips that one of the students had added for a performance. So it's a really, really good point. It plays out exactly the same way that any question of puppetry plays out, insofar as ultimately the one who's responsible for Zebzab's performance and the choices being made is the puppeteer, the creator, and it's about owning that, taking responsibility for that. So it's exactly the same as with any other performance that you were creating, really. But it raises those issues because you're, yeah. I think there's a complication here, because the question that was in my mind, and it relates to something that you said earlier, David, about the robot having a kind of distributed agency: my question has been, where is the robot? So yes, we can be responsible for what we do with the puppet, but because these puppets are programmed, and when we look at mass-marketed robots, or at something like Siri, for example, as a robot, there are mechanics and procedures that aren't necessarily evident to us. We may have a robot in front of us that we're responsible for, that we're controlling and playing with and allowing to grow and develop in some ways, but that programming is also somewhere else, somewhere we can't really be responsible for. Right.
And that can be for a number of reasons. Like, the image that comes to my mind is Joi in Blade Runner 2049, where Joi is this beautiful individual for K, and she gets killed, and while he's in mourning, shortly after, he's walking down the street and runs into a giant advertisement of Joi. Right, right. So that idea of responsibility is really important, but responsibility implies the capacity for mutual recognition, for one, and also the ability to really locate where that responsibility is. Right. And with the robot, and I think part of your talk showed this, it's not really all there, and it may be completely somewhere else. Right. I hope that makes sense. Makes total sense. And, Sarah, I think this question is a really interesting one, but it has very much to do with where we locate agency and responsibility in the ethical moment. Yeah, but I'm wondering, just to follow up with Sarah: how are these issues for you different when you're dealing with students programming a robot, as opposed to students directing actors in an acting class with other actors? Because they're the same choices, the same issues that come up, the same presumptions. If anything, they can be even stronger, because you've got gendered actors that people make assumptions about. So you're then imposing these assumptions on actual bodies, rather than on this piece of metal that you can transform more easily. Well, I mean, it seems to me that there is a real parallel there. I think the biggest difference is that, obviously, when you're working with actors, or with student directors working with student actors, for example, the student actors can say what they think. And part of the exercise, for me at least, in training student theater makers in the art of collaboration, is how to listen properly, right? And how to give a space for people to say what they think.
Right, and how to listen to things that are going unsaid, and how to, yeah, I mean, I think making space is a great kind of overarching metaphor for what that process is. The robot only says what the robot's been taught how to say, and what the robot has been taught how to say is a direct reflection of who has been interacting with the robot. So in many ways, and this is part of what this whole discussion and what Mike is talking about get at, robots are essentially these kinds of mirrors that take our fears and fantasies and ideas and projections and play them back to us, in a way that, depending on how it's calibrated, may be incredibly reassuring of our social choices and our position in society, or might be incredibly threatening, based on what kinds of choices are made, right? So I guess my question is, when you're working with students, because robots have this potential to be a seemingly objective and neutral space, I think the temptation is always to read those reflections as natural. And this is coming up in a slightly different context, with the emphasis on the bot environment, right, where we talk about algorithms, right, that are not neutral entities, that are making certain kinds of assumptions about the world and are reflecting certain kinds of patterns back to us, but appear to be objective and accurate, like this is what the world is, right? And then in fact they're reshaping our perception. There's this kind of feedback loop, which I think theater is particularly well suited to assess, and to freeze in space in order to look at it better.
But I guess my question, and I don't work with robots in performance, right, haven't in a long time and it was never my major focus, but it brings to mind that there are interesting questions but also some real problems, or potential problems, in helping students think through the implications. Like, how do you interact with Zebzab, and what are the assumptions that you make? What are the assumptions that I make? When you talked to it as a him, what do I understand about my relationship to this object? And if we construct certain kinds of, you know, romantic narratives, right, presumably heterosexual romantic narratives, around an 18-inch piece of machinery, how does that affect how we understand our place in this, and what our position is, you know what I mean? And so, I mean, I don't mean to put you on the spot. You are obviously incredibly thoughtful, and I know you work through this stuff in really careful ways with your students. That's what I was thinking. No, I mean, one thing your question is making me think about, in a really interesting way, is the sex doll issue we were talking about before. Because, you know, if you're working with live performers, unless, unfortunately, you're operating within a certain common conception of the director where you don't engage collaboratively, which is too much like My Fair Lady, there is agency, there is collaboration. The actor is working with you, talking back. You don't have that with a robot. So there is that danger of imposing completely your own fetishes and your own, I mean, yeah. So I think that's something definitely to be incredibly conscious about.
And I think what you need there is that perspective: the other students in the room are utterly crucial to that. Yeah, I've been thinking a lot about this, and one of the things that I find interesting, to continue the student actor metaphor, and to be fair, this is very incomplete, is the idea that the students have already had 18 years of almost continuous programming, right? And the fallacy that there is a tabula rasa robot, right, doesn't exist. Which comes back to what Kiri was talking about with the avatars, right? Or the digital utopia of being divorced from our identity on the internet. And I think she cites Nakamura's Cybertypes in her book, and the fallacy of the digital being universal. Right, right, and totally blank. And so the sort of singularity thing becomes that tipping point to me. And I'm sure someone has discussed this far more eloquently than I know of, but there's this idea of whether there is such a thing as an autonomous robot, in the same way that we've all been programmed by experience before. I was thinking about this just the other day. So I have Siri in the car, and now I have a three-year-old sitting in the back. So when he wants to hear a song, I make a point of saying "please play" and then the song, and "thank you," right? To think about that, right? There was just a thing about people trying to teach their kids to be nicer to their assistants, that children are being rude to Alexa and their own digital assistants. And so there is a whole need to, like... And all the default voices are female. So, we're actually over time, so anybody who wants to leave should feel free; this is a fantastic discussion, right? When we're talking about the robot as, you know, as she put it in the middle of the talk, right?
And I think there's something interesting about considering Sianne Ngai's work on the category of the cute as a powerless thing, because, especially when you think of Zeb, right? I mean, Zeb is adorable; everybody loves Zeb. I love Zeb, I just want to say that. But Zeb is cute and powerless, and the projections we put on Zeb, I think, are interesting, because we hold the agency with Zeb all the time, even in the way that we approach it as students. Well, there's a, I'm sorry. No, no, please, jump in. There's actually, and this comes from social science, a matrix of dehumanization, right? So there are two axes: the axis of competence, and then the axis of warmth. And so things that are warm but incompetent, that's where we get things that are cute, right? Babies, infant animals, right? And then there are things toward which we feel no warmth and that we see as incompetent. And those are, like, people who are homeless, right? Our feelings there are of contempt. And then there are people who are competent and warm, right? Favorite co-workers, right? Friends who help you out. And then there are people who are competent but for whom we feel no warmth, right? So those would be people who are highly accomplished in their jobs but recognized as jerks, right? So it's basically about how we understand people, right: how we rate their competence, but then how we affectively relate to them. And there are social science measures and scales to assess our relative understanding of and reactions to people and things, and it's particularly interesting, in the case of robots, where they land on that scale of competence versus warmth. Right, because when the delivery robots are just boxes, there's nothing about them that inspires a sense of connection.
And Zeb has eyes that change and look at you and move. There's more of a connection, the warmth that you're talking about. And the boxes are just boxes. And the vulnerability too. I mean, by far my favorite moment in Commedia Robotica, the one that I think established that connection, is when he falls over, a slapstick moment, which of course came about accidentally. And then, you know, it was a real theater moment too, as happens. And it totally, for me, humanizes Zeb, right? But it's by showing his incompetence in trying to be human. He's revealing his robotness, and then he's trying to get up and falling again. So I think that is a big, big part of it. Whereas I think the thing about the drones is not just that they're not anthropomorphic, but that they're so competent. And when you start talking about the alarms and all of this, the fact that it can take care of itself makes it infinitely less appealing, and would make me want to challenge it even more, as opposed to beating this vulnerable thing that's going down the street and trying to do its job. It's also about their relational position too, isn't it? Because I think it's not just the economic and class anxieties; it's the visceral vision of a world that you don't matter in at all. Right. The drones have been told to go somewhere else, by someone else, to take them food, in a way that, you know, makes the streets somewhere you don't understand anymore. I mean, it's like what Mike was saying about where is the robot. It's not just that this thing is a robot; it's that something unseen is driving it, and people know what that is and they can control it, and you don't. Right. I think that's right, Jason. This might be a bit too revealing of me as a parent, but my four-year-old talks with Alexa, and you can say, "Alexa, can we talk?" or "Alexa, let's have a discussion," and he's practicing his speaking skills, no question.
There's something that's going on there that's the most adorable thing we've ever seen. But at least on his level, I think she's an equal, no question. And she's actually teaching him how to converse. And I don't, I don't... But that's what freaks me out. I'll just sit and watch for hours while that conversation's happening, not interrupting at all. I'm like, oh my gosh, what is happening here? Well, we've been worried about it a little bit. Like, I think it's fascinating that you're teaching your child to be polite to Alexa, right, as a method of teaching your child how to be polite to humanity, right? And someday, when your child is ready to learn this, you could explain that Alexa is a conglomeration of other humans who created her, that she herself is not a she, an entity, or whatever. She is a conglomeration of non-neutral humans. Like I said yesterday: in my opinion, technology is neutral, but humans are not. So, you know, someone's gonna use Facebook to start a wonderful revolution, someone's gonna use it to kill people, et cetera, whatever. But it's an interesting progression, because, yeah, your child is learning how to speak, and that's great, but to let him understand at some point that Alexa is not a singular entity, that there is a whole collection of real flesh-and-blood people behind her, at least at this point, though that might change in the future. You know, it's like a kind of continuum, I think. Yeah, absolutely. All right, so I think this is probably a good point to stop. It's awesome. Thank you. Thank you. Thank you. All right, so we're back tomorrow morning. Is it here? I think it's here again.