Now, imagine a stage with an artist performing in front of a crowd. Is there a way to measure and even quantify the show's impact on the spectators? Kai Kunze is going to address this question in his talk Boiling Mind now. Kai, over to you. Thanks a lot for the introduction. We have a short video; I hope it can be played right now. The video is playing. Yeah, so thanks a lot for the intro. This is the Boiling Mind talk, on linking physiology and choreography. I started off with this short video to give you an overview of the experience of this dance performance that we staged in Tokyo at the beginning of the year, just before the lockdown, actually. The idea behind this was that we wanted to put the audience on the stage, breaking the fourth wall, using physiological sensing in the audience. That physiological change is then reflected on stage through projection, light, and sound to influence the dancers and performers, and then, of course, fed back again to the audience, creating an augmented feedback loop. In this talk today, I want to give you a small overview: a little bit about the motivation, why I thought it's a nice topic for the remote experience of the Chaos Computer Club, and also a little more about the concept, the setup, and the design iterations, as well as the lessons learned. For me, this talk is a good way to exchange expertise and find a couple of people that might be interested in the next iterations, because I think we are still not done with this work. It's still work in progress, and it's also a way to share data, to do some explorative data analysis on the recorded performances that we have. And then, most important, I wanted to show a more creative way to use and explore physiological data.
Because for me, as a researcher working on wearable computing and activity recognition, often we just look into recognizing or predicting certain motions or certain mental states. And that, at least for simple things, feeds back into these, I think, rather stupid ideas of surveillance and the application cases around that. So can we create more intuitive ways to use physiological data? From a concept perspective, I think the video gave a good overview of what we tried to create. What we did in three performances was use physiological sensors on all audience members. It was important for us that we were not singling out individual people to get feedback just from them, but had the whole response, the whole physiological state of the audience, as an input to the performance. In this case, we used heart rate, heart rate variability, and also galvanic skin response as inputs. These inputs then changed the projection that you could see, the lights (especially their intensity), and also the sound. And that, in turn, led to changes in the dancing behavior of the performers. For the sensing, we went with a wearable setup, in this case a fully wireless wristband, because we wanted something that is easy to wear and easy to put on and take off. We had a couple of iterations on that, and we decided to sense electrodermal activity and also heart activity, because there is related work linking these signals to engagement, stress, and excitement measures. The question then was where to sense them. First, we went with a couple of wristbands and also commercial or semi-commercial approaches. However, the sensing quality was just not good enough: especially from the wrist, you cannot really get good electrodermal activity, so galvanic skin response. It's more or less a sweat sensor.
So that means you can detect if somebody is sweating, and some of that sweating is related to a stress response. There are a couple of places to measure that: it could be on the lower part of your hand or on the fingers; these are usually the best positions. So we used the fingers, and over the fingers we can also get heart activity. In addition, there is a small motion sensor, a gyroscope and an accelerometer, in the wristband. We haven't used that for the performance so far, but we still have those recordings from the audience as well. When I say "we", I mean especially George and Dingding, two researchers who work with me and actually took care of the designs. The next question was how to map the data to the environment, the staging in this case. This was actually done by a different team, the Embodied Media team, also at KMD, so I know a little bit about it, but I'm definitely not an expert. For the initial design, we used the EDA for the movement speed of the projection: the EDA rate of change is mapped to the movement of the blobs and the mesh that you can see. The color represents the heart rate; we went for the LF/HF feature, the low-frequency/high-frequency ratio, which, according to related work, should give you some indication of excitement. The lights were also bound to the heart rate, in this case the beats per minute, mapped to intensity: if the beats per minute of the audience collectively go up, the light gets brighter; otherwise it dims. For the audio, we had an audio designer who took care of the sounds and faded specific sounds in and out, also based on the relative rate of change of the electrodermal activity.
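The mappings just described can be sketched roughly like this. This is a hypothetical illustration in Python, not the actual TouchDesigner patch; the value ranges, parameter names, and normalization are my assumptions for illustration:

```python
# Hypothetical sketch of the audience-feature-to-visual mappings described
# above. Ranges and names are illustrative assumptions, not the real patch.

def clamp01(x):
    """Clamp a value into [0, 1] for use as a control parameter."""
    return max(0.0, min(1.0, x))

def map_features(eda_rate, lf_hf, bpm,
                 bpm_range=(60.0, 100.0), lf_hf_range=(0.5, 3.0)):
    """Map aggregate audience features to visual control parameters.

    eda_rate : relative rate of change of electrodermal activity
    lf_hf    : low-frequency/high-frequency HRV ratio (excitement proxy)
    bpm      : mean audience heart rate in beats per minute
    """
    lo, hi = bpm_range
    light_intensity = clamp01((bpm - lo) / (hi - lo))  # brighter as BPM rises
    blob_speed = clamp01(abs(eda_rate))                # EDA change drives motion
    l, h = lf_hf_range
    color_warmth = clamp01((lf_hf - l) / (h - l))      # LF/HF drives the color
    return {"light_intensity": light_intensity,
            "blob_speed": blob_speed,
            "color_warmth": color_warmth}

# Example: a moderately excited audience state
print(map_features(eda_rate=0.3, lf_hf=2.0, bpm=80))
```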
All this happened while the sensors were connected, over a sensing server written in Qt, to TouchDesigner software that generated these projections; the music was fed in as well, and that was then actually controlling the feedback to the dancers. If you want a bit more detail, I uploaded a work-in-progress preprint, a draft of an accepted TEI paper. So in case you're interested in the mappings and the design decisions for the projections, there is a little more information there. I'm also happy to answer some of those questions later on; however, I will probably just forward them to the designers who worked on them. For the overall performance, we started out with an explanation of the experience. It was already advertised as a performance that would take in electrodermal activity and heart rate, so people who bought tickets or came to the event already had a little background information. We of course also made sure to explain at the beginning what type of sensing we would be using and what the risks and problems with this type of sensing and data collection are. The audience could then decide, with informed consent, whether they just wanted to stream the data, didn't want to do anything, or wanted to stream and also contribute the data anonymously to our research. When the performance started, we had a couple of pieces and parts. In B, you can see one of them, where we showed the live data feed from all of the audience members in individual tiles. We had that in before just for debugging, but the audience actually liked it, so we made it part of the performance, deciding together with the choreographers to include it. For the rest, as you see in C, we have the individual blob objects that move according to the EDA data and change color based on the heart rate information, so the low-to-high-frequency ratio. In D, you also see these clouds.
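To give a rough idea of what such a streaming pipeline can look like, here is a minimal sketch of sending one wristband sample as JSON over UDP. The message fields, device id, and port are assumptions for illustration; the real system used a Qt sensing server feeding TouchDesigner:

```python
# Minimal sketch: stream one wristband sample to a visualization server
# over UDP as JSON. Field names, device id, and port are assumptions;
# this is not the actual protocol of the Boiling Mind system.
import json
import socket

def make_packet(device_id, eda, bpm, rr_ms):
    """Serialize one sensor sample as a UTF-8 JSON datagram payload."""
    return json.dumps({"id": device_id, "eda": eda,
                       "bpm": bpm, "rr_ms": rr_ms}).encode("utf-8")

def send_sample(sock, addr, device_id, eda, bpm, rr_ms):
    """Fire-and-forget one sample; UDP keeps latency low for live visuals."""
    sock.sendto(make_packet(device_id, eda, bpm, rr_ms), addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Hypothetical local server address for the visualization software
    send_sample(sock, ("127.0.0.1", 9000), "wristband-01",
                eda=0.42, bpm=72, rr_ms=833)
    sock.close()
```

UDP is a reasonable fit here because a dropped sample matters far less than added latency in a live feedback loop.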
Here, similarly, the size is related to the heart rate data and the movement again to the EDA. There is also one scene, in E, where the dancers pick one person in the audience and ask them to come on stage, and then we display that audience member's data large in the back of the projection. For the rest, again, we're using this excitement data from the heart rate and from the electrodermal activity to change sizes and colors. To come up with this design, we went the co-design route, discussing with the researchers, dancers, visual designers, and audio designers a couple of times. That's actually also how I got involved first, because the initial ideas from the primary designer of this piece were to somehow combine perception and motion, and I've worked a bit in research with eye tracking. On the screen you see the Pupil Labs eye tracker, an open-source eye-tracking solution, and also EOG (electro-oculography) glasses that use the electric potential of your eyeballs to detect something rough about eye motion. At the beginning, we wanted to combine the perception of a person seeing the play with the motions of the dancers, and understand that better. So that's how it started. The second inspiration for this theater idea came from a visiting scholar, Jamie Ward, and his work with the Flute Theatre in London. That's an inclusive theatre that also does Shakespeare workshops, and he did some sensing, just with accelerometers and gyroscopes, so inertial motion wristbands, to detect interpersonal synchrony between participants in these workshops. When he came over, we did a small piece where we looked into this interpersonal synchrony again in face-to-face communication. I mean, now we are remote and I'm just talking into a camera and cannot see anybody. But usually, if you had a face-to-face conversation, which doesn't happen too often anymore,
unfortunately, you would show some type of synchrony. So, you know, eye blinks, head nods, and so on would synchronize with the other person you're talking to. We showed that in small recordings, and we also showed that we can recognize this with a wearable sensing setup, again using some glasses. So we thought: why don't we try to scale that up? Why don't we see what happens in a theater performance or in a dance performance, and whether we can recognize some type of synchrony there as well? After a couple of ideation sessions and a couple of test performances, including dancers trying out glasses and other headwear, which turned out not to be usable for the dancers during the performance, we came up with an initial prototype that we tried out in, I think, November 2018. There we used a couple of Pupil Labs and also Pupil Invisible devices, nicer optical eye-tracking glasses with small cameras in them, distributed in the audience; a couple of those EOG glasses, which also have inertial motion sensors (accelerometer and gyroscope) in them; and, at the time, heart rate sensors, which however were fixed and wired to the system. The dancers also wore some wristbands with which we could record motion data. We then had projections on three frames above the dancers: one showed the blink and nod synchronization of the audience, another showed heart rate and variability, and the third just showed the raw feed from one of the eye trackers. It looked more or less like this. From a technical perspective, we were surprised, because it actually worked: we could stream around 10 glasses, three eye trackers, and four or five heart rate sensors at the same time, and the server worked.
However, from an audience perspective, a lot of the feedback was that the audience didn't like that just some people got singled out and got a device, while the others could not really contribute and could not see their data. From a performance perspective, the dancers didn't really like that they couldn't interact with the data; the dance piece in this case was pre-choreographed, so there was no possibility for the dancers to really interact with it. And from an aesthetic perspective, we really didn't like that the screens were on top, because you had to either concentrate on the screens or on the dance performance, and also decide which type of visualization to focus on. So overall it was partly okay, but there were still some troubles. One was that we definitely wanted to include all of the audience, meaning we wanted everybody to participate. The problem there was having enough eye trackers or head-worn devices; in addition, if it's head-worn, some people might not like it. The pandemic hadn't started yet when we did the recordings, but there was already some information about a virus going around, so we didn't really want to give everybody eyeglasses. So we moved to the heart rate and galvanic skin response solution, and to a setup where the projection is part of the stage. We used the two walls, and, although it's a little hard to see in the images, we also used the floor as another projection surface for the dancers to interact with. The main interaction actually came over the sound. So, moving on to the lessons learned: what did we take away from that experience? The first part came from talking with the dancers and talking with the audience.
Especially with the more intricate, more abstract visualizations, it was sometimes hard to interpret how one's own data fed into the visualization. Some audience members mentioned that at some point they were not sure whether they were influencing anything or whether it had an effect on other parts. When they saw the live data it was kind of obvious, but for future work we really want to play more with the agency, and also the perceived agency, of the audience and the performers. We also really wonder how we can measure these types of feedback loops. Now we have these recordings, and we have looked a little more into the data, but it's hard to tell where we were successful. To some extent maybe yes, because the experience was fun and enjoyable, but on the level of "did we really create feedback loops", and how to evaluate feedback loops, that's something we want to address in future work. On the other hand, what was surprising, as I mentioned before, was that the raw data was something the dancers as well as the audience really liked. That surprised me, because I thought we would have to hide it. But we had it on, as I said, as a kind of debug view at the beginning of some test screenings, and the audience members were interested in it; they could see it and were talking about it: "oh, see, your heart rate is going up", or "your EDA is going up". The dancers also liked that. So we used it in the three performances that we then successfully staged, especially for scenes where the dancers would interact directly with parts of the audience. At the beginning of the play there is a scene where the dancers give out business cards to some audience members. And it was fun to see that some audience members could identify themselves, or would identify somebody else sitting next to them, and then this member had a spike in EDA because of the surprise.
So there was really some interaction going on. If you're planning to do a similar event, staying close to the raw data, and also keeping latency low, is, I think, quite important for some of these interactions. From the dancers, there was a big interest. On the one side, they wanted to use the data for reflection: they really liked having the printouts of the audience's responses later on. They also wanted to dance more with biometric data and use it more in their rehearsals. Of course, we had the co-design sessions, so we worked with them directly: we showed the dancers the sensors and the possibilities and then figured out together what can and cannot work, and what might or might not have an effect. As you saw, we also did some prototype screenings and internal rehearsals where we used recorded data. We had a couple of our own people sitting in the audience, got a couple of other researchers and students involved to sit in the audience and stream data, and also worked a little with pre-recorded and synthetic experiences of how we envisioned the data would move. Still, it was not enough in terms of providing an intuitive way to understand what is going on, especially for the visualizations and the projections; they were harder to interpret than the sound and the soundscape. And then the next, and maybe the biggest, point: sensor and feature best practices. We're still wondering what to use. We're still searching for the kind of sensing equipment we can use to relay this invisible link between audience and performers, and for how we can augment it. We started out with the perception and eye-tracking parts.
We then went to a wrist-worn device because it's easier to maintain and it's wireless, and it worked quite well to stream 50 to 60 audience members at one of those events to a wireless router and do the recording as well as the live visualization with it. However, the features might not have been... okay, sorry for the short part where the stream was offline. We were talking about the sensor features and best practices. In this case, we are still searching for the right type of sensors and features to use for this type of audience-performer interaction. We were using the low-frequency/high-frequency ratio of the heart rate values and also the relative changes of the EDA, and that was working, I would say, not that well compared to other features that we have now found while looking into the performances and the recorded data of the around 98 participants who agreed to share their data with us. From the preliminary analysis, something Karen Hahn, one of our researchers, is working on, looking into what types of features are indicative of changes in the performance, it seems that a feature called pNN50, which is related to heart rate variability, to the R-R intervals, is quite good, and also the peak detection per minute on the EDA data, where we just count the relative changes, the relative ups and downs, of the EDA. If you're interested, I'm happy to share the data with you: we have three performances, each around an hour, and 98 participants in total, and we have the heart rate data, the EDA data from the two fingers, as well as the motion data. We haven't used the motion data at all, except for filtering the EDA and heart rate data a little, because if you're moving a lot, you will have some errors, some motion artifacts, in them. But what do I mean with "why is the pNN50 or the EDA peak detection so nice"?
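The two features mentioned here can be sketched as follows. The 50 ms threshold in pNN50 is the standard definition, but the EDA peak criterion (a local maximum rising a minimum amount above the preceding trough) is a simplified assumption of mine, not the project's actual detector:

```python
# Sketch of the two features that looked promising in the analysis:
# pNN50 over R-R intervals and a simple EDA peak count per minute.
# The EDA peak criterion here is an illustrative simplification.

def pnn50(rr_ms):
    """Fraction of successive R-R interval differences exceeding 50 ms."""
    diffs = [abs(b - a) for a, b in zip(rr_ms, rr_ms[1:])]
    if not diffs:
        return 0.0
    return sum(d > 50 for d in diffs) / len(diffs)

def eda_peaks_per_minute(eda, sample_rate_hz, min_rise=0.05):
    """Count local maxima rising at least min_rise above the running
    trough since the last counted peak, normalized to peaks per minute."""
    peaks = 0
    trough = eda[0]
    for prev, cur, nxt in zip(eda, eda[1:], eda[2:]):
        trough = min(trough, cur)
        if prev < cur > nxt and cur - trough >= min_rise:
            peaks += 1
            trough = cur  # reset the trough after each counted peak
    minutes = len(eda) / sample_rate_hz / 60.0
    return peaks / minutes

# Example: one of three successive R-R differences exceeds 50 ms
rr = [800, 860, 865, 900]  # R-R intervals in milliseconds
print(pnn50(rr))
```

A higher pNN50 indicates more beat-to-beat variability, consistent with the relaxed second half of the performance described later.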
Let's look a little closer into the data. Here I've highlighted performance three from the previous plots. On the left you see pNN50; the blue line gives you the average pNN50 value. This is our R-R-interval-based heart rate variability feature, and it is related to relaxation and stress: usually a higher pNN50 value means you're more relaxed, and a lower value means you're more stressed. What happens in the performance fits very well and correlates with the intention of the choreographer. You see sections one to six on the bottom. The first half of the performance is meant to create a conflict in the audience and stir them up a little; the business card scene is part of that, for example, and so is the scene where somebody gets brought from the audience onto the stage and joins the performance. The latter part is more about reflection and relaxation, taking in what you experienced in the first part. And that's something you can see quite nicely in the pNN50: at the beginning it's rather low, meaning the audience is slightly tense, whereas in the latter part they are more relaxed. Similarly, the EDA at the bottom, shown as a bar chart, indicates a lot of peaks happening at specific points, and these points correlate very well with memorable scenes in the performance. Section four, the red one, is the one where somebody from the audience gets brought onto the stage, and around minute 12 there's the scene where the dancers hand out the business cards. So I think it's promising. We are definitely not there yet on the data analysis part, but there are some interesting things to see. And that kind of brings me
back to the starting point. I think it was an amazing experience, actually, working with a lot of talented people, and the performance was a lot of fun. We are slowly moving towards putting the audience on stage and breaking the fourth wall with these types of setups. That leads me to the end of the talk, where I have to do a shout-out to the people who did the actual work: all of the talented performers, and the project lead, especially Zomoe, who organized everything and was the link between the artistic side, the dancers of Mademoiselle Cinema, and us, as well as the choreographer Ito-san. I hope I didn't miss anybody. So that's it. Thanks a lot for this opportunity to introduce this work to you, and now I'm open for a couple of questions and remarks. I also wanted to host a self-organized session sometime; I haven't really gotten a link or anything, but I'll probably just post something on Twitter or in one of the chats if you want to stay in contact. I'll try to get two or three researchers to join as well: George, who was working on the hardware, and Karen, who worked on the visualizations and the data analysis, might be available. If you're interested, just send me an email, or maybe check the blog post; I'll add it there if I get the link later. So yeah, thanks a lot for your attention. Thanks, Kai, for this nice talk. And to the audience: please excuse us for the small disruption of service we had here. We're a little bit late already, but I think we still have time for a question or so. Unfortunately, I don't see anything online at the moment, so if somebody tried to pose a question and hit that disruption of service, I apologize. On the other hand, Kai, what about data sharing? How can the data be accessed? Do people need to contact you or drop you a mail or personal message? Yeah, right now the publication is only just accepted, and there are also some rights issues and so on, so the easiest is just to send me a mail. The data will be posted sometime next year on a more public website, but for now, the easiest is to send me a mail. There are already a couple of people working on it, and we have the rights to share it; it's just a question of setting it up. I wanted to have the website online before the talk, but with the technical difficulties and so on, everything is a little bit harder this year. Indeed, indeed. Thanks, Kai. Yes, I'd say that's it for this session. Thank you very much again for your presentation, and I'll switch back to the others.