in for a treat with the main highlights of the day. We have two keynote speakers finishing up our day today. Before I introduce them, I want to thank our sponsors: the Theoretical Sciences Visiting Program here at OIST, which is hosting our two speakers, and the French Embassy in Japan for their generous support in making this event possible. Thank you very much for your support.

The topic of this event is the metaverse and virtual reality. The metaverse has been loudly discussed in media all around the world. Will we find ourselves in a future where everybody just sits in bed wearing a headset and interacts virtually? Or will technology take us in a different direction? These were some of the topics discussed today, and tonight's lectures will give you an inside view of the laboratories working with these kinds of technologies. You'll see — especially if you are not in this field — that what the metaverse is, and where these technologies stand, is much more diverse and maybe even more exciting than what the media portrays. So I hope I have raised your expectations for tonight.

Let me introduce our first speaker. Shunichi Kasahara is a researcher and project leader at Sony Computer Science Laboratories. He received his PhD in Interdisciplinary Information Studies from the University of Tokyo in 2017. He joined Sony Corporation in 2008, was an affiliate researcher at the MIT Media Lab in 2012, and joined Sony CSL in 2014. He is currently leading research on cybernetic humanity, which explores the new humanity emerging from the integration of humans and computers. His research has been presented at top conferences around the world and published in all the scientific journals you would want to publish in.
He has also conducted interactive exhibitions and social implementations for public interaction. Starting this year, 2023, he is running the Cybernetic Humanity Studio as part of a new collaboration between Sony CSL and OIST. Thank you very much, and the floor is yours.

Thank you, Tom, for the great introduction. Hello, nice to meet you. My name is Shunichi Kasahara; you can just call me Shun. First of all, it is a great honor to be here and talk about my research. Let me thank again Sony CSL, OIST, the Theoretical Sciences Visiting Program, and the French Embassy for giving us this opportunity. Thank you so much for this special moment.

One more note: we have a translator in the back, and for her to translate into Japanese in real time, I will try to speak slowly. Honestly, I have about 100 slides today, but I will try. Sometimes I will also use some Japanese to help with understanding.

So again, my name is Shun, and I am a researcher at Sony Computer Science Laboratories. Recently I also joined OIST as a visiting researcher, and we started a new studio in Lab 5. After the public lecture we are also planning an open lab, so you can come and try some of the demos I will show.

A little background about myself: there are three pillars, three aspects. One is the researcher, investigating human perception, cognition, and human augmentation. Beside that I am an engineer, doing software development and system integration to make new experiences possible. And the third aspect is the artist, bringing this research into society through inspiration and exhibitions.
That is a very important part of my work: giving these questions back to society. So I sometimes do exhibitions.

Today I want to talk about cybernetic humanity — that is my topic — and its connection to the metaverse and virtual reality. In general, I am exploring what the self is when humans and computers are integrated. You can imagine: this is myself. But if we have the metaverse inside the computer, we can also have a virtual reality body — a kind of integration of human and computer. And not only visually.

I want to show an interesting video here. It comes from Sony PlayStation — I don't mean to advertise a Sony product, but I love this idea. Hmm, can you hear the sound? No? OK, give me a second; let me check the sound settings, because this is a good video. ... OK, you can hear me, but not the video.

Anyway, the video is about a person saying, "I cannot do that. Usually I cannot do that." In everyday life we cannot do things like this. But in a computer game, for instance, people say, "I can do that. I can do that." Of course, this is about video games, but we can imagine this kind of technology making it happen in our real lives. We gain the possibility of doing things we usually cannot do.

At some point, though, we have to think about what we feel with these new possibilities. We can imagine having new technology, new abilities, new bodies — a kind of augmentation, a virtual body, a new ability. But how do we feel it, subjectively?
If we cannot feel that this is myself, then the technology is actually making me lose myself. So how can we balance and integrate things, so that augmentation and subjective experience mix together and create a new form of humanity? That is why I put "cybernetic" and "humanity" together.

Today I want to talk about three topics: this is my action, this is my body, and this is me. They are all highly relevant to the metaverse and virtual reality. Let me start with "this is my action."

To set up this topic, I'd like one volunteer to show a human limitation. It's a bit embarrassing, but if possible — Nick, can you help me demonstrate something? Sorry about that; this sometimes happens in my presentations. Thank you, thank you. Now I will release the pen. Nick, please catch this pen. That's it, OK? Maybe we can go over there. Thank you, Nick. I know it's a bit embarrassing, but thank you for helping to show a human limitation.

So that is the human limitation. This is actually from the Wired Muscle project, with Kenji Suzuki's group at the University of Tsukuba. As you can see, on our own we cannot catch the pen. That is a human limitation — it's almost impossible. But if we have some hidden technology connecting us, what happens? We can catch the pen very easily, something like this, almost 100% of the time.

Why can't we catch the pen? Because we are late. There is latency in perception, cognition, the motor command, and muscle contraction, and it adds up to around 240 to 350 milliseconds. That's why we cannot catch the pen, and why we cannot stop the car right at the moment of an event. In the video, I attached EMG — electromyography, a muscle measurement — to the person who releases the pen, measuring the moment the muscle activates at release.
We then connected this signal to the other person through EMS — electrical muscle stimulation — which applies current to the muscle and contracts it directly. The catcher is, in effect, forced to move and catch the pen. That is why individuals can catch it. We have demonstrated this phenomenon on many occasions, like this.

In the very beginning, people are happy: "I can catch the pen!" But they think, "this is because of the computer." We have been exhibiting these kinds of things all the time, and something very interesting happened. In the next video — I'm not sure we can play the sound — the first trial is without EMS, without any electricity, and the second trial is with EMS. Let's see.

He was very happy and said, "I did it, I caught the pen." But — sorry, Zenon — this is not your ability; it's because of the EMS. Yet, as many people report: "but I feel I caught it." This is interesting, because the EMS is applied just before our intentional movement, and still people feel, "this is my move."

We found this to be a very interesting phenomenon and tried to identify the time course — the boundary of the self in action. With my collaborators we ran the Preemptive Action project, a very simple psychophysical experiment: we applied EMS at many timings and asked participants how much they felt "this is myself acting." In the end, we found that even when we accelerate their muscles up to 80 milliseconds faster than they could move on their own, they still feel "this is my action."

So let me step back to a more abstract view of this pen-drop experiment.
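The relay just described — detect the releaser's EMG onset, then immediately command EMS on the catcher — can be sketched in a few lines. This is a hypothetical illustration only: the threshold value, the 1 kHz sampling rate, and the `trigger_ems` stub are my assumptions, not details of the actual system.

```python
HUMAN_REACTION_MS = 240   # lower bound of the human reaction latency quoted in the talk
ONSET_THRESHOLD = 0.5     # hypothetical normalized EMG amplitude meaning "release detected"

def detect_emg_onset(samples, threshold=ONSET_THRESHOLD):
    """Return the sample index where EMG amplitude first crosses the threshold."""
    for i, amplitude in enumerate(samples):
        if amplitude >= threshold:
            return i
    return None

def trigger_ems(delay_ms=0):
    """Stub: in a real system this would drive the electrical muscle stimulator."""
    return f"EMS pulse scheduled in {delay_ms} ms"

# Simulated EMG trace sampled at 1 kHz: quiet, then the releaser's muscle activates.
emg = [0.02, 0.03, 0.02, 0.04, 0.61, 0.80, 0.75]
onset = detect_emg_onset(emg)
if onset is not None:
    relay_latency_ms = onset  # at 1 kHz, one sample = 1 ms
    print(trigger_ems())
    print(f"relay fires ~{HUMAN_REACTION_MS - relay_latency_ms} ms before a human could react")
```

The point the sketch makes is only about timing: the wire carries the release signal in milliseconds, while the catcher's own perception-to-contraction loop needs hundreds.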
Here, my motion and the computer's motion are integrated into one muscle. The question is: can we still feel "this is my action"? If not, it is more like the computer is acting and I have no freedom of my own. But if we can design the assistance the right way, we can preserve the feeling that I am doing this, even though the computer is there.

You can also think about more complex tasks, where AI sometimes outperforms human processing. You may know about the recent ChatGPT developments; at some points they exceed human abilities. I asked GPT to write a grant proposal, and it did a great, great job. So when we think about the integration of computer and human, can we still feel "this is my action"?

It's not quite the ChatGPT situation, but we also ran an experiment on what happens when the computer and I perform an action together, even with decision-making involved. I will skip the details, but very simply: we used a simple choice task, where EMS also helps you act. Sometimes the human makes a mistake, and sometimes we intentionally apply EMS driving the correct or the incorrect motion. In the end, this graph shows that how much we feel "this is my action" is biased by the result. When the outcome was correct, we are more likely to attribute the success to ourselves; when the task failed, we are more likely to blame the system — "this is not me; it's because of the computer." This is human nature, but when we think about human-computer integration, we really have to care about these things.
Of course, we can apply this to robots too. For instance, think about playing ping-pong through a tele-operated robot system: it's very hard, almost impossible, and the user may feel it's pointless. We can also build an automatic robot that hits the ball back by itself, something like this. And because we can do that, we can imagine integrating the automatic motion and the human motion together. If we design it that way, the player can still think, "oh, this is my action."

So when we look at computer-human integration in action, we have to think about how to design systems that keep the attribution of self. I won't go deeper, but there are several factors for keeping the sense of agency — the feeling that "I am doing this." Roughly, it rests on two factors: sensory feedback and intention being aligned with each other.

So that was the story of action. Now let me move on to the body: "this is my body." You know, this is my body — our body is here. But in virtual reality, we can change the visual conditions. This work is from about five or six years ago, a collaboration with Yamaguchi University and Keio University: watching your own body through virtual reality. It's very simple — even just a wireframe. I am watching my own movement in virtual reality. (Sorry, I'll play it again.) And something happened: he is in virtual reality, moving around, and he says, "I feel a bit lighter, in better condition." I didn't do anything physical — no injection, nothing.
We only applied a visual change to his motion: instead of showing the actual body movement, we showed a slightly anticipated, predicted motion. And he said, "my body got lighter." The important point is that he doesn't know the motion was intervened — he just feels lighter. You can easily imagine the opposite, too: if we apply a delay, we can make you feel hungover even though you're not drunk. Or even if you are physically heavy, we might make your body feel lighter through virtual reality.

The interesting point of this project is that we can control the sense of the body through vision alone — we can tweak how you feel. A change in how the body is rendered produces a change in the sense of the body. That is a great potential of the metaverse.

Now let me turn to another potential of the metaverse: new bodies beyond "one mind, one body." Here is a robot arm. We can imagine using a robot arm like this and feeling, "this is my arm." And since it's a robot arm, we can copy it: I can have two arms and do things in parallel. To explore this idea, we tried parallel ping-pong with these collaborators. The idea: what if we have two bodies — robotic arms — at two tables, controlling both arms and playing ping-pong at two tables at the same time? Of course, you might wonder — our conscious mind is just one, and our attention is limited. This is true.
But with some technique to split or switch attention correctly, we can actually make people feel, "I am playing ping-pong with two arms at the same time." So those are explorations of having two arms.

These experiments also raise the question: can we adapt to multiple bodies at the same time? Right now I'm using my own body, so I understand my body. But if we have multiple bodies, we need to adapt — to learn to use each individual body. To investigate this, we did a simple experiment using visuomotor rotation, called the Parallel Adaptation project, with my collaborators.

It's a very simple experiment. Imagine wearing a head-mounted display and doing a reaching task with your virtual reality hand. At the same time, we gradually rotate your displayed movement away from its original direction. Even when you move straight, your motion is displayed slightly rotated to the left or to the right. If we apply this rotation very gradually, as you can see here, we actually adapt to it even without knowing it. This is called implicit adaptation.

Another important point: at this moment here, we intentionally cut off the rotation. You see an error appear, and it does not go back immediately — it takes some time, because our internal model has been updated and needs a bit of time to return to the original. That is the usual adaptation to rotation.
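The gradual rotation just described can be sketched as a per-trial schedule plus a 2-D rotation applied to the displayed hand. The step size and cap below are illustrative values I chose, not the parameters of the actual study.

```python
import math

def rotate(point, degrees):
    """Rotate a 2-D hand position about the reach's start point (the origin)."""
    r = math.radians(degrees)
    x, y = point
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

def rotation_schedule(n_trials, step=0.5, max_degrees=20.0):
    """Ramp the rotation up gradually — small enough per trial that
    adaptation stays implicit — capped at a maximum angle."""
    return [min(t * step, max_degrees) for t in range(n_trials)]

schedule = rotation_schedule(60)
true_hand = (0.0, 1.0)                       # a straight-ahead reach
displayed = rotate(true_hand, schedule[10])  # on trial 10 the displayed hand is 5 degrees off
```

The key design choice is in `rotation_schedule`: each trial's increment is below what the participant can notice, which is what keeps the adaptation implicit rather than a conscious correction.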
But when we think of two bodies, we can also think of two rotations: can we adapt to both rotations at the same time? We tried it in virtual reality. Sometimes you see one body whose motion drifts slightly to the left, and another body whose motion drifts slightly to the right. If you do this in the usual first-person view, it's impossible to adapt — you can see many errors here, meaning no adaptation. You would have to reason, "this body goes left, so I must aim right," which is very cognitively demanding. But if you do it from a third-person viewpoint — you see one body from slightly to the right in third person, and the other from the other side — surprisingly, people can adapt to both rotations at the same time. Even more surprisingly, if we erase the rotations intentionally, there is an after-effect: errors still appear for a while, because the internal models have adapted.

So the conclusion: in the first-person view we cannot, but if we apply the rotations in third person, somehow we can implicitly adapt without knowing it. I think this is important for multiple bodies, because we don't consume cognitive resources to adapt to the bodies. We don't need to think; we can do whatever we want with the bodies and adapt to them implicitly. That is a potential of the metaverse and virtual reality.

For the last project, let me touch on "this is me." OK, a last quiz. This is me, see? This is me without glasses — still me. Do you think this is me? Maybe yes. Do you think this is me? OK. But actually, the original was something like this. When we see the change directly, we can clearly notice it.
But we are much less sensitive to visual changes when there is a visual occlusion — it becomes very hard to detect them. We can apply this idea, together with recent machine-learning techniques, to generate slightly, gradually morphing images. Recently, we found a visual technique to change a face without people noticing it.

So what if this happens to my face? We tried it in what we call the Morphing Identity experiment. It is based on a combined experiment and exhibition — I'll show the video. Two people come to a kind of photo booth and have their pictures taken. After that, they can talk to each other over video. The important part: the two faces gradually morph into each other, as you can see here. Now it's about the midpoint between the two faces, and finally they become completely swapped. This gives us a chance to investigate the boundary of the self: do you think this is you or not?

About the implementation: there are several technologies behind it. We created a GAN-based morphing system — producing an interpolation between the faces, something like this — and we can animate these faces in real time, so it's a deepfake-type technology. You see this face gradually becoming me.

We put this system into a public exhibition and collected data on how much people felt "this is myself" or not. I'll skip ahead a bit. The key question: when does a participant start to feel, "oh, this is not me"?
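The two ingredients here can be sketched very simply: an interpolation between two face representations (the actual system used a GAN-based morph; the plain feature vectors below are a stand-in), and a read-out of the self boundary as the first morph level at which a participant reported "this is not me." Both functions and all values are illustrative assumptions, not the study's implementation.

```python
def morph(face_a, face_b, t):
    """Blend two equal-length face vectors; t=0 is fully A, t=1 is fully B."""
    return [(1 - t) * a + t * b for a, b in zip(face_a, face_b)]

def self_boundary(responses):
    """responses[i] is True if 'not me' was reported at morph level i/len(responses).
    Returns the morph fraction at which self-identification first broke."""
    for i, pressed in enumerate(responses):
        if pressed:
            return i / len(responses)
    return 1.0  # never reported 'not me'

face_a = [0.9, 0.1, 0.4]  # stand-in feature vectors, not real face data
face_b = [0.1, 0.9, 0.6]
halfway = morph(face_a, face_b, 0.5)  # the "middle of the two faces"
# A participant who still felt "this is me" through 20% morphing:
boundary = self_boundary([False, False, True, True, True, True, True, True, True, True])
```

Sweeping `t` from 0 to 1 while recording responses is exactly the kind of data the exhibition produced: a per-person morph fraction where the face stops feeling like one's own.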
We asked participants to press a switch whenever they felt "this is not me." That way, we get data on when they stop feeling it is them — the boundary of the self. We collected the data and found that there is indeed a boundary of the self in this morphing matrix. For example, at 20% morphing the face has actually changed, but people still feel, "this is myself."

We also found some interesting phenomena. There seems to be a gender difference — in our data, males tended to have a broader self boundary and females a smaller one — and this graph shows that as you get older, your self boundary gets larger. You might wonder why this happens.

So there can be a contour of the self — a boundary of the self — inside a computer system, and we can investigate how this contour affects our communication and interaction, in the metaverse and in actual communication. That's a project I did recently, and it has some continuations.

Now, back to my topic. That is the current state of cybernetic humanity. But it's not only that: you may know about the many recent advances in technology, and new types of bodies are arriving through technology. We also have to think about the science of this new human-computer-integrated situation. That's why I recently came here as a visiting researcher: I want to explore the science of human-computer integration. I was looking for an opportunity to do this research.
And then I found: maybe this is the place. So I thank OIST for having me as a visiting researcher. I recently started in a lab space in Lab 5, in what we call the Cybernetic Humanity Studio, and some of the demos are actually running there. Again, I want to thank my team at Sony CSL — they are brilliant, and without them we could not do anything. That's the end of my talk. We have a demo in Lab 5 after the public lecture, so please sign up and come. That's all. Thank you so much.

Thank you very much. We have time for a couple of questions from the audience. Please use the microphones for the translator.

Thank you for the super nice presentation. I have a question about the parallel adaptation study. You said that by changing the point of view, participants more easily adapt to the new body schema, let's say. But I'm curious: is it because of the difference between first-person and third-person view, or just because of the difference in the angle of the viewpoint? When I imagine doing some manipulation while looking in a mirror, I feel it's sometimes difficult to adapt — that's a third-person view, but still difficult. Have you tried and investigated more different angles, not just left or right? Thank you so much.

Thank you — good question. The underlying question is what kind of situation, what perspective, gives us the sense of dual bodies, dual coordination — the cues for having a body. Honestly, in the experiment we found this phenomenon when comparing first person and third person, and third person worked well. But we are actually investigating other conditions that make this parallel adaptation happen.
And we have roughly found that third person is not the only way to make it happen. If we can design spatial cues that create a context for the movement and the body, we might be able to produce the same parallel adaptation. Another important point — this is a working hypothesis — is that if we explicitly recognize the rotation, the adaptation will not happen, I guess. If you start thinking, "this body goes this way, that body goes that way," the adaptation doesn't happen. It only happens when we adapt implicitly, without any knowledge. I'm still working on this hypothesis, but thank you for the great question.

Thank you for the response.

Another question from the audience? I think there was another hand over there. So we have one, two, three.

Thank you for a fascinating presentation. I wanted to ask about the sense-of-self section, which raises so many interesting thoughts about potential uses — in training, in teaching, in many areas. I was thinking about the point when — really, what does it mean when you say, "that's not me anymore"? Your gestures, I suppose, are all yours, but the image you're seeing is somebody else. Maybe this goes beyond the scope, but have you thought about the impact when somebody thinks, "oh, it's not me anymore" — that sense of disconnection from what you're looking at? It seems to have very interesting psychological implications. I wondered if you could talk a little bit about that.

Thank you — that's actually a great question, and it's why we wanted to investigate the sense of self. Let me talk about it a bit more precisely.
In the Morphing Identity experiment, you see your face, and only the visual identity gradually becomes another person — the movement itself is always real time. So in terms of movement we stay embodied in the face, but the identity goes away. Interesting phenomena happen. In the very beginning, "this is still me" — OK. In the middle of the morph, "I'm moving with another identity" — still OK. The interesting thing happens in the opposite, swapped phase: I am moving the other person's face, and I still feel "this is my motion," but the identity is not me. And on the other side, the other person is moving my face, so I feel my identity has gone over to the other side, while my real-time sense of self stays here. You can think of this as different levels of the self, and in this context those different levels come into conflict with each other, which raises genuinely interesting questions. OK, thank you.

So we have two more. Let's make them quick questions and quick responses. Please.

Thank you so much for this very inspiring presentation — I really enjoyed the demos I got the chance to try earlier. I was wondering something about this research, and I apologize if you mentioned it and I missed it: did you try to look at the impact of gender — mixing genders in the pairs, male with male, male with female, and so on? Thank you.

Honestly, we didn't run that kind of controlled experiment, because the data was collected from the exhibition. But those kinds of gender differences might definitely impact the results, I guess. So the short answer is unfortunately no, and I think it would be interesting.
Pass the microphone, please.

Thank you for your stunning presentation. I'm interested in the word in your research: cybernetic humanity. "Humanity" has broad meanings — cultural, political, societal. How do you incorporate the diversity of the meaning of humanity into your research? Do you have a particular scheme or framework for that?

Thank you for asking this question — that is something I wanted to convey in this talk. What I'm doing right now is mostly about individual humanity, the individual sense of self. But when we think more broadly about computer-human integration — computers and human technology integrated — I think we have to think about the social dimension as well. I was wondering what the best word is to describe not only the individual, and not only society, but the many connections between them — that whole range. So right now my research mostly focuses on the individual, but in the collaboration with OIST, for instance, I want to expand this notion in a more diverse way. That's my short answer; thank you so much.

OK, great. Let's thank our speaker one more time. Thank you, Shun.