So our next speaker is Anatole Lécuyer. I hope I'm pronouncing the French all right. He is a director of research and the head of the Hybrid team at Inria Rennes, and he has been leading research on virtual reality for the past 25 years. His main research interests are in virtual and augmented reality, haptic interaction, 3D user interfaces, and brain-computer interfaces. He has been involved in numerous international collaborations, such as the European GuestXR and NIW projects. He initiated and still leads the OpenViBE open-source software, and co-founded the Mensia startup company. He has authored more than 200 scientific publications and holds 15 patents. He served notably as program chair of the IEEE VR conference and associate editor of the IEEE TVCG journal. He received the Inria / French Academy of Sciences Young Researcher Prize in 2013, and was inducted into the IEEE Virtual Reality Academy in 2022, last year. And this year he is here with us at OIST, visiting. Thank you very much.

Thank you very much for the introduction, and thank you for being here. It's a bit late, Friday evening; I know the beach is calling you right now, and there are nice areas around with lots of nice restaurants, so it's very kind of you to stay for this very late event. I'm also very grateful to OIST for hosting me in this TSVP program. I've been here for a bit more than one month now, and it has already been very fruitful in terms of exchanges and discussions. I've talked with many people from different scientific fields, of course from closely related fields, like Tom and Chen, and it's a nice coincidence that we're here together, so we can start collaborations. But I could also talk with specialists of coral fishes and octopuses, and these were super interesting and fruitful discussions as well. So I really believe that this TSVP format at OIST is very valuable.
So of course I encourage people to submit and come here; it's been a pleasure. I'm in the middle of this journey, and I'm pretty sure the second half of this stay is going to be even more fruitful. OK, so the idea today in this public lecture is to present the ongoing research topics in our community of virtual reality, showing the current trends and the different paths being taken, to illustrate what could be the future of our interactions with these digital, virtual worlds and virtual scenes. I'm going to show different examples and different axes of research, hoping to foster your imagination and to foster discussion. Let me start with where I'm coming from. I come from the far west of France, the Brittany region, from the city of Rennes. There we lead a research activity on virtual reality: technology, perception, and applications. We are a group of 40 to 50 people. And we live in this nice city that I invite you to discover: a very different climate, very different food and architecture. In Rennes you can enjoy a more medieval, European-style architecture, and nice gastronomy too; of course, we are in France. Feel free to come. Another reason to come to our place is this very impressive research center dedicated to virtual reality. Over the past 30 years we have succeeded in building one of the best-equipped and best-resourced centers for VR and XR technologies in Europe, and maybe in the world. There are something like 120 researchers working on VR and XR technologies in Rennes, so it's quite a large critical mass for this topic. For instance, this Immersia room is quite impressive: it's one of the largest CAVE systems in the world, where you can be immersed using glasses instead of VR headsets.
This way, you can still see your body, which is a bit different from using an HMD, which cuts you off completely from reality and from your real body. The research path in our group is clearly oriented toward the design of these technologies, the design of 3D immersive and interactive technologies. But even though we are technology-oriented, we really like to work with people from cognitive science, perception, psychology, and neuroscience, as this kind of knowledge inspires the design of our technology. And in return, we consider that these technologies can become an effective tool to study human perception and human behavior: VR is a very good experimental paradigm in which you can build stable, well-controlled experiments. So in the end, we are reaching hybrid systems combining all these promising technologies, mixing input and output devices: visual displays, haptic technologies (I'm going to show some examples later on), gaze-tracking systems, and even brain-computer interfaces, which we started working on in my team something like 15 or 16 years ago. Today I'm going to explore three main directions of research, trying to show you, and to argue, that the future of VR technologies, the future of our interactions with virtual worlds, is heading toward more and more physical engagement of your body and your actions in the real world, which are then transferred to the virtual world; and at the same time toward more cognitive engagement of the user in this interaction. And maybe an ultimate goal, or a unified scheme for interaction, is to move progressively toward avatar-based 3D interaction with the virtual world, changing the paradigm of interaction, moving from the 2D mouse to this character which would enable you, maybe, to interact with the virtual world in the future. Okay, let's start with the first line of research. I'm talking here about haptics.
So haptic technologies: force feedback, tactile systems, actuated systems that stimulate your skin and your muscles and make you feel like you're touching and feeling real objects, although they are purely virtual artifacts. There are lots of startups and Kickstarters working on such projects at the moment; it's one of the big hypes in the XR community. So you have examples here of exoskeleton gloves and haptic modules; many people are working on this topic. In my team, for the past 25 to 30 years, we have been studying lots of different paths and lots of different technologies: ground-based haptics, wearable haptics, encounter-type haptics, mid-air haptics, et cetera. I'm going to show some examples right now. It's a very exciting field of research, with lots of room for good design ideas. Actually, the problem is so complex; we had this discussion about universal haptics, and whether it is even doable to achieve a fully compelling haptic experience in the end. Probably it's never going to happen. So we have to consider subparts of the problem and address subsets of the technologies, aiming at creating compelling situations and compelling experiences, but probably in a more focused way. Some examples here. On the bottom right is quite recent work, which is very funny: encounter-type haptics, providing haptics on demand, only when you need it. For instance, here there is a little tangible ball, a tangible interface, which comes into contact with your hand only when you are grasping the virtual ball; when you release the ball, the tangible interface goes away. This is encounter-type haptics; many works and labs address this topic at the moment. This, for instance, is a combination of a wearable device and passive haptics: you have a wearable system here which presses on your phalanx while you are touching this tangible object.
And interestingly, it alters your perception of the stiffness of this tangible device. So you can simulate different stiffnesses, different elasticities, of this virtual body, this tangible mannequin here, with the simple trick of pressing more or less strongly at the level of the phalanx using the wearable device. Another kind of engineering effort from our group is to augment tablets. It's a combination of surface haptics and ground-based haptics: the tablet is simply attached to a robotic arm which moves in order to make you feel the relief, the texture, the friction, the resistance of the image, or behind the image. And last but not least, a very popular trend at the moment: mid-air haptics. You can remotely touch 3D "holograms" using these matrices of ultrasound emitters, which remotely produce touch sensations, for instance at the level of your palm. Okay. Interestingly, what we also like to do in our lab is to take a kind of low-tech approach and find tricks, find ways to compensate for the limitations of current technologies. Here is one example, but a very important one for us; we have been publishing a lot on this topic of pseudo-haptics: simulating haptic sensations without any haptic actuation, without any haptic technology behind it, with a passive input device, just a computer mouse, and making you feel the texture and the relief of an image. In this case, we simply play with the speed of the cursor. We artificially decrease or increase the speed of the cursor to make you feel that you are falling into a hole or climbing a bump. It's just an artificial deceleration or acceleration of the motion of the cursor, but it is perceived as a change in the height of the image, and it works extremely well. We have run many experiments on this.
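To make the trick concrete, here is a minimal Python sketch of the idea under my own assumptions (the function names, the gain formula, and the clipping bounds are illustrative, not the published implementation): each cursor displacement is scaled by a gain derived from the local slope of a virtual height map, so the cursor slows down going "uphill" (a bump) and speeds up going "downhill" (a hole).

```python
import numpy as np

def pseudo_haptic_step(pos, delta, height_map, strength=4.0):
    """Advance a 2D cursor by `delta`, scaled by a pseudo-haptic gain.

    `height_map` is a 2D array giving the virtual height of the image
    at each pixel. The gain comes from the slope along the motion:
    uphill -> gain < 1 (the cursor slows, perceived as a bump),
    downhill -> gain > 1 (the cursor speeds up, perceived as a hole).
    """
    pos = np.asarray(pos, dtype=float)
    delta = np.asarray(delta, dtype=float)
    rows, cols = height_map.shape

    def sample(p):
        # nearest-neighbour height lookup, clamped to the map bounds
        j = int(np.clip(round(p[0]), 0, cols - 1))  # x -> column
        i = int(np.clip(round(p[1]), 0, rows - 1))  # y -> row
        return height_map[i, j]

    dist = np.linalg.norm(delta)
    if dist == 0.0:
        return pos
    slope = (sample(pos + delta) - sample(pos)) / dist
    gain = float(np.clip(1.0 - strength * slope, 0.25, 4.0))
    return pos + gain * delta
```

The perceptual effect relies only on this speed modulation: the hand keeps moving at constant speed while the on-screen cursor does not, and the discrepancy is read as relief.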
That was in 2D, but then we shifted to other schemes and other interaction contexts. For instance, here we did something pretty similar to what you found in your, let's say, "hangover effect", right? We call this the pseudo-haptic avatar, because we were changing the motion of your avatar, and similarly to what Sean presented, we found that in this application you would feel that the dumbbell is heavier or lighter depending on the deceleration or acceleration of the motion of your avatar. This could change the way we consider physical exercise in virtual worlds in the future, artificially increasing or decreasing the fatigue of the user. And here are other examples showing how you can trick perception and simulate graphical effects of elasticity, very powerful for conveying the sensation of elasticity of an image; or, on a tablet, simulate different kinds of tactile sensations, like relief and texture effects, or elasticity and friction effects, at the level of the tablet. Okay. From the very beginning, haptics has been considered as a technology to enhance interaction with virtual objects, coming from the robotics field, from tele-operation. Progressively, we are trying to consider completely disruptive usages of haptic technologies to enhance interaction with virtual worlds. For instance, here we tried to change the paradigm and see whether haptic technologies could be used to simulate motion sensations, and not contact with objects. It's a completely different shift in the orientation and usage of these technologies. In this example, we use haptic technology to exert forces at the level of the head. It's a bit dangerous, by the way, but I tried it. I was brave; I tried it.
So we attached a robotic arm at the level of the head, and then we translate the virtual acceleration of the vehicle, here a plane in a flight simulator, directly into a force at the level of the head. So somehow we simulate a motion platform, but at the level of the head. And this is extremely powerful for creating a vection illusion, a motion illusion, and increasing the sensation of flying in this flight simulator. It's extremely impressive. But it's a completely different usage of haptic technologies. And I want to show you this one also, because it's very recent work showing how we can completely shift the usage of a technology. In this case, still in the context of the metaverse and virtual reality, our idea was to use haptics as a means to support social interactions with others. Here we consider speech-to-touch technology: we translate speech, what I'm saying here, into vibration. It's very simple; it's just like putting your finger on the loudspeaker when I speak. If I do that while I'm speaking, you receive the vibration of my speech. What we noticed in this paper is that if you receive the vibration of my speech, you actually gain 15% more confidence in what I'm saying. And conversely, when I'm speaking to you right now, grabbing the microphone, imagine there is a vibrator here which translates my speech into vibration: I will feel 20% more confident in what I'm saying. Yeah, really. It's surprising, and there are probably several interpretations. So this is what we call persuasive haptics, a new way of using haptics to foster interactions between people and increase trust and confidence in the metaverse, for instance during virtual meetings. But it's a completely different way of using and considering these technologies.
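As an illustration of the speech-to-touch idea (not the actual pipeline from the paper, just a hedged sketch with parameter values I chose myself), one can extract the amplitude envelope of the speech waveform and use it to modulate a vibrotactile carrier near 250 Hz, where the skin is most sensitive to vibration:

```python
import numpy as np

def speech_to_vibration(audio, sr, carrier_hz=250.0, env_cutoff_hz=30.0):
    """Map a speech waveform to a vibrotactile drive signal.

    Rectifies the audio and smooths it with a one-pole low-pass filter
    to get the amplitude envelope, then uses that envelope to modulate
    a sine carrier around 250 Hz, near peak skin sensitivity.
    Carrier and cutoff values are illustrative assumptions.
    """
    audio = np.asarray(audio, dtype=float)
    envelope = np.abs(audio)                        # rectify
    alpha = 1.0 - np.exp(-2.0 * np.pi * env_cutoff_hz / sr)
    smooth = np.empty_like(envelope)
    acc = 0.0
    for i, e in enumerate(envelope):                # one-pole low-pass
        acc += alpha * (e - acc)
        smooth[i] = acc
    t = np.arange(len(audio)) / sr
    return smooth * np.sin(2.0 * np.pi * carrier_hz * t)
```

The output array would then be sent to a vibrotactile actuator; the point is that the rhythm and intensity of the speech survive in the vibration, even though the audible content does not.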
Now let's move on to the second axis, which is completely different: how to make you more engaged, in the future of VR, in a cognitive way. So without moving, but with your physiological body and your cognition. Maybe you have noticed that more and more XR systems and HMDs are integrating physiological data, with, by the way, lots of ethical considerations and potential good or bad usages. But this is a trend in our community: to include more physiological sensing, physiological computing, and somehow modeling of the physiological state of the user, accessing the user's physiological data. So about two years ago, together with the French research institute b<>com, we built this physiological HMD, which progressively integrates many sensors: cardiac activity, galvanic skin response (electrodermal activity), ocular measurements, and even fNIRS, functional near-infrared spectroscopy, so brain neuroimaging. With this highly integrated technology, we showed that in a well-controlled scenario, a flight simulator, it was possible to extract different levels of mental workload from the user nearly in real time, something like four different levels of mental workload. And this brings me naturally to the topic of brain-computer interfaces, which are progressively being interlaced, associated, and combined with virtual reality technologies. We started taking this path of mixing neural interfaces and virtual reality back in the beginning of the 2000s. This is the first video that we shot, in 2006 or 2007, together with Fabien Lotte, now an expert in the field. In this very simple scenario, inspired by the Star Wars movies, you use the "force" of your brain to lift the virtual spaceship. That was super motivating, by the way. And we brought this to many conferences and gave lots of demos.
In this case, we showed the compatibility of these two technologies, and also the engaging aspects, as Leah was saying earlier today: it's very engaging for the user to practice brain control and neurofeedback learning, learning to control your own brain activity. Around 2005, we wanted to spread these technologies more widely, and we noticed that there were not many open-source tools on this topic. So in 2005 we started a big research effort, actually a big engineering effort, to make an open-source software available to anybody who would like to start BCI research or BCI development. This is OpenViBE. It's an open-source software which progressively established itself as one of the main software packages used in BCI and, in a broad sense, real-time neuroscience. It's really a success, and something we would like to continue to support and maintain at Inria in my team, to ensure that these technologies can be used by other researchers to conduct research on this topic. Based on OpenViBE, we designed different prototypes illustrating what we can do by combining interactive 3D graphics and neural interfaces. This is an example of a video-game prototype that we designed with the Ubisoft company ten years ago, to show the potential of brain-computer interfaces for controlling video games. It's very basic: a Space Invaders-like video game. You have to concentrate on one of three different targets, which are flickering at different frequencies: five, seven, and eight hertz. If you monitor the brain activity in the visual cortex, you can notice a peak in the electrical activity at the frequency you are looking at. So if we find five hertz, it means that you are looking at the left wing, the target flickering at five hertz, and we trigger the "go left" command.
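The frequency-tagging detection just described can be sketched as follows. This is a minimal, assumption-laden version (the window, the number of harmonics, and the nearest-bin scoring are my own simplifications; real SSVEP pipelines typically use calibrated classifiers such as CCA): compute the power spectrum of the occipital signal and pick the flicker frequency with the most power.

```python
import numpy as np

def detect_ssvep(eeg, sr, target_freqs=(5.0, 7.0, 8.0), harmonics=2):
    """Guess which flickering target is attended from occipital EEG.

    Sums spectral power at each candidate flicker frequency and its
    harmonics; the frequency with the largest summed power wins.
    """
    n = len(eeg)
    # windowed power spectrum of the (single-channel) signal
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)

    def band_power(f):
        total = 0.0
        for k in range(1, harmonics + 1):
            idx = int(np.argmin(np.abs(freqs - k * f)))  # nearest bin
            total += spectrum[idx]
        return total

    scores = [band_power(f) for f in target_freqs]
    return target_freqs[int(np.argmax(scores))]
```

With targets at 5, 7, and 8 Hz, the winning frequency maps directly to a game command such as "go left", "go right", or "shoot".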
If we find eight hertz, it means that you are looking at the cannon, so we shoot with the cannon. And you see it's very effective; it goes very fast, the detection is very rapid, and you can control this game using this very basic idea. The same idea was applied in the context of augmented reality. This was the first connection between a Microsoft HoloLens, an optical see-through augmented reality system, and a neural interface, to control a mobile robot using the same approach: following a little path on the ground with the same flickering targets and a menu to go right, go left, and move forward. But in my opinion, what is most interesting in BCI is not to control something, but rather to monitor the brain, detect different mental states of the user, and then adapt the interaction to the mental state of the user. This is, in my opinion, a bit more promising, as control with BCI is not super reliable yet. Here is an example, a proof of concept of a training simulator which adapts to the mental workload of the trainee. So if your workload is going too high, because it's too difficult to follow this presentation, for instance, then I would be informed about it and I would try to slow down, right? For the interpreters, that would be super great: a semi-automatic warning regarding their mental workload. So in this case, we have these assistances, this guidance, which are activated when the workload is too high, okay? Also, in the same line of research, in a more recent paper we've shown that it's possible to detect very rapidly in the brain when you notice an error in the system. So here, for instance, you are interacting in VR and something happens which should not happen: the virtual object is frozen, which is actually an error that happens many times in VR; you have a tracking error.
And we found that when something like this happens, your brain reacts, and we can detect it nearly in real time. Meaning that you can, for instance, launch an undo command, an automatic undo, or you can flag the data at this moment in your experiment. This is, in my opinion, a good path for investigating the potential of this technology in the future. If you are more interested, we recently released a state of the art on affective and cognitive VR that you can check. And the last path, the last axis; I hope I'm okay on time, okay. So now let's consider a unifying principle of, let's say, avatar-based interaction. I'm talking here about this notion of the avatar, your anthropomorphic representation in the metaverse, in the virtual world: this virtual human or non-human creature which is going to represent you in the virtual world. It's a very teasing notion. There is a lot of work in the community at the moment on this topic, both on the technology and on the perception and psychology associated with this notion. In our case, we study both aspects; we really believe it's a fascinating approach. So here, for instance, we wanted to check how far you can go in the virtual embodiment of a virtual body, sometimes even a very dysmorphic or different body, right? This is what we call the six-finger illusion. You put on the headset, you look at your hand, and you see a very realistic hand, similar to your real hand, moving according to your motions, but there is something a little bit different. I guess you have noticed: there is this extra finger, a sixth finger, which is added in the model. And interestingly, we proposed a conditioning method to make you feel, haptically, the sensation of owning this finger. To do so, we have a virtual brush here, which brushes each of your fingers, including the sixth finger.
And in the real setup, an experimenter is brushing your real fingers on your real hand. Of course, there is a trick: when the virtual brush is brushing the sixth finger, the real experimenter is actually brushing this finger here, in a very synchronous way. So you feel a tactile caress, a brushing of your real finger, and at the same time you see the virtual brush passing over your sixth finger. And very rapidly, in something like two brushes, you start to feel that you have a sixth finger. It's very stable, very robust across users. And very strangely, when you go back to five fingers, which is the final step of the demo, when we change the six-finger hand back to a five-finger hand, people usually feel that one finger has been cut off. It's a very strange feeling, showing the very fast appropriation of this extra finger. This is, in my opinion, revealing how far you can go with these illusions. But we went even one step further. This was presented by other presenters today, because it was joint work with the University of Tokyo, the PhD of Rebecca: virtual co-embodiment. This is another very fascinating situation showing how far you can go. We put two users in the same body, the same virtual body, of course. These two people share the same avatar, and what each of them sees from a first-person perspective is a virtual body which reacts to their own motion, but also to the motion of the other user. And we can play with the weight of this motion: sometimes it's 50-50, meaning you are completely sharing the body, but you can also have 25 or zero percent, meaning that you are a prisoner of this body, which is controlled by somebody else.
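The weighting scheme can be sketched as a simple pose blend. This is a toy illustration under my own assumptions (a linear blend of joint parameters; real avatar systems would interpolate joint rotations with quaternion slerp rather than linearly):

```python
import numpy as np

def co_embodied_pose(pose_a, pose_b, weight_a=0.5):
    """Blend two users' poses into one shared-avatar pose.

    `pose_a` / `pose_b` are arrays of joint parameters (e.g. positions
    or Euler angles) captured from each user. weight_a = 1.0 gives
    user A full control, 0.0 makes A a "prisoner" of B's motion, and
    0.5 shares the body equally.
    """
    pose_a = np.asarray(pose_a, dtype=float)
    pose_b = np.asarray(pose_b, dtype=float)
    w = float(np.clip(weight_a, 0.0, 1.0))
    return w * pose_a + (1.0 - w) * pose_b
```

Sweeping `weight_a` between 0 and 1 reproduces the conditions described above, from full agency to being a passenger in someone else's body.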
And interestingly, we found that people could adapt very well and accept this very strange situation, and could even reach good synchronization of their own motions to perform some very complex tasks. There is actually a follow-up paper, released recently by the University of Tokyo, showing that this can be very helpful in a training context. If you need to train for dancing, for instance, or to learn a gesture, then your trainer or your professor can take control of your body and show you which gesture to do, right? And this was found to be quite effective. Takuji can say a few words about it later. Okay. Interestingly, avatars are not restricted to virtual reality, and that's one of my last slides: these technologies are progressively moving from the virtual stage to real life, to our real surroundings. We've seen some nice projection-mapping applications this morning, and it's the same idea: avatarization could also happen here in this room with projection, for instance. And you could play with your body, disguise your body, change your real body in a digital way using, for instance, augmented reality. These are two examples of recent works from our group where you can change your real hand into a virtual robotic hand, while the rest of your body isn't changed and the real world remains around you. Or you can send out your avatar, an avatar of your real body; it's a bit static here, but the idea is to explore some dangerous location in front of you. For instance, you can send out your avatar to check the climbing path before you do the real climbing. So this is how avatars are progressively moving toward augmented reality and real life.
Okay, and to finish, one illustration of the kind of applications we are pushing and believing in. In this impressive context of colossal economic investments by key players, investing a lot in entertainment, telework and so on, we believe in our research institute that we should promote applications with a strong societal added value in this field of the metaverse and virtual reality: for instance education, training, sports, cultural heritage, and in this case, medical applications. For three or four years now, our group has had a full-time researcher who is also a medical doctor, and with her we are co-designing VR and AR technologies for patients. One example: during the COVID crisis, we designed a medical application based on avatars and virtual embodiment for COVID patients waking up from a coma in intensive care units. This is actually the first time, if I'm correct, that VR technologies were brought into an intensive care unit; it's very complex to do. You see here, this person has just woken up from a coma. A few hours after waking up, these people are very weak, unable to eat by themselves, unable to walk. In this case, they are offered an HMD, and they can see themselves inside the body of an avatar very similar to them, walking, virtually walking, for 10 minutes, while they are actually lying on their bed. They are lying on their bed, but they see themselves walking. And they do that for 10 minutes per day during 10 days. Our clinical hypothesis is that, using this system, these people will somehow start their rehabilitation earlier and will be able to walk again faster. It's an ongoing clinical study, a very complex one, but the results are very promising.
Okay, so to finish: the future of this path is probably leading to natively hybrid technologies, very integrated technologies mixing different kinds of sensors, physiological computing and neuroimaging, progressively, together with visual displays and haptics. And clearly there are lots of applications, and we are pushing for strongly positive societal applications like medicine, sports training, and education, as I mentioned. Let's see the future, right? To finish, let me mention this project, which has just started. It's a very interesting, intriguing initiative: a European project with lots of European partners working on the future of social interactions in the metaverse. The idea is to find techniques to promote good social behavior and positive social interactions, and to prevent cyberbullying online in the metaverse. There are lots of interesting techniques to explore here, and we are working on haptic techniques for that. Okay, this concludes my talk. Thank you for your attention. I'll be happy to answer your questions. Thank you.

Thank you very much. We're almost out of time, but we have a couple of minutes for questions, feedback, and comments from the audience. You can talk about octopuses if you want, or coral fishes. Right there, okay.

Thanks. The first question is, as I said this morning, the most difficult one. Thank you very much for this first question. Okay, sorry. I was wondering, do you see much application in education for this type of technology?

Yeah, definitely. This is clearly a very good medium for education. It's super engaging, super motivating. You can visualize phenomena; you can build practical systems and tools to learn in a different way, or in a way complementary to traditional tools. So there are lots of initiatives on e-education based on XR technologies. Of course, there is no need to rush into VR, right? You have to be very careful not to undermine the ultimate goal of the pedagogical process.
In this kind of project related to education, you need to make a proper analysis of the situation with pedagogical experts, to see what the potential benefits of the immersive experience are for the education and the pedagogy. But clearly there are lots of things to do here.

All right, thank you very much. One more. The second question is not always easy. Okay, so this one is about the persuasive haptics, which I found very interesting. We're thinking so much about the differences between virtual interaction and face-to-face interaction after the pandemic, and I wrote down one of your comments, when you said that it could increase confidence in virtual interaction. My human reaction was that I think it might decrease my confidence, because it's capable of manipulation. In the example you were showing, somebody's argument might become more persuasive, and that I found rather concerning.

Totally, I totally agree. As often with these technologies, there is the bright side and the dark side: the dark side of the force, right? The dark side of VR. There have been several papers on this topic; they are super interesting to read, to be aware of this. And indeed, with this result, I also immediately thought about potential bad usage. Imagine a politician equipping the ground here with strong vibrators: it would increase your confidence in what I'm saying, right? Or a political debate where only one person is providing haptics; it would increase the weight of what they are saying. Some lectures finish like this, with people knocking on the tables. So clearly there are potential bad usages. When we published this paper, we had this in mind too, and we were wondering among ourselves: can we publish everything? Can we publish this kind of information, which could be used in a bad way? But our motivation was also to inform, and to raise awareness about this.
So indeed, now we know that this is doable, and we know that there are potentially good usages; and I guess we're going to promote the good usages of this technology. But we also know that there are bad usages, so we need to find ways to prevent them, if you don't want to be influenced in a bad way. This could consist, for instance, in being able to cancel or turn down a vibration, but that's not always possible. I was thinking of Kuta, I don't know if Kuta is here, but we'd like to try this on the ground of Kuta's basketball field; everybody would feel more confident. Yeah.

All right, thank you so much. Thanks. Okay, let's thank our speaker one more time. Thank you.