Thank you so much, Hannah. Good afternoon, everyone. Thank you so much for being here, despite the awesome weather that we have today. So today, as Hannah already said, we're going to talk about humanoid robots and the phenomenon of the Uncanny Valley. Here you see a couple of robot images. Some of them are supposed to creep you out. Especially the one on the left does creep me out. They all look human, but they are missing something that we can immediately recognize.

I'm going to start this presentation with a couple of questions. Let's look at these two robots that are quite similar in terms of design. Which of them do you find easier to look at? I don't know if it's the same for you, but the one on the right seems a bit off. What about these two android robots? Which of them is a bit eerie? Which of them is easier to look at? Again, the one on the left comes much closer to a real human. It looks more realistic, more natural, whereas with the one on the right, something again seems to be off.

Now, these were just some images. I want to show you some videos. "Hello there. May I ask your name?" "My name is Ben." "My name is Erica. It's a pleasure to meet you, Ben." "You too." "Would you like to hear a little about me?" "Yes, please." "I was created to be the world's most advanced and most beautiful, fully autonomous android." "Sitting here with Erica feels a bit disconcerting and unnatural. I know she's not a person, but I can't help looking into her eyes, which must be because she looks human. Erica's facial expressions are created by dozens of pneumatic air cylinders. They act like muscles, embedded beneath her silicone skin."

Very similar to a human, but did it move like a human? Absolutely not, right? As soon as the robot started moving, we again had that feeling of uneasiness. It sounded like a human, but it didn't necessarily move like a human. Now let's watch another robot that is not necessarily very similar to humans, but moves much better than Erica. That was supposed to be the most expressive robot currently developed in the world, the Ameca robot. The facial expressions came very close to a human's, but I still don't feel at ease watching this robot move.

So all the feelings that we've been experiencing so far fall under the effect of the Uncanny Valley. They are termed uncanny because there is something that does not quite click for us as humans. The Uncanny Valley hypothesis tells us that, generally, when an object, a doll, a mannequin, a robot, an animated virtual human, comes close to a human, it becomes more real, it looks more real, it behaves more real, and we start having a positive reaction to it, but only up to a point. That point is where it becomes too creepy. It's almost human, but not yet human per se. That's where the response becomes negative. We no longer have a sense of acceptance or likeability toward the agent that we are watching.

I showed you images of the robots, and I also showed you videos of the robots, and I expected you to find a difference between the feelings that you experienced there, because the Uncanny Valley hypothesis posits that for static images the valley is normally weaker than for a moving agent. There is something about biological motion that is very important for the human brain, and if something looks human but does not move like a human, you get a stronger negative response toward it. Now let's talk about the history of the Uncanny Valley.
The term Uncanny Valley was coined by a Japanese roboticist, Masahiro Mori, in 1970, when he wrote about this negative feeling that people experience as they start interacting with or watching objects that look like humans. On the x-axis of this hypothetical curve, what you see is the similarity to a human. That's the original curve that he put forward in his notes at the time. On the left side of the x-axis is machine-likeness, and on the right side is human-likeness. On the y-axis, however, what he used was shinwakan, a word in Japanese that we don't have a direct translation for, but what comes close to it is familiarity, affinity, or, the term that is used these days, likeability. He then described that, as you show viewers objects that are more and more similar to humans, there will be a point at which they enter bukimi no tani. Tani means the valley here, and bukimi is the equivalent of uncanniness, eeriness. So that was the original hypothesis.

However, this was just based on Mori's observations, his own anecdotal examples. Since then, the question has been: do we have empirical evidence for this? Can we find the same thing with research? A study that was done in 2016 collected 80 real-world robot faces and had people rank these mechanical- and human-looking robots according to their realism, their human-likeness. They would basically recruit a group of participants and ask them: can you rate these robots on how much human-likeness you perceive from them? Once they had ranked these robots according to human-likeness, they showed them to another group of participants in a follow-up study, where they had to rate their perception of likeability. On the x-axis you see the face numbers, from the very mechanical-looking robots to the very human-looking robots, and on the y-axis you see the likeability rating that was given to them. We do see a small valley effect here. It's not as strong as Mori hypothesized, but there is at least some decrease in likeability as the robots that people watched moved from mechanical-looking to human-like.

Then they did something interesting: a behavioral test with the participants, where they played an investment game with the robot. Participants were told that they were supposed to play an investment game with the robot they had just watched. The investment game goes like this: an amount of money is given to you, you can decide what part of it you want to give to the robot, and then the robot decides to, for example, triple it or double it and give it back to you. So the question was: do people trust the robot they have seen with their money? Do they believe that the robot will be trustworthy enough to give their investment back? And again they replicated a similar curve. It's not as strong as the valley they saw with likeability, but still, with the very human-like robots people were willing to invest, and with the mechanical-looking robots people were willing to invest; it was just those robots that were not yet human enough that people did not trust much.

Okay, so the question is: is it only robots that give us such a creepy feeling? So far we have seen that there is some empirical evidence; from study to study it differs how much of the valley's strength we see in the responses. But the testbed these days for Uncanny Valley research is actually digital humans.
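As an illustrative aside: Mori never specified an equation for this hypothetical curve, so the short sketch below simply reproduces the familiar shape with an assumed functional form, a rising affinity with a sharp dip just before full human-likeness. The constants are arbitrary.

```python
# Purely illustrative: the functional form and constants are assumptions,
# chosen only to reproduce the shape of Mori's hypothetical curve.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)  # x-axis: machine-like (0) to human-like (1)
affinity = likeness - 1.4 * np.exp(-((likeness - 0.85) ** 2) / 0.003)  # rising trend minus a "valley" near almost-human

plt.plot(likeness, affinity)
plt.axhline(0, color="grey", linewidth=0.5)
plt.xlabel("Human likeness")
plt.ylabel("Affinity / likeability (shinwakan)")
plt.title("Hypothetical uncanny valley curve (illustrative)")
plt.show()
```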
With the advancement of computer-generated faces, we have lots of agents around us for which we can actually try to understand the effect of the Uncanny Valley, how it is produced, why people respond to these faces the way they do, and what we can do to prevent it. That's the most important question for us: what can we do to prevent that feeling of creepiness and eeriness when people work with these agents? For example, here what you see is an agent developed by a well-known company, Soul Machines. The agent is called Mike. It's a replica of a real human. However, again, as the agent talks, although the appearance is really, really human-like, we as observers can tell this is not a real human. Something about the movement, the eyes, the skin is off and gives it away.

And this is a very important thing for the motion picture industry and for the healthcare industry, because we are seeing that these agents are becoming more frequently used in different training and healthcare scenarios. For example, here on the top left you see two agents that are used in mental healthcare. They can represent a real human with whom you can converse, discuss your problem, and receive some recommendations. On the right side you see an agent being used in customer service. And at the bottom you see these virtual agents being used in the education of nursing students.

And then it goes beyond that. They are also widely used in the film industry, in the entertainment industry, with the very early avatars, digital humans, being used in animated cartoons and animations. We all remember the movie Shrek from 2001, with Princess Fiona being very human-like for the technology they had then. Apparently, the story goes that Fiona was even more human-like than she was in the final cut of the movie. However, when they piloted the movie with some children, some of them started crying because the character was too human-like and it scared them off, so they had to reduce the level of human-likeness of the character. As time went by the technology improved, and these days we see lots of movies that use animated characters. And then it comes to this 2019 movie, Alita. Some of us may have watched it. In my view, the Alita character went beyond the uncanny valley. We were able to watch the movie without feeling that creepiness that would have kept us from watching further. As you could see, the fine-grained details of the face were simulated really well in this case, and that meant that the movie could be a success, because the characters were not repulsive to the viewers. So I think the entertainment industry has probably benefited the most from the advancement of these digital humans, and that's where most of the investment goes.

But then I also think it's not just robots and digital humans. This is a famous video that went viral, where the facial expressions do not match what would normally be a human-like biological emotion. This sometimes happens with real humans as well, mainly because the emotion is not genuine, or because the context does not match the response. In that situation, the facial expressions and the movements are not a match: one part of the face responds in one way and another part responds in another way. The best example is smiling. They say a genuine smile is one in which the eyes and the mouth go hand in hand.
So if you are making a smiling action without your eyes actually smiling, that comes across as an uncanny facial expression.

Okay, so that's where we are: different examples of the uncanny valley and why it's important to study it. The question now is: how can we measure and investigate the uncanny valley? And beyond the measurement, what lies at the foundation of it: how can we explain the mechanism that drives this? What is the explanation there? There are a couple of theories that have been developed over time to explain why the uncanny valley happens. As it stands, some of these theories are just theories, without empirical evidence; some of them have already been more or less validated with empirical research. I'm going to tell you the idea behind each of these theories, and then we're going to look at some of the studies that have been done in the past.

Evolutionary theory tells us that the uncanny valley is an effect of our evolved response to living beings, of our wanting to distinguish between threat and non-threat. If a human is, for example, physically damaged, or if there is something off about their emotions or intentions, there will be a mismatch between physiological attributes, or a mismatch between the behavior and the appearance, of that human being. So evolutionary theory says that the response we generate toward these objects that look human but are not yet human arises because we are so strongly tuned to differentiate a friendly, low-risk human from a threatening, risky object around us. A good example of this is the response you show to corpses, or to zombies in movies: these signify illness and carry a signal of death.

The categorization ambiguity theory tells us that our brain wants to put things into categories. When I see something for the first time, I want to tell whether it is a living being or an object, and with these uncanny, almost-human-but-not-human objects, our brain is confused. We cannot really put them in one box, and that's where the negative response arises.

The cognitive dissonance theory is based on the prediction that happens in the brain. Our brain is normally tuned to predicting what the outcome of an already categorized or known object is supposed to be. If I see an object that looks mechanical, I expect it to behave mechanically; if I see a real human, I expect it to behave like a real human. If the two do not match, so I see an object that looks mechanical but moves like a human, or the other way around, there is an error in the prediction in my brain, and that results in a cognitive dissonance that conflicts the brain.

The perceptual mismatch theory is very similar to cognitive dissonance, and what it tells us is that the sensory information you gather from what you see needs to be congruent. For example, if you put larger eyes on a face, eyes that do not match a real human eye size, you get a negative response toward that face. There is a perceptual mismatch there.

And then what needs to be remembered is that these theories are not mutually exclusive. It's not as if one of them explains the uncanny valley and the rest can be invalidated. Perhaps the best explanation is a combination of all these theories.
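To make the mismatch idea concrete, here is a toy numeric illustration, not a validated model: the "humanness" scores are invented, and the only point is that the conflict signal is large exactly when appearance and motion disagree.

```python
# Toy illustration of the perceptual-mismatch / prediction-error idea.
# All scores below are invented for illustration; this is not a validated model.
def mismatch(appearance_humanness: float, motion_humanness: float) -> float:
    """Gap between how human something looks and how human it moves."""
    return abs(appearance_humanness - motion_humanness)

agents = {
    "industrial arm":  (0.1, 0.1),  # looks mechanical, moves mechanically -> no conflict
    "jittery android": (0.9, 0.3),  # looks human, moves mechanically -> large prediction error
    "skinned android": (0.3, 0.3),  # mechanical look and motion -> no conflict
    "human actor":     (1.0, 1.0),  # looks and moves human -> no conflict
}

for name, (appearance, motion) in agents.items():
    print(f"{name:16s} mismatch = {mismatch(appearance, motion):.2f}")
```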
Where do I come in in this research? I happened to do my PhD research in a laboratory that works with very human-like robots. The lab developed a lot of human-like machines, human-like robots. Erica, whom you saw, was a product of that laboratory. Some of you might have seen the Geminoid robots in the past; they are replicas of a real human. Telenoid is a minimal human, with minimalistic human-like features, that is expected to work as a teleoperated robot, particularly for older adults. And then there were a number of other android robots that were on the market at the time.

The original work from this lab looked at how android robots can be used as a testbed for embodiment and the feeling of the uncanny valley. The idea, and this is something that was published in 2008 by the PI of the lab, Ishiguro, was that these robots we are developing, which are very close to human, kind of go beyond the uncanny valley. So what you see there, the small robot, is actually a replica of Ishiguro's daughter. That was the first robot he ever built, and it happened to be a failure, because it ended up looking like a zombie; apparently it even scared people in the laboratory. They considered it to have dropped into the valley. The next version of the robot they built, I think in 2006, they considered to be a better robot that had already gone beyond the valley.

Their evidence for that was an experiment they did. They put the robot behind curtains and invited participants to come to the lab. They would remove the curtain for three seconds and then ask people: what did you see, a human or the robot? And you should know that the robot was a replica of a real human, and they required a lot of assistance from the model for their experiment, so in every experiment the actual human was there, and the robot was there too. They repeated this experiment with, I think, at the time, seventy-something people. What they found was that if both the robot and the human sat still and did nothing, especially the robot, with all the machinery switched off, people were able to tell which one was the robot and which was the real human. However, when the android started showing subconscious movements, such as eye blinking, and they also simulated breathing motions, because the robot was driven by pneumatic actuators that act like muscles, so they could create this inhaling and exhaling movement. When they endowed the robot with such subconscious, really micro movements, people were no longer able to tell which one was the human and which was the android. So in that case, they considered that within that three-second observation the robot had already gone beyond the valley: people were not able to tell the difference from a real human.

However, this is the robot we're talking about. Let's look at the movements beyond three seconds. As you can see, there is a lot of jitter in the movement, and indeed we cannot say that the robot went beyond the valley; the claim that was made at the time limited itself to a very short period of interaction and exposure. So another researcher in the lab decided to do a study on the interaction between biological movement, movement in general, and the appearance of the robot. They wanted to know what it is with this robot in terms of appearance and behavior. Remember, the Uncanny Valley hypothesis explains the repulsive, negative reaction both when the agent is still and when it is dynamic.
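Going back to that three-second identification experiment for a moment: the talk does not describe how the result was analyzed, but one plausible sketch is a simple binomial test of identification accuracy against chance, as below. The counts are placeholders.

```python
# Hypothetical sketch: test whether human-vs-android identification beats chance (50%).
# The counts are placeholders, not data from the original experiment.
from scipy.stats import binomtest

n_participants = 70  # roughly the "seventy-something people" mentioned in the talk

conditions = {
    "android fully static":          60,  # hypothetical: most people spot the android
    "android with micro-movements":  38,  # hypothetical: accuracy near chance
}

for label, n_correct in conditions.items():
    result = binomtest(n_correct, n_participants, p=0.5, alternative="greater")
    print(f"{label}: {n_correct}/{n_participants} correct, p = {result.pvalue:.4f} vs. chance")
```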
So in this case, we saw the effect when it was dynamic, and the question was: why is this happening, and where in the brain can we find the mechanism that explains it? They created three conditions in an experiment where a human did a couple of movements, the same human that the robot was a replica of. Then they created the same movements for the android. And in a third condition, they repeated the android movements, but this time they skinned the robot, so they removed the skin of the robot so that it would look mechanical. What they found was that within the human action perception system in the brain, which is where you actually process biological movement, the response that the brain shows toward the mechanical-looking robot and the real human is almost similar. However, for the human-looking robot with the mechanical motion, the response goes stronger. The explanation for that they pin on predictive coding, the same idea we talked about, similar to cognitive dissonance: there was a prediction of motion for a real human, but what I saw was a mechanical movement, and hence the brain is trying to make sense of it, hence more neural activity. Then, in a follow-up study, they also looked at the responses in the frontal area with respect to prediction error. What they found was that when people watched the android with its movement being mechanical, so the real android with the skin but with mechanical movement, the N400 response, which is a response that normally arises when you detect an error, went stronger compared to the mechanical-looking robot doing mechanical movements and the real human doing the real movements.

Then another researcher wanted to know whether this effect goes beyond behavior and appearance. This was a very interesting question that the research team had at the time: what about the attitude and the behavior of the robot, does that affect the uncanny valley or not? They designed an experiment in which two robots, one looking very mechanical and the other one the android, which is human-like, were endowed with personality, and they had a long-term interaction over three sessions. They simulated a situation where the participant was interviewed for a job, and the robot played the interviewer. In one scenario, the robot had a positive personality, trying to make it more comfortable for the participant; in the other, the robot had a negative personality. So in response to a question like "what is your weakness?", the robot would react very negatively and say, "Oh, really? Is that true?", some responses like that. And what they found was that the behavior of the robot actually did impact the likeability of the robot over time, and this was stronger for the mechanical-looking robot. When the robot was positive, people found it likeable over the interactions; when the robot was acting negatively, the likeability of the mechanical-looking robot dropped significantly. This effect, both for positive and negative behavior, was smaller for the Geminoid. So they interpreted this as showing that the uncanny valley for the Geminoid is persistent.
Even over multiple interactions with the robot, even with the impact of the attitude and behavior of the robot, the likeability of the robot doesn't change as much as it does for the mechanical-looking robot. So they argued that when we're talking about Mori's curve, the hypothetical curve, we have to consider the attitude, and probably the valley is still there, but with a positive or negative attitude and personality of the robot during the interaction, where the likeability lies might change. To summarize it: the valley is still going to be there; it's a matter of whether it's a completely negative reaction or a less likeable reaction.

Then that experience I had in the lab translated into my contribution to a project that was done at Tilburg University. It was concluded last December. The project is called Vibe, and the goal of the project was to create virtual avatars that can be used in healthcare. In partnership with Breda University of Applied Sciences and twelve other institutes, the goal of the project was first to create these virtual avatars, using state-of-the-art photogrammetry technology. The idea was: let's create these digital humans that look like a real human, be able to create facial expressions and movements for them, and above all be able to actually implement them in healthcare scenarios, for training purposes and for interaction purposes, and see what kind of perception and response we can gather from different patient groups.

The work that I'm going to present to you was mostly done by a PhD student at the time, who recently graduated, and her part of the work in this project focused on face perception: what is important, with respect to face development, when we're talking about digital humans, that creates the sense of realism and naturalness? And that's the million-dollar question. If we can find the features that drive the negative and positive responses, we can predict what sort of responses people will show to the faces, and then we can use those features to create better virtual humans and virtual faces.

Where we embarked was, we knew at the time, so this was the year 2019 if I remember correctly, that we have a lot of digital humans right now being developed by different companies. But the question is: are they beyond the valley or not? And by beyond the valley, what I mean is: can they be perceived as a human? Can people no longer differentiate between them and a real human? So we formulated the question as: are these agents processed like humans? And you can see some of them that were developed at the time by some of the big tech companies. We wanted to approach this question from three different perspectives. First, the perception of the viewer: do people see them similarly to a human face, so that they can no longer recognize which one is which? Second, do they memorize these faces differently? We have quite extensive cognitive research that tells us that our memory for faces is better than our memory for any other stimuli; our brain is tuned to memorize faces because of the evolutionary benefit it had for us. So we had this question: do they stimulate the same memory encoding that real human faces do? And then the last angle we wanted to look at this question from was neurophysiology: does the brain respond to these faces the same way it responds to real human faces?
And then what we did is that we collected all the state-of-the-art photorealistic virtual agents that were available on the internet at the time. We collected their images. We also borrowed images from an available data set where actual humans were photographed. And then we manipulated the images to keep only the face and remove the hair and other confounds of the picture, and we showed people these images one by one and asked them to rate them. This, for example, is a very simple task that we gave to the participants. One by one we showed them different images. This one is from a real human; this one you should already know from the previous slides, this is an agent. People were asked to rate how much they agree with this image being a human.

And this is the result we got. It became very interesting for us, because we noticed that for the human faces people were quite capable of telling this is a human, and for the virtual agents it was also quite easy for them to tell this is not a human, so they rated those lower than they would a real human. As you can see, although the curve here goes higher, there is a substantial difference between the highest rating given to the actual humans and to the IVAs, the intelligent virtual avatars in this case.

And then we took all the responses from participants who gave the highest response to these images, and we computed the percentage of times each image received the highest rating. So here on this slide you see the percentage of times our participants said they most strongly agree with the face being a human. As we ranked them based on that percentage, we noticed a difference between these images, with the images on top receiving the lowest percentage, being very dull, you know, low level in terms of graphical features, and the images on the bottom row being quite detailed in terms of facial features. And that's where we started thinking: okay, we have this collection of images, we know which of them are perceived as most human-like and which as least human-like; let's do a computational analysis that gives us the features that influence people's perception the most.

From the computational analysis we did, we found two features that contributed most to the perception of human-likeness. One of them was the number of corneal reflections that were simulated in the eyes of the virtual avatar, and the second one was the skin texture. If I go back to this image: the agents on the top row have a smoother skin, so smooth it looks unnatural, and they had either no corneal reflections or fewer corneal reflections than were present for the agents here. These agents at the bottom had a more detailed skin texture; the pores and the different details of the skin were present there.
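The talk does not detail the computational analysis itself, so here is only a rough sketch of how the two features might be quantified from image crops, with placeholder paths, regions, and thresholds; the study's actual pipeline may well have been different.

```python
# Rough sketch, not the study's pipeline: quantify skin-texture detail and
# corneal highlights from grayscale crops. Paths, regions, and thresholds are placeholders.
import cv2
import numpy as np

def skin_texture_detail(face_gray: np.ndarray) -> float:
    """Variance of the Laplacian: higher = more pores/fine detail, lower = over-smoothed skin."""
    return float(cv2.Laplacian(face_gray, cv2.CV_64F).var())

def corneal_highlight_count(eye_gray: np.ndarray, brightness_thresh: int = 230) -> int:
    """Count small bright blobs in an eye crop as a proxy for simulated corneal reflections."""
    _, bright = cv2.threshold(eye_gray, brightness_thresh, 255, cv2.THRESH_BINARY)
    n_labels, _ = cv2.connectedComponents(bright)
    return n_labels - 1  # subtract the background component

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
face_crop = img[50:400, 100:400]                     # placeholder face region
eye_crop = img[120:170, 150:220]                     # placeholder eye region
print("skin texture detail:", skin_texture_detail(face_crop))
print("corneal highlights:", corneal_highlight_count(eye_crop))
```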
So we had a hypothesis that these two features could be the ones contributing most to perceived human-likeness, but we wanted to test this empirically. Because we didn't have all those agents at hand at the time to manipulate these features the way we wanted, we took another approach: we don't have the agents to move from low realism to high realism, but what we can do is take real human faces and handicap them. We can move in from the other side of the uncanny valley and see whether removing those features from a real human face decreases the acceptability and likeability of the face. So that's what we did: we either reduced the skin texture of a real human face or removed the corneal reflections.

These are our experimental stimuli. On the top left, you see the human faces that we had at hand, unaltered. Then we either removed the corneal reflections in the eyes, those small white reflections in the eyes that you can see have been removed, or, in another stimulus category, we smoothed the skin, so we removed all the pores and details of the skin. In one condition we did both, so we changed both features. And then, again, we showed these to people and asked them to tell us whether what they saw was a human or not.

What we found was that when the face was not changed, so when the corneal reflections and the skin texture were intact, people could tell with high accuracy that it was a human. When both were changed, they could also accurately tell that it was not a human. However, when only the eyes were changed or only the skin texture was changed, people were confused; they could no longer accurately tell whether it was a human or not, showing that each of these features on its own matters a lot when it comes to the perception of realism.

Another finding we had was that race and gender were influential factors in this perception. People were more accurate in telling the difference between an altered face and an unaltered face when the face they saw was from their own race. So our participants more easily told the difference for the Caucasian faces than for, for example, Asian or Black faces. And it was easier to tell apart the male faces than the female faces. We think the explanation here is that females are more likely to change their skin texture, to wear makeup, and it's more common for people to see such differences between real skin and altered skin. So that was the conclusion of that study: we found that the realism of the face lies in the skin and the eyes of the agent, and we made a recommendation for future developers that if you really want a very realistic-looking, very natural-looking agent for different application domains, these are the two features you have to focus on.

Then, for the second study that we did, we were interested in the memory of the faces. Now we wanted to find a cognitive explanation. We know that perception is influenced by these two factors; now we wanted to know, at the cognitive level, whether the response to these images changes based on the manipulations that we made to the faces. So again, in two studies, one with the real human faces, altered and unaltered, and the other one with the virtual agents, we showed the faces to participants. First, they just had to watch 48 images. Then they did a distractor task, where they spent time on something else for time to pass. And then we did a test phase, where they had to once again watch images and decide whether they had seen each one before or not. In this test phase, we took a subset of what they had already seen, mixed with new faces, so some of them they had seen before and some they had not, and we wanted to know how many of them they remembered.
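For the old/new recognition test just described, a standard way to score performance is signal detection; the sketch below is one such scoring, assuming per-participant hit and false-alarm counts, with placeholder numbers rather than the study's data.

```python
# Sketch of scoring old/new face recognition with d-prime; counts are placeholders.
from scipy.stats import norm

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity for 'seen before' vs 'new' decisions, with a log-linear correction."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)  # correction avoids rates of exactly 0 or 1
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one participant in one condition (e.g. faces without corneal reflections).
print(d_prime(hits=30, misses=18, false_alarms=10, correct_rejections=38))
```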
What we found was that both of the features did affect memory for the faces, and the feature most likely to make memory for a face poorer was the lack of corneal reflections. So the eyes give it away: the eyes are the most important thing when you are encoding faces in your memory. If the eyes are unnatural, then the cognitive processing of that face is not the same as when you are looking at a human face; there is a complexity of processing there.

Then, when we wanted to go beyond memory and perception into the brain, we started by asking what the evidence is there with respect to the uncanny valley. Do we have an answer to the question of whether there are measures, metrics in the brain, that explain the uncanny valley? I already showed you two studies that had done this, but we wanted to know what the landscape, the state of the art, was. So we did a scoping review of 15 years of research, where we looked at all the neuroimaging studies, including EEG, fMRI, and fNIRS studies, that tried to explore the uncanny valley in the brain. Among all the records, only 13 studies actually did what we were looking for, which shows how little research had been done on neural responses to the uncanny valley. The summary of these 13 studies, which we gathered in a paper, was that we probably know where in the brain the uncanny valley response shows up, and that is a place in the brain called the fusiform face area, which is responsible for face perception and face recognition. What was found was that this area was activated less when people were watching artificial faces: real human faces activate this area differently than artificial faces do.

But what was not present in the studies we reviewed was an understanding of the temporal response: when exactly, after you look at an image, does your brain decide that, wait, this is not a human? The background of this question is that within milliseconds of us humans seeing another human face, our brain can already tell whether what we saw was a human or an object. That is categorical perception, right? We talked about that, that we are very quick in telling whether what we see is a real human face or a non-human. And research shows that faces we have seen before actually make this response stronger, so memory of the face is also a factor here. However, no research had told us whether it is the very early responses, at the level of 100 to 200 milliseconds after we watch the face, that tell us this is an artificial face, or whether it is the later responses, when we have more time to process all those details for which we had already found a perceptual difference, right?

So this became the research question that Julia embarked on: do we find an early response that differs between the two face categories, or a later one? Here is the study she did. She collected EEG responses; these are the electrical potentials that you collect with a cap of electrodes over the scalp. People sat in front of a monitor, and, similar to the memory experiment, they just watched faces passing by. The faces they watched were either human faces or the virtual agents that we had found and manipulated. And here is the result we found. We did not see any difference in terms of the N170 or the P200, the early responses toward the faces.
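To make this kind of comparison concrete: below is a minimal sketch, with made-up placeholder data, of averaging the EEG signal per participant in an early window (around the N170/P200) and in a later window, then testing the two face categories against each other. The arrays, channel choice, and window edges are assumptions, not the study's actual settings.

```python
# Minimal sketch of an ERP mean-amplitude window comparison; data are random placeholders.
import numpy as np
from scipy.stats import ttest_rel

sfreq = 500                              # Hz, assumed sampling rate
times = np.arange(-0.2, 0.8, 1 / sfreq)  # epoch from -200 ms to +800 ms

def window_mean(erps: np.ndarray, t_min: float, t_max: float) -> np.ndarray:
    """Mean amplitude per participant in a time window; erps has shape (participants, samples)."""
    mask = (times >= t_min) & (times < t_max)
    return erps[:, mask].mean(axis=1)

# Placeholder per-participant ERPs at one occipito-temporal channel, one array per face category.
erp_human = np.random.randn(24, times.size)
erp_virtual = np.random.randn(24, times.size)

for label, (t0, t1) in {"early (130-200 ms)": (0.13, 0.20),
                        "late (400-600 ms)": (0.40, 0.60)}.items():
    t, p = ttest_rel(window_mean(erp_human, t0, t1), window_mean(erp_virtual, t0, t1))
    print(f"{label}: t = {t:.2f}, p = {p:.3f}")
```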
But we did see a difference between the two categories of images in later responses, around 400 to 600 milliseconds. So this told us that, in terms of face perception, the quickest response you have to this artificial face is that it is human-like: it is a human, it falls in the category of human. It is later that your brain starts looking at the mismatches, right? So this evidence goes toward, sort of supports, the theory of perceptual mismatch: now I no longer find it to be the way that I expect a human face to be.

Now, the final slide: what is next? Within the Vibe project we had very little time to explore the landscape of the uncanny valley together with the brain responses. I think that with the new AI-generated, okay, this slide is supposed to be a movie. You have probably seen this: it is OpenAI's Sora, an AI-generated movie showing a real human move. It is a collection of images that has been turned into a video, all artificially created. And I think this is the next step for virtual humans and their development and advancement, and I think we have a lot of unanswered questions that can be answered using these stimuli. During the time we were doing our project, we didn't have this level of maturity of virtual agent faces. We couldn't create them ourselves, because, as you could see, to have a very high-fidelity agent you need to have all these cameras in the room, do the photogrammetry, do the rendering, you need a team of people. But I think with AI-generated faces and human behavior, you can create lots of stimuli and continue this research, in order to understand what it is exactly, in terms of appearance, in terms of behavior, and above all in terms of the personality and affective relationships that these agents develop with people, that contributes to the uncanny valley. Thank you so much for listening. This research was done in collaboration with Professor Max Lauersee and Dr. Julia Weitinaute.

Thank you very much for your talk. If there are any questions, now is the time to ask anything you may have. Are there any questions? Yeah?

The experiments you did there, I assume they were all made with flat pictures. Did you also do experiments with 3D, three-dimensional pictures?

No, we did not use 3D faces; we only worked with 2D still images. We actually did have one study where we tried to create facial expressions for these agents and investigate the perception of emotion in the face. I did not include it in these slides because of the lack of time, but indeed, one direction to go forward is probably to have more of these 3D stimuli that people can look at. At the time we were doing this, there was not yet any evidence even for still 2D images and the effect they have on the brain, so we started with simple stimuli, but I think it can grow from there. Thank you. Thank you for your question. I saw a hand here. Yeah?

You said there are differences between men and women in recognizing it, because women tend to have smoother skin due to makeup. So I was wondering if there was a difference between women who wear makeup and women who don't wear makeup, whether wearing makeup makes them look more or less human to us.

So, the images we used were all natural-looking humans without makeup. All the images we had were just raw faces.
So once we smoothed the texture of that image, it might have looked as if they were just wearing makeup, and it is normally more acceptable for a female face to have smoother skin than it is for a male face. Thank you. Any other questions? Yeah, in the back. Just a second.

I was wondering, building on the question of the person in front: did you also see the difference when you separated the two, the shiny eyes and the skin?

Yes. If I go back in the slides, we actually had four conditions in the experiment. In one condition, nothing changed; in one condition, only the skin changed; in another condition, only the eyes changed; and in one condition, both changed. For the unchanged condition and the condition with both features changed, we found that people were very good at telling the difference between the humans and the artificial faces. But when only one of them was different, they had poorer performance in telling them apart.

For males and females separately?

For males and females. I think the statistical analysis we did was, Julia is here, did you do it condition-specific, for male and female? Sorry. It was the effect of gender in general, across all four conditions, with the condition factor that had four levels, together with gender.

And if I may, on the smoothness of female and male faces, an explanation: yes, we could say that maybe women tend to wear more makeup, that's one explanation, but actually I think a more plausible one is that, overall, it has been found that female faces are naturally lighter in skin texture. This lightness, and the hue, probably also contributes a bit to artificialness and is easier to spot, whereas male faces naturally have darker skin. Basically, if you take even the same face form and just change the hue, darker versus lighter, people tend to say that the lighter face is the female face. So our brain also makes this distinction, and I think this is a more plausible, or at least a likely, explanation for why we found this. Yeah, thank you. I don't know if your question was answered eventually here.

First of all, thank you very much for your presentation. I was just curious, looking at all those pictures: how far is AI now integrated into the detailed analysis of the brain? Because we see now, with MRI et cetera, that a lot of detail is already there at the picture level, and the more detailed information you can collect from an MRI or fMRI scan, the better the interpretation you get of where it comes from. I was just curious where we go from here.

So, if I understand, the question is to what extent AI is being used nowadays for the processing of neuroimaging data. I can tell you that a lot of it has to do with the preprocessing of these images. To give you a bit of context: I don't use fMRI, but I use a lot of EEG data. Both are similar in that you get a lot of noise when you collect such data from the brain. There are the responses that you want to focus on and responses you just want to discard; those could be due to movement or to processing of an unrelated stimulus. Most of the pre-developed algorithms we have right now help us a lot with that preprocessing stage.
However, when it comes to the actual feature extraction, the features that we are interested in, I think to have really reliable results we still do it manually. We really go through the data one by one, and once the data is clean, we decide at what stage, what sort of window, we want to look at, what sort of response, what the maximum and the minimum of the response are, the range we want. Those are the things the researcher decides and conducts manually right now. All right. Thank you. Any other questions? Yeah, another one.

I was wondering if you also looked into age, next to race and gender.

We did not, because the data set that we had at hand was not so diverse in terms of the faces' ages; most of the faces were rather young-looking faces. Although I have to say that these days I see a lot of agents being developed with different age ranges: we even have agents of kids, and artificial faces of young adults and older adults. And particularly with older adults, with older faces, I think it's really important that there is investment there, because, for example, when it comes to the healthcare domain and the application of having a nurse or a doctor or a trainer be a simulated avatar, the perception of age actually does matter. You generally trust someone who is older; you associate that with more experience. So it's really important to have that diversity in the faces that are artificially generated. Any other questions, or I think that's it, right? Yeah?

Yeah, I was wondering to what extent we could apply the uncanny valley concept to, for example, discourse or language.

Language. Can you be more specific?

Like when, for example, what the person says does not match? Yeah, we have a whole load of conversational agents now, and we can sometimes tell that, yeah, it does not sound or feel like a human because of discourse features, and not because of the voice or, as in your work, facial features. So I was wondering whether that has actually been studied; maybe it's beyond your background and field of study.

Actually, that's a very good question. I did have one student who looked into the mismatch between the voice and the face, the appearance of the face, so not necessarily the content of what was being communicated. We created four conditions: we had the very natural-looking human, the avatar that was created by the Vibe project, and a NAO robot, I don't know if you have seen one, it looks like a small, cute robot. And then we synthesized two types of voices, a robotic-sounding voice and a human-like voice, and we mixed all of this. So we had one condition with a human face and a human-like voice, one with a human face and a robotic voice, one with a robot face and a human-like voice, and one with a robot face and a robotic voice. And then participants did a task together with this agent, a recognition task, that was supposed to tap into the level of their trust toward the agent. The agent was supposed to give them a recommendation, and they had the choice to either take the recommendation or not. So we looked at what they told us, in terms of how much they liked the agent, and also at how many trials they took the recommendation from the agent on. The result we found didn't show any significant effect.
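As a sketch only: the student's actual analysis is not described here beyond "no significant effect", but a 2-by-2 face-by-voice design like this is often first summarized as an acceptance rate per condition, as below. The data frame is a placeholder structure, not the real data.

```python
# Placeholder sketch of summarizing the 2x2 face-by-voice design; not the study's data.
import pandas as pd

trials = pd.DataFrame({
    "face":  ["human", "human", "robot", "robot"] * 10,
    "voice": ["human", "robotic", "human", "robotic"] * 10,
    "advice_taken": [1, 0, 1, 1] * 10,   # 1 = participant followed the agent's recommendation
})

# Proportion of trials on which the recommendation was accepted, per condition.
acceptance = trials.groupby(["face", "voice"])["advice_taken"].mean().unstack()
print(acceptance)
```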
So, at least at this point, from what we had, we expected that a mismatch between the voice and the face would affect people's perception and their trusting behavior. Our experiment could not yield such a result, and we have a number of explanations. Maybe the agent we used was not very human-like, so the synthetic voice did not really feel too uncanny when it was played on that agent. But I think this is still an open question that needs more research. So you could maybe do that in the future.

Okay, I think that's it for questions from the audience. I also had one final question for you, because you have studied the uncanny valley for many years now, and nowadays technology is more and more able to mimic human traits in a very good way. Do you think, and you also mentioned it briefly, that we can ever completely overcome the uncanny valley? Will we get there?

Yeah, I absolutely think we will get there. When I watched some of the recent animated movies, Alita being one of them, I was quite impressed by the level of sophistication and development. However, you need to know that all the movements of the agent in that movie were created by an actual actor; that's why they look so natural. So biological motion is created by a human and then copied to an agent. I think what we need, with the AI models being at the forefront of this development, is that by collecting multiple videos of humans doing different movements, they can transfer such knowledge, such a skill, to a new animated agent. Then you don't actually have to have an actor play the movement in order to have biological motion for the agent; the agent can have the simulated action on its own, using AI. So I think it's going really fast, and I have a lot of good hope for it. Okay, thank you.