Welcome to the 15th Tijsig Talks. These are the last Tijsig Talks of this academic year, but not the last Tijsig Talks overall, because we will continue with this next year. In this 15th Tijsig Talk we have a program with two speakers, Morad Kortay and Jaekoma Spickler, who are both interested in robotics, but from different perspectives. My name is Peter Sponk, my co-host is Marie Postma, and Marie will now introduce the first of our two speakers.

Thank you Peter, good afternoon. Our first speaker is Morad Kortay. Morad is an assistant professor in the Department of Cognitive Science and Artificial Intelligence at Tilburg University, where he is focusing on autonomous agents, gaming and robotics. He has a background in computer science, and before joining the department quite recently he was a postdoctoral researcher at the Humboldt University in Berlin. During his PhD at Scuola Superiore Sant'Anna in Italy, Morad was part of the Human Brain Project, where he contributed to the development of the Neurorobotics Platform. His research concerns modeling human cognition and intelligent social interaction in humanoid robots. What's interesting about the interaction is that it can be human-robot interaction, but also robot-robot interaction, for example in a situation where two robots are interacting, one of them being a caregiver and the other one being an infant robot. In his work, Morad is creating properties for robots that we tend to think of as typically human-like properties. Among other things, he proposed to model emotions in artificial agents by means of the computational cost of perceptual processing in a decision-making agent, or decision-making system. And the same mechanism can be used to simulate the trust of a robot towards its interaction partner. We're looking forward to your talk, Morad.

So today I will talk about humanoid robots for symbiotic societies. Basically, I'm using this umbrella term to introduce my postdoctoral research.
And in particular, I will defend, or advocate, interdisciplinary research in this direction. To conduct research in this direction, first I need to set my goals, and my goals come in three steps. First, I am collaborating with psychologists and neuroscientists to extract core components of intelligent social interaction, inspired by cognitive developmental systems. Second, I validate these models by using either physically or virtually embodied agents; oftentimes, these are humanoid robots. By achieving the first two goals, my last goal is to contribute to the future of symbiotic societies by making these robots more trustworthy in their interactions in society.

So now we have set our goals, but we need a research framework to achieve them. This research framework comes from a cluster of excellence in Berlin, where we had a framework for collaborating with people from different disciplines. This framework is called analytic and synthetic loops. To use this framework, first you have to define your goal as a team, as an interdisciplinary research team, and the goal is intelligent social behavior in humanoid robots. The framework enables you to integrate knowledge between two different kinds of disciplines. On the analytic side, we have psychology, neuroscience, philosophy and those kinds of disciplines. On the synthetic side, we have more technical fields like robotics and other engineering disciplines. And note that this is not just a theoretical or conceptual framework: as a roboticist, you are literally sitting next to a neuroscientist or a psychologist to achieve this goal. Let's see what we have achieved so far within this framework. Today I will bombard you with a lot of different types of research, so if you have questions, at the end of the session we can go into detail. The introduction of these research projects will be quite superficial due to the time limit.
So the first project is assessing whether humans co-represent robots as humans, or in a different way. From the psychology literature, we know that if you are naming entities or objects within the same category, with each object you become slower. Let's say I am naming my favorite game developers, starting with John Carmack, then Jonathan Blow; with each name, I become slower. And if I partner up with someone, a human, and he or she names their favorite game developers, I will show the same effect: I'll become slower. This shows that we are somehow co-representing our partner's speech. Then we integrated a humanoid robot into the setting to assess: okay, for humans this effect exists, but what about robots? And surprisingly, we found that this effect does not apply to robots. Quite surprisingly, if you partner up with a robot, you become faster instead of slower.

In another piece of research, we went one step further and assessed what the neural representations of different interaction partners are: a human, a humanoid robot, and a computer tower. We created a simple game experiment in which participants in the scanner interact with these three partners, and we recorded fMRI data from the participants. We found that the theory of mind network is activated for all of the partners, but there is a ranking between them: for humans it is most activated, robots ranked second, and the computer tower ranked last.

These are the most interesting pieces of research that we conducted. Of course, we are also doing some pure robotics research. On the bottom left, you see a setting in which we are creating a framework for our robots to achieve heterogeneous interaction. By heterogeneous interaction, I mean that the interaction partners have different physicalities and different cognitive levels. One of a robot's arms can be short, or a robot can generate different sounds.
That's the physicality. And these robots can also have different types of expertise: one can be capable of using correct names for a specific category, the other cannot. And of course, we cannot blindly implement this framework on robots and let them enjoy the freedom to live in our society; we need some guidelines. So we are in the process of creating a trust framework based on European and Japanese ethics standards to make robots more trustworthy. Later I will go into more detail.

We also have another ongoing project, starting from robot-robot scaffolding, on designing a teacher robot by using a human teacher's gestures, realizing them on robots and employing them in the classroom. On the bottom left, you are seeing one of our latest works, where we are using gated multimodal reinforcement learning. Here, the Nao robot interacts with three different partners, each of whom has different skills, and the Nao robot differentiates the skills of its partners. Tonight is the submission date of this paper, so wish me luck and cross your fingers.

The last project here is at the discussion level. We are planning to provide robots with a robot self, based on the infant literature. In the infant literature, acquiring a self is described in developmental stages. In the first stage, the infant has an ecological self: it cannot differentiate the actions of others from its own actions; everything is confined in a single entity. With the interpersonal self, it can differentiate its own actions from the actions of others. At the last level, the cognitive self can individuate its partners. This is still at the discussion level, and we are planning to provide a computational model of a robot self in this sense.

So up to now, I have only mentioned abstractly what we are doing, and I have selected a few examples to be more concrete for your questions.
Here, what we are trying to do is enable our robots to interact with heterogeneous partners, both humans and robots. Heterogeneity is everywhere in the world: in this picture, this caricature you see here, no two agents, robot or human, are the same. So we shouldn't expect a robot to interact with the same partner, or a similar partner, throughout its lifetime. To do this, we designed a four-layered cognitive framework, a four-layer trust framework, guided by ethics principles, both European and Japanese ones. But today I will not discuss the ethical principles, just give you the computational idea behind this framework. At the cognitive level, we currently implement three different components, and I will introduce them in the next slides.

We first created an interaction framework, and this framework has three different parts. First, you have an agent. The agent interacts with the environment and with the interaction partners by using its cognitive modules. Then we have an environment, which provides a task to our agent; the task here is to find the rooms, or patterns, that are less noisy for our robot to process. During the interactive experiment, the robot interacts with three different partners with different physicalities (a simulated agent, a robotic agent, and a human agent) and with different guiding strategies: a reliable partner, an unreliable partner and a random partner. The reliable one, whenever the robot asks for help, will show a pattern that is associated with less cognitive load, less cost, less energy. In the end, the robot learns through interaction, using a simple, model-free learning method, and it will distinguish the interaction partners' guiding strategies and find the patterns that are associated with the lowest cost. Of course, you can say that, okay, these are quite boring grid worlds, but in the robotic case it's not that simple.
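As a sketch of this model-free learning loop, the following toy example uses tabular Q-learning on a one-dimensional grid of rooms, where each room shows a pattern with a fixed perceptual cost and the reward is simply the negative cost. The room layout, cost values, and hyperparameters are all invented for illustration; in the actual experiments the cost comes from the robot's perceptual processing, and the partners' guidance feeds into this loop.

```python
import random

def q_learning_grid(costs, episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D grid of rooms.

    costs[i] is the (perceptual) cost of the pattern shown in room i;
    the reward is the negative cost, so the agent learns to move toward
    the least noisy room.
    """
    rng = random.Random(seed)
    n = len(costs)
    Q = [[0.0, 0.0] for _ in range(n)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = rng.randrange(n)            # start each episode in a random room
        for _ in range(20):             # steps per episode
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
            r = -costs[s2]              # cheaper pattern -> higher reward
            # standard Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

With costs like `[5, 3, 1, 4]`, the greedy policy read off the learned Q-table points toward the cheapest room from either side of the grid.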
Still, for a single partner, you need to interact with it for more than 20 hours, so it's quite expensive in terms of time. You can adopt this framework in a game setting, like Donkey Kong here: instead of finding a pattern in the maze, your agent can locate the ladders that are useful for reaching and saving the sprite, or for increasing the score. You can also use tabletop games with an actual human partner and a robot partner in this setting. But again, it requires a lot of time to play these games, and the human will get bored.

From the technical side, we have three different components, as I said, but you can add additional components, like explainability, to the framework. The first component is a multimodal auto-associative memory. Here we have an energy-based model, a Hopfield neural network, but you can use another one, like a deep belief network. With this energy-based model, you have an input image and the network converges to some state; the operations needed to reach that steady state count as energy, or cost. Then we feed this cost into our reward function. Basically, if you are moving from a high-cost state to a low-cost state, you made the right choice and we reward you with a plus one. You don't have to use minus one or plus one; you can also put the energy directly into the system, and it will work. Lastly, we have a partner-preference formation module, which provides your agent with a Theory of Mind-like functionality. With this module, the robot can infer what kind of strategy its interaction partner is following, and try to avoid that interaction partner during the experiments. With this setting, what you can achieve is this: you can integrate multimodal information, you can achieve heterogeneous interaction, and you can solve these kinds of simple maze or game-like problems with your robots.
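A minimal sketch of the cost idea, assuming a classic binary Hopfield network with Hebbian weights: the number of unit flips needed to settle into a steady state serves as the "cognitive load" of a stimulus, so a familiar (stored) pattern is cheap and a corrupted one is expensive. The pattern size and update schedule here are invented for illustration; the talk does not specify the actual architecture details.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for a Hopfield network storing +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W / n

def energy(W, s):
    """Standard Hopfield energy E(s) = -1/2 s^T W s."""
    return -0.5 * s @ W @ s

def settle(W, s, max_sweeps=50, rng=None):
    """Asynchronous updates until a fixed point; the flip count is the 'cost'."""
    if rng is None:
        rng = np.random.default_rng(0)
    s = s.copy()
    flips = 0
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                flips += 1
                changed = True
        if not changed:  # reached a steady state
            break
    return s, flips
```

A stored pattern settles with zero flips (low cost), while a noisy version of it needs at least one flip and sits at a higher energy before settling, which is exactly the signal the reward function can use.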
Of course, I introduced a lot of projects here, and none of them is a solo project: I am working with, collaborating with, amazing colleagues. And I hope in my next talk I can add people to the Tilburg zone. Lastly, I would like to advertise a workshop that we are organizing at the upcoming IROS. If you think your research overlaps with one of the topics here, please let me know, and I encourage you to submit a short paper to our workshop. Yeah, that's it for my talk, and I'm ready for your questions.

What I didn't get is: what is a high cognitive load for a robot?

Okay. High cognitive load: a sample that the robot didn't see during its training, one that is contaminated with environmental noise, or merged patterns. Basically, if the robot sees an unfamiliar pattern, it will incur a high cognitive load in its perceptual process.

Okay. Can I have a follow-up question?

Sure.

So I think this definition of trust is very powerful. It basically means that you can trust someone's perceptual analysis of the environment, and you don't have to spend your own resources on analyzing it. Were there cases or situations where you thought you ran up against limitations of this type of definition for robots?

Yes. One of the limitations is adding additional sensory modalities. We used audio-visual information, but practically you could also add the motor commands of the robot. That would bring additional computation to our experiment, so we didn't do it. Another problem is noise, both from the robot and from the environment. And lastly, sometimes the interacting human partner forgets the script and doesn't interact with the robot properly, and that can cause some problems with our framework.

But you did not run into cases where you thought that you would actually need an additional definition, or that this one was maybe too weak?
It was more about fine-tuning certain scenarios, if I understand correctly?

Yeah, exactly. And that's one of the things that we should unify within this workshop: what we mean by robot trust. We are just using inputs from psychology to define our trust, but later, I hope, a more unified definition of trust will come out of this workshop.

I have seen some research by other people where the one thing I found really interesting is that you say, well, people can play games with robots, and then robots can learn from that, et cetera. Whereas actually the challenge for them was to make the interaction feel natural to the people, so that they find the robot an entertaining game player. To what extent does that fit into your research? Can you say something about this?

Yeah, that's quite an interesting question, because we have some meta-analytic studies in the literature on how to make the task more playful, like a partner that understands your cognitive load and acts accordingly, a partner that puts you in the flow. The assigned task should not be too hard for your skills, because then you will understand that you cannot go beyond it and you will disengage from the task; nor should the task be so simple that you can achieve it trivially. Your partner should put you in the flow, actually. That's what I mean.

Yeah, well, these are two aspects, of course. One is the technical aspect: are you an interesting game opponent or game player, within the confines of the game itself, the game rules and the game logic? But the other is: are you an interesting social game partner, in the sense that it is not just the mechanics of the game playing but also that social component? And I think that is more core to your research, and probably the most interesting part of it.

Sure, the social component is important because of, let's say, joint attention.
If you are playing a game and you're not paying attention together with your peers, it means that you are not engaged with the task, and that will signal to your partner: okay, this game is not important for me, I don't want to play it in that sense, I'm just playing for the sake of killing time. So there's no playful component in that.