Johanna Seibt, what is Denmark doing at the moment to promote responsible AI?

Well, Denmark is actually in a very interesting position. The philosophers at AU, at Aarhus University, started very early on an effort to communicate, not just to Denmark but to the rest of Europe, how important it is to involve humanities researchers directly, not only in the entire discussion about artificial intelligence and robotics, but much further, in the actual production process of AI systems and, in particular, social robotic systems. So we started in 2014 with a conference series that we call the Robophilosophy Conferences. We focus on social robotics because the problems that arise in artificial intelligence are amplified once AI systems become embodied, and embodied in a fashion that people no longer relate to in a tool relationship, but in a completely new type of social relationship. So social robots are no longer tools, that is the slogan. And for that reason, of course, the opportunities, but also the risks, of this kind of embodied AI are on a different scale. So we started very early on, and we were fortunate enough to receive external funding that gave us the means to communicate this research. Right now Denmark is putting additional funds into the development of the area. So I think in many ways we are quite far along, as far as the interaction between technology and humanities research is concerned, and in particular as far as the humanities research on these areas is concerned. It is curious, and this is our main message, that in all other areas engineers turn to experts, but precisely when it comes to the social domain there is the presumption that engineers are actually the experts in ethics and sociality.

Why is that so?

I have no good explanation. I cannot really explain that. I can only tell you that I have been trying for many years now to change the general conception here, to remind engineers that if they were to build a weeding robot, they would go to a botanist, a specialist on plants. When they build a social robot, strangely enough, they do not turn to the experts on types of social experience, on human social well-being. They do not yet turn to the humanities scholars. At best there are collaborations with psychologists, but that is not the whole story of human well-being.

What can academia do, then, to promote responsible AI? Do you think there is a need for new educations, or should it be something inherent in existing educations?

I think we definitely need new educations. At Aarhus University, again, we have begun, and other Danish universities have made similar efforts. We try to bring the humanities and engineering closer together. For instance, we now have a new supplementary subject, Humanistic Technology Development, where we also communicate to our humanities students that in the future they need to prepare to work much more not just as commentators on social developments, not just reflectively, but proactively. So the change is from the reflective humanities to the engaged, proactive humanities. That is the goal, and that goal can be promoted and implemented by creating new educations that either bring together students from different disciplinary tracks, who need some time together in project groups, or simply by creating new interdisciplinary educations.

But what will it take to get AI into the public domain, so to speak? How do we get a public debate on AI?
Because to most people it's not something that they care about. They shrug their shoulders and perhaps smile a little awkwardly. It's something for Hollywood to show us what it is. But when will we start discussing the implications of AI in public?

Well, I think we actually have begun. The concept of fake news is very much in the public awareness. The fact that AI is influencing our elections, that disinformation is threatening democracy, this is something that we see as a general public topic right now.

But almost always as a threat, not as an opportunity.

Yes, and I think the next step will be... I mean, part of the communication, as it were, is missing. We also have, of course, news about AI outperforming humans in the diagnosis of cancer, discovering new medicines, and so on. That story needs to be promoted to some extent, but before we do that, I think we should again take the cautious path, involve the experts, and talk in great detail about ethics, intelligence, motivation, action, cognition, information processing. There are so many terms in the public discussion at the moment that are not clearly defined and are often simply used wrongly. So I think it's a beautiful task that humanities scholars have, to get together with their colleagues in engineering and bring the right expertise together. We are discussing at the moment as though we were trying to solve physics problems with folk physics, as it were. So we need to bring the right expertise to this discussion, and it is certainly part of the researchers' responsibility to engage more in the public discussion. One other thing we can do, and this is an initiative that the Foundation for Responsible Robotics is implementing at the moment, is to develop quality marks. In the same way in which certain food products now carry quality marks for, say, ecological production, we can and will in the near future introduce quality marks for technological systems and applications that are produced according to certain standards of responsibility.

Like ISO standards.

Yes, exactly. Or rather, this will be a special quality mark for responsible technology development. Thank you.

Thank you.