Interviewer: Welcome back to the AI for Good Global Summit here in Geneva, day two. I'm really pleased to have, and I hope I get her name right, you'll forgive me, Aimee van Wynsberghe, who is also the co-director of the Foundation for Responsible Robotics. Before we really go into it, what does that mean?

van Wynsberghe: Well, the Foundation for Responsible Robotics is a not-for-profit organization established in the Netherlands, and what we're trying to do is bring academia together with industry and with policymakers. Our whole goal is to create this atmosphere of joined-up thinking, and the whole point of that joined-up thinking is: how do we lead ourselves towards this vision of responsible robotics? What is it, and how do we get there?

Interviewer: And I know just before you rushed to the studio you gave a speech to everyone here. What was it about?

van Wynsberghe: I was talking about this thing we refer to at the Foundation as the vicious cycle. We're pointing out that these days there is a vicious cycle related to the production, use and development of robotics and artificial intelligence products. By a vicious cycle I mean that one thing leads into the other. We have a lack of accountability on the part of the companies who are making the products, and we have a lack of incentives to actually do better, whether that's policy, whether that's regulation, whether that's just public awareness. These things lead to the creation of bad products, robotics products, AI-embodied products, and by bad products I mean products that contribute to the erosion of societal values: decreasing privacy, no transparency. And when you have this lack of appreciation for values, then you have customers who have no power. It's the companies who have all of the power.
And when you're in that situation, you can see very easily how you go right back up to the top of the vicious cycle: a lack of accountability. So what we're trying to do is point out that we're in this vicious cycle and say it doesn't have to be this way. We can change it. We can turn this vicious cycle into a virtuous cycle where we create incentives for companies to do better, to want to do better, and the way we're suggesting to do that is through the creation of a quality mark. We're trying to create something similar to Fairtrade, or the rainforest-protection labels; you can see the label on the product. We want to do that for robotics, so if you go into a store to buy a robot product, you can pick which one you want based on whether or not you see this label. And what the label means is that the company has had to go through a really strict, rigorous accreditation process. Have they paid attention to transparency, privacy, sustainability, fairness? If they have, they get this sticker, this label. If they haven't done a good enough job, we give them an assessment of the things they can do better. In this way we want to open up the black box of how these robots and these AI products are actually being made, empower the consumer, the customer, and then we change that cycle. We place accountability on the companies, we incentivize them to do better because customers are going to be buying the products they consider to be better, we're creating better products, and we're really contributing to the flourishing of values.

Interviewer: So how's it going down?

van Wynsberghe: Yeah, it's a long and difficult process. I don't want to paint a rosy picture that everything's perfect and fine. There are a lot of obstacles, and we're figuring out how we do this. We could not do this without the help and support of Deloitte.
They contribute their expertise, their skills and their knowledge through the Impact Foundation, giving back to society by spending their time working on this project. And we are making really great progress at the moment. We have the framework created, we understand what we need from companies, and we've had a lot of interest from robot companies who come to us and say they want to be pilot companies. Now we are just building momentum on getting the piloting going.

Interviewer: And because you need international regulations for this to work, is that why you've chosen this setting, the UN, to talk about it?

van Wynsberghe: We're also hoping this is something that can inspire new kinds of regulation. We don't want it to be mandated that you have to go through this quality mark; we want it to be an inspiration: these are the kinds of policies and regulations you could be putting in place. And yes, you're right, this is the perfect venue to be having that conversation.

Interviewer: AI is moving so fast right now. Well, you're based in Holland. Isn't it a bit like putting your finger in the... what do you call it? The dyke? Is the problem so big that we can't actually overcome it? Is the flood coming in?

van Wynsberghe: Yeah, no, I understand. And that's why this is a multi-pronged approach as well. We have the creation of this program, but at the same time we need to communicate about it, we need to work on branding, and it's really about consumer awareness. We need people to understand that this cycle exists, that it's out there and we're right in the middle of it, but we also want to empower them to understand that it doesn't have to be like that. So yes, it's a huge question, it's a huge problem. We need the support of companies, of policymakers, of customers, of academics, of pretty much everyone. And we're starting here.
This is what we're doing.

Interviewer: I think the phrase is "holding back the dyke", putting your finger there. Okay, but you were here last year. I suspect your message was pretty similar then.

van Wynsberghe: Last year I was trying to raise awareness about how ethics could be used as a resource. I'm also a professor in the ethics of technology, and my job isn't just to point fingers and say you're doing this terribly, you're doing this wrong; it's really also to steer in the direction of good. So last year I was talking about this in much more general and abstract terms: that we should use ethics as a way forward, that creating AI for good means we have to look at what the good is, and then we can understand how we get there. This year I'm talking about something much more specific. Instead of using ethics to help uncover what the issues are, now let's get into hardcore problem-solving mode. It's time.

Interviewer: So overall, finally: optimistic, cautious, worried?

van Wynsberghe: That's a good question. I would say that because we have the momentum that we do on the quality mark, I am becoming more and more optimistic by the day. I do believe these products are going to provide incredible benefit to society if we get it right, and this is one way I think we can get it right.

Interviewer: Okay. Well, that was Aimee van Wynsberghe from the Foundation for Responsible Robotics. Thank you very much for your time.

van Wynsberghe: Thanks for having me.