Gabriela Ramos: Good morning, everybody. Great to be here at the AI for Good Summit with Gary Marcus. I'm Gabriela Ramos, the Assistant Director-General for Social and Human Sciences, the sector that is overseeing the implementation of the Recommendation on the Ethics of Artificial Intelligence. Gary?

Gary Marcus: And I'm Gary Marcus. I'm a cognitive scientist who studies artificial intelligence. I'm a serial entrepreneur who built a company that I sold to Uber. I'm the author of the book Rebooting AI, and I'm spending all of my time these days thinking about AI policy.

Gabriela Ramos: I think that's a very good way to start, because you have been calling a lot of attention to the issue of governance, and it would be great to hear you say a little more about that, because we also think that's where the issue lies.

Gary Marcus: I had a wake-up moment in February, when Microsoft released Sydney, or Bing, and a New York Times reporter had this crazy conversation with it. I thought Microsoft would probably take it off the market until they could really fix it, and they didn't. It was a reminder that we're still at the mercy of the big companies to get this stuff right, and we shouldn't just trust big AI companies to get AI right, because there are so many risks. Although there are lots of advantages, there are lots of things that could go wrong too. We need to figure out how we as a society can make sure that things go right. I think part of the reason that you and I have found each other is that we're looking at these problems in similar ways: what the risks are and what needs to be done about them.

Gabriela Ramos: And also with the very clear understanding that we can shape the technological transformation.
It's not something that is exogenous to us, and that is really UNESCO's message: this is not a technological conversation but a societal one, about ensuring that these technologies help us solve our problems rather than create more. In fact, the 193 countries in UNESCO, and now that the US is back, 194 countries, are of the view that we need to put the human at the center, with human rights, human dignity, environmental sustainability, fairness, and inclusion as the outcomes we look for from these technologies, which is not always the case. For that, I think institutional innovation is very important at the national, regional, and global levels, and we agree on that.

Gary Marcus: I think we agree very much, and I think we agree that there's a gap right now between the principles that everybody is subscribing to, very well articulated in those guidelines, and getting companies to actually do these things. My favorite example right now is transparency. Everybody agrees that we need transparency, but we don't know what's in GPT-4, for example, or what data it's trained on, and the data it's trained on makes a difference for things like bias. So I think you and I would agree that the next step is really how we get from the abstract principles that we want to the reality of implementing them in the real world, right?

Gabriela Ramos: Yes, but then I think we need to change these false notions that regulation will kill innovation, because other sectors, like pharma and biotech, are very well regulated, and not excessively so. I feel we really need to think about how we develop these rules of the game so that the technologies deliver much better. There is a little bit of disagreement here, because my concern is not how I work with big tech; that's fine. The governments need to upskill and deliver on their duty of care, because they are the ones that are paid to protect people.
Gary Marcus: So an idea that I think we're converging on is model governance. I've been working to develop a nonprofit, which I'll talk about today, where we want to try to model the governance that different countries need if they can't develop it themselves, to give them a lift in doing that. And it seems like you've been thinking about very similar ideas.

Gabriela Ramos: That's why I think our listeners and viewers will be happy to know that we will be joining forces, because we have developed a tool to know where countries stand in terms of their capacities, not only legal but cultural, sociological, technological, and scientific: UNESCO's readiness assessment methodology, which we are piloting in 40 countries, with more to come. We will know where they are, and then we will work with them to see what kind of institutional developments need to happen, legislative and regulatory. But you're completely right: what we need to come out with is a model governance framework for AI, and then governments decide how to apply it, because neither of us is going to tell governments what to do. They know better, but we might have some benchmarks they can draw on.

Gary Marcus: We're calling what we want to develop "governance in a box," and the idea is to make it as easy as possible for each country to do what it wants, but also to give them a chance to customize it and do what they need for their particular country. So the idea of joining forces around this is just fantastic.

Gabriela Ramos: And watch this space, because I think we will be working together to deliver better for AI for good. Thanks so much for talking, and more soon.