Welcome to the AI for Good Global Summit here in Geneva. My next guest is Cathy Kobe. She is a partner in EY's risk practice, advising clients on the risk and control implications of legacy and emerging technologies. Cathy, thank you for joining us.

Thank you for having me.

So Cathy, to start with: can AI be a force for good? How can it be used to help deliver the UN Sustainable Development Goals?

Well, I think one of the things AI is really good at is synthesizing data into information, and there are many areas where better information, and more equitable collection of information, can help the SDGs. Take education, for example: being able to understand each student individually, how they are best able to learn. There are some great applications of using AI to synthesize a learning program tailored to each of them. If we could do that for every student around the world, in their language and in their context, it could be hugely beneficial in raising education levels, which in turn affects poverty and the ability to build a meaningful workforce. Similar advances are available in health: individuals in very remote locations who don't have access to great health care can use AI for diagnoses through nothing more than a picture on a phone. So I think that's where there are some great uses of AI.

So as we use AI more and more in our daily lives, what impact do you expect it to have on society and on individuals as a whole?

Well, I've always viewed AI, like any technology, as agnostic. What matters is the intention behind its development and the use cases that are built. My concern, in my role focusing on trusted AI, is that the functionality being developed for AI is outpacing the governance model around it. We need to be having impactful conversations about what it is going to be used for.
What kind of secondary impacts could it have? Are there areas of discrimination, where we're building something that isn't going to be available or accessible to everyone in the world? As a society, we need to put some governance in place and each have a participative role in determining how we want to use this technology, including defining the problems. A lot of this technology starts with defining the problems you want to solve, and so far there is criticism that many of those problems are being framed in scientific labs that are not very diverse right now: less than 20% of developers are women, participation from a number of minority groups around the world is very low, and disabled individuals are not as involved. As a society, we should have those people at the design table, because they will come to it with different problems and say, "For my local experience, this is what I need AI to do for me." If those problem statements get developed, I think the technology companies will find the ability to solve them.

So to develop AI systems and solutions safely, everyone should be included. It's a collaborative project, isn't it?

It is. It is very much inclusive. And there also needs to be trust behind it, because if users perceive that this is being developed in a lab without their participation, and that the company sponsoring it is as much of a black box to them as the technology itself can be, it may not get adopted. A lot of people have pointed out that you can give people very strong evidence that an AI health diagnostic system is up to 10% to 20% more accurate than doctors themselves, but they still don't trust it. They can't have a relationship with it. And they do realize that it makes mistakes.
And they don't quite understand where those mistakes could occur. Could it happen to me? Can I ask questions about how it made this diagnosis? So we also need to think about it from that perspective: we can have great intentions and we can provide the technology, but if you don't provide the governance and accountability frameworks around it, you won't get the adoption rates that will allow it to have the full impact it can have.

Cathy, thank you very much.

Thank you.