Thanks for joining us here in Geneva for the AI for Good Global Summit 2018. My next guest is Francesca Rossi. She is AI Ethics Global Leader at IBM Research. Thank you very much for joining us.

Thank you.

You are here in Geneva this year co-organizing a track on trust, and you've introduced this concept of the trust factory. What can you tell us about it?

While we were thinking about how to put together the trust in AI track for the summit, we brainstormed among the organizers about how we wanted to structure it and which topics we wanted to cover. We decided to structure the track around three themes, along three different dimensions in which we think trust in AI can be spelled out. The first is trust between the users of an AI system and the system itself: the system should be trustworthy in the sense that it is not biased, it is fair, it is explainable, and the way it uses the user's data is transparent. Another dimension is trust among the different stakeholders involved in AI: among different communities, and among different corporations producing AI, which should collaborate even though they may compete in the marketplace. There is also trust among different cultures that may have different ideas of how AI should be developed, deployed, and used. Thinking along these dimensions, we put together nine projects that we presented yesterday during the track, with a few people involved in each one. So in a very concrete way, we thought about trust in AI as a whole, which we think is fundamental to building AI.
Then we thought: these nine projects, after all, are just nine examples of how trust should be built into AI, and trust is really the core thing to think about when developing and deploying AI. So why not expand the track and go beyond what we can say and start in one day at the summit? Let's launch the idea of a kind of global incubator for whoever wants to put together a project on building trustworthy AI. The main motivation for the track, and also for the idea of the global incubator, is that we think that if AI is not trustworthy, it will not be adopted as widely as it could be, and its benefits will not be realized. It is not good to under-trust AI, because if we do not trust it enough, we will not be able to get all the benefits it can give. But trusting it too much is also very bad, because we would be assuming it has capabilities that it may not have. So we really want to build AI for which we can establish the correct level of trust. The idea of the trust factory, and we already have a website, trustfactory.ai, is to continue and expand the work we did for the track, at a global level and in the long term, rather than over one year as these nine projects will run. Once trust is established, that is when we can start reaping the benefits of AI.

Let's talk about the benefits a bit more, because you are involved in many initiatives to use AI as a force for good.

Yes. In the last two or two and a half years, a huge number of initiatives have been started: research centers, units within corporations, universities, and governments, declarations, and national AI strategies in Europe, China, Russia, the US, and elsewhere.
All of them are really trying to understand what it means to build AI that is beneficial for individuals and societies, trustworthy, and developed in a responsible way, so that not only the AI can be trusted, but also the corporation building it. So there are many, many different initiatives. IBM, the company where I work, has a lot of initiatives internally. From the research point of view, papers are being published regularly on how to detect bias in data, how to recognize and mitigate bias even without access to the training data, and how to make AI systems more explainable. There is also work on value alignment: making sure that systems follow some optimization criterion to reach an objective, while at the same time following the ethical guidelines relevant to the task they are addressing. Beyond research, there is also work in collaborating with the rest of the world to understand what it means to build this responsible AI. For example, IBM has published a data responsibility policy: we are not going to reuse the data of one client for other clients or other tasks. That is of course very attractive for our clients, but on the other hand it puts us in a somewhat more difficult position, because with less data, your machine learning and data-driven approaches have less to work with. So we have to compensate with other things, like symbolic AI, domain knowledge, and reasoning. Besides the data responsibility policy, we published what we call the principles for the cognitive era, where we state explicitly the purpose of the AI we want to build. For us, that purpose is very clear: AI should augment human intelligence, not replace it.
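To make the bias-detection research mentioned above a little more concrete, here is a minimal, illustrative sketch (not IBM's actual tooling) of one common check on a dataset of decisions: the disparate impact ratio, comparing the rate of favorable outcomes across two groups. The group labels and loan-approval data are made up.

```python
# Disparate impact ratio: favorable-outcome rate of an unprivileged group
# divided by that of a privileged group. A value near 1.0 suggests parity;
# the commonly cited "80% rule" flags ratios below 0.8 as potential bias.
# Illustrative sketch only; groups "A"/"B" and the data are hypothetical.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """outcomes: 0/1 favorable decisions; groups: group label per record."""
    def favorable_rate(group):
        decisions = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(decisions) / len(decisions)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical loan-approval outcomes for two groups.
outcomes = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups, unprivileged="A", privileged="B")
print(ratio)  # group A's favorable rate (0.4) over group B's (0.8) -> 0.5
```

Real bias-detection toolkits compute many such metrics and also offer mitigation algorithms; this sketch shows only the simplest kind of outcome-rate comparison.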
That means we are focused on that kind of AI, because we work to help other companies use AI in whatever they need to do: we want to build AI that helps professionals do their jobs as well as possible. Then there are other initiatives. For example, we are founding members, together with five other companies, of the Partnership on AI. The Partnership on AI was founded by six companies: IBM, Amazon, Apple, Google DeepMind, Facebook, and Microsoft. From these six companies, we started the idea of a platform for multidisciplinary and multistakeholder discussion of issues related to the pervasive deployment of AI in our society and its impact. We decided that the initiative would be open not just to companies but to many other stakeholders: NGOs, civil society organizations, universities, and professional associations. Starting from six partners at the beginning of 2017, we now have 53, of which I think only about 30% are companies; everybody else is non-profit. We think that only this very multistakeholder approach can really help us understand what the issues are, identify and define them, resolve them, and hopefully arrive at best practices for dealing with them. We are addressing the issues of deploying AI in our society through what we call thematic pillars. One is fair, transparent, and accountable AI: making AI that is fair, transparent, explainable, and accountable. The second is safety-critical AI, which covers AI systems that can make life-and-death decisions, for example in healthcare or self-driving cars. Then there is AI and jobs, on the impact of AI on employment. Another is human-AI collaboration.
If you want an AI system to work together with humans as a real team, there are issues you have to resolve to make sure there really is effective teamwork between humans and machines. All of this is addressed in a very diverse and inclusive environment, and we hope that the output of the various working groups of the Partnership on AI will be guidelines, best practices, and a deep understanding of the issues in every sector where AI is deployed. Compared to the other, more academically oriented initiatives, this one has the presence of the companies, which I think is very unique: it brings the voice of those who know what happens when you deploy AI in the real world.

A final question about the summit itself. You were here last year for the launch of the event. How do you reflect on the work that has been achieved, and on this year's summit, which is more action-oriented?

Yes, I think the two summits represent an evolution of the overall effort to build AI for good. Last year, it was very good to bring together two communities: on one side the UN agencies, the institutions that know what problems need to be solved, for example those related to the Sustainable Development Goals, and on the other side the AI community. These two communities needed to get to know each other and to understand each other's terminology and issues. So last year was, I think, not very concrete and actionable, because the two communities first needed simply to talk to each other. But this year is much more concrete and actionable. The four tracks that ran on day two were really very concrete. Of course, I could only join my own track, so I did not see the other ones.
But today we saw the summaries, and they were all very concrete, with specific lists of projects and specific timelines. In 12 months, at the next summit, we hope to see results from all the projects: our track had nine projects, another track had, I think, four or five, and another, I think, 15. So in one year we will see a lot of concrete results. I think this reflects the overall state of the discussion: there was an initial period in which the different communities and experts from various disciplines needed to know and understand each other, and now they are ready to be more concrete and to produce specific outputs.

Francesca Rossi, thank you very much.

You're welcome.