This press conference on implementing responsible AI is really all about how we engender trust and contribute to a global discourse on how AI can be responsibly and effectively implemented. So I'm joined here by Minister Iswaran from the government of Singapore. He's the Minister for Communications and Information, and he's also the minister in charge of trade relations for Singapore. I'm also joined by Brad Smith, who I think everyone knows here is the president of Microsoft. We're happy to have you back, sir. I'm also joined by Kay Firth-Butterfield, who is the head of the artificial intelligence and machine learning platform here at the World Economic Forum. And then last but not least, we have Diana Paredes, the CEO and co-founder of Suade, one of our tech pioneers. So to kick things off, Minister, I'm going to turn to you. I believe you have a statement for us.

Good afternoon, members of the media, and also to the fellow panel members. I want to start by saying I'm delighted to be here with all of you for this press conference, together with the WEF C4IR and also the industry partners. Today, Singapore, together with WEF and industry partners, will be taking another significant step forward in AI governance. Collectively, what we are seeking to do is to build a trusted AI environment that will guide organizations to deploy AI responsibly. These efforts really build on what we started and shared in Davos last year. That's when Singapore launched the Model AI Governance Framework to guide businesses to deploy AI at scale in a responsible manner. This framework translates broad ethical principles into pragmatic measures that businesses can adopt voluntarily, giving them a ready-to-use tool to help deploy AI in a responsible manner. A year on, the framework has gathered further traction and support.
We have been working with WEF C4IR to promote the use of the model framework, including engaging companies locally and internationally on the World Economic Forum's platforms. International and local companies across diverse sectors have also adopted or aligned their practices to it. This year in Davos, I'm pleased to share with you that we are taking yet another step forward in AI governance. Together with our partners, we are releasing three AI governance initiatives to further guide organizations in deploying AI responsibly. The first is an Implementation and Self-Assessment Guide for Organizations, or ISAGO. The second is a second edition of the model framework that I talked about. And the third is a compendium of use cases. Let me elaborate on each a little bit. As part of Singapore's collaboration with WEF C4IR to drive AI and data innovation, we have co-developed the Implementation and Self-Assessment Guide for Organizations. This guide will help organizations assess the alignment of their AI governance practices with the model framework. It also provides an extensive list of useful industry examples and practices to help organizations implement the model framework. The guide was developed in close consultation with industry, with contributions from over 60 organizations from across the globe. These include Microsoft and Suade Labs, who are here with us today on the panel, as well as other established industry partners such as DataRobot, DBS Bank, KPMG, Google, Mastercard, Salesforce, and Visa. This guide will pave the way for future peer assessment: professionals who are proficient in AI governance could use the guide to help organizations implement the model framework, or to assess an organization's implementation. Secondly, we are continuing to enhance the model framework to keep pace with rapid developments in AI.
The second edition is enhanced with real-world practical examples of how organizations have implemented the model framework. It also includes additional considerations for responsible AI deployment, such as robustness and reproducibility. It is the result of extensive consultation with industry and governments internationally. Today, 15 organizations globally have taken up or aligned themselves to the model framework. And together with these organizations, we are also releasing a compendium of use cases to complement the second edition of the model framework and the self-assessment guide. These use cases demonstrate how various organizations across different sectors, both big and small, local and international, have implemented or aligned their policies with the model framework. This adoption demonstrates the relevance and practicality of the model framework for organizations who are deploying AI. We believe that these three AI governance initiatives, collectively, will be of interest and value to companies who seek to deploy AI. These three interlinked publications are, from Singapore's perspective, our contribution as part of a continuing effort to participate in and contribute to the global discourse and developments on AI ethics and governance. And we are looking forward to engaging more like-minded partners as we work collectively to strengthen the model framework and AI-related policies and standards. This work, we believe, will pave the way for the next bound in global digital economy developments by fostering trust and strengthening collaborations between the public and private sectors, and also with all other stakeholders. Thank you again for joining us today.

Thank you, Mr. Minister. I think that is a wonderful summary of where we are right now. But I'm gonna turn to Kay Firth-Butterfield. Kay, I think we can all appreciate that 15 organizations aligning to this model framework and providing use cases is no small task.
But can you maybe tell us how this started? Where did this come from? What's some of the background, and what was your role in leading this at the center in San Francisco?

Certainly. So as head of artificial intelligence for the World Economic Forum, my role is actually around governance of AI. As you will be well aware, there is a lot of discussion about the need for governance, in the media, in companies, and amongst governments. So what we do at the Centre for the Fourth Industrial Revolution is look at what useful projects we can create to fill those governance gaps. At the moment I lead 10 projects, of which this was one, and we look forward to working with the government of Singapore in the next phase of this particular project. The way that we do these projects is that we have governments or business or academics or civil society come to us and raise a governance issue, or maybe we look for that governance gap. In this case it was a marriage, in that we had seen this governance gap and Singapore were interested in working with us upon it. What we then do is build a community of multiple stakeholders and work together to create a robust outcome, which we believe we have done with Singapore in this model framework. We have also just released a toolkit for boards of directors to understand how to perform the oversight role for companies. And I want to mention that both the toolkit and this project are downloadable from our website, and they are free. What we want to do, and have always wanted to do with Singapore, and it's the ethos of the center and the forum's work in this area, is to scale, so that it's not just companies that have already been involved in the work: this is a tool that any company can use, any company can pick off our website and delve into and hopefully find insights that they can use in their business. So it has been a pleasure to be involved in this work with Singapore.
It's equally a great pleasure to be involved with Microsoft and to have Brad Smith here, who is also co-chair of the Global AI Council, which we run out of the AI team at the forum. Brad?

Well, thank you, Kay, and thank you, all of you, for the opportunity to be here today. I would just offer a few thoughts. First, I think that this really continues the leadership that the government of Singapore and the World Economic Forum have both brought to these issues around artificial intelligence over the last few years. I have the opportunity to meet with government officials from around the world, and I still clearly remember my first meeting in Singapore, as well as my first meeting with the minister here a year ago. One of the things that I've always appreciated is the government's commitment to move quickly and be a leader in this space. And the partnership between the government of Singapore and the World Economic Forum, I think, has been very important for this work to progress. Second, I think that there is something very distinctive and valuable in the approach that the government of Singapore has taken. In some places around the world, people start to look at these issues, they begin to appreciate their complexity, and they feel that they're going to have to study these issues for many years before they start to offer even an initial framework for how they should be addressed. The fact that the Singapore government last year took one step, and this year took another step, I think is very much a reflection of the kind of progress we need. There is no single answer for all time with technology that is this young. But we should not wait for the technology to mature before we start to put principles and ethics and even rules in place to govern AI. And so I think that the approach taken by the government of Singapore and the World Economic Forum together really offers a guidepost, if you will, for the need for speed and how it can be attained.
Third, I think it's also interesting, if you're thinking about this, to consider how the various pieces that the government of Singapore has put together really fit together. Because in a lot of areas, one develops rules or principles or provides a framework and then stops. But having these use cases, for example, is in fact really important when it comes to artificial intelligence. If you're not familiar with the field, you might ask, well, why? It's not something that we tend to see in every other field. And the reason is that artificial intelligence actually behaves differently depending on how it is used, who the user is, and who the individuals are who are being served. All of that really requires the development of different use cases, so that different organizations, whether they be businesses or NGOs or parts of a government, can not just think about how to use AI but in fact start to use AI in a responsible and ethical way. So for all of those reasons, I would conclude by saying that what we're seeing here today is actually of real importance. I think it's of real importance to the Singaporean economy. I think it is the kind of step that is putting Singaporean companies in a position to move faster, to implement AI, and to be at the forefront of technology, not just in Singapore, not just in Asia, but in the world. I think it's the kind of step that is helping to make Singapore a leader, a leader not just for technology but for trust. And when you think about what it takes to be a leader for technology, I would say it takes some of what has also made Singapore a leader over the years as a financial center: it is a place that people trust.
And finally, one of the really good things about this is that it is the kind of effort that is gonna serve not only Singapore, the people of Singapore, and the economy of Singapore, but frankly the world, because we're all learning together, and the more governments and countries can share what they are learning and what they're doing to develop these kinds of ideas, the better off we're all going to be. So certainly on behalf of a company like Microsoft, which has a big presence in Singapore but obviously a big presence in the world, I just wanna say how deeply we appreciate the opportunity to be involved both with the government and with the World Economic Forum, and how much we value this kind of initiative, because I think it sheds some light on where we all can go together, everywhere around the world. Thank you very much.

I think that having the initiative to bring together large corporations like Microsoft with the government of Singapore, and also with one of our tech pioneers, is really one of the strengths of the World Economic Forum and its Centre for the Fourth Industrial Revolution. This co-designing process really creates a lot of value-add, and we hope that these case studies will help fast-track, accelerate, and scale these projects around the world, because the fourth industrial revolution is waiting for no one. So I would love to hear more from Diana, if you could tell us a little bit more about the importance of tech governance, and having that right up front in this process, and then maybe some of the challenges and opportunities when your company went to implement this framework or test it out.

That's right, thank you very much. So it's really a pleasure to be here. It's been a wonderful experience to contribute to the framework and to some of the examples as well.
At Suade, what we have been doing for the past few years is really providing regulation in a box for the financial industry. And regulation, with all of its complexities and all the different colors and shades that you can find globally, is a very interesting topic to try to tackle, and it was very clear for us from the beginning that we would have to leverage the best the technology industry had to offer if we wanted to be pioneers and innovators. So looking at AI, NLP, machine learning, and automation tools was very high on our agenda as a company. What we realized very quickly is that the governance reality around implementing AI was very important. There is a reality around regulation and the repercussions it can have for a particular customer of ours in the financial industry in terms of fines. What it can also mean in terms of implications for consumers is quite relevant, given the risk that can bring to the industry. So doing this properly and bringing the right amount of governance was really essential for us from the beginning. Right governance in many ways also means taking the right responsibility, and I think it is the job of every innovator to take a very hard look at what their innovation is bringing to the market and the impact it has on society. What we're talking about is really assessing yourself and making sure that the technology you're developing is fundamentally having a positive impact on humanity. The reality with AI is that it is pioneering ground. Effectively, this is a new world for everybody, so we're all innovators, we're all pioneers in this conversation. So it is very important that we all start the conversation right from the beginning, and the model framework here, and the conversations we're having around the self-assessment guide, are also about how we will do this positively as a society.
What we found, and why we've really appreciated this work and found it really compelling for the industry, is that the devil is in the detail in a lot of these conversations. When you create technology, you know that when it comes to AI and machine learning there's a lot of detail, and sometimes these conversations are conducted at a very high level, so nothing effectively progresses. What's really interesting here is that the model framework goes into a level of detail that we hadn't really seen before, and it's very commendable of the Singaporean government and the WEF to have looked at that amount of detail and, as we were saying, to have looked at companies that are of very different sizes, at very different points in their journey of innovation, including companies like ours which are effectively innovating in the space as we go along. We found four key challenges that we have seen the model address very well and precisely. One of them is obviously data. The quality of data is something that's very well understood as part of the process, and understanding the lineage of data is very well addressed. So is minimizing inherent bias, which is a topic that we feel passionate about, because bias in the data, and the data not being revised from the beginning and on a regular basis, could have repercussions that are quite dire, I would say, for society in general. The other problem we found was very much thinking about human involvement: do you actually have a human in the loop, out of the loop, over the loop? There's a lot of reality around false positives and false negatives, and a lot of detail around all of those things. So what you really want from a framework is the right amount of flexibility, in order to think about all the different use cases that you can encounter.
Because from one country to another, one system to another, one algorithm to another, one industry to another, the level of complexity that you find in creating algorithms is actually very, very big. So a framework that works for the industry, or for any industry, effectively has to be a framework that is flexible and really allows for the right level of detail and the scenarios that you can encounter. The other point is very much ethics and trust, which Brad was mentioning as well. We always speak about ethics around AI and how to do this properly, but ethics also encompasses bringing consumers along on the journey, with a certain amount of layman's language and explainability, so they really understand what this AI is going to do in their life. Addressing those issues in the right way fundamentally means that AI is also going to be adopted at a much faster pace, and embraced rather than resisted. So one of the things that the framework was trying to address, aside from doing AI in a responsible way and at scale, was also to really enhance the consumer's understanding and confidence in, and acceptance of, AI. And that is very much in line with the fourth industrial revolution: upskilling people, making sure that they come along in the process of a society that is going to be changing with technology. And the last point is the element of liability. As we all start using AI more, where does the liability sit, and whose fault is what when it comes to generating certain models? Obviously all this ends up coming down to explainability, and explainability in itself is not simple: there are certain things that are very explainable, but there are certain other things that are more subjective and difficult to capture.
And one of the things that we really loved about the framework was the opportunity to look at your internal governance and understand in detail how to restructure it in order to get to what liability you're taking on as a company when you're actually adopting AI. So from our perspective, this is a huge success story. This is really a wonderful example of public and private collaboration, very much at the core of the stakeholder capitalism that the WEF promotes. As I was mentioning before, we are all innovators now when it comes to AI. And I would really urge and invite more governments to take part in this wonderful effort that the Singaporean government has started to lead, because it's really something that we need a lot more of, and I would invite companies to join this movement of effectively setting standards for the industry when it comes to data and artificial intelligence, and the way we do this, toward more interoperable systems, where effectively you're looking at a world where we're promoting an effort for ethics, for AI, and for the responsible use of it. Thank you very much.

Thank you so much. We have a few moments for questions from the audience; we have about two minutes. So if there is a show of hands, if we have anything... if not, I might let our panel go and give them two minutes back. But we might have one, here in the white jacket. And could you just tell us what news outlet you're from?

I'm a journalist from Phoenix New Media from China, and I have a question for Microsoft's Mr. Smith. We know that nowadays facial recognition has caused considerable controversy, including with Google. How do you keep a balance between the improvement of technology like AI and 5G, and the law? That is my question.

Okay, and then do we have another question? I just want to make sure we can hear both of them. Just given the time here, can you pass that mic down?
Only a very quick one, from national public radio, Switzerland. The definition of AI: how did you settle on that one? Because that's one of the core parts of writing regulation of this kind.

So we have one question that's about definitions, and then another question that's more around the governance structure and why that's really important for facial recognition, which I think is a very hot topic around the world. So, Kay, did you want to just give us a quick bit about the governance, and then we can turn to Microsoft?

Yes, surely. How you define AI is a major part of any work that we do, and at the moment we're really looking at machine learning tools. So when we were writing this, we used a definition of artificial intelligence that really looks at some of these machine learning tools, for example facial recognition and natural language processing. And just before I hand over to Brad on your question: we do actually have a facial recognition project that we're running out of our team, looking at the governance of that. Brad?

Sure, and I'll just say very briefly: yes, there was some press report earlier today about a debate between Google and Microsoft about facial recognition. And the truth is, I actually think our two companies mostly agree on these things, although we happened to be in Brussels yesterday, never in the same room at the same time, and apparently answered the same question differently, which was enough to lead to media suggestions that we were debating. But I'd really build on what Kay said. From our perspective, facial recognition technology is very important, and it's the kind of thing that can create both benefits and challenges, and, if misused, can create harms. Therefore it is very important to have governance of it. In fact, I think it is precisely the kind of technology that benefits from the type of framework that we're talking about here.
Ultimately, I think it's the type of technology that will require not only voluntary application of frameworks by the companies that are developing or deploying it, but also rules embodied in laws to ensure that it's used well. And I think that this kind of work is in fact the kind of guidepost that will help us develop those laws.

Well, thank you very much. We're on a tight schedule today, and I'm sure everyone is gonna head back into the cold for their next session, but I would like to thank my distinguished panel for your time and for your thoughts.