Welcome to our session and, most importantly, the launch of the Global AI Action Alliance, intentionally shortened to Gaia. If you are members of the Forum, please remember to stay with us after this panel for the discussion. And please, everyone, use our hashtags: Davos Agenda and, especially for this session, Global AI Alliance. So what is the Global AI Action Alliance? It's a multi-stakeholder collaborative platform and project accelerator to speed up the adoption of inclusive, trustworthy, and transparent artificial intelligence across all sectors and all needs. It provides an umbrella platform for everyone working on responsible AI to come together to engage in real-time learning, pilot new responsible AI tools and approaches, scale the adoption of best practices, and undertake collective action on emerging challenges. And then, through the Forum, to amplify that work everywhere it's needed through our networks in government, arts, academia, civil society, and business. So it's a real connector for everybody to learn from one another about responsible artificial intelligence, because we're focusing on a positive future: extending the use of beneficial AI for everyone, no matter who you are or where you come from. So please reach out to me and join our Alliance.

It's now my pleasure to welcome our panelists, and I want to start with a question to Vilas Dhar. Vilas is the president and trustee of the Patrick J. McGovern Foundation and, with Arvind Krishna, is one of our very first co-chairs of Gaia. Vilas, you're known for your work in global AI for good. Why do you think the Alliance is needed, and why is it needed now?

Thanks, Kay. Good morning, good afternoon, good evening to everybody joining from around the world. It struck me this morning, as I prepared for this panel, that the term artificial intelligence is familiar as it may be to our panelists and some of our audience.
For much of the world, though, it still reflects a pretty abstract vision, maybe something that feels like a science-fiction future. But I've sloshed through the rice paddies with a farmer standing next to me, showing me how the predictive analytics delivered on her phone change how she plants and harvests rice, how they affect the livelihood of her family on an everyday basis. I've seen how AI can capture and bring to life dead and dying languages of Indigenous peoples, giving voice, of course, to the vocabulary of those languages, but also celebrating the identity of peoples. I've spent time with children playing with AI-enabled toys and seen how that dynamic learning creates new wings for creativity and inspiration. Since we're among friends, I'll also share that I've probably spent an inordinate amount of time playing with those toys myself.

But we've also begun to see how AI is affecting our lives in maybe less positive ways: how it's creating new divides in civil discourse, how it's giving voice to implicit and sometimes explicit racial bias, how it's silencing the voices of the most marginalized. So for me, artificial intelligence isn't a term about an abstract future. It's about a dynamic that's affecting every person on the planet every day, right now, right here. And the fact of the matter is that right now these technologies are being created by technologists and academics, and they're controlled by policymakers. It's not my view that those are necessarily the wrong hands, but it's certainly the case that they are too few hands to shape our common future. If we're all going to be residents of a global human village, my question is: shouldn't we, the collective we, also be the architects? So this is an urgent question to me, standing in civil society, heading an organization that cares about AI for good.
The ways we define how individuals interact with these systems today, how society cedes control to autonomous decision-making systems, and how a lack of community voice shapes where these technologies are going will define our future, and a more collaborative approach is needed. So Gaia is a place for a very diverse set of global actors to come together, not just technologists and policymakers but civil society, nonprofits, and the direct beneficiaries of these technologies, to think about how we create AI that attempts to achieve the Sustainable Development Goals; how we build data technologies that are grounded in intermediaries and fiduciaries that protect our interests; and how we bring communities together to shape not just the development of these technologies, but how they're used, how we learn from their use, and how we inform the next generation of products.

To support this work, the Patrick J. McGovern Foundation today is making a series of commitments to support the use of AI for good. First, a commitment of $40 million in grants to support nonprofits and community organizations as they begin deploying AI products in pursuit of their already existing goals. These organizations know the challenges that we must face, and the Alliance will bring together partners that can help. This is a continuation of our 2020 work, where we've looked at climate change, building a tech workforce for the future, and a digital health economy, all through the lens of how AI can support this work. Second, we're committing to create a new infrastructure of technology services for direct data and AI support to organizations that want to use these products for good. We'll be making more announcements about this in the spring, so I hope you'll follow us on Twitter, @vilasdhar and @PJMFnd, to hear more.
And finally, we're committing to be participants in a global conversation, one that brings together all of our friends in the technology sector, government participants, and academics, but also innovators, people on the ground who are deploying these technologies and democratizing access, people who are solving the fundamental challenges of how we feed our families, how we stay healthy, and how we find economic opportunity. For us, it's not about the seat at the table that I think many of us on this call already enjoy. It's about inviting others to pull up a chair. So the end goal of the Alliance, of our work at the foundation, and I think of all the participants on this call, is to create an artificial intelligence-enabled society: to shift our focus from building responsible and ethical AI products to building a responsible and ethical society that is grounded in these products, and to use artificial intelligence to create genuine equality. I'm really excited for this launch, and I'm looking forward to our discussion here on the panel. Kay, I'll hand it back to you. Thanks so much.

Thank you so much, Vilas, and thank you for your donation to the Alliance. I want to move now to Arvind Krishna, who is CEO and chairman of IBM and has also agreed to serve as co-chair of Gaia. Arvind, can you also share with us why you think the Alliance is needed and how you see it playing a key role, and also tell us about your kind donation to our Alliance?

Thanks, Kay. Let me answer the question and build a little bit on what Vilas talked about. Statements are no longer enough. Society is asking leaders, leaders from everywhere, business, government, institutions, to act to restore faith in our institutions. So why around AI? Artificial intelligence is one of the rare technologies that offers to unlock $14 trillion, yes, that's trillion with a T, in global productivity, that is, GDP, over the next 15 years. But there's a big inhibitor. The inhibitor is trust.
Can we trust these technologies to truly do what we'd like them to do, to do what good policy would dictate they should do? If it's a smart speaker or something like that, it's probably fine; we don't need to regulate it very heavily. But if it's making life-and-death decisions, policing decisions, medical decisions, decisions on which government service you may get, all of which is going to happen, the genie's out of the bottle, then we need to act to ensure that these technologies are being used in an ethical way. And the business lens that I always look through is that the ways in which AI is applied have to be ethical.

So, to touch a little bit on what we will do: we have a long history with these technologies, but IBM has an even longer history of trying to be a good steward of technology, and good stewards of technology have to act so that technology is applied for good. In that context, we are going to donate a number of our toolkits: the AI Fairness 360 toolkit, the AI Explainability 360 toolkit, and something we are working on called FactSheets. Fairness and explainability tools try to watch and ensure that AI is not biased, that it is indeed fair as humans apply the word fair, which comes more from some of your backgrounds in law and society than from mine in technology; we will leverage the expertise of all of you in applying that to the technology. As for explainability, humans tend not to trust things that are a black box, so you have to make them more explainable. And FactSheets, something which we and a number of others are working on, try to explain where the data came from. What's the provenance of the data? Can you explain who trained the AI? So that it's much more transparent than before.
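[Editor's note: to make the fairness idea above concrete, here is a minimal sketch of the kind of group-fairness check that toolkits like AI Fairness 360 automate. This is illustrative only, not the toolkit's actual API; the function names and the loan-approval data are hypothetical.]

```python
# Illustrative sketch of two common group-fairness metrics: statistical
# parity difference and disparate impact. Not the AI Fairness 360 API;
# names and data here are hypothetical.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged); 0 is ideal."""
    return favorable_rate(unprivileged) - favorable_rate(privileged)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable rates; values well below 1 often flag concern."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # unprivileged: 3 of 10 approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # privileged:   7 of 10 approved

print(statistical_parity_difference(group_a, group_b))  # about -0.4
print(disparate_impact(group_a, group_b))               # about 0.43
```

A monitoring pipeline would compute metrics like these on a model's decisions and alert when they drift outside an agreed threshold; the actual toolkit wraps many such metrics behind dataset and classifier abstractions.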
And so we're bringing all of this in and working with a global audience, because I do believe, and I know global opinions may differ, that business, or private industry, has a very big role to play in how these technologies get developed and deployed, and also in creating skills that will allow people to participate in that $14 trillion productivity improvement. And so I'm really excited to work with the Global AI Action Alliance, and let me put the emphasis on the word action, not just a panel discussion. That is what excites me.

Thank you so much, Arvind. And what a tremendous gift. I know that many of you listening today also come from companies that have tried to get your heads around this work and have your own systems for doing so. So I'm hoping that you will follow Arvind's lead at IBM and also participate in this sharing, making sure that everyone benefits: not just people in the global north, where AI is at the moment so concentrated, but also in the global south. And that brings me, Henrietta, to a question for you. Henrietta is the executive director of UNICEF. Thank you, first of all, for agreeing to serve on the steering committee of Gaia. As head of an international organization, you see the benefits and risks of AI for children across the globe. How do you see Gaia playing a key role in protecting and nurturing our future generations?

Well, thank you, Kay. Just as Vilas and Arvind have said, I think this is going to be a very important turning point in how we as a world structure and think about artificial intelligence. You know, one third of the users of the internet are children around the world, and yet we have one half of the world not connected. So what happens when they go online? They see a whole other world. There are AI-enabled toys, and there are virtual assistants, and there are chatbots, and there are systems that tell you what to learn next, what to read next, who should be your friend.
So much guidance happens on the internet for children. So Gaia will be a very important vehicle for us to see this world through children's eyes, because they are going to grow up in this world, and it will be important for them. I might also mention a couple of things that I think we could be doing as a team. One is Arvind's point about action. Principles should be taken into practice, and that's what we and everyone in this session should be thinking about. How do we actually take this into the real world? How do we design children into the use of these systems? What about governance regimes? How do we reflect human and child rights and diversity and inclusion? So number one is lead by example, and that's exactly what Arvind and Vilas are talking about: AI policies. And let me add that I think Gaia can join several other alliances. Under Generation Equality, we are part of a leadership group on technology. It is going to be very important that gender and girls are part of this world of artificial intelligence. So that will be number one: joining Gaia up with several other alliances that are at work.

The second is a point, Kay, that you were making, which is that as we expand connectivity, it's going to be very important that we focus on all children and young people, including those in countries where we do not currently have connectivity and data. We're collecting lots of data from children and young people in the developed world, but less so in the developing world. We want them to be part of it, so let's design that in from the beginning. The End Violence Partnership, which I am co-chairing, has just made a $10 million investment to help develop and scale up technology and AI solutions that will track online sexual predators. It's the abuse on digital online platforms that I think is very, very important.
We need to scale up machine learning and the ability to detect, remove, and report sexual images and videos of children. We need to prevent online grooming. We need to disrupt live streaming by sexual perpetrators, and AI can help us do all of that. And then thirdly, Kay, I really love that you have created a new youth council. I think having young people as part of it is going to make a big difference; they need to have their voices and their thoughts heard. So there are three ideas for you, Kay.

Thank you so much, Henrietta. And I couldn't have cued you better in talking about the AI Youth Council before I moved to will.i.am, because actually this was Will's idea. I think about two years ago, he looked at the people in the room and said, there are a lot of old white folks here, and we started the AI Youth Council on the back of that comment. So I'm delighted to introduce will.i.am, chief executive of I.AM+. Will, not only did you help us think about the AI Youth Council, but you are also chairing our judges for our Smart Toy Awards, which we launch next week. But I want to take you to your new song, American Dream. It includes a call for better funding of education. Can you see a way of AI playing a good role in education for our children, in America but across the world too?

Yeah. We saw firsthand, the whole entire world, how fragile society is and how everything could be shut down. And people didn't realize, for kids that are coming from underdeveloped communities, or at-risk students, how school is a refuge. Learning from home is hard when your parents, you know, have two jobs. And there are a lot of parents that don't even have jobs and can't feed their kids three meals, and school was the way for them to have their three meals, right? Some parents can only afford dinner. It's hard to learn when the teacher is taking shortcuts.
And there were a lot of awesome teachers that came through and stepped up during COVID. But there were also a lot of teachers that took advantage of the fact that, you know, they didn't have the traditional regimen of grading. So AI could be an amazing tool to help educate a whole new fleet, a whole new troop of folks that are going to go out in the world and create new jobs, let alone fill the jobs that are not filled. But then, you know, that's the front end of it, educating and teaching folks. But you also want to upskill and encourage kids from inner cities and, you know, poverty-stricken areas to build AI tools. They need to be a part of that conversation. So it's both ways, it's front and back: AI educating people, and then people building AI that's going to serve society, right? Because you're not going to have, you know, proper training data if it is not coming from folks that live in those communities and understand those communities. You're always going to have bias if the folks that trained the data did not truly understand the conditions of the folks that are going to receive that AI, that are living in those environments. So it's really important: 360, AI educating folks, people building AI and training data, so we have a truly unbiased, trustworthy platform.

Super. Thank you, Will. And also, as I said, we are just about to launch the Smart Toy Awards next week, and I wondered what the judging team hopes to showcase for children, who will, of course, be living and working with AI.

Can I answer that? Sorry, Will. Yes, I wanted you to. I apologize. So right now we're at, this is the beginning. And so an AI toy is a question mark, right? There's a lot of AI in our lives now, and it's a very loose term. A lot of people don't really understand what AI is. They think it's sci-fi.
When in actuality it's around us everywhere, and every device that we hold has some type of AI inside of it. But when it comes to kids, you know, I'll be looking for AI that is mindful and has a human approach and takes into consideration the moment of a kid playing, and then what that AI is doing to mine and monitor. Because a kid should not grow up and worry about the toy it played with, and how the AI was learning the child. That, to me, is very thin ice. We have to make sure that a kid is protected as a child and when they get older. Right? When I was playing with G.I. Joes, I didn't have to worry about the G.I. Joe learning me. And, you know, when I was playing with Transformers, I didn't have to worry about Optimus Prime learning me and then selling data to an insurance company as I got older because it understood and predicted what I might be like. So for these types of things we need to ensure that a kid is safe as a child and protected as an adult.

Thank you so much, Will. And sadly, 30 minutes is never enough time. As we come to the last portion of our time together, I just wanted to do a lightning round. Vilas, any closing remarks on the role of civil society in this area?

Thank you, Kay. I'm so excited by the comments on this panel; they reflect exactly why we need a diversity of voices. I will just say two things. First, we need to make a joint commitment to reverse the flywheel: to let technology development be led not by what's possible, but by the opportunities that are out there to make the world a better place. I think about partners like RefugePoint working to build self-reliance among refugees, UNICEF's incredible work that Henrietta has shared, MIT Solve bringing together innovators, WRI building global geospatial intelligence. The challenges that these organizations are solving should be driving how we create technology.
And the second piece I'd like to share is that this panel has been, just on its face, such an incredibly inclusive group. To me, it presents a model of the global conversation we should be building: bringing together civil society, corporates and technology companies, governments, and artists who bring an inspirational view, to build a single shared future for all of us that is grounded in the same principles of equity, inclusiveness, and creating a better future for everyone. Thanks, Kay.

Thank you. Super.

Thanks, Kay. Look, I think every responsible organization that develops or uses AI has an obligation to make it a force for good. So with that, let me just divide it into business and government. Business should focus on accountability, speaking to the same point Will did. You've got to be accountable for how you develop and use the AI, including all the data associated with it. I'll stick to that simple example, and that leads to both ethics and governance. Government has a role to play in regulating, but I believe that you can't regulate with a blunt hammer, because that stops innovation. So you should regulate based on the risk. If the risk is going to be small, be light on regulation. If the risk could be heavy, be heavier on regulation. And we know the high-risk AI applications. I'll mention one that we have all been concerned about for years, which is facial recognition and how it can be used or misused. Henrietta gave some great examples, actually, of how it could be a force for good. But many of those same examples can also be turned into a force for bad. So I think that's a great example of one we can use. You began, Kay, talking about the echo chamber of misinformation. That's another force for bad. So government has a heavy role to play. I don't think we know the exact answer, but that's why we need an alliance like this one, to perhaps converge on things. So those are the two thoughts I would leave for this audience. Thank you.
And Henrietta, in 20 seconds or less.

So: let's have digital public goods. Let's have an internet of good things. And let us try to connect all of this toward education. It's the number one request we hear from children, and it's the best ladder out of poverty.

Indeed, absolutely, and I think we all agree with that. So I would like to remind everybody who is a partner of the Forum to stay with us. I would like to thank everybody for joining, and I would like to leave the last 30 seconds for final comments from Will.

Thank you so much. It's an honor to be here and be a voice for the folks that reflect my path coming from poverty, and to speak on behalf of those kids that are still in poverty. We've never been here; humanity has never been at this intersection, where the next version of computing is predictive and can further put people that come from where I come from at more of a detriment. And it's an urgent ask that we upskill and invest in inner-city kids' education around AI, autonomy, robotics, computer science. If there's a baseball field and a basketball court in every single junior high school and high school, there need to be computer science programs and robotics programs too. It's a human right that people are brought up to speed to compete and solve problems and train AI and understand the complexities of data. It's a human right. And so I'm here to not only speak on it but do my part and make sure that kids that come from communities, starting with the community that I come from, are equipped with AI and STEM skill sets. I'm proud of the work that my foundation does and the kids that have joined the program, but we need more of it. And so Davos can have a big human heart, a hand up, where, you know, we can scale the work that we're doing in Boyle Heights to educate and bring people up to speed.

So, as I say, thank you to all of our panelists.
Thank you so much for the donations from Vilas and Arvind, and please join Gaia, contact me, and let us take this conversation forward. Thank you.