Hello and greetings from Washington, DC, and the US Institute of Peace. Welcome to all of you joining us from around the globe for the Artificial Intelligence and the Next Generation of Peacebuilders event. For those who would prefer to listen in Spanish, you can find a separate live stream with Spanish interpretation at the bottom of the event webpage on our USIP.org website. My name is Michael Phelan. I serve as the managing director for the Center for Thematic Excellence here at USIP. The Institute of Peace was established in 1984 by the US Congress and is a national, nonpartisan institution committed to conflict prevention and peacebuilding globally. One of our greatest priorities here at USIP is supporting our next generation of peacebuilders and amplifying the unique and vital insights that young people bring to this space. Our Youth Advisory Council provides a platform for young leaders from around the world to share their expertise with USIP staff and partners and ensure youth perspectives are incorporated into USIP programming and research. Today's event itself is brought to you by the members of USIP's Youth Advisory Council. They recognized the need for a conversation on artificial intelligence and its impact on peacebuilding, a conversation that weaves together multiple generations, geographies, and disciplines. As we enter the era of more advanced artificial intelligence, we are faced with both unprecedented opportunities and challenges. AI holds significant promise as a positive tool for conflict prevention and peacebuilding, but it also presents risks that could undermine these very objectives. The next generation will be the stewards of technologies like AI and will play an instrumental role in guiding how humanity employs and regulates them. It is an honor to be joined by a distinguished panel of experts who specialize in the crossroads between AI, peace, and societal impact, and we thank you for your participation today.
We're also privileged to have the participation of young peacebuilders from around the world. Now, to guide us through this timely conversation, I'm pleased to introduce Ms. Oni Papa. She is a leading member of USIP's Youth Advisory Council, an alumna of our Generation Change Fellows Program, and a co-founder in her own right of Kilosco Youth in the Philippines. Oni, I invite you to lead us in this conversation, and thank you. Hello to all, and thank you so much, Michael, for kicking off this discussion and welcoming all of us to the Artificial Intelligence and the Next Generation of Peacebuilding webinar. So good morning, good afternoon, or good evening, depending on where you are joining us from around the globe. I'm Oni Papa, a member of the United States Institute of Peace's Youth Advisory Council, and it is such an honor to moderate this crucial and timely discussion on the nexus of AI and the next generation of peacebuilding. I would love to begin by introducing our distinguished panelists for this exciting event. Firstly, we have Dr. Andrew Imbrie, associate professor in the Gracias Chair in Security and Emerging Technology at Georgetown University's School of Foreign Service. He's also affiliated with Georgetown's Center for Security and Emerging Technology. Dr. Imbrie is the co-author of The New Fire: War, Peace, and Democracy in the Age of AI. Next, we have Branka Panic, the founder and executive director of AI for Peace. Branka has an impressive 15-year global career and is a passionate advocate for leveraging AI for peace, security, and sustainable development. Our third panelist is Ms. Alexandra Hakunson-Schmidt. She's currently working with UN Women's Regional Office for Asia and the Pacific, focusing on gender-responsive digital security in Southeast Asia. Finally, meet Zia Tuzani, founder of TUNACT and a USIP Generation Change Fellow.
He's actively working to engage young people in the decision-making process in Tunisia and is also a data scientist in the field of AI. With a panel that can provide expert opinions and information on the intersections of peace and AI, this message is for our audience members: please feel free to type your questions in the chat as they come to you. We promise to come back to them and dedicate time to a Q&A session towards the end of our discussion. So please head on over to the chat section of the very event page where you're currently watching the live stream, and leave any questions that come to mind. And now, without further ado, let's dive into this engaging conversation and explore how AI can shape the future of peacebuilding and, with it, society at large. So for our first round of questions, we begin with our panelist Zia Tuzani. Zia, to set the stage for our audience members, could you please help us disambiguate what we mean when we talk about artificial intelligence? What sets AI apart from other technological advancements of our time? Sure. So to answer that question, I will try to answer four different sub-questions. The first one is: what is AI? We can define it as the ambitious discipline that tries to mimic human intelligence. Historically, to do that, we have been through different schools of thought. The first one, in the 50s and 60s, was what we call symbolism, and it tried to mimic human logic, for instance syllogisms: Socrates is a man, all men are mortal, therefore Socrates is mortal. So we tried to implement this kind of logic with computers. However, that approach also needed lots of human input in the form of knowledge bases to let the computer draw these kinds of logical cause-and-consequence inferences. One typical example that we all saw, in old versions of Microsoft Word or Google Docs for instance, was the grammar and spell checkers: they were kind of good, but sometimes they were really limited.
So that's one way that we tried to reach this artificial intelligence. Another school of thought started to gain momentum around 2010, and it's what we call the connectionist school. This school is based mainly on an architecture that we call the neural network. It really gained momentum because at that time lots of data started to become available, and computers were also becoming more and more powerful and cheaper to use. Typical examples of that kind of architecture are ChatGPT or, for instance, Google Bard. Now, the second question: how does something like ChatGPT work? In traditional computer science, we have to spell out every step for the machine to follow and execute a task. What changed is that connectionists use another approach that we call machine learning. Rather than coding everything, we let the machine learn those rules or patterns by itself. The way we do it is that we give the machine lots of examples to learn from, and we give it the answers. The examples are the input, and the answers are the output, and we let the machine figure out the function that maps this input to this output. So that's really what we're doing now. Next, I will say why we speak so much about artificial intelligence now. We speak about it so much because experts in the field, data scientists like me, were really surprised, astonished, to see how well tools like ChatGPT were able to perform and give such precise answers. In the 1950s, Alan Turing developed a test to see whether machines were able to exhibit intelligent behavior. The test consisted of having an interrogator in one room, a computer in another, and a human in a third. The interrogator has to guess, through questions and answers, which is the human and which is the computer. Nowadays, ChatGPT could pass that test easily.
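The input-to-output learning loop Zia describes can be sketched in a few lines of Python. This is an illustrative toy example of my own, not code shown by the panelists: the machine starts with an arbitrary function and repeatedly nudges its parameters until the example inputs map to the given answers.

```python
# Toy training data: inputs x with their "answers" y, generated here by y = 2x + 1.
examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

# The machine's candidate function is y = w*x + b; it must learn w and b itself.
w, b = 0.0, 0.0
learning_rate = 0.05

for _ in range(2000):                       # repeat: look at examples, adjust
    for x, y in examples:
        pred = w * x + b                    # the machine's current guess
        error = pred - y                    # how far off the answer it is
        w -= learning_rate * error * x      # nudge parameters to shrink error
        b -= learning_rate * error

print(round(w, 2), round(b, 2))             # learned parameters approach 2 and 1
```

Nothing here was hand-coded about the rule "multiply by 2 and add 1"; the pattern is recovered from the examples alone, which is the shift from rule-writing to machine learning that the panel is describing.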
So that's one thing that makes us feel amazed, but also a bit scared. Another thing, I think, is that for a long time, since Aristotle, we have defined intelligence as the ability to master a language: for Aristotle, what differentiates humans from animals is that mastery of language. And when we interact through chat with ChatGPT, we always feel that there is some of that mastery. All of that, I think, is leading us to see the potential of AI now and the acceleration that has happened in the past year, and to think about the consequences. I think everybody has heard about the pause that some researchers have called for in AI development, not, I think, because they really want to stop it, but to make the public and political leaders aware that it's really accelerating now, and that we really have to think about how to build some governance around it and how to use it for the best, because we can also use it for the worst. So I really think that these technologies now offer lots of opportunities and challenges in all fields, including peacebuilding. That's amazing. Thank you so much for letting us in on that framing of AI. I want to ask the other panelists: would anyone like to expand on or further clarify this definition of AI? Please feel free to unmute. Hi there. I think I'll just build on Zia's good points that he walked through and add a few additional insights for the audience. I mean, this question really matters. It might seem like an academic point to debate the definition, but it does influence questions around regulation and governance, and technology protection when you think about export controls or investment screening, and it also affects how officials and young leaders think about galvanizing positive trends in the field. So it is really important to grapple with.
I will share with the fellow panelists that it is very difficult to know how to define it, partly because there are so many different subfields and so many different applications. I think many experts at the cutting edge of the field will not necessarily agree on the definition, and it's evolving so rapidly that the fundamental concepts tend to shift. So it's difficult to pin down, but I do think of AI as a field sort of like mathematics: there are many different ways of doing AI, and one of the ones that has obviously become so popular, as Zia alluded to, is machine learning, the idea that you can use computers to execute algorithms that learn from data. This ability to learn, as opposed to rote execution, is what defines the capabilities of these systems: their ability to find complex patterns in data and to infer decision rules. We've seen a real explosion of data and computing power in recent years, which has spurred on this subfield of AI, machine learning. And there's a subfield of that, deep learning, which is a set of statistical techniques using neural networks composed of layers of neurons that loosely model the human brain; that has been behind a lot of advances in gameplay but also in other areas. And I would just wrap up by saying that this really matters because it's not just a technical question. A lot of the policy problems that all of us care about as young leaders, as activists, as scientists, as humanists grow out of the technical architecture of these systems. So for all of you watching, I really would encourage you to try to grapple with some of these technical questions, because you'll find that so many of the tough problems we deal with in our societies grow out of them. And especially, as Zia noted, a lot of modern AI systems fail in ways that are different from older, classical systems.
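The "layers of neurons" structure Andrew mentions can be illustrated with a minimal Python sketch. This is a toy example with hand-picked, made-up weights, not a trained or real system: each neuron takes a weighted sum of the previous layer's outputs and squashes it through a nonlinearity.

```python
import math

def neuron(inputs, weights, bias):
    # One neuron: weighted sum of its inputs plus a bias, passed through a sigmoid.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, weight_matrix, biases):
    # A layer is just several neurons reading the same inputs.
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# A tiny two-layer network: two inputs -> two hidden neurons -> one output.
# All weights below are arbitrary illustrative values.
hidden = layer([0.5, -1.2], [[0.8, 0.2], [-0.5, 1.0]], [0.1, 0.0])
output = layer(hidden, [[1.5, -1.1]], [0.3])
print(output)  # a single value between 0 and 1
```

In real deep learning systems, the weights are not hand-picked: they are learned from data by repeatedly adjusting them to reduce error, and the networks have many more layers and neurons, but the layered structure is the same.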
So it's also important to understand the strengths but also the real limitations and weaknesses, so we can use and deploy them appropriately. All right, thank you so much for that, Andrew and Zia. Now, our second question is actually directed towards Dr. Andrew himself. So hi again, Andrew. From what we understand, your book emphasizes how AI, like fire, can either be a transformative force for good or a destructive force when mishandled. So in your opinion, what concrete steps should democracies take to ensure they harness AI technology in a way that promotes democratic values and mitigates risks to societal and geopolitical stability? You know, I come at this with a lot of humility, because the duality of modern AI and its applications is really mind-boggling. In so many ways, we've seen it do incredible things. One example, from the company DeepMind, is using AI and a technique called reinforcement learning to model the 3D structure of how proteins fold to within the width of an atom. This is really important for things like drug discovery; it could help cure diseases like Alzheimer's or Parkinson's. So it's really extraordinary. And at the same time, earlier this year at Carnegie Mellon, researchers linked AI technologies to a hypothetical laboratory to identify misuses of these technologies, and the system could potentially come up with instruction sets for a World War I chemical agent. In the same way, we've seen laboratories in Switzerland, at conferences, use computational machine learning models to design toxic molecules as a way to warn about the risks of misuse. So the very same efforts that are so promising for global health could also be turned against it. I think this is just a really tough problem.
And I would say for all the peacebuilders out there, we've seen incredible uses of these technologies, right? They could be used to facilitate digital dialogues, to expand the number of stakeholders and voices, to do conflict mapping. They could be incredibly valuable for early warning and information sharing. But you could also imagine cases where these technologies could be turned toward other ends, such as disinformation or misinformation, in ways that put the safety of our peacekeepers around the world at risk. So for peacebuilders, this is very important. If you go back to an understanding of modern machine learning as computers executing algorithms that learn from data, you think about the three core components, data, algorithms, and computing power, and the talent that underpins them. There are different levers of influence, indirect and direct, that officials and young leaders can pull to shape those core components. So this is a field where people's involvement matters, because you can shift the trajectory. Think about data, right? What can we do to facilitate data security, data privacy, and protection? There are smaller-data approaches to AI being innovated that don't necessarily have to leverage massive data sets. There are privacy-enhancing technologies, still nascent but developing, which allow you to answer questions without necessarily gaining access to the individual data of the people in those data pools. There are ways to think about innovating in computing power that could spur a lot of advances, but there are also ways to control large clusters of advanced AI chips so that they're not used by bad actors.
And there are a lot of efforts underway to think about how we can promote democratic legitimacy and safer, more responsible use cases for AI through better incident reporting, better systematic evaluations, and the more fine-grained empirical work being done to support this. So I would just urge everyone watching to think about this: there is an incredible array of positive use cases in the world for AI, and people's voices can really make a difference at the state and local level in their governments, but also multilaterally across the board, in shaping how these core components are developed and in furthering education and workforce policies to make sure we have the right talent thinking about how to develop these models in safe ways. So it's, I think, a challenge for everybody, but also definitely an opportunity to make an impact. That is wildly interesting. Thank you so much, Dr. Imbrie. Now, perhaps to build on that point about harnessing AI responsibly, we'd like to direct this question to Ms. Branka Panic. Branka, you've been working specifically in the area of human security, so we wanted your thoughts on how AI can be employed to actively promote what is termed positive peace. Thank you. Thank you, Oni, for your question, and thanks for the invitation. Congrats to all of the organizers. I think this is absolutely crucial, to bring the new generation more on board. And they're already there, right? We are just trying to amplify their voices more in this conversation. Andrew, Zia, there are so many things; I think this conversation could last for hours and hours, there are so many amazing things that you brought to the table. Let me maybe try to connect the positive peace conversation to this and explain, from my personal journey and our organization, AI for Peace, how we are actually doing this. So working on positive peace is not a new thing, right?
Practitioners have been working on this for more than a decade. But what is crucial to understand is that this entire environment has been changing the whole time, and we saw this through many stages of the development of these technologies. And now, I think Andrew explained very well why this new stage of the development of machine learning is different. Our entire work is positioned in what we call the age of algorithms, right? And as peacebuilders, as sustaining-peace practitioners, we just have to ensure that we are building positive peace with this in mind: that the entire context has changed and that we are working in the age of algorithms. What we mean by that is that we work on strengthening institutions, strengthening structures, seeing how we can build attitudes and build the resilience to sustain peaceful societies. And as we already heard, and we will be hearing this all the time, there is ever more availability of data. We are all creating an amazing amount of data, even now, even with this conversation, just having our phones in our hands, together with improved computer processing and especially lower costs of cloud access and data storage, right? What is different now? Until a couple of years ago, this work was possible only in big institutions that had the resources to do it, big universities or big private companies. Now, with lower costs and better access to the technology, we see other actors coming onto the stage. The entire ecosystem is trying to grow at this intersection of positive peace, or sustaining peace, and artificial intelligence. So AI is becoming a powerful tool, and this is why we are looking in this direction.
And maybe just to share a little bit of a personal journey: when we were starting our organization, we looked at the entire field, and we realized that when you talk about peace and security, militaries are using AI, and military uses are now especially numerous and advanced, in warfare, in situational awareness, in threat monitoring. Everybody in this audience, I'm sure, has heard about so-called killer robots, autonomous weapons systems, along with some amazing applications in healthcare, battlefield healthcare. But we, the founders of AI for Peace, who had previous experience in peacebuilding or humanitarian action, realized that applications in our field are still very limited. And we wanted to change this situation, right? To empower our own field to be more vocal, more active, more informed voices in this space. So we looked into all of the technologies that both Andrew and Zia mentioned: machine learning, natural language processing, image processing, and we looked at how they can be used to collect data, to process data, to uncover different patterns. I think this is an incredible value of this technology. The human eye is not always capable of catching some of the things that machines can, and we often don't have that capacity either. We work in emergencies, right? Sometimes we need to process this material, images that we collect from war zones or data that we are collecting, more quickly, so we can get a sort of helping hand from these technologies. It's a new tool in the toolbox of peacebuilders that we can potentially use to augment our capabilities. Andrew already mentioned some applications, not only potential but current applications, of this technology. We are also using AI especially in conflict prevention. I think emerging technologies allow the revival of the entire conflict early warning, early action field.
We are especially looking into atrocity prevention. Hate speech: hate speech is something that peacebuilders have traditionally been involved in, and now they are trying to improve this work with the help of algorithms, to locate instances of hate speech online, to see when these instances translate into in-person violence as well, and to predict the probability of certain instances of violence happening. Human rights protection: this is a very important field. We are looking into how GIS and satellite images, especially in combination with machine learning, can be used for protecting human rights. Climate and conflict: I'm sure there are many in the audience here who are interested in following climate issues. Look into this intersection as well, of peace and climate, or climate and conflict, and how data science can actually help tackle these issues and uncover the correlations between the two fields. I really want to mention what is crucial and central in our work and what we are also shedding light on: we are excited about these tools, but we are also shedding light on risks and emphasizing the importance of embedding ethics through all stages of this work: design, development, and implementation of these technologies. We, in a way, paused even our own applications of machine learning until we developed an entire workstream around ethics for AI for Peace, which is now publicly available for anybody who is applying data science in this field, to really make sure that we embed ethics by design, that it's not an add-on to this work, not something that we do just at the end of the process. Again, there are many more things that we could unpack, but let me stop here. I started with ethics and ended with ethics just to amplify its importance, but we can come back to some of these issues later on, depending on what the audience is interested in. Of course. Thank you so much, Ms. Branka.
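In practice, the online hate-speech monitoring Branka describes relies on trained classifiers, not keyword lists; as a deliberately simplified illustration of the flagging idea only, here is a toy Python sketch. The watch-list terms and messages below are entirely invented for the example.

```python
# Hypothetical watch-list of dehumanizing terms (invented for illustration).
FLAG_TERMS = {"vermin", "traitors", "purge"}

def flag_messages(messages):
    # Return the messages containing any watch-list term, for human review.
    # A real system would use a trained model and keep a human in the loop.
    flagged = []
    for msg in messages:
        words = set(msg.lower().split())
        if words & FLAG_TERMS:
            flagged.append(msg)
    return flagged

stream = [
    "market reopens tomorrow",
    "they are vermin and must go",
    "rain expected this weekend",
]
print(flag_messages(stream))  # -> ['they are vermin and must go']
```

Counting how often such flags occur over time is what lets monitors look for the kind of spikes that, as Branka notes, can precede in-person violence.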
This discussion truly is at the heart of finding where we can leverage AI in terms of peace and security and ethics, as you have discussed. To explore the constraints surrounding this, and how we may be able to respond to them as well, this question is for Ms. Alexandra Schmidt. Alexandra, hi. Branka has touched upon the implications of AI for peacebuilding and human security. With your background in this subject matter, how would you elaborate on the potential ramifications of AI technologies concerning gender equality and their effects on peace and security? Okay. Thank you so much for this excellent question, Oni. And also, thank you, Zia, Andrew, Branka. I think this conversation is already showing the diversity of perspectives that we have in this field, and also how complex it can be and how many terms there are to unpack. So in terms of the potential ramifications of AI concerning gender equality, numerous studies have shown that widely used AI systems exhibit gender and racial biases, and in these systems, women, and particularly women of color, tend to be disadvantaged. To better understand what implications this has for peace and security, UN Women is currently undertaking research together with the United Nations University, looking closer at AI in the context of the women, peace, and security agenda, with a focus on Southeast Asia. So this is the perspective that I will be coming from today, sharing a few of our preliminary findings on the key themes that have emerged from our research. Here, we have seen that stereotyping, discrimination, and the exclusion of women by AI systems have had significant consequences for peace and security. For instance, as AI primarily processes the information that is fed into it, these systems tend to reproduce general misconceptions and prejudices.
Unless AI systems are trained to account for expected bias, they therefore risk reinforcing stereotypes along factors such as gender, sexuality, age, religion, and so forth. Women are widely stereotyped by AI systems, and many commonly used large language models and image generators have been found to generate misogynistic or sexualized content about women without being prompted to do so: just from a basic prompt, they generate harmful depictions of women. Such stereotypes may reinforce misconceptions about certain communities and can fuel disinformation and hate speech towards those groups, and I know that Branka touched on this briefly. Across numerous countries in Southeast Asia, we have seen that this has normalized violent acts and harassment towards women and ethnic minorities, feeding into pre-existing conflict dynamics. Such content can also be weaponized to build support for certain political agendas. One notable example from the Southeast Asia region is the spread of disinformation that led up to the widespread violence against the Rohingya in Myanmar a few years back. While online hate speech was widespread, rumors about Rohingya men harassing non-Rohingya women were circulated to foster negative sentiments against Rohingya communities at large, which is believed to have fueled and generated public support for the violence that these communities faced. We also have AI-powered recommendation systems on social media, which can further exacerbate the issue of stereotyping. For example, by only exposing users to content they are already engaging with, digital echo chambers can emerge. And such echo chambers may create a fertile environment for radicalization or violent extremism, especially if they are centered on violent content.
Echo chambers, for instance, have been identified as one of the key drivers in the emergence of incel culture, which is commonly characterized by misogyny, homophobia, and racism, and has provoked violent hate crimes and violence against women across the world. In terms of discrimination, we have also seen that voice and facial recognition technologies are less likely to work well for women, carrying a larger margin of error for women's faces and voices. For example, in the context of peace and security, facial recognition systems used in migration management could negatively affect women's ability to move safely across borders, particularly in the case of forced migration, because these systems tend to be less accurate for women. Taken together, these issues are largely a result of the exclusion of women from decision-making, from AI and technology development, and from peace processes at large. It has been found that lower representation of women as developers of AI and technology can cause discriminatory effects to go unnoticed, due to unbalanced training sets and inappropriate testing. The digital gender gap can also exacerbate these issues. For instance, with fewer women having internet access, they leave less of a digital footprint and hence less data for machine learning systems to pick up on. So while AI carries a vast potential for peacebuilding purposes, these bias issues currently make it less likely for such technologies to produce gender-responsive outputs. Ultimately, this may serve as an obstacle for women, and young women peacebuilders, in using emerging technologies to advance their peace efforts. It is therefore important that we look particularly closely at the intersection of gender, peace, and security in the context of AI and technology development, to ensure that access to and the benefit of such technologies is actually equal. That is highly fascinating. Thank you so much for that, Alexandra.
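One concrete way to surface the accuracy gaps Alexandra describes, such as facial recognition working less well for women, is to compare a system's error rate across demographic groups. The following Python sketch is my own illustration with invented evaluation records; the function itself is a generic audit step, not any particular vendor's tool.

```python
def error_rate_by_group(records):
    # records: (group, prediction, truth) triples from some evaluated system.
    # Returns the fraction of wrong predictions per group.
    totals, errors = {}, {}
    for group, pred, truth in records:
        totals[group] = totals.get(group, 0) + 1
        if pred != truth:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Invented face-verification results: 1 = correctly recognized, 0 = not.
results = [
    ("men", 1, 1), ("men", 1, 1), ("men", 0, 1), ("men", 1, 1),
    ("women", 0, 1), ("women", 1, 1), ("women", 0, 1), ("women", 0, 1),
]
print(error_rate_by_group(results))  # {'men': 0.25, 'women': 0.75}
```

A large gap between groups, as in this toy data, is exactly the signal of unbalanced training sets or inappropriate testing that the research Alexandra cites warns about, and disaggregated metrics like this are how such gaps get noticed at all.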
These discussions are so important when we're looking at something that can be considered yet another variable in pushing for the agenda of social equity and peaceful cohesion. And at this point, I just want to express that everyone's responses have really imparted a lot of nuanced information to our audiences at home on AI and peace, an intersection that really does need to have more conversations surrounding it, which is why I'm really excited to flow into our second round of questions. And that brings me back to Dr. Andrew Imbrie. Andrew, we hear a lot about AI and global competition. What are the geopolitical implications of this technology? And on that note, what do you think it means to lead in AI, and how do we measure leadership in that realm in a way that is useful to policymakers? Well, I actually think this topic flows very well from the insightful comments that Branka and Alexandra just made, because the first question I think of when I hear this is: leadership to what end? There are obviously competitive dynamics afoot, and one of the things that all young leaders need to think about is: what are we driving toward? What is our shared goal? And do we want to design, develop, and deploy systems that are flawed in the ways that Alexandra rightly described, limited, biased, reenacting discrimination and past injustices? So this is a real question. And I think there's a tension sometimes between speed and safety, because there's going to be pressure in the market, pressure geopolitically, to field these systems as fast as possible. And it's up to every thoughtful voice to hold leaders in industry and governments accountable for the principles they articulate and to push back against these concerning trends and questions around bias. It's also important to think about leadership around common use cases that can support things like the Sustainable Development Goals.
One of the challenges, I think, is that when we talk about who's leading in the headlines, we often focus on research output: who's publishing more papers or who has more patents. But innovation and tech leadership are really more complicated than that. A lot of times you have to look at quality metrics, and you have to look not just at innovation but at what the scholar Jeffrey Ding calls diffusion capacity: how a technology spreads and is adopted in various processes in society and in our economies. And there's not always a first-mover advantage. Sometimes it's important to focus on safe and responsible use cases; I think that's an essential element of leadership in this space. The other thing I would say about leadership is that so much of the field of AI is open, collaborative, and international. So while countries might be surging ahead in one particular subfield, they're often collaborating extensively, not just with partners but sometimes with competitors. And this has raised all sorts of big debates today about the balance between research security and the openness that has long defined the field of AI. I do think one helpful framework that I've used, something that I've written on with a former colleague of mine, Elsa Kania, is to think about what the core capabilities of these technologies are; what the critical enablers are, workforce development, investment, R&D; and then what the ecosystems are that really tie them together: education systems, immigration, our multilateral diplomacy. Thinking of it on these three levels helps, I think, to structure some of the activism and engagement that young people can leverage. Because at the end of the day, a lot of the accountability is going to come, yes, from algorithmic impact assessments and better incident reporting and much more fine-grained metrics. If we think about all the concerning use cases, we need to think about what the concrete harms are.
What are the metrics that we can develop to measure them and hold people accountable? Some of that will happen in legislation and regulation. A lot of it will have to happen from the public, from young leaders holding people accountable and recognizing the trade-offs inherent in this. And I do think there's so much exciting work that can be done, AI for science applied to weather modeling and climate, but there are trade-offs. A lot of these data centers are using up a lot of energy, and there are concerns about how to weigh this against all of the good work that so many people are doing on combating climate change. So I do think this question of leadership really matters, but it's more complicated sometimes than the headlines would lead us to believe. And it's really important, I would say, for people to get comfortable with open-source data, because there's so much out there right now that would allow all of you to really get a sense of where the capabilities are, where they're headed, which ones are perhaps helpful and ready to transition to applications that could do so much good, and which ones are nascent. And there are great tools out there; just to share one that I think people might enjoy exploring: a center that I'm affiliated with at Georgetown, the Center for Security and Emerging Technology, has a suite of tools called the Emerging Technology Observatory. These are open source, available for anyone to explore, and they can help give you a sense of where this remarkably diverse field is going. I would encourage all the viewers to get comfortable with these tools and to explore them as part of your toolkit for leadership. So I'll leave it there, but I do hope that more and more people engage on this and get comfortable with these kinds of open-source methods. Wow, thank you so much for unpacking that aspect of tech, AI, and policymaking, and for sharing that resource with us.
I will definitely check that out after this. So, to branch off from that line of questioning, we want to direct the next point of inquiry to Ms. Bronca. Ms. Bronca, following Dr. Embry's insights on the geopolitical landscape, what do you perceive as the most crucial thing policymakers and practitioners must comprehend about AI's influence on global peace and security? Thank you. Thank you, Oni. Before I cover that question, I just have to connect to some of Andrew's points. This is such a fascinating topic, actually talking about leadership, and maybe posing it as a question. I'm not sure if the audience has a chance to jump into this conversation through messages, because I'm sure we have an audience based in different parts of the world, but just look at how this conversation around leadership is being shaped differently in different parts of the world, and from different stakeholder groups as well. And I'm super glad that Andrew mentioned this tool, because you also see how young peacebuilders who are not yet in, let's call it, a position of power can take a leadership role and actually shape this conversation. I can't skip mentioning different perspectives here, because I lived in the US, I lived in Europe, and now I live in Latin America: just consider how European leadership in AI development is perceived differently. You asked the question about policymaking and regulation; look at how Europe is leading the way in actually regulating this technology, or even at the previous stage with data protection, the GDPR, and what the effect of that leadership was in other countries. So not only looking, and I appreciate this comment of Andrew's a lot, at the number of papers that are out there or the number of patents: I think the impact of this technology will be shaped from different angles as well, and one of them will be how we actually regulate it.
We see different approaches: the US, which concentrates more on innovation, and then other parts of the world as well. But the invitation to the audience stands: jump into the chat and share your opinions. What do you see in your countries? What is shaping the leadership conversation when you think about AI? To now connect this to your question: I do think, and I would like to bring this into the conversation, especially when we talk about peace and security, that we need to make a distinction between artificial narrow intelligence and artificial general intelligence. I still see in some circles, even as a predominant conversation that is shaping policymaking, especially when we look into risks: what are we scared about when we talk about AI? Quite often the conversation goes in the direction of imagining systems that we still don't have. Many do not agree on whether we will ever reach a level of machine intelligence on par with humans, let alone a superintelligence that is more intelligent than humans. And this is where, you can hear my passion kicking in in my voice, the conversation around peace and security is also being shaped, because so many are simply looking in that direction, fearing systems that will take over the world. That is one thing I want to mention. The other, especially with policymaking, and I'm coming back to the distinction I mentioned previously, is making sure that we understand that the security field is not the same as peacebuilding. Look at the majority of the work happening now, which is really amazing, and I completely support the incredible activism that came from different sectors, especially from activists and civil society, but from Nobel Peace Prize winners as well, engaging in activism against killer robots, against autonomous weapons systems.
I think that policymaking needs to concentrate more on the peacebuilding dimension as well. And we see that happening in a certain format now, for those in the audience who are interested in the United Nations and the conversations happening at the UN level. The Secretary-General started an initiative called Our Common Agenda. As part of this process, I want to flag two really interesting efforts; there are many, of course, at the country level, but these are really exciting because all of the countries at the UN need to agree on them. It's a special format where we are having a conversation around AI. One is the Global Digital Compact, which covers AI topics intensively. The other, and I'm looking at the intersection of the two, is the New Agenda for Peace: how we are looking into peacebuilding, how we actually acknowledge that the setting the entire world is working in, that the UN is working in, is changing, and how we need to adjust to that. The last thing I want to mention, switching now to the practitioner side of the story: you asked the question looking at policymakers and practitioners as a whole, but I do think there is value in making a distinction and looking into what practitioners actually need to do, and I would love to see us jump into that conversation as well. We work a lot with data scientists. We make these efforts of bridging the policymakers and people who have field expertise, who are topic experts but have zero data science skills, with the data scientists who don't necessarily have knowledge of peacebuilding or of the deep causes of the problems we are tackling. And I do think, and I'm sending this as a message to the young peacebuilders in the audience: think about how to enable this communication and conversation, how to build these bridges.
In our projects, I often see that data scientists are also scared of jumping into our field, saying: we know the data science, but we will let the peacebuilders cover certain issues, especially when we talk about issues of ethics. I do think we need to build our knowledge more on both sides. It's not a one-way direction. It's not only that peacebuilders need to become data scientists a little bit; of course, they need to raise their awareness and demystify this technology. But I really think this work needs to go the other way as well. We need to get data scientists excited about this topic, get them on board, and do this work together. That is an amazing take on the ethical and regulatory dimensions of AI from a global standpoint. Thank you so much for that, Bronca. At this point, I want to shift the conversation to the roles young people can play in AI and peacebuilding, and in what capacity they can fulfill those roles. We believe Mr. Ziad can take us through that. Ziad, if we could get your thoughts on this: from a youth engagement perspective, how can AI tools be utilized to prevent the spread of misinformation and hate speech among young people? And in addition to that, how can youth use AI to promote peacebuilding in their day-to-day work? To answer that, I will start with a quotation from Yuval Harari, who said that technology favors tyranny. I think this really is the challenge for any democracy in the world. Free and fair elections will be more and more challenging for any democracy. We have seen in the past how much elections can be influenced by profiling people on social media, specifically people who are undecided, in order to change or influence their voting decisions. The most famous example is that of Cambridge Analytica. Unfortunately, we'll see more and more of those kinds of things.
You can add on top of that deepfakes: videos that are really good at imitating famous people, in their image and in their voice. We know there is a kind of crisis of trust in traditional media in our society. Deepfakes will definitely not help, and will leave many more people less confident in what traditional media say. I'd add, on top of that, that it's becoming so easy now to generate articles that can feed a particular orientation. And, which is the scariest thing, bots will become even better at actually switching people's opinions. Through the interactions they have with people, they will learn patterns that let them change people's opinions, and not just those of undecided people. All of that, I think, poses real challenges for our democracies. At the same time, I also think that AI is a tool, like radioactivity in its time: it was used to develop weapons, but it was also used to cure people. In that situation, and to answer the question concretely, it's really important to empower young people to better understand AI and also to use it, because it's amazing how easy AI tools have become nowadays. I can tell you, five or ten years ago, if I had to program a model for computer vision, it was incredibly complicated, and you spent lots of time just configuring the server to do it. Now it's becoming much, much easier and more accessible. The cost of intelligence has become really low. I'm saying that to any young person and peacebuilder who is listening to us: it's really an opportunity for us to learn this technology. I will say also that it's not something just for people living in developed countries. I come from a developing country, and with all the resources that you have on the internet to learn, it's really something that anyone can now do.
I really encourage anyone to seize the moment, to learn more about this technology and try to use it for the better. There are already some examples of using AI for peacebuilding; one is from the UN, which has used it in Libya and Yemen to better understand public opinion and find common ground for peacebuilding. So there are ways to use it for peacebuilding, and again, the more people train, specifically young people, the better we will be able to counterbalance the bad uses of it. I think there is also one big challenge for any peacebuilder: access to data. Really, what makes these models amazing is the data. There is already some open-source data available, and lots of people are using it, for instance, to detect early signs of crisis or conflict in the world. For peacebuilders, using machine learning has become much easier, but the biggest challenge now is really having access to data that will enable them to use those tools and develop ways to maintain or promote peace. I think this is really the challenge. Just to conclude, because I have heard lots of people speaking about killer robots: I think one thing that could destabilize world stability and peace is really that. That's what concerns me, because, first, any model that you train has bias. It's really about understanding how biased it is and what the critical consequences could be. Sometimes that doesn't matter much, because there are no consequences for humans. But for killer robots, what concerns me is that these models are generally trained with data labeled by humans, and there can be lots of what we call false positives. A trained model will also learn from human errors of the past. That's something that really concerns me, as is the way it will be implemented in the future.
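To make the "early signs of crisis" idea concrete for listeners, here is a minimal sketch, not any specific deployed system, of how open conflict-event data could feed a simple early-warning flag. The weekly counts are invented for illustration; real projects draw on sources like public conflict-event databases and use far richer models.

```python
from statistics import mean, stdev

def flag_anomalies(weekly_counts, window=4, threshold=2.0):
    """Flag weeks whose event count sits far above the recent baseline.

    weekly_counts: chronological list of reported-event counts.
    Returns indices of weeks exceeding mean + threshold * stdev
    of the preceding `window` weeks.
    """
    flagged = []
    for i in range(window, len(weekly_counts)):
        baseline = weekly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if weekly_counts[i] > mu + threshold * sigma:
            flagged.append(i)
    return flagged

# Hypothetical weekly counts of reported incidents in one region.
counts = [3, 4, 2, 5, 3, 4, 3, 18, 4, 3]
print(flag_anomalies(counts))  # prints [7]: the spike week stands out
```

The point of the sketch is Ziad's: the statistical machinery is simple and freely available; the hard part is getting trustworthy, timely data to put into `weekly_counts`.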
The further we go, the more we will see governments that want to use it. Until now, we have kept a human in the loop to control it, to make the final decision. But if an adversary, to be more efficient or quicker, lets the machine decide by itself, the other side will do so too. That could create some really challenging issues for humanity in general. That is amazing. I absolutely agree with what you just said, Ziad. I just want to say that we at the USIP Youth Advisory Council do see artificial intelligence pushing for integration into the mainstream use of data and its framing, as well as serving as a tool to further innovation. We see it as a growth opportunity, which makes these discussions that much more important, because we don't want to approach it with apprehension but with curiosity. This brings me to a question that we think Ms. Alexandra Schmidt can really bring home for us, on the question of AI's nexus with next-generation peacebuilding. Alexandra has been able to highlight for the audience the ways AI can be a tool or a hindrance at the community level. We want to ask you: how can AI and emerging technologies be leveraged to support the women, peace and security agenda, with specific attention to young peacebuilders, and what are the challenges that need to be addressed moving forward? Thank you so much for this. I think I will be touching upon a lot of the issues and topics that Ziad mentioned thus far, and bringing the conversation back a bit to the issue of biases that I outlined under the previous question. So while undesirable gender biases are a policy challenge in leveraging AI to promote gender-responsive peace, I do still believe that such technologies have vast potential to support the implementation of the women, peace and security agenda, as long as we approach them with mindful, context-specific, and inclusive design.
As many speakers have mentioned throughout this conversation, one of the great strengths of artificial intelligence is its ability to rapidly recognize patterns and facilitate the analysis of big data sets. And I think this could have significant implications for, for example, conflict early warning and response mechanisms, which Bronca touched upon previously. AI could really enable them to produce real-time snapshots of important developments in volatile contexts. Such technologies may, for instance, be used to detect the spread of disinformation by automatically analyzing content and cross-checking it against verified data. This would allow for a more rapid response to the spread of disinformation, which is an important component of conflict prevention and de-escalation efforts. However, in terms of early warning and response mechanisms, I think we would also need to expand our thinking a bit on which indicators to account for, ensuring that these also pick up on gender dynamics that may be relevant to the issues we want to monitor. For instance, a few years back, UN Women, in partnership with Monash University, conducted research looking closely at misogynistic narratives, both online and offline, and found a strong correlation between the rise of misogynistic narratives and increased support for violent extremist ideals. This might be an important indicator to consider when designing such monitoring systems. Moreover, and I think this is something Ziad touched upon as well, as deepfakes grow more advanced, disinformation campaigns are becoming more sophisticated, with manipulated video and audio content growing more difficult to detect with the human eye. However, AI applications are currently being developed to pick up on the subtle patterns that a human reviewer might overlook.
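The cross-checking idea described above can be sketched in a few lines. This is an illustrative toy, not how production fact-checking systems work; they rely on multilingual semantic models rather than raw string similarity, and the "debunked claims" below are invented examples.

```python
from difflib import SequenceMatcher

# Hypothetical database of claims already debunked by fact-checkers.
DEBUNKED = [
    "aid convoys are being blocked by peacekeepers",
    "the election results were altered overnight",
]

def similarity(a: str, b: str) -> float:
    """Crude surface similarity between two strings, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cross_check(claim: str, threshold: float = 0.8):
    """Return the closest debunked claim and its score if it exceeds
    the threshold, so the post can be routed to faster human review."""
    best = max(DEBUNKED, key=lambda d: similarity(claim, d))
    score = similarity(claim, best)
    return (best, score) if score >= threshold else None

match = cross_check("Aid convoys are being blocked by peacekeepers!")
print(match)  # matches the first debunked claim with a high score
```

The takeaway mirrors Alexandra's point: automation handles the rapid matching, while the verified database and the final judgment remain human work.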
We have seen in our research that the spread of deepfakes, often containing pornographic content, is a commonly used tactic to silence and discredit women public figures, including women peacebuilders and women human rights defenders, and an effective means of countering this type of content would fill an important function in safeguarding women's safety and well-being while ensuring that their work remains undisrupted. This is also something that Ziad mentioned, and it's such an interesting example: we have seen that AI has been used for large-scale digital peace dialogues, such as in the cases of Yemen and Libya. I think this example really highlights the potential that AI carries to make peace talks more inclusive and accessible, particularly to groups who face barriers in accessing traditional decision-making spaces, such as women, and especially young women, who face even more barriers in contributing to traditional peace dialogues. Lastly, we have also seen that women peacebuilders, and in particular young women, rely largely on social media platforms to conduct their work. For example, in the Philippines, back in 2019, young women organized a large-scale social media campaign to encourage young people to vote and support the Bangsamoro Organic Law, which was really important in institutionalizing the peace agreement in the southern parts of the Philippines. I think that is just such an excellent example of how social media was used for this process. However, taking the issue of social media back to the conversation we're having on AI, in this context it is important to recognize that women face disproportionate risks on online platforms.
And while there are automated content filters and reporting mechanisms widely used across social media platforms, put in place to limit the spread of harmful content, these have been criticized by women's rights organizations for not being gender-sensitive or appropriate to local contexts, for example in non-English-speaking contexts. Many are calling for a redesign of the algorithms that are put in place to offer protection, to ensure that they truly are responsive to gender-based threats and harassment. So, just wrapping up, I believe that one of the key challenges we face in ensuring that AI and emerging technologies can be used to promote gender-responsive peace is that the discussions are still taking place in silos. And I think this ties back nicely to what Bronca was saying before as well. While debates on gender and AI, and on AI in the context of international security, are each ongoing, little work is actually being done at the intersection of these fields. As we are currently witnessing an acceleration in the debates on AI in the context of international peace and security, we need to ensure that a gender perspective is included as an integral part of these dialogues, and not just added as an afterthought. This really requires a multi-sectoral dialogue. It requires a whole-of-society approach, with multi-level dialogues between civil society, governments, the private sector, and subject matter experts. And I think that the women, peace and security agenda, as well as the youth, peace and security agenda, can offer helpful frameworks in advancing these debates further. Thank you so much. And that is such an insightful take on gender-responsive peacebuilding and how we can reconcile it with AI development.
Our discussion has been incredible, and it's almost unfortunate that we are heading to the last portion of our webinar, the Q&A session. Before that, to close out, I'd like all of our panelists to address the many young people, young peacebuilders, listening in to this conversation. In 30 seconds or less, what can the next generation do to help shape the trajectory of AI in a positive direction that benefits democratic societies? I can repeat that: in 30 seconds or less, what can the next generation do to help shape the trajectory of AI in a positive direction that benefits democratic societies? Perhaps we can start with Dr. Andrew. I'm very glad to have a chance to share this, and there have been so many good comments so far. I used to have the privilege of serving at the U.S. Mission to the U.N., working for Ambassador Thomas-Greenfield. She convened a UN Security Council discussion where a remarkable Kenyan activist, writer, and thinker, Nanjala Nyabola, spoke. And one thing she said stays with me. She called on us to widen our perspectives and to think generationally, transnationally, and multilaterally, if I remember it correctly. I think that's a powerful charge for all of the young leaders. In the multilateral space, we really do have to think about how to build confidence, confidence-building measures, to ensure that misperceptions and inadvertent escalation don't take us down a dangerous path. And there are lessons to be learned from how we've used wise diplomacy to mitigate tensions in the past. I think we'd be well served by thinking about how to apply that in the case of AI. On transnational perspectives, Alexandra really summed this up well. This is such a diverse space. It's not just governments driving this: many industries are in the driver's seat, along with non-governmental institutions, philanthropies, and state and local governments.
I encourage everybody to actually try to go beyond these silos, as Alexandra mentioned, and really try to challenge one another's assumptions and build ties across countries and societies. And generationally, I do think this time horizon really matters. There are so many problems that fire the imagination of young people around the world: inequality, the climate crisis, the future of work. These are challenges that AI is well suited to contribute to if used appropriately. But I would just end on this: it's deeply relevant to know the context in which you're deploying it. So many of these systems rely on a fine-grained understanding of context. So all of you who are listening, who can be translators between the technical world and the policy discussions: your understanding of regional and local context will really matter to the extent that these systems contribute to the public good and to sustainable development, rather than being turned toward ends that are not stabilizing. So I think this is really a challenge and a call for everybody to raise their voices and make sure that people who are often not at the table in these debates are really part of the conversation. Perfect. Thank you so much, Andrew. And perhaps I'll be able to hear from Bronca. Andrew said it so well, I don't know what to add. This was beautiful, even as a wrap-up, but maybe I can just amplify some of the things that were already said by all of the amazing speakers today, especially Alexandra's points. I really want to speak to young people, but especially to women and girls, having in mind how underrepresented they are in this field, just to encourage them. There is also a responsibility on all of us to create this space for them, to motivate them to take this space and to amplify their voices.
Not only those who have these skills, of course, if they're data scientists, but even if they're not. I just want to encourage people: this is a field that is now cross-sectoral. It's not only data scientists who get to make decisions about which direction AI will go; it's about all of us, no matter what expertise you have. And this is another group I want to talk to and encourage to participate: people from the Global South, or the Global Majority, just to amplify what Ziad already said. It's no longer the case that only people in developed countries have the capacity to do this. I see every day through our work that there are so many amazing voices, especially youth voices, out there in the Global Majority doing amazing work, bringing the power of this technology closer to their countries. They're the ones who understand these contexts, right, Andrew? They have not only professional experience but lived experience in these contexts. And they know how to use this technology for good. So I just want to encourage all of them, all of you in the audience, to take this space and to lead all of us. You are the ones who will be able to keep up with this incredible speed of development. So we count on you. Thank you so much for that, Ms. Bronca. And can we now hear your thoughts on this, Mr. Ziad? Your message to young peacebuilders out there. Yes, I'll restate what I said: I think we really have incredible opportunities for youth to learn, with so many resources now available, free resources on the internet, to master this technology. So I would say there are really no excuses not to get interested and try to develop our skills.
And specifically for young people: maybe in the past, for the previous generation, the challenge was learning to use a computer, and that was hard, but people who were born with computers don't find it so challenging. I think for AI it will be a bit the same. The new generation will be so used to using it that it will become ever more mainstream. So really try to learn these tools; they are much more accessible than before. And the more people use them for good, the better we can balance the bad uses. Because in the end, I think it's really that: if we can have more and more people using it in a positive way, we'll really be able to outmatch the negative consequences that this technology will also generate. Thank you so much, Mr. Ziad. And of course, last but not least, allow us to hear your take on this, Ms. Alexandra. Yeah, I mean, I think so many great things have been said already, and it is difficult to add on without repeating. But I really do want to emphasize that AI has a tendency to be a bit mystified, and I think a lot of people are discouraged from entering these conversations because it's seen as something inherently technical. As all the speakers have emphasized today, all experiences and knowledge are needed in the continued development and understanding of this field. So really, this is a call for young peacebuilders and young data scientists to get together, have a discussion, and see how we can leverage our expertise and work together to solve the issues that our world is facing today. Just to complement that, I would also like to flip the question around and ask: what can we do to ensure that young people have access to the spaces where they can shape the trajectory of AI in a positive direction?
Because ultimately, that is all of our responsibility: irrespective of which fields we're working in, we should make sure to really take the expertise and perspectives of young people seriously and ensure that they also have a place at the decision-making table. That is amazing. Thank you so much, Ms. Alexandra. This has been a really in-depth and fruitful discussion. And honestly, as Ms. Alexandra flipped the question just now: it's conversations like these that decentralize information, from the people who have the expertise to the people who are curious, who want to know, who are apprehensive, who have no idea. So thank you; this is one step, toward many more, in further decentralizing information about AI and peacebuilding. It is now time to actually take a look at what our audience members have been thinking and wondering about, what burning questions they have had while we were all locked in conversation. I've received my first question, and perhaps Ms. Bronca could take this question from young peacebuilders: how can AI be used as a tool for building empathy and understanding? Are there any existing AI tools that can help with conflict mapping? I would separate conflict mapping from building empathy. On conflict mapping: absolutely, there are many, many tools out there that have been used to map either conflict or violence, depending on the different approaches organizations were taking. An additional layer, I think, of the potential of these tools is going a step further than mapping, with all of the limitations that these tools have, and, as I said, providing some type of early warning.
To connect this with Ziad's examples of hate speech: we did a very interesting project in Sri Lanka, where we were actually working with local partners, just to give the peacebuilders in the audience an idea of how they can jump into such a project and join data science experts, to build a model that can predict the probability of hate speech happening online. The same types of tools have been built to predict the probability of violence happening. There is lots of work, Alexandra, in the field of gender as well, trying to map gender-based violence across different continents and countries and to build more visibility into the actual evidence. When we talk about evidence-based policy, this is maybe the new stage of evidence-based policy that big data provides, with the huge amounts of data that we now have. The second portion of this question is more complex: where does empathy come in when we talk about these tools? Some of these elements are purely human, and this is something we saw over the previous couple of years in the development of these technologies in different fields, even diplomacy, where active listening is very important and where empathy comes into play. Where does emotion come into play when we talk about dialogues, when we work with conflict-affected communities, when we work on reconciliation, and so on? This is where the human touch is still needed, and this is why we so often talk about human-centered technologies, human-centered AI: making sure that we do have a human in the loop who brings some of these skills. But I would also like to mention an interesting development that we just witnessed through generative AI. For years, we thought that artistic skills were also inherently human, and that we would never be able to develop machines capable of producing something as impactful as the art that humans create.
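For listeners curious what "a model that predicts the probability of hate speech" means in practice, here is a minimal sketch of the standard technique (a hand-rolled Naive Bayes text classifier). This is not the actual Sri Lanka system, and the tiny labelled examples are invented; a real project uses thousands of posts annotated by local-language speakers.

```python
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs, label in {"hate", "ok"}.
    Returns per-class word counts and class priors."""
    counts = {"hate": Counter(), "ok": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def prob_hate(text, counts, totals):
    """Posterior probability that `text` is in the 'hate' class,
    using add-one smoothing over the shared vocabulary."""
    vocab = set(counts["hate"]) | set(counts["ok"])
    logp = {}
    for label in ("hate", "ok"):
        total_words = sum(counts[label].values())
        lp = math.log(totals[label] / sum(totals.values()))
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / (total_words + len(vocab)))
        logp[label] = lp
    # Convert log-probabilities back to a normalized probability.
    m = max(logp.values())
    exp = {k: math.exp(v - m) for k, v in logp.items()}
    return exp["hate"] / sum(exp.values())

# Hypothetical labelled posts, purely for illustration.
data = [("they should all be driven out", "hate"),
        ("drive them out of our town", "hate"),
        ("the town meeting is tonight", "ok"),
        ("volunteers needed for the cleanup", "ok")]
counts, totals = train(data)
print(prob_hate("drive them all out", counts, totals))
```

This also illustrates Bronka's bridging point: the code is short, but deciding what counts as hate speech in a given language and context, and labelling the data accordingly, is peacebuilding expertise that no data scientist can supply alone.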
And now we see, well, many would still disagree about whether this is art or not, but we definitely see that some of these generative art products are stirring deep emotions in humans, even winning artistic contests. So I would keep it an open question. Let's see. We don't know; technologies are developing in an interesting direction. Who knows what we will see on the empathy side as well in the next couple of years.

Thank you so much for sharing that. It is really interesting. And just very quickly, I have another question here, and perhaps I could direct it to Dr. Andrew. It is a question about regulating AI: we speak about regulating AI, but on the other hand, companies are arguing that this will slow down innovation and maybe let non-democratic countries take the lead. What are your thoughts on AI regulation?

Two quick points. One, just to build on something from the previous comments: the American author Ursula K. Le Guin talked about technology as the active human interface with the world around us. And one of the ways that we can think about empathy and perspective-taking is to realize that there is a sort of human definition of intelligence, but intelligence is all around us in nature, in plant and animal life, in rich ecosystems. As we are innovating and thinking about the next stages of development and algorithmic innovation, I think there are some fascinating cues we could take from the rich world around us, as we try to protect our planet, but also as we de-center ourselves and realize that there are many different ways of doing intelligence. And in that symphony, I think there is inspiration everywhere. So just to share that as a perspective for people to think about. On this question itself, I would say there is a common misperception that regulation and innovation are diametrically opposed.
There are a lot of sectors and cases in history where smart, effective regulation that has benefited from wide public input actually makes for safer products, makes for better trust from consumers, and can actually spur innovation. There are competitive races afoot here. There are many different regulatory debates around AI; many different countries are pursuing this, including different competitors, and many competitors are actively developing regulation around AI. So I think it is actually a misconception that they are diametrically opposed, but regulation has to be smart; it has to be done wisely. And I think a lot of the work actually happens through procedural innovations, as agencies learn how to adapt and issue interpretive guidance for these technologies. So one of the big debates right now is: do we need new institutions, given the rapid pace and complexity of these technologies, or can existing agencies catch up and apply what they have? I think probably some of both is necessary. But one of the challenges with regulation is better understanding what you are trying to regulate and having a better monitoring and measurement infrastructure, so you can really understand where the state of the art is. It is very difficult to do, but I do think that as more and more people become comfortable being part of this debate, as more leaders open up space for this, and as open-source tools become widely available, more and more people will be able to do this in a smart way and to think about risks at different levels, right? The risks inherent to the reliability of these models, the risks of misuse, but also the broader structural risks that we all need to worry about as these technologies become integrated into so many different facets of our societies and our economies.

Thank you so much for that, Dr. Andrew.
And once more, on behalf of the Youth Advisory Council, I am so thankful that the panelists have given their time, their expertise, and their belief in the agenda of youth participation in the realm of AI and next-generation peacebuilding. It saddens me that we are coming to a close. As a testament to how insightful and engaging our discussions have been, we do have a pretty long list of questions that, unfortunately, we will not be able to accommodate for now. But there will be more opportunities in the future for our audiences, so please do not worry. And our panelists do have a presence on social media, so perhaps you could also reach out to them through those channels. It has been a wonderful discussion, and hopefully the start of many other youth-led activities that aim to explore this space. You have been phenomenal. We appreciate your contributions so, so much. Thank you, and more power to you and your work in peacebuilding. And of course, our audience members, you have been amazing, and we hope that everyone leaves with more knowledge and drive for action than they initially had. This has been your host, Oni Papa. Let's all see each other again at the next webinar. Good night, good afternoon, or good morning, depending on where you are in the globe. We wish you a peaceful day. Thank you. Thank you, everybody.