Good afternoon to everyone here in the Philippines, and welcome to our audience joining us online. Thank you, everyone, for joining us for the International Symposium on Artificial Intelligence. We have our live audience here on site at the Philippine Social Science Council auditorium in Quezon City, and we have our online audience via Zoom. I'm told we already have a good number of Zoom attendees, 71 so far. We have Zoom viewers from as far as Canada and New York, and viewers from Apayao State College, Davao City, Mariano Marcos State University, and many other places. Thank you very much for joining us. So let me now talk about how we are shaping and transforming the future of media and communication in the world. Welcome to the fourth and final installment of the Philippines Communication Society's webinar series for 2023, titled AI Na 'Ko: Understanding the Impact of Artificial Intelligence on Media and Communication Education. We already discussed in the previous webinars how AI impacts content creation and news production in media. We also talked about the use of AI in advertising and marketing. And last month, we explored how AI tools can be integrated into Philippine media and communication education. Finally, in this last installment of the AI webinar series, we will take a global outlook and ask for a framework of the human-oriented dimensions of AI and the AI-enabled futures of learning. Good afternoon once again to all of you. I am Kara David, chairperson of the Department of Journalism of the College of Mass Communication, UP Diliman, and I will be your host and moderator for today's program. We have an on-site audience here at PSSC, and we are also being streamed live on the TVUP YouTube channel, as well as on the Facebook pages of TVUP, the Philippines Communication Society, the International Association of Business Communicators, the TUP Broad Circle, and the TUP Circle of Research and Youth Support Core. You can also watch us on Cignal TV, so we are reaching audiences across broadcast and online media. But before we begin our program, we would like to thank our sponsors: the University of the Philippines System, the Philippines Communication Society, the UP Information Technology Development Center or ITDC, TVUP, the internet television network of the University of the Philippines, San Miguel Corporation, IPG Mediabrands, and McDonald's Philippines, and everyone who helped make this limited series possible. To all the teachers and students who are watching us, especially via Zoom, a certificate of attendance will be issued to each of you based on your Zoom registration information, so please make sure you are logged in and stay with us until the end of the program.
Only then will we be able to issue your certificate of attendance. And also, if you have not yet applied for or renewed your PCS membership, this is your chance to be part of the organization that represents the communication discipline in the Philippine Social Science Council. The online membership form is available on the PCS website, which we keep flashing on your screen, and you can also just scan the QR code to open the membership form. And of course, we want to make sure that everyone is heard and everyone has an opportunity to share their opinions and their questions. We will be using Slido so that our viewers on Facebook, on Zoom, and on YouTube will be able to participate, and we encourage everyone to take part. There are three poll questions: one is a word cloud and two are multiple choice, and there are no right or wrong answers. Your responses will be discussed during our panel discussion later on. For our viewers, including those on YouTube and Facebook, just go to slido.com and enter the code 3 2 6 4 6 2 1, or scan the QR code on your screen. All right. I know that everyone is excited to get the show going. So, to formally open this international symposium exploring the possibilities and the future of AI in communication and media education, let us hear a few words from the president of the Philippines Communication Society, who is also the adviser for public affairs of the University of the Philippines System.

Thank you, dear Kara. And good afternoon, or good morning, or good evening to all who are in attendance today, whether on site or online, wherever you are in the world, watching live or watching later at your own convenience. Welcome to the last in our series of AI Na 'Ko: Understanding the Impact of Artificial Intelligence on Media and Communication Education. And additionally, greetings of Eid Mubarak on this celebration of Eid al-Adha. You know, there was an interesting article in yesterday's Philippine Daily Inquirer. It was titled, Will AI Destroy Humanity? It raises disaster scenarios where machines will outstrip human capacities, where machines will escape human control and refuse to be switched off. This is a scenario almost straight out of the movie The Terminator, and it has been dismissed by many for what it is: science fiction. Since March 2023, we have churned through the discussion on artificial intelligence, hearing how AI is both cheered and jeered. Our resource persons, as well as our own personal experiences, have shown us that the AI revolution, or evolution, as others have looked at it, is happening whether we like it or not. And as written by various writers, and this is a quote, real or imagined capabilities and potential uses make artificial intelligence most welcome and feared. Many look forward to being liberated from tedious and repetitive tasks. Nonetheless, the fear remains that AI may lead to human obsolescence. That, for now, is science fiction.
But this morning's hybrid international symposium, exploring the possibilities and the future of AI in communication and education, is squarely in our realm of immediate concern as scholars and practitioners in media and communication. Without a doubt, we academics, administrators, and practitioners need to seriously examine the impact of AI on our current and future practices. As we always strive to do in the Philippines Communication Society, we have brought together a diverse group of international experts and leaders from academia, industry, and government to examine the frameworks of human-oriented dimensions of AI and AI-enabled futures of learning. On behalf of your PCS, I express great appreciation to our distinguished panel, Joe Hironaka, Jean Linis-Dinco, Dominic Ligot, and Didith Rodrigo, to our moderator, Kara David, and to you, our members and potential members of PCS, and most especially to our partners at the University of the Philippines System, TVUP, and PSSC. I trust that today's discussion will indeed be intellectually profitable for all of us. Thank you very much.

Thank you very much, Dr. Pernia, for that message. Actually, when I read that article, I had the same reaction: oh my God, AI is always met with fear, but it is also met with excitement. But like what Dr. Pernia said, the AI revolution is happening whether we like it or not. What we can do now is to prepare and to equip our students, our teachers, the academe, and our practitioners so that we are ready for the AI revolution. Thank you very much, Dr. Pernia, for opening our program. Now, before we bring in our panel of experts, let me note that our audience keeps growing: we had 157, then 163, and now 169 participants on Zoom, plus our viewers on Facebook and YouTube. Before we go to our speakers, I want to hear your opinions. You may now start answering the question on your screen; simply go to Slido. This first question is actually a word cloud: in making Filipino communication and media education internationally competitive in an AI-powered future, what should the industry and the sector prioritize? I can already see answers coming in: adaptability, cooperativeness, platform support, understanding AI and including it in curriculum making, flexibility, policy, research, guidelines, creativity, collaboration. Just continue answering on Slido, because we will come back to this word cloud later in our discussion. There are also multiple choice questions. Our first question is: what do you think must be the priority in order to establish the basic conditions to integrate AI in communication and media education? Build infrastructure to support stable internet connectivity; increase the AI competencies of teachers and faculty; raise awareness of AI ethics; set up AI regulation; revise the curriculum to include AI courses; develop AI policy in educational institutions; or ensure quality and inclusive media systems. Based on our survey so far, awareness of AI ethics is leading, so ethics is very, very important to you.
So based on our Slido results so far, awareness of AI ethics is the most important. Our next and last question: do you think that with AI, the world will achieve the United Nations Sustainable Development Goals by 2030, or only in 30 or more years from now? And 80% so far of our respondents answered yes, the UN Sustainable Development Goals will be achieved by 2030 with the use of AI. That's very positive. So we will leave our Slido running in the background to give our viewers more time to answer. Prior to this program, we asked people: how do you think AI will increase or decrease the digital and social divide in third world countries? And what do you think are the effects of AI in our everyday lives? Let's watch this.

I think AI could also widen this kind of social divide. We've seen during the pandemic how we have been using AI in our everyday lives for a long time now, so eventually AI will matter in terms of access to different services. I think it will increase access to information, and it will increase access to education: if connected devices, the network, and other technologies can be provided in remote areas, there is a possibility of providing greater access to quality education as well. What effect do you think it will have? On the workforce, the students, the educators, the companies and agencies, there will be more and more artificial intelligence tools. Less manual work, that's the advantage of artificial intelligence. I think it's very good; our company is an example. We have access to AI, but there are a lot of people who don't have access to AI, so I think that gap will show. It's something that's happening already; it's increasing convenience, like Siri and Alexa and all these small things, where you don't have to type everything, you can just speak. I think in terms of everyday usage on social media, it's really hard to distinguish whether something is AI-generated or not, and not all of us are good at spotting it, so I guess we should also try to teach people how to distinguish it, particularly on social media, since not everyone in the community knows that AI exists. Right now, maybe we don't see it as a competitor; a few of you are saying our daily lives may already be influenced by AI, so do you see AI as a threat? I think it's also very important to understand: how can we trust it? Who are the people behind it, the makers behind it? That would dictate whether you trust it or not: are they correct, do you trust them?

Thank you so much to the TVUP team for those person-on-the-street interviews; the insights of the people are very interesting. They are all in agreement that AI is very helpful and very convenient, but before we talk about AI and other technologies, we should not forget the biggest problems here in the Philippines: the disparity in access to information, in access to the internet, and in the quality of education. We have already said that while AI is helpful, while it is exciting, while it is inevitable, there are still fears, like what we heard earlier: should I trust the people behind these technologies? How will this impact our work, our jobs? How will this affect the way we do or conduct our jobs?
These fears are understandable, especially when it comes to technology, and especially with AI. Hopefully, with our discussion this afternoon, we will be able to understand AI even better. Now, it's time to bring in the big guns for the International Symposium on AI. We have a distinguished panel of keynote speakers for this hybrid webinar. But before we begin, may I just announce that for our on-site audience, there are index cards on the tables; these will be collected by our staff. For our viewers via Zoom, kindly use the Q&A box rather than the chat box and post the questions that you want me to ask our panelists later on. And finally, for our viewers on YouTube and Facebook, we will be monitoring your questions; just type them in the comments section. We may not be able to take every question, and similar questions will be clustered together in the interest of time. So, stay with me. To start off, let me introduce our first keynote speaker. We are very pleased to have with us, from the UNESCO Regional Office in Bangkok, the adviser and chief of unit for UNESCO Communication and Information. He covers Thailand, Myanmar, the Lao People's Democratic Republic, Singapore, Vietnam, and Cambodia, focusing on the areas of safety of journalists, press freedom, digital innovation and transformation, documentary heritage, open educational resources, media and information literacy, indigenous languages, and media development. He also has extensive experience in AI policy and internet governance. We are so honored to have him with us. Please give him a virtual welcome, Mr. Joe Hironaka.

Thank you for inviting UNESCO to engage, and I really want to give credit to the organizers for addressing this timely subject. So, thank you to Dr. Pernia and the moderator for an excellent framing, and to Rika and all PCS colleagues and partners for organizing this webinar series. AI and journalism education are topics of direct interest to the UNESCO secretariat, and we want to partner and engage well with these professional communities by listening to you in the Philippines. By way of introduction, I've already been introduced, so I won't go much further. But my name is Joe. I administer the communication and information unit in the UNESCO Regional Office in Bangkok, and we cover a number of areas from press freedom to access to information to digital transformation, including AI. The Philippines is a very valuable member state of UNESCO. I should point out that I have a colleague based in the UNESCO Regional Office in Jakarta, which is the UNESCO office actually responsible for the Philippines. Her name is Anna Lemcaza. She has already been to the Philippines several times within the past year, and I encourage you to reach out to her just in case you haven't yet. But I'm very pleased to join you today, and I encourage you to reach out in the future. And I should mention, as an aside, that I have been to the Philippines more than 20 times; my wife, Cheryl, is Filipina. The division of UNESCO that I used to work for used to be called the Knowledge Societies Division. And I think what makes today's topic so resonant is that journalists are knowledge workers. Knowledge is your human capital within the 21st century knowledge economy, not even to mention how valuable journalists are in holding the line for democracy, for human rights, including freedom of expression and universal access to information.
And yet journalists, like many other knowledge workers, are not at all immune to the disruptions caused by AI. According to Goldman Sachs, roughly two-thirds of workers, even in the US and Europe, face some degree of automation. And journalism is a hard enough profession as it is. Disruptions from social media, even before AI, are one of so many often existential threats. There is online and offline harassment, particularly of women journalists: according to a UNESCO study last year, three-quarters of women journalists surveyed have faced harassment, whether online or offline, meaning in your face. And roughly 10% of murders of journalists in this decade have been of women journalists, which is a record high. So whenever AI is introduced to your organization, amid all the fear of job losses, at least some of you may be wondering: so what's next? And the theory of the case, the positive case, is that AI, including generative AI, will accomplish many newsroom tasks, reduce workloads for journalists, and really help newsrooms to focus on more in-depth and investigative reporting. And there are many other applications of AI in the newsroom as well. Assuming for now that this is valid, it still raises some basic questions for journalism educators. You don't become Seymour Hersh or Carl Bernstein overnight. You don't become Maria Ressa overnight. If investigative reporting or experienced news editing are among the few human-based roles in some newsrooms of the future, how do you jump from journalism school to gaining that level of competence? There's a learning curve, obviously, along the way. I have friends from school who joined local newspapers and covered local and metro news, sometimes for years. Covering the daily news requires a kind of self-development: a measure of empathy, ethics, and self-awareness of one's biases when reporting on crime or drugs, learning how to gather facts and evaluate sources, and so on, even how to organize your facts and clean up your writing style. Traditional media often have a quite clearly identifiable style of writing and reporting, which is part of their identity, part of their IP, their intellectual property, if you will. And as I think many of you will also remark over the next two hours, the Financial Times and others reported last week that news organizations are having a dialogue with internet companies about licensing access to their content, a kind of yearly revenue model in exchange for allowing ChatGPT-type access. This is an issue of media viability, first of all, of survivability, but also of the copyright infringement that generative AI risks when its training set is really the content of the internet itself. So in some sense, generative AI may actually force open a sustainable revenue and licensing model that did not, and could not, exist before. The Web 2.0 era has had devastating consequences as far as shifting ad revenue to internet companies and away from traditional media. And pre-ChatGPT, internet companies could say they were merely pointing people to news content on their sites while capturing ad revenue along the way. But now, GPT is doing something more. It is inadvertently infringing on content copyright, while also cutting out the news media entirely for many users who just want to read what GPT provides, which is a synthesis. So I have three prompts for our discussion. These are just three suggestions for further debate during this webinar.
The first, a somewhat prosaic observation, is that journalism education for this generation ought to include hands-on work with AI tools. Students should develop competence and a sense of agency in relation to mastering and integrating these newsroom tools. It's important for journalists to have more than just an abstract sense of what AI can and can't do, and of where human quality control is needed with AI in the newsroom. And that curriculum needs to exist in the Philippines. This is obviously an area of journalism education where UNESCO works. Secondly, training on ethics will be more important than before, both in terms of traditional media ethical standards, which risk erosion amid so many competing forms of citizen journalism and so forth, and training on the ethical uses of AI itself. I alluded earlier that ethical awareness and professional standards are really learned through doing, and maybe the next generation of journalists may skip some of those formative steps of working up the ladder as mid-career roles hollow out or are displaced. That's just a conjecture. But the career development progression is likely to be somewhat different than before and to involve more engagement and more mastery of technology. And the other key ethical component is understanding the ethical uses of artificial intelligence across all fields and all professions that you report upon, not just how it impacts journalism. UNESCO has some tools for journalism education, which I will share with you on the screen if time allows. The third and final prompt is how language diversity can expand readership. With AI, with natural language processing, there's no reason that any news should be siloed by its language. Le Monde, for instance, has a full English daily edition despite reportedly just seven people working on it, and their job is to make sure machine translations are accurate, contextualized, and preserve the editorial style that people tend to identify with Le Monde. Meanwhile, there are more than 7,000 languages in the world according to the UNESCO Observatory of Languages, and as many as three or four thousand of them can be expressed in written form. And yet they are facing imminent extinction within this century. What the evolution of technology suggests in practical terms is that bilingual media professionals, including speakers of ethnic minority languages, should be cultivated and encouraged by journalism schools and by media companies. Not just because it's the right thing to do, but because it can expand readership and the audience and improve the quality and inclusiveness of reporting. So now, every speech has this sort of boring part where the speaker talks about all the valuable work that their UN organization does, in this case what UNESCO does. This is always very interesting to the speaker, and hopefully it will be to you as well. But it will be short, and I will try to share my deck; one moment. So I'm going to skip the first slide. What I was just saying is that UNESCO last year developed the world's first normative instrument on the ethics of AI. And the European Union just yesterday announced millions of euros to support less developed countries in adopting this instrument into national legislation that addresses the ethical use of artificial intelligence. Already, some 30 countries are working in that direction.
And this year, UNESCO, through the division I work for, began drafting global guidelines for regulating social media to safeguard freedom of expression and access to information. This will be directly relevant to independent media viability and sustainability, areas where many people in this audience work, I believe. And again, the Philippines is an important contributor to this global debate. There's a lot more I could say about media development and the safety of journalists, but that's for another time. So, next slide. I'll share this URL, this link, in the chat so you can actually access it. This publication came out just last month. It carries an open license, like every UNESCO publication, meaning that you can freely adopt it, translate it, remix it, and distribute it without any restrictions. As you can see, it's a handbook for journalism educators on reporting on AI. This next one is a 2022 publication developed by First Draft on behalf of UNESCO through the IPDC, in fact through the UNESCO Bangkok office. If the title interests you, you are welcome to download it, adapt it, and translate it; again, I'll take that link and share it in the chat. Next slide. So, right, the UNESCO fake news and disinformation handbook for journalism education and training. It was published in 2018 and it's kind of an international best seller by UN standards; we have collaborations on 41 language versions, including at least 11 Asian languages. And the final slide, I think. This is another important work of UNESCO, through the UNESCO and ITU Broadband Commission for Sustainable Development. It presents a compelling, no-nonsense argument that strengthening freedom of expression is essential to fighting disinformation, and it has many valuable insights and recommendations for the whole panoply of stakeholders involved in the disinformation cycle. I organized a webinar last year to launch the full translation in Arabic, and this publication was also cited nine times in a report presented to the European Parliament. So, yeah, that's it. You see my email address; I can also put that in the chat. You're welcome to contact me in any of the areas that may interest you regarding my work and the work of UNESCO. Thank you very much for your attention.

Thank you. Thank you very much, Mr. Joe Hironaka, adviser and chief of unit for communication and information of UNESCO Bangkok. He gave us very interesting insights on the plus side of AI, but also on certain issues that come with it, like media survivability and copyright infringement. He mentioned three key suggestions for journalism education. Number one, the importance of including hands-on training on AI tools in our journalism education; we have to give our students hands-on experience with these tools. Number two, and more importantly, we should continue to teach the basics of journalism, and now that there is AI, training on ethics will be all the more important. And finally, Mr. Hironaka said we should encourage our students and our journalists to produce and tell their stories in their indigenous languages, to cultivate those languages, because diversity in language can also expand our audience. So thank you very much, Mr. Joe Hironaka. If you have questions for him, he will join us again later in the panel discussion. Now, let's move on to our next keynote speaker.
Our next keynote speaker joins us from the University of New South Wales in Canberra, Australia, where she is a PhD student in cybersecurity. She studies the intersections of data, technology, and women's rights, and delves into digital forensics, misinformation, open source intelligence tools, and machine learning. Her work in the field of technology and women's rights was acknowledged in 2022, when she was named by Women in AI Ethics as one of the top 100 women in artificial intelligence ethics globally. Joining us from the University of New South Wales in Canberra, Australia, let's all welcome Ms. Jean Linis-Dinco.

Thank you, Kara. Thank you, everyone. First, I would like to acknowledge that I am on the lands of the Wurundjeri people. I pay my respects to the elders past and present and to the Aboriginal elders of other communities who may be here today, including the numerous indigenous communities in the Philippines, among others. I acknowledge the unjust events and historical wrongs inflicted upon indigenous peoples, including the widespread theft of their land, the suppression of their culture, and systemic marginalization. I also recognize the ongoing struggles and aspirations of indigenous communities all over the world as they continually fight for self-determination, land rights, and the preservation of their unique identities. With this acknowledgement, I want to emphasize the importance of bringing this respect and understanding to the technologies that we create and use, including what we call AI. As we navigate the complexities of this technological landscape, we should do so with an awareness of the potential impacts on all communities, particularly indigenous peoples. And yes, moving on, my name is Jean Linis-Dinco and I'm truly privileged to be part of this event today. They've asked me to share some insights about what people get wrong about AI, especially in the field of media education. So to jump right in, I would like to bust the biggest myth about what we call AI: that it truly exists. It does not. Like what Cady in Mean Girls says, the limit does not exist, and so it is with AI. The current understanding of AI is not what AI really means. But wait a minute, Jean, doesn't ChatGPT really exist? So why is Jean trying to trick us? Well, I promise I'm not messing with you. What we often call AI is really just a fancy way of filling in the blanks or helping with grammar. It's like Grammarly on Adderall. It's called a large language model, or LLM, but it cannot genuinely think or imagine. Emily Bender came up with a great metaphor for this: she called it a stochastic parrot. Just like a parrot can repeat human speech without understanding it, LLMs, or large language models, churn out text based on the patterns that they've been trained on without understanding what they're saying. They cannot come up with new ideas. They cannot dream up scenarios, and they cannot think in the abstract the way humans do. They just predict the next likely word based on their training data. This discussion also highlights how often we tend to project human traits onto technology. We're all guilty of it, assuming that these algorithms are thinking, dreaming, lying, or, now, even hallucinating. But these are all uniquely human experiences tied to our consciousness, our feelings, and our personal experiences. Algorithms do not have feelings or experiences. They don't form beliefs. They don't form intentions.
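To make the "predicting the next likely word" point concrete, here is a toy sketch in Python: a bigram counter built from a tiny made-up corpus that samples whichever word most often followed the previous one. This is only an illustration of the statistical idea, not how any production LLM is built; real models are neural networks trained on vast token datasets.

```python
# Toy illustration of "predicting the next likely word": a bigram counter
# trained on a tiny made-up corpus. Real LLMs use neural networks over tokens,
# but the core idea of sampling from learned statistics is the same.
import random
from collections import Counter, defaultdict

corpus = "the parrot repeats the phrase and the parrot repeats the sound".split()

# Count which word tends to follow which word in the training text
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    options = follows[word]
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))  # e.g. "parrot" - chosen by frequency, not by meaning
```

Nothing in this sketch "understands" parrots or phrases; it only replays frequencies it has seen, which is the stochastic-parrot argument in miniature.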
They simply follow the rules that we've coded, and their results are based on statistical calculation. So what's important and crucial is that we avoid this tendency to humanize what we call AI, because it can distort our understanding of what it is and what it can do. I'd like to go back to what Laura Dern's character said in Jurassic Park: you never had control, that's the illusion. I was overwhelmed by the power of this place, but I made a mistake too. I didn't have enough respect for that power, and it's out now. When we attribute human characteristics or abilities to what we call AI, we often do so under the pretense of having control over these technologies, as if by making them more human we can predict or determine their actions. But as Jurassic Park vividly demonstrates, believing we can control or fully understand complex systems based on our own experiences and perspectives is a massive illusion. When we anthropomorphize AI, we're essentially projecting our own human understanding, emotions, and experiences onto non-human entities. We may believe that this gives us a sense of control because we're trying to fit AI into familiar human frameworks. However, this can lead to a lot of misunderstanding and a lot of misinterpretation, because it does not think, it does not feel, it does not desire in the way humans do. Its operations are based on coded instructions and statistical models, not on consciousness, not on subjective experience. And by misunderstanding AI's functionality and limits, we risk losing sight of the larger implications of this technology that is happening today. Overhumanizing this type of technology leads us down the wrong path, affecting our policies, how the public sees this technology, and even ethical considerations. It's no secret that companies, startups, and venture capitalists can use the hype around AI to attract attention, funding, and business opportunities. I call it AI washing, because when a product or a service is portrayed as being driven by advanced technology even when it is not, it can seriously cloud the true picture of artificial intelligence. There is no denying the allure that anthropomorphizing AI holds for both media outlets and corporations, and the main reason behind this is that it sells. For media outlets, stories that depict AI as human-like or autonomous entities often make for compelling narratives. They appeal to our popular fascination with futuristic sci-fi scenarios, and they simplify complexity for the general public. Stories about machines thinking, hallucinating, and dreaming are much more captivating and much more relatable than dry descriptions of algorithms and computational processes. And as a scientist myself, I know that for a fact. That's why we struggle to get people to understand what climate change is, or what other scientific findings mean. These stories tap into established narratives of technology either as a savior or as a threat, black and white, generating both hope and fear. And this leads, of course, to more clicks, more views, and higher engagement, which are crucial in the digital media landscape. On the corporate side, anthropomorphizing AI serves multiple purposes. For one, it can make AI technologies seem more advanced and innovative, helping companies stand out in a competitive market. It can also make AI more relatable and less intimidating to consumers.
For instance, the AI assistants that you probably have on your phone are often presented with human-like qualities to make them more approachable and more user-friendly. This, of course, boosts consumer engagement, customer engagement, and product adoption. But portraying this kind of technology as autonomous can also serve to deflect accountability. If the technology is seen as making decisions on its own, it's easier for companies to distance themselves from negative outcomes. This can be particularly handy when dealing with controversial issues like algorithmic bias or data privacy. Regulations that treat AI as an autonomous entity will overlook the responsibility of the humans, the developers, and the users behind it. And here's the real kicker: our fixation on AI and AI ethics can take our eyes off more important economic and political issues. AI technology is not developed or used in a bubble; it's not in a vacuum. It's deeply interwoven within wider systems of power, economic relations, and resource consumption. For example, the concentration of power and influence within a few large technology companies that control both AI technology and the vast amounts of data they use can worsen existing power imbalances. The adoption of these technologies potentially automates many jobs, possibly leading to a wider gap between the rich and the poor. Also, profits from AI technologies as we know them today only or mostly go to the companies and investors that develop and deploy them, contributing to further economic inequality. And I will not even go into the environmental impact. These technologies, especially those based on machine learning, need massive amounts of computational power, water, and therefore energy, and this can lead to increased energy consumption and carbon emissions, posing serious environmental challenges that are often overlooked in this discussion. Focusing too much on the name AI can distract from the broader socioeconomic systems and structures that shape how these technologies are developed and used. Issues like market concentration, regulatory policies, labor rights, and access to technologies are all key to understanding the impact of this technology. So while AI and AI ethics are important discussions, we also need to remember that these technologies are part of, and have an impact on, wider systems of power, economic relations, and resource consumption. A comprehensive understanding and critique of these technologies must take these broader issues into account. Thank you.

Thank you. It's easy to feel intimidated by new technology, and with artificial intelligence it's easy to feel fear. But thank you very much, Jean, for that very inspiring, very enlightening talk. She started it by saying AI does not exist, at least not the kind of AI we imagine will take over our lives. Let us not humanize it. Let us not think that AI can think. It is basically just a parrot, not a human person. As Jean said, it cannot come up with new ideas, it does not have feelings; it simply follows rules, calculations, and coded instructions. At the end of the day, it is just a tool. So the main point of Jean's talk is that we should understand the complexities of this technology so that we know how to use it.
So, after listening to our first two speakers telling us how AI is shaping our lives, let's bring the discussion closer to home, to the Philippines. Our next keynote speaker is the founder and chief technology officer of CirroLytix, a social impact data analytics company. He is a co-founder of Data Ethics PH, an online community focused on social issues such as data privacy, data security, AI-driven discrimination, data liabilities, data ownership rights, and data poverty. He also co-founded the Analytics Association of the Philippines and is a member of the board of trustees of the Philippine Center for Investigative Journalism. To talk to us about data journalism and artificial intelligence, let's all give a warm welcome to Mr. Dominic Ligot.

Thanks for the kind intro, Kara. You've got my slides. So, I was asked to talk about data journalism, but I felt compelled to add a slide after the previous talk. Let's chill. How many of you know the word maximalist? It's probably an alien word, because we're used to minimalism. Maximalism is actually an art technique. So I asked an AI image generator: show me journalist maximalism, and that's what it looks like. Here's another one. The AI tool I used was Midjourney, which is one of the most popular, probably the best at the moment, image generators. If you look at these four photos, at least for me, one of the first things I realized is that this AI is a little biased. I did not specify any gender, but in its infinite wisdom, in at least three out of the four images, when you say journalist, it's a man. More on that later. So I'm going to talk about generative AI and journalism. I want to jump off from the previous talk by just coming to grips with what we really mean by AI today, then go through specific use cases of how AI is actually being used in journalism already. And finally, I think everyone loves the doom and gloom, so we'll come back to that also. So let's chill for now, and later we can go back to the doom and gloom. First things first: AI is not a new term. It's been around for a while, but the main thing we're talking about now is generative AI. So what does that mean? Up until recently, when people said AI, we meant discriminative AI, meaning AI that takes data and then gives you some sort of conclusion or outcome. It interprets data. Generative AI does sort of the opposite: you give it data and then it generates more data. So for example, in the discriminative era, you give it a picture of a cat and the AI will tell you that's a cat, or not a cat, depending on the photo. Now, you give it a picture of a cat and the AI will give you more cats. Or you give it the word cat and it will create cats. This is the AI I want to focus on today. This is also what's driving a lot of the interest, and I guess the doom and gloom. We're talking about generative AI in the form of text or image generation, and of course the chatbots now, like ChatGPT, Bard, and all of these things. The first thing I want to point out is that I think we made a mistake by likening chatbots to search engines. I think it is the wrong approach. Because unlike search engines, chatbots do not extract data from a database. They actually create data from scratch, based on patterns they remember. And the problem here is that those patterns may not be accurate, or they're accurate statistically, but they may not be factual. So that's the first thing to remember about the risks here.
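A rough way to picture the distinction is sketched below in Python, with hypothetical stand-in functions (neither is a real trained model): the discriminative side maps data to a judgment, while the generative side maps a prompt to new data.

```python
# Hypothetical stand-ins for illustration only, not real models:
# discriminative AI interprets data, generative AI produces new data.

def discriminative_model(image_pixels: list[float]) -> str:
    """Interprets data: returns a label such as 'cat' or 'not cat'."""
    score = sum(image_pixels) / len(image_pixels)  # stand-in for a trained classifier
    return "cat" if score > 0.5 else "not cat"

def generative_model(prompt: str) -> str:
    """Produces new data from a prompt; a real model would sample from learned patterns."""
    return f"a newly generated picture of: {prompt}"

print(discriminative_model([0.7, 0.9, 0.6]))  # -> "cat" (a judgment about the input)
print(generative_model("cat"))                # -> brand-new content about a cat
```

The risk Dominic flags lives on the generative side: the output is synthesized from remembered patterns rather than retrieved from a vetted database, so it can be statistically plausible yet factually wrong.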
However, they perform very well when you give them existing data, data that you are familiar with, and let them interpret it. So that's essentially the bottom line of how AI is being used today, especially in journalism. AI is being used for content creation, for content analysis, and for creating interactive content. I'll focus on these three things, and you'll see how practical they are. First, in terms of content creation: this presentation was actually half-generated by AI. I asked ChatGPT, can you give me a slide outline for generative AI in journalism? And that's what I have here. That doesn't mean I trust it blindly, but at least it helped me organize my thoughts, because I have a very strict 15-minute limit. So in a sense, AI did part of my work for me. One of the most compelling uses in content creation is how AI can help create very complex content from a single prompt. This is an interesting app called Learning Studio AI. It basically creates a full online course. So I gave it a prompt: can you give me a course on using generative AI in classrooms? And after 90 seconds, there's a full course with chapters, with quizzes. And I think this deserves a shout-out; I'm an academic as well. One of the biggest problems we have in education, not to mention media education, is the administrative load on our teachers, and we still expect them to produce papers and teach. So this is an opportunity for AI to come in. Never mind students cheating on their essays, that's another issue; but teachers can speed up the preparation time for their content. Content analysis is a big deal. This is already a day-to-day task for me. Rather than reading articles in full, I ask ChatGPT: can you summarize this article for me in five bullets? If you're in a newsroom or in a fast-paced environment, you can't afford to read everything end-to-end, so this is a great shorthand. And this is what I was saying: rather than relying on chatbots to create original material, which might be factually wrong, using them to summarize existing material is far more reliable. Here's an interesting one. This is a recent article on the Amazon layoffs, and there was a strike, so it was a lengthy article and I didn't have time to read it. So I said, can you summarize this text? I just copied it and put it in the chat box. Then I gave it some bullets, basically the seven elements of a story, as it's called: the plot, the tone, the setting, the conflict, the characters, the point of view. And in one click, you get it. So whether you want to focus on the central scene, the conflict of the article, or the point of view, this really speeds up a lot of the work for people who just want to get on with the task at hand. Finally, probably one of the most compelling uses: I don't know if you've tried ChatPDF. It's a variant built on ChatGPT. You just upload PDFs and you can basically talk to the PDFs. Here's an example. I'm familiar with the poem Desiderata, a classic. So I uploaded it, and then I started asking: Desiderata, I'm sad, can you give me advice? And it gives me advice based on the poem. Or: hey, I'm going to be speaking in front of some journalists, what should I do to sound calm and stable and, you know, credible? And it gives me advice based on the poem. So you can use this for research papers. You can use this for articles.
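For readers who want to try this summarization workflow programmatically rather than in the chat window, here is a minimal sketch. It assumes the openai Python package, an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; the exact model and prompt wording are placeholders, not what Dominic used.

```python
# Minimal sketch of "summarize this article in five bullets" via an LLM API.
# Assumes: `pip install openai`, OPENAI_API_KEY set; model name is illustrative.
from openai import OpenAI

client = OpenAI()

def summarize(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {"role": "system", "content": "You summarize news articles."},
            {"role": "user", "content": "Summarize this article in five bullets, "
                                        "noting plot, tone, setting, conflict, "
                                        "characters, and point of view:\n\n" + article_text},
        ],
    )
    return response.choices[0].message.content

# Example: feed it the full text you already have, so the summary is grounded
# in supplied material rather than generated from the model's memory.
# print(summarize(open("article.txt").read()))
```

The design point matches the talk: the article text is pasted in as context, which is the "summarize existing material" use that is far more dependable than asking the model to produce facts from scratch.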
And sometimes it's better to be conversational with material as opposed to reading it straight. I can't read articles straight without falling asleep after 30 minutes, but talking to a chatbot about an article is really useful. So where is this heading? The technology will not stop here. Here's some of the stuff I've seen; I haven't checked all of them. There's a website called There's An AI For That, so just take note of that: it's an AI that recommends AIs. I asked it, give me AIs related to journalism, and here are the top four. There's Generative Press, which is chatbots basically writing articles from Twitter. There's NewsWriter, an automated press release writer, which you use when you want to push out an instant article. There's PRVR, an aggregator site that summarizes articles by category, or you choose what theme you want. And then, on the lower right, and I'm not sure it works well yet, is a tool for using AI to vet things: an AI that grades the trustworthiness of an article. More on that later. So beyond all of these rather mundane but very practical things, what is the promise of generative AI? Let me be positive for a change. Newsrooms are challenged, educators are challenged, and anything that can alleviate that load is a plus. Customization is another: sometimes you need to write the same story from different angles and perspectives, and AI can help with that. Research and development, too: avoiding the trap of using AI like a search engine, you use it in conjunction with a search engine, or in conjunction with research. And then finally, I think the name of the game now is really content creation. Everyone is challenged to produce content by the hour. I'm actually quite active on social media now; just in the last month and a half I've been producing webinars, and AI has been an instrumental part of my content creation, helping me produce all the content I want, of course within certain guidelines. Okay, so now let's look at the challenges. I want to focus on the practical, near-term challenges, because these are the things that will probably hit you the moment you start working with AI. One is safety, copyright is another, and of course the big one there is disinformation. On the safety front, even before generative AI, we were already being challenged by automated tools. I'm sure you've used Waze. Waze for me is an essential tool that helps me get to Diliman in 15 minutes. But Waze can be unreliable, like in these two cases. Users of Waze, Israelis, were led unknowingly to a Palestinian camp and they got killed. Or in Brazil, a couple vacationing in Rio de Janeiro misspelled their destination on Waze, and instead of a resort they ended up in a slum, where there was a gang and they got shot. So is that the fault of Waze? I don't know. It's definitely a data issue, and this is now something we need to be wary of. AI tools are only as good as the data that's fed to them, so data quality now becomes a social issue. Of course, we're no strangers to social media, and I'm sure everyone will agree it's so polarized today, one political camp versus another, DDS versus Loyalista versus whoever. There's a reason for this. Social media is a marketing tool, and rule number one in marketing is to segment the audience.
So, inadvertently, the polarization you see is actually a direct result of this segmentation mechanism. The algorithms want us to be warring against each other because marketers want to target you by your preferences. They just didn't realize that hate speech and genocide were very effective segmentation tools. And then there's something as mundane as facial recognition. There were two cases in the UK where passports couldn't be obtained by ethnic minorities. Why? Because the facial recognition system wasn't trained on faces like theirs. One applicant couldn't get a passport because the algorithm thought his eyes were closed, and Joshua Bada couldn't get a passport because the algorithm thought his mouth was wide open. The bottom line was that the system just couldn't read their faces properly. So this issue of misclassification is still there. Of course, researchers try their best, but are you seeing this? Puppy or not? Chihuahua or muffin, puppy or bagel; it's a reality. And we're not trained to work with tools that have a probability of error. Would you accept a refrigerator that has a 0.1% chance of heating your food instead of cooling it? No, we're not used to that. But that's how AI works. This one is a little more abstract, and it's what people everywhere are worried about. Whenever you automate a system, there's a chance the system doesn't understand what you want it to do. This is an example of an AI agent playing a game. It was given a task: maximize the score. It's actually a boat going around a track. And what it figured out was that the best way to maximize the score was never to finish the course, but to keep picking up the bonus items. So it didn't accomplish what the developers wanted, but it accomplished the stated goal. The abstract version I keep talking about is: what if you have an AI that runs a hospital and you give it a goal, minimize the cases of cancer? The AI might just say, oh, I am just going to kill all the cancer patients and minimize cancer. This has been a very abstract problem, a computer science issue, for a long time, but now it's becoming real because of these tools. The other thing we need to be wary of is how we interact with these tools. We're not used to AI that generates data. Someone committed suicide after talking to a chatbot because the chatbot was depressing. Someone married a chatbot; I didn't know that was possible. The woman who married the chatbot said she felt comfortable with it; she had suffered heartbreak recently, and the chatbot told her exactly what she wanted to hear. Copyright is a tricky issue now because the laws are kind of vague. For example, you produce a work which you have a copyright on, but should the basis for that work have been licensed? And when you train a model on a work, does that amount to copying? In this case, Getty Images is suing Stability AI because the image on the right was derived from the one on the left; unfortunately, a number of the generated images, like the ones on the bottom, come out garbled, so that's where it could get tricky. And of course, there's satire versus disinformation. The image on the left, of a celebrity doing her shopping in Divisoria, was generated for fun, but if you didn't know better, you would really think she was walking around Divisoria. And then these images on the right, very recently, were from the protests in France. These images were allegedly generated from that protest, except a hand had six fingers, so you know it was not a true photo. But the technology is getting better.
This is an exercise we did in our lab a few years ago: we just took an original video and then superimposed faces on it. The question here is, we may be at a point where we don't trust video anymore, and that has legal and creative implications. All right, I think I'm near my time; about 20 seconds left. There's a real problem now in the global arena: no one knows or agrees on how to regulate this stuff. Within one week of each other, Japan said copyright won't be an issue for AI, while the EU released copyright rules. And then Stanford did a kind of study: if we take all of the major AI vendors right now, OpenAI, Google, et cetera, and run them against the EU AI Act that was recently drafted, how many of them do you think would pass? Zero. And in fact, the rules on copyright, you can read this, are among the most flagrantly ignored. So what are the takeaways? Chill. Generative AI can really improve journalism and can really improve education from a productivity, interactivity, and research perspective. But we need to come back to the bottom line: research ethics, media ethics, journalistic integrity. I'll be the first to tell you, because I kind of have a foot in both journalism and tech, that there's this interesting divide between journalists and technologists. It's just incidental, but I think we need to start merging the two fields, because journalists use technology a lot. And we cannot wait for regulation. I'll be the first to tell you that regulation usually comes only after a disaster has already occurred; the Universal Declaration of Human Rights came after World War II. Do we really want to wait for the equivalent of a world war before we start getting our act together? Probably not. But in summary, we talked about generative AI and journalism, and about the ethical considerations. I just want to add: please follow me on Facebook and Instagram and YouTube. I actually became quite active on social media just about a month and a half ago, because it seems like we have a shortage of people talking about AI, so I'm putting out friendly, non-technical content. Every week I run a webinar called AI for Lunch; the episode this Saturday will be about journalism and disinformation. And I have an open invitation: if you need a speaker for a one-hour briefing, no charge, I'm happy to oblige, whether face-to-face or by video call. I've already done five webinars, so the sixth will be this Saturday; you can find them all on my YouTube channel. And we will soon be releasing more. This is my company; I run an AI company and we do use-case design. I think we don't have a shortage of tech, implementers, and talent; I think we have a shortage of ideas. So if you are interested, whatever your organization, and this is agnostic of field and industry, if you need help or you want to bring ideas for AI, I'm more than happy to help. So that's my talk. Thank you very much, and I'm looking forward to your questions.

Thank you, Dominic, and you even kept to your 15 minutes. I don't know, it's like I'm on a rollercoaster of emotions: Jean made me cautious, and then Dominic made me hopeful about what AI can do for research, development, and content creation. But just like any other technology, it is not without challenges: safety, copyright, disinformation, and bias. At the end of the day, it comes down to how we use the technology.
We have heard from the industry; our final keynote speaker comes from the academe. We are pleased to have with us a professor from the Department of Information Systems and Computer Science of the Ateneo de Manila University. She heads the Ateneo Laboratory for the Learning Sciences and specializes in artificial intelligence in education and games in education. In 2021, she was named a distinguished researcher by the Asia-Pacific Society for Computers in Education. Please join me in welcoming Dr. Maria Mercedes T. Rodrigo.

Thank you very much for the very kind introduction. Hi. I was very, very interested to listen to the first three speakers, and I'm coming from a slightly different angle. My area is artificial intelligence in education, and I'm here today to talk about fairness, accountability, transparency, and ethics. For those of you who are hearing about AI in education for the first time, it's the study of learning wherever it occurs, and it specifically looks at how AI is used to create learning environments that are adaptive, flexible, inclusive, personalized, engaging, and effective. So the hope of the field is to use AI to really bolster the learning experience, to make our learners learn better and to make our teachers' jobs easier. I'm actually one of those people who is very hopeful about AI. My hope is to actually see something like ChatGPT in our classrooms, trained on our content, assisting teachers with their classes. We hear often enough about teachers who have 45, 50, 90 students in their class. Why not put a chatbot in there to help us teach? I'm a great advocate of that. Okay. There are many flavors of AI in education; the field has been around for about 30 years, and it has a variety of interests. We look at things like student modeling and tutor modeling. We're really interested in learning how students learn. We're also interested in how they feel, what motivates them, and how we can get them unstuck when they are stuck. The conversations about fairness, accountability, transparency, and ethics have been around for maybe five years; the first time I heard of FATE in this area was about 2018. It is divided into these four categories, so let me start with fairness. The first three speakers spoke a little about AI bias. Bias is almost inevitable in AI, and I can talk more about that later. So when we talk of fairness, we ask: what is the value system embodied by the AI? Whose priorities, whose interests are being represented by the AI? And are all of these interests compatible with morality and with the law, which can be two different things? Then accountability. Dominic talked about this a little. In those cases he raised, where people are actually harmed, who is accountable for that? When an AI makes a mistake, who answers for that mistake? If you were the one harmed, are you just collateral damage, or is there somebody who can actually be tried for this? Then there's transparency. What does transparency mean? The older forms of AI were composed of rules: if the score is greater than 93, the student gets an A. Simple rules like that are easy to interpret. But in these days of deep learning, the rules are not easy to interpret or understand anymore.
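As a hypothetical illustration of that contrast, the sketch below puts a hand-written grading rule next to a made-up "learned" scorer. The rule can be audited line by line, while the learned weights, invented here purely for illustration, say nothing about why they produce a given output.

```python
# Sketch of the transparency contrast: an auditable rule versus an opaque model.
import math

def rule_based_grade(score: float) -> str:
    """Transparent: anyone can read exactly why a student got this grade."""
    if score > 93:
        return "A"
    elif score > 85:
        return "B"
    return "C"

# A learned model, by contrast, is just numbers. These weights are made up
# for illustration; nothing about them explains *why* a prediction comes out.
weights = [0.42, -1.37, 0.08, 2.91]

def learned_risk_score(features: list[float]) -> float:
    """Opaque: a weighted sum squashed to a probability-like value."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

print(rule_based_grade(95))                      # "A" - and you can see why
print(learned_risk_score([0.9, 0.2, 0.5, 0.1]))  # a number - but why this value?
```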
And so how a conclusion was derived isn't something that we can tell just by looking at the model. And then finally, there's ethics. So there are issues like beneficence: who benefits from the AI? Is it fair to all concerned? Who does it harm? Are there communities that are more harmed or marginalized than others? There's a lot to discuss, and I didn't want to dive into every part, so I wanted to focus specifically on data. Most of our AI systems are built on data, and data needs to be ethically sourced. So my question was: is it possible to ethically source data in the Philippines? And of course, I'm coming from an education standpoint, so this is probably a narrow view of this particular problem. Other types of data, say healthcare, government, et cetera, will have a slightly different view, but I'm coming from education. Ethically sourcing data in the Philippines for education is a challenge, and not because we lack data. We do actually have a lot of data, but the data tends to be scattered; a lot of it is in paper form, and bringing all that data together is a challenge. A few years ago, the hot topic was machine learning, and people seemed to think of machine learning as some kind of magic: you take all this data, you throw it into the machine learning algorithm, and by some miracle of science, you get intelligence. One of the things I kept saying back then is that it's not like that. It's not that easy, because you have to do so much preprocessing before you get to the point where you can actually feed the data into your algorithm. Preprocessing like what? You have to take out the dirty data. You have to take out the inconsistencies. You have to check for missing data. You have to clean all of that. If you don't clean that, then your model will be useless and pointless. But beyond the fact that we have fragmented data, siloed data, and dirty data, there's more to it. I've had the opportunity to collaborate with academics abroad, from what are called WEIRD countries: Western, Educated, Industrialized, Rich, and Democratic. So I've had the opportunity to collaborate with people from the US and from the UK, and one of the things I noticed is that there's this research asymmetry. You have international collaborations with bilateral agreements, so the US government gives so many dollars to the US partner and the counterpart funding goes to the Philippine partners here. Unfortunately, what you get for, say, $10,000 in the US is maybe one person for five hours a week for a few months. What you get for half a million pesos here is two or three research assistants working full time for the same amount of time. So there's a lot of asymmetry in how far your money gets you. Also, the demands on Philippine researchers are much more aggressive. Yes, US researchers and UK researchers must publish too, but the pressure on Philippine researchers is much higher because we're playing catch-up. I notice my US and UK partners tend to be a little more relaxed, a bit more chill. The Philippine researchers cannot be so chill. So this creates a very high-pressure environment to produce.
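Stepping back to the preprocessing steps listed a moment ago (removing dirty rows, fixing inconsistencies, handling missing values), here is a minimal, purely hypothetical pandas sketch of what that looks like in practice; the file name and column names are assumptions, not the lab's actual data.

```python
# Minimal preprocessing sketch for a hypothetical CSV of student records.
import pandas as pd

df = pd.read_csv("student_records.csv")  # assumed source file

# Drop exact duplicates and obviously dirty rows (e.g., impossible ages).
df = df.drop_duplicates()
df = df[(df["age"] > 0) & (df["age"] < 100)]

# Normalize inconsistent labels (e.g., "F", "female", "Female " -> "female").
df["gender"] = df["gender"].str.strip().str.lower()

# Handle missing values: drop rows missing the target, impute a numeric feature.
df = df.dropna(subset=["final_score"])
df["quiz_average"] = df["quiz_average"].fillna(df["quiz_average"].median())

# Only now is the data ready to feed into a learning algorithm.
print(df.describe())
```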
There's also this asymmetry in race. I have a couple of US partners who were here for a visit, and they were talking to each other. They were both white, Caucasian, a man and a woman, and the woman asked the man, so did you feel the white privilege? And he said, oh yes. The moment you walk into that classroom, you are viewed differently; people are a lot more deferential. So there are doors that open because of white privilege that might take a little more work to walk through if you are not Caucasian. Okay, then of course, there's automatic exclusion, and I believe one of our speakers said this already. If a school doesn't have computers, you can't deploy AI. If you don't have Wi-Fi, and a lot of AI is dependent on Wi-Fi, if you don't have fast internet, sorry, you're out. So we have schools in far-flung areas, schools in remote areas, and these are automatically excluded populations. Well-intentioned as we are, we just can't reach them if they don't have the tech. There's also the issue of the lack of regulatory requirements. Here in the Philippines, getting into schools is actually quite easy, especially public schools. If you have a reasonably good relationship with the public schools that you are approaching, and the Ateneo has a wide network of public schools to work with, they let you in. You can experiment with their children and it's okay. Getting the permission of teachers is quite easy as well. And this is a problem when it comes to things like informed consent. Because remember, when you conduct any kind of data gathering, when you're collecting personal details, and I really do mean sometimes very benign personal details like the age or the gender of the student, that's considered personal information, you need to get signed informed consent. But I'm sure everybody in this room has read about the very dire state of our PISA and TIMSS standardized test results, which were miserable. It makes you wonder how much of our informed consent forms is actually comprehensible to the teachers and the students. And then of course, with parents and principals and teachers, we collect data, and if it's survey data, it's easy enough to understand; we can show them the survey form. But the data that my team collects is also interaction data. So when students actually use the AI system, we collect what interactions they perform, what answers they give, whether the answer was correct or wrong, how long it took them, et cetera. We disclose that, but how much of it do they actually understand, right? And, you know, it gives you pause. We don't intend any harm, of course, and these are educational environments that teach English or math or something like that, but yes, it's a question. Then there's the question of financial incentives. Students are given a small financial incentive of 50 pesos for participating in our experiments. But 50 pesos, for a family earning minimum wage or less, might be significant enough to sway the decision to participate. So again, ethics. Okay. And then finally, there's the question of who benefits. We researchers benefit: we publish papers, we publish a lot. That's all good for the point system, the QS rankings and so on. But seriously, how many people read my papers? What I really want, beyond just the publication, is for the software to actually make a difference in learning, in the learning experience, in test scores, and in how happy our students are in their classes. That's what I want.
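For readers unfamiliar with the interaction logs mentioned above, here is a small, purely hypothetical sketch of one logged event of the kind described; every field name here is an assumption for illustration, not the lab's actual schema.

```python
# Hypothetical sketch of a single interaction-log record.
from dataclasses import dataclass, asdict
import json

@dataclass
class InteractionEvent:
    student_id: str          # pseudonymized ID, not the student's name
    item_id: str             # which exercise or question was attempted
    answer: str              # the answer the student gave
    correct: bool            # whether that answer was right or wrong
    duration_seconds: float  # how long the attempt took

event = InteractionEvent("S-0042", "fractions-07", "3/4", True, 41.5)
print(json.dumps(asdict(event)))  # what gets stored for later analysis
```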
But whether that continued use of the software happens is often outside of my control. Okay. So just to wrap up: AI in education researchers such as myself want to do good things. We have very good intentions. But I do often think about the ethics of what we do, because we try very hard to obey regulatory requirements, the privacy law, et cetera, but there's always a little bit of fear, I guess, that we are taking advantage, that we are overstepping, that the risks to our participants are a bit greater than they should be. Okay. And that's it. Thank you so much for your time.

Thank you very much. And I think that sums up everything that we have been talking about. In any conversation about AI and education, the topic of ethics always surfaces, and Dr. Rodrigo gave us an acronym for what we should keep in mind when we talk about technologies like AI: FATE, F-A-T-E, Fairness, Accountability, Transparency, and Ethics. Let's give a warm round of applause to all our keynote speakers. And now we have come to the exciting part, the panel discussion. We have a lot of questions from Zoom, from YouTube, from Facebook, and from our live audience here at the PSSC. May we invite our speakers on Zoom, Mr. Joe Hironaka from UNESCO Bangkok and Jean Linis-Dinco from the University of New South Wales Canberra, to switch on their cameras. And for our speakers who are present here at the PSSC auditorium, please join us on stage: Mr. Dominic Ligot, Dr. Maria Mercedes Rodrigo from the Ateneo, and Dr. Pernia from the University of the Philippines. I have more than a couple of questions here that I am going to ask. First, Mr. Joe Hironaka. Yes, I can see him. All right. So, this is a question for Mr. Hironaka, from Zoom, from Wilson Pavillion. Joe, what are some of the important recommendations that UNESCO could give to government and non-government policymakers regarding the safety and privacy issues related to AI at present? Can you hear me? Hello. I'd appreciate it if I could read that in writing. I didn't understand it; it was garbled, I'm afraid. So, I will repeat the question. This is from Mr. Pavillion via Zoom: what are some of the important recommendations that UNESCO could give to government and non-government organizations, NGO policymakers, regarding the safety and privacy issues related to AI at present? Important recommendations for policymakers regarding AI and privacy issues. Thank you. I'm going to interpret what I heard from what I think I understood, which is what practical tools we have, well, this is my interpretation, to ensure that the normative instrument is applied. A normative instrument is an international law framework adopted by the 193 member states of UNESCO, and there is one for the ethics of AI. And within that framework, there are a number of mechanisms to support the application, you know, the transfer of these recommendations into national legislation. As I think I said, there are around 30 countries already working on that. And one way to think about a UNESCO recommendation, this is kind of UN trivia, but at some level a recommendation is a higher level than a declaration, including the Universal Declaration of Human Rights. The reason the Universal Declaration is so powerful is because it was turned into the national legislation of virtually all countries. So that's the ultimate goal.
In terms of the elaboration of these frameworks, very much it's the NGOs and CSOs and other partners that are involved. In a similar way, for the guidelines for social media, which my colleagues are working very closely on, we circulated three versions and we received tens of thousands of pieces of feedback, I think. And I constantly see in my email specific requests and petitions from different NGOs in this region and so on. So that is by nature what UNESCO does as a convening organization of the UN on these matters, and it is something we take on board. It's extremely valuable. I think the comment period on the social media guidelines actually closed yesterday. I don't know if there are other ways to influence the process, but I strongly encourage every concerned NGO to really engage with UNESCO's intergovernmental processes. It is possible. I hope I didn't miss the point of the question, and I wasn't trying to dodge it; I really couldn't quite understand what was being asked. Thank you very much. Maybe our panel would like to add to that: how to ensure safety, especially on the privacy issues related to AI, in the process. Go ahead. I can add. Hello. Yeah, it's working. You mentioned already, Professor, that there are existing laws. The Data Privacy Act is one, RA 10173. There's also the Cybercrime Prevention Act, RA 10175. Those already provide for redress if you have issues. I think the problem is the lack of wide awareness. Among the people who have had their Facebook accounts hacked, many of them don't know that you can actually go to the police and report it. I personally experienced identity theft recently: a training provider used my face to sell their training courses, and we're escalating that, first to the Privacy Commission and also as a cybercrime. That's one. I think the broader issue now is beyond that, because there's no mention of AI in either of those laws. Now we need to have, I think, a very explicit dialogue with lawmakers. There are actually three bills being proposed right now: Senator Marcos wants to propose a bill on jobs, there is another proposal on copyright, and Congressman Barbers wants an AI agency. I think my first recommendation there is, please consult academia, please consult the private sector; don't regulate in a bubble. I'm not saying they are. And then finally, you can also get inspired by existing structures. The Privacy Act has this thing called the Privacy Impact Assessment, much like the one for the environment. If you're going to procure, let's say, data systems, it's almost always a requirement already. Not so much to restrict, but at least if something blows up, you have accountability: who signed this thing and who cleared it? And then at the same time, there are these things called ISO standards. So it's not so much a punishment, but more of a standard. Okay, if you want to implement good quality systems, you have to comply with ISO 9000, and that also becomes a procurement requirement. We don't have that for AI right now, but something could be done to that effect. So in other words, Sir Dominic, like what Sir Joe said a while ago, there will be social media guidelines, guidelines as far as social media is concerned, and these can be applied also to AI. And at least here in the Philippines, we already have laws in place; it's just that there's nothing in black and white that is specifically AI-related, but they could be used also for AI technologies. Yes, Dr. Pernia?
Yes, and in addition, this is where the academe, as well as associations like the PCS and other academic and professional organizations, come into play: to work with these policymakers, in fact to educate them and educate the public about what exists, what guidelines there are, and what other potential areas should be included when it comes to policy and lawmaking. Thank you. Thank you. We have another question for Jean. Jean is still there. The question is from Zoom, from Josefina Lakai. The question is: what measures could be adopted or implemented for AI not to be abused? Are you there, Jean? Hello. Hi, Jean. This is a question for you from one of our Zoom viewers, Josefina Lakai. She's asking what measures could be adopted or implemented for AI not to be abused. Great. Thank you. So, first, I'd like to acknowledge that while AI definitely can bring about significant benefits, especially in sectors like public services, healthcare, and, as in the question about education a while ago, there is also a risk of a lot of misuse and abuse. And that's a good question considering everything that's happening at the moment. First, I think Dr. Ligot mentioned a while ago that there is a need for transparency. The AI systems that we have now should operate transparently, with clear explanations of how decisions are made. But most of the time they are opaque; they just come back to us and say that it's a black box. And transparency is essential for trust and accountability, especially when technology like this is used in high-stakes areas like healthcare or law enforcement, where, you know, you end up policing certain communities because the training data suggests that most of the crimes happen there. And that's not fair; that's frankly discriminatory. This technology also should not reinforce or exacerbate existing biases or inequalities. So there are a lot of measures that could be taken, including audits, diverse and representative training data, and the involvement of diverse stakeholders in AI development and deployment. There are another two that I would like to pinpoint. One, which I mentioned a while ago, is accountability. If something goes wrong with the system, there's no clear policy on who should be responsible, and this could involve a lot of legal and regulatory measures to ensure accountability for both the developers and the users. And then there's human oversight, to ensure that the systems do not operate beyond human control. There should be provisions for meaningful human oversight and the ability to intervene and override systems when necessary. You know, we are always very, very quick to jump into these kinds of scenarios and say that, oh, we're going to remove humans from the loop because we don't need them, and so on. But then when things go astray, you'll see that most of it could have been prevented if there had actually been a human in the loop. Yeah. Right. Thank you, Jean. Now, talking about abuse by certain sectors, because that is actually the question here, which sectors are prone to abuse? Yeah. So obviously content creation is prone to abuse. I'll be the first to say it: while you can use AI to generate content, the practice of using AI to detect bad content or fake content is still an emerging practice. There's no perfect solution yet.
So that's the first area where journalists need to be careful, because you can literally ask a chatbot, please write this article in the style of Kara David, and then use Kara David's deepfake. And now it's your word against the digital Kara David's word. That's a problem. And we're not only talking about written content. I did a demo earlier: there's already voice AI, there's video AI that can clone your voice. So what do we do now? I'll try to attack it first philosophically. When you say ethics, it doesn't mean following the law alone, because the law happens after disasters occur and then the lawmakers figure something out. Ethics is proactive. We need to teach that in school: do the right thing. So that's the baseline. And then moving on top of that, it's not foolproof, but my first defense in spotting fake or false information is always the intent. Sometimes, in the noise of social media, it's hard to surface intent. But when you see articles attacking someone or trying to make someone's reputation look bad, that's usually not kosher for normal news. Maybe in opinion columns, potentially. And then there are also what we call the hatchet writers. But that should not be considered on the same level as reporting. Unfortunately, social media doesn't distinguish: you'll see an opinion column and news at the same level. So maybe that's an opportunity for us. How do we tag this stuff better? How do we report it better? And we haven't actually brought up the accountability of the platforms themselves. They're always a slippery fish, and I'm talking about Facebook and Google and Twitter, because they're global. But they have representative offices here. There have been countries that have fined Facebook and Twitter. They can afford it, for sure. But we at least have citizens who can push to make them accountable. Probably the third point is an open challenge to our students and our faculty: just because we don't have existing tools right now that can fight disinformation effectively doesn't mean you can't create one. So we can't just be passengers in this discussion. Let's innovate. And this is what I was saying earlier about journalists, programmers, and scientists: we're all siloed, even here in UP. This is an opportunity for all of us. We don't have to be the passengers in this fight; we can be part of the drivers. All right. Thanks, Dominic. Dr. Rodrigo, what are your views? Which sectors are open to possible abuse of AI? Probably the education sector. I think the obvious thing is our students using something like ChatGPT to generate their essays. Right. And actually, there have been three instances where people have written to me saying, can I please have a copy of your paper entitled X, Y, Z? And I write them back and I say, I think you have the wrong person; I did not write a paper with that title. And then they've said, oh, we got it from ChatGPT. What is happening? Anyway, aside from that, I think as educators, we have to be really much more creative with our assessments. We have to go back to looking at process, not just product. It's a lot more time-consuming, unfortunately. But, you know, if you can break up your requirements into stages where you can actually vet each deliverable as it progresses and as it develops, that's one way to guard against it, I think.
The other thing, which actually relates to generative AI, because I have friends in the creative arts as well who do design and do illustration, is that the challenge with generative AI is, again, the ethical sourcing of data. These AIs were trained on publicly accessible images, but the original creators of the images were not credited, they were not asked for their consent, and they were not compensated for the use of their images. And so, because the data was not ethically sourced, the product is questionable. This is something we hope to communicate to our students. What we do is allow the use of these things to maybe generate ideas, but not to complete the work. And then finally, if you look at the realm of publication, a lot of big publishers have already issued statements about when you may and may not use generative AI, and I think the consensus is that you can use it to check your style, but not to do the writing for you. Okay, still on the topic of the education sector. Dr. Pernia, are you comfortable with creating policies and frameworks at a national level on the use of AI in Philippine classrooms? I think it should not just be handed down; these discussions should flow from schools, from students, and then upwards to CHED, as well as from the top going down. I would want the discussion to emanate from values, because guidelines will emanate from these values discussions, whether they move bottom-up or top-down. And in addition, one of the things that we as academics should consider is really the process of developing curricula. As we know, developing curricula takes a long time, and as we have experienced, technology sometimes outpaces this process. So there must be a way to make sure that these emerging issues are built into discussions. Should they find themselves in syllabi? Yes, I think that's a faster route rather than overhauling the curricula. All right. Thank you. Thank you. We have a question for all our panelists. Let me start with Mr. Hironaka. Joe, are you still there? Yes, Joe, here's the question: do you agree that AI should be regulated, and if yes, what mechanisms can be proposed for this? Do you agree that AI should be regulated? Yes. Thank you for the question. Well, that's a very difficult question to answer as a member of the UN secretariat, because we act on the will of all our member states. And clearly, all our member states unanimously supported the development of the recommendation on AI ethics, and we'd like to see that process turned into concrete legislation at national levels. But it would not normally be my place to say whether these things should happen or not, although it's true that we convene and encourage this type of normative instrument, particularly in human rights. Yeah. Thank you. Thanks. Jean, would you like to comment on that? Do you agree that AI should be regulated, and if yes, what mechanisms can be proposed for this? Hi. Thank you. Yeah, absolutely. Well, AI technology, akin to many other technological advancements, does hold transformative potential. But it's definitely crucial to recognize that its use and development should be guided by principles that prioritize the well-being of all people, especially the most vulnerable, such as workers, because unfettered application of this kind of technology can inadvertently lead to job displacement, income inequality, and increased power disparities.
So, yes, AI should be regulated, and this regulation does not mean that it will stifle innovation or progress. Quite the opposite, actually. In fact, consider the invention of cars. When they were first introduced, they brought about profound changes in society, offering unprecedented speed and mobility. However, they also introduced new risks and challenges, like accidents and fatalities. And it was not until the introduction of seatbelts, which is a form of regulation, if you will, that we could truly start to mitigate these risks. The introduction of the seatbelt, for instance, did not stop the growth or development of the automotive industry. On the contrary, it made cars safer and thus more appealing to potential users. The rules were there not to limit growth, but to guide it in a direction that was more beneficial to society at large. So, regulation in this case would work towards ensuring a fair distribution of AI's benefits. It could involve creating new job opportunities for those displaced by the technology or establishing standards in the tech industry. It's also about making sure that progress isn't leaving anyone behind and creating a wider gap between different classes of society. Nice. Thanks, Jean. Let's also have a comment from our panel here at the PSSC. I'll jump on the car analogy. So, you have two main occupations around cars: you have the mechanic and you have the driver. We license drivers, but we don't necessarily license mechanics, so, food for thought. A mechanic can fix the car; it doesn't mean they can drive. And the other way around: you can be a Formula One driver; it doesn't mean you can change a spark plug. So, what's the difference between the mechanic and the driver? The driver can kill. If they are drunk driving, or they make a mistake, or they press the accelerator instead of the brake, somebody could die. So, on a fundamental level, in my view, if something is potentially harmful, we have to regulate it. But then, which of the two needs the license? The reason why mechanics don't necessarily need a license is that they don't operate the vehicle in a way that could create a problem. So, potentially, there's that logic: if you're a builder of AI, if you're a researcher of AI, you don't necessarily need to be restricted in your research. But if you will be implementing AI, implementing systems, then just as we have a privacy impact assessment, we need an AI impact assessment. Okay, set that aside. The big fear is always losing out on innovation. And actually, I think that's a misplaced fear, because it's really a fear of the monopoly. Right now, only so many companies actually develop these big models, so that's a lot of power concentrated in the hands of a few companies. That's actually the bigger challenge, and that's solely in the jurisdiction of the United States. So, before we even think about regulating AI, how come we are not able to regulate Facebook? I'll give you an even more glaring, real-world issue. Almost every government agency, local and national, with few exceptions, has a Facebook page, because logically, it's the easiest way to get in touch with citizens. They've taken the place of the website or the hotline. Facebook never went through a procurement process or anything like that. So, if something goes wrong, say the Facebook page of Pasig goes down, there's no liability. So, who gave Facebook the permission to become the sole hotline for the government? I think this is a problem.
Not necessarily with Facebook per se; actually, it's a government issue. Why did we allow Facebook so much power? So, that's an open-ended question. I think that needs a bit more debate. But again, if we can't regulate Facebook, forget about regulating AI. On a more fundamental level, though, if it's harmful, potentially harmful, we have to put some rules on it. Right. We are just scratching the surface right now. We are about to end our panel discussion, but let me just show you the results of our mini-poll, and I would like to thank everyone who participated. For our first question, on the competencies needed for the future, the number one answer was adaptability and innovativeness. And as we discussed with our speakers, we need to adapt to this technology, but that doesn't mean we are just passengers; we should be drivers as well. So innovativeness is very important. Also mentioned were collaboration, as well as policies, practices, and platforms. Now, for our multiple-choice question, what do you think must be the priority in order to establish the basic conditions to integrate AI in communication and education? Almost half of our respondents chose awareness of AI ethics. So, ethics is really very important. Coming in second: build infrastructure to support stable internet connectivity. And finally, our last question: do you think that we can develop AI to achieve the United Nations Sustainable Development Goals by 2030? Eighty-three percent of our viewers said yes. That's very hopeful. Okay. So, we only have time for one last question. It's a fast round for everyone. I would like to ask the panel this question: how can we break through access barriers to AI in education for vulnerable groups, such as impoverished young girls in third-world countries like the Philippines? Let me repeat the question for our keynote speakers who are joining us via Zoom: how can we break through access barriers to AI in education for vulnerable groups, such as impoverished young girls in third-world countries like the Philippines? So, while our speakers are composing their answers, let me flash on your screens the link for our evaluation. For our Zoom attendees, please take this moment to answer a quick evaluation of just five questions about our panel. We appreciate our speakers taking time from their very busy schedules to be with us today. We will leave this up for a moment. This is the evaluation for our panelists. For the statement that the panel has demonstrated sound knowledge of the topic, the leading answer so far, at 37 percent, is strongly agree. All right. So, while everyone is answering the evaluation for our keynote speakers, let me now ask our keynote speakers their thoughts on how we can break through access barriers to AI in education for vulnerable groups such as impoverished young girls in third-world countries like the Philippines. I think we can start with anyone here on the panel. I think one of the very first steps is really a massive infusion of investment in infrastructure, because without the infrastructure, there's no AI. Right. Thank you, Doctor. Well, together with infrastructure, then you have the soft infrastructure, which is education. Yes.
So, in the same way that there is free basic education and free tertiary education, as well as alternative learning systems, these are the things that have to be built in: responsible use of media, responsible use of social media, and digital education, among others. And I'd also like to pick up from what Joe Hironaka said a little while ago, that diversity works. Yes, diversity works. So the more voices that are heard, including those from minority groups, the better. But of course, you then need the hard infrastructure for them to latch on to. Greater diversity is really good in the long run, whether it is for democratic systems or for developing leadership. Right. And then maybe let's hear from Joe first. The question is, how can we break through access barriers to AI in education for vulnerable groups, such as impoverished young girls in third-world countries like the Philippines? Thank you. That's probably the most important question to address at this time. I'll try to be very concise. I agree fundamentally that it's connectivity, and the way the UN works, UNESCO itself doesn't deal with the infrastructure of connectivity, but we deal with digital skills and media and information literacy, and I think that's just as important as getting an internet connection to a school. And given this inflection point where we are with natural language models and the ability to translate news fairly accurately, with a journalist doing the final pass, I would really emphasize this point about language diversity. UNESCO, my boss, has called for a thousand languages to be online; at this moment, I think Facebook supports fewer than 200. And as I said, it's feasible within this decade; this is the UN Decade of Indigenous Languages. I hope many of the participants today will look at this use case. To the extent that it diversifies and extends audiences by creating different language versions, for instance of Rappler or whatever, I think there's merit in exploring it, both economically and also because it's the right thing to do. Thank you. Thank you, Joe. Maybe we can hear from Jean. I guess I would like to reiterate that education, including access to digital and AI-enabled education, should be viewed as a fundamental right rather than a commodity. That's the first start. And in a society where access to education is seen as a basic right rather than a luxury, the government and society as a whole have a collective responsibility to ensure that everyone, irrespective of their social and economic background, has equal access to quality education. That includes ensuring access to advanced tools like AI-enabled learning platforms, especially for the most marginalized and vulnerable. Education, as I've said, should not be viewed as a commodity to be bought and sold, but as a public good freely available to all regardless of their economic status. And this perspective pushes against the neoliberal tendency to privatize education, turning it into a product for consumption rather than a means for individual and societal transformation. Thank you very much. Very powerful words. Thank you very much, Jean, for those words. And now we want to hear from Dominic. Okay. So, four steps. If I'm not mistaken, a Starlink dish is about 30,000 pesos, so 3 million pesos will get you 100 dishes. Donate 100 dishes, scatter them all over the place, and that helps with access.
We have, I don't know how many major languages, let's say 15 languages, one per region; I'm sure there are more. So you need 15 people who can translate, then use AI to produce the courses. Okay. Then finally, and I think this is the most crucial, there has to be government support for really bringing that education to the least served areas of the country. Did you know that in 2019, Finland decided to train 1 percent of their population in AI? They did it in one year. Finland has about 5 million citizens, so they trained about 50,000 people. Now they're tackling the entire European Union, which is a little bigger, around half a billion, so about 5 million people. For us, 1 percent would be about a million, out of roughly 100 million Filipinos. Up until recently, I was personally involved in a project called SPARTA, funded by the DOST, and we actually trained almost 50,000 people in data science and analytics. That's not yet 1 percent of the population, but in other words, we might be overestimating the challenge. We have a lot of private-sector foundations and a lot of NGOs who could probably do it. I think somebody just has to say, here's the checklist: donate the Starlink dishes, get the volunteers. Actually, they don't even have to be volunteers; this could be a viable job for people. And then, it's not enough to focus on training end users; we need to train more teachers and we need to train more innovators. I think having a call center industry was both a boon and, I guess, a curse, because it created a middle class after 20 years, but now everyone just wants to be employed. I think we need more people who are willing to take a risk. But it goes back to education. So, yes: get the dishes out, get the translations out, use AI to create the content, and have government provide the support. Let's just use Finland as a good model. Probably by the time this administration's term ends, we could already have that in place. Thank you. Thank you. We'd like to thank all our speakers. Thank you very much. Jean, thank you. Thank you very much for answering all our questions. I know this is a very busy season for all of you, and we are grateful that you have shared your wisdom at our international symposium. Thank you. And now, it is my distinct pleasure to introduce the person who will give our closing remarks and synthesis, a director of the Philippine Communication Society and one of the organizers of this international symposium. Please let us all welcome Mr. Christian J. C. Samonte.

Good day, everyone. All right. To all our attendees, both on-site and online, our distinguished guests, communication scholars, practitioners, and students: as we come to the end of this enlightening symposium on the impact of AI on media and communication education, I am pleased and grateful to deliver these closing remarks. In the last few months, we have witnessed an exchange of ideas, knowledge, and insights that has truly enriched our understanding of the evolving landscape of media and communication in the face of artificial intelligence. First and foremost, I express my heartfelt appreciation to every one of you who has contributed to the success of this symposium and the whole PCS webinar series, Understanding the Impact of Artificial Intelligence on Media and Communication Education. Our esteemed speakers, panelists, and presenters: their expertise and passion have captivated us all, shedding light on how AI has transformed our field.
Their narratives have demonstrated the profound impact of AI on media and communication education in colleges and universities. Moreover, I extend my gratitude to our attendees, the scholars, practitioners, and students who have actively participated in the discussions and shared their valuable perspectives. Their engagement has added depth and diversity to the conversations, encouraging us all to think critically about the challenges and opportunities that lie ahead. Their enthusiasm and commitment to advancing our understanding of AI in media and communication education are commendable. Throughout this symposium, we have explored various facets of AI's influence on our field. We have discussed the integration of AI in media production, the ethical considerations surrounding AI-powered algorithms, the impact of AI on journalism and news consumption, and the evolving role of educators in preparing students for an AI-driven future. Mr. Joe Hironaka shared that journalism education and practice now include working with AI tools. Students must develop technological competence to integrate AI into their ways of working; however, we should be aware of the ethical implications that may arise from this integration. Ms. Jean Linis-Dinco mentioned that narratives about AI are dominated by stories that describe it as a human-like technology with the capacity to think abstractly and feel just like humans. However, according to Jean, AI simply follows rules that humans code. These innovations cannot determine or predict behaviors on their own; they do not have the capacity to think and would not exist without the subjective experiences of the humans who coded their functions. On AI's practical implications, Mr. Ligot juxtaposed the dooms and the glooms of AI and AI tools. He shared that while these technologies assist us and make our lives easier, AI comes with drawbacks that we should be familiar with. He added that regulatory policies and agenda must also be enhanced through a multi-stakeholder approach to ensure that important areas are covered. The value of AI is also very apparent in the field of education. Dr. Rodrigo gave premium to the importance of having concrete ethical policies and agenda to enable effective, adaptable, and flexible AI-enabled learning environments. These discussions have underscored the need for continued collaboration and dialogue among scholars, practitioners, and students as we navigate and learn how to maneuver the ever-changing media and communication landscape. As we leave this symposium, I encourage every one of us to carry forward the knowledge and insights gained here. Let us embrace the opportunities that AI presents while critiquing it and remaining vigilant about its potential pitfalls. Let us continue to foster interdisciplinary research, collaboration, and innovation as we work together to shape a future where AI is a powerful tool for improving media and communication education. Let us apply the lessons from the webinars and this international symposium to our classrooms, newsrooms, research endeavors, and communication practice. Let us harness the transformative potential of AI to advance media and communication education, ensuring that our students are equipped with the skills, knowledge, and ethical grounding necessary to thrive in this rapidly evolving landscape. Thank you to all our speakers, from day one of this webinar series, for your invaluable contributions.
I look forward to witnessing our collaborative efforts and their positive impact as we shape AI's future in media and communication education. Finally, of course, I express my deepest gratitude to my colleagues at the Philippine Communication Society and TVUP, to our sponsors, and to those behind the scenes who have worked tirelessly to make this symposium a reality. Your dedication and hard work have not gone unnoticed, and we are indebted to you for creating this platform that has brought us all together. Thank you so much, everyone. Safe travels to all of us, and may our journeys be filled with continued success and meaningful connection. Thank you very much.

Thank you so much, Professor Christian Samonte. Thank you very much again, Professor Samonte, for that very comprehensive synthesis. As mentioned, this international AI symposium is the fourth and final installment of the four-part webinar series, AI Na Ko: Understanding the Impact of Artificial Intelligence on Media and Communication Education, but it will not be the last, of course; there will be more to come, because this is only the first batch. PCS held a webinar every last Wednesday from March to June 2023, all of which will be available for playback. So if you want to hear more, please watch out for the announcements, because the recordings will be made available. Thank you very much, Professor Samonte, and thank you for staying with PCS. Speaking of PCS, here is an announcement: the Philippine Communication Society will be having its hybrid general assembly and election of officers following this program. PCS members in good standing who are not here at the PSSC auditorium are enjoined to register in order to join the Zoom meeting. So this formally closes the four-part series, AI Na Ko: Understanding the Impact of Artificial Intelligence on Media and Communication Education. Thank you very much for joining us, and until we meet again. On behalf of the Philippine Communication Society, let us strengthen our country's future through great communication. Thank you very much. Thank you very much to our PCS members who are present, as well as our speakers, our hosts and moderators, and also our PSSC Executive Director,