Thank you for viewing and for joining us from all over the world. That is our chat box, and people are watching from all over the Philippines, with some coming in from other countries. Thank you for joining us today to learn more about the impact of technology giants in amplifying or limiting information flow, most especially during elections. The official campaign period for candidates for the House of Representatives and other elective posts for regional, provincial, city, and municipal officials begins in a couple of weeks, on March 25, so take note of that. And given, of course, the national elections that we are already deep into, how sure are we that voters will be getting the messages they need to make an informed decision at the polls? Welcome to the eighth installment of the National Forum on Communication and Democracy, Philippine Elections 2022. The title of our talk is Elections: Taming the Tech Titans. I'm Roby Alampay, a journalist with TV5, Signal TV, and Puma Podcast, and I'll be your host and moderator for today's program, which you may also view via live streaming on YouTube and Facebook at the TVUP channel and on the Philippines Communication Society Facebook page. We will also have some live tweeting, so please use the hashtag PCSForumSeries, that's capital PCS, Forum Series. Now, before we begin, we'd like to acknowledge the organizations that have made this program possible. We'd like to thank the University of the Philippines System, the Office of the Vice President for Public Affairs, the Philippines Communication Society, the UP Information Technology Development Center, or ITDC, and of course TVUP, the Internet Television Network of the University of the Philippines, and everyone who has helped make this forum series possible. And because we have many faculty and students watching us today, PCS members will be receiving a certificate of attendance as a benefit of their PCS membership.
Now, if you have not yet applied for or renewed your membership, this is your chance to be part of the premier organization that represents the communication discipline in the Philippine Social Science Council. The online membership form is available on the PCS website, which you should be seeing on your screens right now: philkomsoc.org/membership. And of course, because this is a national forum on communication and democracy, we want to make sure that everyone, all of you, has an opportunity to be heard. We'll be using Mentimeter. If you've tried this before, you already know how it works. If not, you'll see it on your screen. There's a QR code; bring out your smartphones, take a shot of that QR code, and take note of the code, 38365469. If you want to join via your laptop, you can just go straight to menti.com, enter the code, and then you will be part of our discussions. We encourage everyone to participate. We will be flashing questions later on and getting your sentiments on these questions. Your answers will then be part of our discussions; we'll be including them as well, and they may prompt some questions in the panel discussion later. And you'll also see how everybody else listening in is responding. By the way, as we said, you can watch over YouTube, you can watch over Facebook, and some of you have registered directly into this webinar. Just take note of that number again for our Menti code: 38365469. Okay, so I think we're ready to set the tone of Elections: Taming the Tech Titans. Let's hear a few words from an applied linguist who has pioneered forensic linguistics research in the Philippines. She is a regular lecturer at the Philippine Judicial Academy, or PHILJA, the professional development arm of the Supreme Court of the Philippines. She's a member of the Technical Committee for English of the Commission on Higher Education.
She's a member of the Board of Trustees of the Foundation for Upgrading the Standard of Education and a past president of the Linguistic Society of the Philippines and the Philippine Association for Language Teaching. She's currently a professor and the Dean of the Faculty of Arts and Letters at the Pontifical and Royal University of Santo Tomas. Everybody, please join me in welcoming Dr. Marilu Madrunio. Thank you very much, Roby, for that very kind introduction. Good afternoon to everyone. I am happy to open this webinar titled Keeping in Check: The Power of Technology Giants, sponsored by the Philippines Communication Society. What makes it more meaningful and relevant is the fact that it is part of a series entitled National Forum on Communication and Democracy, Philippine Elections 2022. I read an article a few years ago that featured an interview with Farhad Manjoo, who writes a column, State of the Art, which explores the latest technology ideas shaping the future. And he described the big technology-based companies, Apple, Amazon, Google, Facebook, and Microsoft, as being the frightful five. Manjoo states that these five giants make up half of the top 10 most valuable companies on the US stock market and that they collectively influence everything else that happens in technology, as well as the rest of the global economy. But why did he describe them as frightful? Manjoo thinks that because these are very big global companies, they, wittingly or unwittingly, can influence government decision-making and people's choices anywhere in the world, including the latter's exercise of their right to choose their political leaders. And because of their sheer size and reach, practically all important aspects of human life today are associated with these companies. Facebook, for instance, also owns Instagram and WhatsApp. FB alone reaches at least 2 billion people every month.
Google, on the other hand, is used every day by people communicating with each other through email, accessing its search engine, and referencing its Google Maps, while being associated with YouTube at the same time. Amazon owns different kinds of media and publishing properties, and also has audiobooks through Audible and even shoes through Zappos. A most interesting question for Filipinos at this time is: will these companies have any impact on local and national elections in the Philippines this 2022? Using the words of Farhad Manjoo, who writes about technology for the New York Times, these big five should do a lot of fact-checking as part of their responsibility to society. Little or no checking allows unscrupulous parties to spread disinformation or fake news through the facilities of these companies. As such, according to Manjoo, more efforts should be made by, say, Facebook in terms of fact-checking, even if it has partnered with fact-checking organizations. Facebook could be in some way the arbiter of what's right and wrong on Facebook, to address the growing fake news problem. Indeed, while these big companies help us in more ways than one by making our daily lives convenient, we should also be worried about the disadvantages that they can bring. A few months from now, we shall be holding our elections, where the use of information and communications technologies will play a crucial part. In many countries around the world, new technologies have been introduced to aid the electoral process. We can only hope that amidst the pandemic, e-voting can be used in casting our votes and even counting them. Certainly this mode of election is new to us, but it can increase the participation of many citizens, especially those who work abroad and those who have physical disabilities. E-voting could be a relevant option when mass gatherings are discouraged, such as during times of pandemics.
Of course, there are serious risks and threats as well that can possibly compromise the integrity of election results, especially when we consider the fact that we do not have experience using it. So it is always best to invest in institutional, proactive interventions to be able to keep pace with the advances of technological change. Before I end, I wish to thank Dr. Gina Lumawig, board director of PCS and the project head of this webinar, for her kind invitation for me to open this event; Dr. Rika Abad for facilitating everything; and Dr. Pena and Dr. Alfonso for the warm welcome. And of course, my warmest congratulations to the Philippines Communication Society for conducting this series titled National Forum on Communication and Democracy, Philippine Elections 2022. Thank you very much to one and all, and may you all enjoy the lectures organized for you today. Maraming, maraming salamat. Thank you very, very much. Maraming salamat, Dean Marilu Madrunio of the University of Santo Tomas. All right, it's time for our Mentimeter. You remember, earlier we encouraged you to join us on Mentimeter. If you registered, you can already see a couple of questions there. We'd like to know what you think. If you haven't joined us yet, go to menti.com, enter the code 38365469, and you can join in what we're doing right now. So if you're there, you're seeing on your screens the first question: what power do the tech giants have in the coming elections? For this question, you may put in up to three words for the word cloud. You will also see that the words increase in size: the more times a word is mentioned, the more of you have the same thoughts, the bigger the text that you'll be seeing on your screen. We already see some answers coming in. We'll leave the Mentimeter poll open for you as we go along with the program, and we'll reveal what you're answering later. There's also a second question.
Do you think big tech has a big influence on your choice of whom to vote for in the election? Do you think that big technology has any influence on who you will vote for or how you will vote in the coming election? So again, you're seeing it there. Okay, so whether you're watching us directly through this webinar or you're joining us via Facebook or YouTube, you can participate: go to menti.com, enter your code number, and you can answer this. Now, as we continue to hear from our viewers, let's hear the word on the street with some person-on-the-street interviews from TVUP. The role of social media is so big in the current election, because through it a lot of people can reach their audience. Social media is useful to share information about a candidate: what are his platforms, what is his background, what is his role in our country. Actually, the tech giants have a big influence, and we're not only talking about here in the Philippines, but also in other countries, globally. When it comes to the tech giants, their responsibility to the public is to contain false information and to maintain accurate and honest information. People believe a lot in the tech giants; if someone has a verified social media account, people think it's credible right away. Given their reach and their power, there should be a system in place where they can filter out fake news or have fact checks on the platform. They can also protect the verified accounts of the people who give them information, so they should be able to fix the problem of these kinds of accounts. We should all be open-minded about every candidate; we shouldn't just follow the crowd. We should watch the news in different forms of media. For the citizens: let's make the right decision. Thank you, TVUP, for giving us a sense of the pulse of the people through these person-on-the-street interviews.
We'll be having a panel discussion, but just a quick note, because some of our panelists may not have understood everything that was mentioned earlier: obviously our people, these young people that TVUP interviewed, see the power and the benefits of social media. They also recognize the unknowns that scare us. We don't know exactly the algorithms that go into what we see and what we consume; we don't know how exactly we get to see what we see. They're also afraid of their sense that there's nobody vetting the information that's going out there. So that's a big question for them as well. And there were some suggestions as well: that maybe the platforms themselves should be investing in the process of vetting, though I'm sure there are some questions there about surrendering that power to the very platforms that they're concerned about. We'll have a conversation about this. We'll get into a roundtable discussion with some experts that we have invited, but just a reminder to everyone: if you're signed up here in our webinar, you can leave your questions and comments directly via the Q&A box at the bottom of your screens. But if you're watching over Facebook and YouTube, you can also just leave your comments and questions in the comments section, and we have facilitators who will try to get those questions back here, backstage into our webinar, okay? So let's get into the discussion here, not only to talk about what Dr. Madrunio discussed earlier in our keynote address, but also the things that we've heard from people and that we will be hearing from everyone. Let me introduce our distinguished panel of experts for a roundtable discussion. We're talking about Elections: Taming the Tech Titans, and we're very privileged to have with us this morning a marketer with broad experience in integrated, loyalty, and CRM marketing, branding, and event management on a regional scale.
He is currently the politics and government outreach head for Asia-Pacific at Meta. Of course, many of us know Meta as Facebook; Meta is now the mother company, the main organization behind Facebook, so we'll probably be using the two names interchangeably in the course of this discussion. But anyway, Roy Tan is head of politics and government outreach for Asia-Pacific at Meta, or Facebook, which means basically he's responsible for working with governments and politicians on how best to use Facebook, Instagram, Messenger, and WhatsApp to connect with their constituents. Let's all welcome Roy Tan. And along with Roy, we'll also have with us an assistant professor with the Department of Communication Research at the College of Mass Communication of UP Diliman. Her research centers on the mediation of platforms, algorithms, and digital technologies in cultural production, politics, and public discourse. Presently she is co-lead of the Digital Public Pulse, an interdisciplinary big data research project that examines the networks, conversations, and interactions of users online about or related to the 2022 Philippine general elections. We will have with us Professor Marie Fatima Gaw. And then rounding out the panel, we're very pleased to have with us a faculty member of the Department of Communication Research at the College of Mass Communication, again at UP Diliman. He uses quantitative and digital research methods to study how network environments shape the communication of political and scientific information. Let's all welcome John Benedict Bunquin, Professor Bunquin. Okay, so let's dive straight into it. I do want to bring in Roy Tan; just so everybody knows, Roy is Singaporean and based in Singapore. This is also why we'll try as much as possible to speak in English, though I'm sure Roy is pretty used to working with a lot of people who slip in and out of Taglish.
So I want to throw this first question to Roy, by way of prompting our conversation. Roy, the title of this series, of this particular episode, of this particular conversation, is Taming the Tech Titans. How do you feel? Many people are acknowledging the benefits, the power, the positive change that social media brings, but we start off with a title that is pretty loaded, not to mention what Dr. Madrunio mentioned about the frightful five. How does that feel, not just to you but to the industry, when people frame discussions around you as needing to tame the tech titans? Thanks, Roby, and thanks for introducing me, and hello to everyone on the VC today. Look, it's a good question. How do I feel? I think it's a good question, right? It means people are critical about large corporations like Meta, and it's good to be critical, because when you're critical, it means you are questioning your sources, you're questioning the content that you see. And especially coming up to the Philippine elections, that's exactly what we want people to be, right, to be critical, and it keeps us on our toes as well. I don't need to go through a list of all the things that Meta has been through the past few years, but it does keep us on our toes. It keeps us wanting to do better and needing to do better. So I think it's a very valid question, a very good question, and I guess we will discuss more about the various things that we could be doing. Professor Gaw and Professor Bunquin, where do you think that's coming from? Even the title: if we imagine the brainstorming and the conversations around organizing this, why did we settle on the phrase taming the tech giants to start with? Okay, maybe I can start. Am I clear with my audio? Okay. Yes. Okay, great. So I think we have to first think about why they're called giants to begin with.
It's the political and economic power. They transcend the boundaries of nations. In fact, there are a lot of metaphors used about Facebook, about YouTube, that their economic power is already more than that of a nation-state. And because of their global reach, their influence is really beyond the expanse of the US or wherever else they are prominent. So when we talk about taming, the assumption there is that the giants are gentle. I don't think they're gentle. They're actually quite aggressive, because they are protecting their bottom line. At the end of the day, they are corporations, and we have to think about them as entities with commercial interests. And even if these platforms have become spaces for political engagement, ultimately it's still commercial in nature. So I think we have to think about the tension that the platforms are trying to balance, and in that balancing act, what is sacrificed or compromised. Professor Bunquin? Right, just to add to Professor Fatima's inputs: again, problematizing the size and the scale of these big corporations, they have practically monopolized the digital infrastructure that we use in our contemporary life. And I guess the word taming can be appropriate given the fact that there were instances in the past wherein there was some sort of control over the kind of information that we get. And again, there were documented cases wherein there were some abuses, in terms of manipulation, perhaps, of the kind of information ecosystem that we have. So in that aspect, the word taming can be quite appropriate for this forum.
Roy, I want to unbundle a couple of words that Professor Fatima and Professor Benedict brought up: control over information, and manipulation of information, or at least of the algorithms that define it. Maybe you could qualify that for us from the perspective of somebody inside the industry. Are those fair terms? Is it control? And for that matter, what's a fair word? I know manipulation can be quite loaded, but at the same time, we know where that's coming from, from an academic standpoint: the algorithms can be tweaked, you can target, and you can try to raise engagement. What terms would you use, if there is any problem with the terms control and manipulation of information, with the idea that you can vet or control what we get to see? Well, I think control is too strong a word. Look, first of all, the goal of News Feed is to show people the things that they care for, that they want to see and are interested in, right? So if I like sports, I'm going to engage a lot more with sports content, and I may see a lot more sports content in my News Feed. And that's the idea, right? We want to ensure that people have a pleasant experience on our platform, that they see the things they want to see, and that we don't serve them things that they are not interested in seeing. It's not too dissimilar to, let's say, e-commerce sites: you purchase something and they give you suggestions of what you may want to purchase after that, right? But having said that, I think it's also important to note that we realize this could be a double-edged sword. We do realize that we need to have policies in place to ensure that what you want to see is also within the bounds of what you should be seeing, right?
Like, if you want to see terrorism content, we're not going to serve that to you. And so we do have policies to ensure that those things are not on our platform, and we continue to iterate and improve on the policies. It's ever changing, ever being updated. I think that's one key thing. But also, I think people need to step back and look at the bigger picture, right? The whole ecosystem of people on the internet is not just normal users; a lot of businesses, NGOs, and CSOs also use the platform, and they are also speaking to certain target groups of audiences. So for example, small and medium businesses in the Philippines are using the algorithm to target people that would like their products, that would want to see their products. And so all that also comes into play. So there are a lot of factors to really think about there. I wouldn't say it's control; it's showing people what they want to see. But it's also important for people to take a step back and look at the holistic picture of the different stakeholders using the platform. So yeah, I guess that's a very easy, top-line way of answering that question. Yeah, and certainly we understand that there are gray areas here. Dr. Gaw, one problem of course is that when it all started, that's really how we saw it, right? And that's how people appreciated it, to be fair. This is a platform that now actively, proactively gives us what we need. I'll get into later the question of how the platforms figure out what we need even before we do.
But I want to talk about that problem: over the past years we've realized that there's so much gray area between, as Roy put it, what we want to see, and the realization that there are things we shouldn't necessarily see just because we want them. And it's one thing to talk about sports, it's one thing to talk about food. But once you start talking about ideology, once you start talking about ideas or politics, then without even having to go all the way to the extreme of terrorism, there are a lot of, let's say, minefields, things we didn't necessarily think through, that could actually be potentially harmful or dangerous. Could you talk about that, Dr. Gaw, and maybe also Dr. Bunquin: about that gray area between the two extremes of simple, benign things that we do want to see and the extreme things that, even subconsciously, we want to see but shouldn't, and the whole range of areas in between that we really haven't thought through? Right, so I'm going to go into a very specific law in the US called Section 230. It essentially shields platforms from being accountable for the content that circulates on their website or in their space. So the fact is that platforms have been pressured to do content moderation only in the past years, not necessarily from their origin, because for them, they're just a vessel, just a space where content from people, from businesses, from politicians can circulate and be distributed. So that's the first thing I want to emphasize: they don't have a legal responsibility, technically, as long as that Section 230 policy is in place. Now, because of the pressure to moderate content, they have developed policies, and Roy's right, it's a growing list.
There's a joke that Facebook's content moderation policy was just a page before, and it just keeps growing and growing, because we discover more of these problematic, questionable, suspicious contents circulating on the platform. I think it's realistic to say that we cannot expect the platforms to govern all of this, because it's just too much. And platforms do two types of moderation: the algorithmic one, where there's automated tagging of content, and then there are the human moderators as well. Of course, there are labor issues involved in that, because they're outsourcing those kinds of work. But going back to my point: we cannot expect platforms to moderate all of that. At the same time, that's already a concession we can move past. One of the things that is quite problematic also is the platforms' definition of what is objectionable in the first place. Of course, they want to uphold freedom of speech; of course, protected speech is encouraged on the platform. That's why the cultures on the platforms are quite rich, because of that free exchange of discourse and information. We definitely want to celebrate that. But at some point, there's also some generosity in how they define, for example, disinformation. If content doesn't really violate a particular content moderation policy, then they don't really want to take it down. So what I'm saying is that the way they define what's objectionable is too narrow, to the point that there is a lot of borderline content. It might be equally harmful, but it gets past the moderation filters of the platform. And I'm not talking about Facebook alone, but also YouTube and the other bigger platforms out there: the articulation of the policies is too narrow to capture the gray and sophisticated forms of disinformation that we see. Okay, Professor Bunquin, any thoughts? Right, maybe I can just make a comment on how platforms really are shaping the way we view reality.
It was mentioned that, yes, they do provide us with the kind of information that we want, based on our characteristics, the way we behave online. And all of these algorithms are really just providing the kind of information that they think would be applicable for us, or that they think we would be accepting of. From a platform perspective, that keeps the system running, it keeps the people engaged. But from an information perspective, it sort of homogenizes the kind of information that we're seeing. If we're seeing more of the same things, more of the same kinds of ideologies, for example, or beliefs, then it cultivates a more homogeneous view of reality that may not be reflective of what's out there. So in essence, that's another area I think the platforms can also be working on, not just Facebook, but Twitter and YouTube as well: the opening of spaces for more diverse exposure, for the lessening of extreme beliefs, and for providing us with that more diverse, more heterogeneous view of reality. In one of my research studies on the networks of the Filipino youth, I found that when the youth are exposed to more diverse information, they tend to become more politically knowledgeable and politically participative. And I guess these are areas and aspects of socialization that platforms can also contribute to. And to that point of encouraging diversity, diversity in content, diversity in thinking, diversity in knowledge: I want to get into one basic question and a basic understanding of, again, the power of the tech titans over us right now, and that's the power of knowledge. I want to throw this to everyone, so that we can have a perspective from an academic and research standpoint and also from inside. But in all honesty, when we try to feed, as well as encourage, diversity, that starts from a base of understanding the users.
How much do platforms actually know, not just about us, but about what we think? There's this, I don't know if it's a joke, but I take it as at least partly true: that the platforms know me better than I know myself. How true is that? Anyone can start. Honestly, I can start. I think that's popular knowledge, because the platforms are the data owners who are collecting data from us. And it's not just the data they collect on their own platform; they're actually buying data from other platforms out there, creating data brokerage systems that allow the platforms to connect a lot of information about us. So there's this concept called algorithmic identity. If I know myself now as Fatima, they have a Fatima version in the back end of their systems that is created algorithmically based on that multiplicity of data points. So definitely the platforms know a lot about us, but those data points they know are only constructed for the metrics that matter to them. So the Fatima identity in the back end is actually for their commercial interests only. It doesn't really care about her politics; maybe it does care about her political interests if those are profitable for them. So let's think about that in economic terms. So definitely they are powerful in that sense, because of the surveillance systems around the digital economy, and admittedly, the digital economy is anchored in that idea of surveillance. In fact, there's a term for it, surveillance capitalism; it's the new form of currency. So I think on that level, they know a lot about us, but the next question is: how do they use that information, that knowledge about us, to engage us and at the same time profit off us? Yeah, I think it's important also to dispel that a bit, because it's not about us or Meta knowing more about you; it's about whether you know the platform.
Because I think it's very important to know that there are lots of settings on Facebook and Instagram that effectively allow you to clear your history, to choose the ads that you want to see. And there are a lot of different settings on our platform that remove what we know about you. So I think it's more about how you can educate yourself about the platforms that you engage in. At least from Meta's point of view, there are a lot of tools that allow you to do that. I can't speak for the rest. So that's one. And I think it's also important to understand, speaking of being very narrow in our enforcement: the more blurred you get about enforcement, the more inconsistently you enforce. And the more inconsistently you enforce, the more issues you have, because if you're inconsistent about enforcement, then what is right and what is wrong? So I think that's also a thought process to think about. Because at the end of the day, when we work at scale to review content on a platform, it's not as simple as understanding the blurred lines, because the blurred lines sometimes can go either way. And without being the arbiter of truth, you're not able to determine most blurred lines. And I think for a company like us to dictate what should and shouldn't be done on some of these blurred lines, that's not a good thing, right? You don't want any corporate company to do that. And that's why we do talk to governments. We do appreciate that there should be regulation, that there should be proper laws on certain topical issues regarding content. But I think a lot of governments out there are not at that stage of understanding even how to use the platform, or how the platform works, to really understand how to legislate. Because at the same time, you also don't want a government.
And there are governments out there that are legislating, or have laws out there, that basically quell or curtail free speech at the end of the day, right? It's the government's point of view and not your point of view. So I think there needs to be a balance, and that's the world that Facebook, or Meta, plays in right now. It is not easy to navigate. There are lots of blurred lines that we want to try to narrow down and make clear, because if it isn't clear for us, then how is it clear for you, and how does it benefit the user at the end of the day? So, at least from my personal point of view and my personal experience at Meta, and I've been involved in a lot of discussions of policies relating to the Philippines and to a lot of countries in APAC, never once has there been a consideration about the bottom line, right? I've never even heard any conversation saying, if we do this, it's going to be bad for the bottom line. It's more like, is this good for the people? Is it good for the country? Should we be doing this? And then, what are the considerations? Can we actually enforce it consistently and properly, to the best of our abilities? Because sometimes it's not black and white, and I think that's important to understand. So at least that's from my personal experience. Yeah. Yeah, I mean, again, to remind everybody of the context here: we have Roy, and Roy is head of engagement, basically, with politics, policy and governments. So there is really that engagement, and I imagine that would also include civil society as well as the private sector. So that point is well taken, but I'd like to push a bit more on the question of who then becomes the arbiter of truth.
Because even when you listen to the man-on-the-street interviews earlier, I think we swing too fast into saying, well, let's give that responsibility to the platforms; make the platforms responsible for being arbiters of truth. But we know there's a slippery slope there as well. Take Russia as an example right now, where content is being censored on both ends, on the legal front as well: they double down on the law, and the platforms can crack down on content there. So you don't necessarily want to go down that path either. Benedik and Fatima, what about that? I mean, do we necessarily want to make the platforms the arbiters of truth, quote unquote? And what's the downside there? Maybe not in absolute terms, but I think there has to be some sort of responsibility on the end of the platforms as well. I think right now, of course, Facebook and Twitter and YouTube are recognizing their roles in all of these things, in the proliferation of fake news, in the spread of misinformation and disinformation. And right now there are great developments when it comes to the crackdown on fake accounts, and even in trying to be one step ahead in terms of preventing the spread and creation of new false accounts. So definitely, I think it starts with that: platforms having, well, they have realized it already, but having that responsibility, and recognizing that responsibility, in terms of their role in the information ecosystem. As academics, we have a term for them; we call them sociotechnical actors. So they're not just vessels of information; they also have an active role, or the agency and the capacity, to filter information, and that includes false information as well. So I guess that's my answer to that.
It's not a hard "they should be the sole arbiters of truth," but the roles that they play in truth in this contemporary information environment that we have, they play a big, big role in that aspect. Just to add to Ben's point, we don't want them to be the sole gatekeeper, but they are so big that they have become the default gatekeeper, right? So in fact, I was thinking that these big tech companies have been saying, "We're just a tech company, we're not a media company; we don't have editorial decisions, we don't have news values in place to see what to prioritize," right? Unlike media organizations. But there's a lot of debate around that: should they be just tech companies and forego editorial responsibility? I think by default they have to take it on, because they are already exercising it through the algorithms. Why are algorithms in place to begin with? There are billions of pieces of content out there, and the space in the news feed, or in whatever platform we consume, is finite, right? So the algorithms are there to sort everything else out. So I think the platforms are understandably mobilizing all these technologies, algorithms and whatnot, to make our experience on the platform better; that's part of their design. But at the same time, one of the things that I think is concerning is how the platforms have become so big that they're now evading accountability. And my biggest issue, really, as a scholar, is that whatever policy is in place to act on disinformation, to act on trolls, et cetera, is happening in the West. Because, you know, Meta, YouTube, Google, they're all situated in the US. Of course, other developed countries like Australia, or the European Union, actually have power over these platforms, because if the EU, for example, bans Facebook, there's a big effect on their operations there. So they have the power to tell Facebook, don't do this, or implement this kind of policy, et cetera.
What's happening is that in the developing side of the world, where I would argue the disinformation crisis is worse, it's not addressed. And there's a lot of frustration about Facebook, and not just Facebook but Google as well, responding to the US Senate, for example, when they have an inquiry, but if the problems are in the developing side of the world, that's something easier to neglect, even though the Filipino nation is actually one of the most active social media users in the world. So there's that discrepancy. One of the things I wish for the platforms to do, and I think they're already doing this to a certain extent, is to develop dictionaries for local languages to make their content moderation more effective, and to have local representatives here. Facebook has started that, with Facebook Philippines opening an office here just a few years back. So they're starting on that aspect too, but I think engagement could be better. Sometimes they come in too late and the damage is done. So I think there are things to improve, but definitely the locus of power is still in the West. I want to get into that discussion about regulation, because when it all started, we celebrated every new platform that allowed people to speak up. When YouTube came and Google came and Facebook came, really, everybody thought, oh, this is a great equalizer, even for small candidates, small businesses, small people with small voices; everybody gets a platform. And then very quickly, over one decade, we also saw the excesses, not necessarily of the platforms, but even of our people. I get your point, Roy, that that's what end-user agreements are for; it's also to remind us of our responsibility. The point about our personal responsibility is well taken. That's big. From my own personal experience, I don't read them; I just click agree, right? And that can be 10 pages long now, and I just click agree.
And the thing is, that's my fault, but the platforms also know this. I guess the question, to the point of Professor Bunquin and Professor Gaw, is, still talking about everybody's responsibility, notwithstanding my own irresponsibility or my own recklessness: what is the platforms' responsibility to protect me from myself? Yeah, no, I think that's a good question. And look, it's not as simple as just clicking agree, because, I totally get you, everyone clicks agree and doesn't read the Ts and Cs. So I'm not talking about Ts and Cs or anything like that, but there are features: if you go to your settings, or just before settings, there's privacy control, privacy checkup. So these are things we have that make it easier for people to identify as soon as they go to the settings option. And what we're doing is a multifaceted approach. What we're doing a lot of in the Philippines already is digital literacy, outreach and engagement, right? So working with both government as well as NGOs and CSOs to try to push out a lot more of this knowledge of how to use our platforms. So a couple of years ago, we launched Digital Tayo. It's basically our flagship digital literacy program in the Philippines, and we continue to iterate on that. Most recently, we've added new modules on civic education. We have rolled that out with DepEd before in the Philippines. Coming into this election, we're also working with the Legal Network for Truthful Elections, or LENTE, to promote some of these digital literacy efforts, as well as with Comelec. We have actually launched a campaign to get people to really think before they share, and to look at their sources and be inquisitive about the things that they see. So we totally get that it's not just, hey, this is the app, off you go and use it.
So we are looking into how we can spread the word a bit more to ensure that people know how to use the platform, right? And on top of that, we're also working on supporting Internews in the Philippines to launch a fact-checker incubator, because we also want more fact checkers in the Philippines, to increase the capacity of what can be fact-checked, and also the standard and the variety of what can be fact-checked in the Philippines. So there's a lot that we're trying to do to increase education and increase the availability of better information on the platform in the Philippines. And all of that is not just in the lead-up to the elections; we've actually done some of this work for the past two, three years already. So it's not like we're rushing to get things done at the last minute. So yeah, those are just some of the things I'm mentioning, but we're also doing a lot of work with journalists as well: incubators to upskill them, to make sure that they're better informed, and tags on our platform for news. So, I mean, those are just some of the things that we have done. Yeah, but speaking as a journalist, one problem is that when you talk about training journalists, and even, for that matter, training fact checkers, you're still addressing it on the level of content, right? And what we're finding is that the problem is not content; the real challenge is still reach. So even when you fact-check something, the fact check will never catch up, not even to 10% of the original fake news that went viral. So I will still bring it back to the question of responsibility. Maybe I'll frame it this way also: we know that one of the extreme examples and dangers that has been demonstrated by this question and this concern is precisely elections, in other countries, in our own country and so on.
Most notorious, of course, is the United States, where people talk very openly now, and forensics have borne this out, about Russia meddling in the politics: not hacking the systems but, as they put it, hacking the voters, because they know so much about the voters and they know so much about how to game the platforms. What are the lessons learned that technology platforms will acknowledge and accept? What changes have taken effect at the level of the platforms, at the level of your algorithms? So, of course, what I mentioned earlier is a lot of what we're doing externally. Like I mentioned before, this whole effort is a multifaceted approach, so I spoke about the external parts of it. Look, internally, we do a lot of backend work that may not always be announced. Maybe I'll mention some of what has been announced, at least to give you an idea. So, things like targeting options, right, in terms of how people can target ads at you. Late last year, we actually announced the removal of certain targeting options, like sexual orientation, and social and political beliefs, causes and figures. So that effectively removes a lot of options for how people target misinformation at users through our ads. So that's one thing that we have done. We have also started to update our policies, especially for public figures and political candidates, around content that sexualizes them or is derogatory: sexualized Photoshopped images of them, degrading images of them. These have been newly put into our policies since the end of last year, where for certain things we act on reports and take content down, and for certain things there's proactive work that we do. So for example, there's a lot of proactive work that goes on against child exploitation in the Philippines, right? And we do proactive monitoring.
And similarly, there are certain keywords, I would say, that we monitor in the Philippines, not just on child exploitation but on other things happening in the Philippines, to look at and see if we can remove content. So there are a lot more signals, and a lot more teams in the company looking at how we can proactively take down content in the Philippines before it's even served up on the platform. And that continues to be done. There are also things like our takedowns of coordinated inauthentic behavior, which we have done in the past in the Philippines and continue to do, and hopefully you'll see more in the lead-up to the elections. So yeah, those are just some of the things that we have announced and that we are doing. But yes, there's a lot of backend work that we're doing to try to improve the experience online. Speaking of the backend, I'll bring in the professors again to weigh in, but I do have a follow-up question, and we're already seeing this in the comments: a lot of questions about the algorithm. I mean, how far have we gotten in making the algorithms of the tech platforms more transparent, and therefore more accountable? Well, I mean, we have actually announced generally how the algorithm works. If I'm correct, towards the end of last year we actually posted quite clearly the signals that we look at and how we actually determine how things should be ranked. That's available in our newsroom, and I know the press covered it here and there. But ultimately, at the end of the day, we can't exactly showcase the whole technical working of the algorithm, right? Because then bad actors can use that to get around whatever we're doing to stop them. So yes, we have actually announced that. I don't have the link here, sorry; I can try to find it and put it in the chat. Professor Gaw?
Okay, so my field of study is algorithmic research. And I think we know for a fact, from the whistleblower account and interview of Frances Haugen in the US, that there's a civic integrity team within Facebook that implements algorithmic brakes, tweaks in the algorithm, to ensure that during elections, at least, they try to surface more credible sources, like news and social institutions. So there's that. But the thing is, they turn it off after the elections. From what we learned from our study, the electioneering has been happening for years now. After 2016, all these candidates were already seeding their ecosystem of content out there. So apart from the algorithm, or at least the safeguards, being turned off after elections, when democracy is still happening, it's not just elections, right? There's also the factor of the algorithm incentivizing hyperpartisan content, and this is happening across the platform. Hyperpartisan content is the scandalous, controversial, clickbaiting type of content that really works and goes viral on these platforms. So there's that issue of the algorithmic incentives, with the economic model of Facebook incentivizing this kind of content in the first place. And the thing is, I know Roy mentioned that politicians are already made accountable for their content, but from our study, we see that it's really the intermediaries, or yung mga tagapamagitan in Filipino, of the candidates, using other Facebook pages, other Facebook groups, that campaign for them. And that's where the gap is in terms of regulation. You can police the official, check-marked account of the candidate, but not the people working for them. And the thing is, you can never know who's working for them and who's an organic community organizer, because the lines are blurred as well.
I know this talk is about tech titans, but at the same time, I think we have to call on the industry that powers the disinformation ecosystem to begin with. Perhaps Facebook, Google, et cetera, can not only create literacy campaigns or create more rigorous content moderation policies, but also try to investigate the insidious campaign industry that supports all this propaganda work. I want to bring in Professor Bunquin, and maybe, Professor Bunquin, I want you to lead us into this conversation now, directly, of what this all means for the current elections and the current political climate that we have. But I want to go back for a quick question to Roy. Roy, when you say that Facebook, for example, engages with government, that you engage with the Comelec right now, what are the topics that you discuss in terms of how to protect the integrity, the fairness and the truthfulness of the elections? What are some things that you talk about, on that level, and on the level also of what government is asking of you? Well, I think one of the key things was transparency of advertising, right? And that's the reason why we continue to allow political ads on the platform and be transparent about them. So, if you don't already know, if you place any political ads, or ads on the elections, in the Philippines, you have to get yourself authorized. What this means is that you have to be based in the Philippines and get yourself authorized with a proper address. And then, after that, when you place such ads on the platform, you need to ensure that you are posting who's paying for those ads. And all these ads go into what we call an Ad Library. So if you just search or Google "Facebook Ad Library," you'll find it. And all of that is up there, transparent for everyone to see, for the next seven years in the archive. What's a political ad?
So any ad that mentions a political candidate or a politician should be captured in there. To the point of Professor Gaw, for example, if I'm not part of the official campaign and I'm in my private group, whether or not you believe that I am... Yeah, but if you are campaigning for a politician, if you are campaigning for a party, if you are campaigning for a group, even though you're not part of that group, it's considered a political ad. So it doesn't matter who you are; if you're placing such ads, you'll have to get yourself authorized, and all of these will go into the library, transparent for all to see. That's my point. Okay, and quickly, to make it clear: an ad here, by definition, is something that you paid for to increase reach, a boost. This is different, for people watching in: your individual endorsements do not count as an ad. So this is one of the things that we have been speaking to them about, and that's the reason why, I guess, we continue to allow people to place ads. Yeah, we found it of value, and they wanted us to be transparent about it, and so we are, yeah. Okay, Professor Bunquin, what does this all mean for the current climate and the conduct of the campaigns and elections? Right. Well, definitely, politicians or political communicators who are well-versed in how these algorithms work can potentially make use of this, and in some instances manipulate it, in whatever way: the strategic use, essentially, of each platform and its algorithms. It's something that we're seeing right now in our social-media-driven election landscape and political campaigning landscape.
So through this strategic use of the products offered by big tech in communication, you have hyper-targeted messages based on individual user behavior and individual user data. Then there are also features that are prone to manipulation. For example, on Twitter, the trending list is something that you can, quote unquote, manipulate to create some sort of bigness and grandness, and manufacture the illusion that some candidates are more popular than others. So again, it's all of these technological features that are malleable and can be manipulated by individuals who have malicious intent, or who have that desire, really, to influence public opinion. Those are really the biggest implications of the role of big tech in the elections. Professor Gaw, you started to talk also about regulation. What are the models that already exist, from Europe to North America, maybe other parts of the world? Australia, we know, entered this debate about news content, not just for helping the business models of independent media to survive, but also in terms of regulating content and algorithms. Are there any best practices? Is there any way to strike a balance here? In the US, I think, and I'm not sure if this is already implemented, Roy, at least let me know if I'm correct, or if these are just recommendations from academics, there's what they call an algorithmic brake. So, for example, if a piece of content is suspicious, before it even goes viral and spreads in the network, there are mechanisms in place to pause, to hit the brakes on, this content's virality or spread in the network. Then they check if it's valid or permissible on the platform, and they allow it if it is, or if it violates the community guidelines, it's taken down. Because one of the biggest damages, really, is when content that's false is fact-checked only after five days, and that's already moot, right? It has already garnered millions of views or likes, et cetera.
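The "algorithmic brake" described above, pausing a fast-spreading post until a review decides whether it stays up, can be sketched roughly like this. This is a minimal illustration of the idea only; every name and threshold here is hypothetical, not any platform's actual system or API:

```python
# Hypothetical sketch of a virality "brake": when a post spreads faster than a
# threshold, its distribution is paused (the post itself stays up) until a
# review decides whether it violates guidelines.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    shares_last_hour: int = 0
    status: str = "distributing"   # distributing | paused | removed

VIRALITY_THRESHOLD = 10_000        # shares/hour that triggers the brake
review_queue: list[Post] = []

def tick(post: Post) -> None:
    """Check a post's share velocity and engage the brake if needed."""
    if post.status == "distributing" and post.shares_last_hour > VIRALITY_THRESHOLD:
        post.status = "paused"     # stop amplification, not speech
        review_queue.append(post)  # fact-checker / human review happens here

def resolve(post: Post, violates_guidelines: bool) -> None:
    """After review, either restore distribution or remove the post."""
    post.status = "removed" if violates_guidelines else "distributing"

post = Post("p1", shares_last_hour=25_000)
tick(post)                         # brake engages, post is queued for review
resolve(post, violates_guidelines=False)
print(post.status)                 # back to "distributing"
```

The point of the design is that review happens before the content reaches millions of views, rather than fact-checking five days too late.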
So it is that preventative measure. I think we benefit a lot from policies abroad, especially in the EU, where they're more progressive with what they're trying to propose in terms of moderation. But definitely we are just following suit with the Western countries' political decisions to govern or to tame, quote unquote, the tech giants. In terms of, I guess, our own regulation, Roy's right: the government doesn't even know how Facebook works, much less can it create policies that actually have teeth and can really actualize results. And I think Roy is also right that there are multifaceted solutions to this, and the first level of really addressing this is understanding how the platforms work. We have Facebook and Google offices in the Philippines, but it is necessary to have the conversation with our Congresspeople, for them to have an understanding of how the platforms work, and we have to have a multi-sectoral approach as well. So it's not just us, but media too. And I want to emphasize media's role in this. There's this, I guess, opposite side of things happening: while the internet and social media are growing as a source of information, news organizations are actually dwindling in terms of media trust, social trust in the institution. And from my own research, which I'll be sharing in a few weeks, there are a lot of pages on Facebook, and on YouTube as well, of actors portraying themselves as news. We call this pseudo-news. They present themselves as news; all the aesthetics of news are there, but they're not delivering news, and what they share is not vetted. And the platforms allow this pseudo-news content to spread; it's actually something that I'm studying right now. I think that's one of the basic things that they can act on now. News as a concept should be protected; there should be vetting of who can speak as news, or who can spread news, on the platform. So I think that's one of the things the platforms can do immediately. Yeah, but that's a can of worms, right?
You start talking about that, and you get to the question of who vets, right? Great question, and it just popped into my head: does the free market of ideas work, do you think, given the current technology, the current platforms? I think we have to consider that the market is unequal to begin with. There's always money involved. So even if Facebook, YouTube, whoever, takes down these accounts, they're just going to create new ones, and they're going to hire all these people who would create content, who would troll the public on particular issues. So I think it's always going to be about the money involved in maintaining this disinformation ecosystem, and that should be policed as well. Hopefully the platforms can help us spot them. So, content moderation, sure, but perhaps it's about time to call for actor moderation, because the agency is in the actors as well. Yeah. Professor Bunquin, we haven't even brought up the matter of education, although we have mentioned it, I'm sure. But talk about the role also of education in the long term here, so that people are not just passive consumers of their own biases and so on. What else should we be considering and throwing in there, in the entire ecosystem, to understand and to intervene, and hopefully find some long-term, sustainable solutions to these questions? Right. I think Roy mentioned earlier the need, really, to educate our publics when it comes to using the platforms. Digital literacy plays a big role in how we engage with the content that we see online, and even in the way we manage our own social media accounts. So I think that's the first layer: educating them on how to use the platforms and the content that they offer, and even on the algorithms that are involved in the kind of content that we see on the platform. So definitely it's not just the platforms that are supposed to be working here. It's really a multi-sectoral approach to solving this issue. It's also a cultural thing.
There has to be a cultural approach to this. Right now, we're not really open to having conversations that are outside our immediate circles, or that contradict our belief systems. So I guess it's also something that we have to inculcate among our young users: that we should be engaging in conversations that can be difficult, even if they contradict our own belief systems. Sorry, am I still here? I think everyone's frozen. Yeah, you're here. We can all see you and we can all hear you. All right. So, there's a cultural approach to this, and it begins with our different social institutions, in school. So media literacy programs can emphasize this need to deliberately, or consciously, engage with conversations that are different from, or opposed to, our belief systems. In our research, we saw that the communities we form online are really what we call porous, meaning we can just get into these different communities, even if they are communities of people who believe differently from us. But it takes that consciousness, that conscious decision, to enter these communities and engage in these conversations for us to really benefit from the kind of diverse, heterogeneous content, and to be able to understand where others are coming from, whether it's matters of politics or religion or whatever belief we might have. Let's set the politics and the elections aside, and I want to discuss everything that we've been discussing, the problems, the power, regulation, algorithms and our own personal biases, in the context of something that we can all intimately, personally identify with: COVID and the pandemic and disinformation and anti-vaxxers and so on. I bring this up because this is one area where platforms, governments, civil society, the private sector and citizens were all together in saying that it's not just a pandemic, it's an infodemic. It's something we need to intervene on.
It's something where we need a lot of information, and at the same time, there's a lot of misinformation going on out there. What are the hard lessons learned about how we've tried to manage information, misinformation and disinformation over the course of this pandemic? Roy, let me start with you. I mean, I know governments also worked hard with the platforms to try to clean up disinformation and to actively push proper information, so that people could take care of themselves. What were some of the interactions, and at the same time, the hard decisions to make, in terms of regulating the platforms and trying to have control over good and bad information? Yeah, I mean, I think first of all it was just establishing who the authoritative sources are, right? You had the WHO, of course, being one of the key authoritative sources. But as you go down to the country level, there were countries where the WHO was initially saying you don't need to mask up, while certain countries were mandating that you should mask up already. And things like some of the different treatments available: some countries were a bit more open to them and some countries weren't. And so having to kind of decide, and being able to give the proper information and proper details to people, I think that was a bit tough initially. But having said that, at Facebook, or at Meta, we started to roll out what we call the COVID Information Hubs. So you not only have a mixture of sources which are international, like the WHO, UNICEF and some others, but also local, credible health sources, like your Department of Health and other sub-agencies under that. Those COVID Information Hubs, we have seen, have been used quite positively. We've seen people sharing information from there and going there to get the most up-to-date information.
So yeah, I think initially it was really just understanding who was right and who was wrong and what would be the right thing to do, because there was a bit of confusion, I guess, at the start: what were the right treatments, what are the right things to do? And we didn't want to be the ones to give out the wrong information. So I think those were the initial challenges, and of course the ongoing challenge is just removing false content, misinformation and anti-vaccine content. So we work closely with government agencies, who report a lot of this kind of information to us, and we take down a lot of groups, a lot of pages, and so on and so forth. So yeah. Yeah, but Professors Gaw and Bunquin, I bring this up also to ask you about part of the experience of trying to control information. It's all well-intended, but the reality is, when we talk about regulating the platforms and trying to regulate content, we talk about that can of worms, we talk about that slippery slope. And we know for a fact that there are governments and there are countries, right? Be careful what you wish for, because there are governments, in our country too, that also use the veil and the cover of needing good information on COVID, the pandemic and vaccines to actually crack down on political dissent. It's a slippery slope from going after the spread of false information against government programs to cracking down on legitimate political commentary. What are your thoughts? I mean, what have we learned? Again, on that side of our concerns: wanting the platforms and our content to be reliable and helpful, but at the same time balancing that with press freedom, free expression, and so on.
Okay, so in relation to COVID: I think COVID is easy to govern and regulate because there's clear, solid science, at least over the past couple of months. It's easier to label the authoritative sources, easier to crack down on anti-vaxxers. In fact, in the Philippines I think they're a minority; when they were having rallies in Manila, there were only a few people there. It's not a big movement in the Philippines; it's bigger elsewhere. So addressing COVID misinformation is easier, because the science is there and all of the stakeholders are hand-holding each other to address it. Political information, on the other hand, is much more sophisticated and gray, without a clear cut between what's wrong and what's right. So you're right in your question: how do we protect freedom of expression and at the same time crack down on disinformation? One of the things we need to consider is that most of the disinformation we have in our ecosystem in the Philippines is in fact state-sponsored. I'm not going to sugarcoat this anymore. There's a study from Oxford University showing that most coordinated inauthentic behavior is propagated and promoted by governments, through covert operations, in fact, under legitimate agencies, and it's happening here. So that's one of the things I'm thinking about: how do you make them accountable for this? Is it the government? It's difficult to make the government police itself. We have sovereignty, so international bodies can't really do this, but it's definitely a call for civil society as well, to make sure that government funds are not used, and must not be appropriated, for the purpose of manipulating the people. I have to mention, and this is not a question but a commentary: I imagine you would also be conscious of opening this can of worms. 
The timing has something to do with it. Again, the main context of this discussion is that we're heading into national elections. The ideology, the beliefs, the commitments of whoever wins those elections will factor quite strongly in whether or not you want to open that can of worms now or in the future. Yeah, I just want to add, Robi, that sometimes it's not really about ideological beliefs. Sometimes it's relational. If you formed relationships in, for example, fan communities on Facebook, and you've been a fan ever since 2016, it's difficult to disengage, right? So more than "I believe in certain things," it's "I believe in certain people," and that's part of our patronage, personality politics in the Philippines. But definitely this social media ecosystem allows that affective relationship to form between the populace and the candidates as well. Okay, I want to bring in some questions from our audience. This is from an anonymous attendee; anybody can answer. Do you think that the problem regarding information literacy is that educators or the academe do not directly call out platforms such as Facebook, YouTube, Google, and so on as the biggest sources of fake news, misinformation, and disinformation? Add to this the fact that some digital safety webinars are privacy-washing events. So that's a question for us, right? Yes, for anyone. I'll go first and then Ben can follow, since it's addressed to academics. I think there's enough calling out of the platforms, except that conversations between platforms and academics are not always happening. We recently connected with Facebook because we have this election research, and we want to maintain that relationship to have that discussion as well. And ultimately, it's not just the voice of academics that can reach a lot of people. 
We can only reach certain circles of people, and we need the help of civil society and media practitioners to translate our research findings into something that can be understood by users. Right, and if I may add: at a different event that we hosted, one of our speakers mentioned that there's a lack of demand or clamor from civil society and other sectors to call out or engage social media platforms, and even to update our current laws so they can cover more of the practices related to disinformation in our contemporary landscape. So I don't think it's just the academics' job to call out, because, as mentioned, we've had engagements with platforms, but it really takes a concerted effort, not just from academics but from civil society and even the public, to improve the information ecosystem where platforms are playing a big role. Okay, again from an anonymous attendee. This is for you, Roy, and it's a question I wanted to ask as well. First of all, our anonymous attendee says: to be clear, social media also refers to platforms that are not controlled by big tech titans, such as the Fediverse and self-hosted open-source social media instances. As for Facebook, do you think they should collect less data, only what is really needed, and that they should not track their users across the websites they visit? And, he or she adds, complex privacy settings are also not really that helpful. Yeah, on the last point, I think the privacy settings are not really that complex, and that's the reason we created this thing called Privacy Checkup. If you actually click through it, it's like a dummy's guide to your privacy on Facebook. It's just that people don't try it, or maybe they don't know about it. So that's one thing. 
Very quickly, I just want to interject here, because I can hear the anonymous attendee wanting to respond. I have to say, I tried it, and there are pain points there. Apart from the fact that, my own carelessness notwithstanding, knowing all of these things, my laziness can get the better of me, it's also not that simple. I actually got lost playing around with those privacy settings, which kinds of ads do I want to see, and so on and so forth. There is some friction there. Well, look, at the end of the day, Privacy Checkup is a step-by-step guide through the different settings, right? If you want to dig deeper into your settings and check the finer details, yes, it may take a bit of effort, but generally Privacy Checkup is quite straightforward if you go step by step, and I definitely encourage everyone to try it out. As to the question, "As for Facebook, do you think you should collect less data, only what's really needed?" Look, I don't think it's for me to really answer that, but I will say why we collect data, and this has been publicly spoken about. At the end of the day, we are a free platform, and we use ads to help fund the platform. How our ad system works is that, with data, we can serve people the ads that they want to see. And businesses, whether they're big businesses or small and medium businesses, are able to target the audience they want to target. So that's why we still continue to collect data. Do we collect less data? Since Cambridge Analytica, at least, I think we have definitely tidied up the data that we collect and the data that's available. So yes, we can collect less data, and we have done so. But it's our business model in that sense, right? 
We do need data to be able to serve ads, so that will always be part of it. But at the end of the day, we also understand that not everyone wants all their data collected, and that's why there are options on Facebook to delete your data and to clear your history. That's something I would definitely encourage people to find out more about if they want to do that. Good that you mentioned Cambridge Analytica, because another attendee is asking: have we learned our lessons there, and in what ways? What are the concrete manifestations of having learned from Cambridge Analytica, in the direction of making sure it never happens again? Yeah, look, I would hope that we have learned. Ever since Cambridge Analytica, the number of APIs and the number of connections that we allow apps to make to our platform has been greatly reduced. If you speak to anyone who had been building applications connected to Facebook since before Cambridge Analytica, they'll tell you that it's night and day, before and after. Can you put a number to that, just to give us a sense of how much that has improved? The number of APIs that you allow apps to plug into, for example? Yes. The data that you can collect has been reduced, right? So for things like collecting someone's phone number, you need to be very, very clear what the use is, and you need all the proper approvals from the person beforehand. And there is some other demographic data that we have just totally removed, so it's not available for people to tap into anymore. Concrete things like that, yes, it's night and day. We don't allow people to tap into a lot of demographic data at all anymore. 
It's really just top-line data that the user has to allow when using the app or logging into the app. Other than that, there's a lot of data that we have disconnected from the APIs. Professors Gaw and Bunquin, that question is an interesting one. Look, the tech platforms know a lot about us already. The basic question being asked is, how much do they need to know? Can you give us an idea of how much they already know, and how much, beyond that, they actually still get to know about us, whether or not they allow third parties to tap into it? I think data only has value if it's historically curated. So with their knowledge base now, and with the continuous collection of data, even if it's less, it's still valuable. There is still a lot known about us. But the thing is, it's one question how many data points they collect about us. It's another question how they use that data, who they sell it to, and for what purposes. And in fact, under GDPR, the data privacy regulation in the EU, users are allowed to ask the platforms: hey, what do you know about me? Actually, I call on everyone to request their data from the platforms. If you request your data, they should be able to give it to you. You can also request to have that data removed from the platform. That's your right as a citizen, thanks to the EU. I'm just saying we can do that right now. You can make that request to the platforms. How do I do that? For example, you can go and clear your history and clear your data on the platform, or just Google how to delete it. I've done that for Netflix, a separate platform, for my research; you can request your viewing history. Similarly, you can do the same on other platforms. So if you want to erase data about you, it's actually called the right to be forgotten. I think that's the right term for it. 
If you want to be forgotten online and erase your digital footprint, you can do that, to an extent. But while you can do that, it takes some time. I think there's a turnaround time of 30 to 45 days before they can give you that data. So there are mechanisms in place, but again, it's about being transparent about how the data is ultimately used, in forms that are accessible to us, not in complex documents. It's important to always communicate that. Professor Bunquin, I just remembered, because Roy keeps reminding us: look, you can control your data, you can erase your data, you can opt out, you can choose not to opt in, and so on. But Professor Bunquin, for the typical Filipino, for the typical citizen, when they're reminded of that, do they have the mindset of "I need this because I need to protect my privacy, because I need to protect my integrity," or is it just a matter of the best user experience they can have? You're on mute, I think, if you're still there, Professor Bunquin. I can't hear you. Okay, maybe yes. Let me put it to both of you then, Roy and Professor Gaw. I'm trying to get into the mindset, and just being honest here: even when you tell people they can opt out and so on, the reality is most people are probably looking at that as a user experience matter. For most people, I would imagine, they're not necessarily looking at it in terms of protecting themselves as citizens. Well, I guess with user experience there is a trade-off. If you want Facebook to serve you the things that you want to see, then of course we need data to do that, your search history and all that. But if you don't want your search history on our platforms, then the user experience drops. So it's a catch-22. We can't serve you what you want without data. There's no other way to do that, I guess. Professor Gaw, do you have any thoughts on that? Sure. 
I think users' feelings are ambivalent. In a sense, Roy is correct: the user experience suffers, if we are to follow what Facebook is saying, that more data means a better user experience. At the same time, users are not happy sharing a lot of information, but they have to. It's really the default way of doing things. There was a call to delete Facebook a few years back, and it's been... We lost your audio for a while. Okay. Hello. Okay, you're back. Okay. So I was saying that everyone's on Facebook. Your teachers are there, your parents; to reach a particular government service, you have to message them on Facebook. There's a lot you will miss out on if you opt out of Facebook. So there's that ambivalence: I have to be there, I have to share my data, because I need to be able to function every day, because all the services and all the connections I need are on the platform. So I heed the call of several U.S. senators that perhaps it's time to break up Big Tech. And I really believe it's time to break up their power, because if they are not as big economically, perhaps we'd have more diversity, and other platforms could enter the market and be another space for discursive discussions. Okay. Another question from another anonymous attendee: can the Philippine government hold tech titans accountable in the form of fines, in the same way that's happening in other countries? I want to expand on that a bit; it's not exactly the question, but without getting into how we arrive at regulation, and assuming there's some regulation that can actually strike a balance, what would penalties look like that would be meaningful? Sorry. I can answer, but very briefly: I think we have to determine first which violations they would be committing before they are penalized. Defining that comes first, before we talk about penalties. That's my answer. And on that point, let me bring this up. 
Do you think we should be, are we even ready to talk about regulation when, as we all acknowledge, there's still a lot of education that has to happen at the level of citizens and users anyway? Maybe I can give a short answer to that. In terms of regulation, given the current political climate, maybe not yet. We first need a government that we can trust when it comes to who will be regulating our social media platforms. Our best bet right now is to have an independent watchdog that can serve as that regulatory body. As for the government itself, given the current political climate and given our experience when it comes to freedom of expression, I think that still has to be reconsidered, or maybe slotted into a different time frame. What does that look like in your minds, professors? We're just playing around here, but when you think of an independent body, a multi-sectoral, all-of-society representation, what exactly does that look like, in terms of a starting point that we can trust? A very hard question, Robi, imagining that kind of organization. I think there are existing organizations that already engage in that space of regulating big tech, except that they're individual organizations; they're not coordinating as one large organization. So there's that. But definitely it's going to be a multi-sectoral approach. In the hierarchy of things, the Philippines has bigger problems, and I observe that in the debates of the presidentiables, vice presidentiables, and senatoriables, platform regulation is not talked about. They do debate about disinformation, but the level at which they engage it is very shallow. So I think we need to make it a public agenda first, before we can even engage people and various stakeholders to form this group or watchdog organization that could really be an independent body. 
I know Ben's answer is that we're not ready. But hopefully, if there are proponents in the Senate who can push forward institutionalized measures to govern big tech, perhaps that's a good first step. And we know how long it takes certain laws to pass in the Philippines, so maybe start now, take the steps, and see if the bill can actually reach approval on third reading. Okay. I want to go back to one basic question that we have, and later on we will see the sentiment of the people. Roy, Professor Gaw, Professor Bunquin: do platforms affect how we vote? Do they have that power over us at this point? Maybe I can answer. My answer to that is: not directly. It's not as if the platforms will tell you who to vote for. But it's really the mechanisms within the platform that shape our decisions. I mentioned earlier that there's some sort of control happening at the platform level, because it's really the platform, its algorithms, and the structure it puts in place that facilitate all of the information we get on social media. Our feeds are what we call algorithmically curated, so the feed learns from our interactions. And at the same time, you have these tech companies, the big tech, that control those algorithms. So in other words, they can definitely shape our understanding and the kind of information that reaches us. And other players, people who don't work for the platform but are on the platform, can strategically use it. So in a way, that influences who we vote for. It can provide, again, what I mentioned earlier, that illusion of grandness; it can manufacture popularity, it can manufacture all of these messages that can influence our decisions come election time. 
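As an editorial aside: the algorithmic curation described here can be sketched with a toy ranking function. Everything below, the signal names, the weights, and the "affinity" boost, is invented purely for illustration; no platform publishes its actual ranking formula, and this is not Facebook's or TikTok's algorithm.

```python
# Toy sketch of engagement-plus-affinity feed ranking (illustrative only).
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    shares: int
    comments: int
    affinity: float  # how often this user engaged with the author before (0..1)

def score(post: Post) -> float:
    # Engagement signals are weighted, then amplified by personal affinity:
    # prior interactions make an author more visible to you.
    engagement = post.likes + 3 * post.comments + 5 * post.shares
    return engagement * (0.2 + post.affinity)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Higher score surfaces first; this is the "logistical power" at work.
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("candidate_a", likes=900, shares=20, comments=50, affinity=0.1),
    Post("candidate_b", likes=100, shares=40, comments=30, affinity=0.9),
])
# candidate_b outranks candidate_a despite far fewer likes, because the
# affinity boost amplifies authors you already interact with.
```

The point of the sketch is the one the panelists make: two candidates can post equally, yet the ranking function, not an editor, decides whose material a given voter actually sees.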
So it's not direct, but the mechanisms within the space and the kinds of realities formed within the platform shape, or help us decide, who to vote for. Yeah, just to add: platforms don't have ideological power. There's the church, the education system, the government, who tell us what's wrong or right. What platforms have is what we call logistical power: the power to organize the things we see. Perhaps two candidates are both sharing their campaign materials, but they're not given equal visibility on the platform. That's the governance mechanism of the platform: some things are more visible than others, some things are more salient. And there's another qualifier: more salient where, in which community. So I think we have to think about platform power as organizing our discourse more than telling us what to talk about. It's a kind of agenda setting, but contextualized to particular communities, to particular spaces on the platform. That's what we need to realize. And then, exactly as Professor Bunquin said, some malign actors are actually gaming the platform's affordances and logics to be more visible, to be recommended by the platform. So there's that complicity: platform affordances that are exploited and, at the same time, actors with malign ends gaming the system. Roy, how does Facebook approach this question, in the context, also, of "with great power comes great responsibility"? Do the platforms proceed from the premise that, in reality, they have great power to influence how people will behave and vote? 
Well, I think all the work that we're doing for the elections, all the work I mentioned, and me being here chatting with you right now, shows that, to the extent that we do have an influence, we do have a part to play. So we want to make sure that we can help mitigate issues and work on issues together. But remember as well, when you think about the greater problems of why people follow political actors anywhere, I remember seeing a study that said something like: believing false information is a symptom of the problem, not the problem itself, right? So what is that problem? Why is it festering, and why are people happy to be in echo chambers and not open to other interpretations? I think those are important questions to ask as well. But the things that we're doing on digital literacy, at least to me, are very important. Because I do know, even among my own personal friends, folks who are just very fervent in supporting one side of the house, and they don't really know how to check the sources of information; everything they see, as long as it's hitting the opposition, they believe it. And it's multi-sector: it's not just what we can do, but can the government do more? Can NGOs and CSOs do more to promote the whole culture of checking, of thinking before you share, of checking your sources? It could also come down to generational change, I don't know, because you take away Meta, you take away Facebook and Instagram, and the same thing is happening in any media. In America, you've got Fox News, you've got CNN, right? 
For me now, when I watch CNN, I'm thinking, okay, I know they are leaning a certain way, maybe I should double-check my sources. And similarly with Fox News, which for a certain group of people in the US is the same thing. So you don't just see it on social media; you see it in news media, you see it in different places. Even in the Philippines, you've got Rappler on one side and other media on the other side, right? So it's not just about social media. What is the real issue at hand? I can't quite put my finger on it, but I think there's a larger discussion that needs to be had as well. But the fact that we are doing things, the fact that we're here, I think shows we acknowledge that we have a part to play. And we certainly appreciate Meta and Facebook being represented here; really appreciate all your sharing, your thoughts, and your candidness, Roy. But I want to bring something back, because I will put my finger on something, and I think we've touched on it. Everything you were discussing is also a symptom of, I agree, a bigger problem than the platforms, because we've had that problem far longer than we've had even the internet, and that's education in the Philippines. Professor Gaw and Professor Bunquin, what should a digital literacy or information literacy program look like? Are there any improvements there? What are you seeing, and what else do we need to do? I have to admit I'm not very knowledgeable about exactly how our media literacy programs are rolled out, but what I've heard is that they assign computer specialists or computer science teachers to teach media literacy, and that shouldn't be the case. What's being taught is how to use Facebook, how to open an account. It's not the critical skills that we need to be able to discern information. 
They're teaching the technical skills to use technology, but not how to spot information that might be dubious. So we need to step up on the critical skills more than the technical skills. They go hand in hand; knowledge of the technology is definitely good, but at the same time, it's the information that needs to be judged accordingly. And one of the things that must be promoted, and I'm an advocate of this, is that we need to rebuild trust in media, because ultimately, if we do not trust our media institutions, we will seek alternative sources of information that might not be vetted, or that are launched by political actors with interests other than the public interest. Right. For me, a good media literacy program really puts critical thinking at the forefront of the curriculum. It inculcates the skills students need to really be able to dissect all of the content they see online. Even if they're not trained fact-checkers, they know how to verify sources; they know how to triangulate or validate. So that's important. How early does that start? I think it starts at a very young age, not just at the senior high school or junior high school level. To a certain extent it can be cultural, but learning to question authority is, I guess, one step: the ability to question everything that's being said, and not just blindly obey. Those things begin at a very early age, not just at the senior high school level. So in terms of media literacy, it's really contextualizing that critical thinking in the way we engage with the current information ecosystem. Right now, as mentioned by Fatima, the programs, at least the ones we know of, focus on the more creative use of platforms, not so much on the critical use of them. So that aspect needs to be beefed up. Yeah. 
Roy, Fatima and Benedict pointed out something very close to my heart. I'll admit there's a professional and personal bias to it, but I subscribe to it as well. It's also this matter of really bringing back and backing up the need for media that we can believe in, that can backstop all of this. And there's a gray area there as well, I acknowledge. But in terms of however you would define credible, independent media organizations, what are the platforms doing to actively support them, not just in terms of supporting their fact-checking skills, but as part of the media and information literacy campaigns, to tell people that this is part of critical skills and critical thinking: that ultimately, you need sources you can trust? You're talking about what we're doing for news media on the platform, right? Yeah, look, there are a few things. We have a news partnerships team which works specifically with news partners. It supports the publishers with digital literacy, training, and upskilling in how to use the platforms. So that's one part. But we also look at original and authoritative news sources on our platforms. So we have things like news labeling: in certain countries, some news outlets are labeled as state media, because we want to be clear about who they are. And even for authoritative news sources, we give more information about who they are; for CNN, say, some background about who they are, making sure it's authoritative. We also want to be informative about news that's being shared. So we nudge people: for example, if you're trying to share certain articles which have been fact-checked, we're very clear that, hey, this article may have an issue with it; you may want to read another article from an authoritative source. So that's the informative aspect. As well, we try to reduce a lot of clickbaity content on the platform. 
Sensationalist news organizations love to use headlines like "Hey, did you know?" or "Have you seen this?" All of that feeds into what we call clickbait content, and we reduce that on the platform, because we want to be accurate about how we prioritize news sources. So there's a whole facet of things we do to support news quality on the platform, and we work with news publishers to ensure they have the best setup, ready to be authoritative on the platform. I think that's quite key to what we do. And of course, at the end of the day, these news sources, depending on your persuasion, may seem one-sided if you feel a certain way, but yes, we do try to ensure that they all have a voice, a part to play. Okay, a comment from an anonymous attendee, a teacher, he or she makes that clear: "As a teacher, I do not encourage my current and future students to create and use Facebook. Why? Because it's possible to live and still have access to credible news without Facebook." A comment from one of our viewers, if anybody would care to react. So I would just mention that, as we know, young kids, no matter what you tell them, will do it if they want to, whether it's Facebook or Instagram. So I think it comes down to how the teacher can teach the right things to the kids. If you want to use the platform, are you of the right age? What should you be looking at? Things like cyberbullying are a big concern for us, especially with youths. And I think it's not about telling them they don't need to use it, because yes, they don't need to, but you know they're going to use it. So can you educate them ahead of time, before they use it? That's what I mean. Especially with young children, I have young kids as well, and I educate them: you need to know how to switch off, right? 
Give them scenarios, give them different kinds of thought processes and situational ideas. And these are things that, from Meta's side, we can actually help with, and we have worked with DepEd to promote some of these resources. So if there are folks listening in who are playing in this space, definitely don't hesitate to reach out, because we have resources we can give you; you can just run with them and educate the youth in a much better way. Okay, we're running out of time. I know that people have to get to their classes as well, not just the panelists but also the audience. But I do want to put in one quick question to Professor Gaw and Professor Bunquin, because of course we're not just talking about Facebook. I'm particularly interested in TikTok. We talk about the power of algorithms, and with TikTok, for example, from what I understand, because, I will admit, I am not on TikTok. I've installed it five times and uninstalled it five times, because I just get intimidated by what's happening: I don't know what's going on, what the controls are, and it's constantly feeding me, learning about me, feeding me more. Talk about the impact of these new platforms on our elections. First of all, as researchers, it's difficult for us to study TikTok, because the informational data from TikTok is not something we can access through the tools we have right now. On the level of tools, Facebook at least has CrowdTangle for collecting data, so we are able to study it. TikTok doesn't have that yet. If we want to collect data from TikTok, we need to do it manually, with manual collection of data, screenshots, and whatnot. So there's that. Secondly, TikTok is definitely a growing platform in the Philippines and globally. 
I think we just have to remember that TikTok is a Chinese-owned company to begin with, and that there are historical censorship issues with TikTok as well, especially when it comes to hot-button issues for China, like the Uyghur Muslims. In terms of the election, it's definitely a space where disinformation can really spread without any checks and balances, because the algorithm there is very personal. Whatever you consume, it's harder to know whether other people have access to, or are seeing, the same content you're seeing. So as to whether the platform is transparent about how it works, that's quite difficult to answer now. I don't think there are existing tools yet that can help us study TikTok, although I think there are some that have begun enabling researchers to study the platform; right now, we haven't really tapped into them yet. But I think the platform itself prioritizes virality, as opposed to other types of engagement, which is why it's easy to go viral on TikTok. So that culture within TikTok, as a platform where you can go viral, is definitely something that political communicators or campaign specialists would want to tap into. And there's also a certain culture within the platform, your infotainment kind of content or a sort of educational type of content, that really hits hard with audience sets. Those are, again, platform-specific cultures and vernaculars that can be taken advantage of by political communicators. But again, it's really difficult for us to study it in a more empirical manner, because we don't have access to the data yet.

Thank you very much. We only have a few minutes left. I'd like to invite our panelists, Roy, Fatima, and Benedict, to give us a very short parting word.
Well, my only parting word is that whatever conversation we're having right now may be quite late already in terms of the election period, because it's actually only a few days before May 9. So if we are taking in lessons today, perhaps they're something we need to work on post-election, so we can act on the things we've spotted and the issues we want to address for the next election, the midterms.

For me, I'm looking at it from a network perspective. There's always this notion that networks influence the way we behave, but at the same time we don't realize that individuals in a network can configure their own networks. So there is still the power of the individual to shape whatever information they want to receive and what kind of conversations they want to engage in. So it's really about highlighting the role of the individual and their agency in this whole information ecosystem. You can actually choose to get out of your echo chamber, and the platform allows you to do that. You can choose the more legitimate and authentic sources of information. But it really takes that consciousness, for individuals to be able to realize that.

Yeah, I think, look, at the end of the day, the conversation doesn't just stop here. At Meta, we continue to want to be engaged with all different kinds of stakeholders. I'd also just like to say that I've been involved and engaged in the Philippines for the past five years, and I know a lot of passionate people, both Filipinos and folks in Singapore and in the U.S., who are doing their best and engaging in the work for the Philippines, whether on elections or even past that. So, yeah, I just want to put that out there. Thank you for the opportunity to speak to everyone and be engaged. We certainly appreciate it.
And to our Zoom attendees: to show our panel our appreciation for their time, we'd ask you to take a moment to answer a quick poll of just five questions, which is being flashed on your screen. If you're watching over YouTube or Facebook, don't mind what I'm saying; you won't be seeing this online poll, which is for people registered within our Zoom webinar. But by all means, if you want to leave your comments, suggestions, and any remaining questions, please do so in the comments section on Facebook or YouTube. We will appreciate the feedback.

While you're answering, again, we'd like to thank all our speakers very much. I know this is a very hectic season for all of you, and we appreciate that you shared your wisdom in our webinar today: Professor John Benedict Mungin, Professor Marie Fatima Gao, and of course Roy Tan of Meta. Thank you very, very much to all of you.

Now, in the meantime, as we mentioned earlier, we're also launching our post-test so you can assess your progress in knowledge and understanding. We actually would like to see how your sentiments have changed. So if you go to Mentimeter, for the original question we're now showing an updated word cloud, and this might be good for insights on whether anything has changed in your answers, in our collective answers, and so on. We will keep the post-test open in the background as we proceed with our program. And for that question, the overwhelming answer seems to be yes.

But now it's my distinct pleasure, as I thank you also for having me in this program, to introduce the immediate past president of the Philippine Communication Society. Please welcome Professor Christine Girai.

Hello, thank you, Roby.
And thank you to our speakers for a very insightful discussion on the role of these tech giants in the coming election, and indeed in our lives. Since I'm tasked to give the synthesis for this webinar, I will mention some key takeaways I noted during the discussion.

Google, Facebook, Microsoft: these are three of the popular tech giants said to be in control of the digital world. They have also been identified as major sources of fake news, troll farms, information manipulation, and controlled narratives. Professors Gao and Mungin reminded us that these tech giants are, after all, business entities, and it is only natural for them to advance the interests of their companies. Roy suggests that people should take a step back and look at the platforms in a bigger picture, as having diverse users with diverse interests and cultures. Roby threw in an interesting question about how the platforms seem to know us better than we know ourselves. This may have something to do with the algorithmic analysis done by the platforms, and it may all come back to the issue of commercialization: your inputs actually provide the platform an idea of what you need or want, and thus most of the content you see seems to be a reflection of your personality.

Roby asked a question about the platforms' responsibility to protect their users, and Roy talked about digital literacy and engagement, where civic education on the use of digital platforms is shared. The speakers believe that, little by little, things are being done by some tech giants to address the issue of responsibility. In terms of protecting the integrity of elections, transparency in ads is what Meta has been doing in every election. John thinks that political communication managers who know how the algorithm works can use the platforms to their advantage, even to the point of manipulating them.
We also heard terms such as "algorithmic breaks" and "algorithmic incentivizing" from Fatima, which somehow help us better understand how algorithms are used by the platforms. I agree with the idea that it should no longer be about the control of content but about the control of actors; there is a need to police those actors who are paid to produce false and malicious content. Fatima believes that most misinformation is state-sponsored. So sad. So there is the question of how we are going to deal with misinformation when the very institutions we rely on for truthful information are the cultivators of misinformation. John believes that improving the information ecosystem is not just a job for the academe but a job for the whole of society.

At the end of the day, as mentioned by Roy, platforms exist through ads. So your use of any platform for free somehow gives these advertisers access to your information. Though there are privacy settings designed to protect your data, basic information about oneself can still be used by these advertisers for their own interests. Are we ready to regulate our social media platforms? Well, John believes that we are not; we need to have complete trust in our government first before we think of regulating social media. All our speakers believe there is a need to step up the media and information literacy education of Filipinos, and that this education must start from a very young age to develop critical thinking skills. News partnerships, news labeling, and the reduction of clickbait are some of the things Meta does to support the news media. Fatima mentioned that platforms have no ideological power; what they have is logistical power. They organize our discourse. We should understand this to be able to see that the choice is still ours. We just need to be more critical of what we consume from these platforms.
In conclusion, I believe that we cannot underestimate the power of these tech titans to influence not just our decisions at the polls this election but also how we live our lives. It is therefore important for us to keep being cautious and prudent about what information to consume and what information to disregard. Though we wish for these technology giants to be responsible for what they share and communicate, it is still our responsibility to ourselves to make better judgments of the information we receive. Again, thank you to all our speakers, Mr. Tan, Ms. Gao, and Mr. Mungin, to our participants, to our moderator Roby, and to all the organizers of this event. We hope to see you at our next webinar. Keep safe, everyone.

Thank you very much, Professor Christine Girai. That pretty much captures everything. And now, just to round out our program, we're sharing the post-test results for our viewers. This is what you shared; you can compare it to how our group, and you individually, answered at the start of this program. As you can see from your screens, there are differences, and hopefully they represent an increase in knowledge and understanding of the issues, based on the post-test results. Those who actively participated will get the most out of this interactive program.

As mentioned, this webinar is part of a series, the National Forum on Communication and Democracy, Philippine Elections 2022. PCS will be holding a webinar every second Wednesday of the month until May 2022, so please mark your calendars. Next month we'll be featuring Citizens' Vote Watch, with Malou Mangahas as your host and moderator. Please stay tuned for updates on the PCS website or Facebook page. If you'd like to watch this or any of the previous webinars in playback, or if you would like to share them with your friends, all webinars in this series are available for viewing at your convenience on the TVUP YouTube channel.
Okay, so this formally closes the 8th National Forum on Communication and Democracy, Philippine Elections 2022. We look forward to your company again every second Wednesday of the month, from 12 noon to 2 p.m. Manila time, the same time as we had today. I'm Roby Alampay from TV5, Signal TV, and Puma Podcast, on behalf of the Philippine Communication Society, which strengthens our country's democratic foundations through communication. Enjoy the rest of your day and the rest of your week. Thank you very much.