Good afternoon, everyone. My name is ttcat — that's not my real name; my real name is Wu Min-Hsuan. I'm the deputy CEO of the Open Culture Foundation and a hackathon organizer for g0v ("gov zero"), our civic tech community in Taiwan. Today I'm going to talk about a fact-checking chatbot that came out of this community. In Taiwan, we use LINE. I wonder if anybody here has heard of LINE, but it's like WhatsApp: an instant messenger app, and it's very, very popular in Taiwan. In my LINE app you can see a lot of groups: a group for your old friends, a group for co-workers (we don't like it, but yes, there are colleague groups on LINE), and a family group. In your family group there's everybody's uncle, or somebody you haven't met for years, but they're in the group, and they share a lot of messages in these family groups and many other groups. Like this one: it claims to be a cover story from TIME magazine, and it says the Taiwanese government has been humiliated internationally — that TIME's cover story this week calls Taiwan a "wonky monkey iron curtain country", a great shame for Taiwan. But if you google TIME magazine in 2018, you can see there is no such cover story at all. It's totally fake. Yet LINE's user base here is huge: starting from twelve-year-olds, almost 80 to 90 percent of people use LINE. A lot of people don't know how to use the internet, or don't have a computer at all, but they use Facebook or LINE on their smartphones. For people who are not very familiar with the internet, what they do on their phone is touch, click, swipe, and forward. Like the first speaker said, some of this fake news takes only one minute of googling to debunk, but a lot of people — senior people, or people unfamiliar with the internet — don't know how to google it.
Here's another piece of news. This is our digital minister, Audrey Tang — she came from the g0v community. The message says: a sure way to stop Tsai Ing-wen and Audrey Tang from monitoring your Facebook. The way it tells you to stop them is: first, go to your Facebook settings; second, "Manage account"; third, "Request account deletion". Because a lot of people don't actually understand what these English menu items mean, we think maybe some young person was trying to trick the seniors in their family group into deleting their accounts. And a lot of people really were afraid that g0v was helping the government monitor people's Facebook and their privacy. But we don't want to tackle fake news and disinformation this way. So people from the community started thinking about how to tackle disinformation inside closed instant-messenger groups — it's not like Facebook or Google or any other web page; these users mainly use their phones. Normally, the way I stop a rumor from spreading is: I google it — sometimes for a minute, sometimes for an hour — rewrite what I find into an easy-to-read chat message, and send it back to my family group, to my father or my mother. And I have to do that every single time, chat room by chat room. So the community started building a chatbot called Cofacts. Basically, it's a crowdsourced fact-checking chatbot. Last year, with support from g0v, they got 20,000 USD for six months of development, and now they have this prototype. How does it work? A user can use the chatbot to check any message they receive. The chatbot checks against a database; the database posts every report on a website, and any volunteer editor can log in and do the fact-checking.
Like this one: suppose you ask the chatbot, "Why is no media covering today's power-plant accident? The government is censoring our news, right?" The first time somebody sends this message, the database has no data on it, so the bot asks the user: would you like to submit this message to the database? If you click yes, it goes into the database website, and it's open — everybody can see it. Then the volunteer editors can log in, see the newest reported messages, and do the fact-checking. You can mark a message as "should not be processed" (maybe it's random chatter or an ad), as "contains correct information", or as "contains misinformation". Then you write a reply, and the next time another person checks this message with the chatbot, they see the reply: this message contains misinformation — the accident actually happened in September 2017, not today; there were plenty of news reports, and here are the references. A message can have more than one response, because it's crowdsourced fact-checking. It's like Wikipedia: you can disagree with another editor, but you have to give references and explain why you think it's disinformation or why you think it's correct, and all of these reports are recorded and open to everybody. Normally, verifying information is a whole process: read the message, extract the keywords, open a web browser, google the keywords, and compare all the results — that's what a user would have to do to check whether a message is correct. We narrowed it down so that all they have to do is forward the message to the chatbot, compare the results, and make the decision themselves. Luckily, my father and my mother don't know how to google, but they are very, very good at forwarding messages to every group on LINE.
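The flow I just described — look up a forwarded message, ask to submit it if it's new, and return the editors' replies if it's known — can be sketched roughly like this. This is only an illustration of the idea; the class names and fields here are my own invention, not the actual Cofacts code (which is open source on GitHub):

```python
from dataclasses import dataclass, field

@dataclass
class Reply:
    category: str        # "misinformation", "correct", "opinion", "not-to-process"
    text: str            # easy-to-read explanation written by a volunteer editor
    references: list     # every reply must cite its sources

@dataclass
class ReportedMessage:
    text: str
    replies: list = field(default_factory=list)

class FactCheckBot:
    def __init__(self):
        self.db = {}  # forwarded message text -> ReportedMessage

    def handle_forward(self, message: str):
        """What happens when a user forwards a suspicious message to the bot."""
        report = self.db.get(message)
        if report is None or not report.replies:
            # First time we see it (or no editor has answered yet):
            # ask the user whether to submit it to the open database.
            return "No fact-check yet. Submit this message to the open database?"
        # Otherwise return every editor reply, so the user can
        # compare them and make the decision by themselves.
        return [f"[{r.category}] {r.text} (refs: {', '.join(r.references)})"
                for r in report.replies]

    def submit(self, message: str):
        """The user agreed: the message becomes publicly visible for editors."""
        self.db.setdefault(message, ReportedMessage(text=message))

    def editor_reply(self, message: str, reply: Reply):
        """A logged-in volunteer editor attaches a fact-check reply."""
        self.db[message].replies.append(reply)
```

Because replies accumulate in a list, several editors with different opinions can each attach their own referenced answer to the same message, which is the "like Wikipedia" property.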
So the essential idea is: let users ask what they want to know, and let everybody be an editor. You can sign up and log in, and there are online groups, fact-checking tutorials, and editor gathering events once in a while — once a month, I think. They now have almost 300 volunteer editors online, they have developed their own fact-checking guidelines, and every document and every meeting has records, all open. The fact-checking guidelines, the API — everything is on GitHub and open source. A quick recap: Cofacts' value is that it's decentralized, it works inside the instant-messenger app, and it's automatic; its challenges are how to control quality, how to do the growth hacking to make it more popular, and how to make the search more precise. Right now it's a beta version and they haven't done much promotion — they'll launch in a couple of months — but here is what we can already see from the open database during the beta. Like I said, we have almost 300 volunteer editors and around 260 new replies every week. 26,000 people have added this LINE account as a friend. We have about 13,000 reported messages in the database, and every week around 250 new messages are reported. So reports and replies roughly balance right now, but another challenge is: if we do a lot of promotion and many more users come in, how do we keep the fact-checking speed in balance? And here are the types of the messages that have been checked: around 36 percent of messages contain false information, only 26 percent contain correct information, and the rest are things like ads or other messages that don't require any action — but there's one slice that's a little different: 3.2 percent are opinions.
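Because the database is open, anyone can reproduce this kind of breakdown themselves. A minimal sketch of the tally — the sample records here are invented to mirror the rough shares I mentioned, not pulled from the real data set:

```python
from collections import Counter

# Invented sample of reply categories; the real open database
# has around 13,000 reported messages.
replies = (["misinformation"] * 36 + ["correct"] * 26 +
           ["not-to-process"] * 35 + ["opinion"] * 3)

counts = Counter(replies)
total = len(replies)
# Percentage share of each category, rounded to one decimal place.
shares = {cat: round(100 * n / total, 1) for cat, n in counts.items()}
```

With the real data you would read the exported reply records instead of this hard-coded list, but the counting step is the same.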
These opinions are messages like, "I don't like the current government because…" — that kind of thing. What this group of editors does is mark such a message as personal opinion and then give a reference to a different perspective — a link to another article or blog post — so the user is offered an article with the opposite viewpoint. And once a message has been checked, it can be requested again and again through the chatbot. Here is the breakdown by type for messages that have been requested: messages containing false information become a very, very big share, and messages containing correct information become a smaller one. Then there's the life cycle of disinformation. For most messages, the last time they are requested is within one day, or within one day to one month — about 50 percent of messages stay around for up to a month, another 21 percent stay for 30 to 90 days, but a full 25 percent, a quarter of all messages, stay for 90 days or even more than a year. Like this one: this message is the longest-lived. From the first time the chatbot received the report to the last time it was requested by somebody who wanted to know whether it's true is a period of 415 days. Another example: this message was only requested twice. That means somebody reported it about a year ago, the editors did the fact-check back then, and one year later another person came along and asked about the same message. What does that mean? It's not that the last person held on to the message for a year and finally sent it to somebody else. It means that somewhere, in ways we cannot see, this message has been spreading over and over and over inside closed messenger groups.
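These lifespan figures come straight out of the open data: for each message you take the date of the first report and the date of the last request, and bucket the difference. A minimal sketch of that computation — the dates below are invented for illustration (the last pair reproduces a 415-day span like the record holder I mentioned):

```python
from datetime import date

# (first_reported, last_requested) pairs, invented for illustration.
messages = [
    (date(2017, 6, 1), date(2017, 6, 20)),   # dies within a month
    (date(2017, 3, 1), date(2017, 5, 10)),   # 30-90 days
    (date(2017, 1, 1), date(2018, 2, 20)),   # long-lived: 415 days
]

def lifespan_days(first: date, last: date) -> int:
    """Days between the first report and the last request."""
    return (last - first).days

def bucket(days: int) -> str:
    """Assign a lifespan to the buckets used in the talk."""
    if days <= 30:
        return "<= 1 month"
    if days <= 90:
        return "30-90 days"
    return "90+ days"

spans = [lifespan_days(first, last) for first, last in messages]
buckets = [bucket(d) for d in spans]
```

The real analysis just runs this over every message in the open database and counts the bucket sizes.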
So these messages hide very well inside instant-messenger groups, which is why it's so important to think about how we target this kind of messenger app, and not only flag fact-check statuses on Facebook or on the open web. Once the editors finish fact-checking, all the records become a single web page showing the original message and the fact-check reply, and as you can see, from last June until now there have been over a hundred thousand page views. So, to answer this morning's question — what is civic tech? We think civic tech should be owned by the civic, by the citizens. It's very simple, and that's why every project from the g0v community is open source: people can actually see how the algorithm works, how the app is built, how the fact-checking is done, why it's done, and what the guidelines are. Everything is open, and all of this analysis is built from the open data. We can now do things like trace the timeline of one piece of fake news: where it first came from — maybe from Facebook, or from somebody's blog post — then how it entered the chat rooms of the instant-messenger app, and how long it lasted. These long-term timelines of an article are what we can do with this kind of open database. It's a very small-scale experiment, but you can imagine: if the biggest fact-checking organizations all opened up their data, what we could do to combat disinformation in the world. Thank you.